# Copyright (C) 2014 SUSE Linux GmbH
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
use strict;
use testapi;
use autotest;
use needle;
# distribution-specific implementations of expected methods
my $distri = testapi::get_var("CASEDIR") . '/lib/fedoradistribution.pm';
require $distri;
testapi::set_distribution(fedoradistribution->new());
## UTILITY SUBROUTINES
# Stolen from openSUSE.
sub unregister_needle_tags($) {
my $tag = shift;
my @a = @{ needle::tags($tag) };
for my $n (@a) { $n->unregister(); }
}
# The purpose of this function is to un-register all needles which have
# at least one tag that starts with a given string (the 'prefix'), if
# it does not have any tag that matches the pattern 'prefix-value', for
# any of the values given in an array. The first argument passed must
# be the prefix; the second must be a reference to the array of values.
# For instance, if the 'prefix' is LANGUAGE and the 'values' are
# ENGLISH and FRENCH, this function would un-register a needle which
# had only the tag 'LANGUAGE-DUTCH', but it would keep a needle which
# had the tag 'LANGUAGE-ENGLISH', or a needle with no tag starting in
# 'LANGUAGE-' at all.
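# In code terms, that example would look like this (illustrative call
# only, using the values from the comment above):
#   unregister_prefix_tags('LANGUAGE', ['ENGLISH', 'FRENCH']);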
sub unregister_prefix_tags {
my ($prefix, $valueref) = @_;
NEEDLE: for my $needle ( needle::all() ) {
my $unregister = 0;
for my $tag ( @{$needle->{'tags'}} ) {
if ($tag =~ /^\Q$prefix/) {
# We have at least one tag matching the prefix, so we
# *MAY* want to un-register the needle
$unregister = 1;
for my $value ( @{$valueref} ) {
# At any point if we hit a prefix-value match, we
# know we need to keep this needle and can skip
# to the next
next NEEDLE if ($tag eq "$prefix-$value");
}
}
}
# We get here if we hit no prefix-value match, but we only want
# to unregister the needle if we hit any prefix match, i.e. if
# 'unregister' is 1.
$needle->unregister() if ($unregister);
}
}
sub cleanup_needles() {
if (!get_var('LIVE') and !get_var('CANNED')) {
## Unregister smaller hub needles. Live and 'canned' installers have
## a smaller hub with no repository spokes. On other images we want
## to wait for repository setup to complete, but if we match that
## spoke's "ready" icon, it breaks live and canned because they
## don't have that spoke. So we have a needle which doesn't match
## on that icon, but we unregister it for other installs so they
## don't match on it too soon.
unregister_needle_tags("INSTALLER-smallhub");
}
# Unregister desktop needles of other desktops when DESKTOP is specified
if (get_var('DESKTOP')) {
unregister_prefix_tags('DESKTOP', [ get_var('DESKTOP') ])
}
# Unregister non-language-appropriate needles. See unregister_prefix_tags
# for details; basically all needles with at least one LANGUAGE- tag
# will be unregistered unless they match the current language.
my $langref = [ get_var('LANGUAGE') || 'english' ];
unregister_prefix_tags('LANGUAGE', $langref);
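# (For example, on a French-language run only needles tagged for that
# language, or with no LANGUAGE- tag at all, stay registered; needles
# tagged only for other languages are dropped, so untranslated screens
# fail to match instead of silently passing.)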
}
$needle::cleanuphandler = \&cleanup_needles;
## TEST LOADING SUBROUTINES
sub load_upgrade_tests() {
# all upgrade tests consist of: a preinstall phase (where packages are upgraded and
# dnf-plugin-system-upgrade is installed), a run phase (where the upgrade is run) and a
# postinstall phase (where we check that Fedora was upgraded successfully)
autotest::loadtest "tests/upgrade_preinstall.pm";
autotest::loadtest "tests/upgrade_run.pm";
# set postinstall test
set_var('POSTINSTALL', "upgrade_postinstall");
}
sub load_install_tests() {
# a normal installation test consists of several phases, some of which are loaded
# automatically and others based on which env variables are set.
# generally speaking, an install test consists of: a boot phase, a customization phase,
# an installation and reboot phase, and a postinstall phase
# boot phase is loaded automatically every time
autotest::loadtest "tests/_boot_to_anaconda.pm";
# if this is a kickstart install, that's all folks
return if (get_var("KICKSTART"));
if (get_var('ANACONDA_TEXT')) {
# since it differs so much, handle text installation separately
autotest::loadtest "tests/install_text.pm";
return;
}
## Networking
if (get_var('ANACONDA_STATIC')) {
autotest::loadtest "tests/_anaconda_network_static.pm";
}
## Installation source
if (get_var('MIRRORLIST_GRAPHICAL') || get_var("REPOSITORY_GRAPHICAL")) {
autotest::loadtest "tests/install_source_graphical.pm";
autotest::loadtest "tests/_check_install_source.pm";
}
if (get_var("REPOSITORY_VARIATION")){
autotest::loadtest "tests/_check_install_source.pm";
}
if (get_var('LIVE')) {
# No package set selection for lives.
set_var('PACKAGE_SET', "default");
}
## Select package set. Minimal is the default; if 'default' is specified, skip selection.
autotest::loadtest "tests/_software_selection.pm";
## Disk partitioning.
# If PARTITIONING is set, we pick the storage test
# to run based on the value (usually we run the test with the name
# that matches the value, except for a couple of commented cases).
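# (Illustrative example: PARTITIONING=guided_delete_all would load
# tests/disk_guided_delete_all.pm; the value is shown here only to
# demonstrate the naming pattern.)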
my $storage = '';
my $partitioning = get_var('PARTITIONING');
# if PARTITIONING is unset, or is 'guided_empty' or 'guided_free_space',
# use disk_guided_empty, which is the simplest / 'default' case.
if (! $partitioning || $partitioning ~~ ['guided_empty', 'guided_free_space']) {
$storage = "tests/disk_guided_empty.pm";
}
else {
$storage = "tests/disk_".$partitioning.".pm";
}
autotest::loadtest $storage;
if (get_var("ENCRYPT_PASSWORD")){
autotest::loadtest "tests/disk_guided_encrypted.pm";
}
# Start installation, set user & root passwords, reboot
# install and reboot phase is loaded automatically every time (except when KICKSTART is set)
autotest::loadtest "tests/_do_install_and_reboot.pm";
}
sub _load_early_postinstall_tests() {
# Early post-install test loading. Split out as a separate sub
# because we do all of this twice on update tests.
# Unlock encrypted storage volumes, if necessary. The test name here
# follows the 'storage post-install' convention, but must be run earlier.
if (get_var("ENCRYPT_PASSWORD")) {
autotest::loadtest "tests/disk_guided_encrypted_postinstall.pm";
}
# Appropriate login method for install type
if (get_var("DESKTOP")) {
autotest::loadtest "tests/_graphical_wait_login.pm";
}
# Test non-US input at this point, on language tests
if (get_var("SWITCHED_LAYOUT") || get_var("INPUT_METHOD")) {
autotest::loadtest "tests/_graphical_input.pm";
}
unless (get_var("DESKTOP")) {
autotest::loadtest "tests/_console_wait_login.pm";
}
}
sub load_postinstall_tests() {
# special case for the memory check test, as it doesn't need to boot
# the installed system: just load its test and return
if (get_var("MEMCHECK")) {
autotest::loadtest "tests/_memcheck.pm";
return;
}
# load the early tests
_load_early_postinstall_tests();
# do standard post-install static network config if the var is set;
# this is here, not in _load_early_postinstall_tests, as there's no
# need to do it twice
if (get_var("POST_STATIC")) {
autotest::loadtest "tests/_post_network_static.pm";
}
# if scheduler passed an advisory, update packages from that advisory
# (intended for the updates testing workflow, so we install the updates
# to be tested)
if (get_var("ADVISORY")) {
autotest::loadtest "tests/_advisory_update.pm";
# now load the early boot tests again, as _advisory_update reboots
_load_early_postinstall_tests();
}
# from now on, we have fully installed and booted system with root/specified user logged in
# If there is a post-install test to verify storage configuration worked
# correctly, run it. Again we determine the test name based on the value
# of PARTITIONING
my $storagepost = '';
if (get_var('PARTITIONING')) {
my $casedir = get_var("CASEDIR");
my $loc = "tests/disk_" . get_var('PARTITIONING') . "_postinstall.pm";
$storagepost = $loc if (-e "$casedir/$loc");
}
autotest::loadtest $storagepost if ($storagepost);
if (get_var("UEFI")) {
autotest::loadtest "tests/uefi_postinstall.pm";
}
# console avc / crash check
# it makes no sense to run this after logging in on most post-
# install tests (hence !BOOTFROM), but we *do* want to run it on
# upgrade tests after upgrading (hence UPGRADE).
# desktops have specific tests for this (hence !DESKTOP). For
# desktop upgrades we should really upload a disk image at the end
# of the upgrade and run all the desktop post-install tests on that
if (!get_var("DESKTOP") && (!get_var("BOOTFROM") || get_var("UPGRADE"))) {
autotest::loadtest "tests/_console_avc_crash.pm";
}
# generic post-install test load
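# POSTINSTALL can name several space-separated tests, e.g.
# POSTINSTALL="base_selinux base_service_manipulation" (names here are
# purely illustrative)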
if (get_var("POSTINSTALL")) {
my @pis = split(/ /, get_var("POSTINSTALL"));
foreach my $pi (@pis) {
autotest::loadtest "tests/${pi}.pm";
}
}
# load the ADVISORY post-install test - this records which update
# packages were actually installed during the test
if (get_var("ADVISORY")) {
autotest::loadtest "tests/_advisory_post.pm";
}
# we should shut down before uploading disk images
if (get_var("STORE_HDD_1") || get_var("PUBLISH_HDD_1")) {
autotest::loadtest "tests/_console_shutdown.pm";
}
}
## LOADING STARTS HERE
# if the user set ENTRYPOINT, run the required test(s) directly
# (good for tests where it doesn't make sense to use _boot_to_anaconda, _software_selection etc.)
# if you want to run more than one test via ENTRYPOINT, separate them with spaces
if (get_var("ENTRYPOINT")) {
my @entrs = split(/ /, get_var("ENTRYPOINT"));
foreach my $entr (@entrs) {
autotest::loadtest "tests/${entr}.pm";
}
}
elsif (get_var("UPGRADE")) {
load_upgrade_tests;
}
elsif (!get_var("START_AFTER_TEST") && !get_var("BOOTFROM")) {
# for now we can assume START_AFTER_TEST and BOOTFROM mean the
# test picks up after an install, so we skip to post-install
load_install_tests;
}
if (!get_var("ENTRYPOINT")) {
load_postinstall_tests;
}
1;
# vim: set sw=4 et: