use base "installedtest";
use strict;
use testapi;

sub run {
    my $self = shift;
    my $release = lc(get_var("VERSION"));
    # disable screen blanking (download can take a long time)
    script_run "setterm -blank 0";
    # use compose repo
    $self->repo_setup();
    my $params = "-y --releasever=${release}";
    if ($release eq "rawhide") {
        # Rawhide packages are not guaranteed to be signed
        $params .= " --nogpgcheck";
    }
    # the download step can be very slow, so use a long timeout
    assert_script_run "dnf ${params} system-upgrade download", 6000;
    upload_logs "/var/log/dnf.log";
    upload_logs "/var/log/dnf.rpm.log";
    # don't wait for completion: the system reboots and the console goes away
    script_run "dnf system-upgrade reboot", 0;
    # fail immediately if we see a DNF error message
    die "DNF reported failure" if (check_screen "upgrade_fail", 15);
    if (get_var("ENCRYPT_PASSWORD")) {
        $self->boot_decrypt(60);
    }
    # in encrypted case we need to wait a bit so postinstall test
    # doesn't bogusly match on the encryption prompt we just completed
    # before it disappears from view
    if (get_var("ENCRYPT_PASSWORD")) {
        sleep 5;
    }
}

sub test_flags {
    # without anything - rollback to 'lastgood' snapshot if failed
    # 'fatal' - whole test suite is in danger if this fails
    # 'milestone' - after this test succeeds, update 'lastgood'
    # 'important' - if this fails, set the overall state to 'fail'
    return { fatal => 1 };
}
1;
# vim: set sw=4 et: