Summary:
Loading the same module more than once *kinda* works, but it
shows up in all kinds of funky ways in the openQA web interface.
There's a POO ticket for this:
https://progress.opensuse.org/issues/10514
But it doesn't look like it will be resolved immediately, so in
the meantime we should probably avoid doing it, to spare
ourselves the weirdness it produces in the web UI. So here's a
somewhat icky hack that uses symlinks to load multiple instances
of 'the same' test module.
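Roughly, the hack looks like this in main.pm; the module names
here are illustrative, not the actual ones:

    # '_console_login_2.pm' is a symlink to '_console_login.pm',
    # so openQA sees two distinct modules instead of one module
    # run twice
    autotest::loadtest "tests/_console_login.pm";
    # ... later, after the reboot ...
    autotest::loadtest "tests/_console_login_2.pm";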
Test Plan:
Run an update test, check how it looks in the web
UI, and confirm it's a lot clearer and less buggy. Check there
aren't any bugs in the loading approach. This is deployed on stg
so you can look at it there.
Reviewers: jsedlak, jskladan
Reviewed By: jsedlak
Subscribers: tflink
Differential Revision: https://phab.qa.fedoraproject.org/D1186
It's not really a good idea to have the comments that explain
the test_flags in *every* test, because they can go stale and
then we either have to live with them being old or update them
all. Like, now. So let's just take 'em all out. There's always
a reference in the openQA and os-autoinst docs, and those get
updated faster.
More importantly, add the new `ignore_failure` flag to relevant
tests - all the tests that don't currently have the 'important'
or 'fatal' flag. Upstream killed the 'important' flag (making
all tests 'important' by default); I got it replaced with the
'ignore_failure' flag, so now we need to explicitly mark every
module we want the 'ignore_failure' behaviour for.
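For reference, this is what the flag looks like in a module (a
minimal sketch, not any specific test):

    sub test_flags {
        # a failure in this module will not fail the whole job,
        # roughly what non-'important' modules used to get
        return { ignore_failure => 1 };
    }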
The way this was set up before, if `anaconda_main_hub` matched
immediately but some spoke was still in a 'processing' state,
it only had 30 seconds (default `assert_and_click` timeout) to
complete and allow the 'Begin Installation' button to appear.
It seems unnecessary to match on *both* needles, really, so
let's just give 300 seconds for the `begin_installation` needle
to appear. It's not going to appear on any other screen.
This problem caused a couple of spurious failures today -
https://openqa.fedoraproject.org/tests/77839 and
https://openqa.fedoraproject.org/tests/77858 - because the
INSTALLATION DESTINATION spoke took a bit too long to clear.
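The change amounts to something like this (older os-autoinst
takes the timeout positionally; newer versions take it as
timeout => 300):

    # wait up to five minutes for the button itself, rather than
    # matching the hub needle first with only the default timeout
    assert_and_click "begin_installation", "left", 300;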
Summary:
This adds a new test suite, run for Workstation and KDE live
images, which does not create a user during install. It then
expects initial-setup (KDE) or gnome-initial-setup (Workstation)
to appear after install, creates a user, and proceeds with
normal boot.
Note the ARM image test already covers the initial-setup text
mode, and the ARM minimal image is the only case where that
actually matters (it's not included in Server).
Test Plan:
Run the new tests, check they work. Run all old
tests, check the changes didn't break them.
Reviewers: jsedlak, jskladan
Reviewed By: jsedlak
Subscribers: tflink
Differential Revision: https://phab.qa.fedoraproject.org/D1185
There's an issue where the follow-on _advisory_post test tries
to log in before the 'login failed' error has cleared. We can
easily avoid this by using tty2 for the login tests, then
_advisory_post will switch to tty3 for its stuff.
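A minimal sketch of the tty split, assuming the distribution's
console_login helper and illustrative credentials:

    # do the login checks on tty2; any 'login failed' message
    # left on screen stays there
    send_key "ctrl-alt-f2";
    console_login(user => "test", password => "weakpassword");
    # ... failed-login checks ...
    # _advisory_post then uses tty3, so it never sees tty2's
    # leftovers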
Summary:
For some reason, we have `USER_LOGIN` set to 'false' for the KDE
package set install test. I really don't know or remember why;
I'd think we should create a user and log in as that user, to
make sure that works properly when installing KDE from the
traditional installer. It's not strictly part of the package set
test, true, but it still seems worth doing.
Also, when `USER_LOGIN` is set to 'false' and the installer runs,
we create a user literally called 'false'. That doesn't seem like
what we wanted, so let's not do that. I dunno if there are any
other cases besides the KDE one that this commit changes, but
still.
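The fix is essentially a guard like this (a sketch, assuming it
sits wherever the user creation values are read):

    # treat USER_LOGIN=false as 'do not create a user', instead
    # of creating a user literally named 'false'
    my $user = get_var("USER_LOGIN", "test");
    if ($user && $user ne "false") {
        # ... create $user and log in as them ...
    }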
Test Plan:
Run the full test suite and look for weirdness, check
KDE package set test works as intended (now creates a user called
'test' and logs in as that user).
Reviewers: jsedlak, jskladan
Reviewed By: jsedlak
Subscribers: tflink
Differential Revision: https://phab.qa.fedoraproject.org/D1182
These tests don't work right at present: they don't test the
update at all, they just boot the base image and run the test,
which is pointless.
I looked into various ways of fixing this but it's messy and I
don't think it can work properly without a lot of hacking. Even
if we get the test to do 'the right thing' - boot, set up the
update repo, update, reboot, do the test prep, reboot again, do
the actual test - I don't think it'll be quite a valid test,
because any AVCs or crashes that happen *before* the update is
installed will still appear as notifications when the test
finally logs into the desktop. So the test could fail even when
there are no post-update crashes or AVCs.
I decided to give up on trying to make this test work properly
for now and just disable it. We can come back to it later if we
have great ideas and/or lots of time...
Committing without review as this is pretty trivial and I've
had it on staging for the last few days without issue. Just gets
us somewhat better info for debugging FreeIPA issues.
This repo is causing problems for Branched update tests. The
repo is not available for 26 at all yet. This shouldn't be a
problem as the repo is disabled by default, but it seems that
some things - at least realmd, as used in the FreeIPA enrolment
tests - still try to update the repo's metadata when installing
packages, and fail because it 404s.
Since none of our tests actually needs this repo AFAIK, let's
just delete it in repo_setup.
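Something like this in repo_setup is all it takes (the repo file
name here is made up, not the real one):

    # drop the problem repo entirely; nothing we run needs it
    assert_script_run "rm -f /etc/yum.repos.d/problem-addon.repo";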
Branched update tests are all failing because the baseurl in
fedora.repo is incorrect for Branched. This is a rather hacky
fix for this problem. It relies on the scheduler setting the
DEVELOPMENT variable when the update is for Branched (I named
the variable DEVELOPMENT rather than BRANCHED to be more
future-proof).
Alternative options I rejected were:
i) stick with MM links
ii) do something 'clever' to retrieve the URLs from MM
Rejected i) because the timing issue where the infra repo gets
updated before MM has the updated repodata checksums is just too
much of a problem; whenever that happens, dnf will refuse to use
the metadata from the infra repo and go pull it from an external
mirror, which can wind up timing out.
Rejected ii) because it seemed too fancy and not really any more
robust than just doing this and adapting it if Things Change In
Future (TM).
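The hack boils down to something like this in repo_setup (the
sed expression is illustrative, not the exact one):

    if (get_var("DEVELOPMENT")) {
        # Branched lives under development/, not releases/
        assert_script_run "sed -i 's,releases/,development/,g' /etc/yum.repos.d/fedora.repo";
    }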
Explicitly specify the ahci0.0 bus for the HDD in install_sata.
This is needed for things to work when we are using
CDMODEL=ide-cd (which we need at present to work around a bug
with SCSI CDs), and is a good idea anyway to ensure the drive
is actually connected to
the SATA bus (I dunno if it was before or not).
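At the qemu level, the explicit bus amounts to wiring the disk
to the AHCI controller like this (device ids illustrative):

    -device ahci,id=ahci0
    -drive file=disk.img,if=none,id=sata0
    -device ide-hd,drive=sata0,bus=ahci0.0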
We used to do this only for KDE, but I've seen the new update
tests sometimes fail at this point for no apparent reason, and
I'm thinking a wait may help (in case they're clicking the
button before it's really 'ready').
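i.e. something like this before the click (the settle time and
needle name are illustrative):

    # let the screen settle so we don't click a button that is
    # not yet responsive
    wait_still_screen 3;
    assert_and_click "next_button";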
This keeps failing because the default `assert_script_run`
timeout changed from 90 to 30 in the last os-autoinst update
(an unintended consequence of a change I made). This has been
fixed upstream, but in the meantime, let's just set an
explicit timeout on the call.
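i.e. roughly this (the command here is a stand-in for the real
one):

    # restore the old effective timeout explicitly instead of
    # relying on the default
    assert_script_run "dnf -y update", 90;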
Noticed in e.g. https://openqa.fedoraproject.org/tests/58798
that we're doing this wrong: `boot_decrypt` was moved into utils
as a function, but we were still calling it as a method...
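The fix is just (timeout argument illustrative):

    # wrong, pre-move style:
    #   $self->boot_decrypt(60);
    # right, now that it's a plain function in utils:
    boot_decrypt(60);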
Summary:
This adds some logging related to the update testing workflow,
so we have some idea what we actually tested. We log precisely
which packages were actually downloaded from the update - this
is important as updates can be edited and when examining results
we'll want to know which packages actually got used. We also
add a new module which runs at the end of postinstall and tries
to figure out which packages from the update were installed in
the course of the test. This still isn't a guarantee the test
actually *tested them* in any way, but it at least means they
got installed successfully and didn't interfere with the test.
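A sketch of the kind of logging the new module does (file name
and query format illustrative):

    # record every installed package so we can intersect the
    # list with the update's package list when examining results
    assert_script_run 'rpm -qa --qf "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" > /tmp/installedpkgs.txt';
    upload_logs "/tmp/installedpkgs.txt";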
Test Plan:
Run the update test workflow, check the logs get
uploaded and seem accurate (sometimes some RPM garbage messages
wind up in the package log, I'm not too worried about that at
present). Run the compose test workflow and check it didn't
break.
Reviewers: jsedlak
Reviewed By: jsedlak
Subscribers: tflink
Differential Revision: https://phab.qa.fedoraproject.org/D1149
Summary:
This adds an entirely new workflow for testing distribution
updates. The `ADVISORY` variable is introduced: when set,
`main.pm` will load an early post-install test that sets up
a repository containing the packages from the specified update,
runs `dnf -y update`, and reboots. A new templates file is
added, `templates-updates`, which adds two new flavors called
`updates-server` and `updates-workstation`, each containing
job templates for appropriate post-install tests. The scheduler
is expected to POST `ADVISORY=(update ID) HDD_1=(base image)
FLAVOR=updates-(server|workstation)`, where (base image) is one
of the stable release base disk images produced by `createhdds`
and usually used for upgrade testing. This will result in the
appropriate job templates being loaded.
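Simplified, the main.pm hook looks something like this (not the
exact code):

    if (get_var("ADVISORY")) {
        # sets up the update repo, runs 'dnf -y update', reboots
        autotest::loadtest "tests/_advisory_update.pm";
    }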
We rejig postinstall test loading and static network config a
bit so that this works for both the 'compose' and 'updates' test
flows: we have to ensure we bring up networking for the tap
tests before we try to install the updates, but still allow
later adjustment of the configuration. We take advantage of the
openQA feature that was added a few months back to run the same
module multiple times, so the `_advisory_update` module can
reboot after installing the updates and the modules that take
care of bootloader, encryption and login get run again. This
looks slightly wacky in the web UI, though - it doesn't show the
later runs of each module.
We also use the recently added feature to specify `+HDD_1` in
the test suites which use a disk image uploaded by an earlier
post-install test, so the test suite value will take priority
over the value POSTed by the scheduler for those tests, and we
will use the uploaded disk image (and not the clean base image
POSTed by the scheduler) for those tests.
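For illustration, a test suite entry in templates-updates along
these lines (names hypothetical); the leading '+' is what makes
the value beat the scheduler's POSTed HDD_1:

    {
        name => "base_services_start_postinstall",
        settings => [
            { key => "+HDD_1", value => "disk_desktop_postinstall.qcow2" },
        ],
    },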
My intent here is to enhance the scheduler, adding a consumer
which listens for critpath updates, runs this test flow for
each one, and reports the results to ResultsDB, where Bodhi
could query and display them. We could also add a list of other
packages to have one or both sets of update tests run on them,
I guess.
Test Plan:
Try a POST something like:
HDD_1=disk_f25_server_3_x86_64.img DISTRI=fedora VERSION=25
FLAVOR=updates-server ARCH=x86_64 BUILD=FEDORA-2017-376ae2b92c
ADVISORY=FEDORA-2017-376ae2b92c CURRREL=25 PREVREL=24
Pick an appropriate `ADVISORY` (ideally, one containing some
packages which might actually be involved in the tests), and
matching `FLAVOR` and `HDD_1`. The appropriate tests should run,
a repo with the update packages should be created and enabled
(and dnf update run), and the tests should work properly. Also
test a regular compose run to make sure I didn't break anything.
Reviewers: jskladan, jsedlak
Reviewed By: jsedlak
Subscribers: tflink
Differential Revision: https://phab.qa.fedoraproject.org/D1143
They got rid of the 'Dashboard' text we were matching on, so
let's change this needle. This 'Hardware' text should show up
in all cockpit versions, I think.
The rule for test priorities is pretty simple. Ranges of
priority values map to the 'Milestone' by which the test must
be passing, per the release criteria. The priority for each
openQA test is the *highest* priority for any wiki test case /
criterion it covers.
0-19: critical smoke tests (higher than Alpha priority)
20-29: Alpha priority
30-39: Beta priority
40-49: Final priority
50+: Optional priority
Note that tests for non-release-blocking arches or images must
always be over 50; I've simply added 50 to the values for all
i386 tests in this change. Other than that, I just corrected a
few values which had got out of whack or were originally set
wrong.