Similar to how we split out the iSCSI server test, and for the
same purpose. This drops two tests from the cluster on x86_64:
the BIOS and UEFI tests.
Signed-off-by: Adam Williamson <awilliam@redhat.com>
Combining the old universal tests into server-dvd-iso gives us
a big cluster (9 jobs), which isn't ideal for smaller pet
deployments. Let's split it up. This splits out the iSCSI server
job into a separate test, to start with.
Signed-off-by: Adam Williamson <awilliam@redhat.com>
Per #196, the "universal" flavor doesn't really make much sense
as originally intended any more. The idea was for it to contain
tests that could run on any installer image, and it would be
scheduled for the "best" image for each arch in each compose.
This dated from before the composes were set up such that some
images are "fatal" - the compose is considered failed unless
that image composes - and when it was fairly likely that we'd
test a 'compose' with no Server DVD image. These days it's much
less likely: the Server DVD image is fatal for release-blocking
arches (as of https://pagure.io/pungi-fedora/pull-request/935),
and even before that it hadn't failed in a completed compose for
years. Some of the so-called "universal" tests also don't really
work on netinst images (because of tap networking), and as the
final nail in the coffin, #196 points out that we can no longer
really treat all installer images as interchangeable for storage
tests, because they now use very different storage setups (Server
DVD uses xfs-on-LVM, others use btrfs).
To start with, here in the templates, we move most of the tests
into Server-dvd-iso, leaving only upgrade tests in 'universal',
which is now considered to be only for tests that use no
compose asset *at all*. To fully address #196, later commits will
add some of the tests to other flavors.
Signed-off-by: Adam Williamson <awilliam@redhat.com>
Per https://pagure.io/fedora-qa/issue/650 , we dropped these
from the wiki, and I agree with @kparal that it doesn't make much
sense to run them any more.
Signed-off-by: Adam Williamson <awilliam@redhat.com>
Setting HDD_1 to %HDD_2% is broken in recent openQA:
https://github.com/os-autoinst/openQA/pull/3309#issuecomment-721906935
I count this as a bug, but I think we can work around it: we
don't actually need to set this test up so carefully that the
disk image is HDD_1 and there is no HDD_2 at all. We can just
let the disk image be HDD_2 and give the test an empty disk as
HDD_1; the test still works, qemu boots from the second disk,
and we can still upload it at the end. So let's just go with
that.
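
As a rough sketch of the settings change (the empty disk name
below is a made-up placeholder, not necessarily the asset the
suite really uses):

    # Sketch only, values are illustrative assumptions.
    # Before: force the uploaded image into the first disk slot.
    settings_before = {"HDD_1": "%HDD_2%"}  # broken in recent openQA
    # After: leave the uploaded image as HDD_2 and pad slot 1 with a
    # blank disk; qemu falls through to booting the second disk, and
    # uploading it at the end still works.
    settings_after = {"HDD_1": "disk_empty.qcow2"}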
Signed-off-by: Adam Williamson <awilliam@redhat.com>
These tests assume the desktop base disk image has an
ext4-on-LVM layout, but from F33 onwards it doesn't: it uses
btrfs, as that's the new distro default.
We need a proper fix for this, but for now, just make the test
use the F32 disk image. This buys us six months at least.
Signed-off-by: Adam Williamson <awilliam@redhat.com>
We have variant versions of several of the base tests, which
exist to account for differences in the variable settings needed
to run essentially the same test in different situations. By
twiddling the variables a little - including inventing a new
variable that is defined in the flavors and substituted into the
test suites, so the same test suite can have a different
START_AFTER_TEST on different flavors - I think we can unite
them all and make this a lot simpler.
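
Roughly, the trick looks like this, expressed here as Python
dicts of template settings; the variable name and test names are
illustrative assumptions rather than the exact ones used:

    # Sketch only: one shared test suite whose parent test is supplied
    # per-flavor via a variable, so START_AFTER_TEST differs by flavor.
    product_settings = {
        "fedora-Server-dvd-iso-x86_64": {
            "DEPLOY_UPLOAD_TEST": "install_default_upload",
        },
        "fedora-Workstation-live-iso-x86_64": {
            "DEPLOY_UPLOAD_TEST": "install_default_upload_live",
        },
    }
    test_suite_settings = {
        # substituted at schedule time, so the same suite chains off a
        # different parent test on each flavor
        "START_AFTER_TEST": "%DEPLOY_UPLOAD_TEST%",
    }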
Signed-off-by: Adam Williamson <awilliam@redhat.com>
So, there's a problem with how we figure out the NetworkManager
connection to use in setup_tap_static: it expects the first
connection in the list to be the right one, but this is only
actually true so long as it's *active*. When we're in the tap
case, it's usually not going to actually *work* out of the box
on boot (or else we wouldn't need setup_tap_static at all...),
so some time after boot, NetworkManager gives up on it and marks
it as inactive. And after that, setup_tap_static won't work any
more.
I never noticed this as a problem before because usually we do
setup_tap_static before that point. But it seems in the vnc
client tests, on aarch64, desktop boot and login are slow enough
that by the time we switch to a VT and try to set up the
network, we're very close to that cutoff, and sometimes miss it.
This, I hope, avoids the problem by doing the network setup in
that test before we deal with the desktop login, then doing the
desktop login, then doing the actual VNC bits.
The alternative here would be to figure out a better way to do
setup_tap_static, but I can't come up with one.
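
For context, the fragile part of setup_tap_static amounts to
roughly the following (paraphrased in Python; the real code is
Perl, and the "active connections sort first" behaviour is the
assumption it leans on):

    # Sketch of the selection the test relies on, not the actual code.
    import subprocess

    def first_connection():
        out = subprocess.run(
            ["nmcli", "-g", "NAME", "connection", "show"],
            capture_output=True, text=True, check=True,
        ).stdout
        # active connections are listed first; once NetworkManager
        # gives up on the tap connection and marks it inactive, the
        # ordering changes and this picks the wrong one
        return out.splitlines()[0]

Hence the fix here is simply to run the network setup early,
before the slow desktop login, rather than to change the
selection logic.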
Signed-off-by: Adam Williamson <awilliam@redhat.com>
This sets us up to test the release-blocking aarch64 disk images
(Minimal, Server and Workstation). It also allows for testing
armhfp disk images on aarch64 worker hosts (though my testing of
that isn't going too well so far), and fixes the initial-setup
handling for a change upstream ('use password' is now the default
so we don't need to choose it). We rewire disk image deployment
test loading to work through the generic loader code rather than
using ENTRYPOINT, as that lets us handle graphical (Workstation)
vs. console (Server, Minimal) images more gracefully: the code
for handling console initial-setup moves to a helper function,
just like the code for gnome-initial-setup, and
_console_wait_login calls it when appropriate. We also tweak
desktop_vt a bit, because we now need to switch from a console
running as the user 'test' to a desktop, which breaks the
assumption that the highest-numbered session of user 'test' is
the desktop...
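
In Python-flavoured pseudocode, the new dispatch is roughly as
follows (the function names are placeholders, not the real Perl
helpers):

    # Sketch only: route both image types through the generic loader,
    # then let the login path call the matching initial-setup helper.
    def handle_gnome_initial_setup():
        """Placeholder for the existing graphical helper."""

    def handle_console_initial_setup():
        """Placeholder for the new console helper."""

    def wait_login(graphical):
        if graphical:            # Workstation disk image
            handle_gnome_initial_setup()
        else:                    # Server, Minimal disk images
            handle_console_initial_setup()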
Signed-off-by: Adam Williamson <awilliam@redhat.com>
This should work even if the ifcfg plugin is not present (hi,
CoreOS) or 'predictable' (har) network names are on.
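
A hedged sketch of the approach, in Python: discover the
interface name instead of assuming eth0, and let nmcli create
the connection, so it works whether NetworkManager writes ifcfg
files or keyfiles. The connection name and addresses below are
made up.

    # Sketch only; not the actual test code.
    import os
    import subprocess

    def first_non_loopback_device():
        return sorted(d for d in os.listdir("/sys/class/net")
                      if d != "lo")[0]

    dev = first_non_loopback_device()
    subprocess.run(
        ["nmcli", "connection", "add", "type", "ethernet",
         "ifname", dev, "con-name", "openqa-static",
         "ipv4.method", "manual",
         "ipv4.addresses", "172.16.2.114/24",   # made-up address
         "ipv4.gateway", "172.16.2.2"],
        check=True,
    )
    subprocess.run(["nmcli", "connection", "up", "openqa-static"],
                   check=True)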
Signed-off-by: Adam Williamson <awilliam@redhat.com>
In Fedora 33, we generally no longer include a disk-based swap
partition by default (instead swap-on-ZRAM is used, see
https://fedoraproject.org/wiki/Changes/SwapOnZRAM ). This tweaks
our tests to account for that. In tests that aren't to do with
swap at all, we stop including a swap partition in order to be
closer to the default layout. We replace the old _no_swap blivet
and custom tests with _with_swap tests that, as the name implies,
*explicitly include* a swap partition, and adjust the postinstall
test to check the disk swap partition is there.
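
Conceptually, the check the postinstall test needs is just "is
there a non-zram swap device"; a rough Python sketch (not the
actual test code):

    # Sketch only: verify a disk-backed swap device exists, since
    # zram swap is now present by default and doesn't count here.
    def has_disk_swap(proc_swaps="/proc/swaps"):
        with open(proc_swaps) as f:
            lines = f.read().splitlines()[1:]  # skip the header row
        devices = [line.split()[0] for line in lines if line.strip()]
        return any(not dev.startswith("/dev/zram") for dev in devices)

    assert has_disk_swap(), "expected an on-disk swap partition"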
Signed-off-by: Adam Williamson <awilliam@redhat.com>
This is a bit complex to automate, because we cannot really use
the production Zezere server (provision.fedoraproject.org) as
the test case does: we'd have to solve authentication, and we
also don't really want to keep registering new hosts to it that
are going to disappear and never be seen again.
So, instead we'll do it by setting up our *own* Zezere, and
provisioning our IoT system in that. We run two tests. The
'ignition' test is the actual IoT 'device'; all it really does
is boot up, sit around, and wait to be provisioned. The 'server'
test first sets up a Zezere server, then logs into it, adds an
ssh key, claims the IoT device, provisions it, and connects to
it to create a special file which tells the 'ignition' test
everything worked and it can close out.
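
The synchronization between the two jobs is deliberately crude;
roughly, on the device side (the marker path and timeout here
are illustrative assumptions):

    # Sketch only: the 'ignition' (device) job just waits for a marker
    # file that the 'server' job creates over ssh once provisioning
    # has succeeded.
    import os
    import time

    MARKER = "/var/tmp/provisioning-done"   # hypothetical path
    DEADLINE = time.time() + 30 * 60        # generous allowance

    while not os.path.exists(MARKER):
        if time.time() > DEADLINE:
            raise SystemExit("marker file never appeared")
        time.sleep(10)
    print("provisioned; the device job can close out")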
Signed-off-by: Adam Williamson <awilliam@redhat.com>
This is to make the infra folks happy, apparently using 10.0.x.x
and 10.1.x.x is causing conflicts since our actual infra network
uses those ranges too.
Signed-off-by: Adam Williamson <awilliam@redhat.com>
Since systemd-246~rc1-1.fc33, network install images crash if
booted with only 2GiB of RAM. No-one seems to be fixing this
as a matter of urgency, so let's just bump the tests to 3GiB at
least for now.
Signed-off-by: Adam Williamson <awilliam@redhat.com>
https://fedoraproject.org/wiki/Changes/SwapOnZRAM changes things
a lot here. No swap on disk is now the default case, making these
tests obsolete. We'll probably want to test various different
configurations around ZRAM swap instead - ZRAM swap *and* disk
swap, disk swap only, all swaps disabled - but that's a more
complex change that will need a ticket and a PR, so for now let's
just cut these tests.
Signed-off-by: Adam Williamson <awilliam@redhat.com>
This adds a test that automates
https://fedoraproject.org/wiki/QA:Testcase_Clevis. It requires
os-autoinst-4.6-18.20200623git5038d8c or newer, and a worker
host in the 'tpm' worker class, set up so that each worker
instance has swtpm running at /tmp/mytpmX, where X is the worker
instance number. The Fedora infrastructure ansible
plays have been updated to handle this via an instantiated
systemd service, which other instances can also adopt.
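
For reference, one swtpm instance per worker amounts to roughly
the following (the socket path under the state directory is an
assumption about what the worker setup expects, and the real
deployment uses a systemd template unit rather than a script):

    # Sketch only: start a software TPM for worker instance X with its
    # state (and control socket) under /tmp/mytpmX.
    import subprocess
    import sys
    from pathlib import Path

    instance = sys.argv[1] if len(sys.argv) > 1 else "1"
    state = Path(f"/tmp/mytpm{instance}")
    state.mkdir(parents=True, exist_ok=True)

    subprocess.run(
        ["swtpm", "socket", "--tpm2",
         "--tpmstate", f"dir={state}",
         "--ctrl", f"type=unixio,path={state}/swtpm-sock"],
        check=True,
    )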
Signed-off-by: Adam Williamson <awilliam@redhat.com>
This adds new variant test suites for several of the base tests
to run them on Cloud disk images. We can't re-use the existing
test suites because they have `START_AFTER_TEST` defined and
there's no viable way to undefine it (there are things you can
do to override the value, but you can't just unset it). This
also adds a stray test we weren't running on aarch64, and
corrects some priorities.
Signed-off-by: Adam Williamson <awilliam@redhat.com>
This adds a pair of tests, one which does almost all the work
from the test case, the other just a client test to check that
we can connect to an HTTP server running in a container on the
host. We also have to bump the _console_wait_login timeout a bit
on this path: we're booting a disk image that was installed
while DHCP worked, but the test changes the network setup so
DHCP no longer works, and the system spends quite some time
trying to bring the network up at boot before eventually giving
up and
Signed-off-by: Adam Williamson <awilliam@redhat.com>
I started out trying to fix os-release for the recent change to
add "Prerelease" tags to the VERSION and PRETTY_NAME fields, then
things spiralled. It got me thinking about the awkward DEVELOPMENT
variable we use, so I decided to get rid of it and refactor the
few things that use it. I refactored the anaconda prerelease tag
check, and wrote a new giant comment that gives details about
exactly how anaconda decides whether to show those tags, to give
context to our choices about when to expect them. This check now
uses a new LABEL variable which the scheduler sets. I also wound
up creating new UP1REL and UP2REL vars to define the 'source'
release for upgrade tests, separate from CURRREL and PREVREL,
which now never lie - they really are the current stable and
previous stable releases, even for update upgrade tests.
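
As a worked example of the intent (release numbers purely
illustrative): for a Branched/Rawhide compose the upgrade source
matches the current stable release, while for an upgrade test
against an update for the current stable release only the UP*REL
values move back; CURRREL and PREVREL stay truthful in both
cases.

    # Illustrative values only, not taken from the scheduler.
    # Testing a development compose (say "34" is Branched/Rawhide):
    compose_vars = {"CURRREL": "33", "PREVREL": "32",
                    "UP1REL": "33", "UP2REL": "32"}
    # Testing an update for stable "33": only the upgrade *source*
    # releases change.
    update_vars = {"CURRREL": "33", "PREVREL": "32",
                   "UP1REL": "32", "UP2REL": "31"}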
Signed-off-by: Adam Williamson <awilliam@redhat.com>
This adds a new test that implements QA:Testcase_desktop_login
on both GNOME and KDE.
While working on this, we realized that the "desktop_clean"
needles were really "app menu" needles, and for KDE, this was
a duplication with the new "system menu" needles, because on KDE
the app menu and the system menu are the same. So I (Adam)
started to de-duplicate that, but also realized that "app menu
button" is a much more accurate name for these needles, so I was
renaming the old desktop_clean needles to app_menu_button. That
led me to the realization that "check_desktop_clean" is itself a
dumb name, because we don't do anything to check that the
desktop is "clean" any more (way back in the mists of time we
may have done) - we're really just asserting that we're at a
desktop *at all*. While thinking *that* through, I *also* realized
that the whole "open the overview and look for the app grid icon"
workaround it did is no longer necessary, because GNOME doesn't
use a translucent top bar any more. That went away in GNOME 3.32,
which is in Fedora 30, our oldest supported release.
So I threw that away, renamed the function "check_desktop",
cleaned up all the needle naming and tagging, and also added an
app menu needle for GNOME in Japanese because we were missing
one (the Japanese tests have been using the "app grid icon"
workaround the whole time).
IoT is becoming a release-blocking edition for F32, so we should
be testing it for sure. We may add specific tests, but for now
let's run the install and base tests on it.
Signed-off-by: Adam Williamson <awilliam@redhat.com>
We were only running this test on the Workstation boot ISO...
then that image was removed, so we were not actually running
this test at all any more. We should run it on at least one
image, as check-compose uses its output to check whether
installer memory usage has increased. I initially ran it on
Workstation as that installs the most packages; to keep that
property, let's run it on Everything but use the Workstation
package set.
Signed-off-by: Adam Williamson <awilliam@redhat.com>
I and @lruzicka (and I think @jskladan and @jsedlak and
@michelmno and everyone else who's ever touched it...) are being
gradually driven nuts by manually editing the test templates.
The bigger the files get, the more awkward it is to keep them
straight and be sure we're doing it right. Upstream doesn't do
things the same way we do (they mostly edit in the web UI and
dump to file for the record), but we still think making changes
in the repo and posting them to the web UI is the right way
around; we just wish the format were saner.
Upstream has actually recently introduced a YAML-based approach
to storing job templates which tries to condense things a bit,
and you can dump to that format with dump-templates --json, but
@lruzicka and I agree that that format is barely better for
hand editing in a text editor than the older one our templates
currently use.
So, this commit introduces... Fedora Intermediate Format (FIF) -
an alternative format for representing job templates - and some
tools for working with it. It also contains our existing
templates in this new format, and removes the old template files.
The format is documented in the docstrings of the tools, but
briefly, it keeps Machines, Products and TestSuites but improves
their format a bit (by turning dicts-of-lists into
dicts-of-dicts), and adds Profiles, which are combinations of
Machines and
Products. TestSuites can indicate which Profiles they should be
run on.
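
To give a feel for the shape, expressed here as a Python literal
(treat the key names as a hedged sketch and check the tool
docstrings for the exact schema):

    # Hedged sketch of a FIF document, not an exact schema.
    templates = {
        "Machines": {
            "64bit": {"backend": "qemu",
                      "settings": {"WORKER_CLASS": "qemu_x86_64"}},
        },
        "Products": {
            "fedora-Server-dvd-iso-x86_64": {
                "arch": "x86_64", "distri": "fedora",
                "flavor": "Server-dvd-iso", "version": "*",
            },
        },
        "Profiles": {
            # a Profile is just a (Machine, Product) combination
            "fedora-Server-dvd-iso-x86_64-64bit": {
                "machine": "64bit",
                "product": "fedora-Server-dvd-iso-x86_64",
            },
        },
        "TestSuites": {
            "base_selinux": {
                "settings": {"POSTINSTALL": "base_selinux"},
                # which Profiles to run on, mapped to a priority
                "profiles": {"fedora-Server-dvd-iso-x86_64-64bit": 40},
            },
        },
    }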
The intermediate format converter (`fifconverter`) converts
existing template data (in JSON format; use tojson.pm to convert
our perl templates to JSON) to the intermediate format and
writes it out. As this was really intended only for one-time use
(the idea is that after one-time conversion, we will edit the
templates in the intermediate format from now on), its operation
is hardcoded and relies on specific filenames.
The intermediate format loader (`fifloader`) generates
JobTemplates from the TestSuites and Profiles, reverses the
quality-of-life improvements of the intermediate format, and
produces template data compatible with the upstream loader, then
can write it to disk and/or call the upstream loader directly.
The check script (`fifcheck`) runs existing template data through
both the converter and the loader, then checks that the result is
equivalent to the input. Again, this was mostly written for
one-time use, so it is fairly rough and hard-coded, but I'm
including it
in the commit so others can check the work and so on.
Signed-off-by: Adam Williamson <awilliam@redhat.com>