mirror of https://pagure.io/fedora-qa/os-autoinst-distri-fedora.git synced 2024-12-20 17:33:08 +00:00
Commit Graph

14 Commits

Author SHA1 Message Date
Adam Williamson
a98671a670 Give install_default_update_live 4G of RAM too
This test also seems to be struggling with 3G of RAM.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
2022-10-17 14:19:50 -07:00
Adam Williamson
aadcc428d6 Give live image build even longer and bump max job time
Rawhide live image builds are still taking an awfully long time
and often failing. I will look more into why later, but for now,
let's bump the timeouts even more just to try and get through
the job backlog.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
2022-08-19 15:35:24 -04:00
Adam Williamson
66e5276544 Drop NUMDISKS=2 for update server flavor
It causes a bit of an awkward problem for tests which use disk
images from another test, specifically the cockpit tests. We
put the update repo on this second disk and update /etc/fstab
but we aren't actually uploading the second disk image and using
it as the second disk on the child tests, so they get messed
up.

I'm having trouble coming up with an elegant solution, so for now
let's kick the affected flavor (server) back to one disk. I'll
try to figure out a more permanent fix tomorrow.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
2022-08-02 16:28:15 -07:00
Adam Williamson
f5946e678c Make update testing more robust for very large updates
openQA choked badly on
https://bodhi.fedoraproject.org/updates/FEDORA-2022-6256981a71
because it's, well, huge: 87 builds including texlive, which
has hundreds (thousands?) of subpackages. This exposed several
frailties in how we handle such updates.

First of all, we set NUMDISKS to at least 2 for *all* update
tests, which should mean they all stash the RPMs from the update
on a non-system disk and avoid problems with space exhaustion.
After that, we just extend a few timeouts in particularly fragile
places, including one which is specific to texlive (as I don't
know of any other source package with so many subpackages).
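As a rough illustration (the suite name and the extra setting below are invented; the real layout is the FIF format introduced in commit 2c197d520c further down), the change amounts to something like this in Python-dict form:

```python
# Illustrative sketch only: the suite name and extra settings are made up.
# The point is that NUMDISKS >= 2 attaches a second, empty disk, so the
# update's RPMs can be stashed off the system disk.
test_suites = {
    "update_example_suite": {
        "settings": {
            "NUMDISKS": "2",   # second disk for the downloaded RPMs
            "BOOTFROM": "c",   # boot from the existing hard disk image
        },
    },
}
```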

Signed-off-by: Adam Williamson <awilliam@redhat.com>
2022-08-02 11:29:21 -07:00
Adam Williamson
343ef8226b Use new RETRY variable handling to restart update tests on fail
Upstream recently implemented support for using the variable
RETRY to specify how many times a test should be restarted on
failure. This is something we currently handle with a downstream
openQA plugin; if we switch to using this upstream feature
instead, we can drop the plugin.
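For illustration (the suite name is invented and the value arbitrary), opting a test suite in is just a matter of carrying the variable in its settings:

```python
# Illustrative only: with upstream's RETRY support, openQA restarts the
# job this many times on failure before giving up, replacing the
# downstream restart plugin we used previously.
test_suites = {
    "update_example_suite": {
        "settings": {"RETRY": "2"},
    },
}
```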

Signed-off-by: Adam Williamson <awilliam@redhat.com>
2022-07-06 10:28:57 -07:00
Adam Williamson
25f59d86da Bump HDDSIZEGB for more GNOME and KDE products
We have it at 20G for Workstation live, but not for KDE live or
for the update Workstation or KDE products. We just hit an issue
today where anaconda thinks 10G isn't enough space for a KDE
live install after a grub2 update (which I think only bumped
the required space veeery slightly, but enough to throw anaconda
over the limit). Let's just go up to 15G for all GNOME and KDE
cases where we're not at 20G.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
2022-06-08 16:53:47 -07:00
Adam Williamson
23c9adac93 Update names of qcow2 base disk images
We're changing these to be named `foo.qcow2` not `foo.img` due
to a change in qemu and os-autoinst to do with backing file
format detection.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
2021-09-10 10:58:08 -07:00
Adam Williamson
9174472637 Run podman tests on updates
It has been noted that updates have broken podman in the past and
this is a major issue for some users. Let's create a new update
flavor and run the test in it. We'll use the server image as a
base, but it's not really a server test, so I'm giving it its own
flavor. That way it's not run on updates where we only want to run
server tests, and we can schedule just this test to run on
container-y updates.

As part of this, we need to install podman before running the
test; for flavors we currently run it on we expect podman to be
preinstalled, but that's not true for the server base image.
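In template terms, the sketch below shows the idea (flavor, product and suite names are illustrative, not necessarily the real ones): the new flavor is just another Product, and the podman suite's profiles point only at it:

```python
# Illustrative sketch: all names and the priority value are invented.
# A container-oriented update flavor reuses the server disk image as its
# base, and the podman test suite is scheduled only on that flavor.
products = {
    "fedora-updates-container-x86_64": {"flavor": "updates-container"},
}
test_suites = {
    "podman": {
        # only the container flavor's profile, so server-only updates
        # don't pick this test up
        "profiles": {"fedora-updates-container-x86_64-64bit": 40},
        # the test itself installs podman first, since the server base
        # image doesn't preinstall it
        "settings": {},
    },
}
```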

Signed-off-by: Adam Williamson <awilliam@redhat.com>
2021-06-21 12:20:09 -07:00
Adam Williamson
bc1e9681f9 Add KDE live image build and test for updates
I hacked this up quickly on staging to test a specific update,
but there's really no reason we shouldn't just do it generally.
We have the capacity.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
2021-04-22 16:00:16 -07:00
Adam Williamson
72edbfe991 Use qemu host IP 172.16.2.2 not 10.0.2.2
This is to make the infra folks happy; apparently using 10.0.x.x
and 10.1.x.x is causing conflicts since our actual infra network
uses those ranges too.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
2020-07-23 16:40:45 -07:00
Adam Williamson
3bc1e8335a Put /var/lib/mock on separate disk for live image build test
The update live image build test keeps running out of disk space.
We've bumped the minimal disk image from 12GB all the way up to
20GB so far but it keeps happening. So let's try a different
strategy: use a scratch disk to mount /var/lib/mock. That's where
all the space gets used. This should allow us to reduce the size
of the minimal disk image again, and giving it 25GB of empty disk
should avoid it running out of space again for a while.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
2020-05-05 21:12:12 -07:00
Adam Williamson
80c54d5491 Add cockpit_updates and remote_logging tests to updates
Again, no reason not to run these on updates. Includes adding
oldcantarell versions of several needles for current cockpit,
as they're needed for the tests to pass. Also tweak a couple of
needles to avoid false matches (add more empty space).

Signed-off-by: Adam Williamson <awilliam@redhat.com>
2020-01-24 21:03:41 +01:00
Adam Williamson
44d1dc3607 Add base_reboot_unmount and base_system_logging to update tests
No reason not to run these on updates as well. And it's much
easier with FIF!

Signed-off-by: Adam Williamson <awilliam@redhat.com>
2020-01-24 20:50:39 +01:00
Adam Williamson
2c197d520c Add a whole intermediate template format ('FIF') and tools
@lruzicka and I (and I think @jskladan and @jsedlak and
@michelmno and everyone else who's ever touched it...) are being
gradually driven nuts by manually editing the test templates.
The bigger the files get the more awkward it is to keep them
straight and be sure we're doing it right. Upstream doesn't do
things the same way we do (they mostly edit in the web UI and
dump to file for the record), but we do still think making
changes in the repo and posting to the web UI is the right way
around to do it; we just wish the format were saner.

Upstream has actually recently introduced a YAML-based approach
to storing job templates which tries to condense things a bit,
and you can dump to that format with dump-templates --json, but
@lruzicka and I agree that that format is barely better for
hand editing in a text editor than the older one our templates
currently use.

So, this commit introduces Fedora Intermediate Format (FIF), an
alternative format for representing job templates, and some
tools for working with it. It also contains our existing
templates in this new format, and removes the old template files.
The format is documented in the docstrings of the tools, but
briefly, it keeps Machines, Products and TestSuites but improves
their format a bit (by turning dicts-of-lists into dicts-of-
dicts), and adds Profiles, which are combinations of Machines and
Products. TestSuites can indicate which Profiles they should be
run on.
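To make the shape concrete, here is a hedged sketch of what the FIF data might look like once loaded into Python (all names, settings and priority values below are invented for the example; only the overall Machines/Products/Profiles/TestSuites structure follows the description above):

```python
# Illustrative sketch only: keys follow the structure described above
# (dicts-of-dicts plus Profiles), but the concrete names, settings and
# priority are made up for the example.
fif = {
    "Machines": {
        "64bit": {"settings": {"QEMUCPUS": "2"}},
    },
    "Products": {
        "fedora-updates-server-x86_64": {"flavor": "updates-server"},
    },
    "Profiles": {
        # a Profile is just a (Machine, Product) combination
        "fedora-updates-server-x86_64-64bit": {
            "machine": "64bit",
            "product": "fedora-updates-server-x86_64",
        },
    },
    "TestSuites": {
        "update_example_suite": {
            # which Profiles this suite runs on, with a priority
            "profiles": {"fedora-updates-server-x86_64-64bit": 40},
            "settings": {"BOOTFROM": "c"},
        },
    },
}
```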

The intermediate format converter (`fifconverter`) converts
existing template data (in JSON format; use tojson.pm to convert
our Perl templates to JSON) to the intermediate format and
writes it out. As this was really intended only for one-time use
(the idea is that after one-time conversion, we will edit the
templates in the intermediate format from now on), its operation
is hardcoded and relies on specific filenames.

The intermediate format loader (`fifloader`) generates
JobTemplates from the TestSuites and Profiles, reverses the
quality-of-life improvements of the intermediate format, and
produces template data compatible with the upstream loader, then
can write it to disk and/or call the upstream loader directly.
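Conceptually, the expansion from TestSuites and Profiles into upstream-style JobTemplates is something like the following simplified sketch (this is not the real fifloader code, and the actual upstream JobTemplate schema has more fields than shown):

```python
def generate_job_templates(fif):
    """Simplified sketch: expand each TestSuite's profiles into flat
    JobTemplate-like entries (schema heavily simplified)."""
    templates = []
    for suite_name, suite in fif["TestSuites"].items():
        for profile_name, priority in suite.get("profiles", {}).items():
            profile = fif["Profiles"][profile_name]
            templates.append({
                "test_suite": {"name": suite_name},
                "machine": {"name": profile["machine"]},
                "product": {"name": profile["product"]},
                "prio": priority,
            })
    return templates
```

Run on the sketch data from the FIF example above, this yields one entry per (TestSuite, Profile) pair; as described, the real loader also reverses the quality-of-life changes before handing the result to the upstream loader.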

The check script (`fifcheck`) runs existing template data through
both the converter and the loader, then checks that the result is
equivalent to the input. Again, this was mostly written for one-
time use, so it is fairly rough and hard-coded, but I'm including it
in the commit so others can check the work and so on.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
2020-01-24 15:21:23 +01:00