Commit Graph

10 Commits

Author SHA1 Message Date
Adam Williamson
9a0ef37a25 Revert "Update live install tests: handle awkward install ordering"
This reverts commit 56936df7a5. It was a lovely idea, but it
overlooked that the 'matching update version' check doesn't
actually use the allpkgs.txt list...
2023-02-22 15:55:24 -08:00
Adam Williamson
56936df7a5 Update live install tests: handle awkward install ordering
There's an awkward path for the live image install tests on
updates. We run the 'are the correct versions of all the packages
installed' check on these tests to ensure the right versions
actually made it onto the live image. So on that path we don't
run `dnf -y update` at the end of repo_setup_updates: if we did,
we'd update any stale packages on the live image and hide the
problem.

However, this causes a bit of an ordering issue, because in
order to set up the advisory repo, we need to install a few
packages. What if the update under test includes one of those
packages, or a dependency that wasn't already installed? In
that case, we wind up with the older stable version of the
package (because obviously we can't install the newer version
from the advisory repo *before we've set up the advisory repo*),
don't update it later, and so the 'correct version' check at
the end of the test fails. See:
https://openqa.fedoraproject.org/tests/1778707 for a case of
this happening with a python-cryptography update.

Up till now I was trying to handle this by just updating the
specific packages we install, but that doesn't account for their
*dependencies*. I looked down the path of generating a list of
all those dependencies and updating them all, but it looks a bit
mad. So instead let's try this: on that specific path, we'll
generate the "all installed packages" list *before* we run
repo_setup, so it just doesn't include anything that gets
installed during repo_setup. The implementation is a bit icky
but not too horrible.

We *could* just *always* generate the all installed packages
list earlier, but then we *wouldn't* catch dep issues with these
packages on the other test paths, whereas currently we do. I
don't want to lose that.
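
Roughly, the idea looks like this (a sketch, not the exact code;
the variable name is hypothetical):

    # on the live install path, snapshot the installed package set
    # *before* repo_setup runs, so packages pulled in by repo_setup
    # itself never end up in allpkgs.txt
    if (get_var("LIVE_BUILD_TEST")) {    # hypothetical variable
        assert_script_run 'rpm -qa | sort -u > /tmp/allpkgs.txt';
    }
    repo_setup();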

Signed-off-by: Adam Williamson <awilliam@redhat.com>
2023-02-22 14:54:18 -08:00
Adam Williamson
03b6663339 Add tests to build a Silverblue installer image and install it
This is like the existing tests that build network install and
live images then install them, only for Silverblue. First we
build an ostree, using the standard configuration for the release
and subvariant but with the 'advisory' and 'workarounds' repos
included, so it will contain current stable packages plus the
packages from the update and any workarounds. Then we build an
ostree installer image with the ostree embedded, again including
advisory and workarounds repos in the installer build config so
packages from them will be included in the installer environment.
The image is uploaded, which completes the _ostree_build test.
Then an install_default_update_ostree test runs, which does a
standard install and boot from the installer image.

We do make a change that affects other tests, too. We now run
_advisory_post on live image install tests, as well as on this
new ostree installer image install test. It was skipped before
because of an exception that's really only needed for the netinst
image install test. In that test, packages from the update won't
be included in the installed system, so we can't run
_advisory_post on it. But for ostree and live image build/install
tests, the installed system *should* include packages from the
update, so we should check and make sure that it does.
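
Schematically, the scheduling change is something like this (a
sketch with a hypothetical variable name, not the repo's exact
logic):

    # run _advisory_post unless this is the netinst install test,
    # where update packages are not expected on the installed system
    unless (get_var("INSTALL_IMAGE_TYPE", "") eq "netinst") {    # hypothetical variable
        autotest::loadtest "tests/_advisory_post.pm";
    }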

Signed-off-by: Adam Williamson <awilliam@redhat.com>
2022-11-30 13:17:28 -08:00
Adam Williamson
1a65993d36 Add a perltidy check and apply it to the entire codebase
Signed-off-by: Adam Williamson <awilliam@redhat.com>
2022-07-28 14:38:38 -07:00
Adam Williamson
12e103e3da Factor meat out of advisory_post and do it in postfail too
If an update test fails before reaching advisory_post, we don't
generate the 'what update packages were installed' and 'were
any update packages *not* installed when they should have been'
logs, but these may well be useful for diagnosing the failure -
so let's do the same stuff in the post-fail hook too. Only let's
not write it all twice.
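
The usual shape of this (a sketch; the helper name is made up):

    # factor the log generation into one helper and call it from
    # both the normal path and the post-fail hook
    sub _advisory_check_and_log {    # hypothetical helper name
        ...;    # generate and upload the installed/missing package logs
    }

    sub run {
        _advisory_check_and_log();
    }

    sub post_fail_hook {
        _advisory_check_and_log();
    }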

Signed-off-by: Adam Williamson <awilliam@redhat.com>
2018-12-12 22:17:29 -08:00
Adam Williamson
764c6dbd95 Notice when update package should have been installed but wasn't
We hit an interesting case in update testing recently:

https://bodhi.fedoraproject.org/updates/FEDORA-2018-115068f60e

An earlier version of that update failed testing. When we dug
into it a bit, we found that the test was failing because an
earlier version of the `pki-server` package was installed than
the version that was in the update; when asked (as part of
FreeIPA deployment) to install it, dnf had noticed that there
were dependency issues with the version of the package from the
update, but it happened to be able to install the version from
the frozen 'stable' repo...so it just went ahead and did that.

In this case, the 'missed' package resulted in a test failure,
but it'd actually be possible for this to happen and the test
to complete; we really ought to notice when this happens, and
treat it as a test failure.

So what this attempts to do is: at the end of all update tests,
find every installed package whose name matches a package from
the update, and compare its full NEVR to the update package's.
If such a package is installed but is not the *same NEVR*, we
fail, and upload the lists of packages for manual investigation
as to what the heck's going on.
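
In sketch form (illustrative, not the exact implementation;
parse_update_packages is a made-up helper):

    # compare installed NEVRs with the NEVRs the update shipped
    my %updatepkgs = parse_update_packages();    # hypothetical: name => NEVR
    for my $name (sort keys %updatepkgs) {
        # skip update packages that aren't installed at all
        next if script_run("rpm -q $name");
        my $installed = script_output(
            "rpm -q --qf '%{NAME}-%{EPOCHNUM}:%{VERSION}-%{RELEASE}' $name");
        if ($installed ne $updatepkgs{$name}) {
            record_info("NEVR mismatch", "$name: $installed vs. $updatepkgs{$name}");
            die "installed $name does not match the update!";
        }
    }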

Signed-off-by: Adam Williamson <awilliam@redhat.com>
2018-12-12 22:17:29 -08:00
Adam Williamson
14ad5b97f1 Still try and upload testedpkgs even if comm 'fails'
From local experimentation, comm still actually produces the
output, even though it prints the message about the order being
wrong and exits 1.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
2018-11-08 17:46:45 -08:00
Adam Williamson
ddf6ba5a6b update tests: don't fail if comm is unhappy about the alphabet
Weirdly, some update tests occasionally seem to fail because
the 'comm' util - which we use to produce the list of packages
from the update that were actually tested during the job -
doesn't think one of the input files is in alphabetical order,
even though we sort both files when they're produced. I don't
know if this is due to the definition of 'alphabetical order'
(the locale's collation rules, perhaps) changing as part of the
update, or what. But we really shouldn't *fail* the test when
this happens: it's not part of the functional test, we're just
producing convenience data. So, let's handle the command
failing, and if it happens, upload the input files so we can
maybe figure out why it's unhappy...
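
The handling looks roughly like this (file paths illustrative;
this also reflects the follow-up commit above, which still
uploads the output when comm exits non-zero):

    # don't fail the test if comm is unhappy; instead upload its
    # input files so we can investigate the sort order
    if (script_run 'comm -12 /tmp/updatepkgs.txt /tmp/allpkgs.txt > /tmp/testedpkgs.txt') {
        upload_logs "/tmp/updatepkgs.txt";
        upload_logs "/tmp/allpkgs.txt";
    }
    upload_logs "/tmp/testedpkgs.txt";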

Signed-off-by: Adam Williamson <awilliam@redhat.com>
2018-11-08 16:59:06 -08:00
Adam Williamson
e68e113f76 Remove test_flags comments, add ignore_failure flag
It's not really a good idea to have the comments that explain
the test_flags in *every* test, because they can go stale and
then we either have to live with them being old or update them
all. Like, now. So let's just take 'em all out. There's always
a reference in the openQA and os-autoinst docs, and those get
updated faster.

More importantly, add the new `ignore_failure` flag to relevant
tests - all the tests that don't have the 'important' or 'fatal'
flag at present. Upstream killed the 'important' flag (making
all tests 'important' by default); I got it replaced with the
'ignore_failure' flag, so we now need to explicitly mark all
modules we want the 'ignore_failure' behaviour for.
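
In each such module, that's just the standard os-autoinst
test_flags hook:

    # a module whose failure should not fail the whole job now has
    # to opt in explicitly
    sub test_flags {
        return { ignore_failure => 1 };
    }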
2017-04-10 15:00:10 -07:00
Adam Williamson
461f3a6132 Update testing: log packages in update and installed packages
Summary:
This adds some logging related to the update testing workflow,
so we have some idea what we actually tested. We log precisely
which packages were actually downloaded from the update - this
is important as updates can be edited, and when examining results
we'll want to know which packages actually got used. We also
add a new module which runs at the end of postinstall and tries
to figure out which packages from the update were installed in
the course of the test. This still isn't a guarantee the test
actually *tested them* in any way, but it at least means they
got installed successfully and didn't interfere with the test.
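
For the 'what did we download' half, something like this (the
download path here is hypothetical):

    # record exactly which packages were downloaded from the update,
    # since the update may be edited after the test ran
    assert_script_run 'ls /var/tmp/updatepkgs/*.rpm | xargs -n 1 basename | sort > /tmp/updatepkgs.txt';
    upload_logs "/tmp/updatepkgs.txt";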

Test Plan:
Run the update test workflow and check the logs get uploaded
and seem accurate (sometimes some RPM garbage messages wind up
in the package log; I'm not too worried about that at present).
Run the compose test workflow and check it didn't break.

Reviewers: jsedlak

Reviewed By: jsedlak

Subscribers: tflink

Differential Revision: https://phab.qa.fedoraproject.org/D1149
2017-02-23 14:51:19 -08:00