Chasing updated package versions is silly. We already have other tests
to make sure the blueprints support version numbers; there is no need
to fail a test at the whim of an upstream repo.
The enabled bool is now being used, so the cli should only show the types
actually available on the architecture.
Also modifies the test in test_compose_sanity.sh
Related: rhbz#1751998
The callers, and the documentation, all expect an int 0/1 to be used as
the exit status for the program, not True/False, even though that works
most of the time.
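For illustration, the pattern looks roughly like this (the function names
are placeholders, not the actual lorax entry points):

    import sys

    def do_work(argv):
        # Placeholder for the real command; returns a success flag.
        return True

    def main(argv):
        success = do_work(argv)
        # Return an explicit int for the process exit status. Returning
        # True/False mostly works because bool is an int subclass, but
        # True becomes exit status 1, which callers read as failure.
        return 0 if success else 1

    if __name__ == "__main__":
        sys.exit(main(sys.argv[1:]))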
These use beakerlib to download a Fedora boot.iso and run mkksiso on
it. It currently does not try to boot the resulting iso; it mounts it
and checks that the expected config files have been modified and the
extra files have been added.
This builds a boot.iso in the vm, copies it out, and boots it.
The test that runs inside the boot.iso
(/tests/lorax/test_boot_bootiso.sh) cannot use beakerlib, so it needs to
be a simple shell script returning 1 on failure along with a descriptive
message.
/var/log/audit/audit.log isn't always available (e.g. a tar liveimg
install), but the messages are logged to the journal, which can be
grepped with 'journalctl -g', so use that instead.
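A rough sketch of the check, assuming a helper that shells out to
journalctl (the helper itself is illustrative):

    import subprocess

    def journal_contains(pattern):
        """Return True if 'journalctl -g PATTERN' finds matching entries."""
        result = subprocess.run(
            ["journalctl", "--no-pager", "-g", pattern],
            capture_output=True, text=True)
        out = result.stdout.strip()
        # journalctl prints '-- No entries --' when nothing matched.
        return bool(out) and out != "-- No entries --"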
Note: use podman-docker to avoid changing tests too much. This
is also what we have on the RHEL branches.
There's no service to be started/restarted, so remove everything
related to the docker service.
On Fedora 31 passwordless root login no longer works. We already
install an ssh key, so we may as well use it.
This also reduces the live boot timeout to 2s from 60s, which should
help with timeout problems when booting.
Nested virt, especially on other arches, is not reliable enough to
depend on for testing the created images. This moves the test code into
test_boot_* scripts to be run from inside the booted images.
It also adds copying the results of the build into
/var/tmp/test-results/, and includes the generated ssh key so that
whatever boots the image can also log in.
The tests/test_image.sh script has been added to handle running the
test_boot_* scripts without any of the extra lorax-composer specific
setup.
The 'enabled' field in the /compose/types output now reflects whether or
not the type is supported on the current architecture. Disabled types
are not allowed to be built, and will raise an error like:
Compose type 'alibaba' is disabled on this architecture
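Roughly how the check behaves (the type table and helper names are
illustrative):

    # Illustrative table; the real list comes from the compose templates
    # plus per-architecture support.
    COMPOSE_TYPES = {
        "qcow2":   {"enabled": True},
        "alibaba": {"enabled": False},   # e.g. unsupported on this arch
    }

    def list_compose_types():
        # /compose/types returns every type with its 'enabled' flag so the
        # cli can hide the ones that cannot be built here.
        return [{"name": n, "enabled": t["enabled"]}
                for n, t in sorted(COMPOSE_TYPES.items())]

    def start_compose(compose_type):
        info = COMPOSE_TYPES.get(compose_type)
        if info is None or not info["enabled"]:
            raise RuntimeError("Compose type '%s' is disabled on this architecture"
                               % compose_type)
        # ... queue the compose ...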
This uses a new Ansible module, ec2_snapshot_import, which is included
here until it is available from upstream.
It will upload the AMI to s3, convert it to a snapshot, and then
register the snapshot as an AMI. The s3 object is deleted when it has
been successfully uploaded.
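The flow the module implements is roughly this boto3 sketch (bucket, key,
and device names are placeholders; the real work is done by the bundled
ec2_snapshot_import module):

    import time
    import boto3

    def upload_ami(image_path, bucket, key, image_name):
        s3 = boto3.client("s3")
        ec2 = boto3.client("ec2")

        # 1. Upload the image to s3.
        s3.upload_file(image_path, bucket, key)

        # 2. Convert the s3 object into an EBS snapshot.
        task_id = ec2.import_snapshot(
            DiskContainer={"Format": "raw",
                           "UserBucket": {"S3Bucket": bucket,
                                          "S3Key": key}})["ImportTaskId"]

        # Poll until the import task finishes (there is no built-in waiter).
        while True:
            detail = ec2.describe_import_snapshot_tasks(
                ImportTaskIds=[task_id])["ImportSnapshotTasks"][0]["SnapshotTaskDetail"]
            if detail["Status"] != "active":
                break
            time.sleep(10)
        if detail["Status"] != "completed":
            raise RuntimeError("snapshot import failed: %s" % detail)

        # 3. Register the snapshot as an AMI.
        ami = ec2.register_image(
            Name=image_name,
            VirtualizationType="hvm",
            RootDeviceName="/dev/sda1",
            BlockDeviceMappings=[{"DeviceName": "/dev/sda1",
                                  "Ebs": {"SnapshotId": detail["SnapshotId"]}}])

        # 4. The s3 object is only needed for the import; delete it afterwards.
        s3.delete_object(Bucket=bucket, Key=key)
        return ami["ImageId"]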
Since we have both compose uuids and upload uuids they need to be
clearly named. This updates the upload naming to use 'upload_uuid' in
the inputs, and 'upload_id' in the output (_id instead of _uuid for
consistency with build_id naming in the status responses).
This also adds 'upload_id' to the /upload/log response.
This tests the routes for saving a profile, listing profiles, deleting
profiles, as well as composing with upload.
The composes run fake composes with upload data; one selects a profile,
the other passes in the settings. No actual upload is done, but it tests
that the info, log, and cancel routes work.
This also updates the test setup to copy over the share/lifted directory
so that the providers are available to the tests.
Output from some of these is different from API v0. Instead of mixing
the two, this moves the v1 tests into a new class, ServerAPIV1TestCase,
to make them easier to maintain, and removes the v1 tests from
ServerAPIV0TestCase.
Uploads should only be included in the V1 API routes, so add `api`
selection to the relevant helper functions and to the calls to them from v0.py.
Add new V1 routes with api=1 to include the uploads information in the
results.
Also add tests to ensure that V0 requests do not include uploads.
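A sketch of the helper-level switch (function and field names are
illustrative):

    def compose_status(uuid, api=0):
        """Build the status entry for one compose.

        Only the V1 API includes upload information, so the helper takes
        the API version and v0.py keeps calling it with api=0.
        """
        status = {
            "id": uuid,
            "queue_status": "FINISHED",   # placeholder values
            "blueprint": "example",
            "version": "0.0.1",
        }
        if api == 1:
            # Only V1 responses carry the upload list.
            status["uploads"] = get_uploads(uuid)
        return status

    def get_uploads(uuid):
        # Placeholder for looking up the uploads attached to this compose.
        return []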
A recipe that is valid TOML can still be an invalid recipe (e.g. missing
the 'name' field) so this should also catch RecipeError.
Also added tests for this, as well as making sure commit_recipe_file()
raises the correct errors.
Resolves: rhbz#1755068
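The resulting error handling looks roughly like this self-contained
sketch (RecipeError and recipe_from_toml here are stand-ins for the real
ones):

    import toml

    class RecipeError(Exception):
        """Stand-in for the real RecipeError raised for invalid recipes."""

    def recipe_from_toml(raw):
        """Stand-in: parse TOML, then validate the recipe fields."""
        parsed = toml.loads(raw)      # raises toml.TomlDecodeError on bad TOML
        if "name" not in parsed:
            raise RecipeError("Missing 'name' field")
        return parsed

    def load_recipe(raw):
        try:
            return recipe_from_toml(raw), []
        except (toml.TomlDecodeError, RecipeError) as e:
            # Catch both: invalid TOML *and* valid TOML that is an invalid recipe.
            return None, [str(e)]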
- save compose logs under /var/log/$TEST
- save qemu logs under /var/log/$TEST/qemu.log
- download everything to $TEST_ATTACHMENTS so it can be saved
in CI results
tests: export BLUEPRINTS_DIR for use in tests
Depending on the way the tests are run the directory may be a temporary
dir, or it may be the standard /var/lib/lorax/... path.
Related: rhbz#1714103
This loads the system dnf vars from /etc/dnf/vars at startup if
system repos are enabled, and substitutes the values into the sources
when they are loaded and when a new source is added.
Also includes tests.
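Conceptually the substitution works like this (the vars directory follows
dnf's convention; the helper names are illustrative):

    import os

    def read_dnf_vars(vars_dir="/etc/dnf/vars"):
        """Read dnf substitution variables: one file per variable."""
        dnf_vars = {}
        if os.path.isdir(vars_dir):
            for name in os.listdir(vars_dir):
                with open(os.path.join(vars_dir, name)) as f:
                    dnf_vars[name] = f.read().strip()
        return dnf_vars

    def substitute_vars(url, dnf_vars):
        """Replace $var / ${var} style variables in a source url."""
        for name, value in dnf_vars.items():
            url = url.replace("${%s}" % name, value).replace("$%s" % name, value)
        return url

    # e.g. substitute_vars("https://example.com/$releasever/$basearch/",
    #                      {"releasever": "31", "basearch": "x86_64"})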
This changes the source 'name' field to match the DNF usage of it as a
descriptive string. 'id' is now used as the short name to refer to the
source. The v0 API remains unchanged.
Tests for v1 behavior have been added.
Now that the v1 API is in use, the status message will return api: 1
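Illustratively, a v1 source separates the two fields like this (the other
source settings are unchanged and omitted here):

    v1_source = {
        "id": "custom-1",               # short name used to refer to the source
        "name": "My custom packages",   # descriptive string, matching DNF's usage
        # ... type, url, and the rest of the source settings ...
    }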
This creates a tar suitable for use with the anaconda kickstart liveimg
command. It adds the kernel, grub2, and grub2-tools packages to the tar
template.
libdnf-0.22.5-5 changed something and now the repos with fake urls are
failing when loaded by test_server.py (they still work fine with
test_projects.py), so only use the 'good' repos with the test_server.py
tests -- the others weren't needed for any of its tests anyway.
make_squashfs has been removed; make_runtime is now used in all paths to
create the install.img.
Add tests for squashfs only and squashfs+ext4 (requires loop, so it only
runs as root).
In Python 3, f.seek() on text files doesn't work like it does in py2/C
because text is now unicode. So change read_tail to use byte mode and take
unicode into account. Also add tests for it.
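A minimal sketch of the byte-mode approach (the real read_tail may differ
in details):

    def read_tail(path, size=1024):
        """Return roughly the last `size` bytes of a file as text.

        The file is opened in binary mode: in Python 3 seeking relative
        to the end only works on byte streams, not on unicode text files.
        """
        with open(path, "rb") as f:
            f.seek(0, 2)                       # seek to the end
            f.seek(max(0, f.tell() - size))
            data = f.read()
        # The cut may land inside a multibyte utf-8 sequence, so drop any
        # partial character instead of raising.
        return data.decode("utf-8", errors="ignore")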
Previously it was looping, waiting for FINISHED|FAILED, but was not
actually failing the test if the compose failed to build.
This adds a function to check the status of the compose and calls it
after each compose.
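The check amounts to something like this (the real test is a shell
script; the command and parsing here are illustrative):

    import subprocess
    import sys

    def check_compose_status(uuid):
        """Fail the test run if the compose did not end in FINISHED."""
        out = subprocess.run(["composer-cli", "compose", "info", uuid],
                             capture_output=True, text=True, check=True).stdout
        if "FINISHED" not in out:
            sys.exit("compose %s did not finish successfully:\n%s" % (uuid, out))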
dnf seems to have changed the default for skip_if_unavailable. Some
mock repositories are still around in later tests, which then fail
because metadata cannot be synced.
Also expose skip_if_unavailable in dnf_repo_to_file_repo(), so that
tests checking for equality of repo files continue to pass.
because this requires additional Python modules and we don't
really use it! Fixes:
[ WARNING ] :: cannot create journal.xml due to missing python interpreter
This makes sure that required fields are included, and that sections are
not empty. It does not check for all optional fields.
If there are errors it will gather up all of them and then raise a
RecipeError with a string of all the errors.
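The validation follows a collect-then-raise pattern, roughly (the field
and section names are illustrative, not the full set of checks):

    def validate_recipe(recipe):
        """Check required fields and empty sections, collecting every problem."""
        errors = []

        for field in ("name", "description"):          # illustrative required fields
            if field not in recipe:
                errors.append("Missing '%s' field" % field)

        for section in ("packages", "modules", "groups"):
            if section in recipe and not recipe[section]:
                errors.append("'%s' section is empty" % section)

        # Optional fields are not checked. If anything was wrong, raise once
        # with all of the errors (the real code raises RecipeError).
        if errors:
            raise RuntimeError("\n".join(errors))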
The new toml library, introduced with abe7df34f, outputs different
whitespace from the old one. Fix the test expectation and strip() the
results from toml.dumps(), because they contain superfluous newlines at
the end.
Add -monitor none to turn off the qemu monitor multiplexing.
Pass -boot d for -cdrom booting instead of 'c'.
Add 'console=ttyS0,115200n8' to the boot arguments so that kernel output
will show up on the serial port.
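Put together, the qemu invocation looks roughly like this (paths and
memory size are placeholders):

    # Illustrative qemu command line for booting the iso under test:
    qemu_cmd = [
        "qemu-system-x86_64",
        "-m", "2048",
        "-nographic",
        "-monitor", "none",       # no monitor multiplexing on the console
        "-cdrom", "boot.iso",     # placeholder path
        "-boot", "d",             # boot from the cdrom instead of the disk ('c')
    ]
    # 'console=ttyS0,115200n8' is appended to the iso's kernel boot arguments
    # so kernel output shows up on the serial console.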
This also includes extensive tests for each of the currently supported
customizations. It should be generic enough to continue working as long
as each dict in the list includes a 'name' or 'user' field.
Otherwise support for a new dict key will need to be added to the
customizations_diff function.
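A simplified version of the diff idea (the real customizations_diff
produces more detailed messages; this only shows the 'name'/'user'
keying):

    def _key(entry):
        """Identify a customization dict by its 'name' or 'user' field."""
        return entry.get("name", entry.get("user"))

    def customizations_diff(old_list, new_list):
        """Return human-readable differences between two lists of dicts."""
        old = {_key(e): e for e in old_list}
        new = {_key(e): e for e in new_list}
        diffs = []
        for key in sorted(set(old) | set(new), key=str):
            if key not in new:
                diffs.append("Removed %s" % key)
            elif key not in old:
                diffs.append("Added %s" % key)
            elif old[key] != new[key]:
                diffs.append("Changed %s" % key)
        return diffs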
the biggest slowdown is fetching data for many repositories
over a slow network. The previous retry count and sleep times
are sometimes not enough on Fedora.
It's necessary to make sure the blueprints directory doesn't contain
the git/ directory before the tests are run, so that we can simply
modify the blueprint files without using blueprints push.
Related: rhbz#1714298
Beakerlib upstream can't do this yet, but might at some point:
https://github.com/beakerlib/beakerlib/issues/42
This is only enabled in combination with the `--sit` option of the
`test/check-*` scripts. It leaves the system in exactly the state it was
in when an assertion failed. Finishing the test run would run cleanup as
well (such as deleting created images). It also takes longer.
`setup_tests()` expected BLUEPRINTS_DIR to be set, but it wasn't when
running in automated mode (with $CLI set).
Fix this and move share and blueprint dirs to function arguments.
Not all parts of the script have been switched from awscli to ansible yet,
because the ansible aws modules do not support importing s3 objects as snapshots.
(https://github.com/ansible/ansible/issues/53453)
A workaround using the image_location parameter of the ec2_ami ansible module
would mean adding extra code for generating an AMI manifest with pre-signed
URLs.
We were checking for composer's FINISHED status only, which meant that
when a compose failed, the test ran until it timed out.
Check for FAILED as well. Also, always time out after 30 minutes.
Some test runners don't have nested virtualization enabled. Because
these checks only verify that a boot works, kvm doesn't give us that
much. Disable it for now.
Also remove the check for qemu-kvm. It doesn't abort the test
prematurely anyway.