Beakerlib upstream can't do this yet, but might at some point:
https://github.com/beakerlib/beakerlib/issues/42
This is only enabled in combination with the `--sit` option of the
`test/check-*` scripts. It leaves the system in exactly the state it was
in when an assertion failed. Finishing the test run would run cleanup as
well (such as deleting created images). It also takes longer.
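For example (test name chosen for illustration):

    $ test/check-cli --sit TestImages.test_qcow2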
`setup_tests()` expected BLUEPRINTS_DIR to be set, but it wasn't when
running in automated mode (with $CLI set).
Fix this and move share and blueprint dirs to function arguments.
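A sketch of the new shape (function body elided, argument order is an
assumption):

    setup_tests() {
        local share_dir=$1
        local blueprints_dir=$2
        ...
    }

    setup_tests "$SHARE_DIR" "$BLUEPRINTS_DIR"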
Test the HTTP API directly from outside the VM by forwarding
/run/weldr/api.socket to a TCP port on the host.
This is only a start: it checks that the API always returns JSON (see
the previous commit).
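A minimal sketch of that forwarding, assuming root ssh access to the VM
and an OpenSSH new enough to forward unix sockets; the local port is
arbitrary:

    $ ssh -N -L 8080:/run/weldr/api.socket root@$VM_IP &
    $ curl http://localhost:8080/api/status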
This makes it possible to have more granular test execution
reported as separate statuses on GitHub. At the moment we will have:
- cockpit/fedora-30
- cockpit/fedora-30/live-iso
- cockpit/fedora-30/qcow2
- cockpit/fedora-30/aws
- cockpit/fedora-30/azure
- cockpit/fedora-30/openstack
- cockpit/fedora-30/vmware
Helps in figuring out which tests are in a file without having to open
it. Use it like this:
$ test/check-cli -l
TestImages.test_live_iso
TestImages.test_partitioned_disk
TestImages.test_qcow2
TestImages.test_tar
TestSanity.test_blueprint_sanity
TestSanity.test_compose_sanity
Names of classes containing multiple tests can be given, just as with a
normal test run:
$ test/check-cli -l TestSanity
TestSanity.test_blueprint_sanity
TestSanity.test_compose_sanity
Not all parts of the script have been switched from awscli to ansible yet,
because the ansible aws modules do not support importing S3 objects as snapshots.
(https://github.com/ansible/ansible/issues/53453)
A workaround using the image_location parameter of the ec2_ami ansible
module would mean adding extra code for generating an AMI manifest with
pre-signed URLs.
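For reference, the import step that still needs awscli looks roughly
like this (bucket and key are placeholders):

    $ aws ec2 import-snapshot \
          --disk-container "Format=RAW,UserBucket={S3Bucket=$BUCKET,S3Key=$KEY}"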
Commit 4783f6562f introduced the assumption that beakerlib is already
installed in non-RHEL images. As the test script runs without `set -e`,
this went unnoticed because the test silently succeeds.
Go back to installing beakerlib everywhere.
We were checking for composer's FINISHED status only, which meant that
when a compose failed, the test ran until it timed out.
Check for FAILED as well. Also, always time out after 30 minutes.
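A minimal sketch of the wait loop; FINISHED and FAILED are composer's
status strings, while the polling interval and output parsing are
assumptions:

    timeout=$((SECONDS + 30 * 60))
    while [ "$SECONDS" -lt "$timeout" ]; do
        status=$($CLI compose info "$UUID" | head -n1 | cut -d' ' -f2)
        case "$status" in FINISHED|FAILED) break ;; esac
        sleep 20
    done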
Some test runners don't have nested virtualization enabled. Because
these checks only verify that a boot works, kvm doesn't gain us that
much. Disable it for now.
Also remove the check for qemu-kvm. It doesn't abort the test
prematurely anyway.
A compose can change the host's SELinux policy, which can lead to docker
crashing if the container-selinux policy is not included. Add a
workaround and bug link.
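One possible shape of the workaround, assuming it only needs to make
sure the policy module is present before docker starts (the check is an
assumption; the package name comes from the text above):

    if ! semodule -l | grep -q '^container'; then
        yum -y install container-selinux
    fi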
The docker phase always failed because `-ti` was passed even though the
output was not a terminal.
Also remove the check for /usr/bin/docker in the setup phase. It didn't
test that the daemon was running. More importantly, it didn't abort the
test anyway (and there doesn't seem to be a good way to do this in
beakerlib).
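The fix boils down to keeping `-i` for stdin but dropping `-t` (the
image name is a placeholder):

    $ docker run --rm -i $IMAGE /bin/true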
Allows running the tests on multiple operating systems and on the
infrastructure that the Cockpit team maintains.
`make vm` downloads one of Cockpit's test images (override which one
with TEST_OS) and installs rpms built from the local checkout of lorax.
The resulting image is placed in `test/images/$TEST_OS`.
TEST_OS can be set to any of Cockpit's supported images (default:
fedora-30).
Run `make check-vm` to run the CLI checks in the VM. The bulk of the
work is done in `test/check-cli`, which uses Cockpit's `bots` library to
start the VM and run the script in it.
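Typical usage looks like this:

    $ make vm                     # build test/images/fedora-30 (the default)
    $ TEST_OS=fedora-30 make vm   # or pick one of Cockpit's images explicitly
    $ make check-vm               # run the CLI checks in the VM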
Also included is a `test/run` script, which is the entrypoint for
Cockpit's test infrastructure.
The filesystem was too small because Anaconda always adds the kernel,
but the template uses --nocore so it doesn't take that into account.
Add it to the template so that the filesystem size will be large enough
to hold the extra packages.
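A sketch of the template change (the surrounding package list is
elided):

    %packages --nocore
    kernel
    ...
    %end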
To maintain consistency with the other options this changes firewall to
combine the existing settings from the image template with the settings
from the blueprint.
Also updated the docs, added a new test for it, and sorted the output
for consistency.
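For reference, the blueprint section this applies to looks like this
(values are illustrative):

    [customizations.firewall]
    ports = ["22:tcp", "80:tcp"]

    [customizations.firewall.services]
    enabled = ["ftp"]
    disabled = ["telnet"]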
Make it clear that the services are added to services already listed in
the image templates, and that you can specify any systemd unit filename.
Older releases are more restrictive, and this documentation will need to
be updated when these changes are backported.
Add support for enabling and disabling systemd services in the
blueprint. It works like this:
[customizations.services]
enabled = ["sshd", "cockpit.socket", "httpd"]
disabled = ["postfix", "telnetd"]
They are *added* to any existing settings in the kickstart templates.