Sometimes it is necessary to modify the kernel command line of the
image, so this adds support for a [customizations.kernel] section in
the blueprint:
[customizations.kernel]
append = "nosmt=force"
This will be appended to the kickstart's bootloader --append argument.
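For illustration, a minimal sketch of the kind of substitution involved
(the helper name and template handling here are assumptions, not the
actual implementation):

    def bootloader_append(ks_template, kernel_append):
        """Add the blueprint's kernel append string to the bootloader line."""
        lines = []
        for line in ks_template.splitlines():
            if line.startswith("bootloader") and '--append="' in line:
                # e.g. bootloader --append="console=ttyS0" becomes
                #      bootloader --append="nosmt=force console=ttyS0"
                line = line.replace('--append="', '--append="%s ' % kernel_append)
            lines.append(line)
        return "\n".join(lines)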
Includes tests for modifying the bootloader line, the kickstart
template, and examining the final-kickstart.ks created for a compose.
Related: rhbz#1688335
Reading a blueprint wasn't checking whether it had been deleted, so it
was returning the most recent commit from before the deletion. This
allowed things like starting a compose with a blueprint that technically
no longer exists.
One exception to this is the /changes/ route; it must remain available
so that you can use the commit hash to undo a delete.
This also adds tests for the various operations.
(cherry picked from commit d32f477e0b)
Resolves: rhbz#1683442
The root account checks are applied to generated and deployed images
to make sure that the root account is locked, except on the live ISO.
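As an illustration of the kind of check involved (a sketch, not the
actual test code), the root entry in the image's /etc/shadow should have
a locked password field:

    def root_account_is_locked(shadow_path):
        """Return True if the root password field in /etc/shadow is locked."""
        with open(shadow_path) as f:
            for line in f:
                user, password = line.split(":")[:2]
                if user == "root":
                    # A locked or disabled account uses "!", "!!", "*", or a
                    # "!"-prefixed hash in the password field.
                    return password in ("*", "!!") or password.startswith("!")
        return False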
Related: rhbz#1687595
Add the correct composer-cli command 'compose details' in places
where 'compose info' was still used on RHEL-7
(see commit 808454b561)
Related: rhbz#1687595
This will allow you to test against the installed RPM like so:
# export CLI="/usr/bin/composer-cli"
# make test_images
If you already have lorax-composer running then you can directly
execute test scripts:
# ./tests/cli/test_build_and_deploy_aws.sh
Related: rhbz#1687595
- on some arches (including Fedora x86_64) systemd-nspawn may not be
available
- delete composes from other tests in rlPhaseStartCleanup because
we're seeing the tar compose hang in Jenkins, and since that test
script is executed last the slave may be running out of
disk space. Be a good citizen and clean up after the previous
tests.
Related: rhbz#1656105
Because we've migrated to Upshift we must use a different instance
type, specify the desired network to connect to, and update how we get
the IP address of the launched VM.
Related: rhbz#1656105
Otherwise we get a conflict with python-ipaddress, which is a
dependency of python2-pip and can't really be removed!
Installing from RPM is also a no-go because openstacksdk is
not available in EPEL 7. On the other hand, the RDO repositories which
OpenStack recommends carry an older version (0.11) and ansible wants
0.12 or later!
Related: rhbz#1656105
In some cases, when the host has, for whatever reason, multiple copies
of the same repo listed, the build may fail with an error about running
out of space.
So this commit removes duplicate entries after the host's repos have been
loaded. It also adjusts some of the test repos to use different
temporary repo names for the tests.
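A rough sketch of the deduplication idea (attribute names follow yum's
repo objects, but treat the helper as illustrative):

    def dedupe_repos(repos):
        """Return the repos with duplicate source URLs removed."""
        seen = set()
        unique = []
        for repo in repos:
            key = (tuple(repo.baseurl or []), repo.mirrorlist, repo.metalink)
            if key not in seen:
                seen.add(key)
                unique.append(repo)
        return unique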
Resolves: rhbz#1664128
Support both
[customizations]
hostname = "whatever"
and
[[customizations]]
hostname = "whatever"
in the blueprint data. The [[ syntax matches the other customization
directives (user, group, sshkey), and as such it's easy to accidentally
use it for the hostname without even realizing it's specifying something
different.
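A small sketch of how both forms can be handled (illustrative, not the
exact code): [customizations] parses as a dict while [[customizations]]
parses as a list of dicts, so normalize before reading the hostname.

    import toml

    def get_customizations(blueprint_str):
        recipe = toml.loads(blueprint_str)
        customizations = recipe.get("customizations", {})
        # [[customizations]] gives a list with a single table in it
        if isinstance(customizations, list):
            customizations = customizations[0] if customizations else {}
        return customizations

    get_customizations('[customizations]\nhostname = "whatever"')
    get_customizations('[[customizations]]\nhostname = "whatever"')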
Add some tests for converting customizations to kickstarts.
(cherry picked from commit 35ab6a1336)
Resolves: rhbz#1666517
This makes sure that depsolving shim installs the shim-* package, and
that depsolving grub2-efi-*-cdboot installs a specific -cdboot package.
Related: rhbz#1641601
See:
fd61b54679 (diff-b4ef698db8ca845e5845c4618278f29a)
Note: this may also affect master/rhel8-branch but we haven't seen it
there so far. For master we can do:
dnf install ansible python3-openstacksdk
That is not as easy on RHEL 7.
pylint has trouble with Flask response objects, thinking they are tuples
and returning no-member errors. It also doesn't recognize gevent.socket
members like AF_UNIX.
When the repository has multiple arches, e.g. i686 and x86_64, it should
add a new entry to the project's builds list, not create a new project
in the list.
This handles that by adding a modified insort_left function and
examining the packages returned from dnf to make sure they aren't
already listed in the results. It also handles adding them in sorted
order so that no further sorting needs to be done on the results.
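A condensed sketch of the modified insort_left (simplified from the real
change, which also merges new arch builds into the existing project
entry instead of inserting a duplicate):

    def insort_left(a, x, key=None, lo=0, hi=None):
        """Insert x into the sorted list a, comparing with key()."""
        if key is None:
            key = lambda v: v
        if hi is None:
            hi = len(a)
        x_key = key(x)
        while lo < hi:
            mid = (lo + hi) // 2
            if key(a[mid]) < x_key:
                lo = mid + 1
            else:
                hi = mid
        a.insert(lo, x)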
Resolves: rhbz#1657055
(cherry picked from commit 663a0dcd73)
If the system ran out of space, or was rebooted unexpectedly, the state
of the queue symlinks or the results' STATUS files may be inconsistent.
This checks them and:
* Removes broken symlinks from queue/new and queue/run
* Removes symlinks from run and sets the build to FAILED
* Sets builds w/o a STATUS to FAILED
* Sets builds with STATUS of RUNNING to FAILED
* Creates missing queue/new symlinks to results with STATUS of WAITING
So, any builds that were running during the reboot will be FAILED, and
any that were waiting to be started will be started upon rebooting.
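A condensed sketch of that startup check (paths and STATUS handling are
simplified relative to the real queue code):

    import os
    from glob import glob

    def check_queues(lib_dir):
        # Remove broken symlinks from queue/new and queue/run
        for queue in ("new", "run"):
            for link in glob(os.path.join(lib_dir, "queue", queue, "*")):
                # os.path.exists follows the link, so broken links are removed
                if not os.path.exists(link):
                    os.unlink(link)

        # Anything still in queue/run was interrupted; mark it FAILED
        for link in glob(os.path.join(lib_dir, "queue", "run", "*")):
            with open(os.path.join(os.path.realpath(link), "STATUS"), "w") as f:
                f.write("FAILED\n")
            os.unlink(link)

        for result in glob(os.path.join(lib_dir, "results", "*")):
            status_path = os.path.join(result, "STATUS")
            if os.path.exists(status_path):
                status = open(status_path).read().strip()
            else:
                status = "FAILED"
            if status in ("FAILED", "RUNNING"):
                # No STATUS, or a stale RUNNING status, becomes FAILED
                with open(status_path, "w") as f:
                    f.write("FAILED\n")
            elif status == "WAITING":
                # Recreate the missing queue/new symlink
                link = os.path.join(lib_dir, "queue", "new", os.path.basename(result))
                if not os.path.lexists(link):
                    os.symlink(result, link)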
Resolves: rhbz#1657054
(cherry picked from commit f0bac40d7f)
This is similar to the AMI type, but it also adds open-vm-tools and does
not do anything special to the partitioning.
(cherry picked from commit 1056bfc25b)
Resolves: rhbz#1656105
This is similar to the AMI compose type, with a handful of additional
changes specific to Azure:
* Add waagent (but leave NetworkManager enabled, despite some of the
docs)
* Disable cloud-init
* Add Hyper-V modules into the initramfs.
Fixes specific to RHEL:
* Create ifcfg-eth0 required by waagent.
* Install python3 and net-tools required by waagent.
Recommended changes:
* Use recommended kernel boot args.
* Disable kdump.
(cherry picked from commit e0c236ff36)
(cherry picked from commit da0435bc90)
(cherry picked from commit b594fa99bc)
Resolves: rhbz#1656105
This differs from lmc's --make-ami in that it creates a full disk image
instead of an fsimage. Create a raw disk image with / and /boot
partitions, and enable sshd, chronyd, and cockpit by default.
(cherry picked from commit 18188bf6cf)
(cherry picked from commit 81d38b6445)
Resolves: rhbz#1656105
This tests to make sure that the metadata timer is working (by setting
it to 10s and adding a new package to the repo), and that
YumLock.lock_check immediately picks up a new package.
This depends on rpmfluff which is available from Fedora or EPEL repos.
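A rough illustration of the rpmfluff usage (the repo path and the
createrepo step are assumptions made for this sketch):

    import shutil
    import subprocess
    from rpmfluff import SimpleRpmBuild

    # Build a throwaway package and add it to the test repo so the 10s
    # metadata timer (or lock_check) has something new to find.
    pkg = SimpleRpmBuild("fake-timer-test", "1.0", "1", ["noarch"])
    pkg.make()
    shutil.copy(pkg.get_built_rpm("noarch"), "/tmp/lorax-test-repo/")
    subprocess.check_call(["createrepo_c", "--update", "/tmp/lorax-test-repo/"])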
Related: rhbz#1632962
The blueprint version glob was being applied to the whole package NEVRA
by yum (it lacks a separate API for just globbing versions), so this
implements that in filterVersionGlob using fnmatchcase on the package
names, and the yum package verGT comparison on the versions for the
selected package.
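A condensed sketch of the approach (simplified; the real
filterVersionGlob works on yum package objects and their EVR
comparisons):

    from fnmatch import fnmatchcase

    def filter_version_glob(pkgs, name, version_glob="*"):
        """Return the best matching package for name, using the version glob."""
        best = None
        for pkg in pkgs:
            if not fnmatchcase(pkg.name, name):
                continue
            if not fnmatchcase(pkg.version, version_glob):
                continue
            # verGT is yum's EVR comparison on the package objects
            if best is None or pkg.verGT(best):
                best = pkg
        return best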
Also includes tests.
Resolves: rhbz#1628114
Passing ?limit=0 to the blueprints/list, blueprints/changes,
projects/list, and modules/list routes should always return the total
possible results, not 0.
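A small sketch of the intended behavior (the helper name is
illustrative): the total is computed before the limit/offset slice, so
limit=0 still reports how many results exist.

    def take_limits(results, offset=0, limit=20):
        """Return a slice of the results plus the total count."""
        return {"total": len(results),
                "offset": offset,
                "limit": limit,
                "results": results[offset:offset + limit]}

    take_limits(["a", "b", "c"], limit=0)   # total is 3, results is []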
Also move the composer-cli test_diff to the end so that it will work
consistently. Do this by naming it test_z_diff.
Note the exception string checking around compose_type. I didn't really
want to introduce a new exception type just for this, but also didn't
want to duplicate strings. I'd be open to other suggestions for how to
do this.
This adds some fairly redundant code to the beginning of all the
blueprint routes to attempt reading a commit from git for the
blueprint's recipe. If it succeeds, the blueprint exists and the route
can continue. Otherwise, return an error. Hopefully this doesn't slow
things down too much.
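A condensed sketch of that check (the recipe helper and GITLOCK config
follow the existing API code, but treat the details as illustrative):

    from pylorax.api.recipes import read_recipe_commit

    def blueprint_exists(api, branch, blueprint_name):
        """Return True if the blueprint has a commit on the branch."""
        try:
            with api.config["GITLOCK"].lock:
                read_recipe_commit(api.config["GITLOCK"].repo, branch, blueprint_name)
            return True
        except Exception:
            return False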
Note that this also changes the return type of uuid_info to return None
when an unknown ID is given. The other uuid_* functions are fine
because they are checked ahead of time.
Currently the code is not UTF8 safe, so we need to return a clear error
when invalid characters are passed in.
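A minimal sketch of the kind of guard used (illustrative): check the
route arguments up front and return a clear 400 error instead of failing
deeper in the code.

    def valid_utf8(*args):
        """Return False if any argument is not valid UTF-8."""
        for a in args:
            try:
                if isinstance(a, bytes):
                    a.decode("utf-8")
                else:
                    a.encode("utf-8")
            except UnicodeError:
                return False
        return True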
This also adds tests for the routes to confirm that an error is
correctly returned.
This adds an empty __init__.py to tests so that a lib.py library of helper
functions can be imported from the tests.
Add captured_output to use with composer-cli tests to capture stdout/err
output from the functions.
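A sketch of what the captured_output helper provides (essentially the
usual stdout/stderr redirection context manager):

    import sys
    from io import StringIO
    from contextlib import contextmanager

    @contextmanager
    def captured_output():
        """Capture stdout/stderr so tests can check what was printed."""
        new_out, new_err = StringIO(), StringIO()
        old_out, old_err = sys.stdout, sys.stderr
        try:
            sys.stdout, sys.stderr = new_out, new_err
            yield new_out, new_err
        finally:
            sys.stdout, sys.stderr = old_out, old_err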
Some results have errors and no status, others have status and errors.
Update the function to return the final rc to exit with, and a bool
indicating whether or not to continue processing the other fields.
Add a bunch of tests for the new function to make sure I have the logic
correct.
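One possible reading of that logic, as a simplified sketch (the real
function also handles JSON output and several result shapes):

    def handle_api_result(result):
        """Return (exit rc, whether to continue processing other fields)."""
        if result.get("status", True) and not result.get("errors", []):
            return 0, True
        # Print any errors that were returned
        for err in result.get("errors", []):
            print("ERROR: %s" % err)
        # Errors without a status field: report them but keep processing
        if "status" not in result:
            return 1, True
        # status was False: stop processing the other fields
        return 1, False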
This adds a new argument to projects_depsolve and
projects_depsolve_with_size that contains the group list, unfortunately.
I would have preferred adding a function that just returns a list of all
the contents of a group and then add that to what was being passed into
projects_depsolve. However, there does not appear to be any good way to
do that in yum aside from a lot of grubbing around in the comps object,
which I am unwilling to do.
Depsolve the packages included in the templates and report any errors
using the /api/status 'msgs' field. This should help narrow down
problems with package sources not being set up correctly.
Yum needs to have some other attrs set up on the YumRepository object, so
use the function provided to ensure that everything is correct. Also
switch the related functions to use a dict instead of a YumRepository
object.
The yum YumRepository.dump() function's output cannot be used as a .repo
file. Add a new function to write this in the correct format, limited to
the fields we use.
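A minimal sketch of writing the .repo format from the fields we track
(the function name and field list are illustrative):

    def write_repo_file(fileobj, repo_id, repo):
        """Write a yum .repo style section for the repo dict."""
        fileobj.write("[%s]\n" % repo_id)
        fileobj.write("name = %s\n" % repo.get("name", repo_id))
        if repo.get("baseurl"):
            fileobj.write("baseurl = %s\n" % repo["baseurl"])
        if repo.get("mirrorlist"):
            fileobj.write("mirrorlist = %s\n" % repo["mirrorlist"])
        fileobj.write("gpgcheck = %d\n" % (1 if repo.get("gpgcheck") else 0))
        if repo.get("gpgkey"):
            fileobj.write("gpgkey = %s\n" % ",".join(repo["gpgkey"]))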
Add a test for the new function.
Fix /projects/source/info to return an error 400 if a nonexistent TOML
source is requested. If JSON is used the error is part of the standard
response.
Update test_server.py to check for the correct error code.
When adding a source failed, it wasn't being removed from the dnf object.
This fixes that, and returns an error when setting up the source fails.
Also adds a test for it.
Otherwise the user creation fails when anaconda sees there is already a
group with that name. Log a warning and continue on.
(cherry picked from commit a363aee971)
This adds support for the optional blueprint section [customizations].
Use it like this:
[customizations]
hostname = "yourhostnamehere"
[[customizations.sshkey]]
user = "root"
key = "root user key"
This avoids comparing against files on disk (and the huge diff the
test runner produces in case of failure). Instead we look for a
200 HTTP response with a large enough size and some well known
strings inside the response data.
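A small sketch of that assertion style (the size threshold and marker
string are illustrative); server is a Flask test client:

    def check_download(server, route, marker=b"x86_64"):
        """Check a download route without diffing files from disk."""
        resp = server.get(route)
        assert resp.status_code == 200
        # "large enough" rather than comparing exact bytes on disk
        assert len(resp.data) > 1024
        # look for a well known string inside the response data
        assert marker in resp.data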
Because you cannot share data between test methods, these all have to be
in one big function. This adds one series of checks to test the failed
compose results, and a second function to test for the successful compose.
.get_default() returns a string, so make sure we're actually parsing
the value as a boolean and not evaluating a non-empty string in a
boolean context (which would always be True).
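A minimal sketch of the fix (the config object and option name are just
placeholders):

    def parse_bool(value):
        """Convert a config string like "0"/"1"/"true" to a real boolean."""
        return str(value).lower() in ("1", "true", "yes", "on")

    # e.g. instead of: if conf.get_default("section", "option", "0"):
    #      use:        if parse_bool(conf.get_default("section", "option", "0")):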