At the end of disk image installs, use fstrim on the generated filesystem to
discard any blocks that were allocated during the install and are now unused.
This will allow tools such as qemu-img to create images that do not include
deleted data.
For raw disk images that do not go through qemu-img, use fallocate --dig-holes
to create sparse holes in place of the unused blocks.
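In rough Python terms the cleanup is two subprocess calls (a sketch, not
the exact code used here; mnt_dir and disk_img are placeholder paths):

    import subprocess

    mnt_dir = "/mnt/install-root"   # placeholder: where the fs is mounted
    disk_img = "/var/tmp/disk.img"  # placeholder: the raw image file

    # while the filesystem is still mounted, discard blocks freed during
    # the install so qemu-img can skip them
    subprocess.check_call(["fstrim", "-v", mnt_dir])

    # for raw images (after unmounting), punch sparse holes in place of
    # the zeroed blocks
    subprocess.check_call(["fallocate", "--dig-holes", disk_img])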
(cherry picked from commit 9717b3fd98)
Related: rhbz#1628645
Related: rhbz#1628646
Related: rhbz#1628647
Related: rhbz#1628648
Anaconda requires the root password to be set or locked, so if there
isn't anything setting it we write out 'rootpw --lock'.
Also adds tests for this.
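A minimal sketch of the fallback, assuming the kickstart is assembled as
a list of lines (the real code differs in detail):

    ks_lines = []  # illustrative: the assembled kickstart, one command per line

    # Anaconda requires root's password to be set or locked; fall back
    # to locking the account when nothing else sets it
    if not any(line.startswith("rootpw") for line in ks_lines):
        ks_lines.append("rootpw --lock")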
Resolves: rhbz#1626122
Also kill the lorax-composer process and remove /run/weldr/api.socket
so that when this is run with podman you don't get an error about
attempting to tar up the socket.
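In Python terms the extra cleanup amounts to something like this (a
sketch, not the test's literal commands):

    import os
    import subprocess

    # stop the API server, then drop the socket so tar doesn't trip
    # over the special file when archiving the container
    subprocess.call(["pkill", "-f", "lorax-composer"])
    if os.path.exists("/run/weldr/api.socket"):
        os.unlink("/run/weldr/api.socket")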
Related: rhbz#1613058
- need to specify --sharedir so lorax-composer can find its
kickstart files
- each test script writes results into a separate directory to
avoid a passing test overwriting the results from a failing one.
To avoid reporting failures in case of previously failing tests
(e.g. during development), remove the temporary directories holding
test results before execution!
These are built on top of beakerlib and we use its internal
protocol to figure out the result without relying on the full
test runner that is typically used inside of a RHEL environment!
Includes a disabled test snippet for Issue #460
This is similar to the AMI type, but also adds open-vm-tools and does not do
anything special to the partitioning.
(cherry picked from commit 1056bfc25b)
Resolves: rhbz#1628646
This does pretty much the same things as the AMI compose type, but also
replaces NetworkManager with the Azure Linux agent.
(cherry picked from commit e0c236ff36)
Resolves: rhbz#1628648
This differs from lmc's --make-ami in that it creates a full disk image
instead of an fsimage. Create a raw disk image with / and /boot partitions,
and enable sshd, chronyd, and cockpit by default.
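The kickstart fragment implied by that description would look roughly
like this, embedded here as a Python template string (the sizes and the
exact service unit names are illustrative, not the template's values):

    # hedged sketch of the partitioning and services part of the template
    DISK_KS = """
    part /boot --size=1024
    part / --size=4096 --grow
    services --enabled=sshd,chronyd,cockpit.socket
    """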
(cherry picked from commit 18188bf6cf)
Resolves: rhbz#1628647
Instead of specifying the fstype, just let Anaconda use the default.
(cherry picked from commit 847fff4e11)
Related: rhbz#1628647
Related: rhbz#1628648
When the kickstart is handed off to Anaconda for building it will
download its own copy of the metadata and re-run the depsolve. So if the
dnf cache isn't current there will be a mismatch and the build will
fail to find some of the versions in final-kickstart.ks
This adds a new context to DNFLock, .lock_check, that will force a check
of the metadata. It also implements its own timeout and forces a
refresh of the metadata when that expires because the dnf expiration
doesn't always work as expected.
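A sketch of the shape of the class (the timeout value and the dnf calls
are illustrative; only the lock/lock_check names and the shared dnf.Base
object come from the description above):

    import time
    from contextlib import contextmanager
    from threading import Lock

    class DNFLock(object):
        def __init__(self, dbo, expire_secs=6*60*60):
            self.dbo = dbo                  # shared dnf.Base object
            self._lock = Lock()
            self._expire_secs = expire_secs
            self._expire_time = time.time() + expire_secs

        @property
        @contextmanager
        def lock(self):
            with self._lock:
                yield self.dbo

        @property
        @contextmanager
        def lock_check(self):
            with self._lock:
                if time.time() > self._expire_time:
                    # dnf's expiration doesn't always fire as expected,
                    # so force the repos to refetch on our own timer
                    for repo in self.dbo.repos.iter_enabled():
                        repo.metadata_expire = 0
                    self._expire_time = time.time() + self._expire_secs
                # always check the metadata against the repos
                self.dbo.update_cache()
                yield self.dbo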
Resolves: rhbz#1631561
This tests to make sure that the metadata timer is working (by setting
it to 10s and adding a new package to the repo), and that
DNFLock.lock_check immediately picks up a new package.
This depends on rpmfluff, which is available from the Fedora or EPEL repos.
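A hedged example of how the new package can be conjured with rpmfluff
(the package name is made up):

    from rpmfluff import SimpleRpmBuild

    # build a throwaway rpm to add to the test repo; after rebuilding
    # the repo metadata, lock_check should see it without waiting for
    # dnf's own expiration
    pkg = SimpleRpmBuild("fake-package", "1.0", "1")
    pkg.make()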
Related: rhbz#1631561
It turns out you cannot use the kickstart user command on root, since it
already exists, so we have to translate that into a rootpw command.
So [[customizations.user]] with name = "root" only supports key, which
will set the ssh key, and password, which will use rootpw to set the
password. Plain text or encrypted passwords are supported.
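A sketch of the translation, with the user entry as a plain dict (the
function name and the crypt detection are illustrative, not the exact
implementation):

    def root_to_ks(user):
        # root already exists, so emit rootpw/sshkey instead of `user`
        lines = []
        if "password" in user:
            # crude check for a crypt-formatted (encrypted) password
            if user["password"].startswith("$"):
                lines.append('rootpw --iscrypted "%s"' % user["password"])
            else:
                lines.append('rootpw --plaintext "%s"' % user["password"])
        if "key" in user:
            lines.append('sshkey --user root "%s"' % user["key"])
        return lines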
Related: rhbz#1626122
If we leave the root account w/o a password people will use it that way,
leading to insecure images. The same is true if we use a default password.
So lock the root account in the templates.
Users will need to do one of these things:
1. Use [[customizations.user]] in their blueprint to configure root or
another user.
2. Use [[customizations.sshkey]] to set a key for root.
3. Install a package that configures a user at install time.
4. Install a package that sets up a user at boot time (eg. cloud-init).
This also drops the auth line from the kickstart templates, allowing it
to use the default password algorithm instead of md5.
Resolves: rhbz#1626122
In the near future there may be /lib/modules/ directories for older
kernels with weak dependencies listed. These may not match the installed
kernel(s) so we cannot depend on them to drive generate_module_data.
Instead use the existing findkernels() function to get the list of
installed kernels and iterate those, running depmod on them.
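Roughly, the new flow is this sketch (findkernels() lives in pylorax's
treebuilder module; the attribute name on its results is from memory):

    import subprocess
    from pylorax.treebuilder import findkernels

    def generate_module_data(root):
        # run depmod only for kernels that are really installed, instead
        # of trusting whatever directories exist under /lib/modules/
        for kernel in findkernels(root=root):
            subprocess.check_call(["depmod", "-a", "-b", root, kernel.version])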
Resolves: rhbz#1632140
(cherry picked from commit 07acd2e780)
lorax uses pyanaconda's SimpleConfigParser in three different
places (twice with a copy that's been dumped into pylorax, once
by importing it), just to do a fairly simple job: read some
values out of /etc/os-release. The only value SimpleConfigParser
is adding over Python's own ConfigParser here is to read a file
with no section headers, and to unquote the values. The cost is
either a dependency on pyanaconda, or needing to copy the whole
of simpleparser plus some other utility bits from pyanaconda
into lorax. This seems like a bad trade-off.
This changes the approach: we copy one very simple utility
function from pyanaconda (`unquote`), and do some very simple
wrapping of ConfigParser to handle reading a file without any
section headers, and returning unquoted values. This way we can
read what we need out of os-release without needing a dep on
pyanaconda or to copy lots of things from it into pylorax.
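For illustration, the whole replacement can be as small as this
(flatconfig is a plausible name for the helper, not necessarily the one
used):

    from configparser import ConfigParser

    def unquote(s):
        # strip one layer of matching quotes, like pyanaconda's unquote
        if len(s) > 1 and s[0] == s[-1] and s[0] in "\"'":
            return s[1:-1]
        return s

    def flatconfig(filename, section="main"):
        parser = ConfigParser()
        parser.optionxform = str  # keep PRETTY_NAME etc. in their own case
        # prepend a fake section header so ConfigParser accepts the file
        with open(filename) as f:
            parser.read_string("[%s]\n%s" % (section, f.read()))
        return dict((key, unquote(val)) for key, val in parser.items(section))

    print(flatconfig("/etc/os-release").get("PRETTY_NAME"))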
Resolves: #449
Resolves: #450
Signed-off-by: Adam Williamson <awilliam@redhat.com>
Related: rhbz#1613058
blueprints/changes is different; each blueprint has its own total,
limited by the call's limit. So it needs to find the max total of all
the requested blueprints.
(cherry picked from commit 57674c9a1a)
Passing ?limit=0 to the blueprints/list, blueprints/changes,
projects/list, and modules/list routes should always return the total
possible results, not 0.
Also move the composer-cli test_diff to the end so that it will work
consistently. Do this by naming it test_z_diff.
(cherry picked from commit 972b5c4142)
The blueprints/changes API is a bit different from the others: the total
that it includes is for each blueprint, not one total for all of them,
since there will be a different number of commits for each.
The function is passed the response dict, and it can be used to select
the total to use for retrieving all of the results. If it isn't included
it will use data["total"], which works fine in most cases.
(cherry picked from commit 0a76d635ca)
Add a limit argument to all potentially paginated results, equal to
whatever the composer backend reports as the total number of results. This
still has the potential to return truncated data if the number of results
increases between the two HTTP requests.
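The two-request pattern then looks roughly like this (api_get stands in
for composer-cli's API helper; it is a placeholder, not a real function):

    def get_all_results(api_get, route, total_fn=None):
        # ask for zero results just to learn the total, then fetch
        # everything; the total can still change between the two calls
        first = api_get(route, limit=0)
        total = total_fn(first) if total_fn else first["total"]
        return api_get(route, limit=total)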
Resolves: #404
(cherry picked from commit ee98d87cea)