When the repository has multiple arches, e.g. i686 and x86_64, it should
add a new entry to the project's builds list, not create a new project
in the list.
This handles that by adding a modified insort_left function and
examining the packages returned from dnf to make sure they aren't
already listed in the results. It also handles adding them in sorted
order so that no further sorting needs to be done on the results.
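A minimal sketch of the sorted-insert idea, assuming the project list is
kept sorted by a key function; this mirrors bisect.insort_left with a
key argument, and the details here are illustrative rather than the
exact implementation:

    def insort_left(a, x, key=None, lo=0, hi=None):
        """Insert x into the sorted list a, keeping a sorted by key()"""
        if key is None:
            key = lambda i: i
        if hi is None:
            hi = len(a)
        while lo < hi:
            mid = (lo + hi) // 2
            if key(a[mid]) < key(x):
                lo = mid + 1
            else:
                hi = mid
        a.insert(lo, x)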
Resolves: rhbz#1656642
Depending on which version of dnf is installed, it may return a tuple or
a list. Work around this by only comparing the first element.
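For illustration only, the comparison can be written so that indexing
works the same for either type; the helper and variable names here are
hypothetical:

    def already_listed(new_entry, results):
        """Return True if an entry with the same first element is in results.

        new_entry may be a tuple or a list depending on the dnf version,
        so only the first element is compared.
        """
        return any(r[0] == new_entry[0] for r in results)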
Related: rhbz#1655876
Stop using a fake dnf object and use the real thing. This will help
catch problems with dnf returning unexpected types like VectorString.
(cherry picked from commit 27aff75aa3)
Related: rhbz#1655876
And in an intermediate version it returns a VectorString object, which
isn't serializable by the json or toml modules. So convert it to a list
so that the type is consistent in the sources code.
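A hedged sketch of the conversion; the dict keys checked here are
illustrative, the point is that list() turns a VectorString into
something json and toml can serialize:

    def make_serializable(source):
        """Return a copy of the source dict with sequence values coerced to lists."""
        fixed = dict(source)
        for key in ("baseurl", "mirrorlist"):
            # dnf may hand back a VectorString here; list() gives json/toml
            # a type they know how to serialize.
            if key in fixed and not isinstance(fixed[key], (str, list)):
                fixed[key] = list(fixed[key])
        return fixed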
(cherry picked from commit e9e5139750)
Resolves: rhbz#1655876
Anaconda is currently not able to handle CDN repo URLs in the kickstart.
Add a new function that checks for extra repos and returns True if any
are found.
Related: rhbz#1655623
If the system ran out of space, or was rebooted unexpectedly, the state
of the queue symlinks, or the results STATUS files may be inconsistent.
This checks them and:
* Removes broken symlinks from queue/new and queue/run
* Removes symlinks from run and sets the build to FAILED
* Sets builds w/o a STATUS to FAILED
* Sets builds with STATUS of RUNNING to FAILED
* Creates missing queue/new symlinks to results with STATUS of WAITING
So, any builds that were running during the reboot will be FAILED, and
any that were waiting to be started will be started upon rebooting.
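A condensed sketch of the recovery pass described above, assuming the
queue layout of queue/new and queue/run symlinks pointing at result
directories that each hold a STATUS file; helper names and paths are
illustrative:

    import os
    from glob import glob

    def write_status(build_dir, status):
        with open(os.path.join(build_dir, "STATUS"), "w") as f:
            f.write(status + "\n")

    def check_queues(queue_dir, results_dir):
        """Bring queue symlinks and STATUS files back to a consistent state."""
        # Remove broken symlinks from queue/new and queue/run
        for link in glob(os.path.join(queue_dir, "new/*")) + glob(os.path.join(queue_dir, "run/*")):
            if not os.path.exists(os.path.realpath(link)):
                os.unlink(link)

        # Anything still linked in run was interrupted: fail it, drop the link
        for link in glob(os.path.join(queue_dir, "run/*")):
            write_status(os.path.realpath(link), "FAILED")
            os.unlink(link)

        for build in glob(os.path.join(results_dir, "*")):
            status_file = os.path.join(build, "STATUS")
            status = open(status_file).read().strip() if os.path.exists(status_file) else None
            if status is None or status == "RUNNING":
                write_status(build, "FAILED")
            elif status == "WAITING":
                # Recreate the queue/new symlink so the build starts after reboot
                link = os.path.join(queue_dir, "new", os.path.basename(build))
                if not os.path.islink(link):
                    os.symlink(build, link)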
Resolves: rhbz#1647985
(cherry picked from commit 4dd9004d13)
This is required to ensure that SELinux is configured properly while
building. It fixes the problem with building tar, and should be
installed in the other image types for consistency.
Resolves: rhbz#1645189
Anaconda, Lorax, lorax-composer, and livemedia-creator can all now run
with SELinux in Enforcing mode. It does not need to be disabled, and if
there are denials they should be reported as a bug.
Log the current state of SELinux when starting, and update the
documentation.
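For example, logging the state at startup can be as small as this
sketch using the libselinux Python bindings (the logger name is
illustrative):

    import logging
    import selinux

    log = logging.getLogger("lorax-composer")

    def log_selinux_state():
        """Log whether SELinux is enabled and which mode it is in."""
        if selinux.is_selinux_enabled():
            mode = "Enforcing" if selinux.security_getenforce() else "Permissive"
            log.info("SELinux is enabled and in %s mode", mode)
        else:
            log.info("SELinux is disabled")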
Resolves: rhbz#1645189
Running lorax-composer --no-system-repos will prevent it from copying
the dnf repositories from /etc/yum.repos.d/ into the lorax-composer repo
directory. It will *only* use repositories set up using the sources API
or written to /var/lib/lorax/composer/repos.d/
If lorax-composer has previously been run without this switch, the
system repos will need to be removed from the composer/repos.d/
directory. It would also be a good idea to remove the cached metadata in
/var/tmp/composer/
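A hedged sketch of how the switch can gate the copy; the option and path
handling follow the description above but are otherwise illustrative:

    import os
    import shutil
    from glob import glob

    def copy_system_repos(repo_dir, no_system_repos=False):
        """Copy /etc/yum.repos.d/*.repo into the composer repo dir unless disabled."""
        if no_system_repos:
            return
        for repo_file in glob("/etc/yum.repos.d/*.repo"):
            shutil.copy2(repo_file, os.path.join(repo_dir, os.path.basename(repo_file)))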
Resolves: rhbz#1650363
Images don't work without these fixes:
* Enable Network Manager.
* Disable cloud-init.
* Add Hyper-V modules into initramfs.
Fixes specific to RHEL:
* Create ifcfg-eth0 required by waagent.
* Install python3 and net-tools required by waagent.
Recommended changes:
* Use recommended kernel boot args.
* Disable kdump.
Related: rhbz#1628648
The previous method worked, but wasn't exactly idiomatic. This is more
correct, and appears to work the same (templates depsolve, version globs
work, multiple repos work).
Note that this does use a private dnf attribute, ._goal, but the word is
that it is going to become a public API soon, so yes, it is there on
purpose.
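Roughly, the idiomatic flow looks like this; a sketch against the public
dnf.Base API with error handling trimmed, and without the private
._goal access mentioned above:

    import dnf

    def depsolve_specs(dbo, pkg_specs):
        """Depsolve a list of package specs with dnf's public API.

        dbo is a configured dnf.Base with its repos loaded and its sack
        filled. Version globs such as "httpd-2.4.*" are handled by install().
        """
        for spec in pkg_specs:
            dbo.install(spec)
        try:
            dbo.resolve()
        except dnf.exceptions.DepsolveError as e:
            raise RuntimeError("Problem depsolving: %s" % e)
        return sorted(pkg.name for pkg in dbo.transaction.install_set)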
Resolves: rhbz#1638683
Since these images can be used to create multiple machines, they should
not have a unique machine-id attached to them. Replace /etc/machine-id
with an empty file so that it will be regenerated at boot time.
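A minimal sketch of that step, assuming the image root is mounted at
root_dir:

    import os

    def reset_machine_id(root_dir):
        """Replace /etc/machine-id with an empty file so it is regenerated at boot."""
        machine_id = os.path.join(root_dir, "etc/machine-id")
        if os.path.exists(machine_id):
            os.unlink(machine_id)
        open(machine_id, "w").close()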
(cherry picked from commit 6fab72d894)
Related: rhbz#1628645
Related: rhbz#1628646
Related: rhbz#1628647
Related: rhbz#1628648
At the end of disk image installs, use fstrim on the generated filesystem to
discard any blocks that were allocated during the install and are now unused.
This will allow tools such as qemu-img to create images that do not include
deleted data.
For raw disk images that do not go through qemu-img, use fallocate --dig-holes
to create sparse holes in place of the unused blocks.
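A sketch of the two cleanup calls, using subprocess in place of the real
helper; mount_dir and raw_image are assumptions about how the image is
exposed at this point:

    import subprocess

    def trim_filesystem(mount_dir):
        """Discard unused blocks on the mounted filesystem after the install."""
        subprocess.check_call(["fstrim", "-v", mount_dir])

    def dig_holes(raw_image):
        """Punch sparse holes into a raw image that won't pass through qemu-img."""
        subprocess.check_call(["fallocate", "--dig-holes", raw_image])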
(cherry picked from commit 9717b3fd98)
Related: rhbz#1628645
Related: rhbz#1628646
Related: rhbz#1628647
Related: rhbz#1628648
Anaconda requires the root password to be set or locked, so if there
isn't anything setting it we write out 'rootpw --lock'.
Also adds tests for this.
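A hedged sketch of the check; treating the kickstart as a list of lines
is an illustration, not the actual data structure:

    def lock_root_if_unset(ks_lines):
        """Append 'rootpw --lock' when nothing else sets the root password."""
        if not any(line.startswith("rootpw") for line in ks_lines):
            ks_lines.append("rootpw --lock\n")
        return ks_lines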
Resolves: rhbz#1626122
Also kill the lorax-composer process and remove /run/weldr/api.socket
so that when this is run with podman you don't get an error about
attempting to tar up the socket.
Related: rhbz#1613058
- need to specify --sharedir so lorax-composer can find its
  kickstart files
- each test script writes results into a separate directory to
  avoid a passing test overwriting the results from a failing one.
  To avoid reporting failures from previously failing tests
  (e.g. during development), remove the temporary directories
  holding test results before execution.
These are built on top of beakerlib and we use its internal
protocol to figure out the result without relying on the full
test runner that is typically used inside of a RHEL environment.
Includes a disabled test snippet for Issue #460
This is similar to the AMI type, but it also adds open-vm-tools and does
not do anything special to the partitioning.
(cherry picked from commit 1056bfc25b)
Resolves: rhbz#1628646
This does pretty much the same things as the AMI compose type, but also
replaces NetworkManager with the Azure linux agent.
(cherry picked from commit e0c236ff36)
Resolves: rhbz#1628648
This differs from lmc's --make-ami in that it creates a full disk image
instead of an fsimage. Create a raw disk image with / and /boot
partitions, and enable sshd, chronyd, and cockpit by default.
(cherry picked from commit 18188bf6cf)
Resolves: rhbz#1628647
Instead of specifying the fstype, just let anaconda use the default.
(cherry picked from commit 847fff4e11)
Related: rhbz#1628647
Related: rhbz#1628648
When the kickstart is handed off to Anaconda for building, it will
download its own copy of the metadata and re-run the depsolve. So if the
dnf cache isn't current, there will be a mismatch and the build will
fail to find some of the versions in final-kickstart.ks.
This adds a new context to DNFLock, .lock_check, that will force a check
of the metadata. It also implements its own timeout and forces a
refresh of the metadata when that expires because the dnf expiration
doesn't always work as expected.
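An illustrative sketch of such a context manager, assuming DNFLock holds
a dnf.Base object, a threading lock, and its own expiration timestamp;
the update_cache() call stands in for the forced refresh and the other
details are illustrative:

    import time
    import threading
    from contextlib import contextmanager

    class DNFLock(object):
        """Serialize access to a shared dnf.Base object (sketch only)."""
        def __init__(self, dbo, expire_secs=6 * 60 * 60):
            self.dbo = dbo
            self._lock = threading.Lock()
            self._expire_secs = expire_secs
            self._expire_time = time.time() + expire_secs

        @property
        @contextmanager
        def lock_check(self):
            """Check the repo metadata, refreshing it when our own timeout expires.

            dnf's own expiration doesn't always trigger as expected, so the
            timeout is tracked here as well.
            """
            with self._lock:
                if time.time() > self._expire_time:
                    self.dbo.update_cache()
                    self._expire_time = time.time() + self._expire_secs
                yield self.dbo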
Resolves: rhbz#1631561
This tests to make sure that the metadata timer is working (by setting
it to 10s and adding a new package to the repo), and that
DNFLock.lock_check immediately picks up a new package.
This depends on rpmfluff, which is available from the Fedora or EPEL
repos.
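For reference, building a throwaway package with rpmfluff in a test
looks roughly like this; the package name is a test fixture, not part of
lorax:

    from rpmfluff import SimpleRpmBuild, expectedArch

    def build_test_rpm(name="fake-package", version="1.0", release="1"):
        """Build a small throwaway rpm and return the path to it."""
        pkg = SimpleRpmBuild(name, version, release)
        pkg.make()
        return pkg.get_built_rpm(expectedArch)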
Related: rhbz#1631561
It turns out you cannot use the kickstart user command on root, since
the account already exists, so we have to translate that into a rootpw
command.
So [[customizations.user]] with name = "root" only supports key, which
will set the ssh key, and password, which will use rootpw to set the
password. Plain text or encrypted passwords are supported.
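A hedged sketch of that translation; the helper name and the exact
kickstart text are illustrative, and the prefix check stands in for real
crypted-password detection:

    def root_user_to_ks(user):
        """Translate a [[customizations.user]] entry for root into kickstart lines.

        The kickstart user command cannot be used for root (the account
        already exists), so key becomes an sshkey command and password
        becomes a rootpw command.
        """
        lines = []
        if "key" in user:
            lines.append('sshkey --username=root "%s"\n' % user["key"])
        if "password" in user:
            if user["password"].startswith(("$2b$", "$6$")):
                lines.append("rootpw --iscrypted %s\n" % user["password"])
            else:
                lines.append("rootpw --plaintext %s\n" % user["password"])
        return lines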
Related: rhbz#1626122
If we leave the root account without a password people will use it that
way, leading to insecure images; the same is true if we set a default
password. So lock the root account in the templates.
Users will need to do one of these things:
1. Use [[customizations.user]] in their blueprint to configure root or
   another user.
2. Use [[customizations.sshkey]] to set a key for root.
3. Install a package that configures a user at install time.
4. Install a package that sets up a user at boot time (e.g. cloud-init).
This also drops the auth line from the kickstart templates, allowing it
to use the default password algorithm instead of md5.
Resolves: rhbz#1626122