At the end of disk image installs, use fstrim on the generated filesystem to
discard any blocks that were allocated during the install and are now unused.
This will allow tools such as qemu-img to create images that do not include
deleted data.
For raw disk images that do not go through qemu-img, use fallocate --dig-holes
to create sparse holes in place of the unused blocks.
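A minimal sketch of the idea, using subprocess to run the two util-linux tools (the function name and arguments here are illustrative, not the actual lorax code):

    import subprocess

    def trim_unused_blocks(mountpoint, raw_image=None):
        """Discard blocks that were allocated during the install and are now unused."""
        # fstrim discards unused blocks on the mounted filesystem
        subprocess.check_call(["fstrim", "-v", mountpoint])
        if raw_image:
            # for raw images, punch sparse holes where the blocks are unused
            subprocess.check_call(["fallocate", "--dig-holes", raw_image])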
(cherry picked from commit 9717b3fd98)
Related: rhbz#1628645
Related: rhbz#1628646
Related: rhbz#1628647
Related: rhbz#1628648
Anaconda requires the root password to be set or locked, so if there
isn't anything setting it, we write out 'rootpw --lock'.
Also adds tests for this.
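A minimal sketch of the check, assuming the generated kickstart is held as a list of lines (the names here are illustrative):

    def ensure_root_password(ks_lines):
        """Append 'rootpw --lock' when nothing else sets the root password."""
        if not any(line.startswith("rootpw") for line in ks_lines):
            ks_lines.append("rootpw --lock\n")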
Resolves: rhbz#1626122
This is similar to the AMI type, but also adds open-vm-tools and does not do
anything special to the partitioning
(cherry picked from commit 1056bfc25b)
Resolves: rhbz#1628646
This does pretty much the same things as the AMI compose type, but also
replaces NetworkManager with the Azure linux agent.
(cherry picked from commit e0c236ff36)
Resolves: rhbz#1628648
This differs from lmc's --make-ami in that it creates a full disk image instead
of an fsimage. Create a raw disk image with / and /boot partitions, and enable
sshd, chronyd, and cockpit by default.
(cherry picked from commit 18188bf6cf)
Resolves: rhbz#1628647
Instead of specifying the fstype, just let anaconda use the default.
(cherry picked from commit 847fff4e11)
Related: rhbz#1628647
Related: rhbz#1628648
When the kickstart is handed off to Anaconda for building, it will
download its own copy of the metadata and re-run the depsolve. So if the
dnf cache isn't current there will be a mismatch and the build will
fail to find some of the versions in final-kickstart.ks
This adds a new context to DNFLock, .lock_check, that will force a check
of the metadata. It also implements its own timeout and forces a
refresh of the metadata when that expires because the dnf expiration
doesn't always work as expected.
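A sketch of the shape of such a context, assuming a shared dnf.Base object guarded by a lock (an illustration of the approach, not the exact pylorax code):

    import time
    from contextlib import contextmanager
    from threading import Lock

    class DNFLock(object):
        def __init__(self, dbo, expire_secs=6*60*60):
            self.dbo = dbo              # the shared dnf.Base object
            self._lock = Lock()
            self._expire = expire_secs  # our own timeout, independent of dnf's
            self._last_check = 0

        @property
        @contextmanager
        def lock_check(self):
            """Lock the dnf object, forcing a metadata refresh when it is stale."""
            with self._lock:
                if time.time() - self._last_check > self._expire:
                    self._last_check = time.time()
                    self.dbo.update_cache()  # force a check of the metadata
                yield self.dbo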
Resolves: rhbz#1631561
It turns out you cannot use the kickstart user command on root, since it
already exists, so we have to translate that into a rootpw command.
So [[customizations.user]] with name = "root" only supports key, which
will set the ssh key, and password, which will use rootpw to set the
password. Both plain text and encrypted passwords are supported.
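For example (the key and password values are placeholders):

    [[customizations.user]]
    name = "root"
    key = "ssh-rsa AAAA... root@example.com"
    password = "$6$rounds=..."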
Related: rhbz#1626122
In the near future there may be /lib/modules/ directories for older
kernels with weak dependencies listed. These may not match the installed
kernel(s) so we cannot depend on them to drive generate_module_data.
Instead use the existing findkernels() function to get the list of
installed kernels and iterate those, running depmod on them.
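A minimal sketch of the loop, assuming pylorax's findkernels() returns entries with a .version attribute:

    import subprocess
    from pylorax.treebuilder import findkernels

    def generate_module_data(root_dir):
        """Run depmod against each kernel actually installed under root_dir."""
        for kernel in findkernels(root=root_dir):
            subprocess.check_call(["depmod", "-a", "-b", root_dir, kernel.version])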
Resolves: rhbz#1632140
(cherry picked from commit 07acd2e780)
lorax uses pyanaconda's SimpleConfigParser in three different
places (twice with a copy that's been dumped into pylorax, once
by importing it), just to do a fairly simple job: read some
values out of /etc/os-release. The only value SimpleConfigParser
adds over Python's own ConfigParser here is reading a file with no
section headers and unquoting the values. The cost is
either a dependency on pyanaconda, or needing to copy the whole
of simpleparser plus some other utility bits from pyanaconda
into lorax. This seems like a bad trade-off.
This changes the approach: we copy one very simple utility
function from pyanaconda (`unquote`), and do some very simple
wrapping of ConfigParser to handle reading a file without any
section headers, and returning unquoted values. This way we can
read what we need out of os-release without needing a dep on
pyanaconda or to copy lots of things from it into pylorax.
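A minimal sketch of the approach (the helper names are illustrative):

    from configparser import ConfigParser

    def unquote(s):
        """Strip matching surrounding quotes, like pyanaconda's unquote."""
        if len(s) > 1 and s[0] == s[-1] and s[0] in "\"'":
            return s[1:-1]
        return s

    def flatconfig(filename, section="main"):
        """Read a config file with no section headers into a ConfigParser."""
        parser = ConfigParser()
        with open(filename) as f:
            # prepend a fake section header so ConfigParser accepts the file
            parser.read_string("[%s]\n" % section + f.read())
        return parser

    release = flatconfig("/etc/os-release")
    product = unquote(release.get("main", "PRETTY_NAME"))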
Resolves: #449
Resolves: #450
Signed-off-by: Adam Williamson <awilliam@redhat.com>
Related: rhbz#1613058
blueprints/changes is different: each blueprint has its own total,
limited by the call's limit. So it needs to find the max total of all
the requested blueprints.
(cherry picked from commit 57674c9a1a)
The blueprints/changes API is a bit different from the others: the total
it includes is per blueprint, not one total for all of them, since there
will be a different number of commits for each.
The function is passed the dict, and it can be used to select the total
to use for retrieving all of the results. If it isn't included it will
use data["total"] which works fine in most cases.
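A minimal sketch of the selector idea (the field names follow the API's JSON reply):

    def changes_total(data):
        """Return the largest per-blueprint total from a blueprints/changes reply."""
        return max(c["total"] for c in data["blueprints"])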
(cherry picked from commit 0a76d635ca)
Add a limit argument to all potentially paginated results, equal to
whatever the composer backend reports as the total number of results. This
still
has the potential to provide truncated data if the number of results
increases between the two HTTP requests.
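The pattern looks roughly like this, assuming an api_get() helper that returns the decoded JSON reply:

    # the first request uses the server's default limit, but reports the real total
    first = api_get("/api/v0/blueprints/list")
    # the second request asks for everything the server said it has
    everything = api_get("/api/v0/blueprints/list?limit=%d" % first["total"])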
Resolves: #404
(cherry picked from commit ee98d87cea)
This adds the following optional arguments to the /compose/status route:
- type, matches the compose_type field
- status, matches the queue_status field
- blueprint, matches the blueprint field
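For example, a filtered status request might look like this (the values are placeholders):

    /api/v0/compose/status/*?type=qcow2&status=FINISHED&blueprint=http-server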
(cherry picked from commit 40f23f093d)
A value of 1 is too low for heavy users of the API, such as the weldr-web
interface.
This is also systemd's default for sockets it opens. Using lorax-composer with
socket activation already results in a backlog of SOMAXCONN connections.
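A minimal sketch of the difference, using a plain socket listener for illustration (the address and port are placeholders):

    import socket

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("127.0.0.1", 4000))
    listener.listen(socket.SOMAXCONN)   # instead of a backlog of 1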
(cherry picked from commit be5d50e6f3)
Related: rhbz#1613058
Currently we are making MBR disk images for qcow2 and partitioned disk,
so the UEFI packages aren't required at this point.
Move the clearpart command into compose.py so that in the future it can
use clearpart --disklabel to create a GPT image, and add the required
packages to the package set.
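When that happens, the template's clearpart line would become something like:

    clearpart --all --initlabel --disklabel=gpt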
The idea here is to make sure all return points have the same type for
the error cases. There aren't really all that many, so they just go in
one patch. Some of these could potentially turn into more specialized
errors later.
(cherry picked from commit fd901c5e3f)
Note the exception string checking around compose_type. I didn't really
want to introduce a new exception type just for this, but also didn't
want to duplicate strings. I'd be open to other suggestions for how to
do this.
(cherry picked from commit b3bb438254)
This adds some fairly redundant code to the beginning of all the
blueprint routes to attempt reading a commit from git for the
blueprint's recipe. If it succeeds, the blueprint exists and the route
can continue. Otherwise, return an error. Hopefully this doesn't slow
things down too much.
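The repeated check looks roughly like this (the GITLOCK config entry and the read_recipe_commit helper are assumed from the pylorax API server):

    from pylorax.api.recipes import read_recipe_commit

    def blueprint_exists(api, branch, blueprint_name):
        """Return True if the blueprint has a commit in the git store."""
        try:
            with api.config["GITLOCK"].lock:
                read_recipe_commit(api.config["GITLOCK"].repo, branch, blueprint_name)
            return True
        except Exception:
            return False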
(cherry picked from commit a925cc7ddb)
Note that this also changes the return type of uuid_info to return None
when an unknown ID is given. The other uuid_* functions are fine
because they are checked ahead of time.
(cherry picked from commit 6497b4fb65)
Each element in the errors value is now a dict, with a msg field and an
id field. The id field contains a value out of errors.py that can be
used by the front end to key on. The msg field is the same as what's
been there.
The idea is to keep the number of IDs somewhat limited so there's not a
huge number of things for the front end to know.
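So an error entry now looks something like this (the id value is illustrative):

    {"errors": [{"id": "UnknownBlueprint", "msg": "missing-blueprint: "}]}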
(cherry picked from commit 9677b012da)
This should make it easier to return more complex error structures. It
also doesn't appear to matter - tests still pass without changes.
(cherry picked from commit 4c3f93e329)
We need to be root to read the certificates that give access to the
package repos. Right now, the alternative seems to be changing
permissions on the certs themselves, which seems less good. We're
running anaconda as root anyway.
This patch does two things:
1) Add "compose list", which lists compose UUIDs and other basic info,
2) Fix up "blueprints list", "modules list", "sources list", and
"compose types" so their output is just a plain list of identifiers
Make sure no UTF8 characters are allowed, and return an error if any
are present.
Also includes tests to make sure the correct error is returned.
(cherry picked from commit 86d79cd8a6)
Currently the code is not UTF8 safe, so we need to return a clear error
when invalid characters are passed in.
This also adds tests for the routes to confirm that an error is
correctly returned.
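A minimal sketch of the check (the error message is illustrative):

    def check_ascii(value):
        """Raise a clear error instead of a 500 when non-ASCII input sneaks in."""
        try:
            value.encode("ascii")
        except UnicodeEncodeError:
            raise RuntimeError("Invalid characters in API path")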
(cherry picked from commit 74f5def3d4)
This handles the case where a route is requested, but without a required
parameter. So, /blueprints/info is requested instead of
/blueprints/info/http-server. It accomplishes this via a decorator, so
a lot of these route-related functions now have quite a few decorators
attached to them.
Typo'd URLs (/blueprints/nfo for instance) will still return a 404. I
think this is a reasonable thing to do.
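A simplified sketch of such a decorator (the real one lives in the pylorax API server):

    from functools import wraps
    from flask import jsonify

    def checkparams(tuples):
        """tuples is a list of (parameter name, error message) pairs to require."""
        def decorator(f):
            @wraps(f)
            def wrapper(*args, **kwargs):
                for param, msg in tuples:
                    if not kwargs.get(param):
                        return jsonify(status=False, errors=[msg]), 400
                return f(*args, **kwargs)
            return wrapper
        return decorator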
(cherry picked from commit 5daf2d416a)
Unfortunately, this isn't very useful if /modules/info is provided with
multiple modules. yum doesn't traceback when doPackageLists is given
something that doesn't exist. It just returns an empty list. If
/modules/info is given just one module and yum gives us an empty list,
it's easy to say what happened. If /modules/info is given several
modules and just one does not exist, we will not be able to detect that.
Fixing this would require doing more yum operations, which is likely to
slow things down and isn't the direction I want to be going.
(cherry picked from commit 8e948e4a4d)
Right now, this is when the compose is queued up, when it is started by
anaconda, and when it is finished (whether that's success or not).
(cherry picked from commit 3ba9d53b8b)
If one of the timestamps isn't present (for instance, the finished
timestamp for a job that is still running), null is returned.
(cherry picked from commit 17c40ef271)
This is responsible for writing out a new times.toml file, containing
important timestamps in the life of a compose. This seems a little more
reliable than attempting to infer things from the filesystem, especially
in light of the fact that we can't ever really know when a file was
created.
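A minimal sketch of the writer (the file layout and key names are assumptions):

    import time
    import toml

    def write_timestamp(destdir, kind):
        """Record when a compose was created/started/finished in times.toml."""
        path = destdir + "/times.toml"
        try:
            with open(path) as f:
                data = toml.load(f)
        except IOError:
            data = {}
        data[kind] = time.time()
        with open(path, "w") as f:
            toml.dump(data, f)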
(cherry picked from commit b59d59b124)
Some results have errors and no status, others have status and errors.
Update the function to return the final rc to exit with, and a bool
indicating whether or not to continue processing the other fields.
Add a bunch of tests for the new function to make sure I have the logic
correct.
(cherry picked from commit 35fa067219)
We only have qemu-kvm available, so use that. This also means that there
will not be any support for using qemu with arches that are different
from the host.
A bad system repo can cause lorax-composer to fail to start. Instead of
a traceback, log the error and exit.
(note that the exit still results in an OSError traceback due to part of
it running as root; this needs to be addressed in another commit).
(cherry picked from commit 49380b4b49)
This adds a new argument to projects_depsolve and
projects_depsolve_with_size that contains the group list, unfortunately.
I would have preferred adding a function that just returns a list of all
the contents of a group and then add that to what was being passed into
projects_depsolve. However, there does not appear to be any good way to
do that in yum aside from a lot of grubbing around in the comps object,
which I am unwilling to do.
(cherry picked from commit 0259f3564d)
This is the same as the output at the top level, just trimmed down to
only the options for a single subcommand. It's triggered by providing
"help" or "--help" as a subcommand option.
(cherry picked from commit f5115291bd)
This isn't a real subcommand like the others. The option processing
just intercepts it and prints the output. Given that we're
subcommand-based, it makes sense to support this in addition to --help.
(cherry picked from commit 18620700fd)
Depsolve the packages included in the templates and report any errors
using the /api/status 'msgs' field. This should help narrow down
problems with package sources not being set up correctly.
(cherry picked from commit d92f2f5b04)
Use it to override the default dracut arguments (displayed as part of
the --help output). If you want to extend the default arguments they
all need to be passed in on the cmdline as well, e.g.:
--dracut-arg='--xz' --dracut-arg='--install /.buildstamp' ...
Resolves: rhbz#1452220
(cherry picked from commit d8ce013a2b)
This adds the sources command which can be used to list, add, change,
and delete sources using the TOML formatted source file.
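e.g. (the file and source names are placeholders):

    composer-cli sources list
    composer-cli sources add custom-repo.toml
    composer-cli sources delete custom-repo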
(cherry picked from commit 6f6ce410c0)
The DNF Repo.dump() function's output cannot be used as a .repo file for
dnf because it writes baseurl and gpgkey as lists instead of strings. Add
a new function to write this in the correct format, limited to the fields
we use.
Add a test for the new function.
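A sketch of such a writer, limited to a few of the fields (attribute handling assumed from the dnf Repo object):

    def repo_to_ini(repo):
        """Write a dnf Repo in .repo format, using strings where dump() emits lists."""
        lines = ["[%s]" % repo.id,
                 "name = %s" % repo.name]
        if repo.baseurl:
            lines.append("baseurl = %s" % repo.baseurl[0])       # a string, not a list
        if repo.metalink:
            lines.append("metalink = %s" % repo.metalink)
        if repo.gpgkey:
            lines.append("gpgkey = %s" % ",".join(repo.gpgkey))  # join the list
        lines.append("gpgcheck = %s" % repo.gpgcheck)
        return "\n".join(lines) + "\n"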
Fix /projects/source/info to return an error 400 if a nonexistent TOML
source is requested. If JSON is used the error is part of the standard
response.
Update test_server.py to check for the correct error code.
(cherry picked from commit afa89ea657)
When adding a source failed it wasn't being removed from the dnf object.
This fixes that, and returns an error when setting up the source fails.
Also adds a test for it.
This also includes detecting rawhide vs. non-rawhide releases and
adjusting the tests accordingly (some of the source names change).
(cherry picked from commit dd8e4d9e99)
It was chopping off an extra directory level due to realpath removing
the trailing / from the paths when they are set up.
(cherry picked from commit 23f4b2a3ec)
We had only been indirectly pulling in GConf, and anyway
nothing was listening to these keys.
<kalev> I still think it's a fallout from 27a90d973f
Really in general, if we wanted to make changes like this
it'd probably be a lot simpler to do them on boot or so.
https://bugzilla.redhat.com/show_bug.cgi?id=1581838
(cherry picked from commit bb3d8edd06)
This uses dnf's version__glob filter to implement it. It amounts to '*'
wildcards and '?' for single character matching.
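e.g. a query using it (the package name and version globs are placeholders):

    import dnf

    base = dnf.Base()
    base.read_all_repos()
    base.fill_sack()
    # '*' matches any run of characters, '?' matches a single character
    pkgs = base.sack.query().available().filter(name__glob="openssh*",
                                                version__glob="7.?p1")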
(cherry picked from commit 095829171a)
First, Anaconda uses 6k blocks per file for its estimate, and it fudges
by 10%, so adjust for those with an extra 10% of headroom just in case.
Second, an Anaconda bug won't allow it to do a kickstart install to a
disk smaller than 3000 MB. There is a PR to fix it upstream, but for now
the minimum size has to be 3000e9.
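A sketch of the resulting arithmetic (the 3000 MB floor mirrors the bug described above):

    def estimate_size(num_files, total_bytes):
        """Pad a size estimate so Anaconda's own estimate will fit inside it."""
        estimate = total_bytes + num_files * 6 * 1024  # Anaconda counts 6k per file
        estimate = estimate * 1.10 * 1.10              # its 10% fudge, plus our 10%
        return max(estimate, 3000 * 1024**2)           # stay above the 3000 MB minimum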
This adds support for the optional blueprint section [customizations].
Use it like this:
[customizations]
hostname = yourhostnamehere
[[customizations.sshkey]]
user = root
key = root user key