The problem this solves is that yum really isn't designed to be part of
a long running daemon. When repodata changes upstream, even if you force
yum to download the new metadata, nothing changes in memory, so you end
up with lorax-composer depsolving against the old versions and anaconda
depsolving against the new versions (because it sets up its own YumBase
and cache), and then the kickstart is no longer valid.
To solve this I have:
- Added a 6h timeout to the metadata check (because yum's own check
doesn't work in this situation).
- Added a metadata check to the YumLock .lock property, but only when
the timeout expires.
- Added a new .lock_check property to YumLock that always checks the
metadata and resets the timeout.
If the metadata has changed, it does its best to tear down the existing
YumBase, deleting as much as it can in the hope that it doesn't leak
memory, and then it sets up a completely new YumBase with the new
repodata.
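A minimal sketch of the approach, assuming a YumLock wrapper around the
YumBase; metadata_changed and get_base_object are placeholders for
whatever does the real work:

    import time
    from threading import Lock

    METADATA_TIMEOUT = 6 * 60 * 60  # 6 hours

    class YumLock(object):
        def __init__(self, yb):
            self._lock = Lock()
            self.yb = yb
            self._last_check = time.time()

        def _check_metadata(self):
            """Re-check the repodata and rebuild the YumBase if it changed."""
            self._last_check = time.time()
            if metadata_changed(self.yb):      # placeholder helper
                self.yb.close()                # tear down the old YumBase
                del self.yb                    # drop references, hoping not to leak
                self.yb = get_base_object()    # placeholder: fresh YumBase

        @property
        def lock(self):
            # Only check the metadata when the 6h timeout has expired
            if time.time() > self._last_check + METADATA_TIMEOUT:
                self._check_metadata()
            return self._lock

        @property
        def lock_check(self):
            # Always check the metadata and reset the timeout
            self._check_metadata()
            return self._lock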
Resolves: rhbz#1632962
Use a common _depsolve function for projects_depsolve and
projects_depsolve_with_size so that both always use the correct version
glob support when depsolving blueprints and templates.
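The shared-helper shape is roughly this (signatures here are
assumptions, not the actual ones):

    def _depsolve(yb, projects):
        """Shared depsolve that applies version glob support in one place."""
        deps = []
        # ... glob matching and yb.install()/resolveDeps() elided ...
        return deps

    def projects_depsolve(yb, projects):
        return _depsolve(yb, projects)

    def projects_depsolve_with_size(yb, projects):
        deps = _depsolve(yb, projects)
        return deps, sum(p.installsize for p in deps)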
Resolves: rhbz#1628114
It turns out you cannot use the kickstart user command on root, since
the account already exists, so we have to translate that into a rootpw
command. So [[customizations.user]] with name = "root" only supports
key, which will set the ssh key, and password, which will use rootpw to
set the password. Both plain text and encrypted passwords are supported.
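A hedged sketch of the translation; the function name and the
crypt-prefix check are illustrative only:

    def root_user_to_ks(user):
        """Translate [[customizations.user]] name = "root" into ks commands."""
        lines = []
        password = user.get("password")
        if password:
            if password.startswith(("$1$", "$5$", "$6$")):  # assumed crypt detection
                lines.append('rootpw --iscrypted "%s"' % password)
            else:
                lines.append('rootpw --plaintext "%s"' % password)
        if user.get("key"):
            lines.append('sshkey --username=root "%s"' % user["key"])
        return lines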
Related: rhbz#1626120
The blueprint version glob was being applied to the whole package NEVRA
by yum (it lacks a separate API for globbing just the version), so this
implements it in filterVersionGlob, using fnmatchcase on the package
names and the yum package verGT comparison on the versions of the
selected package.
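Roughly, the selection looks like this (a sketch, not the exact code):

    from fnmatch import fnmatchcase

    def filterVersionGlob(pkgs, name, version_glob):
        """Pick the best match for name, globbing only the version field."""
        best = None
        for p in pkgs:
            if not fnmatchcase(p.name, name):
                continue
            if not fnmatchcase(p.version, version_glob):
                continue
            if best is None or p.verGT(best):
                best = p
        return best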
Also includes tests.
Resolves: rhbz#1628114
blueprints/changes is different: each blueprint has its own total,
limited by the call's limit, so it needs to find the max total across
all of the requested blueprints.
The blueprints/changes API is a bit different from the others: the total
it includes is per blueprint, not one total for all of them, since there
will be a different number of commits for each.
The function is passed the response dict and can be used to select
which total to use when retrieving all of the results. If it isn't
included, data["total"] is used, which works fine in most cases.
A backlog value of 1 is too low for heavy users of the API, such as the
weldr-web interface, so use SOMAXCONN instead. That is also systemd's
default for sockets it opens; using lorax-composer with socket
activation already results in a backlog of SOMAXCONN connections.
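The change amounts to something like this (the socket path is
illustrative):

    import socket

    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.bind("/run/weldr/api.socket")
    sock.listen(socket.SOMAXCONN)   # previously sock.listen(1)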
(cherry picked from commit be5d50e6f3)
Add a limit argument to all potentially paginated results, equal to
whatever the composer backend says is the total number of results. This
still has the potential to return truncated data if the number of
results increases between the two HTTP requests.
Resolves: #404
This adds the following optional arguments to the /compose/status route:
- type, matches the compose_type field
- status, matches the queue_status field
- blueprint, matches the blueprint field
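In Flask terms the filtering could look something like this (a hedged
sketch; get_compose_status is a placeholder helper):

    from flask import request, jsonify

    @api.route("/api/v0/compose/status/<uuids>")
    def v0_compose_status(uuids):
        results = get_compose_status(uuids)   # placeholder helper
        if "type" in request.args:
            results = [r for r in results if r["compose_type"] == request.args["type"]]
        if "status" in request.args:
            results = [r for r in results if r["queue_status"] == request.args["status"]]
        if "blueprint" in request.args:
            results = [r for r in results if r["blueprint"] == request.args["blueprint"]]
        return jsonify(uuids=results)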
Currently we are making MBR disk images for qcow2 and partitioned disk,
so the UEFI packages aren't required at this point.
Move the clearpart command into compose.py so that in the future it can
use clearpart --disklabel to create a GPT image, and add the required
packages to the package set.
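In compose.py this could be as simple as (sketch; ks_lines is a
hypothetical name):

    # compose.py now owns the clearpart line, so the disk label can vary later
    ks_lines = ["clearpart --all --initlabel"]
    # eventually, for GPT disk images:
    # ks_lines = ["clearpart --all --initlabel --disklabel gpt"]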
The idea here is to make sure all return points have the same type for
the error cases. There aren't really all that many, so they just go in
one patch. Some of these could potentially turn into more specialized
errors later.
Note the exception string checking around compose_type. I didn't really
want to introduce a new exception type just for this, but also didn't
want to duplicate strings. I'd be open to other suggestions for how to
do this.
This adds some fairly redundant code to the beginning of all the
blueprint routes to attempt reading a commit from git for the
blueprint's recipe. If it succeeds, the blueprint exists and the route
can continue. Otherwise, return an error. Hopefully this doesn't slow
things down too much.
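The check at the top of each route looks roughly like this (read_commit,
GITLOCK, and the error format are assumptions based on the description):

    try:
        with api.config["GITLOCK"].lock:
            read_commit(api.config["GITLOCK"].repo, branch, blueprint_name + ".toml")
    except Exception as e:
        return jsonify(status=False, errors=["%s: %s" % (blueprint_name, str(e))]), 400
    # the blueprint exists, continue with the route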
Note that this also changes the return type of uuid_info to return None
when an unknown ID is given. The other uuid_* functions are fine
because they are checked ahead of time.
Each element in the errors value is now a dict with a msg field and an
id field. The id field contains a value from errors.py that the front
end can key on. The msg field is the same as what was there before.
The idea is to keep the number of IDs somewhat limited so there's not a
huge number of things for the front end to know.
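For example, a response might now carry something like this (the id and
msg values here are made up):

    {"status": false,
     "errors": [{"id": "UnknownBlueprint",
                 "msg": "example-http-server: blueprint not found"}]}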
This patch does two things:
1) Add "compose list", which lists compose UUIDs and other basic info,
2) Fix up "blueprints list", "modules list", "sources list", and
"compose types" so their output is just a plain list of identifiers
Currently the code is not UTF8 safe, so we need to return a clear error
when invalid characters are passed in.
This also adds tests for the routes to confirm that an error is
correctly returned.
This handles the case where a route is requested, but without a required
parameter. So, /blueprints/info is requested instead of
/blueprints/info/http-server. It accomplishes this via a decorator, so
a lot of these route-related functions now have quite a few decorators
attached to them.
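A hedged sketch of such a decorator (the real one may differ in name
and response format):

    from functools import wraps
    from flask import jsonify

    def checkparams(required):
        """Return a 400 error when a required route parameter is missing."""
        def decorator(f):
            @wraps(f)
            def wrapper(*args, **kwargs):
                for name, error_msg in required:
                    if not kwargs.get(name):
                        return jsonify(status=False, errors=[error_msg]), 400
                return f(*args, **kwargs)
            return wrapper
        return decorator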
Typo'd URLs (/blueprints/nfo for instance) will still return a 404. I
think this is a reasonable thing to do.
Unfortunately, this isn't very useful if /modules/info is provided with
multiple modules. yum doesn't traceback when doPackageLists is given
something that doesn't exist. It just returns an empty list. If
/modules/info is given just one module and yum gives us an empty list,
it's easy to say what happened. If /modules/info is given several
modules and just one does not exist, we will not be able to detect that.
Fixing this would require doing more yum operations, which is likely to
slow things down and isn't the direction I want to be going.
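For illustration (yb is an existing YumBase; doPackageLists really does
behave this way):

    # Both patterns are accepted; nothing distinguishes the missing module
    pkgs = yb.doPackageLists(pkgnarrow="available", patterns=["httpd", "no-such-module"])
    # pkgs.available contains only httpd's matches; "no-such-module" vanishes silently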
This is responsible for writing out a new times.toml file, containing
important timestamps in the life of a compose. This seems a little more
reliable than attempting to infer things from the filesystem, especially
in light of the fact that we can't ever really know when a file was
created.
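A minimal sketch of the writer (the field names and the toml module are
assumptions):

    import os
    import time
    import toml

    def write_timestamp(destdir, ty):
        """Record a named timestamp in the compose's times.toml."""
        path = os.path.join(destdir, "times.toml")
        data = toml.load(path) if os.path.exists(path) else {}
        data[ty] = time.time()
        with open(path, "w") as f:
            toml.dump(data, f)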
We need to be root to read the certificates that give access to the
package repos. Right now, the alternative seems to be changing
permissions on the certs themselves, which seems less good. We're
running anaconda as root anyway.