A value of 1 is too low for heavy users of the API, such as the weldr-web
interface.
This is also systemd's default for sockets it opens. Using lorax-composer with
socket activation already results in a backlog of SOMAXCONN connections.
(cherry picked from commit be5d50e6f3)
Add a limit argument to all potentially paginated results, equal to
whatever the composer backend says the total number of results is. This
still has the potential to return truncated data if the number of
results increases between the two HTTP requests.
Resolves: #404
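As a sketch, the resulting two-request pattern looks roughly like this
(assuming a plain HTTP endpoint and a "total" field in the response;
both are illustrative, the real client talks to the API socket):

import requests

def get_all_results(url):
    # First request: limit=0 returns no entries, but the response still
    # includes the total number of matching results.
    total = requests.get(url, params={"limit": 0}).json()["total"]
    # Second request: ask for exactly that many. Results added between
    # the two requests are silently missed.
    return requests.get(url, params={"limit": total}).json()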
This adds the following optional arguments to the /compose/status route:
- type, matches the compose_type field
- status, matches the queue_status field
- blueprint, matches the blueprint field
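For example, a status request combining all three filters might look
like this (the values are illustrative):

/api/v0/compose/status/*?type=qcow2&status=FINISHED&blueprint=http-server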
Currently we are making MBR disk images for qcow2 and partitioned disk,
so the UEFI packages aren't required at this point.
Move the clearpart command into compose.py so that in the future it can
use clearpart --disklabel to create a GPT image, and add the required
packages to the package set.
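For reference, the eventual GPT variant would be a one-line change in
the generated kickstart, along the lines of (a sketch; --disklabel
support depends on the pykickstart version):

clearpart --all --initlabel --disklabel=gpt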
The idea here is to make sure all return points have the same type for
the error cases. There aren't really all that many, so they just go in
one patch. Some of these could potentially turn into more specialized
errors later.
Note the exception string checking around compose_type. I didn't really
want to introduce a new exception type just for this, but also didn't
want to duplicate strings. I'd be open to other suggestions for how to
do this.
This adds some fairly redundant code to the beginning of all the
blueprint routes to attempt reading a commit from git for the
blueprint's recipe. If it succeeds, the blueprint exists and the route
can continue. Otherwise, return an error. Hopefully this doesn't slow
things down too much.
Note that this also changes the return type of uuid_info to return None
when an unknown ID is given. The other uuid_* functions are fine
because they are checked ahead of time.
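One way to express the existence check with pygit2 (a minimal sketch;
the real routes go through their own recipe helpers and locking):

import pygit2

def blueprint_exists(repo, branch, blueprint_name):
    # Look for the blueprint's recipe file in the tip commit of the
    # branch. If git has nothing for it, the route should return an
    # error instead of continuing.
    try:
        commit = repo.revparse_single(branch)
        return (blueprint_name + ".toml") in commit.tree
    except (KeyError, ValueError, pygit2.GitError):
        return False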
Each element in the errors value is now a dict, with a msg field and an
id field. The id field contains a value out of errors.py that can be
used by the front end to key on. The msg field contains the same
message text as before.
The idea is to keep the number of IDs somewhat limited so there's not a
huge number of things for the front end to know.
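For example, a response might now carry (the id and msg values here are
illustrative):

{"errors": [{"id": "UnknownBlueprint", "msg": "http-server: blueprint not found"}]}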
This patch does two things:
1) Add "compose list", which lists compose UUIDs and other basic info,
2) Fix up "blueprints list", "modules list", "sources list", and
"compose types" so their output is just a plain list of identifiers
Currently the code is not UTF-8 safe, so we need to return a clear error
when invalid characters are passed in.
This also adds tests for the routes to confirm that an error is
correctly returned.
This handles the case where a route is requested, but without a required
parameter. So, /blueprints/info is requested instead of
/blueprints/info/http-server. It accomplishes this via a decorator, so
a lot of these route-related functions now have quite a few decorators
attached to them.
Typo'd URLs (/blueprints/nfo for instance) will still return a 404. I
think this is a reasonable thing to do.
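A minimal sketch of the decorator, assuming Flask-style routes where
the bare URL is mapped to an empty default value (names and messages
are illustrative):

from functools import wraps
from flask import jsonify

def checkparams(params):
    # params is a list of (<route argument>, <error message>) tuples.
    # An empty or missing value means the client left out a required
    # parameter, so return a 400 instead of calling the route function.
    def decorator(f):
        @wraps(f)
        def wrapper(*args, **kwargs):
            for name, msg in params:
                if not kwargs.get(name):
                    return jsonify(status=False, errors=[msg]), 400
            return f(*args, **kwargs)
        return wrapper
    return decorator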
Unfortunately, this isn't very useful if /modules/info is provided with
multiple modules. yum doesn't traceback when doPackageLists is given
something that doesn't exist. It just returns an empty list. If
/modules/info is given just one module and yum gives us an empty list,
it's easy to say what happened. If /modules/info is given several
modules and just one does not exist, we will not be able to detect that.
Fixing this would require doing more yum operations, which is likely to
slow things down and isn't the direction I want to be going.
This is responsible for writing out a new times.toml file, containing
important timestamps in the life of a compose. This seems a little more
reliable than attempting to infer things from the filesystem, especially
in light of the fact that we can't ever really know when a file was
created.
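The resulting file might look something like this (the field names and
the use of raw time.time() floats are assumptions):

created = 1528797000.0
started = 1528797010.5
finished = 1528798000.2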
We need to be root to read the certificates that give access to the
package repos. Right now, the alternative seems to be changing
permissions on the certs themselves, which seems worse. We're
running anaconda as root anyway.
Some results have errors and no status, others have status and errors.
Update the function to return the final rc to exit with, and a bool
indicating whether or not to continue processing the other fields.
Add a bunch of tests for the new function to make sure I have the logic
correct.
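The resulting shape is roughly (a sketch; the real function handles
more of the corner cases):

def handle_api_result(result):
    # Return (rc, continue): the exit code to eventually use, and
    # whether the caller should keep processing the other fields.
    if result.get("status", False):
        return (0, True)
    for err in result.get("errors", []):
        print("ERROR: %s" % err)
    return (1, False)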
A bad system repo can cause lorax-composer to fail to start. Instead of
a traceback, log the error and exit.
(Note that the exit still results in an OSError traceback due to part of
it running as root; this needs to be addressed in another commit.)
This adds a new argument to projects_depsolve and
projects_depsolve_with_size that contains the group list, unfortunately.
I would have preferred adding a function that just returns a list of all
the contents of a group and then add that to what was being passed into
projects_depsolve. However, there does not appear to be any good way to
do that in yum aside from a lot of grubbing around in the comps object,
which I am unwilling to do.
Depsolve the packages included in the templates and report any errors
using the /api/status 'msgs' field. This should help narrow down
problems with package sources not being set up correctly.
Previously it was impossible to know which package in a blueprint caused
a failure, if it was just one of them, or all of them, etc. This catches
the error when calling yb.install and lists all the failures in the
error message that is raised.
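Roughly, the shape of the change (a sketch; the list of packages and
the raised error type are illustrative):

import yum

yb = yum.YumBase()
blueprint_packages = ["httpd", "mod_ssl"]   # illustrative names
install_errors = []
for pkg in blueprint_packages:
    try:
        yb.install(pattern=pkg)
    except yum.Errors.InstallError as e:
        install_errors.append("%s: %s" % (pkg, str(e)))
if install_errors:
    raise RuntimeError("The following package(s) failed to install: %s"
                       % ", ".join(install_errors))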
This is the same as the output at the top level, just trimmed down to
only the options for a single subcommand. It's triggered by providing
"help" or "--help" as a subcommand option.
This isn't a real subcommand like the others. The option processing
just intercepts it and prints the output. Given that we're
subcommand-based, it makes sense to support this in addition to --help.
The current version of libgit2 available (0.26.3) has different behavior
with SortMode.TIME. It works correctly when left at the default (which
is also how the rawhide version works).
Yum needs to have some other attrs set up on the YumRepository object, so
use the function provided to ensure that everything is correct. Also
switch the related functions to use a dict instead of a YumRepository
object.
yum also has a cache it uses for listEnabled(), but the cache isn't
invalidated when a repo is deleted, so any following metadata update
will fail because it is still using the deleted repo.
We are forced to use the heavy hammer on a yum private variable yet
again to force the cache to be cleared so that it won't crash.
The output of yum's YumRepository.dump() function cannot be used as a
.repo file. Add a new function to write this in the correct format,
limited to the fields we use.
Add a test for the new function.
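A sketch of the writer, assuming the repo is represented as a dict and
limiting output to a few illustrative fields:

def repo_to_ini(repo):
    # Build a .repo (INI-style) section by hand, since
    # YumRepository.dump() output is not in this format.
    lines = ["[%s]" % repo["id"],
             "name = %s" % repo["name"]]
    if repo.get("baseurl"):
        lines.append("baseurl = %s" % repo["baseurl"][0])
    if repo.get("mirrorlist"):
        lines.append("mirrorlist = %s" % repo["mirrorlist"])
    lines.append("gpgcheck = %s" % ("1" if repo.get("gpgcheck") else "0"))
    lines.append("enabled = 1")
    return "\n".join(lines) + "\n"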
Fix /projects/source/info to return a 400 error if a nonexistent TOML
source is requested. If JSON is used the error is part of the standard
response.
Update test_server.py to check for the correct error code.
When adding a source failed it wasn't being removed from the dnf object.
This fixes that, and returns an error when setting up the source fails.
Also adds a test for it.
Otherwise the user creation fails when anaconda sees there is already a
group with that name. Log a warning and continue on.
(cherry picked from commit a363aee971)
This adds support for the optional blueprint section [customizations].
Use it like this:
[customizations]
hostname = "yourhostnamehere"
[[customizations.sshkey]]
user = "root"
key = "root user key"
And change recipe_names API variable to blueprint_names. This *only*
changes the API variable, it does not change any subsequent usage of
'recipe'. The goal here is to change the public API, not all of the
code.
The default size is always going to be wrong, so try to estimate a more
reasonable amount of space. This is more complicated than you would
expect: yum's installedsize doesn't take into account the block size of
the filesystem, nor any extra artifacts generated by pre/post scripts.
So in the end we have a minimum image size of 1GiB, a partition
that is 40% larger than the estimated space needed, and a disk image
that increases size in 1GiB increments. This is still better than having
a fixed 4GiB / partition that was either too large or too small.
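Numerically, the policy works out to something like this (the
constants come from the text above; the function name is illustrative):

def estimate_image_size(installed_bytes):
    GiB = 1024 ** 3
    # Partition: 40% larger than the estimated space, with a 1GiB floor.
    size = max(int(installed_bytes * 1.4), GiB)
    # Disk image: grows in whole 1GiB increments.
    return ((size + GiB - 1) // GiB) * GiB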