This allows the client to request the end of the anaconda.log during and
after a build. The amount of data returned can be set by adding
?size=<kbytes>
Output is raw bytes, starting on the next available line boundary.
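A minimal sketch of producing such a tail (the function name and return
handling are illustrative, not the actual implementation):

def tail_on_line_boundary(path, kbytes=1):
    """Return up to kbytes KiB from the end of path, starting on a line boundary."""
    with open(path, "rb") as f:
        f.seek(0, 2)                              # jump to the end of the file
        size = f.tell()
        f.seek(max(0, size - kbytes * 1024))
        data = f.read()
    if size > kbytes * 1024:
        data = data.split(b"\n", 1)[-1]           # skip ahead to the next line boundary
    return data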
If the build hasn't started yet (state is WAITING), try removing the
symlink to it. If this succeeds, delete the partial results directory.
If the build makes it to RUNNING then it writes a CANCEL file in the
results directory. The callback that is passed to execWithRedirect
catches this, causing a SIGTERM to be sent to anaconda. It then exits
and cleanup happens normally. The partial results directory is then
removed.
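Roughly, the callback could look like the sketch below; it assumes
execWithRedirect calls it periodically and terminates the child when it
returns False, and all names here are illustrative:

import os

def make_cancel_callback(results_dir):
    """Return a callback that stops anaconda when a CANCEL file appears."""
    cancel = os.path.join(results_dir, "CANCEL")

    def cb():
        # returning False is assumed to trigger the SIGTERM described above
        return not os.path.exists(cancel)
    return cb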
.get_default() returns a string, so make sure we're actually parsing
the value as a boolean and not evaluating a non-empty string in a
boolean context (which will always evaluate to True)
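For example (the .get_default() call, section, and key names below are
placeholders, not the real config API):

value = conf.get_default("composer", "test", "0")    # returns the string "0"
test = bool(value)                                   # wrong: bool("0") is True
test = value.lower() in ("1", "true", "yes")         # parse the string instead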
Also fix a bug with the name of the queue status in the status results
(it is now 'queue_status', not 'status', which is used for error
responses).
This adds the following routes:
- /compose/metadata/<uuid> to retrieve a .tar of the build metadata
- /compose/results/<uuid> to retrieve a .tar of all of the build results
- /compose/logs/<uuid> to retrieve a .tar of just the logs from the build
- /compose/image/<uuid> to retrieve the output image from the build
The result is a JSON string with the following information:
* id - The uuid of the composition
* config - The configuration settings used to run Anaconda
* recipe - The depsolved recipe used to generate the kickstart
* commit - The (local) git commit hash for the recipe used
* deps - The NEVRA of all of the dependencies used in the composition
* compose_type - The type of output generated (tar, iso, etc.)
* queue_status - The final status of the composition (FINISHED or FAILED)
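A hypothetical response, with purely illustrative placeholder values,
might look like:

{
    "id": "4d13abb6-aa4e-4c80-a671-0b867e6e77f6",
    "config": { ... },
    "recipe": { ... },
    "commit": "<git commit hash>",
    "deps": [ ... ],
    "compose_type": "tar",
    "queue_status": "FINISHED"
}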
This adds returning the commit id from read_commit, and a new function
read_recipe_and_id() that returns the commit id and the recipe in a
tuple.
If the commit is passed in, it is used as is. If no commit is passed in,
it finds the most recent commit for the file on the selected branch and
returns that.
Missing recipes now raise a RecipeError with an informative message,
e.g. "No commits for missing-recipe.toml on the master branch."
Write the original recipe as recipe.toml and the depsolved version as frozen.toml.
Also write 'WAITING' to the STATUS file as its first state.
The STATUS states are now WAITING -> RUNNING -> FINISHED|FAILED
Also adds .package_names and .module_names properties. Call
recipe.freeze with a list of NEVRA dependencies and it will return a new
Recipe object with all of the packages and modules set to the depsolved
version.
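Usage might look roughly like this; the NEVRA dict format is an
assumption about what depsolving returns:

# recipe is an existing Recipe object; deps is a list of NEVRA dicts
deps = [{"name": "glusterfs", "epoch": 0, "version": "3.10.1",
         "release": "1.fc26", "arch": "x86_64"}]
frozen = recipe.freeze(deps)
print(frozen.package_names)    # e.g. ["glusterfs"], now pinned to the depsolved version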
This adds the ability to build a tar output image. The /compose and
/compose/types API routes are now available.
To start a build POST a JSON body to /compose, like this:
{"recipe_name":"glusterfs", "compose_type":"tar", "branch":"master"}
This will return a unique build id:
{
"build_id": "4d13abb6-aa4e-4c80-a671-0b867e6e77f6",
"status": true
}
which will be used to keep track of the build status (routes for this
do not exist yet).
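For example, assuming the server listens on the Unix socket described
below (/var/run/weldr/api.socket) and that the route lives under the
same /api/v0 prefix as /api/v0/status, a build could be started from
Python like this:

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that talks to a Unix domain socket instead of TCP."""
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/var/run/weldr/api.socket")
body = json.dumps({"recipe_name": "glusterfs", "compose_type": "tar", "branch": "master"})
conn.request("POST", "/api/v0/compose", body, {"Content-Type": "application/json"})
print(conn.getresponse().read().decode("utf-8"))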
The queue is in /var/lib/weldr/queue/new by default. It watches the
directory for new symlinks (to /var/lib/weldr/results/<dirname>) and
handles running anaconda on the kickstart found in final-kickstart.ks
inside the symlinked directory.
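A simplified sketch of that monitoring loop (polling interval, locking,
and error handling are glossed over):

import os
import time

queue_dir = "/var/lib/weldr/queue/new"
while True:
    for entry in sorted(os.listdir(queue_dir)):
        link = os.path.join(queue_dir, entry)
        if not os.path.islink(link):
            continue
        results_dir = os.path.realpath(link)    # /var/lib/weldr/results/<dirname>
        kickstart = os.path.join(results_dir, "final-kickstart.ks")
        ...                                     # run anaconda against the kickstart
        os.unlink(link)                         # done, drop it from the queue
    time.sleep(5)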
Also move default_image_name into imgutils so it can be used in other
places.
When running from lorax-composer the wait() call wasn't waiting until
the tar was finished. I think this is due to gevent monkey-patching
something. Using communicate() solves this problem.
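In other words, instead of calling wait() on the Popen object, do
something like the sketch below (the tar command line and cwd are only
illustrative):

import subprocess

proc = subprocess.Popen(["tar", "-cf", "/tmp/root.tar", "."], cwd="/mnt/installroot",
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()        # blocks until tar has really finished
rc = proc.returncode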
This drops support for the TCP port and switches to using a socket at
/var/run/weldr/api.socket
Also add the start of some docs for lorax-composer.
The --host and --port arguments have been removed.
--group sets the group name to use for access to the socket and its
parent directory. Defaults to 'weldr'
--socket sets the full path to the socket to create. Defaults to
'/var/run/weldr/api.socket'
Passing ?branch=<branch-name> will use the specified branch instead of
master.
The new branch will not exist until a /recipes/new?branch=new-branch
POST is made. At that time the branch will be created based on the
current master branch and the new commit will be added to it.
- Fix `projects_depsolve()` to not consider a successful empty response
(rc == 0) as an error.
- Fix recipe_from_dict() to default modules and packages to empty lists
instead of `None`, to avoid a Python-ism in the API for consumers and
stay compatible with the bdcs API.
Fixes #290
Returns a simple string to indicate that the API server is running.
/api/v0/status should be used instead; it provides more detailed info in
JSON format.
This includes a new configuration file at /etc/lorax/composer.conf with
built-in defaults. It also adds a YUMLOCK server config object so that
request handlers can access the yum base object without interfering
with each other.
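A rough sketch of the idea; the namedtuple shape and the
setup_yum_base() helper are assumptions, not the actual implementation:

from threading import Lock
from collections import namedtuple
from flask import Flask

# assumed shape of the YUMLOCK object: the shared yum base plus a lock guarding it
YumLock = namedtuple("YumLock", ["yb", "lock"])

server = Flask(__name__)
server.config["YUMLOCK"] = YumLock(yb=setup_yum_base(), lock=Lock())   # setup_yum_base() is hypothetical

# inside a request handler, serialize access to the shared yum object
with server.config["YUMLOCK"].lock:
    available = server.config["YUMLOCK"].yb.doPackageLists(pkgnarrow="available")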
This requires that the docs be at /usr/share/doc/lorax-*/html/ or, if
running from the source tree, at ./docs/html/
They can be re-created by running 'make docs'
A recipe should have its version bumped based on the version from the
previous commit, and not be bumped on the first commit. Fix the code and
the tests.
It appears that with libgit2 v0.24.6 the REVERSE sort mode causes
commits to be listed newest first, while in 0.25.1 they are listed
oldest first. On both versions just using SortMode.TIME gives the
desired result of oldest first.
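That is, something along these lines (the repository path is only
illustrative):

import pygit2

repo = pygit2.Repository("/var/lib/lorax/composer/git")
# per the above, TIME alone gives oldest-first on both libgit2 versions
commits = list(repo.walk(repo.head.target, pygit2.GIT_SORT_TIME))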
When a default value is a list or dict, the default argument is
instantiated as an object once, at the time the function is defined.
This is significant (exposing visible semantics) when the object is
mutable: there is no way to re-bind that default argument name in the
function's closure, so when the function is executed multiple times with
its default value, the value can change between executions, possibly
leading to strange side effects.
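The classic illustration of the pitfall:

def append_item(item, items=[]):     # the list is created once, at definition time
    items.append(item)
    return items

print(append_item(1))    # [1]
print(append_item(2))    # [1, 2] - the default still holds the value from the first call

def append_item_fixed(item, items=None):
    if items is None:                # create a fresh list on every call instead
        items = []
    items.append(item)
    return items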
For more information see:
http://satran.in/2012/01/12/python-dangerous-default-value-as-argument.html
The lorax-composer program will launch a BDCS compatible API server
using Flask and Gevent. Currently this is a skeleton application with
only one active route (/api/v0/status).
The API code lives in ./src/pylorax/api/v0.py with related code in other
pylorax/api/* modules.
This reduces the amount of code in livemedia-creator to the cmdline
parsing and calling of the installer functions. Moving them into other
modules will allow them to be used by other projects, like the
lorax-composer API server.
It appears that sometimes the loop device doesn't get set up properly;
this may be a race with other users of loop devices on the system, or
some other mechanism that isn't understood.
To try to prevent total failure when this happens, this patch retries
the loop setup 3 times before giving up. Previously it would wait for
the loop device to appear (checking 5 times); that operation is now
executed 3 times with a new losetup attempt each time.
Resolves: rhbz#1589084
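In rough outline the retry looks like the sketch below; the losetup
options are real, but the structure is a simplified illustration of the
described behavior, not the patch itself:

import os
import subprocess
import time

def loop_attach(outfile, tries=3, waits=5):
    """Try to attach outfile to a loop device, retrying if the device never appears."""
    for _ in range(tries):
        dev = subprocess.check_output(["losetup", "--find", "--show", outfile]).strip().decode()
        for _ in range(waits):
            if os.path.exists(dev):
                return dev
            time.sleep(1)
        # device never showed up; detach and try a fresh losetup
        subprocess.call(["losetup", "--detach", dev])
    raise RuntimeError("loop device for %s never appeared" % outfile)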
This requires OVMF to be set up on the system, and for the kickstart to
create a /boot/efi/ partition. You can then use it to create UEFI
bootable partitioned disk images.
The UEFI firmware needs to be installed manually on the system, either
in the default location of /usr/share/OVMF/, or --ovmf-path can be used
to point to its location.
Resolves: rhbz#1546715
Resolves: rhbz#1544805
Use it to override the default dracut arguments (displayed as part of
the --help output). If you want to extend the default arguments, they
all need to be passed in on the cmdline as well, e.g.
--dracut-arg='--xz' --dracut-arg='--install /.buildstamp' ...
Resolves: rhbz#1452220
This can't be done the same way as on master because there is no rpm
database inside the installroot to run rpm -qa against. Do it at the end
of the yum transaction.
Resolves: rhbz#1416155
This uses the --release value as the yum releasever so that $releasever
in a --repo will work.
It also turns on assumeyes so that any gpgkey entries in the .repo file
will be installed and used automatically if gpgcheck is enabled for the
repo.
Related: rhbz#1430479