The default size is always going to be wrong, so try to estimate a more
reasonable amount of space. This is more complicated than you would
expect: yum's installedsize doesn't take into account the block size of
the filesystem, nor any extra artifacts generated by pre/post scripts.
So we end up with a minimum image size of 1GiB, a partition that is 40%
larger than the estimated space needed, and a disk image that grows in
1GiB increments. This is still better than a fixed 4GiB / partition that
was either too large or too small.
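A minimal sketch of the sizing rules described above (function and
constant names are illustrative, not the actual lorax-composer code):

def estimate_image_size(installed_bytes):
    """Pad yum's install size estimate and round up to whole GiB."""
    GIB = 1024 ** 3
    padded = int(installed_bytes * 1.4)        # partition 40% larger than the estimate
    padded = max(padded, GIB)                  # never below the 1GiB minimum
    return ((padded + GIB - 1) // GIB) * GIB   # grow in 1GiB increments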
The API server will run a mock compose when a test mode is passed to it.
Passing a 1 will queue a build, pretend to run the build for 10 seconds,
and then fail. Passing a 2 will do the same thing, but it will finish as
if it were successful. All results are available, but the output file
only contains the string "TEST IMAGE".
This should allow running tests inside docker without calling anaconda
(because it will not run in docker; it needs a VM).
This will prevent accidentally running more than 1 instance.
Uses /run/lorax-composer.pid and checks to make sure that the PID
written to it isn't stale.
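A rough sketch of the stale-PID check (names and error handling are
illustrative; the real implementation may differ):

import os

PID_FILE = "/run/lorax-composer.pid"

def another_instance_running():
    """Return True if the PID recorded in the pidfile belongs to a live process."""
    try:
        with open(PID_FILE) as f:
            pid = int(f.read().strip())
    except (IOError, ValueError):
        return False                  # no pidfile, or garbage in it
    try:
        os.kill(pid, 0)               # signal 0: existence check only
    except ProcessLookupError:
        return False                  # stale pidfile
    return True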
This will return the recipe in TOML format. Note that this does not
include any extra error information, just the recipes; any unrecognized
recipe names are skipped.
This ended up requiring more intrusive changes, but it should be the
most complex of the output types. After moving the core of
livemedia-creator into a function I added more settings to compose_args,
and more defaults to start_build. It now pulls the release information
from /etc/os-release, and produces a bootable .iso
We need to be able to share the output types from livemedia-creator with
lorax-composer, so move the core of the main() function into
run_creator(). Pass in the cmdline args or a DataHolder with them set.
Because of how Anaconda is run it needs to be passed a baseurl (using
--repo on the anaconda cmdline), not a mirrorlist url. This fixes it so
that the first mirror is used if the main repository is using a
mirrorlist.
Also make sure the recipe directory and its contents have correct
ownership, and change the default recipe path when using the systemd
service to /var/lib/lorax/composer/recipes/
This will allow testing without having a full system set up for
anaconda. If ?test=1 is passed to the POST /compose request, it will
wait 10 seconds instead of running Anaconda and then raise an error to
generate a failed build.
Passing ?test=2 will also wait 10 seconds instead of running Anaconda,
but will finish successfully.
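A hedged example of kicking off such a test compose over the API socket;
the requests-unixsocket package, the /api/v0/ route prefix, and the
socket path (described elsewhere in these notes) are assumptions for
illustration:

import requests_unixsocket   # third-party helper for HTTP over a unix socket

session = requests_unixsocket.Session()
# The socket path is percent-encoded into the URL host field
url = "http+unix://%2Fvar%2Frun%2Fweldr%2Fapi.socket/api/v0/compose?test=2"
body = {"recipe_name": "glusterfs", "compose_type": "tar", "branch": "master"}
r = session.post(url, json=body)
print(r.json())   # e.g. {'build_id': '<uuid>', 'status': True}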
This allows the client to request the end of the anaconda.log during and
after a build. The amount of data returned can be set by adding
?size=<kbytes>
Output is raw bytes, starting on the next available line boundary.
If the build hasn't started yet (state is WAITING), try removing the
symlink to it. If this succeeds, delete the partial results directory.
If the build has made it to RUNNING, a CANCEL file is written in the
results directory. The callback that is passed to execWithRedirect
catches this, causing a SIGTERM to be sent to anaconda. It then exits
and cleanup happens normally. The partial results directory is then
removed.
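A simplified sketch of that cancellation check; the callback signature
and its wiring into execWithRedirect are illustrative, only the
CANCEL-file convention comes from the description above:

import os
import signal

def make_cancel_cb(results_dir):
    """Callback for the process-watching loop: stop anaconda if CANCEL appears."""
    cancel_file = os.path.join(results_dir, "CANCEL")
    def cancel_cb(proc):
        if os.path.exists(cancel_file):
            proc.send_signal(signal.SIGTERM)   # anaconda exits, cleanup runs normally
            return True                        # report that the build was cancelled
        return False
    return cancel_cb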
.get_default() returns a string, so make sure we're actually parsing
the value as a boolean and not evaluating a non-empty string in a
boolean context (which will always be True).
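A tiny self-contained illustration of the fix (the parse_bool helper is
hypothetical; the point is that a non-empty string such as "0" is
always truthy):

def parse_bool(value):
    """Interpret a string setting as a boolean instead of relying on truthiness."""
    return value.lower() in ("1", "true", "yes")

assert parse_bool("0") is False      # bool("0") would have been True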
Also fix a bug with the name of the queue status in the status results
(it is now 'queue_status', not 'status', which is used for error
responses).
This adds the following routes:
- /compose/metadata/<uuid> to retrieve a .tar of the build metadata
- /compose/results/<uuid> to retrieve a .tar of all of the build results
- /compose/logs/<uuid> to retrieve a .tar of just the logs from the build
- /compose/image/<uuid> to retrieve the output image from the build
The result is a JSON string with the following information:
* id - The uuid of the composition
* config - The configuration settings used to run Anaconda
* recipe - The depsolved recipe used to generate the kickstart
* commit - The (local) git commit hash for the recipe used
* deps - The NEVRA of all of the dependencies used in the composition
* compose_type - The type of output generated (tar, iso, etc.)
* queue_status - The final status of the composition (FINISHED or FAILED)
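A hypothetical example of that JSON, with placeholder values and the
nested objects left empty, just to show the shape:

{
    "id": "4d13abb6-aa4e-4c80-a671-0b867e6e77f6",
    "config": { },
    "recipe": { },
    "commit": "<local git commit hash>",
    "deps": { },
    "compose_type": "tar",
    "queue_status": "FINISHED"
}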
This adds returning the commit id from read_commit, and a new function
read_recipe_and_id() that returns the commit id and the recipe in a
tuple.
If the commit is passed in, it is used as is. If no commit is passed in
it finds the most recent commit for the file on the selected branch and
returns that.
Missing recipes now raise a RecipeError with an informative message.
eg. "No commits for missing-recipe.toml on the master branch."
Write original as recipe.toml and the depsolved version as frozen.toml
Also write 'WAITING' to the STATUS file as its first state.
The STATUS states are now WAITING -> RUNNING -> FINISHED|FAILED
Also adds .package_names and .module_names properties. Call
recipe.freeze with a list of NEVRA dependencies and it will return a new
Recipe object with all of the packages and modules set to their
depsolved versions.
This adds the ability to build a tar output image. The /compose and
/compose/types API routes are now available.
To start a build POST a JSON body to /compose, like this:
{"recipe_name":"glusterfs", "compose_type":"tar", "branch":"master"}
This will return a unique build id:
{
    "build_id": "4d13abb6-aa4e-4c80-a671-0b867e6e77f6",
    "status": true
}
which will be used to keep track of the build status (routes for this
do not exist yet).
The queue is in /var/lib/weldr/queue/new by default. It watches the
directory for new symlinks (to /var/lib/weldr/results/<dirname>) and
handles running anaconda on the kickstart found in final-kickstart.ks
inside the symlinked directory.
Also move default_image_name into imgutils so it can be used in other
places.
When running from lorax-composer the wait() call wasn't waiting until
the tar was finished. I think this is due to gevent monkey-patching
something. Using communicate() solves this problem.
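A minimal sketch of the change (the tar invocation and paths are
illustrative, not the lorax-composer code):

import subprocess

proc = subprocess.Popen(["tar", "-cf", "/tmp/root.tar", "-C", "/mnt/sysimage", "."])
# proc.wait() could return early under gevent's monkey-patching;
# communicate() blocks until the tar process has really finished.
proc.communicate()
if proc.returncode != 0:
    raise RuntimeError("tar failed with rc=%d" % proc.returncode)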
This drops support for the TCP port and switches to using a socket at
/var/run/weldr/api.socket
Also add the start of some docs for lorax-composer.
The --host and --port arguments have been removed.
--group sets the group name to use for access to the socket and its
parent directory. Defaults to 'weldr'
--socket sets the full path to the socket to create. Defaults to
'/var/run/weldr/api.socket'
Passing ?branch=<branch-name> will use the specified branch instead of
master.
The new branch will not exist until a /recipes/new?branch=new-branch
POST is made. At that time the branch will be created based on the
current master branch and the new commit will be added to it.
- Fix `projects_depsolve()` to not treat a successful empty response
(rc == 0) as an error.
- Fix recipe_from_dict() to default modules and packages to empty lists
instead of `None` (see the sketch below), to avoid a Python-ism in the
API for consumers and stay compatible with the bdcs API.
Fixes #290
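A minimal sketch of the defaulting change from the second item (the
helper name here is hypothetical; the real change is inside
recipe_from_dict()):

def _default_lists(recipe_dict):
    """Make sure modules and packages are lists, never None, in API output."""
    modules = recipe_dict.get("modules") or []
    packages = recipe_dict.get("packages") or []
    return modules, packages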
Returns a simple string to indicate that the API server is running.
/api/v0/status should be used instead; it provides more detailed info
in JSON format.
This includes a new configuration file at /etc/lorax/composer.conf with
built-in defaults. It also adds a YUMLOCK server config object so that
request handlers can access the yum base object without interfering
with each other.
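A rough sketch of the idea (the YumLock shape and the handler shown here
are assumptions for illustration, not the actual server config object):

from collections import namedtuple
from threading import Lock

YumLock = namedtuple("YumLock", ["yb", "lock"])

def list_available(yumlock):
    """Request handlers take the lock before touching the shared yum base."""
    with yumlock.lock:
        return yumlock.yb.doPackageLists(pkgnarrow="available")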
This requires that the docs be at /usr/share/doc/lorax-*/html/ or if
running from the source tree, at ./docs/html/
They can be re-created by running 'make docs'
Recipe should have its version bumped based on the version from the
previous commit, and not be bumped on the first commit. Fix the code and
the tests.
It appears that with libgit2 v0.24.6 the reverse sort causes it to list
commits newest first, while 0.25.1 lists them oldest first. On both
versions just using SortMode.TIME gives the desired result of oldest
first.
When a default value is a list or dict, the default argument is
instantiated as an object at the time the function is defined. This is
significant (exposing visible semantics) when the object is mutable:
there is no way to re-bind that default argument name in the function's
closure, so when the function is executed multiple times with its
default value, the value can change between executions, possibly leading
to strange side effects.
For more information see:
http://satran.in/2012/01/12/python-dangerous-default-value-as-argument.html
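The classic example of the pitfall and the usual fix:

def append_bad(item, items=[]):      # one shared list, created at definition time
    items.append(item)
    return items

def append_good(item, items=None):   # create a fresh list on every call
    if items is None:
        items = []
    items.append(item)
    return items

append_bad("a")
print(append_bad("b"))     # ['a', 'b'] -- the "empty" default list was shared
append_good("a")
print(append_good("b"))    # ['b']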
The lorax-composer program will launch a BDCS compatible API server
using Flask and Gevent. Currently this is a skeleton application with
only one active route (/api/v0/status).
The API code lives in ./src/pylorax/api/v0.py with related code in other
pylorax/api/* modules.
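A minimal sketch of what such a skeleton route looks like (the response
fields and the gevent/socket wiring are illustrative, not the exact
lorax-composer code):

from flask import Flask, jsonify

server = Flask(__name__)

@server.route("/api/v0/status")
def v0_status():
    """Report that the API server is alive."""
    return jsonify(api="0", build="devel")

if __name__ == "__main__":
    # The real server runs under gevent rather than Flask's builtin server
    server.run()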
This reduces the amount of code in livemedia-creator to the cmdline
parsing and calling of the installer functions. Moving them into other
modules will allow them to be used by other projects, like the
lorax-composer API server.
It appears that sometimes the loop device doesn't get set up properly;
this may be a race with other users of loop devices on the system, or
some other mechanism that isn't understood.
To try to prevent total failure when this happens, this patch retries
the loop setup 3 times before giving up. Previously it would wait for
the loop device to appear (checking 5 times); that operation is now
executed 3 times, with a new losetup attempt each time.
Resolves: rhbz#1589084
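A rough sketch of the retry pattern described above (the helper name and
losetup invocation are illustrative, not the lorax code):

import subprocess
import time

def loop_attach_with_retry(disk_img, retries=3, wait=2):
    """Retry losetup a few times instead of failing on the first attempt."""
    for _ in range(retries):
        proc = subprocess.run(["losetup", "--find", "--show", disk_img],
                              capture_output=True, text=True)
        dev = proc.stdout.strip()
        if proc.returncode == 0 and dev:
            return dev                      # e.g. /dev/loop0
        time.sleep(wait)                    # give the system a moment and retry
    raise RuntimeError("loop device setup failed for %s" % disk_img)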
This requires OVMF to be set up on the system, and for the kickstart to
create a /boot/efi/ partition. You can then use it to create UEFI
bootable partitioned disk images.
The UEFI firmware needs to be installed manually on the system, either
in the default location of /usr/share/OVMF/ or in a location pointed to
with --ovmf-path.
Resolves: rhbz#1546715
Resolves: rhbz#1544805
Use it to override the default dracut arguments (displayed as part of
the --help output). If you want to extend the default arguments they
all need to be passed in on the cmdline as well, e.g.
--dracut-arg='--xz' --dracut-arg='--install /.buildstamp' ...
Resolves: rhbz#1452220
This can't be done the same way as on master because there is no rpm
database inside the installroot to run rpm -qa against. Do it at the end
of the yum transaction.
Resolves: rhbz#1416155
This uses the --release value as the yum releasever so that $releasever
in a --repo will work.
It also turns on assumeyes so that any gpgkey entries in the .repo file
will be installed and used automatically if gpgcheck is enabled for the
repo.
Related: rhbz#1430479
When multiple units are passed to systemctl and one fails, it doesn't
finish the others. Change the template command to call systemctl for
each unit individually.
This also removes the lvm2-activation-generator in runtime-cleanup.tmpl
Resolves: rhbz#1478247
This will allow anaconda to fetch kickstarts using https when installing
with fips=1
Leave vmlinuz and .vmlinuz.hmac in /boot
The dracut-fips module needs the .vmlinuz.hmac file in order to boot.
Resolves: rhbz#1341280
The previous code used losetup --list -O to return the backing store
associated with the loop device. This can fail because losetup truncates
the output filename if sysfs isn't set up: instead of printing the full
path it truncates it to 64 characters with a * at the end.
See util-linux lib/loopdev.c for the code that does this.
This commit changes it to use the existing get_loop_name function, which
uses losetup -j to look up the loop device associated with the backing
store which avoids the truncation problem.
Resolves: rhbz#1462150
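A hedged sketch of the -j lookup (the real code lives in the existing
get_loop_name helper; the parsing here is simplified):

import subprocess

def loop_device_for(backing_file):
    """Return the loop device name (e.g. 'loop0') associated with backing_file."""
    out = subprocess.check_output(["losetup", "-j", backing_file],
                                  universal_newlines=True)
    # Typical output: /dev/loop0: []: (/path/to/backing_file)
    line = out.strip().splitlines()[0] if out.strip() else ""
    return line.split(":")[0].split("/")[-1] if line else ""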