Also fix a bug with the name of the queue status field in the status
results (it is now 'queue_status'; 'status' is reserved for error
responses).
This adds the following routes:
- /compose/metadata/<uuid> to retrieve a .tar of the build metadata
- /compose/results/<uuid> to retrieve .tar of all of the build results
- /compose/logs/<uuid> to retrieve a .tar of just the logs from the build
- /compose/image/<uuid> to retrieve the output image from the build
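As a rough sketch, fetching one of these archives from Python could look
like the following; the base URL and port are assumptions, since the API
server may be listening elsewhere (for example on a Unix socket):

    import shutil
    import urllib.request

    # Hypothetical base URL; adjust to wherever the API server is listening.
    API = "http://localhost:4000"
    build_id = "4d13abb6-aa4e-4c80-a671-0b867e6e77f6"

    # Save the .tar of the build logs into the current directory.
    url = "%s/compose/logs/%s" % (API, build_id)
    with urllib.request.urlopen(url) as response, \
            open("%s-logs.tar" % build_id, "wb") as out:
        shutil.copyfileobj(response, out)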
The result is a JSON string with the following information:
* id - The uuid of the composition
* config - containing the configuration settings used to run Anaconda
* recipe - The depsolved recipe used to generate the kickstart
* commit - The (local) git commit hash for the recipe used
* deps - The NEVRA of all of the dependencies used in the composition
* compose_type - The type of output generated (tar, iso, etc.)
* queue_status - The final status of the composition (FINISHED or FAILED)
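A minimal sketch of consuming that JSON; only a few of the fields listed
above are shown and the values are made up for illustration:

    import json

    # Trimmed example of the result described above (values are made up).
    result_str = '''{"id": "4d13abb6-aa4e-4c80-a671-0b867e6e77f6",
                     "compose_type": "tar",
                     "queue_status": "FINISHED"}'''
    info = json.loads(result_str)
    print(info["id"], info["compose_type"], info["queue_status"])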
This makes read_commit also return the commit id, and adds a new function
read_recipe_and_id() that returns the commit id and the recipe as a
tuple.
If a commit is passed in, it is used as-is. If no commit is passed in,
it finds the most recent commit for the file on the selected branch and
returns that.
Missing recipes now raise a RecipeError with an informative message.
eg. "No commits for missing-recipe.toml on the master branch."
Write the original recipe as recipe.toml and the depsolved version as
frozen.toml.
Also write 'WAITING' to the STATUS file as its first state.
The STATUS states are now WAITING -> RUNNING -> FINISHED|FAILED
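A minimal sketch of how the STATUS file might be written, assuming it
lives in the build's results directory (the path below is illustrative):

    import os

    def write_status(results_dir, state):
        # state is one of WAITING, RUNNING, FINISHED or FAILED
        with open(os.path.join(results_dir, "STATUS"), "w") as f:
            f.write(state + "\n")

    # A new build starts out WAITING; the queue monitor later moves it to
    # RUNNING and finally to FINISHED or FAILED.
    write_status("/var/lib/weldr/results/4d13abb6-aa4e-4c80-a671-0b867e6e77f6",
                 "WAITING")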
Also adds .package_names and .module_names properties. Call
recipe.freeze with a list of NEVRA dependencies and it will return a new
Recipe object with all of the packages and modules set to the depsolved
version.
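A usage sketch, assuming recipe is an already-loaded Recipe object; the
layout of the dependency entries below is an assumption:

    # deps would normally come from depsolving the recipe; this NEVRA is made up.
    deps = [{"name": "glusterfs", "epoch": 0, "version": "3.12.2",
             "release": "1.fc27", "arch": "x86_64"}]

    frozen = recipe.freeze(deps)     # new Recipe with the depsolved versions
    print(frozen.package_names)      # names of all packages in the recipe
    print(frozen.module_names)       # names of all modules in the recipe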
This adds the ability to build a tar output image. The /compose and
/compose/types API routes are now available.
To start a build POST a JSON body to /compose, like this:
{"recipe_name":"glusterfs", "compose_type":"tar", "branch":"master"}
This will return a unique build id:
{
    "build_id": "4d13abb6-aa4e-4c80-a671-0b867e6e77f6",
    "status": true
}
which will be used to keep track of the build status (routes for this
do not exist yet).
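For example, starting a build from Python might look like this; the base
URL is an assumption:

    import json
    import urllib.request

    # Hypothetical base URL; adjust to wherever the API server is listening.
    API = "http://localhost:4000"

    body = json.dumps({"recipe_name": "glusterfs",
                       "compose_type": "tar",
                       "branch": "master"}).encode("utf-8")
    req = urllib.request.Request(API + "/compose", data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    print(result["build_id"], result["status"])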
The queue is in /var/lib/weldr/queue/new by default. It watches the
directory for new symlinks (to /var/lib/weldr/results/<dirname>) and
handles running anaconda on the kickstart found in final-kickstart.ks
inside the symlinked directory.
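A sketch of how a build gets handed to the queue monitor; the UUID used
for the directory name is illustrative:

    import os

    queue_dir = "/var/lib/weldr/queue/new"
    results_dir = "/var/lib/weldr/results/4d13abb6-aa4e-4c80-a671-0b867e6e77f6"

    # Creating the symlink in the queue directory is what triggers the monitor,
    # which then runs anaconda on final-kickstart.ks inside the linked directory.
    os.symlink(results_dir, os.path.join(queue_dir, os.path.basename(results_dir)))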
Also move default_image_name into imgutils so it can be used in other
places.
When running from lorax-composer the wait() call wasn't waiting until
the tar was finished. I think this is due to gevent monkey-patching
something. Using communicate() solves this problem.
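A minimal sketch of the pattern; the tar arguments here are illustrative,
not the actual command lorax-composer runs:

    import subprocess

    proc = subprocess.Popen(["tar", "-cJf", "root.tar.xz", "."],
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    # communicate() drains the pipes and blocks until the child really exits,
    # which behaves correctly even with gevent's monkey-patching.
    out, err = proc.communicate()
    if proc.returncode != 0:
        raise RuntimeError("tar failed: %s" % err)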
Passing ?branch=<branch-name> will use the specified branch instead of
master.
The new branch will not exist until a /recipes/new?branch=new-branch
POST is made. At that time the branch will be created based on the
current master branch and the new commit will be added to it.
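A sketch of creating the new branch by POSTing a recipe; the base URL and
the recipe contents are assumptions:

    import json
    import urllib.request

    API = "http://localhost:4000"   # hypothetical base URL

    recipe = {"name": "http-server", "description": "example recipe",
              "version": "0.0.1", "modules": [],
              "packages": [{"name": "httpd", "version": "2.4.*"}]}

    # POSTing with ?branch=new-branch creates the branch from the current
    # master and adds this commit to it.
    req = urllib.request.Request(API + "/recipes/new?branch=new-branch",
                                 data=json.dumps(recipe).encode("utf-8"),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))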
- Fix `projects_depsolve()` to not consider a successful empty response
(rc == 0) as an error.
- Fix recipe_from_dict() to default modules and packages to empty lists
  instead of `None`, to avoid exposing a Python-ism to API consumers and
  to stay compatible with the bdcs API (see the sketch after this list).
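The defaulting pattern, shown in isolation (a simplified sketch, not the
full recipe_from_dict()):

    def recipe_defaults(recipe_dict):
        # Return empty lists rather than None so consumers always get a list.
        modules = recipe_dict.get("modules") or []
        packages = recipe_dict.get("packages") or []
        return modules, packages

    print(recipe_defaults({"name": "http-server"}))    # ([], [])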
Fixes #290
Returns a simple string to indicate that the API server is running.
/api/v0/status should be used instead; it provides more detailed info
in JSON format.
This includes a new configuration file at /etc/lorax/composer.conf with
built-in defaults. It also adds a YUMLOCK server config object so that
request handlers can access the yum base object without interfering
with each other.
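A sketch of the idea, assuming a namedtuple of a lock plus the yum base
stored on the Flask config; the field names here are assumptions:

    from collections import namedtuple
    from threading import Lock

    YumLock = namedtuple("YumLock", ["lock", "yb"])

    def register_yum(server, yb):
        # Request handlers take the lock before touching the shared yum base.
        server.config["YUMLOCK"] = YumLock(lock=Lock(), yb=yb)

    # Inside a handler:
    #     with server.config["YUMLOCK"].lock:
    #         yb = server.config["YUMLOCK"].yb
    #         ...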
This requires that the docs be at /usr/share/doc/lorax-*/html/ or, if
running from the source tree, at ./docs/html/
They can be re-created by running 'make docs'.
Recipe should have its version bumped based on the version from the
previous commit, and not be bumped on the first commit. Fix the code and
the tests.
It appears that with libgit2 v0.24.6 the REVERSE sort flag causes it to
list the commits newest first. In 0.25.1 it lists them oldest first. On
both versions just using SortMode.TIME gives the desired result of
oldest first.
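A sketch with pygit2, where GIT_SORT_TIME corresponds to SortMode.TIME;
the repository path is an assumption:

    import pygit2

    repo = pygit2.Repository("/var/lib/lorax/composer/git")   # illustrative path

    # Per the behavior described above, plain time sorting lists the commits
    # oldest first on both libgit2 0.24.x and 0.25.x.
    for commit in repo.walk(repo.head.target, pygit2.GIT_SORT_TIME):
        print(commit.hex, commit.message.split("\n")[0])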
When the default value is a list or dict, the default argument is
instantiated as an object at the time the function is defined. This is
significant (it exposes visible semantics) when the object is mutable:
there is no way to re-bind that default argument name in the function's
closure, so when the function is executed multiple times with its
default value, the value carries over between executions, possibly
leading to strange side effects.
For more information see:
http://satran.in/2012/01/12/python-dangerous-default-value-as-argument.html
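The classic illustration of the problem and the usual fix:

    # Dangerous: the list is created once, when the function is defined, and
    # is shared between every call that relies on the default.
    def add_item(item, items=[]):
        items.append(item)
        return items

    add_item("a")    # ["a"]
    add_item("b")    # ["a", "b"]  <- state leaked from the previous call

    # Safe: default to None and create a fresh list inside the function.
    def add_item_safe(item, items=None):
        if items is None:
            items = []
        items.append(item)
        return items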
The lorax-composer program will launch a BDCS compatible API server
using Flask and Gevent. Currently this is a skeleton application with
only one active route (/api/v0/status).
The API code lives in ./src/pylorax/api/v0.py with related code in other
pylorax/api/* modules.
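The skeleton boils down to something like this sketch; the bind address,
port, and response contents are assumptions:

    from flask import Flask, jsonify
    from gevent.pywsgi import WSGIServer

    server = Flask(__name__)

    @server.route("/api/v0/status")
    def v0_status():
        # The real handler returns more detail; this is a placeholder body.
        return jsonify(api="0", build="devel")

    if __name__ == "__main__":
        WSGIServer(("127.0.0.1", 4000), server).serve_forever()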
This reduces the amount of code in livemedia-creator to the cmdline
parsing and calling of the installer functions. Moving them into other
modules will allow them to be used by other projects, like the
lorax-composer API server.
It appears that sometimes the loop device doesn't get set up properly;
this may be a race with other users of loop devices on the system, or
some other mechanism that isn't understood.
To try to prevent total failure when this happens, this patch retries
the loop setup 3 times before giving up. Previously it would wait for
the loop device to appear (checking 5 times); that operation is now
executed 3 times with a new losetup attempt each time.
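A simplified sketch of the retry pattern; the real code also re-checks
that the device is actually associated with the backing file:

    import subprocess
    import time

    def loop_attach(backing_file, retries=3):
        """Attach a loop device to backing_file, retrying losetup on failure."""
        for _ in range(retries):
            try:
                # losetup -f --show prints the loop device it attached.
                dev = subprocess.check_output(["losetup", "-f", "--show",
                                               backing_file]).strip()
                if dev:
                    return dev
            except subprocess.CalledProcessError:
                pass
            time.sleep(1)
        raise RuntimeError("Unable to attach a loop device to %s" % backing_file)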
Resolves: rhbz#1589084
Use it to override the default dracut arguments (displayed as part of
the --help output). If you want to extend the default arguments they
all need to be passed in on the cmdline as well, e.g.
--dracut-arg='--xz' --dracut-arg='--install /.buildstamp' ...
Resolves: rhbz#1452220
This can't be done the same way as on master because there is no rpm
database inside the installroot to run rpm -qa against. Do it at the end
of the yum transaction.
Resolves: rhbz#1416155
When multiple units are passed to systemctl and one fails it doesn't
finish the others. Change the template command to call systemctl for
each unit individually.
This also removes the lvm2-activation-generator in runtime-cleanup.tmpl
Resolves: rhbz#1478247
This will allow anaconda to fetch kickstarts using https when installing
with fips=1
Leave vmlinuz and .vmlinuz.hmac in /boot
The dracut-fips module needs the .vmlinuz.hmac file in order to boot.
Resolves: rhbz#1341280
The previous code used losetup --list -O to return the backing store
associated with the loop device. This can fail due to losetup truncating
the output filename if sysfs isn't set up. Instead of printing the full
path it will truncate it to 64 characters with a * at the end.
See util-linux lib/loopdev.c for the code that does this.
This commit changes it to use the existing get_loop_name function, which
uses losetup -j to lookup the loop device associated with the backing
store which avoids the truncation problem.
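A simplified sketch of the lookup, not the actual pylorax.imgutils
implementation:

    import subprocess

    def get_loop_name(path):
        """Return the name of the loop device associated with path (e.g. loop0)."""
        # losetup -j prints lines like "/dev/loop0: [2049]:1234 (/path/to/file)"
        out = subprocess.check_output(["losetup", "-j", path]).decode("utf-8")
        if not out:
            return None
        return out.split(":")[0].rsplit("/", 1)[-1]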
Resolves: rhbz#1462150
It seems that on rare occasions losetup can return before the /dev/loopX
is ready for use, causing problems with mkfs. This tries to make sure
that the loop device really is associated with the backing file before
continuing.
Resolves: rhbz#1462150
When using the template install command, copying the same file to
itself shouldn't crash. Just log the error and continue.
Also copy the s390 configuration files for use with livemedia-creator
Resolves: rhbz#1269213
(cherry picked from commit 701ab02619)
Recently, Fedora has been trying to do a 3 product split. As part of
that, lorax was changed to do "installpkg lorax-product-*" via
provides.
I think that approach is awkward; a much simpler approach is to simply
specify the product package as input to lorax on the command line, via
external rel-eng scripts.
This patch therefore adds --installpkgs (and we should probably add an
option to remove the implicit lorax-product-* glob).
(cherry picked from commit 52d962d613)
Resolves: rhbz#1272222
The system the image boots on will likely not match the host where lorax
was run, and in some cases this can cause systems to hang.
Resolves: rhbz#1258498
installimg SRCDIR DESTFILE
Create a compressed cpio archive of the contents of SRCDIR and place
it in DESTFILE.
If SRCDIR doesn't exist or is empty nothing is created.
Examples:
installimg ${LORAXDIR}/product/ images/product.img
(cherry picked from commit b064ae6166)
Related: rhbz#1202278
I originally added --add-template to support doing something similar
to pungi, which injects content into the system to be used by default.
However, this causes the content to be part of the squashfs, which
means PXE installations have to download significantly more data that
they may not need (if they actually want to pull the tree data from
the network, which is not an unusual case).
What I actually need is to be able to modify *both* the runtime image
and the arch-specific content. For the runtime, I need to change
/usr/share/anaconda/interactive-defaults.ks to point to the new
content. (Although, potentially we could patch Anaconda itself to
auto-detect an ostree repository configured in disk image, similar to
what it does for yum repositories)
For the arch-specific image, I want to drop my content into the ISO
root.
So this patch adds --add-arch-template and --add-arch-template-var
in order to do the latter, while preserving the --add-template
to affect the runtime image.
Further, the templates will automatically graft in a directory named
"iso-graft/" from the working directory (if it exists).
(I suggest that external templates create a subdirectory named
"content" to avoid clashes with any future lorax work)
Thus, this will be used by the Atomic Host lorax templates to inject
content/repo, but could be used by e.g. pungi to add content/rpms as
well.
I tried to avoid code duplication by creating a new template for the
product.img bits and this, but that broke because the parent boot.iso
code needs access to the `${imggraft}` variable. I think a real fix
here would involve turning the product.img, content/, *and* boot.iso
into a new template.
Resolves: rhbz#1202278
removekmod GLOB [GLOB...] --allbut KEEPGLOB [KEEPGLOB...]
This can be used to remove kernel modules from under
/lib/modules/*/kernel/ while keeping specific items. This should be
easier than constructing find arguments to select the right things to
save.
(cherry picked from commit 11c9e0e8ee)
Resolves: rhbz#1230356
Resolves: rhbz#1184021
The --make-pxe-live target generates a live squashfs and initrd for PXE
boot. It also generates a pxe config template.
--make-ostree-live is used for installations of Atomic Host. In
addition to what --make-pxe-live does, it ensures the deployment root
is used instead of the physical root of the installed disk image where
needed. An Atomic installation needs to be a virt installation with
/boot on a separate partition (currently the only way supported by
Anaconda). The content of the boot partition is added to the live root
fs so that ostree can find the deployment from the boot configuration.
What I need is to make something like the traditional DVD which also
includes packages. At present this is apparently handled by the
entirely separate pungi tool.
At the moment, for me, modifying lorax to inject data from an external
source is the least bad option, rather than creating a new tool or
attempting to also modify pungi to do this.
This would also allow pungi's DVD creation to eventually be a set of
external templates for Lorax.
(cherry picked from commit 66359415be)
Resolves: rhbz#1157777
tar recurses into directories by default, but find is feeding it all the
files and directories so the tar it produces is considerably larger than
it needs to be due to duplicate files. Add the --no-recursion flag so
that tar will only add the specific files and directories piped to it
by find.
Related: rhbz#1144140
This adds the --make-tar option which will produce a xz compressed tar
of the root filesystem. This works with both virt-install and no-virt
modes. Use --image-name to set the output filename.
--compression is used to set the compression type to use, which defaults
to xz. Supported types are xz, lzma, gzip and bzip2.
--compress-arg is used to pass arguments to the compression utility.
(cherry picked from commit d04a99e8f4)
Resolves: rhbz#1144140