composer-cli is an interactive tool for use with a WELDR API server,
managing blueprints, exploring available packages, and building new images.
lorax-composer <lorax-composer.html> and osbuild-composer
<https://osbuild.org> both implement compatible servers.

It requires the server to be installed on the local system, and the user
running it needs to be a member of the weldr group. They do not need to be
root, but all of the security precautions apply.
Start an ostree compose using the selected blueprint and output type. Optionally start an upload.
This command is only supported by osbuild-composer. --size is in MiB.
compose types
List the supported output types.

compose status
List the status of all running and finished composes.

compose list [waiting|running|finished|failed]
List basic information about composes.

compose log <UUID> [<SIZE>]
Show the last SIZE kB of the compose log.

compose cancel <UUID>
Cancel a running compose and delete any intermediate results.

compose delete <UUID,…>
Delete the listed compose results.

compose info <UUID>
Show detailed information on the compose.

compose metadata <UUID>
Download the metadata used to create the compose to <uuid>-metadata.tar

compose logs <UUID>
Download the compose logs to <uuid>-logs.tar

compose results <UUID>
Download all of the compose results; metadata, logs, and image to <uuid>.tar

compose image <UUID>
Download the output image from the compose. Filename depends on the type.
blueprints diff <BLUEPRINT> <FROM-COMMIT> <TO-COMMIT>
Display the differences between 2 versions of a blueprint.
FROM-COMMIT can be a commit hash or NEWEST.
TO-COMMIT can be a commit hash, NEWEST, or WORKSPACE.
blueprints save <BLUEPRINT,…>
Save the blueprint to a file, <BLUEPRINT>.toml

blueprints delete <BLUEPRINT>
Delete a blueprint from the server.

blueprints depsolve <BLUEPRINT,…>
Display the packages needed to install the blueprint.

blueprints push <BLUEPRINT>
Push a blueprint TOML file to the server.

blueprints freeze <BLUEPRINT,…>
Display the frozen blueprint’s modules and packages.

blueprints freeze show <BLUEPRINT,…>
Display the frozen blueprint in TOML format.

blueprints freeze save <BLUEPRINT,…>
Save the frozen blueprint to a file, <blueprint-name>.frozen.toml.

blueprints tag <BLUEPRINT>
Tag the most recent blueprint commit as a release.

blueprints undo <BLUEPRINT> <COMMIT>
Undo changes to a blueprint by reverting to the selected commit.

blueprints workspace <BLUEPRINT>
Push the blueprint TOML to the temporary workspace storage.
modules list
List the available modules.

projects list
List the available projects.

projects info <PROJECT,…>
Show details about the listed projects.

sources list
List the available sources.

sources info <SOURCE-NAME,…>
Show details about the listed sources.

sources add <SOURCE.TOML>
Add a package source to the server.

sources change <SOURCE.TOML>
Change an existing source.

sources delete <SOURCE-NAME>
Delete a package source.
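A package source is described in a small TOML file. The sketch below writes one and shows how it would be added; the repository name and URL are illustrative placeholders, and the field names follow the sources documentation:

```shell
# Describe a yum-baseurl package source (all values are placeholders).
cat > custom-source.toml <<'EOF'
name = "custom"
url = "https://repo.example.com/custom/x86_64/"
type = "yum-baseurl"
check_ssl = true
check_gpg = false
EOF

# Adding it requires a running WELDR API server:
# composer-cli sources add custom-source.toml
```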
status show
Show API server status.

NOTE: uploading is only available as part of the compose command
using the osbuild-composer API server.
Start out by listing the available blueprints using composer-cli blueprints list,
pick one and save it to the local directory by running
composer-cli blueprints save http-server. If there are no blueprints available you can
copy one of the examples from the test suite.

Edit the file (it will be saved with a .toml extension) and change the
description, add a package or module to it. Send it back to the server by
running composer-cli blueprints push http-server.toml. You can verify that it was
saved by viewing the changelog - composer-cli blueprints changes http-server.

Build a qcow2 disk image from this blueprint by running
composer-cli compose start http-server qcow2. It will print a UUID that you can use
to keep track of the build. You can also cancel the build if needed.

The available types of images are displayed by composer-cli compose types.
Currently this consists of: alibaba, ami, ext4-filesystem, google, live-iso,
openstack, partitioned-disk, qcow2, tar, vhd, vmdk

Monitor it using composer-cli compose status, which will show the status of
all the builds on the system. You can view the end of the anaconda build logs
once it is in the RUNNING state using composer-cli compose log UUID,
where UUID is the UUID returned by the start command.

Once the build is in the FINISHED state you can download the image.
Downloading the final image is done with composer-cli compose image UUID and it will
save the qcow2 image as UUID-disk.qcow2 which you can then use to boot a VM like this:
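The whole workflow can be sketched as the session below. It assumes a running WELDR API server and membership in the weldr group; the blueprint name, memory size, and qemu invocation are illustrative:

```shell
composer-cli blueprints list
composer-cli blueprints save http-server
composer-cli blueprints push http-server.toml
composer-cli compose start http-server qcow2    # prints a UUID
composer-cli compose status
composer-cli compose image <UUID>               # saves <UUID>-disk.qcow2

# Boot the downloaded image in a VM (sketch):
qemu-kvm --name test-image -m 1024 -hda ./<UUID>-disk.qcow2
```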
composer-cli can upload the images to a number of services, including AWS,
OpenStack, and vSphere. The upload can be started when the build is finished
by using composer-cli compose start .... In order to access the service you need
to pass authentication details to composer-cli using a TOML file.

Note

This is only supported when running the osbuild-composer API server.

Providers are where the images are uploaded to. You will need to gather some
provider specific information in order to authenticate with it. Please refer
to the osbuild-composer documentation for the provider specific fields. You
will then create a TOML file with the name of the provider and the settings,
like this:
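A sketch of such a provider settings file for AWS; the field names follow the osbuild-composer upload documentation, and every value here is a placeholder:

```shell
# AWS provider settings for an image upload (all values are placeholders).
cat > aws-credentials.toml <<'EOF'
provider = "aws"

[settings]
aws_access_key = "ACCESS-KEY-PLACEHOLDER"
aws_secret_key = "SECRET-KEY-PLACEHOLDER"
aws_bucket = "BUCKET-NAME"
aws_region = "us-east-1"
EOF
```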
The access key and secret key can be created by going to the
IAM->Users->SecurityCredentials section and creating a new access key. The
secret key will only be shown when it is first created so make sure to record
it in a secure place. The region should be the region that you want to use the
AMI in, and the bucket can be an existing bucket, or a new one, following the
normal AWS bucket naming rules. It will be created if it doesn’t already exist.

When uploading the image it is first uploaded to the s3 bucket, and then
converted to an AMI. If the conversion is successful the s3 object will be
deleted. If it fails, re-trying after correcting the problem will re-use the
object if you have not deleted it in the meantime, speeding up the process.
There are a couple of arguments that can be helpful when debugging problems.
These are only meant for debugging and should not be used to script access to
the API. If you need to do that you can communicate with it directly in the
language of your choice.

--json will return the server’s response as a nicely formatted json output
instead of printing what the command would usually print.

--test=1 will cause a compose start to start creating an image, and then
end with a failed state.

--test=2 will cause a compose to start and then end with a finished state,
without actually composing anything.
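For example (against a running server; the blueprint name is illustrative):

```shell
# Return the server's raw JSON response instead of the usual output:
composer-cli --json compose status

# Exercise the queue without actually building anything:
composer-cli --test=2 compose start example-http-server qcow2
```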
I am the Lorax. I speak for the trees [and images].
Lorax is used to build the Anaconda Installer boot.iso. It consists of a
library, pylorax, a set of templates, and the lorax script. Its operation
is driven by a customized set of Mako templates that lists the packages
to be installed, steps to execute to remove unneeded files, and creation
of the iso for all of the supported architectures.
Tree building tools such as pungi and revisor rely on ‘buildinstall’ in
anaconda/scripts/ to produce the boot images and other such control files
in the final tree. How buildinstall works is not completely clear from
reading the scripts.

Create a new central driver with all information living in Python modules.
Configuration files will provide the knowledge previously contained in the
upd-instroot and mk-images* scripts.
sudo livemedia-creator --make-iso \

This requires that you have the anaconda-tui package installed. A vnc display will be
chosen, or you can use a specific port by passing it, eg. --vnc vnc:127.0.0.1:5.
This is usually a good idea when testing changes to the kickstart. lmc tries
to monitor the logs for fatal errors, but may not catch everything.
There are 2 stages, the install stage which produces a disk or filesystem image
as its output, and the boot media creation which uses the image as its input.
The results directory will include the kernel, initrd, the squashfs filesystem, etc. If you only want the
boot.iso you can pass --iso-only and the other files will be removed. You
can also name the iso by using --iso-name my-live.iso.
The docs/ directory includes several example kickstarts, one to create a live
desktop iso using GNOME, and another to create a minimal disk image. When
using a local caching proxy the packages will get cached, so your kickstart url would look like:

You can also add an update repo, but don’t name it updates. Add --proxy to it
as well.
You can create images without using qemu by passing --no-virt on the
cmdline. This will use Anaconda’s directory install feature to handle the
install without virt. Errors are
logged for debugging purposes and if there are SELinux denials they should
be reported as a bug.
Amazon EC2 images can be created by using the --make-ami switch and an appropriate
kickstart file. All of the work to customize the image is handled by the kickstart,
so that it will work with livemedia-creator.
livemedia-creator can now replace appliance-tools by using the --make-appliance
switch. This will create the partitioned disk image and an XML file that can be
used with virtualization tools. The name and version default to the values
from --releasever. eg.:

--image-type=qcow2 --app-file=minimal-test.xml --image-name=minimal-test.img
livemedia-creator can be used to create unpartitioned filesystem images using
the --make-fsimage option. As of version 21.8 this works with both qemu and
no-virt modes of operation. Previously it was only available with no-virt.
The --make-tar command can be used to create a tar of the root filesystem. By
default it is compressed using xz, but this can be changed using the
--compression cmdline argument. It works with both the virt and
no-virt install methods.
The --make-pxe-live command will produce a squashfs image containing the live root
filesystem that can be used for PXE boot. The results directory will contain
the live image, kernel image, initrd image, and a template of the PXE configuration
for the images.
The --make-ostree-live command will produce the same result as --make-pxe-live
for installations of Atomic Host. An example kickstart for such an installation
is in docs/rhel-atomic-pxe-live.ks.
The PXE images can also be created with --no-virt by using the example
kickstart in docs/rhel-atomic-pxe-live-novirt.ks. This also works inside the
mock environment.
As of lorax version 22.2 you can use livemedia-creator and anaconda version
22.15 inside of a mock chroot with --make-iso and --make-fsimage.
After the compose finishes the results directory will contain the logs,
including anaconda logs and livemedia-creator logs. The new iso will be
located at ~/results/try-1/images/boot.iso, and the ~/results/try-1/
directory tree will also contain the vmlinuz, initrd, etc.
Version 25.0 of livemedia-creator switches to using qemu for virtualization.
This allows creation of all image types, and use of the KVM on the host if
it is available. The new iso will be
located at ~/results/try-1/images/boot.iso, and the ~/results/try-1/
directory tree will also contain the vmlinuz, initrd, etc.
This will run qemu without kvm support, which is going to be very slow. You can
add mknod /dev/kvm c 10 232; to create the device node before running lmc.
OpenStack supports partitioned disk images so --make-disk can be used to
create images for importing into glance, OpenStack’s image storage component.
If the image includes cloud-init and cloud-utils-growpart, then
cloud-utils-growpart will grow the image to fit the instance’s disk size.
If qcow2 wasn’t used then --disk-format should be set to raw.
Use lmc to create a tarfile as described in the TAR File Creation section, but substitute the
rhel-container.ks example kickstart which removes the requirement for core files and the kernel.
Vagrant images can be created using the following command:

sudo livemedia-creator --make-vagrant --vagrant-metadata /path/to/metadata.json \

The example kickstart sets up
the vagrant user with the default insecure SSH pubkey and a few useful
utilities.
This also works with --no-virt, but will not work inside a mock due to its
use of partitioned disk images and qcow2.
Partitioned disk images can only be created for the same platform as the host system (BIOS or
UEFI). You can use virt to create BIOS images on UEFI systems, and it is also possible
to create UEFI images on BIOS systems using OVMF
firmware files.

Note

The --virt-uefi method is currently only supported on the x86_64 architecture.
Sometimes an installation will get stuck. When using qemu the logs will
be written to ./virt-install.log and most of the time any problems that happen
will show up there while running the anaconda process. Anaconda is
multi-threaded and it can sometimes become stuck and refuse to exit. When this
happens you can usually clean up by first killing the anaconda process then
running anaconda-cleanup.
lorax-composer is a WELDR API server that allows you to build disk images using
Blueprints to describe the package versions to be installed into the image.
It is compatible with the Weldr project’s bdcs-api REST protocol. More
information on Weldr can be found on the Weldr blog.

Behind the scenes it uses livemedia-creator and
Anaconda to handle the
installation and configuration of the images.

Note

lorax-composer is now deprecated. It is being replaced by the
osbuild-composer WELDR API server which implements more features (eg.
ostree, image uploads, etc.) You can still use composer-cli and
cockpit-composer with osbuild-composer. See the documentation or
the osbuild website for more information.
The best way to install lorax-composer is to use
sudo dnf install lorax-composer composer-cli, this will setup the weldr user and install the
systemd socket activation service. You will then need to enable it with
systemctl enable lorax-composer.socket. This will leave the server off until the first request
is made. Systemd will then launch the server and it will remain running until
the system is rebooted.
Create a weldr user and group by running useradd weldr. The needed directories will
be created, and all the blueprints created with the .toml files in the top level
of the directory will be imported into the blueprint git storage when
lorax-composer starts.
Some security related issues that you should be aware of before running lorax-composer:

The logs may contain sensitive error
messages as well as extra debugging info and API requests.

Anyone with access to the API could
inject commands into a blueprint that would result in the kickstart executing
arbitrary code on the host. Only authorized users should be allowed to build
images using lorax-composer.
The server runs as root, and as weldr. Communication with it is via a unix
domain socket (/run/weldr/api.socket by default). The directory and socket
path can be changed on the
cmdline. It will then drop root privileges for the API thread and run as the weldr
user. The queue and compose thread still runs as root because it needs to be
able to mount/umount files and run Anaconda.
Blueprints are simple text files in TOML format that describe
which packages, and what versions, to install into the image. They can also define a limited set
of customizations to make to the final image. If you make a change without bumping
the version yourself, the server will
automatically bump the PATCH level of the version. If the version is
set to 0.1.0 when the existing blueprint version is 0.0.1 it will
result in the new blueprint being stored as version 0.1.0.

These entries describe the package names and matching version glob to be installed into the image.
The names must match the names exactly, and the versions can be an exact match
or a filesystem-like glob of the version using * wildcards and ? for single
character matching.

NOTE: As of lorax-composer-29.2-1 the versions are not used for depsolving,
that is planned for a future release. And currently there are no differences
between packages and modules in lorax-composer.
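A minimal blueprint with package and module entries can be sketched like this; the blueprint name and the package choices are illustrative:

```shell
# A minimal blueprint in TOML (names and versions are illustrative).
cat > example-http-server.toml <<'EOF'
name = "example-http-server"
description = "An example http server"
version = "0.0.1"

[[packages]]
name = "tmux"
version = "*"

[[modules]]
name = "httpd"
version = "2.4.*"
EOF

# Push it to the server (requires a running WELDR API server):
# composer-cli blueprints push example-http-server.toml
```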
These entries describe a group of packages to be installed into the image. Package groups are
defined in the repository metadata. Each group has a descriptive name used primarily for display
in user interfaces, and an ID used in kickstart files; here, the ID is the accepted
way of listing a group.

Groups have three different ways of categorizing their packages: mandatory, default, and optional.
For purposes of blueprints, mandatory and default packages will be installed. There is no mechanism
for selecting optional packages.
This allows you to append arguments to the bootloader’s kernel commandline. This will not have any
effect on tar or ext4-filesystem images since they do not include a bootloader. eg.:

[customizations.kernel]
append = "nosmt=force"
Add a user to the image, and/or set their ssh key.
All fields for this section are optional except for the name. If the password starts with
$6$, $5$, or $2b$ it will be stored as
an encrypted password. Otherwise it will be treated as a plain text password.
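A sketch of a complete user entry follows; every value is an illustrative placeholder, and the field names follow the blueprint customization docs:

```shell
# A user customization fragment for a blueprint (values are placeholders).
cat > user-customization.toml <<'EOF'
[[customizations.user]]
name = "admin"
description = "Administrator account"
password = "PLAIN-TEXT-OR-CRYPTED-PASSWORD"
key = "PUBLIC SSH KEY"
home = "/srv/admin/"
shell = "/usr/bin/bash"
groups = ["wheel"]
uid = 1200
gid = 1200
EOF
```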
Customizing the timezone and the NTP servers to use for the system:

[customizations.timezone]
timezone = "US/Eastern"
ntpservers = ["0.north-america.pool.ntp.org", "1.north-america.pool.ntp.org"]

The values supported by timedatectl list-timezones can be used here. Both fields are
optional and will default to using the distribution defaults which are fine for
most uses.

In some image types there are already NTP servers setup, eg. Google cloud image, and they
cannot be overridden because they are required to boot in the selected environment. But the
timezone will be updated to the one selected in the blueprint.
[customizations.locale]
languages = ["en_US.UTF-8"]
keyboard = "us"

The values supported by localectl list-locales and localectl list-keymaps can be listed on
the command line.
Multiple languages can be added. The first one becomes the
primary, and the others are added as secondary. One or the other of languages
or keyboard must be included (or both) in the section.
By default the firewall blocks all access except for services that enable their ports explicitly,
like sshd. This command can be used to open other ports or services. Ports are configured using
the port:protocol format. Services can also be listed in a customizations.firewall.services section. If you
only want the default firewall setup this section can be omitted from the blueprint.

NOTE: The Google and OpenStack templates explicitly disable the firewall for their environment.
This cannot be overridden by the blueprint.
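A firewall customization can be sketched like this; the ports and services are illustrative, with field names following the blueprint customization docs:

```shell
# A firewall customization fragment for a blueprint (values are illustrative).
cat > firewall-customization.toml <<'EOF'
[customizations.firewall]
ports = ["22:tcp", "80:tcp", "imap:tcp", "53:tcp", "53:udp"]

[customizations.firewall.services]
enabled = ["ftp", "ntp", "dhcp"]
disabled = ["telnet"]
EOF
```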
This section can be used to control which services are enabled at boot time.
Some image types already have services enabled or disabled in order for the
image to work correctly, and this customization cannot change that. You may
specify any systemd unit file accepted by systemd, eg.:

[customizations.services]
enabled = ["sshd", "cockpit.socket", "httpd"]
disabled = ["postfix", "telnetd"]
The [[repos.git]] entries are used to add files from a git <https://git-scm.com/>
repository to the created image. The repository is cloned, the specified ref is checked out,
and an rpm is created from it. The ref can be a tag, commit hash, or branch; to use the HEAD
of a branch set it to the branch name.
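A [[repos.git]] entry can be sketched like this; all values are illustrative placeholders, with field names following the blueprint docs:

```shell
# A git repo entry for a blueprint (all values are placeholders).
cat > git-repo-customization.toml <<'EOF'
[[repos.git]]
rpmname = "server-config"
rpmversion = "1.0"
rpmrelease = "1"
summary = "Setup files for server deployment"
reference = "v1.0"
destination = "/opt/server/"
repo = "PATH OF GIT REPO TO CLONE"
EOF
```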
livemedia-creator supports a large number of output types, and only some of
these are currently available via lorax-composer. To add a new output type to
lorax-composer a kickstart file needs to be added to ./share/composer/. The
name of the kickstart is what will be used by the /compose/types route, and the
compose_type field of the POST to start a compose. It also needs to have
code added to the pylorax.api.compose.compose_args() function. The
_MAP entry in this function defines what lorax-composer will pass to
pylorax.installer.novirt_install() when it runs the compose. When the
compose is finished the output files need to be copied out of the build
directory (/var/lib/lorax/composer/results/<UUID>/compose/);
pylorax.api.compose.move_compose_results() handles this for each type.
You should move them instead of copying to save space.

If the new output type does not have support in livemedia-creator it should be
added there first. This will make the output available to the widest number of
users.
Partitioned disk support is something that livemedia-creator already supports
via the --make-disk cmdline argument. To add this to lorax-composer it
needs 3 things:

A partitioned-disk.ks file in ./share/composer/

A new entry in the _MAP in pylorax.api.compose.compose_args()

A bit of code in pylorax.api.compose.move_compose_results() to move the disk image from
the build directory into the results directory.

The partitioned-disk.ks is pretty similar to the example minimal kickstart.
The move function could simply move the image into
the results directory, or it could do some post-processing on it. The end of
the function should always clean up the ./compose/ directory, removing any
unneeded extra files. This is especially true for the live-iso since it produces
the contents of the iso as well as the boot.iso itself.
By default lorax-composer uses the host’s configured repositories. It copies
the *.repo files from /etc/yum.repos.d/ into
/var/lib/lorax/composer/repos.d/ at startup. The sources API normally returns JSON,
but it can also return TOML if ?format=toml is used in the request.

In some situations the system may want to only use a DVD iso as the package
source, not the repos from the network. lorax-composer and anaconda
need to be setup to use the DVD instead, otherwise the network repo
type will not be available.

Another option is mounting the iso and creating a source file to point to it as described in the
Package Sources documentation. In that case there is no need to remove the other
sources from /etc/yum.repos.d/ or clear the cached repos.
It is best to run lorax from the same release as is being targeted,
because the templates may have release specific logic in them. eg. Use the
rawhide version to build the boot.iso for rawhide, along with the rawhide
repositories.

Run this as root to create a boot.iso in ./results/:

dnf install lorax
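An invocation can be sketched like this; the product name, version, and mirror URL are illustrative placeholders and need root privileges to run:

```shell
lorax -p Fedora -v 38 -r 38 \
      -s http://dl.fedoraproject.org/pub/fedora/linux/releases/38/Everything/x86_64/os/ \
      ./results/
```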
You can add your own repositories with -s, and packages in them will
override the ones in the distribution repositories.

Under ./results/ will be the release tree files: .discinfo, .treeinfo, everything that
goes onto the boot.iso, the pxeboot directory, and the boot.iso under ./images/.
By default lorax will search for the first package that provides system-release
that doesn’t start with generic- and will install it. It then selects a
corresponding logo package by using the first part of the system-release package and
appending -logos to it. eg. fedora-release and fedora-logos.

If --skip-branding is passed to lorax it will skip selecting the
system-release and logos packages and leave it up to the user to pass any
branding packages explicitly.
Note that this does not prevent something else in the dependency tree from
causing these packages to be included. Using --excludepkgs may help if they
are unexpectedly included.
If you are using lorax with mock v1.3.4 or later you will need to pass
--old-chroot to mock. Mock now defaults to using systemd-nspawn which cannot
create the needed loop device nodes. Passing --old-chroot will use the old
system where /dev/loop* is setup for you.
Lorax uses dnf to install
packages into a temporary directory, sets up configuration files, and then
creates the squashfs filesystem from it. The templates are Mako templates that
support %if/%endif blocks, inline python with <%
%> tags, and variable substitution with ${}. The default templates are
shipped with lorax in /usr/share/lorax/templates.d/99-generic/ and use the
.tmpl extension.
The runtime-install.tmpl template lists packages to be installed using the
installpkg command. This template is fairly simple, installing common packages and
architecture specific packages. It must end with the run_pkg_transaction
command which tells dnf to download and install the packages.
The runtime-postinstall.tmpl template is where the system configuration
happens. The installer environment is similar to a normal running system, but
needs some tweaks to support the
installation. A number of template commands are used here:
The runtime-cleanup.tmpl template is used to remove files that aren’t strictly needed
by the installation environment. In addition to the remove template command it uses
the removepkg and removefrom commands, which can remove all of a package’s files, or
remove everything except a select few.
After runtime-*.tmpl templates have finished their work lorax creates an
empty ext4 filesystem, copies the remaining files to it, and makes a squashfs
filesystem of it. This file is the / of the boot.iso’s installer environment
and is what is in the LiveOS/squashfs.img file on the iso.
The iso creation is handled by another set of templates. The one used depends
on the architecture that the iso is being created for. They are also stored in
the templates.d/99-generic/ directory and handle installing the bootloader
configuration template files, configuration variable substitution, and treeinfo
metadata (via the treeinfo
template command). Kernel and initrd are copied from the installroot to their
final locations and then mkisofs is run to create the boot.iso.
The default set of templates and configuration files from the lorax-generic-templates package
are shipped in the /usr/share/lorax/templates.d/99-generic/ directory. You can
make a copy of them, place them into another directory, and pass that
directory with --sharedir to lorax.
mkksiso is a tool for creating kickstart boot isos. In its simplest form
you can add a kickstart to a boot.iso and the kickstart will be executed when
the iso is booted. If the original iso was created with EFI and Mac support the
kickstart boot.iso will include this support as well.

mkksiso needs to be run as root, because it depends on mounting the original iso
and you need to be root to be able to do that.
Create a kickstart like you normally would, kickstart documentation can be
found here, including the
url and repo commands. If you are creating a DVD and only need the
content on the DVD you can use the cdrom command to install without a
network connection. Then run mkksiso like this:
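A sketch of the invocation (paths are placeholders; depending on the mkksiso version the kickstart may be passed positionally as shown or via a flag):

```shell
mkksiso /PATH/TO/flat-kickstart.ks /PATH/TO/boot.iso /PATH/TO/ks-boot.iso
```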
This will create a new iso with the kickstart in the root directory, and the
kernel cmdline will have inst.ks=... added to it so that it will be
executed when the iso is booted (be careful not to boot it on a system you don’t
want to wipe out! There will be no prompting).

By default the volume id of the iso is preserved. You can set a custom volid
by passing -V and the string to set. The kernel cmdline will be changed, and
the iso will have the custom volume id.
eg.:
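A sketch with a custom volume id (paths and the label are placeholders):

```shell
mkksiso -V "Custom-Volid" /PATH/TO/flat-kickstart.ks /PATH/TO/boot.iso /PATH/TO/ks-boot.iso
```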
You can add repo directories to the iso using --add /PATH/TO/REPO/, make
sure it contains the repodata directory by running createrepo_c on it
first. In the kickstart you can refer to the directories (and files) on the iso
using file:///run/install/repo/DIRECTORY/. You can then use these repos in
the kickstart like this:
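For example, with an added directory (the repo name and directory are illustrative):

```text
repo --name=custom --baseurl=file:///run/install/repo/custom-repo/
```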
You can use the kickstart liveimg command
to install a pre-generated disk image or tar to the system the iso is booting
on.

Create a disk image or tar with osbuild-composer or livemedia-creator,
make sure the image includes tools expected by anaconda, as well as the
kernel and bootloader support. In osbuild-composer use the tar image
type and make sure to include the kernel, grub2, and grub2-tools
packages. If you plan to install it to a UEFI machine make sure to include
grub2-efi and efibootmgr in the blueprint.

Add the root.tar.xz file to the iso using --add /PATH/TO/ROOT.TAR.XZ,
and in the kickstart reference it with the liveimg command like this:
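A minimal sketch of the kickstart line (the file:// path assumes the tar was added to the iso root as described above):

```text
liveimg --url=file:///run/install/repo/root.tar.xz
```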
mkksiso first examines the system to make sure the tools it needs are installed;
it will work with either xorrisofs or mkisofs installed. It mounts the source iso,
and copies the directories that need to be modified to a temporary directory.

It then modifies the boot configuration files to include the inst.ks command,
and checks to see if the original iso supports EFI. If it does, it regenerates the
EFI boot images with the new configuration, and then runs the available iso creation
tool to add the new files and directories to the new iso. If the architecture is
x86_64 it will also make sure the iso can be booted as an iso or from a USB
stick (hybrid iso).

The last step is to update the iso checksums so that booting with test enabled
will pass.
Lorax now supports creation of product.img and updates.img as part of the build
process. This is implemented using the installimg template command, which
creates the img file from a directory of files;
you would put your custom class here:

If the packages containing the product/updates files are not included as part
of normal dependencies you can add specific packages with the --installpkgs
command or the installpkgs parameter of pylorax.treebuilder.RuntimeBuilder.
The new output type must add a kickstart template to ./share/composer/ where the
name of the kickstart (without the trailing .ks) matches the entry in compose_args.

The kickstart template should add the
packages required by the output type, and it should not have the trailing %end because the
package NEVRAs will be appended to it at build time.

compose_args should have a name matching the kickstart, and it should set the novirt_install
parameters needed to generate the desired output. Other types should be set to False.
kernel_append (str) – The arguments to append to the --append section. It
is parsed correctly, and re-assembled for inclusion into the final kickstart.

Returns the settings to pass to novirt_install for the compose type

Parameters

compose_type (str) – The type of compose to create, from compose_types()

This will return a dict of options that match the ArgumentParser options for livemedia-creator.
These are the ones that define the type of output, its filename, etc.
Other options will be filled in by make_compose()
When no services have been selected we don’t need to add anything to the kickstart,
so return an empty string. Otherwise return “services” which will be updated with
the settings.
Return a kickstart line with the correct args.

:param r: DNF repository information
:type r: dnf.Repo

Set url to “baseurl” if it is a repo, leave it as “url” for the installation url.
settings (dict) – A dict with the list of services to enable and disable

settings (dict) – A dict with timezone and/or ntpservers list
The DNF Repo.dump() function does not produce a string that can be used as a dnf .repo file;
it outputs baseurl and gpgkey as python lists which DNF cannot read. So do this manually, using
only the attributes we care about.
Estimating actual requirements is difficult without the actual file sizes, which
dnf doesn’t provide access to. So use the file count and block size to estimate
a minimum size for each package.
This returns a dict with 2 lists. “new” is the list of uuids that are waiting to be built,
and “run” has the uuids that are being built (currently limited to 1 at a time).

Raises RuntimeError if there was a problem (eg. no log file available)

This will attempt to start on a line boundary, and may return less than size kbytes.
metadata (bool) – Set to true to include all the metadata needed to reproduce the build

image (bool) – Set to true to include the output image

logs (bool) – Set to true to include the logs from the build

Returns

It streams the selected data to the caller by returning the Popen stdout from the tar process.
This is a subclass of dict that enforces the constructor arguments
and adds a .filename property to return the recipe’s filename,
and a .toml() function to return the recipe as a TOML string.
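A minimal sketch of such a subclass follows; the constructor arguments and the naive TOML serialization are illustrative (the real recipe class carries more fields and uses a proper TOML writer):

```python
class Recipe(dict):
    """dict subclass enforcing its constructor arguments and adding
    .filename and .toml() helpers, per the description above."""

    def __init__(self, name, description, version):
        super().__init__(name=name, description=description, version=version)

    @property
    def filename(self):
        # e.g. "http server" becomes "http-server.toml"
        return self["name"].replace(" ", "-") + ".toml"

    def toml(self):
        # Naive string-valued serialization, enough for the sketch.
        return "".join('%s = "%s"\n' % (k, v) for k, v in self.items())
```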
Revisions start at 1 and increment for each new commit that is tagged.
If the commit has already been tagged it will return false.
All of the blueprints routes support the optional branch argument. If it is not
used then the API will use the master branch for blueprints. If you want to create
a new branch use the new or workspace routes with ?branch=<branch-name> to
store the new blueprint on the new branch.
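Building a route with the optional branch argument can be sketched with the standard library; the route string matches the docs above, while the helper name is made up:

```python
from urllib.parse import urlencode

def blueprint_route(route, branch=None):
    """Append the optional ?branch= argument described above.
    When branch is omitted the API falls back to the master branch."""
    if branch is None:
        return route
    return route + "?" + urlencode({"branch": branch})
```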
Return the commits to a blueprint. By default it returns the first 20 commits; this
can be changed by passing offset and/or limit. The commit
hash can be passed to /api/v0/blueprints/diff/ to retrieve the exact changes.
Create a new blueprint, or update an existing blueprint. This supports both JSON and TOML
for the blueprint format. The blueprint should be in the body of the request with the
Content-Type header set to either application/json or text/x-toml.
The response will be a status response with status set to true, or an
error response with it set to false and an error message included.
Tag a blueprint as a new release. This uses git tags with a special format.
Only the most recent blueprint commit
can be tagged. Revisions start at 1 and increment for each new tag.
The response will be a status response with status set to true, or an
error response with it set to false and an error message included.
Return the differences between two commits, or the workspace. The from commit hash
can be a commit hash or NEWEST; the to commit hash can be a commit hash, NEWEST, or WORKSPACE.
The contents for these will be the old/new values for them.
Return a JSON representation of the blueprint with the package and module versions set
to the exact versions chosen by depsolving the blueprint.
Depsolve the blueprint using yum, return the blueprint used, and the NEVRAs of the packages
and packages in modules, and any error will be in errors.
List all of the available projects. By default this returns the first 20 items,
but this can be changed by setting the offset and limit arguments.
Return information about the comma-separated list of projects. It includes the description
of the package along with the list of available builds.
Return information about the comma-separated list of source names. Or all of the
sources if ‘*’ is passed. Sources added by the user
will have it set to false. System sources cannot be changed or deleted.
Start a compose. The content type should be ‘application/json’ and the body of the POST
build and add it to the queue. It returns the build uuid and a status if it succeeds.
Returns a .tar of the metadata, logs, and output image of the build. This
is already in compressed form so the returned tar is not compressed.
The mime type is set to ‘application/x-tar’ and the filename is set to
UUID.tar
Returns the output image from the build. The filename is set to the filename
from the build with the UUID as a prefix. eg. UUID-root.tar.xz or UUID-boot.iso.
A minimal DNF object suitable for passing to RuntimeBuilder
lmc uses RuntimeBuilder to run the arch specific iso creation
templates, so the installroot config value is the important part of
this. Everything else should be a nop.
fsck.ext4 is run on the rootfs_image to make sure there are no errors and to zero
out any deleted blocks to make it compress better. If this fails for any reason
it will return None and log the error.
Take disk_img and put it into LiveOS/rootfs.img and squashfs this
tree. If this fails for any reason
it will return False and log the error.
proxy (string) – http proxy to use when fetching packages
releasever (string) – Release version to pass to dnf
cachedir (string) – Directory to use for caching packages
noverifyssl (bool) – Set to True to ignore the CA of ssl certs. eg. use self-signed ssl for https repos.
See the cmdline --help for livemedia-creator for the possible options.
If cachedir is None a dnf.cache directory is created inside tmpdir
Set an environment variable to be used by child processes.
This method does not modify os.environ for the running process, which
is not thread-safe. If setenv has already been called for a particular
variable name, the old value is overwritten.
Start an external program and return the Popen object.
The root and reset_handlers arguments are handled by passing a
preexec_fn argument to subprocess.Popen, but an additional preexec_fn
can still be specified and will be run last.
Make a compressed archive of the given rootdir or file.
command is a list of the archiver commands to run
compression should be “xz”, “gzip”, “lzma”, “bzip2”, or None.
compressargs will be used on the compression commandline.
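Assembling the archiver and compressor command lines can be sketched as below; the caller would connect the two commands with subprocess pipes. This is a simplified sketch, not the actual lorax helper:

```python
def archive_cmds(rootdir, compression="xz", compressargs=None):
    """Build the tar command and the compression command described above.

    compression may be "xz", "gzip", "lzma", "bzip2", or None; with None
    the second element is None and no compression step is run.
    """
    tar_cmd = ["tar", "-cf", "-", "-C", rootdir, "."]
    if compression is None:
        return tar_cmd, None
    # compressargs are inserted on the compression commandline; -c keeps
    # the compressor writing to stdout.
    return tar_cmd, [compression] + list(compressargs or []) + ["-c"]
```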
Copy a tree of files using cp -a, thus preserving modes, timestamps,
links, acls, sparse files, xattrs, selinux contexts, etc.
If preserve is False, uses cp -R (useful for modeless filesystems)
raises CalledProcessError if copy fails.
Attach a devicemapper device to the given device, with the given size.
If name is None, a random name will be chosen. Returns the device name.
raises CalledProcessError if dmsetup fails.
Copy each of the items listed in grafts into dest.
If the key ends with ‘/’ it’s assumed to be a directory which should be
created, otherwise just the leading directories will be created.
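The graft copy rule can be sketched as follows. This simplified version only creates directory keys rather than copying into them, so treat it as an illustration of the trailing-'/' convention, not the real implementation:

```python
import os
import shutil

def copy_grafts(grafts, dest):
    """Copy each graft item into dest, per the rule described above:
    a key ending in '/' is a directory to create; otherwise only the
    leading directories are created before the file is copied."""
    for imgpath, src in grafts.items():
        target = os.path.join(dest, imgpath.lstrip("/"))
        if imgpath.endswith("/"):
            os.makedirs(target, exist_ok=True)
            continue
        os.makedirs(os.path.dirname(target), exist_ok=True)
        shutil.copy2(src, target)
```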
Make sure the loop device is attached to the outfile.
It seems that on rare occasions losetup can return before the /dev/loopX is
ready for use, causing problems with mkfs. This tries to make sure that the
loop device really is associated with the backing file before continuing.
Raise RuntimeError if it isn’t setup after 5 tries.
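The retry loop can be sketched with an injectable check so it can run without losetup; `get_loop_name` stands in for querying losetup for the /dev/loopX associated with the backing file:

```python
import time

def wait_for_loop(get_loop_name, outfile, tries=5, delay=0.1):
    """Retry until the loop device really is associated with outfile.
    Raises RuntimeError if it isn't set up after the given tries."""
    for _ in range(tries):
        name = get_loop_name(outfile)
        if name:
            return name
        time.sleep(delay)
    raise RuntimeError("loop device not setup for %s after %d tries" % (outfile, tries))
```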
Generic filesystem image creation function.
fstype should be a filesystem type - “mkfs.${fstype}” must exist.
graft should be a dict: {“some/path/in/image”: “local/file/or/dir”};
Will raise CalledProcessError if something goes wrong.
Mount the given device at the given mountpoint, using the given opts.
opts should be a comma-separated string of mount options.
If mnt is None, a temporary directory will be created and its path will be
returned.
raises CalledProcessError if mount fails.
Unmount the given mountpoint. If lazy is True, do a lazy umount (-l).
If the mount was a temporary dir created by mount, it will be deleted.
raises CalledProcessError if umount fails.
cancel_func (function) – Function that returns True to cancel build
tar_img (str) – For make_tar_disk, the path to the final tarball to be created
metadata file are set correctly. All other values are left untouched.
image and then optionally, based on the opts passed, creates a tarfile.
Request installation of all packages matching the given globs.
Note that this is just a request - nothing is actually installed
until the ‘run_pkg_transaction’ command is given.
Append STRING (followed by a newline character) to FILE.
Python character escape sequences (‘\n’, ‘\t’, etc.) will be
converted to the appropriate characters.
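The escape conversion and append can be sketched with the standard library; the function names are illustrative, and the `unicode_escape` codec used here is only safe for ASCII input, which covers the escape sequences described above:

```python
def decode_escapes(s):
    # Turn literal two-character sequences like "\n" and "\t" into the
    # characters they name.
    return s.encode("utf-8").decode("unicode_escape")

def append_line(path, text):
    # Append TEXT (escapes decoded) plus a newline character to PATH.
    with open(path, "a") as f:
        f.write(decode_escapes(text) + "\n")
```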
Copy the given file (or files, if a glob is used) from the input
tree to the given destination in the output tree.
install /usr/share/myconfig/grub.conf.in /boot/grub.conf
Install the kernel from SRC in the input tree to DEST in the output
tree, and then add an item to the treeinfo data store, in the named
treeinfo SECTION kernel DEST
Request installation of all packages matching the given globs.
Note that this is just a request - nothing is actually installed
until the ‘run_pkg_transaction’ command is given.
Handle monitoring and saving the logfiles from the virtual install
Incoming data is written to self.server.log_path and each line is checked
for patterns that would indicate that the installation failed.
self.server.log_error is set True when this happens.
Rebuild all the initrds in the tree. If backup is specified, each
initrd will be renamed with backup as a suffix before rebuilding.
If backup is empty, the existing initrd files will be overwritten.
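The backup-rename step can be sketched like this; the filename pattern and helper name are assumptions for the example, not lorax's actual code:

```python
import glob
import os

def backup_initrds(bootdir, backup=""):
    """Rename each initrd with `backup` as a suffix before rebuilding.
    With an empty backup the existing files are simply overwritten
    later, so nothing is renamed here."""
    if not backup:
        return []
    renamed = []
    for initrd in glob.glob(os.path.join(bootdir, "initramfs-*.img")):
        os.rename(initrd, initrd + backup)
        renamed.append(initrd + backup)
    return renamed
```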
name of the kernel.