diff --git a/docs/composer-cli.rst b/docs/composer-cli.rst deleted file mode 100644 index 9d62f6a6..00000000 --- a/docs/composer-cli.rst +++ /dev/null @@ -1,542 +0,0 @@ -composer-cli -============ - -:Authors: - Brian C. Lane - -``composer-cli`` is an interactive tool for use with a WELDR API server, -managing blueprints, exploring available packages, and building new images. As -of Fedora 34, ``osbuild-composer`` is the recommended -server. - -It requires the server to be installed on the local system, and the user -running it needs to be a member of the ``weldr`` group. - -composer-cli cmdline arguments ------------------------------- - -.. argparse:: - :ref: composer.cli.cmdline.composer_cli_parser - :prog: composer-cli - -Edit a Blueprint ---------------- - -Start out by listing the available blueprints using ``composer-cli blueprints -list``, pick one, and save it to the local directory by running ``composer-cli -blueprints save http-server``. - -Edit the file (it will be saved with a ``.toml`` extension) to change the -description or to add a package or module to it. Send it back to the server by -running ``composer-cli blueprints push http-server.toml``. You can verify that it was -saved by viewing the changelog - ``composer-cli blueprints changes http-server``. - -See the `Example Blueprint`_ for an example. - -Build an image ---------------- - -Build a ``qcow2`` disk image from this blueprint by running ``composer-cli -compose start http-server qcow2``. It will print a UUID that you can use to -keep track of the build. You can also cancel the build if needed. - -The available image types are displayed by ``composer-cli compose types``. -Currently this consists of: alibaba, ami, ext4-filesystem, google, hyper-v, -live-iso, openstack, partitioned-disk, qcow2, tar, vhd, vmdk - -You can optionally start an upload of the finished image, see `Image Uploads`_ for -more information. 
- - -Monitor the build status ------------------------ - -Monitor it using ``composer-cli compose status``, which will show the status of -all the builds on the system. You can view the end of the anaconda build logs -once it is in the ``RUNNING`` state using ``composer-cli compose log UUID``, -where UUID is the UUID returned by the start command. - -Once the build is in the ``FINISHED`` state you can download the image. - -Download the image ------------------- - -Downloading the final image is done with ``composer-cli compose image UUID``, which will -save the qcow2 image as ``UUID-disk.qcow2``. You can then use it to boot a VM like this:: - - qemu-kvm --name test-image -m 1024 -hda ./UUID-disk.qcow2 - - -Image Uploads ------------- - -``composer-cli`` can upload the images to a number of services, including AWS, -OpenStack, and vSphere. The upload can be started when the build is finished -by using ``composer-cli compose start ...``, or an existing image can be uploaded -with ``composer-cli upload start ...``. In order to access the service you need -to pass authentication details to composer-cli using a TOML file, or reference -a previously saved profile. - -.. note:: - With ``osbuild-composer`` you can only specify upload targets during - the compose process. - - -Providers --------- - -Providers are the service providers with Ansible playbook support under -``/usr/share/lorax/lifted/providers/``. You will need to gather some provider-specific -information in order to authenticate with them. You can view the -required fields using ``composer-cli providers template <PROVIDER>``, eg. for AWS -you would run:: - - composer-cli providers template aws - -The output looks like this:: - - provider = "aws" - - [settings] - aws_access_key = "AWS Access Key" - aws_bucket = "AWS Bucket" - aws_region = "AWS Region" - aws_secret_key = "AWS Secret Key" - -Save this into an ``aws-credentials.toml`` file and use it when running ``start``. 
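As an illustration of what a complete settings file must carry, the check below validates a parsed provider settings dict against the AWS template fields shown above. This is a hypothetical helper sketched for this document, not part of ``composer-cli`` itself:

```python
# Hypothetical sketch (not composer-cli code): verify that a provider settings
# dict, e.g. parsed from aws-credentials.toml, has the fields from the template.
REQUIRED_AWS_SETTINGS = {"aws_access_key", "aws_bucket", "aws_region", "aws_secret_key"}

def validate_provider_settings(profile: dict) -> list:
    """Return a list of problems; an empty list means the settings look complete."""
    problems = []
    if profile.get("provider") != "aws":
        problems.append("provider must be 'aws' for this template")
    missing = REQUIRED_AWS_SETTINGS - set(profile.get("settings", {}))
    problems.extend("missing settings field: %s" % field for field in sorted(missing))
    return problems

example = {
    "provider": "aws",
    "settings": {"aws_access_key": "KEY", "aws_bucket": "BUCKET",
                 "aws_region": "us-east-1", "aws_secret_key": "SECRET"},
}
print(validate_provider_settings(example))  # prints []
```

A settings file missing any of the four fields would produce one problem entry per missing field, which mirrors the errors the server reports when an incomplete TOML file is pushed.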
- -AWS -^^^ - -The access key and secret key can be created by going to the -``IAM->Users->Security Credentials`` section and creating a new access key. The -secret key will only be shown when it is first created, so make sure to record -it in a secure place. The region should be the region that you want to use the -AMI in, and the bucket can be an existing bucket, or a new one, following the -normal AWS bucket naming rules. It will be created if it doesn't already exist. - -When uploading the image it is first uploaded to the S3 bucket, and then -converted to an AMI. If the conversion is successful the S3 object will be -deleted. If it fails, retrying after correcting the problem will re-use the -object if you have not deleted it in the meantime, speeding up the process. - - -Profiles -------- - -Profiles store the authentication settings associated with a specific provider. -Providers can have multiple profiles, as long as their names are unique. For -example, you may have one profile for testing and another for production -uploads. - -Profiles are created by pushing the provider settings template to the server using -``composer-cli providers push <PROFILE.TOML>``, where ``PROFILE.TOML`` is the same as the -provider template, but with the addition of a ``profile`` field. For example, an AWS -profile named ``test-uploads`` would look like this:: - - provider = "aws" - profile = "test-uploads" - - [settings] - aws_access_key = "AWS Access Key" - aws_bucket = "AWS Bucket" - aws_region = "AWS Region" - aws_secret_key = "AWS Secret Key" - -You can view the profile by using ``composer-cli providers show aws test-uploads``. 
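The profile file above is just the provider template with one extra ``profile`` line. Generating it programmatically is straightforward; the renderer below is a sketch for illustration (a hypothetical helper, not part of ``composer-cli``), using naive string formatting rather than a real TOML serializer:

```python
# Hypothetical sketch: render a provider profile TOML from its parts.
# Values are assumed to be plain strings; a real TOML writer should be
# used if values may need escaping.
def render_profile_toml(provider: str, profile: str, settings: dict) -> str:
    lines = ['provider = "%s"' % provider,
             'profile = "%s"' % profile,
             "",
             "[settings]"]
    lines += ['%s = "%s"' % (key, value) for key, value in sorted(settings.items())]
    return "\n".join(lines) + "\n"

toml_text = render_profile_toml("aws", "test-uploads",
                                {"aws_access_key": "KEY", "aws_bucket": "BUCKET",
                                 "aws_region": "us-east-1", "aws_secret_key": "SECRET"})
print(toml_text)
```

Writing the result to ``test-uploads.toml`` would give you a file ready to push to the server.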
- - -Build an image and upload results --------------------------------- - -If you have a profile named ``test-uploads``:: - - composer-cli compose start example-http-server ami "http image" aws test-uploads - -Or if you have the settings stored in a TOML file:: - - composer-cli compose start example-http-server ami "http image" aws-settings.toml - -It will return the UUID of the image build, and the UUID of the upload. Once -the build has finished successfully it will start the upload process, which you -can monitor with ``composer-cli upload info <UPLOAD-UUID>``. - -You can also view the upload logs from the Ansible playbook with:: - - composer-cli upload log <UPLOAD-UUID> - -The type of the image must match the type supported by the provider. - - -Upload an existing image ------------------------ - -You can upload previously built images, as long as they are in the ``FINISHED`` state, using ``composer-cli upload start ...``. If you have a profile named ``test-uploads``:: - - composer-cli upload start "http-image" aws test-uploads - -Or if you have the settings stored in a TOML file:: - - composer-cli upload start "http-image" aws-settings.toml - -This will output the UUID of the upload, which can then be used to monitor the status in the same way -described above. - - -Debugging --------- - -There are a couple of arguments that can be helpful when debugging problems. -These are only meant for debugging and should not be used to script access to -the API. If you need to do that you can communicate with it directly in the -language of your choice. - -``--json`` will return the server's response as nicely formatted JSON -instead of printing what the command would usually print. - -``--test=1`` will cause a compose start to start creating an image, and then -end with a failed state. - -``--test=2`` will cause a compose to start and then end with a finished state, -without actually composing anything. 
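The kind of formatting that ``--json`` applies can be sketched with the standard library: take the raw response body and re-serialize it with indentation. This is an illustration of the idea, not the actual ``composer-cli`` implementation:

```python
import json

def pretty_json(raw: str) -> str:
    """Re-serialize a raw JSON response with indentation and stable key order,
    roughly what --json does to the server's reply before printing it."""
    return json.dumps(json.loads(raw), indent=4, sort_keys=True)

# A made-up response body for illustration only.
raw = '{"status": true, "uuid": "123e4567"}'
print(pretty_json(raw))
```

The round trip through ``json.loads``/``json.dumps`` leaves the data unchanged; only the presentation differs, which is why ``--json`` output is still safe to feed to other JSON tooling.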
- - -Blueprint Reference ------------------- - -Blueprints are simple text files in TOML format that describe -which packages, and what versions, to install into the image. They can also define a limited set -of customizations to make to the final image. - -A basic blueprint looks like this:: - - name = "base" - description = "A base system with bash" - version = "0.0.1" - - [[packages]] - name = "bash" - version = "4.4.*" - -The ``name`` field is the name of the blueprint. It can contain spaces, but they will be converted to ``-`` -when it is written to disk. It should be short and descriptive. - -``description`` can be a longer description of the blueprint; it is only used for display purposes. - -``version`` is a semver-compatible version number. If -a new blueprint is uploaded with the same ``version`` the server will -automatically bump the PATCH level of the ``version``. If the ``version`` -doesn't match it will be used as is. eg. Uploading a blueprint with ``version`` -set to ``0.1.0`` when the existing blueprint ``version`` is ``0.0.1`` will -result in the new blueprint being stored as ``version 0.1.0``. - -[[packages]] and [[modules]] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -These entries describe the package names and matching version glob to be installed into the image. - -The names must match the package names exactly, and the versions can be an exact match -or a filesystem-like glob of the version using ``*`` wildcards and ``?`` -character matching. - -.. note:: - Currently there are no differences between ``packages`` and ``modules`` - in ``osbuild-composer``. Both are treated like an rpm package dependency. - -For example, to install ``tmux-2.9a`` and ``openssh-server-8.*``, you would add -this to your blueprint:: - - [[packages]] - name = "tmux" - version = "2.9a" - - [[packages]] - name = "openssh-server" - version = "8.*" - - - -[[groups]] ^^^^^^^^^^ - -The ``groups`` entries describe a group of packages to be installed into the image. 
Package groups are -defined in the repository metadata. Each group has a descriptive name used primarily for display -in user interfaces and an ID more commonly used in kickstart files. Here, the ID is the expected -way of listing a group. - -Groups have three different ways of categorizing their packages: mandatory, default, and optional. -For purposes of blueprints, mandatory and default packages will be installed. There is no mechanism -for selecting optional packages. - -For example, if you want to install the ``anaconda-tools`` group you would add this to your -blueprint:: - - [[groups]] - name="anaconda-tools" - -``groups`` is a TOML list, so each group needs to be listed separately, like ``packages`` but with -no version number. - - -Customizations -^^^^^^^^^^^^^^ - -The ``[customizations]`` section can be used to configure the hostname of the final image. eg.:: - - [customizations] - hostname = "baseimage" - -This is optional and may be left out to use the defaults. - - -[customizations.kernel] -*********************** - -This allows you to append arguments to the bootloader's kernel commandline. This will not have any -effect on ``tar`` or ``ext4-filesystem`` images since they do not include a bootloader. - -For example:: - - [customizations.kernel] - append = "nosmt=force" - - -[[customizations.sshkey]] -************************* - -Set an existing user's ssh key in the final image:: - - [[customizations.sshkey]] - user = "root" - key = "PUBLIC SSH KEY" - -The key will be added to the user's authorized_keys file. - -.. warning:: - - ``key`` expects the entire content of ``~/.ssh/id_rsa.pub`` - - -[[customizations.user]] -*********************** - -Add a user to the image, and/or set their ssh key. -All fields for this section are optional except for the ``name``, here is a complete example:: - - [[customizations.user]] - name = "admin" - description = "Administrator account" - password = "$6$CHO2$3rN8eviE2t50lmVyBYihTgVRHcaecmeCk31L..." 
- key = "PUBLIC SSH KEY" - home = "/srv/widget/" - shell = "/usr/bin/bash" - groups = ["widget", "users", "wheel"] - uid = 1200 - gid = 1200 - -If the password starts with ``$6$``, ``$5$``, or ``$2b$`` it will be stored as -an encrypted password. Otherwise it will be treated as a plain text password. - -.. warning:: - - ``key`` expects the entire content of ``~/.ssh/id_rsa.pub`` - - -[[customizations.group]] -************************ - -Add a group to the image. ``name`` is required and ``gid`` is optional:: - - [[customizations.group]] - name = "widget" - gid = 1130 - - -[customizations.timezone] -************************* - -Customizing the timezone and the NTP servers to use for the system:: - - [customizations.timezone] - timezone = "US/Eastern" - ntpservers = ["0.north-america.pool.ntp.org", "1.north-america.pool.ntp.org"] - -The values supported by ``timezone`` can be listed by running ``timedatectl list-timezones``. - -If no timezone is setup the system will default to using `UTC`. The ntp servers are also -optional and will default to using the distribution defaults which are fine for most uses. - -In some image types there are already NTP servers setup, eg. Google cloud image, and they -cannot be overridden because they are required to boot in the selected environment. But the -timezone will be updated to the one selected in the blueprint. - - -[customizations.locale] -*********************** - -Customize the locale settings for the system:: - - [customizations.locale] - languages = ["en_US.UTF-8"] - keyboard = "us" - -The values supported by ``languages`` can be listed by running ``localectl list-locales`` from -the command line. - -The values supported by ``keyboard`` can be listed by running ``localectl list-keymaps`` from -the command line. - -Multiple languages can be added. The first one becomes the -primary, and the others are added as secondary. One or the other of ``languages`` -or ``keyboard`` must be included (or both) in the section. 
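The encrypted-versus-plain-text password rule described under ``[[customizations.user]]`` above comes down to a prefix check on the value. A minimal sketch of that rule (a hypothetical helper written for this document, not composer code):

```python
# The crypt(3)-style hash prefixes recognized for [[customizations.user]]
# passwords: $6$ (SHA-512), $5$ (SHA-256), $2b$ (bcrypt).
ENCRYPTED_PREFIXES = ("$6$", "$5$", "$2b$")

def is_encrypted_password(password: str) -> bool:
    """True if the value looks like an already-encrypted password hash,
    False if it would be treated as plain text."""
    return password.startswith(ENCRYPTED_PREFIXES)

print(is_encrypted_password("$6$CHO2$3rN8eviE2t50lmVyBYihTg"))  # prints True
print(is_encrypted_password("simple plain password"))           # prints False
```

Anything that does not start with one of these prefixes, including a hash produced by some other scheme, is stored as a plain-text password, so generating the hash up front (e.g. with ``openssl passwd -6``) is the safer choice.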
- - -[customizations.firewall] -************************* - -By default the firewall blocks all access except for services that enable their ports explicitly, -like ``sshd``. This command can be used to open other ports or services. Ports are configured using -the port:protocol format:: - - [customizations.firewall] - ports = ["22:tcp", "80:tcp", "imap:tcp", "53:tcp", "53:udp"] - -Numeric ports, or their names from ``/etc/services`` can be used in the ``ports`` enabled/disabled lists. - -The blueprint settings extend any existing settings in the image templates, so if ``sshd`` is -already enabled it will extend the list of ports with the ones listed by the blueprint. - -If the distribution uses ``firewalld`` you can specify services listed by ``firewall-cmd --get-services`` -in a ``customizations.firewall.services`` section:: - - [customizations.firewall.services] - enabled = ["ftp", "ntp", "dhcp"] - disabled = ["telnet"] - -Remember that the ``firewall.services`` are different from the names in ``/etc/services``. - -Both are optional, if they are not used leave them out or set them to an empty list ``[]``. If you -only want the default firewall setup this section can be omitted from the blueprint. - -NOTE: The ``Google`` and ``OpenStack`` templates explicitly disable the firewall for their environment. -This cannot be overridden by the blueprint. - -[customizations.services] -************************* - -This section can be used to control which services are enabled at boot time. -Some image types already have services enabled or disabled in order for the -image to work correctly, and cannot be overridden. eg. ``ami`` requires -``sshd``, ``chronyd``, and ``cloud-init``. Without them the image will not -boot. Blueprint services are added to, not replacing, the list already in the -templates, if any. - -The service names are systemd service units. You may specify any systemd unit -file accepted by ``systemctl enable`` eg. 
``cockpit.socket``:: - - [customizations.services] - enabled = ["sshd", "cockpit.socket", "httpd"] - disabled = ["postfix", "telnetd"] - - -[[repos.git]] ~~~~~~~~~~~~~ - -.. note:: - Currently ``osbuild-composer`` does not support ``repos.git`` - -The ``[[repos.git]]`` entries are used to add files from a git -repository to the created image. The repository is cloned, the specified ``ref`` is checked out, -and an rpm is created to install the files to a ``destination`` path. The rpm includes a summary -with the details of the repository and reference used to create it. The rpm is also included in the -image build metadata. - -To create an rpm named ``server-config-1.0-1.noarch.rpm`` you would add this to your blueprint:: - - [[repos.git]] - rpmname="server-config" - rpmversion="1.0" - rpmrelease="1" - summary="Setup files for server deployment" - repo="PATH OF GIT REPO TO CLONE" - ref="v1.0" - destination="/opt/server/" - -* rpmname: Name of the rpm to create, also used as the prefix name in the tar archive -* rpmversion: Version of the rpm, eg. "1.0.0" -* rpmrelease: Release of the rpm, eg. "1" -* summary: Summary string for the rpm -* repo: URL of the git repo to clone and create the archive from -* ref: Git reference to check out. eg. origin/branch-name, git tag, or git commit hash -* destination: Path where the root of the git repo will be installed by the rpm - -An rpm will be created with the contents of the git repository referenced, with the files -being installed under ``/opt/server/`` in this case. - -``ref`` can be any valid git reference for use with ``git archive``. eg. to use the head -of a branch set it to ``origin/branch-name``, a tag name, or a commit hash. - -Note that the repository is cloned in full each time a build is started, so pointing to a -repository with a large amount of history may take a while to clone and use a significant -amount of disk space. The clone is temporary and is removed once the rpm is created. 
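The rpm filename in the ``[[repos.git]]`` example above follows the usual ``NAME-VERSION-RELEASE.ARCH.rpm`` convention, assembled from the blueprint fields. A small sketch of that assembly (a hypothetical helper for illustration, not composer code):

```python
# Hypothetical sketch: derive the rpm filename a [[repos.git]] entry
# would produce from its rpmname/rpmversion/rpmrelease fields.
# repos.git rpms are architecture-independent, hence the noarch default.
def rpm_filename(rpmname: str, rpmversion: str, rpmrelease: str,
                 arch: str = "noarch") -> str:
    return "%s-%s-%s.%s.rpm" % (rpmname, rpmversion, rpmrelease, arch)

print(rpm_filename("server-config", "1.0", "1"))  # prints server-config-1.0-1.noarch.rpm
```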
- -Example Blueprint ------------------ - -This example blueprint will install the ``tmux``, ``git``, and ``vim-enhanced`` -packages. It will set the ``root`` ssh key, add the ``widget`` and ``admin`` -users as well as a ``students`` group:: - - name = "example-custom-base" - description = "A base system with customizations" - version = "0.0.1" - - [[packages]] - name = "tmux" - version = "*" - - [[packages]] - name = "git" - version = "*" - - [[packages]] - name = "vim-enhanced" - version = "*" - - [customizations] - hostname = "custombase" - - [[customizations.sshkey]] - user = "root" - key = "A SSH KEY FOR ROOT" - - [[customizations.user]] - name = "widget" - description = "Widget process user account" - home = "/srv/widget/" - shell = "/usr/bin/false" - groups = ["dialout", "users"] - - [[customizations.user]] - name = "admin" - description = "Widget admin account" - password = "$6$CHO2$3rN8eviE2t50lmVyBYihTgVRHcaecmeCk31LeOUleVK/R/aeWVHVZDi26zAH.o0ywBKH9Tc0/wm7sW/q39uyd1" - home = "/srv/widget/" - shell = "/usr/bin/bash" - groups = ["widget", "users", "students"] - uid = 1200 - - [[customizations.user]] - name = "plain" - password = "simple plain password" - - [[customizations.user]] - name = "bart" - key = "SSH KEY FOR BART" - groups = ["students"] - - [[customizations.group]] - name = "widget" - - [[customizations.group]] - name = "students" diff --git a/docs/composer.cli.rst b/docs/composer.cli.rst deleted file mode 100644 index 4dd2e240..00000000 --- a/docs/composer.cli.rst +++ /dev/null @@ -1,101 +0,0 @@ -composer.cli package -==================== - -Submodules ----------- - -composer.cli.blueprints module ------------------------------- - -.. automodule:: composer.cli.blueprints - :members: - :undoc-members: - :show-inheritance: - -composer.cli.cmdline module ---------------------------- - -.. automodule:: composer.cli.cmdline - :members: - :undoc-members: - :show-inheritance: - -composer.cli.compose module ---------------------------- - -.. 
automodule:: composer.cli.compose - :members: - :undoc-members: - :show-inheritance: - -composer.cli.help module ------------------------- - -.. automodule:: composer.cli.help - :members: - :undoc-members: - :show-inheritance: - -composer.cli.modules module ---------------------------- - -.. automodule:: composer.cli.modules - :members: - :undoc-members: - :show-inheritance: - -composer.cli.projects module ----------------------------- - -.. automodule:: composer.cli.projects - :members: - :undoc-members: - :show-inheritance: - -composer.cli.providers module ------------------------------ - -.. automodule:: composer.cli.providers - :members: - :undoc-members: - :show-inheritance: - -composer.cli.sources module ---------------------------- - -.. automodule:: composer.cli.sources - :members: - :undoc-members: - :show-inheritance: - -composer.cli.status module --------------------------- - -.. automodule:: composer.cli.status - :members: - :undoc-members: - :show-inheritance: - -composer.cli.upload module --------------------------- - -.. automodule:: composer.cli.upload - :members: - :undoc-members: - :show-inheritance: - -composer.cli.utilities module ------------------------------ - -.. automodule:: composer.cli.utilities - :members: - :undoc-members: - :show-inheritance: - -Module contents ---------------- - -.. automodule:: composer.cli - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/composer.rst b/docs/composer.rst deleted file mode 100644 index 1ff06326..00000000 --- a/docs/composer.rst +++ /dev/null @@ -1,37 +0,0 @@ -composer package -================ - -Subpackages ------------ - -.. toctree:: - :maxdepth: 4 - - composer.cli - -Submodules ----------- - -composer.http\_client module ----------------------------- - -.. automodule:: composer.http_client - :members: - :undoc-members: - :show-inheritance: - -composer.unix\_socket module ----------------------------- - -.. 
automodule:: composer.unix_socket - :members: - :undoc-members: - :show-inheritance: - -Module contents ---------------- - -.. automodule:: composer - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/conf.py b/docs/conf.py index ed68fdd2..a608bdcc 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -265,7 +265,6 @@ latex_documents = [ man_pages = [ ('lorax', 'lorax', u'Lorax Documentation', [u'Weldr Team'], 1), ('livemedia-creator', 'livemedia-creator', u'Live Media Creator Documentation', [u'Weldr Team'], 1), - ('composer-cli', 'composer-cli', u'Composer Cmdline Utility Documentation', [u'Weldr Team'], 1), ('mkksiso', 'mkksiso', u'Make Kickstart ISO Utility Documentation', [u'Weldr Team'], 1), ] diff --git a/docs/html/.buildinfo b/docs/html/.buildinfo index 6aa620a8..8b438470 100644 --- a/docs/html/.buildinfo +++ b/docs/html/.buildinfo @@ -1,4 +1,4 @@ # Sphinx build info version 1 # This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done. 
-config: f4097b5df8f1eafb78a5c70d87386457 +config: 453cff3f00978a52ccb2e22f94955d00 tags: 645f666f9bcd5a90fca523b33c5a78b7 diff --git a/docs/html/.doctrees/composer-cli.doctree b/docs/html/.doctrees/composer-cli.doctree deleted file mode 100644 index c17e7548..00000000 Binary files a/docs/html/.doctrees/composer-cli.doctree and /dev/null differ diff --git a/docs/html/.doctrees/composer.cli.doctree b/docs/html/.doctrees/composer.cli.doctree deleted file mode 100644 index 48d85225..00000000 Binary files a/docs/html/.doctrees/composer.cli.doctree and /dev/null differ diff --git a/docs/html/.doctrees/composer.doctree b/docs/html/.doctrees/composer.doctree deleted file mode 100644 index 04cba15a..00000000 Binary files a/docs/html/.doctrees/composer.doctree and /dev/null differ diff --git a/docs/html/.doctrees/environment.pickle b/docs/html/.doctrees/environment.pickle index 50f9968f..69e46744 100644 Binary files a/docs/html/.doctrees/environment.pickle and b/docs/html/.doctrees/environment.pickle differ diff --git a/docs/html/.doctrees/index.doctree b/docs/html/.doctrees/index.doctree index 3b2253c4..98383396 100644 Binary files a/docs/html/.doctrees/index.doctree and b/docs/html/.doctrees/index.doctree differ diff --git a/docs/html/.doctrees/lifted.doctree b/docs/html/.doctrees/lifted.doctree deleted file mode 100644 index f7e18c5a..00000000 Binary files a/docs/html/.doctrees/lifted.doctree and /dev/null differ diff --git a/docs/html/.doctrees/livemedia-creator.doctree b/docs/html/.doctrees/livemedia-creator.doctree index 098131e6..724a2963 100644 Binary files a/docs/html/.doctrees/livemedia-creator.doctree and b/docs/html/.doctrees/livemedia-creator.doctree differ diff --git a/docs/html/.doctrees/lorax-composer.doctree b/docs/html/.doctrees/lorax-composer.doctree deleted file mode 100644 index 7d200317..00000000 Binary files a/docs/html/.doctrees/lorax-composer.doctree and /dev/null differ diff --git a/docs/html/.doctrees/lorax.doctree 
b/docs/html/.doctrees/lorax.doctree index 7ca6a136..116094f5 100644 Binary files a/docs/html/.doctrees/lorax.doctree and b/docs/html/.doctrees/lorax.doctree differ diff --git a/docs/html/.doctrees/modules.doctree b/docs/html/.doctrees/modules.doctree index 058edbcf..9b333c09 100644 Binary files a/docs/html/.doctrees/modules.doctree and b/docs/html/.doctrees/modules.doctree differ diff --git a/docs/html/.doctrees/pylorax.api.doctree b/docs/html/.doctrees/pylorax.api.doctree deleted file mode 100644 index 2d2d5f94..00000000 Binary files a/docs/html/.doctrees/pylorax.api.doctree and /dev/null differ diff --git a/docs/html/.doctrees/pylorax.doctree b/docs/html/.doctrees/pylorax.doctree index 43247e7e..ebf30d76 100644 Binary files a/docs/html/.doctrees/pylorax.doctree and b/docs/html/.doctrees/pylorax.doctree differ diff --git a/docs/html/README b/docs/html/README deleted file mode 100644 index 615e8738..00000000 --- a/docs/html/README +++ /dev/null @@ -1,5 +0,0 @@ -To build the docs for this branch run: -make test-in-docker -make docs-in-docker - -If you already have a welder/lorax-composer:latest docker image you can skip running 'test-in-docker'. diff --git a/docs/html/_modules/composer/cli.html b/docs/html/_modules/composer/cli.html deleted file mode 100644 index 3d065957..00000000 --- a/docs/html/_modules/composer/cli.html +++ /dev/null @@ -1,258 +0,0 @@ - - - - - - - - - - - composer.cli — Lorax 35.0 documentation - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Source code for composer.cli

-#
-# composer-cli
-#
-# Copyright (C) 2018  Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-import logging
-log = logging.getLogger("composer-cli")
-
-from composer.cli.blueprints import blueprints_cmd
-from composer.cli.modules import modules_cmd
-from composer.cli.projects import projects_cmd
-from composer.cli.compose import compose_cmd
-from composer.cli.sources import sources_cmd
-from composer.cli.status import status_cmd
-from composer.cli.upload import upload_cmd
-from composer.cli.providers import providers_cmd
-
-command_map = {
-    "blueprints": blueprints_cmd,
-    "modules":    modules_cmd,
-    "projects":   projects_cmd,
-    "compose":    compose_cmd,
-    "sources":    sources_cmd,
-    "status":     status_cmd,
-    "upload":     upload_cmd,
-    "providers":  providers_cmd
-    }
-
-
-
[docs]def main(opts): - """ Main program execution - - :param opts: Cmdline arguments - :type opts: argparse.Namespace - """ - - # Making sure opts.args is not empty (thus, has a command and subcommand) - # is already handled in src/bin/composer-cli. - if opts.args[0] not in command_map: - log.error("Unknown command %s", opts.args[0]) - return 1 - else: - try: - return command_map[opts.args[0]](opts) - except Exception as e: - log.error(str(e)) - return 1
- - - - - - - - - - - - \ No newline at end of file diff --git a/docs/html/_modules/composer/cli/blueprints.html b/docs/html/_modules/composer/cli/blueprints.html deleted file mode 100644 index 7133109f..00000000 --- a/docs/html/_modules/composer/cli/blueprints.html +++ /dev/null @@ -1,782 +0,0 @@ - - - - - - - - - - - composer.cli.blueprints — Lorax 35.0 documentation - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Source code for composer.cli.blueprints

-#
-# Copyright (C) 2018  Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-import logging
-log = logging.getLogger("composer-cli")
-
-import os
-
-from composer import http_client as client
-from composer.cli.help import blueprints_help
-from composer.cli.utilities import argify, frozen_toml_filename, toml_filename, handle_api_result
-from composer.cli.utilities import packageNEVRA
-
-
[docs]def blueprints_cmd(opts): - """Process blueprints commands - - :param opts: Cmdline arguments - :type opts: argparse.Namespace - :returns: Value to return from sys.exit() - :rtype: int - - This dispatches the blueprints commands to a function - """ - cmd_map = { - "list": blueprints_list, - "show": blueprints_show, - "changes": blueprints_changes, - "diff": blueprints_diff, - "save": blueprints_save, - "delete": blueprints_delete, - "depsolve": blueprints_depsolve, - "push": blueprints_push, - "freeze": blueprints_freeze, - "tag": blueprints_tag, - "undo": blueprints_undo, - "workspace": blueprints_workspace - } - if opts.args[1] == "help" or opts.args[1] == "--help": - print(blueprints_help) - return 0 - elif opts.args[1] not in cmd_map: - log.error("Unknown blueprints command: %s", opts.args[1]) - return 1 - - return cmd_map[opts.args[1]](opts.socket, opts.api_version, opts.args[2:], opts.json)
- -
[docs]def blueprints_list(socket_path, api_version, args, show_json=False): - """Output the list of available blueprints - - :param socket_path: Path to the Unix socket to use for API communication - :type socket_path: str - :param api_version: Version of the API to talk to. eg. "0" - :type api_version: str - :param args: List of remaining arguments from the cmdline - :type args: list of str - :param show_json: Set to True to show the JSON output instead of the human readable output - :type show_json: bool - - blueprints list - """ - api_route = client.api_url(api_version, "/blueprints/list") - result = client.get_url_json_unlimited(socket_path, api_route) - (rc, exit_now) = handle_api_result(result, show_json) - if exit_now: - return rc - - # "list" should output a plain list of identifiers, one per line. - print("\n".join(result["blueprints"])) - - return rc
- -
[docs]def blueprints_show(socket_path, api_version, args, show_json=False): - """Show the blueprints, in TOML format - - :param socket_path: Path to the Unix socket to use for API communication - :type socket_path: str - :param api_version: Version of the API to talk to. eg. "0" - :type api_version: str - :param args: List of remaining arguments from the cmdline - :type args: list of str - :param show_json: Set to True to show the JSON output instead of the human readable output - :type show_json: bool - - blueprints show <blueprint,...> Display the blueprint in TOML format. - - Multiple blueprints will be separated by \n\n - """ - for blueprint in argify(args): - api_route = client.api_url(api_version, "/blueprints/info/%s?format=toml" % blueprint) - print(client.get_url_raw(socket_path, api_route) + "\n\n") - - return 0
- -
def blueprints_changes(socket_path, api_version, args, show_json=False):
    """Display the changes for each of the blueprints

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param api_version: Version of the API to talk to. eg. "0"
    :type api_version: str
    :param args: List of remaining arguments from the cmdline
    :type args: list of str
    :param show_json: Set to True to show the JSON output instead of the human readable output
    :type show_json: bool

    blueprints changes <blueprint,...>    Display the changes for each blueprint.
    """
    def changes_total_fn(data):
        """Return the maximum number of possible changes"""

        # Each blueprint can have a different total, return the largest one
        return max([c["total"] for c in data["blueprints"]])

    api_route = client.api_url(api_version, "/blueprints/changes/%s" % (",".join(argify(args))))
    result = client.get_url_json_unlimited(socket_path, api_route, total_fn=changes_total_fn)
    (rc, exit_now) = handle_api_result(result, show_json)
    if exit_now:
        return rc

    for blueprint in result["blueprints"]:
        print(blueprint["name"])
        for change in blueprint["changes"]:
            prettyCommitDetails(change)

    return rc

def prettyCommitDetails(change, indent=4):
    """Print the blueprint's change in a nice way

    :param change: The individual blueprint change dict
    :type change: dict
    :param indent: Number of spaces to indent
    :type indent: int
    """
    def revision():
        if change["revision"]:
            return " revision %d" % change["revision"]
        else:
            return ""

    print(" " * indent + change["timestamp"] + " " + change["commit"] + revision())
    print(" " * indent + change["message"] + "\n")

def blueprints_diff(socket_path, api_version, args, show_json=False):
    """Display the differences between 2 versions of a blueprint

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param api_version: Version of the API to talk to. eg. "0"
    :type api_version: str
    :param args: List of remaining arguments from the cmdline
    :type args: list of str
    :param show_json: Set to True to show the JSON output instead of the human readable output
    :type show_json: bool

    blueprints diff <blueprint-name>    Display the differences between 2 versions of a blueprint.
                    <from-commit>       Commit hash or NEWEST
                    <to-commit>         Commit hash, NEWEST, or WORKSPACE
    """
    if len(args) == 0:
        log.error("blueprints diff is missing the blueprint name, from commit, and to commit")
        return 1
    elif len(args) == 1:
        log.error("blueprints diff is missing the from commit, and the to commit")
        return 1
    elif len(args) == 2:
        log.error("blueprints diff is missing the to commit")
        return 1

    api_route = client.api_url(api_version, "/blueprints/diff/%s/%s/%s" % (args[0], args[1], args[2]))
    result = client.get_url_json(socket_path, api_route)
    (rc, exit_now) = handle_api_result(result, show_json)
    if exit_now:
        return rc

    for diff in result["diff"]:
        print(pretty_diff_entry(diff))

    return rc

def pretty_dict(d):
    """Return the dict as a human readable single line

    :param d: key/values
    :type d: dict
    :returns: String of the dict's keys and values
    :rtype: str

    key="str", key="str1,str2", ...
    """
    result = []
    for k in d:
        if isinstance(d[k], str):
            result.append('%s="%s"' % (k, d[k]))
        elif isinstance(d[k], list) and isinstance(d[k][0], str):
            result.append('%s="%s"' % (k, ", ".join(d[k])))
        elif isinstance(d[k], list) and isinstance(d[k][0], dict):
            result.append('%s="%s"' % (k, pretty_dict(d[k])))
    return " ".join(result)

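For illustration, here is a self-contained sketch of the single-line format ``pretty_dict`` produces. The function body is copied out of the module (string and string-list cases only), and the sample user entry is made up:

```python
# Standalone copy of pretty_dict() for demonstration only; the real
# function lives in composer.cli.blueprints.
def pretty_dict(d):
    result = []
    for k in d:
        if isinstance(d[k], str):
            result.append('%s="%s"' % (k, d[k]))
        elif isinstance(d[k], list) and isinstance(d[k][0], str):
            result.append('%s="%s"' % (k, ", ".join(d[k])))
    return " ".join(result)

# A hypothetical [[customizations.user]] entry from a blueprint
print(pretty_dict({"name": "admin", "groups": ["wheel", "docker"]}))
# -> name="admin" groups="wheel, docker"
```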
def dict_names(lst):
    """Return comma-separated list of the name/user field from each dict

    :param lst: list of dicts with a name or user field
    :type lst: list of dict
    :returns: String of the name or user field values
    :rtype: str

    root, norm
    """
    if "user" in lst[0]:
        field_name = "user"
    elif "name" in lst[0]:
        field_name = "name"
    else:
        # Use the first field in the sorted keys
        field_name = sorted(lst[0].keys())[0]

    return ", ".join(d[field_name] for d in lst)

def pretty_diff_entry(diff):
    """Generate a nice diff entry string

    :param diff: Difference entry dict
    :type diff: dict
    :returns: Nice string
    """
    if diff["old"] and diff["new"]:
        change = "Changed"
    elif diff["new"] and not diff["old"]:
        change = "Added"
    elif diff["old"] and not diff["new"]:
        change = "Removed"
    else:
        change = "Unknown"

    if diff["old"]:
        name = list(diff["old"].keys())[0]
    elif diff["new"]:
        name = list(diff["new"].keys())[0]
    else:
        name = "Unknown"

    def details(diff):
        if change == "Changed":
            if isinstance(diff["old"][name], str):
                if name == "Description" or " " in diff["old"][name]:
                    return '"%s" -> "%s"' % (diff["old"][name], diff["new"][name])
                else:
                    return "%s -> %s" % (diff["old"][name], diff["new"][name])
            elif name in ["Module", "Package"]:
                return "%s %s -> %s" % (diff["old"][name]["name"], diff["old"][name]["version"],
                                        diff["new"][name]["version"])
            elif isinstance(diff["old"][name], list):
                if isinstance(diff["old"][name][0], str):
                    return "%s -> %s" % (" ".join(diff["old"][name]), " ".join(diff["new"][name]))
                elif isinstance(diff["old"][name][0], dict):
                    # Lists of dicts are too long to display in detail, just show their names
                    return "%s -> %s" % (dict_names(diff["old"][name]), dict_names(diff["new"][name]))
            elif isinstance(diff["old"][name], dict):
                return "%s -> %s" % (pretty_dict(diff["old"][name]), pretty_dict(diff["new"][name]))
            else:
                return "Unknown"
        elif change == "Added":
            if name in ["Module", "Package"]:
                return "%s %s" % (diff["new"][name]["name"], diff["new"][name]["version"])
            elif name in ["Group"]:
                return diff["new"][name]["name"]
            elif isinstance(diff["new"][name], str):
                return diff["new"][name]
            elif isinstance(diff["new"][name], list):
                if isinstance(diff["new"][name][0], str):
                    return " ".join(diff["new"][name])
                elif isinstance(diff["new"][name][0], dict):
                    # Lists of dicts are too long to display in detail, just show their names
                    return dict_names(diff["new"][name])
            elif isinstance(diff["new"][name], dict):
                return pretty_dict(diff["new"][name])
            else:
                return "unknown/todo: %s" % type(diff["new"][name])
        elif change == "Removed":
            if name in ["Module", "Package"]:
                return "%s %s" % (diff["old"][name]["name"], diff["old"][name]["version"])
            elif name in ["Group"]:
                return diff["old"][name]["name"]
            elif isinstance(diff["old"][name], str):
                return diff["old"][name]
            elif isinstance(diff["old"][name], list):
                if isinstance(diff["old"][name][0], str):
                    return " ".join(diff["old"][name])
                elif isinstance(diff["old"][name][0], dict):
                    # Lists of dicts are too long to display in detail, just show their names
                    return dict_names(diff["old"][name])
            elif isinstance(diff["old"][name], dict):
                return pretty_dict(diff["old"][name])
            else:
                return "unknown/todo: %s" % type(diff["old"][name])

    return change + " " + name + " " + details(diff)

def blueprints_save(socket_path, api_version, args, show_json=False):
    """Save the blueprint to a TOML file

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param api_version: Version of the API to talk to. eg. "0"
    :type api_version: str
    :param args: List of remaining arguments from the cmdline
    :type args: list of str
    :param show_json: Set to True to show the JSON output instead of the human readable output
    :type show_json: bool

    blueprints save <blueprint,...>    Save the blueprint to a file, <blueprint-name>.toml
    """
    for blueprint in argify(args):
        api_route = client.api_url(api_version, "/blueprints/info/%s?format=toml" % blueprint)
        blueprint_toml = client.get_url_raw(socket_path, api_route)
        with open(toml_filename(blueprint), "w") as f:
            f.write(blueprint_toml)

    return 0

def blueprints_delete(socket_path, api_version, args, show_json=False):
    """Delete a blueprint from the server

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param api_version: Version of the API to talk to. eg. "0"
    :type api_version: str
    :param args: List of remaining arguments from the cmdline
    :type args: list of str
    :param show_json: Set to True to show the JSON output instead of the human readable output
    :type show_json: bool

    delete <blueprint>    Delete a blueprint from the server
    """
    api_route = client.api_url(api_version, "/blueprints/delete/%s" % args[0])
    result = client.delete_url_json(socket_path, api_route)

    return handle_api_result(result, show_json)[0]

def blueprints_depsolve(socket_path, api_version, args, show_json=False):
    """Display the packages needed to install the blueprint

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param api_version: Version of the API to talk to. eg. "0"
    :type api_version: str
    :param args: List of remaining arguments from the cmdline
    :type args: list of str
    :param show_json: Set to True to show the JSON output instead of the human readable output
    :type show_json: bool

    blueprints depsolve <blueprint,...>    Display the packages needed to install the blueprint.
    """
    api_route = client.api_url(api_version, "/blueprints/depsolve/%s" % (",".join(argify(args))))
    result = client.get_url_json(socket_path, api_route)
    (rc, exit_now) = handle_api_result(result, show_json)
    if exit_now:
        return rc

    for blueprint in result["blueprints"]:
        if blueprint["blueprint"].get("version", ""):
            print("blueprint: %s v%s" % (blueprint["blueprint"]["name"], blueprint["blueprint"]["version"]))
        else:
            print("blueprint: %s" % (blueprint["blueprint"]["name"]))
        for dep in blueprint["dependencies"]:
            print("    " + packageNEVRA(dep))

    return rc

def blueprints_push(socket_path, api_version, args, show_json=False):
    """Push a blueprint TOML file to the server, updating the blueprint

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param api_version: Version of the API to talk to. eg. "0"
    :type api_version: str
    :param args: List of remaining arguments from the cmdline
    :type args: list of str
    :param show_json: Set to True to show the JSON output instead of the human readable output
    :type show_json: bool

    push <blueprint>    Push a blueprint TOML file to the server.
    """
    api_route = client.api_url(api_version, "/blueprints/new")
    rval = 0
    for blueprint in argify(args):
        if not os.path.exists(blueprint):
            log.error("Missing blueprint file: %s", blueprint)
            continue
        with open(blueprint, "r") as f:
            blueprint_toml = f.read()

        result = client.post_url_toml(socket_path, api_route, blueprint_toml)
        if handle_api_result(result, show_json)[0]:
            rval = 1

    return rval

def blueprints_freeze(socket_path, api_version, args, show_json=False):
    """Handle the blueprints freeze commands

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param api_version: Version of the API to talk to. eg. "0"
    :type api_version: str
    :param args: List of remaining arguments from the cmdline
    :type args: list of str
    :param show_json: Set to True to show the JSON output instead of the human readable output
    :type show_json: bool

    blueprints freeze <blueprint,...>         Display the frozen blueprint's modules and packages.
    blueprints freeze show <blueprint,...>    Display the frozen blueprint in TOML format.
    blueprints freeze save <blueprint,...>    Save the frozen blueprint to a file, <blueprint-name>.frozen.toml.
    """
    if len(args) == 0:
        log.error("freeze is missing the blueprint name")
        return 1

    if args[0] == "show":
        return blueprints_freeze_show(socket_path, api_version, args[1:], show_json)
    elif args[0] == "save":
        return blueprints_freeze_save(socket_path, api_version, args[1:], show_json)

    api_route = client.api_url(api_version, "/blueprints/freeze/%s" % (",".join(argify(args))))
    result = client.get_url_json(socket_path, api_route)
    (rc, exit_now) = handle_api_result(result, show_json)
    if exit_now:
        return rc

    for entry in result["blueprints"]:
        blueprint = entry["blueprint"]
        if blueprint.get("version", ""):
            print("blueprint: %s v%s" % (blueprint["name"], blueprint["version"]))
        else:
            print("blueprint: %s" % (blueprint["name"]))

        for m in blueprint["modules"]:
            print("    %s-%s" % (m["name"], m["version"]))

        for p in blueprint["packages"]:
            print("    %s-%s" % (p["name"], p["version"]))

    return rc

def blueprints_freeze_show(socket_path, api_version, args, show_json=False):
    """Show the frozen blueprint in TOML format

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param api_version: Version of the API to talk to. eg. "0"
    :type api_version: str
    :param args: List of remaining arguments from the cmdline
    :type args: list of str
    :param show_json: Set to True to show the JSON output instead of the human readable output
    :type show_json: bool

    blueprints freeze show <blueprint,...>    Display the frozen blueprint in TOML format.
    """
    if len(args) == 0:
        log.error("freeze show is missing the blueprint name")
        return 1

    for blueprint in argify(args):
        api_route = client.api_url(api_version, "/blueprints/freeze/%s?format=toml" % blueprint)
        print(client.get_url_raw(socket_path, api_route))

    return 0

def blueprints_freeze_save(socket_path, api_version, args, show_json=False):
    """Save the frozen blueprint to a TOML file

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param api_version: Version of the API to talk to. eg. "0"
    :type api_version: str
    :param args: List of remaining arguments from the cmdline
    :type args: list of str
    :param show_json: Set to True to show the JSON output instead of the human readable output
    :type show_json: bool

    blueprints freeze save <blueprint,...>    Save the frozen blueprint to a file, <blueprint-name>.frozen.toml.
    """
    if len(args) == 0:
        log.error("freeze save is missing the blueprint name")
        return 1

    for blueprint in argify(args):
        api_route = client.api_url(api_version, "/blueprints/freeze/%s?format=toml" % blueprint)
        blueprint_toml = client.get_url_raw(socket_path, api_route)
        with open(frozen_toml_filename(blueprint), "w") as f:
            f.write(blueprint_toml)

    return 0

def blueprints_tag(socket_path, api_version, args, show_json=False):
    """Tag the most recent blueprint commit as a release

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param api_version: Version of the API to talk to. eg. "0"
    :type api_version: str
    :param args: List of remaining arguments from the cmdline
    :type args: list of str
    :param show_json: Set to True to show the JSON output instead of the human readable output
    :type show_json: bool

    blueprints tag <blueprint>    Tag the most recent blueprint commit as a release.
    """
    api_route = client.api_url(api_version, "/blueprints/tag/%s" % args[0])
    result = client.post_url(socket_path, api_route, "")

    return handle_api_result(result, show_json)[0]

def blueprints_undo(socket_path, api_version, args, show_json=False):
    """Undo changes to a blueprint

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param api_version: Version of the API to talk to. eg. "0"
    :type api_version: str
    :param args: List of remaining arguments from the cmdline
    :type args: list of str
    :param show_json: Set to True to show the JSON output instead of the human readable output
    :type show_json: bool

    blueprints undo <blueprint> <commit>    Undo changes to a blueprint by reverting to the selected commit.
    """
    if len(args) == 0:
        log.error("undo is missing the blueprint name and commit hash")
        return 1
    elif len(args) == 1:
        log.error("undo is missing the commit hash")
        return 1

    api_route = client.api_url(api_version, "/blueprints/undo/%s/%s" % (args[0], args[1]))
    result = client.post_url(socket_path, api_route, "")

    return handle_api_result(result, show_json)[0]

def blueprints_workspace(socket_path, api_version, args, show_json=False):
    """Push the blueprint TOML to the temporary workspace storage

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param api_version: Version of the API to talk to. eg. "0"
    :type api_version: str
    :param args: List of remaining arguments from the cmdline
    :type args: list of str
    :param show_json: Set to True to show the JSON output instead of the human readable output
    :type show_json: bool

    blueprints workspace <blueprint>    Push the blueprint TOML to the temporary workspace storage.
    """
    api_route = client.api_url(api_version, "/blueprints/workspace")
    rval = 0
    for blueprint in argify(args):
        if not os.path.exists(blueprint):
            log.error("Missing blueprint file: %s", blueprint)
            continue
        with open(blueprint, "r") as f:
            blueprint_toml = f.read()

        result = client.post_url_toml(socket_path, api_route, blueprint_toml)
        if handle_api_result(result, show_json)[0]:
            rval = 1

    return rval
composer.cli.cmdline — Lorax 35.0 documentation
Source code for composer.cli.cmdline

#
# Copyright (C) 2018 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.
#
import os
import sys
import argparse

from composer import vernum
from composer.cli.help import epilog

VERSION = "{0}-{1}".format(os.path.basename(sys.argv[0]), vernum)
def composer_cli_parser():
    """Return the ArgumentParser for composer-cli"""

    parser = argparse.ArgumentParser(description="Lorax Composer commandline tool",
                                     epilog=epilog,
                                     formatter_class=argparse.RawDescriptionHelpFormatter,
                                     fromfile_prefix_chars="@")

    parser.add_argument("-j", "--json", action="store_true", default=False,
                        help="Output the raw JSON response instead of the normal output.")
    parser.add_argument("-s", "--socket", default="/run/weldr/api.socket", metavar="SOCKET",
                        help="Path to the socket file to listen on")
    parser.add_argument("--log", dest="logfile", default=None, metavar="LOG",
                        help="Path to logfile (./composer-cli.log)")
    parser.add_argument("-a", "--api", dest="api_version", default="1", metavar="APIVER",
                        help="API Version to use")
    parser.add_argument("--test", dest="testmode", default=0, type=int, metavar="TESTMODE",
                        help="Pass test mode to compose. 1=Mock compose with fail. 2=Mock compose with finished.")
    parser.add_argument("-V", action="store_true", dest="showver",
                        help="show program's version number and exit")

    # Commands are implemented by parsing the remaining arguments outside of argparse
    parser.add_argument('args', nargs=argparse.REMAINDER)

    return parser
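A quick sketch of how the remainder-style parsing behaves. This trimmed-down parser mirrors only the ``--json``, ``--socket``, and ``args`` options above (epilog, ``--log``, ``--api``, ``--test``, and ``-V`` omitted):

```python
import argparse

# Trimmed-down mirror of composer_cli_parser(), showing how
# argparse.REMAINDER collects the subcommand and its arguments.
parser = argparse.ArgumentParser(prog="composer-cli")
parser.add_argument("-j", "--json", action="store_true", default=False)
parser.add_argument("-s", "--socket", default="/run/weldr/api.socket")
parser.add_argument("args", nargs=argparse.REMAINDER)

opts = parser.parse_args(["-j", "blueprints", "list"])
# opts.json is True, opts.args is ["blueprints", "list"]
```

Everything after the first positional word lands in ``opts.args``, which is why the subcommands are dispatched by hand instead of with argparse subparsers.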
composer.cli.compose — Lorax 35.0 documentation

Source code for composer.cli.compose

#
# Copyright (C) 2018-2020 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.
#
import logging
log = logging.getLogger("composer-cli")

from datetime import datetime
import sys
import json
import toml

from composer import http_client as client
from composer.cli.help import compose_help
from composer.cli.utilities import argify, handle_api_result, packageNEVRA, get_arg

def compose_cmd(opts):
    """Process compose commands

    :param opts: Cmdline arguments
    :type opts: argparse.Namespace
    :returns: Value to return from sys.exit()
    :rtype: int

    This dispatches the compose commands to a function, passing it an ``api``
    dict with the server details. eg.

        {"version": 1, "backend": "lorax-composer"}

    """
    result = client.get_url_json(opts.socket, "/api/status")
    # Get the api version and fall back to 0 if it fails.
    api_version = result.get("api", "0")
    backend = result.get("backend", "unknown")
    api = {"version": api_version, "backend": backend}

    cmd_map = {
        "list": compose_list,
        "status": compose_status,
        "types": compose_types,
        "start": compose_start,
        "log": compose_log,
        "cancel": compose_cancel,
        "delete": compose_delete,
        "info": compose_info,
        "metadata": compose_metadata,
        "results": compose_results,
        "logs": compose_logs,
        "image": compose_image,
        "start-ostree": compose_ostree,
    }
    if opts.args[1] == "help" or opts.args[1] == "--help":
        print(compose_help)
        return 0
    elif opts.args[1] not in cmd_map:
        log.error("Unknown compose command: %s", opts.args[1])
        return 1

    return cmd_map[opts.args[1]](opts.socket, opts.api_version, opts.args[2:], opts.json, opts.testmode, api=api)

def get_size(args):
    """Return optional --size argument, and remaining args

    :param args: list of arguments
    :type args: list of strings
    :returns: (args, size)
    :rtype: tuple

    - checks the size argument for int
    - a missing --size returns 0 for the size
    - the value is multiplied by 1024**2 to make it easier for users to specify large sizes
    """
    args, value = get_arg(args, "--size", int)
    value = value * 1024**2 if value is not None else 0
    return (args, value)

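The optional-argument helpers above all lean on ``get_arg`` from ``composer.cli.utilities``. Here is a minimal sketch of its assumed behavior (the real helper may differ in details), applied to a hypothetical ``compose start --size 100 http-server qcow2`` command line:

```python
# Assumed reimplementation of composer.cli.utilities.get_arg,
# for illustration only.
def get_arg(args, name, argtype=None):
    """Return (remaining_args, value) for an optional `name value` pair."""
    if name not in args:
        return (args, None)
    pos = args.index(name)
    if len(args) < pos + 2:
        raise RuntimeError("%s is missing the value" % name)
    value = args[pos + 1] if argtype is None else argtype(args[pos + 1])
    return (args[:pos] + args[pos + 2:], value)

def get_size(args):
    args, value = get_arg(args, "--size", int)
    return (args, value * 1024**2 if value is not None else 0)

# `--size 100` means 100 MiB; the flag and value are removed from args
remaining, size = get_size(["--size", "100", "http-server", "qcow2"])
# remaining == ["http-server", "qcow2"], size == 104857600
```

Multiplying by ``1024**2`` is what lets users write ``--size 100`` instead of spelling out the byte count.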
def get_parent(args):
    """Return optional --parent argument, and remaining args

    :param args: list of arguments
    :type args: list of strings
    :returns: (args, parent)
    :rtype: tuple
    """
    args, value = get_arg(args, "--parent")
    value = value if value is not None else ""
    return (args, value)

def get_ref(args):
    """Return optional --ref argument, and remaining args

    :param args: list of arguments
    :type args: list of strings
    :returns: (args, ref)
    :rtype: tuple
    """
    args, value = get_arg(args, "--ref")
    value = value if value is not None else ""
    return (args, value)

def get_url(args):
    """Return optional --url argument, and remaining args

    :param args: list of arguments
    :type args: list of strings
    :returns: (args, url)
    :rtype: tuple
    """
    args, value = get_arg(args, "--url")
    value = value if value is not None else ""
    return (args, value)

def compose_list(socket_path, api_version, args, show_json=False, testmode=0, api=None):
    """Return a simple list of compose identifiers"""

    states = ("running", "waiting", "finished", "failed")

    which = set()

    if any(a not in states for a in args):
        # TODO: error about unknown state
        return 1
    elif not args:
        which.update(states)
    else:
        which.update(args)

    results = []

    if "running" in which or "waiting" in which:
        api_route = client.api_url(api_version, "/compose/queue")
        r = client.get_url_json(socket_path, api_route)
        if "running" in which:
            results += r["run"]
        if "waiting" in which:
            results += r["new"]

    if "finished" in which:
        api_route = client.api_url(api_version, "/compose/finished")
        r = client.get_url_json(socket_path, api_route)
        results += r["finished"]

    if "failed" in which:
        api_route = client.api_url(api_version, "/compose/failed")
        r = client.get_url_json(socket_path, api_route)
        results += r["failed"]

    if results:
        if show_json:
            print(json.dumps(results, indent=4))
        else:
            list_fmt = "{id} {queue_status} {blueprint} {version} {compose_type}"
            print("\n".join(list_fmt.format(**c) for c in results))

    return 0

def compose_status(socket_path, api_version, args, show_json=False, testmode=0, api=None):
    """Return the status of all known composes

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param api_version: Version of the API to talk to. eg. "0"
    :type api_version: str
    :param args: List of remaining arguments from the cmdline
    :type args: list of str
    :param show_json: Set to True to show the JSON output instead of the human readable output
    :type show_json: bool
    :param testmode: unused in this function
    :type testmode: int

    This doesn't map directly to an API command, it combines the results from queue, finished,
    and failed so raw JSON output is not available.
    """
    def get_status(compose):
        return {"id": compose["id"],
                "blueprint": compose["blueprint"],
                "version": compose["version"],
                "compose_type": compose["compose_type"],
                "image_size": compose["image_size"],
                "status": compose["queue_status"],
                "created": compose.get("job_created"),
                "started": compose.get("job_started"),
                "finished": compose.get("job_finished")}

    # Sort the status in a specific order
    def sort_status(a):
        order = ["RUNNING", "WAITING", "FINISHED", "FAILED"]
        return (order.index(a["status"]), a["blueprint"], a["version"], a["compose_type"])

    status = []

    # Get the composes currently in the queue
    api_route = client.api_url(api_version, "/compose/queue")
    result = client.get_url_json(socket_path, api_route)
    status.extend(list(map(get_status, result["run"] + result["new"])))

    # Get the list of finished composes
    api_route = client.api_url(api_version, "/compose/finished")
    result = client.get_url_json(socket_path, api_route)
    status.extend(list(map(get_status, result["finished"])))

    # Get the list of failed composes
    api_route = client.api_url(api_version, "/compose/failed")
    result = client.get_url_json(socket_path, api_route)
    status.extend(list(map(get_status, result["failed"])))

    # Sort them by status (running, waiting, finished, failed) and then by name and version.
    status.sort(key=sort_status)

    if show_json:
        print(json.dumps(status, indent=4))
        return 0

    # Print them as UUID blueprint STATUS
    for c in status:
        if c["image_size"] > 0:
            image_size = str(c["image_size"])
        else:
            image_size = ""

        dt = datetime.fromtimestamp(c.get("finished") or c.get("started") or c.get("created"))

        print("%s %-8s %s %-15s %s %-16s %s" % (c["id"], c["status"], dt.strftime("%c"), c["blueprint"],
                                                c["version"], c["compose_type"], image_size))

    return 0

def compose_types(socket_path, api_version, args, show_json=False, testmode=0, api=None):
    """Return information about the supported compose types

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param api_version: Version of the API to talk to. eg. "0"
    :type api_version: str
    :param args: List of remaining arguments from the cmdline
    :type args: list of str
    :param show_json: Set to True to show the JSON output instead of the human readable output
    :type show_json: bool
    :param testmode: unused in this function
    :type testmode: int

    The human readable output is a plain list of the enabled type names, one
    per line. The raw JSON output includes all of the types and whether or not
    they are enabled.
    """
    api_route = client.api_url(api_version, "/compose/types")
    result = client.get_url_json(socket_path, api_route)
    if show_json:
        print(json.dumps(result, indent=4))
        return 0

    # output a plain list of identifiers, one per line
    print("\n".join(t["name"] for t in result["types"] if t["enabled"]))

def compose_start(socket_path, api_version, args, show_json=False, testmode=0, api=None):
    """Start a new compose using the selected blueprint and type

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param api_version: Version of the API to talk to. eg. "0"
    :type api_version: str
    :param args: List of remaining arguments from the cmdline
    :type args: list of str
    :param show_json: Set to True to show the JSON output instead of the human readable output
    :type show_json: bool
    :param testmode: Set to 1 to simulate a failed compose, set to 2 to simulate a finished one.
    :type testmode: int
    :param api: Details about the API server, "version" and "backend"
    :type api: dict

    compose start [--size XXX] <blueprint-name> <compose-type> [<image-name> <provider> <profile> | <image-name> <profile.toml>]
    """
    if api is None:
        log.error("Missing api version/backend")
        return 1

    # Get the optional size before checking other parameters
    try:
        args, size = get_size(args)
    except (RuntimeError, ValueError) as e:
        log.error(str(e))
        return 1

    if len(args) == 0:
        log.error("start is missing the blueprint name and output type")
        return 1
    if len(args) == 1:
        log.error("start is missing the output type")
        return 1
    if len(args) == 3:
        log.error("start is missing the provider and profile details")
        return 1

    config = {
        "blueprint_name": args[0],
        "compose_type": args[1],
        "branch": "master"
    }
    if size > 0:
        if api["backend"] == "lorax-composer":
            log.warning("lorax-composer does not support --size, it will be ignored.")
        else:
            config["size"] = size

    if len(args) == 4:
        config["upload"] = {"image_name": args[2]}
        # profile TOML file (maybe)
        try:
            config["upload"].update(toml.load(args[3]))
        except toml.TomlDecodeError as e:
            log.error(str(e))
            return 1
    elif len(args) == 5:
        config["upload"] = {
            "image_name": args[2],
            "provider": args[3],
            "profile": args[4]
        }

    if testmode:
        test_url = "?test=%d" % testmode
    else:
        test_url = ""
    api_route = client.api_url(api_version, "/compose" + test_url)
    result = client.post_url_json(socket_path, api_route, json.dumps(config))
    (rc, exit_now) = handle_api_result(result, show_json)
    if exit_now:
        return rc

    print("Compose %s added to the queue" % result["build_id"])

    if "upload_id" in result and result["upload_id"]:
        print("Upload %s added to the upload queue" % result["upload_id"])

    return rc

[docs]def compose_ostree(socket_path, api_version, args, show_json=False, testmode=0, api=None): - """Start a new ostree compose using the selected blueprint and type - - :param socket_path: Path to the Unix socket to use for API communication - :type socket_path: str - :param api_version: Version of the API to talk to. eg. "0" - :type api_version: str - :param args: List of remaining arguments from the cmdline - :type args: list of str - :param show_json: Set to True to show the JSON output instead of the human readable output - :type show_json: bool - :param testmode: Set to 1 to simulate a failed compose, set to 2 to simulate a finished one. - :type testmode: int - :param api: Details about the API server, "version" and "backend" - :type api: dict - - compose start-ostree [--size XXXX] [--parent PARENT] [--ref REF] [--url URL] <BLUEPRINT> <TYPE> [<IMAGE-NAME> <PROFILE.TOML>] - """ - if api == None: - log.error("Missing api version/backend") - return 1 - - if api["backend"] == "lorax-composer": - log.warning("lorax-composer doesn not support start-ostree.") - return 1 - - # Get the optional arguments before checking other parameters - try: - args, size = get_size(args) - args, parent = get_parent(args) - args, ref = get_ref(args) - args, url = get_url(args) - except (RuntimeError, ValueError) as e: - log.error(str(e)) - return 1 - - if len(args) == 0: - log.error("start-ostree is missing the blueprint name, output type, and ostree details") - return 1 - if len(args) == 1: - log.error("start-ostree is missing the output type") - return 1 - if len(args) == 3: - log.error("start-ostree is missing the provider TOML file") - return 1 - - config = { - "blueprint_name": args[0], - "compose_type": args[1], - "branch": "master", - "ostree": {"ref": ref, "parent": parent}, - } - if size > 0: - config["size"] = size - if len(url) > 0: - config["ostree"]["url"] = url - - if len(args) == 4: - config["upload"] = {"image_name": args[2]} - # profile TOML file (maybe) - try: - 
config["upload"].update(toml.load(args[3])) - except toml.TomlDecodeError as e: - log.error(str(e)) - return 1 - - if testmode: - test_url = "?test=%d" % testmode - else: - test_url = "" - api_route = client.api_url(api_version, "/compose" + test_url) - result = client.post_url_json(socket_path, api_route, json.dumps(config)) - (rc, exit_now) = handle_api_result(result, show_json) - if exit_now: - return rc - - print("Compose %s added to the queue" % result["build_id"]) - - if "upload_id" in result and result["upload_id"]: - print("Upload %s added to the upload queue" % result["upload_id"]) - - return rc
- -
[docs]def compose_log(socket_path, api_version, args, show_json=False, testmode=0, api=None): - """Show the last part of the compose log - - :param socket_path: Path to the Unix socket to use for API communication - :type socket_path: str - :param api_version: Version of the API to talk to. eg. "0" - :type api_version: str - :param args: List of remaining arguments from the cmdline - :type args: list of str - :param show_json: Set to True to show the JSON output instead of the human readable output - :type show_json: bool - :param testmode: unused in this function - :type testmode: int - - compose log <uuid> [<size>kB] - - This will display the last 1kB of the compose's log file. Can be used to follow progress - during the build. - """ - if len(args) == 0: - log.error("log is missing the compose build id") - return 1 - if len(args) == 2: - try: - log_size = int(args[1]) - except ValueError: - log.error("Log size must be an integer.") - return 1 - else: - log_size = 1024 - - api_route = client.api_url(api_version, "/compose/log/%s?size=%d" % (args[0], log_size)) - try: - result = client.get_url_raw(socket_path, api_route) - except RuntimeError as e: - print(str(e)) - return 1 - - print(result) - return 0
- -
[docs]def compose_cancel(socket_path, api_version, args, show_json=False, testmode=0, api=None): - """Cancel a running compose - - :param socket_path: Path to the Unix socket to use for API communication - :type socket_path: str - :param api_version: Version of the API to talk to. eg. "0" - :type api_version: str - :param args: List of remaining arguments from the cmdline - :type args: list of str - :param show_json: Set to True to show the JSON output instead of the human readable output - :type show_json: bool - :param testmode: unused in this function - :type testmode: int - - compose cancel <uuid> - - This will cancel a running compose. It does nothing if the compose has finished. - """ - if len(args) == 0: - log.error("cancel is missing the compose build id") - return 1 - - api_route = client.api_url(api_version, "/compose/cancel/%s" % args[0]) - result = client.delete_url_json(socket_path, api_route) - return handle_api_result(result, show_json)[0]
- -
[docs]def compose_delete(socket_path, api_version, args, show_json=False, testmode=0, api=None): - """Delete a finished compose's results - - :param socket_path: Path to the Unix socket to use for API communication - :type socket_path: str - :param api_version: Version of the API to talk to. eg. "0" - :type api_version: str - :param args: List of remaining arguments from the cmdline - :type args: list of str - :param show_json: Set to True to show the JSON output instead of the human readable output - :type show_json: bool - :param testmode: unused in this function - :type testmode: int - - compose delete <uuid,...> - - Delete the listed compose results. It will only delete results for composes that have finished - or failed, not a running compose. - """ - if len(args) == 0: - log.error("delete is missing the compose build id") - return 1 - - api_route = client.api_url(api_version, "/compose/delete/%s" % (",".join(argify(args)))) - result = client.delete_url_json(socket_path, api_route) - return handle_api_result(result, show_json)[0]
- -
[docs]def compose_info(socket_path, api_version, args, show_json=False, testmode=0, api=None): - """Return detailed information about the compose - - :param socket_path: Path to the Unix socket to use for API communication - :type socket_path: str - :param api_version: Version of the API to talk to. eg. "0" - :type api_version: str - :param args: List of remaining arguments from the cmdline - :type args: list of str - :param show_json: Set to True to show the JSON output instead of the human readable output - :type show_json: bool - :param testmode: unused in this function - :type testmode: int - - compose info <uuid> - - This returns information about the compose, including the blueprint and the dependencies. - """ - if len(args) == 0: - log.error("info is missing the compose build id") - return 1 - - api_route = client.api_url(api_version, "/compose/info/%s" % args[0]) - result = client.get_url_json(socket_path, api_route) - (rc, exit_now) = handle_api_result(result, show_json) - if exit_now: - return rc - - if result["image_size"] > 0: - image_size = str(result["image_size"]) - else: - image_size = "" - - - print("%s %-8s %-15s %s %-16s %s" % (result["id"], - result["queue_status"], - result["blueprint"]["name"], - result["blueprint"]["version"], - result["compose_type"], - image_size)) - print("Packages:") - for p in result["blueprint"]["packages"]: - print(" %s-%s" % (p["name"], p["version"])) - - print("Modules:") - for m in result["blueprint"]["modules"]: - print(" %s-%s" % (m["name"], m["version"])) - - print("Dependencies:") - for d in result["deps"]["packages"]: - print(" " + packageNEVRA(d)) - - return rc
- -
[docs]def compose_metadata(socket_path, api_version, args, show_json=False, testmode=0, api=None): - """Download a tar file of the compose's metadata - - :param socket_path: Path to the Unix socket to use for API communication - :type socket_path: str - :param api_version: Version of the API to talk to. eg. "0" - :type api_version: str - :param args: List of remaining arguments from the cmdline - :type args: list of str - :param show_json: Set to True to show the JSON output instead of the human readable output - :type show_json: bool - :param testmode: unused in this function - :type testmode: int - - compose metadata <uuid> - - Saves the metadata as uuid-metadata.tar - """ - if len(args) == 0: - log.error("metadata is missing the compose build id") - return 1 - - api_route = client.api_url(api_version, "/compose/metadata/%s" % args[0]) - try: - rc = client.download_file(socket_path, api_route) - except RuntimeError as e: - print(str(e)) - rc = 1 - - return rc
- -
[docs]def compose_results(socket_path, api_version, args, show_json=False, testmode=0, api=None): - """Download a tar file of the compose's results - - :param socket_path: Path to the Unix socket to use for API communication - :type socket_path: str - :param api_version: Version of the API to talk to. eg. "0" - :type api_version: str - :param args: List of remaining arguments from the cmdline - :type args: list of str - :param show_json: Set to True to show the JSON output instead of the human readable output - :type show_json: bool - :param testmode: unused in this function - :type testmode: int - - compose results <uuid> - - The results include the metadata, output image, and logs. - It is saved as uuid.tar - """ - if len(args) == 0: - log.error("results is missing the compose build id") - return 1 - - api_route = client.api_url(api_version, "/compose/results/%s" % args[0]) - try: - rc = client.download_file(socket_path, api_route, sys.stdout.isatty()) - except RuntimeError as e: - print(str(e)) - rc = 1 - - return rc
- -
[docs]def compose_logs(socket_path, api_version, args, show_json=False, testmode=0, api=None): - """Download a tar of the compose's logs - - :param socket_path: Path to the Unix socket to use for API communication - :type socket_path: str - :param api_version: Version of the API to talk to. eg. "0" - :type api_version: str - :param args: List of remaining arguments from the cmdline - :type args: list of str - :param show_json: Set to True to show the JSON output instead of the human readable output - :type show_json: bool - :param testmode: unused in this function - :type testmode: int - - compose logs <uuid> - - Saves the logs as uuid-logs.tar - """ - if len(args) == 0: - log.error("logs is missing the compose build id") - return 1 - - api_route = client.api_url(api_version, "/compose/logs/%s" % args[0]) - try: - rc = client.download_file(socket_path, api_route, sys.stdout.isatty()) - except RuntimeError as e: - print(str(e)) - rc = 1 - - return rc
- -
[docs]def compose_image(socket_path, api_version, args, show_json=False, testmode=0, api=None): - """Download the compose's output image - - :param socket_path: Path to the Unix socket to use for API communication - :type socket_path: str - :param api_version: Version of the API to talk to. eg. "0" - :type api_version: str - :param args: List of remaining arguments from the cmdline - :type args: list of str - :param show_json: Set to True to show the JSON output instead of the human readable output - :type show_json: bool - :param testmode: unused in this function - :type testmode: int - - compose image <uuid> - - This downloads only the result image, saving it as the image name, which depends on the type - of compose that was selected. - """ - if len(args) == 0: - log.error("image is missing the compose build id") - return 1 - - api_route = client.api_url(api_version, "/compose/image/%s" % args[0]) - try: - rc = client.download_file(socket_path, api_route, sys.stdout.isatty()) - except RuntimeError as e: - print(str(e)) - rc = 1 - - return rc
-
- -
- -
- - -
-
- -
- -
- - - - - - - - - - - - \ No newline at end of file diff --git a/docs/html/_modules/composer/cli/modules.html b/docs/html/_modules/composer/cli/modules.html deleted file mode 100644 index 0e891472..00000000 --- a/docs/html/_modules/composer/cli/modules.html +++ /dev/null @@ -1,248 +0,0 @@ - - - - - - - - - - - composer.cli.modules — Lorax 35.0 documentation - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - -
- - - - - -
- -
- - - - - - - - - - - - - - - - - -
- - - - -
-
-
-
- -

Source code for composer.cli.modules

-#
-# Copyright (C) 2018  Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-import logging
-log = logging.getLogger("composer-cli")
-
-from composer import http_client as client
-from composer.cli.help import modules_help
-from composer.cli.utilities import handle_api_result
-
-
[docs]def modules_cmd(opts): - """Process modules commands - - :param opts: Cmdline arguments - :type opts: argparse.Namespace - :returns: Value to return from sys.exit() - :rtype: int - """ - if opts.args[1] == "help" or opts.args[1] == "--help": - print(modules_help) - return 0 - elif opts.args[1] != "list": - log.error("Unknown modules command: %s", opts.args[1]) - return 1 - - api_route = client.api_url(opts.api_version, "/modules/list") - result = client.get_url_json_unlimited(opts.socket, api_route) - (rc, exit_now) = handle_api_result(result, opts.json) - if exit_now: - return rc - - # "list" should output a plain list of identifiers, one per line. - print("\n".join(r["name"] for r in result["modules"])) - - return rc
-
- -
- -
- - -
-
- -
- -
- - - - - - - - - - - - \ No newline at end of file diff --git a/docs/html/_modules/composer/cli/projects.html b/docs/html/_modules/composer/cli/projects.html deleted file mode 100644 index 635c9930..00000000 --- a/docs/html/_modules/composer/cli/projects.html +++ /dev/null @@ -1,310 +0,0 @@ - - - - - - - - - - - composer.cli.projects — Lorax 35.0 documentation - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - -
- - - - - -
- -
- - - - - - - - - - - - - - - - - -
- - - - -
-
-
-
- -

Source code for composer.cli.projects

-#
-# Copyright (C) 2018  Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-import logging
-log = logging.getLogger("composer-cli")
-
-import textwrap
-
-from composer import http_client as client
-from composer.cli.help import projects_help
-from composer.cli.utilities import handle_api_result
-
-
[docs]def projects_cmd(opts): - """Process projects commands - - :param opts: Cmdline arguments - :type opts: argparse.Namespace - :returns: Value to return from sys.exit() - :rtype: int - """ - cmd_map = { - "list": projects_list, - "info": projects_info, - } - if opts.args[1] == "help" or opts.args[1] == "--help": - print(projects_help) - return 0 - elif opts.args[1] not in cmd_map: - log.error("Unknown projects command: %s", opts.args[1]) - return 1 - - return cmd_map[opts.args[1]](opts.socket, opts.api_version, opts.args[2:], opts.json)
- -
[docs]def projects_list(socket_path, api_version, args, show_json=False): - """Output the list of available projects - - :param socket_path: Path to the Unix socket to use for API communication - :type socket_path: str - :param api_version: Version of the API to talk to. eg. "0" - :type api_version: str - :param args: List of remaining arguments from the cmdline - :type args: list of str - :param show_json: Set to True to show the JSON output instead of the human readable output - :type show_json: bool - - projects list - """ - api_route = client.api_url(api_version, "/projects/list") - result = client.get_url_json_unlimited(socket_path, api_route) - (rc, exit_now) = handle_api_result(result, show_json) - if exit_now: - return rc - - for proj in result["projects"]: - for k in [field for field in ("name", "summary", "homepage", "description") if proj[field]]: - print("%s: %s" % (k.title(), textwrap.fill(proj[k], subsequent_indent=" " * (len(k)+2)))) - print("\n\n") - - return rc
- -
[docs]def projects_info(socket_path, api_version, args, show_json=False): - """Output info on a list of projects - - :param socket_path: Path to the Unix socket to use for API communication - :type socket_path: str - :param api_version: Version of the API to talk to. eg. "0" - :type api_version: str - :param args: List of remaining arguments from the cmdline - :type args: list of str - :param show_json: Set to True to show the JSON output instead of the human readable output - :type show_json: bool - - projects info <project,...> - """ - if len(args) == 0: - log.error("projects info is missing the packages") - return 1 - - api_route = client.api_url(api_version, "/projects/info/%s" % ",".join(args)) - result = client.get_url_json(socket_path, api_route) - (rc, exit_now) = handle_api_result(result, show_json) - if exit_now: - return rc - - for proj in result["projects"]: - for k in [field for field in ("name", "summary", "homepage", "description") if proj[field]]: - print("%s: %s" % (k.title(), textwrap.fill(proj[k], subsequent_indent=" " * (len(k)+2)))) - print("Builds: ") - for build in proj["builds"]: - print(" %s%s-%s.%s at %s for %s" % ("" if not build["epoch"] else str(build["epoch"]) + ":", - build["source"]["version"], - build["release"], - build["arch"], - build["build_time"], - build["changelog"])) - print("") - return rc
-
- -
- -
- - -
-
- -
- -
- - - - - - - - - - - - \ No newline at end of file diff --git a/docs/html/_modules/composer/cli/providers.html b/docs/html/_modules/composer/cli/providers.html deleted file mode 100644 index ddca8dee..00000000 --- a/docs/html/_modules/composer/cli/providers.html +++ /dev/null @@ -1,523 +0,0 @@ - - - - - - - - - - - composer.cli.providers — Lorax 35.0 documentation - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - -
- - - - - -
- -
- - - - - - - - - - - - - - - - - -
- - - - -
-
-
-
- -

Source code for composer.cli.providers

-#
-# Copyright (C) 2019  Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-import logging
-log = logging.getLogger("composer-cli")
-
-import json
-import toml
-import os
-
-from composer import http_client as client
-from composer.cli.help import providers_help
-from composer.cli.utilities import handle_api_result, toml_filename
-
-
[docs]def providers_cmd(opts): - """Process providers commands - - :param opts: Cmdline arguments - :type opts: argparse.Namespace - :returns: Value to return from sys.exit() - :rtype: int - - This dispatches the providers commands to a function - """ - cmd_map = { - "list": providers_list, - "info": providers_info, - "show": providers_show, - "push": providers_push, - "save": providers_save, - "delete": providers_delete, - "template": providers_template - } - if opts.args[1] == "help" or opts.args[1] == "--help": - print(providers_help) - return 0 - elif opts.args[1] not in cmd_map: - log.error("Unknown providers command: %s", opts.args[1]) - return 1 - - return cmd_map[opts.args[1]](opts.socket, opts.api_version, opts.args[2:], opts.json, opts.testmode)
- -
[docs]def providers_list(socket_path, api_version, args, show_json=False, testmode=0): - """Return the list of providers - - :param socket_path: Path to the Unix socket to use for API communication - :type socket_path: str - :param api_version: Version of the API to talk to. eg. "0" - :type api_version: str - :param args: List of remaining arguments from the cmdline - :type args: list of str - :param show_json: Set to True to show the JSON output instead of the human readable output - :type show_json: bool - :param testmode: unused in this function - :type testmode: int - - providers list - """ - api_route = client.api_url(api_version, "/upload/providers") - r = client.get_url_json(socket_path, api_route) - results = r["providers"] - if not results: - return 0 - - if show_json: - print(json.dumps(results, indent=4)) - else: - if len(args) == 1: - if args[0] not in results: - log.error("%s is not a valid provider", args[0]) - return 1 - print("\n".join(sorted(results[args[0]]["profiles"].keys()))) - else: - print("\n".join(sorted(results.keys()))) - - return 0
- -
[docs]def providers_info(socket_path, api_version, args, show_json=False, testmode=0): - """Show information about each provider - - :param socket_path: Path to the Unix socket to use for API communication - :type socket_path: str - :param api_version: Version of the API to talk to. eg. "0" - :type api_version: str - :param args: List of remaining arguments from the cmdline - :type args: list of str - :param show_json: Set to True to show the JSON output instead of the human readable output - :type show_json: bool - :param testmode: unused in this function - :type testmode: int - - providers info <PROVIDER> - """ - if len(args) == 0: - log.error("info is missing the provider name") - return 1 - - api_route = client.api_url(api_version, "/upload/providers") - r = client.get_url_json(socket_path, api_route) - results = r["providers"] - if not results: - return 0 - - if show_json: - print(json.dumps(results, indent=4)) - else: - if args[0] not in results: - log.error("%s is not a valid provider", args[0]) - return 1 - p = results[args[0]] - print("%s supports these image types: %s" % (p["display"], ", ".join(p["supported_types"]))) - print("Settings:") - for k in p["settings-info"]: - f = p["settings-info"][k] - print(" %-20s: %s is a %s" % (k, f["display"], f["type"])) - - return 0
- -
[docs]def providers_show(socket_path, api_version, args, show_json=False, testmode=0): - """Return details about a provider - - :param socket_path: Path to the Unix socket to use for API communication - :type socket_path: str - :param api_version: Version of the API to talk to. eg. "0" - :type api_version: str - :param args: List of remaining arguments from the cmdline - :type args: list of str - :param show_json: Set to True to show the JSON output instead of the human readable output - :type show_json: bool - :param testmode: unused in this function - :type testmode: int - - providers show <provider> <profile> - """ - if len(args) == 0: - log.error("show is missing the provider name") - return 1 - if len(args) == 1: - log.error("show is missing the profile name") - return 1 - - api_route = client.api_url(api_version, "/upload/providers") - r = client.get_url_json(socket_path, api_route) - results = r["providers"] - if not results: - return 0 - - if show_json: - print(json.dumps(results, indent=4)) - else: - if args[0] not in results: - log.error("%s is not a valid provider", args[0]) - return 1 - if args[1] not in results[args[0]]["profiles"]: - log.error("%s is not a valid %s profile", args[1], args[0]) - return 1 - - # Print the details for this profile - # fields are different for each provider, so we just print out the key:values - for k in results[args[0]]["profiles"][args[1]]: - print("%s: %s" % (k, results[args[0]]["profiles"][args[1]][k])) - return 0
- -
[docs]def providers_push(socket_path, api_version, args, show_json=False, testmode=0): - """Add a new provider profile or overwrite an existing one - - :param socket_path: Path to the Unix socket to use for API communication - :type socket_path: str - :param api_version: Version of the API to talk to. eg. "0" - :type api_version: str - :param args: List of remaining arguments from the cmdline - :type args: list of str - :param show_json: Set to True to show the JSON output instead of the human readable output - :type show_json: bool - :param testmode: unused in this function - :type testmode: int - - providers push <profile.toml> - - """ - if len(args) == 0: - log.error("push is missing the profile TOML file") - return 1 - if not os.path.exists(args[0]): - log.error("Missing profile TOML file: %s", args[0]) - return 1 - - api_route = client.api_url(api_version, "/upload/providers/save") - profile = toml.load(args[0]) - result = client.post_url_json(socket_path, api_route, json.dumps(profile)) - return handle_api_result(result, show_json)[0]
- -
[docs]def providers_save(socket_path, api_version, args, show_json=False, testmode=0): - """Save a provider's profile to a TOML file - - :param socket_path: Path to the Unix socket to use for API communication - :type socket_path: str - :param api_version: Version of the API to talk to. eg. "0" - :type api_version: str - :param args: List of remaining arguments from the cmdline - :type args: list of str - :param show_json: Set to True to show the JSON output instead of the human readable output - :type show_json: bool - :param testmode: unused in this function - :type testmode: int - - providers save <provider> <profile> - - """ - if len(args) == 0: - log.error("save is missing the provider name") - return 1 - if len(args) == 1: - log.error("save is missing the profile name") - return 1 - - api_route = client.api_url(api_version, "/upload/providers") - r = client.get_url_json(socket_path, api_route) - results = r["providers"] - if not results: - return 0 - - if show_json: - print(json.dumps(results, indent=4)) - else: - if args[0] not in results: - log.error("%s is not a valid provider", args[0]) - return 1 - if args[1] not in results[args[0]]["profiles"]: - log.error("%s is not a valid %s profile", args[1], args[0]) - return 1 - - profile = { - "provider": args[0], - "profile": args[1], - "settings": results[args[0]]["profiles"][args[1]] - } - with open(toml_filename(args[1]), "w") as f: - f.write(toml.dumps(profile)) - - return 0
- -
[docs]def providers_delete(socket_path, api_version, args, show_json=False, testmode=0): - """Delete a profile from a provider - - :param socket_path: Path to the Unix socket to use for API communication - :type socket_path: str - :param api_version: Version of the API to talk to. eg. "0" - :type api_version: str - :param args: List of remaining arguments from the cmdline - :type args: list of str - :param show_json: Set to True to show the JSON output instead of the human readable output - :type show_json: bool - :param testmode: unused in this function - :type testmode: int - - providers delete <provider> <profile> - - """ - if len(args) == 0: - log.error("delete is missing the provider name") - return 1 - if len(args) == 1: - log.error("delete is missing the profile name") - return 1 - - api_route = client.api_url(api_version, "/upload/providers/delete/%s/%s" % (args[0], args[1])) - result = client.delete_url_json(socket_path, api_route) - return handle_api_result(result, show_json)[0]
- -
[docs]def providers_template(socket_path, api_version, args, show_json=False, testmode=0): - """Return a TOML template for setting the provider's fields - - :param socket_path: Path to the Unix socket to use for API communication - :type socket_path: str - :param api_version: Version of the API to talk to. eg. "0" - :type api_version: str - :param args: List of remaining arguments from the cmdline - :type args: list of str - :param show_json: Set to True to show the JSON output instead of the human readable output - :type show_json: bool - :param testmode: unused in this function - :type testmode: int - - providers template <provider> - """ - if len(args) == 0: - log.error("template is missing the provider name") - return 1 - - api_route = client.api_url(api_version, "/upload/providers") - r = client.get_url_json(socket_path, api_route) - results = r["providers"] - if not results: - return 0 - - if show_json: - print(json.dumps(results, indent=4)) - return 0 - - if args[0] not in results: - log.error("%s is not a valid provider", args[0]) - return 1 - - template = {"provider": args[0]} - settings = results[args[0]]["settings-info"] - template["settings"] = dict([(k, settings[k]["display"]) for k in settings]) - print(toml.dumps(template)) - - return 0
-
- -
- -
- - -
-
- -
- -
- - - - - - - - - - - - \ No newline at end of file diff --git a/docs/html/_modules/composer/cli/sources.html b/docs/html/_modules/composer/cli/sources.html deleted file mode 100644 index 2c5b7eda..00000000 --- a/docs/html/_modules/composer/cli/sources.html +++ /dev/null @@ -1,353 +0,0 @@ - - - - - - - - - - - composer.cli.sources — Lorax 35.0 documentation - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - -
- - - - - -
- -
- - - - - - - - - - - - - - - - - -
- - - - -
-
-
-
- -

Source code for composer.cli.sources

-#
-# Copyright (C) 2018  Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-import logging
-log = logging.getLogger("composer-cli")
-
-import os
-
-from composer import http_client as client
-from composer.cli.help import sources_help
-from composer.cli.utilities import argify, handle_api_result
-
-
[docs]def sources_cmd(opts): - """Process sources commands - - :param opts: Cmdline arguments - :type opts: argparse.Namespace - :returns: Value to return from sys.exit() - :rtype: int - """ - cmd_map = { - "list": sources_list, - "info": sources_info, - "add": sources_add, - "change": sources_add, - "delete": sources_delete, - } - if opts.args[1] == "help" or opts.args[1] == "--help": - print(sources_help) - return 0 - elif opts.args[1] not in cmd_map: - log.error("Unknown sources command: %s", opts.args[1]) - return 1 - - return cmd_map[opts.args[1]](opts.socket, opts.api_version, opts.args[2:], opts.json)
- -
[docs]def sources_list(socket_path, api_version, args, show_json=False): - """Output the list of available sources - - :param socket_path: Path to the Unix socket to use for API communication - :type socket_path: str - :param api_version: Version of the API to talk to. eg. "0" - :type api_version: str - :param args: List of remaining arguments from the cmdline - :type args: list of str - :param show_json: Set to True to show the JSON output instead of the human readable output - :type show_json: bool - - sources list - """ - api_route = client.api_url(api_version, "/projects/source/list") - result = client.get_url_json(socket_path, api_route) - (rc, exit_now) = handle_api_result(result, show_json) - if exit_now: - return rc - - # "list" should output a plain list of identifiers, one per line. - print("\n".join(result["sources"])) - return rc
- -
[docs]def sources_info(socket_path, api_version, args, show_json=False): - """Output info on a list of sources - - :param socket_path: Path to the Unix socket to use for API communication - :type socket_path: str - :param api_version: Version of the API to talk to. eg. "0" - :type api_version: str - :param args: List of remaining arguments from the cmdline - :type args: list of str - :param show_json: Set to True to show the JSON output instead of the human readable output - :type show_json: bool - - sources info <source-name> - """ - if len(args) == 0: - log.error("sources info is missing the name of the source") - return 1 - - if show_json: - api_route = client.api_url(api_version, "/projects/source/info/%s" % ",".join(args)) - result = client.get_url_json(socket_path, api_route) - rc = handle_api_result(result, show_json)[0] - else: - api_route = client.api_url(api_version, "/projects/source/info/%s?format=toml" % ",".join(args)) - try: - result = client.get_url_raw(socket_path, api_route) - print(result) - rc = 0 - except RuntimeError as e: - print(str(e)) - rc = 1 - - return rc
- -
[docs]def sources_add(socket_path, api_version, args, show_json=False): - """Add or change a source - - :param socket_path: Path to the Unix socket to use for API communication - :type socket_path: str - :param api_version: Version of the API to talk to. eg. "0" - :type api_version: str - :param args: List of remaining arguments from the cmdline - :type args: list of str - :param show_json: Set to True to show the JSON output instead of the human readable output - :type show_json: bool - - sources add <source.toml> - """ - api_route = client.api_url(api_version, "/projects/source/new") - rval = 0 - for source in argify(args): - if not os.path.exists(source): - log.error("Missing source file: %s", source) - continue - with open(source, "r") as f: - source_toml = f.read() - - result = client.post_url_toml(socket_path, api_route, source_toml) - if handle_api_result(result, show_json)[0]: - rval = 1 - return rval
- -
```python
def sources_delete(socket_path, api_version, args, show_json=False):
    """Delete a source

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param api_version: Version of the API to talk to. eg. "0"
    :type api_version: str
    :param args: List of remaining arguments from the cmdline
    :type args: list of str
    :param show_json: Set to True to show the JSON output instead of the human readable output
    :type show_json: bool

    sources delete <source-name>
    """
    api_route = client.api_url(api_version, "/projects/source/delete/%s" % args[0])
    result = client.delete_url_json(socket_path, api_route)

    return handle_api_result(result, show_json)[0]
```
diff --git a/docs/html/_modules/composer/cli/status.html b/docs/html/_modules/composer/cli/status.html
deleted file mode 100644
index 8e5de5d7..00000000
--- a/docs/html/_modules/composer/cli/status.html
+++ /dev/null

Source code for composer.cli.status

-#
-# Copyright (C) 2018  Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-import logging
-log = logging.getLogger("composer-cli")
-
-from composer import http_client as client
-from composer.cli.help import status_help
-from composer.cli.utilities import handle_api_result
-
-
```python
def status_cmd(opts):
    """Process status commands

    :param opts: Cmdline arguments
    :type opts: argparse.Namespace
    :returns: Value to return from sys.exit()
    :rtype: int
    """
    if opts.args[1] == "help" or opts.args[1] == "--help":
        print(status_help)
        return 0
    elif opts.args[1] != "show":
        log.error("Unknown status command: %s", opts.args[1])
        return 1

    result = client.get_url_json(opts.socket, "/api/status")
    (rc, exit_now) = handle_api_result(result, opts.json)
    if exit_now:
        return rc

    print("API server status:")
    print("    Database version:   " + result["db_version"])
    print("    Database supported: %s" % result["db_supported"])
    print("    Schema version:     " + result["schema_version"])
    print("    API version:        " + result["api"])
    print("    Backend:            " + result["backend"])
    print("    Build:              " + result["build"])

    if result["msgs"]:
        print("Error messages:")
        print("\n".join(["    " + r for r in result["msgs"]]))

    return rc
```
diff --git a/docs/html/_modules/composer/cli/upload.html b/docs/html/_modules/composer/cli/upload.html
deleted file mode 100644
index 4ebf0904..00000000
--- a/docs/html/_modules/composer/cli/upload.html
+++ /dev/null

Source code for composer.cli.upload

-#
-# Copyright (C) 2019  Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-import logging
-log = logging.getLogger("composer-cli")
-
-import json
-import toml
-import os
-
-from composer import http_client as client
-from composer.cli.help import upload_help
-from composer.cli.utilities import handle_api_result
-
-
```python
def upload_cmd(opts):
    """Process upload commands

    :param opts: Cmdline arguments
    :type opts: argparse.Namespace
    :returns: Value to return from sys.exit()
    :rtype: int

    This dispatches the upload commands to a function
    """
    cmd_map = {
        "list":   upload_list,
        "info":   upload_info,
        "start":  upload_start,
        "log":    upload_log,
        "cancel": upload_cancel,
        "delete": upload_delete,
        "reset":  upload_reset,
    }
    if opts.args[1] == "help" or opts.args[1] == "--help":
        print(upload_help)
        return 0
    elif opts.args[1] not in cmd_map:
        log.error("Unknown upload command: %s", opts.args[1])
        return 1

    return cmd_map[opts.args[1]](opts.socket, opts.api_version, opts.args[2:], opts.json, opts.testmode)
```
- -
```python
def upload_list(socket_path, api_version, args, show_json=False, testmode=0):
    """Return the composes and their associated upload uuids and status

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param api_version: Version of the API to talk to. eg. "0"
    :type api_version: str
    :param args: List of remaining arguments from the cmdline
    :type args: list of str
    :param show_json: Set to True to show the JSON output instead of the human readable output
    :type show_json: bool
    :param testmode: unused in this function
    :type testmode: int

    upload list
    """
    api_route = client.api_url(api_version, "/compose/finished")
    r = client.get_url_json(socket_path, api_route)
    results = r["finished"]
    if not results:
        return 0

    if show_json:
        print(json.dumps(results, indent=4))
    else:
        compose_fmt = "{id} {queue_status} {blueprint} {version} {compose_type}"
        upload_fmt = '    {uuid} "{image_name}" {provider_name} {status}'
        for c in results:
            print(compose_fmt.format(**c))
            print("\n".join(upload_fmt.format(**u) for u in c["uploads"]))
            print()

    return 0
```
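To make the two format strings concrete, here is a short sketch that renders them with a made-up compose record. The keys come from the function above; the uuids, blueprint name, and provider are hypothetical:

```python
# Render upload_list()'s compose and upload line formats with sample data.
compose_fmt = "{id} {queue_status} {blueprint} {version} {compose_type}"
upload_fmt = '    {uuid} "{image_name}" {provider_name} {status}'

compose = {"id": "123e4567", "queue_status": "FINISHED", "blueprint": "http-server",
           "version": "0.0.1", "compose_type": "qcow2",
           "uploads": [{"uuid": "aaaa1111", "image_name": "http-server-test",
                        "provider_name": "aws", "status": "FINISHED"}]}

print(compose_fmt.format(**compose))
# 123e4567 FINISHED http-server 0.0.1 qcow2
print("\n".join(upload_fmt.format(**u) for u in compose["uploads"]))
#     aaaa1111 "http-server-test" aws FINISHED
```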
- -
```python
def upload_info(socket_path, api_version, args, show_json=False, testmode=0):
    """Return detailed information about the upload

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param api_version: Version of the API to talk to. eg. "0"
    :type api_version: str
    :param args: List of remaining arguments from the cmdline
    :type args: list of str
    :param show_json: Set to True to show the JSON output instead of the human readable output
    :type show_json: bool
    :param testmode: unused in this function
    :type testmode: int

    upload info <uuid>

    This returns information about the upload, including uuid, name, status, service, and image.
    """
    if len(args) == 0:
        log.error("info is missing the upload uuid")
        return 1

    api_route = client.api_url(api_version, "/upload/info/%s" % args[0])
    result = client.get_url_json(socket_path, api_route)
    (rc, exit_now) = handle_api_result(result, show_json)
    if exit_now:
        return rc

    image_path = result["upload"]["image_path"]
    print("%s %-8s %-15s %-8s %s" % (result["upload"]["uuid"],
                                     result["upload"]["status"],
                                     result["upload"]["image_name"],
                                     result["upload"]["provider_name"],
                                     os.path.basename(image_path) if image_path else "UNFINISHED"))

    return rc
```
- -
```python
def upload_start(socket_path, api_version, args, show_json=False, testmode=0):
    """Start an upload of a build's image to a provider

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param api_version: Version of the API to talk to. eg. "0"
    :type api_version: str
    :param args: List of remaining arguments from the cmdline
    :type args: list of str
    :param show_json: Set to True to show the JSON output instead of the human readable output
    :type show_json: bool
    :param testmode: unused in this function
    :type testmode: int

    upload start <build-uuid> <image-name> [<provider> <profile> | <profile.toml>]
    """
    if len(args) == 0:
        log.error("start is missing the compose build id")
        return 1
    if len(args) == 1:
        log.error("start is missing the image name")
        return 1
    if len(args) == 2:
        log.error("start is missing the provider and profile details")
        return 1

    body = {"image_name": args[1]}
    if len(args) == 3:
        try:
            body.update(toml.load(args[2]))
        except toml.TomlDecodeError as e:
            log.error(str(e))
            return 1
    elif len(args) == 4:
        body["provider"] = args[2]
        body["profile"] = args[3]
    else:
        log.error("start has incorrect number of arguments")
        return 1

    api_route = client.api_url(api_version, "/compose/uploads/schedule/%s" % args[0])
    result = client.post_url_json(socket_path, api_route, json.dumps(body))
    (rc, exit_now) = handle_api_result(result, show_json)
    if exit_now:
        return rc

    print("Upload %s added to the queue" % result["upload_id"])
    return rc
```
- -
```python
def upload_log(socket_path, api_version, args, show_json=False, testmode=0):
    """Return the upload log

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param api_version: Version of the API to talk to. eg. "0"
    :type api_version: str
    :param args: List of remaining arguments from the cmdline
    :type args: list of str
    :param show_json: Set to True to show the JSON output instead of the human readable output
    :type show_json: bool
    :param testmode: unused in this function
    :type testmode: int

    upload log <upload-uuid>
    """
    if len(args) == 0:
        log.error("log is missing the upload uuid")
        return 1

    api_route = client.api_url(api_version, "/upload/log/%s" % args[0])
    result = client.get_url_json(socket_path, api_route)
    (rc, exit_now) = handle_api_result(result, show_json)
    if exit_now:
        return rc

    print("Upload log for %s:\n" % result["upload_id"])
    print(result["log"])

    return 0
```
- -
```python
def upload_cancel(socket_path, api_version, args, show_json=False, testmode=0):
    """Cancel the queued or running upload

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param api_version: Version of the API to talk to. eg. "0"
    :type api_version: str
    :param args: List of remaining arguments from the cmdline
    :type args: list of str
    :param show_json: Set to True to show the JSON output instead of the human readable output
    :type show_json: bool
    :param testmode: unused in this function
    :type testmode: int

    upload cancel <upload-uuid>
    """
    if len(args) == 0:
        log.error("cancel is missing the upload uuid")
        return 1

    api_route = client.api_url(api_version, "/upload/cancel/%s" % args[0])
    result = client.delete_url_json(socket_path, api_route)
    return handle_api_result(result, show_json)[0]
```
- -
```python
def upload_delete(socket_path, api_version, args, show_json=False, testmode=0):
    """Delete an upload and remove it from the build

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param api_version: Version of the API to talk to. eg. "0"
    :type api_version: str
    :param args: List of remaining arguments from the cmdline
    :type args: list of str
    :param show_json: Set to True to show the JSON output instead of the human readable output
    :type show_json: bool
    :param testmode: unused in this function
    :type testmode: int

    upload delete <upload-uuid>
    """
    if len(args) == 0:
        log.error("delete is missing the upload uuid")
        return 1

    api_route = client.api_url(api_version, "/upload/delete/%s" % args[0])
    result = client.delete_url_json(socket_path, api_route)
    return handle_api_result(result, show_json)[0]
```
- -
```python
def upload_reset(socket_path, api_version, args, show_json=False, testmode=0):
    """Reset the upload and execute it again

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param api_version: Version of the API to talk to. eg. "0"
    :type api_version: str
    :param args: List of remaining arguments from the cmdline
    :type args: list of str
    :param show_json: Set to True to show the JSON output instead of the human readable output
    :type show_json: bool
    :param testmode: unused in this function
    :type testmode: int

    upload reset <upload-uuid>
    """
    if len(args) == 0:
        log.error("reset is missing the upload uuid")
        return 1

    api_route = client.api_url(api_version, "/upload/reset/%s" % args[0])
    result = client.post_url_json(socket_path, api_route, json.dumps({}))
    return handle_api_result(result, show_json)[0]
```
diff --git a/docs/html/_modules/composer/cli/utilities.html b/docs/html/_modules/composer/cli/utilities.html
deleted file mode 100644
index 1548aeae..00000000
--- a/docs/html/_modules/composer/cli/utilities.html
+++ /dev/null

Source code for composer.cli.utilities

-#
-# Copyright (C) 2018  Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-import logging
-log = logging.getLogger("composer-cli")
-
-import json
-
-
```python
def argify(args):
    """Split comma- and space-separated arguments into a flat list of items

    :param args: list of strings with possible commas and spaces
    :type args: list of str
    :returns: List of all the items
    :rtype: list of str

    Examples:

    ["one,two", "three", ",four", ",five,"] returns ["one", "two", "three", "four", "five"]
    """
    return [i for i in [arg for entry in args for arg in entry.split(",")] if i]
```
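The splitting behaviour is easiest to see by running it. Re-implemented here so the example is self-contained:

```python
# argify flattens comma-separated entries and drops empty strings left
# over by leading/trailing commas:
def argify(args):
    return [i for i in [arg for entry in args for arg in entry.split(",")] if i]

print(argify(["one,two", "three", ",four", ",five,"]))
# ['one', 'two', 'three', 'four', 'five']
```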
- -
```python
def toml_filename(blueprint_name):
    """Convert a blueprint name into a filename.toml

    :param blueprint_name: The blueprint's name
    :type blueprint_name: str
    :returns: The blueprint name with ' ' converted to - and .toml appended
    :rtype: str
    """
    return blueprint_name.replace(" ", "-") + ".toml"
```
- -
```python
def frozen_toml_filename(blueprint_name):
    """Convert a blueprint name into a filename.frozen.toml

    :param blueprint_name: The blueprint's name
    :type blueprint_name: str
    :returns: The blueprint name with ' ' converted to - and .frozen.toml appended
    :rtype: str
    """
    return blueprint_name.replace(" ", "-") + ".frozen.toml"
```
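Both helpers do the same dash substitution, differing only in the suffix. A quick self-contained check:

```python
# Blueprint names may contain spaces; the filename helpers swap them for
# dashes before appending the suffix.
def toml_filename(blueprint_name):
    return blueprint_name.replace(" ", "-") + ".toml"

def frozen_toml_filename(blueprint_name):
    return blueprint_name.replace(" ", "-") + ".frozen.toml"

print(toml_filename("http server"))          # http-server.toml
print(frozen_toml_filename("http server"))   # http-server.frozen.toml
```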
- -
```python
def handle_api_result(result, show_json=False):
    """Log any errors, return the correct value

    :param result: JSON result from the http query
    :type result: dict
    :rtype: tuple
    :returns: (rc, should_exit_now)

    Return the correct rc for the program (0 or 1), and whether or
    not to continue processing the results.
    """
    if show_json:
        print(json.dumps(result, indent=4))
    else:
        for err in result.get("errors", []):
            log.error(err["msg"])

    # What's the rc? If status is present, use that
    # If not, use length of errors
    if "status" in result:
        rc = int(not result["status"])
    else:
        rc = int(len(result.get("errors", [])) > 0)

    # Caller should return if showing json, or status was present and False
    exit_now = show_json or ("status" in result and rc)
    return (rc, exit_now)
```
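The rc/exit_now decision is subtle enough to be worth tracing with sample results. This condensed re-implementation drops the logging/JSON printing and keeps only the return logic:

```python
# Condensed version of handle_api_result()'s return logic (logging omitted).
def handle_api_result(result, show_json=False):
    if "status" in result:
        rc = int(not result["status"])
    else:
        rc = int(len(result.get("errors", [])) > 0)
    exit_now = show_json or ("status" in result and rc)
    return (rc, exit_now)

print(handle_api_result({"status": True}))               # rc 0, keep processing
print(handle_api_result({"errors": [{"msg": "boom"}]}))  # rc 1, keep processing
print(handle_api_result({"status": False}))              # rc 1, caller should exit
```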
- -
```python
def packageNEVRA(pkg):
    """Return the package info as a NEVRA

    :param pkg: The package details
    :type pkg: dict
    :returns: name-[epoch:]version-release-arch
    :rtype: str
    """
    if pkg["epoch"]:
        return "%s-%s:%s-%s.%s" % (pkg["name"], pkg["epoch"], pkg["version"], pkg["release"], pkg["arch"])
    else:
        return "%s-%s-%s.%s" % (pkg["name"], pkg["version"], pkg["release"], pkg["arch"])
```
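Note that the epoch only appears in the output when it is non-zero. A self-contained demonstration (package details are sample values):

```python
# NEVRA formatting: the epoch is included only when truthy.
def packageNEVRA(pkg):
    if pkg["epoch"]:
        return "%s-%s:%s-%s.%s" % (pkg["name"], pkg["epoch"], pkg["version"], pkg["release"], pkg["arch"])
    return "%s-%s-%s.%s" % (pkg["name"], pkg["version"], pkg["release"], pkg["arch"])

print(packageNEVRA({"name": "bash", "epoch": 0, "version": "5.1.0", "release": "2", "arch": "x86_64"}))
# bash-5.1.0-2.x86_64
print(packageNEVRA({"name": "bash", "epoch": 2, "version": "5.1.0", "release": "2", "arch": "x86_64"}))
# bash-2:5.1.0-2.x86_64
```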
- -
```python
def get_arg(args, name, argtype=None):
    """Return optional value from args, and remaining args

    :param args: list of arguments
    :type args: list of strings
    :param name: The argument to remove from the args list
    :type name: string
    :param argtype: Type to use for checking the argument value
    :type argtype: type
    :returns: (args, value)
    :rtype: tuple

    This removes the optional argument and value from the argument list, returns the new list,
    and the value of the argument.
    """
    try:
        idx = args.index(name)
        if len(args) < idx+2:
            raise RuntimeError(f"{name} is missing the value")
        value = args[idx+1]
    except ValueError:
        return (args, None)

    if argtype:
        value = argtype(value)

    return (args[:idx]+args[idx+2:], value)
```
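In other words, `get_arg` pulls an optional `--name value` pair out of an argument list and hands back the remainder. A runnable re-implementation (the `--size` flag is a made-up example):

```python
# Remove an optional "--name value" pair from an argument list, optionally
# converting the value with argtype.
def get_arg(args, name, argtype=None):
    try:
        idx = args.index(name)
        if len(args) < idx + 2:
            raise RuntimeError(f"{name} is missing the value")
        value = args[idx + 1]
    except ValueError:
        return (args, None)
    if argtype:
        value = argtype(value)
    return (args[:idx] + args[idx + 2:], value)

print(get_arg(["start", "--size", "20", "qcow2"], "--size", int))
# (['start', 'qcow2'], 20)
print(get_arg(["start", "qcow2"], "--size"))
# (['start', 'qcow2'], None)
```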
diff --git a/docs/html/_modules/composer/http_client.html b/docs/html/_modules/composer/http_client.html
deleted file mode 100644
index 69b73987..00000000
--- a/docs/html/_modules/composer/http_client.html
+++ /dev/null

Source code for composer.http_client

-#
-# Copyright (C) 2018  Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-import logging
-log = logging.getLogger("composer-cli")
-
-import os
-import sys
-import json
-from urllib.parse import urlparse, urlunparse
-
-from composer.unix_socket import UnixHTTPConnectionPool
-
-
```python
def api_url(api_version, url):
    """Return the versioned path to the API route

    :param api_version: The version of the API to talk to. eg. "0"
    :type api_version: str
    :param url: The API route to talk to
    :type url: str
    :returns: The full url to use for the route and API version
    :rtype: str
    """
    return os.path.normpath("/api/v%s/%s" % (api_version, url))
```
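`os.path.normpath` takes care of any doubled slashes, so routes may be passed with or without a leading `/`:

```python
import os

# api_url() joins the version prefix and the route, normalizing slashes.
def api_url(api_version, url):
    return os.path.normpath("/api/v%s/%s" % (api_version, url))

print(api_url("0", "/blueprints/list"))   # /api/v0/blueprints/list
print(api_url("1", "projects/info"))      # /api/v1/projects/info
```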
- -
```python
def append_query(url, query):
    """Add a query argument to a URL

    The query should be of the form "param1=what&param2=ever", i.e., no
    leading '?'. The new query data will be appended to any existing
    query string.

    :param url: The original URL
    :type url: str
    :param query: The query to append
    :type query: str
    :returns: The new URL with the query argument included
    :rtype: str
    """

    url_parts = urlparse(url)
    if url_parts.query:
        new_query = url_parts.query + "&" + query
    else:
        new_query = query
    return urlunparse([url_parts[0], url_parts[1], url_parts[2],
                       url_parts[3], new_query, url_parts[5]])
```
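The round-trip through `urlparse`/`urlunparse` is what lets new parameters be appended safely whether or not the URL already has a query string:

```python
from urllib.parse import urlparse, urlunparse

# append_query() preserves an existing query string and appends to it.
def append_query(url, query):
    p = urlparse(url)
    new_query = p.query + "&" + query if p.query else query
    return urlunparse([p[0], p[1], p[2], p[3], new_query, p[5]])

print(append_query("/api/v0/compose/finished", "limit=0"))
# /api/v0/compose/finished?limit=0
print(append_query("/api/v0/modules/list?limit=0", "format=toml"))
# /api/v0/modules/list?limit=0&format=toml
```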
- -
```python
def get_url_raw(socket_path, url):
    """Return the raw results of a GET request

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param url: URL to request
    :type url: str
    :returns: The raw response from the server
    :rtype: str
    """
    http = UnixHTTPConnectionPool(socket_path)
    r = http.request("GET", url)
    if r.status == 400:
        err = json.loads(r.data.decode("utf-8"))
        if "status" in err and err["status"] == False:
            msgs = [e["msg"] for e in err["errors"]]
            raise RuntimeError(", ".join(msgs))

    return r.data.decode('utf-8')
```
- -
```python
def get_url_json(socket_path, url):
    """Return the JSON results of a GET request

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param url: URL to request
    :type url: str
    :returns: The json response from the server
    :rtype: dict
    """
    http = UnixHTTPConnectionPool(socket_path)
    r = http.request("GET", url)
    return json.loads(r.data.decode('utf-8'))
```
- -
```python
def get_url_json_unlimited(socket_path, url, total_fn=None):
    """Return the JSON results of a GET request

    For URLs that use offset/limit arguments, this command will
    fetch all results for the given request.

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param url: URL to request
    :type url: str
    :param total_fn: Optional function that extracts the total number of results
                     from the limit=0 response; defaults to reading the "total" field
    :returns: The json response from the server
    :rtype: dict
    """
    def default_total_fn(data):
        """Return the total number of available results"""
        return data["total"]

    http = UnixHTTPConnectionPool(socket_path)

    # Start with limit=0 to just get the number of objects
    total_url = append_query(url, "limit=0")
    r_total = http.request("GET", total_url)
    json_total = json.loads(r_total.data.decode('utf-8'))

    # Where to get the total from
    if not total_fn:
        total_fn = default_total_fn

    # Add the "total" returned by limit=0 as the new limit
    unlimited_url = append_query(url, "limit=%d" % total_fn(json_total))
    r_unlimited = http.request("GET", unlimited_url)
    return json.loads(r_unlimited.data.decode('utf-8'))
```
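The two-request pattern above (probe with `limit=0`, then re-request with `limit=total`) can be demonstrated without a server. `fake_get` below is a dict-backed stand-in for the WELDR endpoint, with entirely made-up data:

```python
# Demonstrate the probe-then-fetch pagination pattern against a fake endpoint.
DATA = [f"pkg-{i}" for i in range(250)]

def fake_get(limit):
    """Stand-in for the HTTP GET: honors a limit the way the API does."""
    return {"total": len(DATA), "items": DATA[:limit]}

# First request: limit=0 returns no items, but reports how many exist.
total = fake_get(0)["total"]
# Second request: re-issue with limit=total to fetch everything at once.
items = fake_get(total)["items"]
print(total, len(items))   # 250 250
```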
- -
```python
def delete_url_json(socket_path, url):
    """Send a DELETE request to the url and return JSON response

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param url: URL to send DELETE to
    :type url: str
    :returns: The json response from the server
    :rtype: dict
    """
    http = UnixHTTPConnectionPool(socket_path)
    r = http.request("DELETE", url)
    return json.loads(r.data.decode("utf-8"))
```
- -
```python
def post_url(socket_path, url, body):
    """POST raw data to the URL

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param url: URL to send POST to
    :type url: str
    :param body: The data for the body of the POST
    :type body: str
    :returns: The json response from the server
    :rtype: dict
    """
    http = UnixHTTPConnectionPool(socket_path)
    r = http.request("POST", url,
                     body=body.encode("utf-8"))
    return json.loads(r.data.decode("utf-8"))
```
- -
```python
def post_url_toml(socket_path, url, body):
    """POST a TOML string to the URL

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param url: URL to send POST to
    :type url: str
    :param body: The data for the body of the POST
    :type body: str
    :returns: The json response from the server
    :rtype: dict
    """
    http = UnixHTTPConnectionPool(socket_path)
    r = http.request("POST", url,
                     body=body.encode("utf-8"),
                     headers={"Content-Type": "text/x-toml"})
    return json.loads(r.data.decode("utf-8"))
```
- -
```python
def post_url_json(socket_path, url, body):
    """POST some JSON data to the URL

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param url: URL to send POST to
    :type url: str
    :param body: The data for the body of the POST
    :type body: str
    :returns: The json response from the server
    :rtype: dict
    """
    http = UnixHTTPConnectionPool(socket_path)
    r = http.request("POST", url,
                     body=body.encode("utf-8"),
                     headers={"Content-Type": "application/json"})
    return json.loads(r.data.decode("utf-8"))
```
- -
```python
def get_filename(headers):
    """Get the filename from the response header

    :param headers: The response headers from urllib3
    :type headers: dict-like
    :raises: RuntimeError if it cannot find a filename in the header
    :returns: Filename from content-disposition header
    :rtype: str
    """
    log.debug("Headers = %s", headers)
    if "content-disposition" not in headers:
        raise RuntimeError("No Content-Disposition header; cannot get filename")

    try:
        k, _, v = headers["content-disposition"].split(";")[1].strip().partition("=")
        if k != "filename":
            raise RuntimeError("No filename= found in content-disposition header")
    except RuntimeError:
        raise
    except Exception as e:
        raise RuntimeError("Error parsing filename from content-disposition header: %s" % str(e))

    return os.path.basename(v)
```
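A header of the form `attachment; filename=...` is split on `;`, the second field is partitioned on `=`, and only the basename survives. The parsing core, reduced for illustration (the header value is a made-up example):

```python
import os

# Core of get_filename(): pull the filename out of Content-Disposition.
def get_filename(headers):
    if "content-disposition" not in headers:
        raise RuntimeError("No Content-Disposition header; cannot get filename")
    k, _, v = headers["content-disposition"].split(";")[1].strip().partition("=")
    if k != "filename":
        raise RuntimeError("No filename= found in content-disposition header")
    return os.path.basename(v)   # basename blocks path traversal in the header

print(get_filename({"content-disposition": "attachment; filename=uuid-disk.qcow2"}))
# uuid-disk.qcow2
```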
- -
```python
def download_file(socket_path, url, progress=True):
    """Download a file, saving it to the CWD with the included filename

    :param socket_path: Path to the Unix socket to use for API communication
    :type socket_path: str
    :param url: URL to download
    :type url: str
    :param progress: Set to True to print download progress to stdout
    :type progress: bool
    """
    http = UnixHTTPConnectionPool(socket_path)
    r = http.request("GET", url, preload_content=False)
    if r.status == 400:
        err = json.loads(r.data.decode("utf-8"))
        if not err["status"]:
            msgs = [e["msg"] for e in err["errors"]]
            raise RuntimeError(", ".join(msgs))

    filename = get_filename(r.headers)
    if os.path.exists(filename):
        msg = "%s exists, skipping download" % filename
        log.error(msg)
        raise RuntimeError(msg)

    with open(filename, "wb") as f:
        while True:
            data = r.read(10 * 1024**2)
            if not data:
                break
            f.write(data)

            if progress:
                data_written = f.tell()
                if data_written > 5 * 1024**2:
                    sys.stdout.write("%s: %0.2f MB  \r" % (filename, data_written / 1024**2))
                else:
                    sys.stdout.write("%s: %0.2f kB\r" % (filename, data_written / 1024))
                sys.stdout.flush()

    print("")
    r.release_conn()

    return 0
```
diff --git a/docs/html/_modules/composer/unix_socket.html b/docs/html/_modules/composer/unix_socket.html
deleted file mode 100644
index 82ee83f0..00000000
--- a/docs/html/_modules/composer/unix_socket.html
+++ /dev/null

Source code for composer.unix_socket

-#
-# Copyright (C) 2018  Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-import http.client
-import socket
-import urllib3
-
-
-# These 2 classes were adapted and simplified for use with just urllib3.
-# Originally from https://github.com/msabramo/requests-unixsocket/blob/master/requests_unixsocket/adapters.py
-
-# The following was adapted from some code from docker-py
-# https://github.com/docker/docker-py/blob/master/docker/transport/unixconn.py
-
```python
class UnixHTTPConnection(http.client.HTTPConnection, object):

    def __init__(self, socket_path, timeout=60*5):
        """Create an HTTP connection to a unix domain socket

        :param socket_path: The path to the Unix domain socket
        :param timeout: Number of seconds to timeout the connection
        """
        super(UnixHTTPConnection, self).__init__('localhost', timeout=timeout)
        self.socket_path = socket_path
        self.sock = None

    def __del__(self):  # base class does not have d'tor
        if self.sock:
            self.sock.close()

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.settimeout(self.timeout)
        sock.connect(self.socket_path)
        self.sock = sock


class UnixHTTPConnectionPool(urllib3.connectionpool.HTTPConnectionPool):

    def __init__(self, socket_path, timeout=60*5):
        """Create a connection pool using a Unix domain socket

        :param socket_path: The path to the Unix domain socket
        :param timeout: Number of seconds to timeout the connection

        NOTE: retries are disabled for these connections, they are never useful
        """
        super(UnixHTTPConnectionPool, self).__init__('localhost', timeout=timeout, retries=False)
        self.socket_path = socket_path

    def _new_conn(self):
        return UnixHTTPConnection(self.socket_path, self.timeout)
```
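The key trick is the overridden `connect()`: instead of a TCP connection to a host/port, it opens an `AF_UNIX` stream socket to a filesystem path. The core mechanics can be demonstrated with the standard library alone (no urllib3, no running WELDR server; the socket path and payload are made up):

```python
import os
import socket
import tempfile
import threading

# Minimal demonstration of the AF_UNIX connect pattern used above.
path = os.path.join(tempfile.mkdtemp(), "demo.socket")

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)
server.listen(1)

def serve_once():
    conn, _ = server.accept()
    conn.sendall(conn.recv(1024))  # echo one message back
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.settimeout(60 * 5)    # same default timeout as the pool above
client.connect(path)         # this line is the essence of connect()
client.sendall(b"ping")
reply = client.recv(1024)
client.close()
t.join()
server.close()
print(reply)   # b'ping'
```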
diff --git a/docs/html/_modules/index.html b/docs/html/_modules/index.html
index 3c94454e..c2b180fb 100644
--- a/docs/html/_modules/index.html
+++ b/docs/html/_modules/index.html
@@ -1,38 +1,38 @@
-    Overview: module code — Lorax 35.0 documentation
+    Overview: module code — Lorax 35.1 documentation
-                35.0
+                35.1

Source code for lifted.config

-#
-# Copyright (C) 2019  Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-from pylorax.sysutils import joinpaths
-
-
```python
def configure(conf):
    """Add lifted settings to the configuration

    :param conf: configuration object
    :type conf: ComposerConfig
    :returns: None

    This uses the composer.share_dir and composer.lib_dir as the base
    directories for the settings.
    """
    share_dir = conf.get("composer", "share_dir")
    lib_dir = conf.get("composer", "lib_dir")

    conf.add_section("upload")
    conf.set("upload", "providers_dir", joinpaths(share_dir, "/lifted/providers/"))
    conf.set("upload", "queue_dir", joinpaths(lib_dir, "/upload/queue/"))
    conf.set("upload", "settings_dir", joinpaths(lib_dir, "/upload/settings/"))
```
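A sketch of what `configure()` produces. Two stand-ins are assumptions made so the example runs standalone: `configparser.ConfigParser` in place of `ComposerConfig`, and a simple concatenating join in place of `pylorax.sysutils.joinpaths` (unlike `os.path.join`, it must not discard the base directory when the second argument starts with `/`). The directory values are typical examples, not guaranteed defaults:

```python
import os
from configparser import ConfigParser

# Stand-in for pylorax's joinpaths: concatenate, then normalize the slashes.
def joinpaths(*args):
    return os.path.normpath("/".join(args))

conf = ConfigParser()
conf.add_section("composer")
conf.set("composer", "share_dir", "/usr/share/lorax")
conf.set("composer", "lib_dir", "/var/lib/lorax/composer")

# Mirror of configure(): derive the upload directories from the base dirs.
conf.add_section("upload")
conf.set("upload", "providers_dir", joinpaths(conf.get("composer", "share_dir"), "/lifted/providers/"))
conf.set("upload", "queue_dir", joinpaths(conf.get("composer", "lib_dir"), "/upload/queue/"))
conf.set("upload", "settings_dir", joinpaths(conf.get("composer", "lib_dir"), "/upload/settings/"))

print(conf.get("upload", "providers_dir"))   # /usr/share/lorax/lifted/providers
```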
diff --git a/docs/html/_modules/lifted/providers.html b/docs/html/_modules/lifted/providers.html
deleted file mode 100644
index 2e4cf211..00000000
--- a/docs/html/_modules/lifted/providers.html
+++ /dev/null

Source code for lifted.providers

-#
-# Copyright (C) 2019 Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-
-from glob import glob
-import os
-import re
-import stat
-
-import pylorax.api.toml as toml
-
-
-def _get_profile_path(ucfg, provider_name, profile, exists=True):
-    """Helper to return the directory and path for a provider's profile file
-
-    :param ucfg: upload config
-    :type ucfg: object
-    :param provider_name: the name of the cloud provider, e.g. "azure"
-    :type provider_name: str
-    :param profile: the name of the profile to save
-    :type profile: str != ""
-    :returns: Full path of the profile .toml file
-    :rtype: str
-    :raises: ValueError when passed invalid settings or an invalid profile name
-    :raises: RuntimeError when the provider or profile couldn't be found
-    """
-    # Make sure no path elements are present
-    profile = os.path.basename(profile)
-    provider_name = os.path.basename(provider_name)
-    if not profile:
-        raise ValueError("Profile name cannot be empty!")
-    if not provider_name:
-        raise ValueError("Provider name cannot be empty!")
-
-    directory = os.path.join(ucfg["settings_dir"], provider_name)
-    # create the settings directory if it doesn't exist
-    os.makedirs(directory, exist_ok=True)
-
-    path = os.path.join(directory, f"{profile}.toml")
-    if exists and not os.path.isfile(path):
-        raise RuntimeError(f'Couldn\'t find profile "{profile}"!')
-
-    return os.path.abspath(path)
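The `os.path.basename` calls above are the path-traversal guard for user-supplied profile and provider names. A quick standalone illustration of the idea (not part of lorax itself):

```python
import os

# os.path.basename drops any directory components, so a crafted profile
# name like "../../etc/passwd" cannot escape the settings directory
hostile = os.path.basename("../../etc/passwd")
clean = os.path.basename("default")
```

Because only the final path component survives, `os.path.join(settings_dir, provider_name)` can never point outside the settings directory.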
-
-
[docs]def resolve_provider(ucfg, provider_name): - """Get information about the specified provider as defined in that - provider's `provider.toml`, including the provider's display name and expected - settings. - - At a minimum, each setting has a display name (that likely differs from its - snake_case name) and a type. Currently, there are two types of settings: - string and boolean. String settings can optionally have a "placeholder" - value for use on the front end and a "regex" for making sure that a value - follows an expected pattern. - - :param ucfg: upload config - :type ucfg: object - :param provider_name: the name of the provider to look for - :type provider_name: str - :raises: RuntimeError when the provider couldn't be found - :returns: the provider - :rtype: dict - """ - # Make sure no path elements are present - provider_name = os.path.basename(provider_name) - path = os.path.join(ucfg["providers_dir"], provider_name, "provider.toml") - try: - with open(path) as provider_file: - provider = toml.load(provider_file) - except OSError as error: - raise RuntimeError(f'Couldn\'t find provider "{provider_name}"!') from error - - return provider
- - -
[docs]def load_profiles(ucfg, provider_name): - """Return all settings profiles associated with a provider - - :param ucfg: upload config - :type ucfg: object - :param provider_name: name a provider to find profiles for - :type provider_name: str - :returns: a dict of settings dicts, keyed by profile name - :rtype: dict - """ - # Make sure no path elements are present - provider_name = os.path.basename(provider_name) - - def load_path(path): - with open(path) as file: - return toml.load(file) - - def get_name(path): - return os.path.splitext(os.path.basename(path))[0] - - paths = glob(os.path.join(ucfg["settings_dir"], provider_name, "*")) - return {get_name(path): load_path(path) for path in paths}
- - -
[docs]def resolve_playbook_path(ucfg, provider_name): - """Given a provider's name, return the path to its playbook - - :param ucfg: upload config - :type ucfg: object - :param provider_name: the name of the provider to find the playbook for - :type provider_name: str - :raises: RuntimeError when the provider couldn't be found - :returns: the path to the playbook - :rtype: str - """ - # Make sure no path elements are present - provider_name = os.path.basename(provider_name) - - path = os.path.join(ucfg["providers_dir"], provider_name, "playbook.yaml") - if not os.path.isfile(path): - raise RuntimeError(f'Couldn\'t find playbook for "{provider_name}"!') - return path
- - -
[docs]def list_providers(ucfg): - """List the names of the available upload providers - - :param ucfg: upload config - :type ucfg: object - :returns: a list of all available provider_names - :rtype: list of str - """ - paths = glob(os.path.join(ucfg["providers_dir"], "*")) - return sorted(os.path.basename(path) for path in paths)
- - -
[docs]def validate_settings(ucfg, provider_name, settings, image_name=None): - """Raise a ValueError if any settings are invalid - - :param ucfg: upload config - :type ucfg: object - :param provider_name: the name of the provider to validate the settings against - :type provider_name: str - :param settings: the settings to validate - :type settings: dict - :param image_name: optionally check whether an image_name is valid - :type image_name: str - :raises: ValueError when the passed settings are invalid - :raises: RuntimeError when provider_name can't be found - """ - if image_name == "": - raise ValueError("Image name cannot be empty!") - type_map = {"string": str, "boolean": bool} - settings_info = resolve_provider(ucfg, provider_name)["settings-info"] - for key, value in settings.items(): - if key not in settings_info: - raise ValueError(f'Received unexpected setting: "{key}"!') - setting_type = settings_info[key]["type"] - correct_type = type_map[setting_type] - if not isinstance(value, correct_type): - raise ValueError( - f'Expected a {correct_type} for "{key}", received a {type(value)}!' - ) - if setting_type == "string" and "regex" in settings_info[key]: - if not re.match(settings_info[key]["regex"], value): - raise ValueError(f'Value "{value}" is invalid for setting "{key}"!')
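The type and regex checks that `validate_settings` performs can be exercised standalone. In this sketch the settings-info table is inlined; the real table comes from the provider's `provider.toml`, and the example keys and regex below are made up for illustration, not lorax's actual AWS schema:

```python
import re

TYPE_MAP = {"string": str, "boolean": bool}

def check_settings(settings_info, settings):
    # Same shape as validate_settings: unknown keys, wrong types, and
    # regex mismatches all raise ValueError
    for key, value in settings.items():
        if key not in settings_info:
            raise ValueError(f'Received unexpected setting: "{key}"!')
        setting_type = settings_info[key]["type"]
        if not isinstance(value, TYPE_MAP[setting_type]):
            raise ValueError(f'Expected a {TYPE_MAP[setting_type]} for "{key}"!')
        if setting_type == "string" and "regex" in settings_info[key]:
            if not re.match(settings_info[key]["regex"], value):
                raise ValueError(f'Value "{value}" is invalid for setting "{key}"!')

info = {
    "aws_region": {"type": "string", "regex": r"^[a-z]{2}-[a-z]+-\d$"},
    "public": {"type": "boolean"},
}
check_settings(info, {"aws_region": "us-east-1", "public": True})  # passes
```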
- - -
[docs]def save_settings(ucfg, provider_name, profile, settings): - """Save (and overwrite) settings for a given provider - - :param ucfg: upload config - :type ucfg: object - :param provider_name: the name of the cloud provider, e.g. "azure" - :type provider_name: str - :param profile: the name of the profile to save - :type profile: str != "" - :param settings: settings to save for that provider - :type settings: dict - :raises: ValueError when passed invalid settings or an invalid profile name - """ - path = _get_profile_path(ucfg, provider_name, profile, exists=False) - validate_settings(ucfg, provider_name, settings, image_name=None) - - # touch the TOML file if it doesn't exist - if not os.path.isfile(path): - open(path, "a").close() - - # make sure settings files aren't readable by others, as they will contain - # sensitive credentials - current = stat.S_IMODE(os.lstat(path).st_mode) - os.chmod(path, current & ~stat.S_IROTH) - - with open(path, "w") as settings_file: - toml.dump(settings, settings_file)
- -
[docs]def load_settings(ucfg, provider_name, profile): - """Load settings for a provider's profile - - :param ucfg: upload config - :type ucfg: object - :param provider_name: the name of the cloud provider, e.g. "azure" - :type provider_name: str - :param profile: the name of the profile to save - :type profile: str != "" - :returns: The profile settings for the selected provider - :rtype: dict - :raises: ValueError when passed invalid settings or an invalid profile name - :raises: RuntimeError when the provider or profile couldn't be found - :raises: ValueError when the passed settings are invalid - - This also calls validate_settings on the loaded settings, potentially - raising an error if the saved settings are invalid. - """ - path = _get_profile_path(ucfg, provider_name, profile) - - with open(path) as file: - settings = toml.load(file) - validate_settings(ucfg, provider_name, settings) - return settings
- -
[docs]def delete_profile(ucfg, provider_name, profile): - """Delete a provider's profile settings file - - :param ucfg: upload config - :type ucfg: object - :param provider_name: the name of the cloud provider, e.g. "azure" - :type provider_name: str - :param profile: the name of the profile to save - :type profile: str != "" - :raises: ValueError when passed invalid settings or an invalid profile name - :raises: RuntimeError when the provider or profile couldn't be found - """ - path = _get_profile_path(ucfg, provider_name, profile) - - if os.path.exists(path): - os.unlink(path)
\ No newline at end of file
diff --git a/docs/html/_modules/lifted/queue.html b/docs/html/_modules/lifted/queue.html
deleted file mode 100644
index ad8542ba..00000000
--- a/docs/html/_modules/lifted/queue.html
+++ /dev/null
@@ -1,467 +0,0 @@
-lifted.queue — Lorax 35.0 documentation

Source code for lifted.queue

-#
-# Copyright (C) 2019 Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-
-from functools import partial
-from glob import glob
-import logging
-import multiprocessing
-
-# We use a multiprocessing Pool for uploads so that we can cancel them with a
-# simple SIGINT, which should bubble down to subprocesses.
-from multiprocessing import Pool
-
-# multiprocessing.dummy is to threads as multiprocessing is to processes.
-# Since daemonic processes can't have children, we use a thread to monitor the
-# upload pool.
-from multiprocessing.dummy import Process
-
-from operator import attrgetter
-import os
-import stat
-import time
-
-import pylorax.api.toml as toml
-
-from lifted.upload import Upload
-from lifted.providers import resolve_playbook_path, validate_settings
-
-# the maximum number of simultaneous uploads
-SIMULTANEOUS_UPLOADS = 1
-
-log = logging.getLogger("lifted")
-multiprocessing.log_to_stderr().setLevel(logging.INFO)
-
-
-def _get_queue_path(ucfg):
-    path = ucfg["queue_dir"]
-
-    # create the upload_queue directory if it doesn't exist
-    os.makedirs(path, exist_ok=True)
-
-    return path
-
-
-def _get_upload_path(ucfg, uuid, write=False):
-    # Make sure no path elements are present
-    uuid = os.path.basename(uuid)
-
-    path = os.path.join(_get_queue_path(ucfg), f"{uuid}.toml")
-    if write and not os.path.exists(path):
-        open(path, "a").close()
-    if os.path.exists(path):
-        # make sure uploads aren't readable by others, as they will contain
-        # sensitive credentials
-        current = stat.S_IMODE(os.lstat(path).st_mode)
-        os.chmod(path, current & ~stat.S_IROTH)
-    return path
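The permission tightening above (clearing the world-readable bit because the files hold credentials) can be demonstrated on a throwaway file; a POSIX-only sketch:

```python
import os
import stat
import tempfile

# Start from a world-readable file, then drop S_IROTH (o+r), exactly
# as _get_upload_path does for queue entries
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o644)
current = stat.S_IMODE(os.lstat(path).st_mode)
os.chmod(path, current & ~stat.S_IROTH)
mode = stat.S_IMODE(os.lstat(path).st_mode)
os.unlink(path)
# mode is now 0o640: owner read/write, group read, others nothing
```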
-
-
-def _list_upload_uuids(ucfg):
-    paths = glob(os.path.join(_get_queue_path(ucfg), "*"))
-    return [os.path.splitext(os.path.basename(path))[0] for path in paths]
-
-
-def _write_upload(ucfg, upload):
-    with open(_get_upload_path(ucfg, upload.uuid, write=True), "w") as upload_file:
-        toml.dump(upload.serializable(), upload_file)
-
-
-def _write_callback(ucfg):
-    return partial(_write_upload, ucfg)
-
-
-
[docs]def get_upload(ucfg, uuid, ignore_missing=False, ignore_corrupt=False): - """Get an Upload object by UUID - - :param ucfg: upload config - :type ucfg: object - :param uuid: UUID of the upload to get - :type uuid: str - :param ignore_missing: if True, don't raise a RuntimeError when the specified upload is missing, instead just return None - :type ignore_missing: bool - :param ignore_corrupt: if True, don't raise a RuntimeError when the specified upload could not be deserialized, instead just return None - :type ignore_corrupt: bool - :returns: the upload object or None - :rtype: Upload or None - :raises: RuntimeError - """ - try: - with open(_get_upload_path(ucfg, uuid), "r") as upload_file: - return Upload(**toml.load(upload_file)) - except FileNotFoundError as error: - if not ignore_missing: - raise RuntimeError(f"Could not find upload {uuid}!") from error - except toml.TomlError as error: - if not ignore_corrupt: - raise RuntimeError(f"Could not parse upload {uuid}!") from error
- - -
[docs]def get_uploads(ucfg, uuids): - """Gets a list of Upload objects from a list of upload UUIDs, ignoring - missing or corrupt uploads - - :param ucfg: upload config - :type ucfg: object - :param uuids: list of upload UUIDs to get - :type uuids: list of str - :returns: a list of the uploads that were successfully deserialized - :rtype: list of Upload - """ - uploads = ( - get_upload(ucfg, uuid, ignore_missing=True, ignore_corrupt=True) - for uuid in uuids - ) - return list(filter(None, uploads))
- - -
[docs]def get_all_uploads(ucfg): - """Get a list of all stored Upload objects - - :param ucfg: upload config - :type ucfg: object - :returns: a list of all stored upload objects - :rtype: list of Upload - """ - return get_uploads(ucfg, _list_upload_uuids(ucfg))
- - -
[docs]def create_upload(ucfg, provider_name, image_name, settings): - """Creates a new upload - - :param ucfg: upload config - :type ucfg: object - :param provider_name: the name of the cloud provider to upload to, e.g. "azure" - :type provider_name: str - :param image_name: what to name the image in the cloud - :type image_name: str - :param settings: settings to pass to the upload, specific to the cloud provider - :type settings: dict - :returns: the created upload object - :rtype: Upload - """ - validate_settings(ucfg, provider_name, settings, image_name) - return Upload( - provider_name=provider_name, - playbook_path=resolve_playbook_path(ucfg, provider_name), - image_name=image_name, - settings=settings, - status_callback=_write_callback(ucfg), - )
- - -
[docs]def ready_upload(ucfg, uuid, image_path): - """Pass an image_path to an upload and mark it ready to execute - - :param ucfg: upload config - :type ucfg: object - :param uuid: the UUID of the upload to mark ready - :type uuid: str - :param image_path: the path of the image to pass to the upload - :type image_path: str - """ - get_upload(ucfg, uuid).ready(image_path, _write_callback(ucfg))
- - -
[docs]def reset_upload(ucfg, uuid, new_image_name=None, new_settings=None): - """Reset an upload so it can be attempted again - - :param ucfg: upload config - :type ucfg: object - :param uuid: the UUID of the upload to reset - :type uuid: str - :param new_image_name: optionally update the upload's image_name - :type new_image_name: str - :param new_settings: optionally update the upload's settings - :type new_settings: dict - """ - upload = get_upload(ucfg, uuid) - validate_settings( - ucfg, - upload.provider_name, - new_settings or upload.settings, - new_image_name or upload.image_name, - ) - if new_image_name: - upload.image_name = new_image_name - if new_settings: - upload.settings = new_settings - upload.reset(_write_callback(ucfg))
- - -
[docs]def cancel_upload(ucfg, uuid): - """Cancel an upload - - :param ucfg: the compose config - :type ucfg: ComposerConfig - :param uuid: the UUID of the upload to cancel - :type uuid: str - """ - get_upload(ucfg, uuid).cancel(_write_callback(ucfg))
- - -
[docs]def delete_upload(ucfg, uuid): - """Delete an upload - - :param ucfg: the compose config - :type ucfg: ComposerConfig - :param uuid: the UUID of the upload to delete - :type uuid: str - """ - upload = get_upload(ucfg, uuid) - if upload and upload.is_cancellable(): - upload.cancel() - os.remove(_get_upload_path(ucfg, uuid))
- - -
[docs]def start_upload_monitor(ucfg): - """Start a thread that manages the upload queue - - :param ucfg: the compose config - :type ucfg: ComposerConfig - """ - process = Process(target=_monitor, args=(ucfg,)) - process.daemon = True - process.start()
- - -def _monitor(ucfg): - log.info("Started upload monitor.") - for upload in get_all_uploads(ucfg): - # Set abandoned uploads to FAILED - if upload.status == "RUNNING": - upload.set_status("FAILED", _write_callback(ucfg)) - pool = Pool(processes=SIMULTANEOUS_UPLOADS) - pool_uuids = set() - - def remover(uuid): - return lambda _: pool_uuids.remove(uuid) - - while True: - # Every second, scoop up READY uploads from the filesystem and throw - # them in the pool - all_uploads = get_all_uploads(ucfg) - for upload in sorted(all_uploads, key=attrgetter("creation_time")): - ready = upload.status == "READY" - if ready and upload.uuid not in pool_uuids: - log.info("Starting upload %s...", upload.uuid) - pool_uuids.add(upload.uuid) - callback = remover(upload.uuid) - pool.apply_async( - upload.execute, - (_write_callback(ucfg),), - callback=callback, - error_callback=callback, - ) - time.sleep(1) -
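The pool bookkeeping in `_monitor` can be exercised standalone. This sketch swaps the process `Pool` for the thread-backed one from `multiprocessing.dummy` so it runs cheaply, and replaces `Upload.execute` with a stub; `fake_execute` and the UUID strings are made up for illustration:

```python
from multiprocessing.dummy import Pool  # thread-backed Pool, cheap to demo

in_flight = set()
finished = []

def remover(uuid):
    # Callback factory: forget the UUID whether the task succeeds or fails,
    # mirroring _monitor's use of the same callable for both callbacks
    return lambda _: in_flight.discard(uuid)

def fake_execute(uuid):
    finished.append(uuid)
    return uuid

pool = Pool(processes=1)
for uuid in ("1f3cf8c2", "9b61a1c4"):
    in_flight.add(uuid)
    callback = remover(uuid)
    pool.apply_async(fake_execute, (uuid,), callback=callback,
                     error_callback=callback)
pool.close()
pool.join()
# After join(), every callback has fired and in_flight is empty
```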
\ No newline at end of file
diff --git a/docs/html/_modules/lifted/upload.html b/docs/html/_modules/lifted/upload.html
deleted file mode 100644
index 23513a4a..00000000
--- a/docs/html/_modules/lifted/upload.html
+++ /dev/null
@@ -1,410 +0,0 @@
-lifted.upload — Lorax 35.0 documentation

Source code for lifted.upload

-#
-# Copyright (C) 2019 Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-
-from datetime import datetime
-import logging
-from multiprocessing import current_process
-import os
-import signal
-from uuid import uuid4
-
-from ansible_runner.interface import run as ansible_run
-from ansible_runner.exceptions import AnsibleRunnerException
-
-log = logging.getLogger("lifted")
-
-
-
[docs]class Upload: - """Represents an upload of an image to a cloud provider. Instances of this - class are serialized as TOML and stored in the upload queue directory, - which is /var/lib/lorax/upload/queue/ by default""" - - def __init__( - self, - uuid=None, - provider_name=None, - playbook_path=None, - image_name=None, - settings=None, - creation_time=None, - upload_log=None, - upload_pid=None, - image_path=None, - status_callback=None, - status=None, - ): - self.uuid = uuid or str(uuid4()) - self.provider_name = provider_name - self.playbook_path = playbook_path - self.image_name = image_name - self.settings = settings - self.creation_time = creation_time or datetime.now().timestamp() - self.upload_log = upload_log or "" - self.upload_pid = upload_pid - self.image_path = image_path - if status: - self.status = status - else: - self.set_status("WAITING", status_callback) - - def _log(self, message, callback=None): - """Logs something to the upload log with an optional callback - - :param message: the object to log - :type message: object - :param callback: a function of the form callback(self) - :type callback: function - """ - if message: - messages = str(message).splitlines() - - # Log multi-line messages as individual log lines - for m in messages: - log.info(m) - self.upload_log += f"{message}\n" - if callback: - callback(self) - -
[docs] def serializable(self): - """Returns a representation of the object as a dict for serialization - - :returns: the object's __dict__ - :rtype: dict - """ - return self.__dict__
- -
[docs] def summary(self): - """Return a dict with useful information about the upload - - :returns: upload information - :rtype: dict - """ - - return { - "uuid": self.uuid, - "status": self.status, - "provider_name": self.provider_name, - "image_name": self.image_name, - "image_path": self.image_path, - "creation_time": self.creation_time, - "settings": self.settings, - }
- -
[docs] def set_status(self, status, status_callback=None): - """Sets the status of the upload with an optional callback - - :param status: the new status - :type status: str - :param status_callback: a function of the form callback(self) - :type status_callback: function - """ - self._log("Setting status to %s" % status) - self.status = status - if status_callback: - status_callback(self)
- -
[docs] def ready(self, image_path, status_callback): - """Provide an image_path and mark the upload as ready to execute - - :param image_path: path of the image to upload - :type image_path: str - :param status_callback: a function of the form callback(self) - :type status_callback: function - """ - self._log("Setting image_path to %s" % image_path) - self.image_path = image_path - if self.status == "WAITING": - self.set_status("READY", status_callback)
- -
[docs] def reset(self, status_callback): - """Reset the upload so it can be attempted again - - :param status_callback: a function of the form callback(self) - :type status_callback: function - """ - if self.is_cancellable(): - raise RuntimeError(f"Can't reset, status is {self.status}!") - if not self.image_path: - raise RuntimeError("Can't reset, no image supplied yet!") - # self.error = None - self._log("Resetting state") - self.set_status("READY", status_callback)
- -
[docs] def is_cancellable(self): - """Is the upload in a cancellable state? - - :returns: whether the upload is cancellable - :rtype: bool - """ - return self.status in ("WAITING", "READY", "RUNNING")
- -
[docs] def cancel(self, status_callback=None): - """Cancel the upload. Sends a SIGINT to self.upload_pid. - - :param status_callback: a function of the form callback(self) - :type status_callback: function - """ - if not self.is_cancellable(): - raise RuntimeError(f"Can't cancel, status is already {self.status}!") - if self.upload_pid: - os.kill(self.upload_pid, signal.SIGINT) - self.set_status("CANCELLED", status_callback)
- -
[docs] def execute(self, status_callback=None): - """Execute the upload. Meant to be called from a dedicated process so - that the upload can be cancelled by sending a SIGINT to - self.upload_pid. - - :param status_callback: a function of the form callback(self) - :type status_callback: function - """ - if self.status != "READY": - raise RuntimeError("This upload is not ready!") - - try: - self.upload_pid = current_process().pid - self.set_status("RUNNING", status_callback) - self._log("Executing playbook.yml") - - # NOTE: event_handler doesn't seem to be called for playbook errors - logger = lambda e: self._log(e["stdout"], status_callback) - - runner = ansible_run( - playbook=self.playbook_path, - extravars={ - **self.settings, - "image_name": self.image_name, - "image_path": self.image_path, - }, - event_handler=logger, - verbosity=2, - ) - - # Try logging events and stats -- but they may not exist, so catch the error - try: - for e in runner.events: - self._log("%s" % dir(e), status_callback) - - self._log("%s" % runner.stats, status_callback) - except AnsibleRunnerException: - self._log("%s" % runner.stdout.read(), status_callback) - - if runner.status == "successful": - self.set_status("FINISHED", status_callback) - else: - self.set_status("FAILED", status_callback) - except Exception: - import traceback - log.error(traceback.format_exc(limit=2))
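The status lifecycle these methods implement (WAITING → READY → RUNNING → FINISHED/FAILED, with cancellation allowed from any non-terminal state) can be modeled as a toy class; this is an illustration only, not the real `Upload`:

```python
CANCELLABLE = ("WAITING", "READY", "RUNNING")

class ToyUpload:
    def __init__(self):
        self.status = "WAITING"

    def ready(self):
        # ready() only promotes WAITING uploads, as in Upload.ready
        if self.status == "WAITING":
            self.status = "READY"

    def is_cancellable(self):
        return self.status in CANCELLABLE

    def cancel(self):
        if not self.is_cancellable():
            raise RuntimeError(f"Can't cancel, status is already {self.status}!")
        self.status = "CANCELLED"

u = ToyUpload()
u.ready()    # WAITING -> READY
u.cancel()   # READY -> CANCELLED; a second cancel() would raise
```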
\ No newline at end of file
diff --git a/docs/html/_modules/pylorax.html b/docs/html/_modules/pylorax.html
index 9f641757..f56a20cc 100644
--- a/docs/html/_modules/pylorax.html
+++ b/docs/html/_modules/pylorax.html
@@ -1,38 +1,38 @@
-pylorax — Lorax 35.0 documentation
+pylorax — Lorax 35.1 documentation
@@ -58,7 +58,7 @@
-      35.0
+      35.1

Source code for pylorax.api.bisect

-#
-# Copyright (C) 2018 Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-
-def insort_left(a, x, key=None, lo=0, hi=None):
-    """Insert item x in list a, and keep it sorted assuming a is sorted.
-
-    :param a: sorted list
-    :type a: list
-    :param x: item to insert into the list
-    :type x: object
-    :param key: Function to use to compare items in the list
-    :type key: function
-    :returns: index where the item was inserted
-    :rtype: int
-
-    If x is already in a, insert it to the left of the leftmost x.
-    Optional args lo (default 0) and hi (default len(a)) bound the
-    slice of a to be searched.
-
-    This is a modified version of bisect.insort_left that can use a
-    function for the compare, and returns the index position where it
-    was inserted.
-    """
-    if key is None:
-        key = lambda i: i
-
-    if lo < 0:
-        raise ValueError('lo must be non-negative')
-    if hi is None:
-        hi = len(a)
-    while lo < hi:
-        mid = (lo+hi)//2
-        if key(a[mid]) < key(x):
-            lo = mid+1
-        else:
-            hi = mid
-    a.insert(lo, x)
-    return lo
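A short usage example: keeping a list of tuples sorted by a key function. The function body is copied from `pylorax.api.bisect` above so the snippet runs standalone; the build names are made up:

```python
def insort_left(a, x, key=None, lo=0, hi=None):
    # Copied from pylorax.api.bisect for a self-contained example
    if key is None:
        key = lambda i: i
    if lo < 0:
        raise ValueError('lo must be non-negative')
    if hi is None:
        hi = len(a)
    while lo < hi:
        mid = (lo + hi) // 2
        if key(a[mid]) < key(x):
            lo = mid + 1
        else:
            hi = mid
    a.insert(lo, x)
    return lo

# Keep (name, version) tuples sorted by name
builds = [("anaconda", "33.25"), ("lorax", "35.0")]
idx = insort_left(builds, ("kernel", "5.10"), key=lambda b: b[0])
# idx is 1, and builds stays sorted by the first tuple element
```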
\ No newline at end of file
diff --git a/docs/html/_modules/pylorax/api/checkparams.html b/docs/html/_modules/pylorax/api/checkparams.html
deleted file mode 100644
index fa4e44f4..00000000
--- a/docs/html/_modules/pylorax/api/checkparams.html
+++ /dev/null
@@ -1,244 +0,0 @@
-pylorax.api.checkparams — Lorax 35.0 documentation

Source code for pylorax.api.checkparams

-#
-# Copyright (C) 2018  Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-
-import logging
-log = logging.getLogger("lorax-composer")
-
-from flask import jsonify
-from functools import update_wrapper
-
-# A decorator for checking the parameters provided to the API route implementing
-# functions.  The tuples parameter is a list of tuples.  Each tuple is the string
-# name of a parameter ("blueprint_name", not blueprint_name), the value it's set
-# to by flask if the caller did not provide it, and a message to be returned to
-# the user.
-#
-# If the parameter is set to its default, the error message is returned.  Otherwise,
-# the decorated function is called and its return value is returned.
-
-def checkparams(tuples):
-    def decorator(f):
-        def wrapped_function(*args, **kwargs):
-            for tup in tuples:
-                if kwargs[tup[0]] == tup[1]:
-                    log.error("(%s) %s", f.__name__, tup[2])
-                    return jsonify(status=False, errors=[tup[2]]), 400
-
-            return f(*args, **kwargs)
-
-        return update_wrapper(wrapped_function, f)
-
-    return decorator
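A minimal sketch of using this decorator pattern. Flask's `jsonify` is replaced with a plain dict so the example runs without a Flask app context, and the route function and error message are invented for illustration:

```python
from functools import update_wrapper

def checkparams(tuples):
    # Same shape as the decorator above, with jsonify swapped for a dict
    def decorator(f):
        def wrapped_function(*args, **kwargs):
            for name, default, message in tuples:
                if kwargs[name] == default:
                    return {"status": False, "errors": [message]}, 400
            return f(*args, **kwargs)
        return update_wrapper(wrapped_function, f)
    return decorator

@checkparams([("blueprint_name", "", "no blueprint name given")])
def blueprint_info(blueprint_name=""):
    return {"status": True, "blueprint": blueprint_name}, 200

ok, ok_code = blueprint_info(blueprint_name="http-server")
err, err_code = blueprint_info(blueprint_name="")
```

Calling the wrapped function with the parameter left at its default short-circuits with a 400 response instead of invoking the route body.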
\ No newline at end of file
diff --git a/docs/html/_modules/pylorax/api/cmdline.html b/docs/html/_modules/pylorax/api/cmdline.html
deleted file mode 100644
index 7de7f4f2..00000000
--- a/docs/html/_modules/pylorax/api/cmdline.html
+++ /dev/null
@@ -1,263 +0,0 @@
-pylorax.api.cmdline — Lorax 35.0 documentation

Source code for pylorax.api.cmdline

-#
-# cmdline.py
-#
-# Copyright (C) 2018 Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-import os
-import sys
-import argparse
-
-from pylorax import vernum
-
-DEFAULT_USER  = "root"
-DEFAULT_GROUP = "weldr"
-
-version = "{0}-{1}".format(os.path.basename(sys.argv[0]), vernum)
-
-
[docs]def lorax_composer_parser(): - """ Return the ArgumentParser for lorax-composer""" - - parser = argparse.ArgumentParser(description="Lorax Composer API Server", - fromfile_prefix_chars="@") - - parser.add_argument("--socket", default="/run/weldr/api.socket", metavar="SOCKET", - help="Path to the socket file to listen on") - parser.add_argument("--user", default=DEFAULT_USER, metavar="USER", - help="User to use for reduced permissions") - parser.add_argument("--group", default=DEFAULT_GROUP, metavar="GROUP", - help="Group to set ownership of the socket to") - parser.add_argument("--log", dest="logfile", default="/var/log/lorax-composer/composer.log", metavar="LOG", - help="Path to logfile (/var/log/lorax-composer/composer.log)") - parser.add_argument("--mockfiles", default="/var/tmp/bdcs-mockfiles/", metavar="MOCKFILES", - help="Path to JSON files used for /api/mock/ paths (/var/tmp/bdcs-mockfiles/)") - parser.add_argument("--sharedir", type=os.path.abspath, metavar="SHAREDIR", - help="Directory containing all the templates. Overrides config file sharedir") - parser.add_argument("-V", action="store_true", dest="showver", - help="show program's version number and exit") - parser.add_argument("-c", "--config", default="/etc/lorax/composer.conf", metavar="CONFIG", - help="Path to lorax-composer configuration file.") - parser.add_argument("--releasever", default=None, metavar="STRING", - help="Release version to use for $releasever in dnf repository urls") - parser.add_argument("--tmp", default="/var/tmp", - help="Top level temporary directory") - parser.add_argument("--proxy", default=None, metavar="PROXY", - help="Set proxy for DNF, overrides configuration file setting.") - parser.add_argument("--no-system-repos", action="store_true", default=False, - help="Do not copy over system repos from /etc/yum.repos.d/ at startup") - parser.add_argument("BLUEPRINTS", metavar="BLUEPRINTS", - help="Path to the blueprints") - - return parser
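The parser above sets `fromfile_prefix_chars="@"`, which lets arguments be read from a file, one token per line. A standalone demonstration of that argparse feature (the temp file and socket path are illustrative):

```python
import argparse
import os
import tempfile

parser = argparse.ArgumentParser(fromfile_prefix_chars="@")
parser.add_argument("--socket", default="/run/weldr/api.socket")

# Write the arguments to a file, one per line, then reference it with @
with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
    f.write("--socket\n/tmp/test-api.socket\n")
    conf_path = f.name

args = parser.parse_args(["@" + conf_path])
os.unlink(conf_path)
```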
\ No newline at end of file
diff --git a/docs/html/_modules/pylorax/api/compose.html b/docs/html/_modules/pylorax/api/compose.html
deleted file mode 100644
index 1a129254..00000000
--- a/docs/html/_modules/pylorax/api/compose.html
+++ /dev/null
@@ -1,1467 +0,0 @@
-<!-- Sphinx page chrome removed: "pylorax.api.compose — Lorax 35.0 documentation" -->

Source code for pylorax.api.compose

-# Copyright (C) 2018-2019 Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-""" Setup for composing an image
-
-Adding New Output Types
------------------------
-
-The new output type must add a kickstart template to ./share/composer/ where the
-name of the kickstart (without the trailing .ks) matches the entry in compose_args.
-
-The kickstart should not have any url or repo entries; these are added at build
-time. The %packages section should be the last section, and while it can contain
-mandatory packages required by the output type, it should not have the trailing
-%end, because the package NEVRAs are appended to it at build time.
-
-compose_args should have an entry whose name matches the kickstart, and it should set
-the novirt_install parameters needed to generate the desired output. The make_* flags
-for the other output types should be set to False.
-
-"""
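The rules above can be sketched concretely. Everything in this sketch is illustrative: the ``raw-disk`` type name, the kickstart contents, and the option subset are assumptions for demonstration, not part of lorax.

```python
# Hypothetical sketch of a new output type (the name "raw-disk" is made up).
# Step 1: share/composer/raw-disk.ks -- no url/repo lines, %packages section
# last, and no trailing %end (package NEVRAs are appended at build time).
RAW_DISK_KS = """\
clearpart --all --initlabel
part / --size=4096
shutdown

%packages
kernel
"""

# Step 2: a compose_args-style entry under the same name ("raw-disk"),
# enabling only the novirt_install options this output needs (a small
# illustrative subset of the real option dicts shown later in this module).
RAW_DISK_ARGS = {
    "make_disk": True,       # this type produces a disk image
    "make_iso": False,       # every other make_* output stays False
    "make_tar": False,
    "image_name": "disk.img",
}

# Sanity checks mirroring the rules from the docstring above
assert "%end" not in RAW_DISK_KS
assert not any(l.startswith(("url", "repo")) for l in RAW_DISK_KS.splitlines())
```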
-import logging
-log = logging.getLogger("lorax-composer")
-
-import os
-from glob import glob
-from io import StringIO
-from math import ceil
-import shutil
-from uuid import uuid4
-
-# Use pykickstart to calculate disk image size
-from pykickstart.parser import KickstartParser
-from pykickstart.version import makeVersion
-
-from pylorax import ArchData, find_templates, get_buildarch
-from pylorax.api.gitrpm import create_gitrpm_repo
-from pylorax.api.projects import projects_depsolve, projects_depsolve_with_size, dep_nevra
-from pylorax.api.projects import ProjectsError
-from pylorax.api.recipes import read_recipe_and_id
-from pylorax.api.timestamp import TS_CREATED, write_timestamp
-import pylorax.api.toml as toml
-from pylorax.base import DataHolder
-from pylorax.imgutils import default_image_name
-from pylorax.ltmpl import LiveTemplateRunner
-from pylorax.sysutils import joinpaths, flatconfig
-
-
-
[docs]def test_templates(dbo, share_dir):
-    """ Try depsolving each of the templates and report any errors
-
-    :param dbo: dnf base object
-    :type dbo: dnf.Base
-    :param share_dir: Path to the top level share directory
-    :type share_dir: str
-    :returns: List of template types and errors
-    :rtype: List of errors
-
-    Return a list of templates and errors encountered or an empty list
-    """
-    template_errors = []
-    for compose_type, enabled in compose_types(share_dir):
-        if not enabled:
-            continue
-
-        # Read the kickstart template for this type
-        ks_template_path = joinpaths(share_dir, "composer", compose_type) + ".ks"
-        ks_template = open(ks_template_path, "r").read()
-
-        # How much space will the packages in the default template take?
-        ks_version = makeVersion()
-        ks = KickstartParser(ks_version, errorsAreFatal=False, missingIncludeIsFatal=False)
-        ks.readKickstartFromString(ks_template+"\n%end\n")
-        pkgs = [(name, "*") for name in ks.handler.packages.packageList]
-        grps = [grp.name for grp in ks.handler.packages.groupList]
-        try:
-            projects_depsolve(dbo, pkgs, grps)
-        except ProjectsError as e:
-            template_errors.append("Error depsolving %s: %s" % (compose_type, str(e)))
-
-    return template_errors
- - -
[docs]def repo_to_ks(r, url="url"):
-    """ Return a kickstart line with the correct args.
-
-    :param r: DNF repository information
-    :type r: dnf.Repo
-    :param url: "url" or "baseurl" to use for the baseurl parameter
-    :type url: str
-    :returns: kickstart command arguments for url/repo command
-    :rtype: str
-
-    Set url to "baseurl" if it is a repo, leave it as "url" for the installation url.
-    """
-    cmd = ""
-    # url uses --url not --baseurl
-    if r.baseurl:
-        cmd += '--%s="%s" ' % (url, r.baseurl[0])
-    elif r.metalink:
-        cmd += '--metalink="%s" ' % r.metalink
-    elif r.mirrorlist:
-        cmd += '--mirrorlist="%s" ' % r.mirrorlist
-    else:
-        raise RuntimeError("Repo has no baseurl, metalink, or mirrorlist")
-
-    if r.proxy:
-        cmd += '--proxy="%s" ' % r.proxy
-
-    if not r.sslverify:
-        cmd += '--noverifyssl'
-
-    if r.sslcacert:
-        cmd += ' --sslcacert="%s"' % r.sslcacert
-    if r.sslclientcert:
-        cmd += ' --sslclientcert="%s"' % r.sslclientcert
-    if r.sslclientkey:
-        cmd += ' --sslclientkey="%s"' % r.sslclientkey
-
-    return cmd
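To see what ``repo_to_ks()`` emits without a live dnf stack, the URL/proxy/ssl-verify branches can be exercised with ``types.SimpleNamespace`` standing in for ``dnf.Repo``. This is a trimmed standalone copy of the selection logic; the ``--sslcacert``/``--sslclientcert``/``--sslclientkey`` branches are omitted, and the attribute names are assumed to mirror the function above.

```python
from types import SimpleNamespace

# Trimmed copy of repo_to_ks()'s selection logic; SimpleNamespace stands in
# for dnf.Repo, with only the attributes this sketch reads.
def repo_to_ks(r, url="url"):
    cmd = ""
    if r.baseurl:                       # installation url uses --url, extra repos --baseurl
        cmd += '--%s="%s" ' % (url, r.baseurl[0])
    elif r.metalink:
        cmd += '--metalink="%s" ' % r.metalink
    elif r.mirrorlist:
        cmd += '--mirrorlist="%s" ' % r.mirrorlist
    else:
        raise RuntimeError("Repo has no baseurl, metalink, or mirrorlist")
    if r.proxy:
        cmd += '--proxy="%s" ' % r.proxy
    if not r.sslverify:
        cmd += '--noverifyssl'
    return cmd

base = SimpleNamespace(baseurl=["http://example.com/repo/"], metalink=None,
                       mirrorlist=None, proxy=None, sslverify=True)
print(repo_to_ks(base).strip())             # --url="http://example.com/repo/"
print(repo_to_ks(base, "baseurl").strip())  # --baseurl="http://example.com/repo/"

meta = SimpleNamespace(baseurl=[], metalink="https://example.com/metalink",
                       mirrorlist=None, proxy=None, sslverify=False)
print(repo_to_ks(meta).strip())  # --metalink="https://example.com/metalink" --noverifyssl
```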
- - -
[docs]def bootloader_append(line, kernel_append): - """ Insert the kernel_append string into the --append argument - - :param line: The bootloader ... line - :type line: str - :param kernel_append: The arguments to append to the --append section - :type kernel_append: str - - Using pykickstart to process the line is the best way to make sure it - is parsed correctly, and re-assembled for inclusion into the final kickstart - """ - ks_version = makeVersion() - ks = KickstartParser(ks_version, errorsAreFatal=False, missingIncludeIsFatal=False) - ks.readKickstartFromString(line) - - if ks.handler.bootloader.appendLine: - ks.handler.bootloader.appendLine += " %s" % kernel_append - else: - ks.handler.bootloader.appendLine = kernel_append - - # Converting back to a string includes a comment, return just the bootloader line - return str(ks.handler.bootloader).splitlines()[-1]
- - -
[docs]def get_kernel_append(recipe): - """Return the customizations.kernel append value - - :param recipe: - :type recipe: Recipe object - :returns: append value or empty string - :rtype: str - """ - if "customizations" not in recipe or \ - "kernel" not in recipe["customizations"] or \ - "append" not in recipe["customizations"]["kernel"]: - return "" - return recipe["customizations"]["kernel"]["append"]
- - -
[docs]def timezone_cmd(line, settings): - """ Update the timezone line with the settings - - :param line: The timezone ... line - :type line: str - :param settings: A dict with timezone and/or ntpservers list - :type settings: dict - - Using pykickstart to process the line is the best way to make sure it - is parsed correctly, and re-assembled for inclusion into the final kickstart - """ - ks_version = makeVersion() - ks = KickstartParser(ks_version, errorsAreFatal=False, missingIncludeIsFatal=False) - ks.readKickstartFromString(line) - - if "timezone" in settings: - ks.handler.timezone.timezone = settings["timezone"] - if "ntpservers" in settings: - ks.handler.timezone.ntpservers = settings["ntpservers"] - - # Converting back to a string includes a comment, return just the timezone line - return str(ks.handler.timezone).splitlines()[-1]
- - -
[docs]def get_timezone_settings(recipe):
-    """Return the customizations.timezone dict
-
-    :param recipe:
-    :type recipe: Recipe object
-    :returns: A dict of timezone settings, or an empty dict
-    :rtype: dict
-    """
-    if "customizations" not in recipe or \
-       "timezone" not in recipe["customizations"]:
-        return {}
-    return recipe["customizations"]["timezone"]
- - -
[docs]def lang_cmd(line, languages):
-    """ Update the lang line with the languages
-
-    :param line: The lang ... line
-    :type line: str
-    :param languages: The list of languages
-    :type languages: list
-
-    Using pykickstart to process the line is the best way to make sure it
-    is parsed correctly, and re-assembled for inclusion into the final kickstart
-    """
-    ks_version = makeVersion()
-    ks = KickstartParser(ks_version, errorsAreFatal=False, missingIncludeIsFatal=False)
-    ks.readKickstartFromString(line)
-
-    if languages:
-        ks.handler.lang.lang = languages[0]
-
-    if len(languages) > 1:
-        ks.handler.lang.addsupport = languages[1:]
-
-    # Converting back to a string includes a comment, return just the lang line
-    return str(ks.handler.lang).splitlines()[-1]
- - -
[docs]def get_languages(recipe): - """Return the customizations.locale.languages list - - :param recipe: The recipe - :type recipe: Recipe object - :returns: list of language strings - :rtype: list - """ - if "customizations" not in recipe or \ - "locale" not in recipe["customizations"] or \ - "languages" not in recipe["customizations"]["locale"]: - return [] - return recipe["customizations"]["locale"]["languages"]
- - -
[docs]def keyboard_cmd(line, layout):
-    """ Update the keyboard line with the layout
-
-    :param line: The keyboard ... line
-    :type line: str
-    :param layout: The keyboard layout
-    :type layout: str
-
-    Using pykickstart to process the line is the best way to make sure it
-    is parsed correctly, and re-assembled for inclusion into the final kickstart
-    """
-    ks_version = makeVersion()
-    ks = KickstartParser(ks_version, errorsAreFatal=False, missingIncludeIsFatal=False)
-    ks.readKickstartFromString(line)
-
-    if layout:
-        ks.handler.keyboard.keyboard = layout
-        ks.handler.keyboard.vc_keymap = ""
-        ks.handler.keyboard.x_layouts = []
-
-    # Converting back to a string includes a comment, return just the keyboard line
-    return str(ks.handler.keyboard).splitlines()[-1]
- - -
[docs]def get_keyboard_layout(recipe):
-    """Return the customizations.locale.keyboard layout
-
-    :param recipe: The recipe
-    :type recipe: Recipe object
-    :returns: The keyboard layout string, or an empty string
-    :rtype: str
-    """
-    if "customizations" not in recipe or \
-       "locale" not in recipe["customizations"] or \
-       "keyboard" not in recipe["customizations"]["locale"]:
-        return ""
-    return recipe["customizations"]["locale"]["keyboard"]
- - -
[docs]def firewall_cmd(line, settings):
-    """ Update the firewall line with the new ports and services
-
-    :param line: The firewall ... line
-    :type line: str
-    :param settings: A dict with the list of services and ports to enable and disable
-    :type settings: dict
-
-    Using pykickstart to process the line is the best way to make sure it
-    is parsed correctly, and re-assembled for inclusion into the final kickstart
-    """
-    ks_version = makeVersion()
-    ks = KickstartParser(ks_version, errorsAreFatal=False, missingIncludeIsFatal=False)
-    ks.readKickstartFromString(line)
-
-    # Do not override firewall --disabled
-    if ks.handler.firewall.enabled != False and settings:
-        ks.handler.firewall.ports = sorted(set(settings["ports"] + ks.handler.firewall.ports))
-        ks.handler.firewall.services = sorted(set(settings["enabled"] + ks.handler.firewall.services))
-        ks.handler.firewall.remove_services = sorted(set(settings["disabled"] + ks.handler.firewall.remove_services))
-
-    # Converting back to a string includes a comment, return just the firewall line
-    return str(ks.handler.firewall).splitlines()[-1]
- - -
[docs]def get_firewall_settings(recipe): - """Return the customizations.firewall settings - - :param recipe: The recipe - :type recipe: Recipe object - :returns: A dict of settings - :rtype: dict - """ - settings = {"ports": [], "enabled": [], "disabled": []} - - if "customizations" not in recipe or \ - "firewall" not in recipe["customizations"]: - return settings - - settings["ports"] = recipe["customizations"]["firewall"].get("ports", []) - - if "services" in recipe["customizations"]["firewall"]: - settings["enabled"] = recipe["customizations"]["firewall"]["services"].get("enabled", []) - settings["disabled"] = recipe["customizations"]["firewall"]["services"].get("disabled", []) - return settings
- - -
[docs]def services_cmd(line, settings):
-    """ Update the services line with additional services to enable/disable
-
-    :param line: The services ... line
-    :type line: str
-    :param settings: A dict with the list of services to enable and disable
-    :type settings: dict
-
-    Using pykickstart to process the line is the best way to make sure it
-    is parsed correctly, and re-assembled for inclusion into the final kickstart
-    """
-    # Empty services and no additional settings, return an empty string
-    if not line and not settings["enabled"] and not settings["disabled"]:
-        return ""
-
-    ks_version = makeVersion()
-    ks = KickstartParser(ks_version, errorsAreFatal=False, missingIncludeIsFatal=False)
-
-    # Allow passing in a 'default' so that the enable/disable may be applied to it, without
-    # parsing it and emitting a kickstart error message
-    if line != "services":
-        ks.readKickstartFromString(line)
-
-    # Add to any existing services, removing any duplicates
-    ks.handler.services.enabled = sorted(set(settings["enabled"] + ks.handler.services.enabled))
-    ks.handler.services.disabled = sorted(set(settings["disabled"] + ks.handler.services.disabled))
-
-    # Converting back to a string includes a comment, return just the services line
-    return str(ks.handler.services).splitlines()[-1]
- - -
[docs]def get_services(recipe): - """Return the customizations.services settings - - :param recipe: The recipe - :type recipe: Recipe object - :returns: A dict of settings - :rtype: dict - """ - settings = {"enabled": [], "disabled": []} - - if "customizations" not in recipe or \ - "services" not in recipe["customizations"]: - return settings - - settings["enabled"] = sorted(recipe["customizations"]["services"].get("enabled", [])) - settings["disabled"] = sorted(recipe["customizations"]["services"].get("disabled", [])) - return settings
- - -
[docs]def get_default_services(recipe): - """Get the default string for services, based on recipe - :param recipe: The recipe - - :type recipe: Recipe object - :returns: string with "services" or "" - :rtype: str - - When no services have been selected we don't need to add anything to the kickstart - so return an empty string. Otherwise return "services" which will be updated with - the settings. - """ - services = get_services(recipe) - - if services["enabled"] or services["disabled"]: - return "services" - else: - return ""
- - -
[docs]def customize_ks_template(ks_template, recipe):
-    """ Customize the kickstart template and return it
-
-    :param ks_template: The kickstart template
-    :type ks_template: str
-    :param recipe:
-    :type recipe: Recipe object
-
-    Apply customizations to existing template commands, or add defaults for ones that are
-    missing and required.
-
-    Apply customizations.kernel.append to the bootloader argument in the template.
-    Add bootloader line if it is missing.
-
-    Add default timezone if needed. It does NOT replace an existing timezone entry
-    """
-    # Commands to be modified [NEW-COMMAND-FUNC, NEW-VALUE, DEFAULT, REPLACE]
-    # The function is called with a kickstart command string and the value to replace
-    # The value is specific to the command, and is understood by the function
-    # The default is a complete kickstart command string, suitable for writing to the template
-    # If REPLACE is False it will not change an existing entry, only add a missing one
-    commands = {"bootloader": [bootloader_append,
-                               get_kernel_append(recipe),
-                               'bootloader --location=none', True],
-                "timezone":   [timezone_cmd,
-                               get_timezone_settings(recipe),
-                               'timezone UTC', False],
-                "lang":       [lang_cmd,
-                               get_languages(recipe),
-                               'lang en_US.UTF-8', True],
-                "keyboard":   [keyboard_cmd,
-                               get_keyboard_layout(recipe),
-                               'keyboard --xlayouts us --vckeymap us', True],
-                "firewall":   [firewall_cmd,
-                               get_firewall_settings(recipe),
-                               'firewall --enabled', True],
-                "services":   [services_cmd,
-                               get_services(recipe),
-                               get_default_services(recipe), True]
-                }
-    found = {}
-
-    output = StringIO()
-    for line in ks_template.splitlines():
-        for cmd in commands:
-            (new_command, value, default, replace) = commands[cmd]
-            if line.startswith(cmd):
-                found[cmd] = True
-                if value and replace:
-                    log.debug("Replacing %s with %s", cmd, value)
-                    print(new_command(line, value), file=output)
-                else:
-                    log.debug("Skipping %s", cmd)
-                    print(line, file=output)
-                break
-        else:
-            # No matches, write the line as-is
-            print(line, file=output)
-
-    # Write out defaults for the ones not found
-    # These must go FIRST because the template still needs to have the packages added
-    defaults = StringIO()
-    for cmd in commands:
-        if cmd in found:
-            continue
-        (new_command, value, default, _) = commands[cmd]
-        if value and default:
-            log.debug("Setting %s to use %s", cmd, value)
-            print(new_command(default, value), file=defaults)
-        elif default:
-            log.debug("Setting %s to %s", cmd, default)
-            print(default, file=defaults)
-
-    return defaults.getvalue() + output.getvalue()
- - -
[docs]def write_ks_root(f, user): - """ Write kickstart root password and sshkey entry - - :param f: kickstart file object - :type f: open file object - :param user: A blueprint user dictionary - :type user: dict - :returns: True if it wrote a rootpw command to the kickstart - :rtype: bool - - If the entry contains a ssh key, use sshkey to write it - If it contains password, use rootpw to set it - - root cannot be used with the user command. So only key and password are supported - for root. - """ - wrote_rootpw = False - - # ssh key uses the sshkey kickstart command - if "key" in user: - f.write('sshkey --user %s "%s"\n' % (user["name"], user["key"])) - - if "password" in user: - if any(user["password"].startswith(prefix) for prefix in ["$2b$", "$6$", "$5$"]): - log.debug("Detected pre-crypted password") - f.write('rootpw --iscrypted "%s"\n' % user["password"]) - wrote_rootpw = True - else: - log.debug("Detected plaintext password") - f.write('rootpw --plaintext "%s"\n' % user["password"]) - wrote_rootpw = True - - return wrote_rootpw
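The crypted-vs-plaintext decision in ``write_ks_root()`` (and ``write_ks_user()`` below) hinges on a crypt-prefix check; a minimal standalone sketch of just that test:

```python
# Sketch of the password-prefix test used by write_ks_root()/write_ks_user():
# $2b$ (bcrypt), $6$ (sha512-crypt) and $5$ (sha256-crypt) hashes are written
# with --iscrypted, anything else with --plaintext.
def is_crypted(password):
    return any(password.startswith(prefix) for prefix in ["$2b$", "$6$", "$5$"])

for pw in ["$6$salt$hashed...", "hunter2"]:
    flag = "--iscrypted" if is_crypted(pw) else "--plaintext"
    print('rootpw %s "%s"' % (flag, pw))
```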
- -
[docs]def write_ks_user(f, user): - """ Write kickstart user and sshkey entry - - :param f: kickstart file object - :type f: open file object - :param user: A blueprint user dictionary - :type user: dict - - If the entry contains a ssh key, use sshkey to write it - All of the user fields are optional, except name, write out a kickstart user entry - with whatever options are relevant. - """ - # ssh key uses the sshkey kickstart command - if "key" in user: - f.write('sshkey --user %s "%s"\n' % (user["name"], user["key"])) - - # Write out the user kickstart command, much of it is optional - f.write("user --name %s" % user["name"]) - if "home" in user: - f.write(" --homedir %s" % user["home"]) - - if "password" in user: - if any(user["password"].startswith(prefix) for prefix in ["$2b$", "$6$", "$5$"]): - log.debug("Detected pre-crypted password") - f.write(" --iscrypted") - else: - log.debug("Detected plaintext password") - f.write(" --plaintext") - - f.write(" --password \"%s\"" % user["password"]) - - if "shell" in user: - f.write(" --shell %s" % user["shell"]) - - if "uid" in user: - f.write(" --uid %d" % int(user["uid"])) - - if "gid" in user: - f.write(" --gid %d" % int(user["gid"])) - - if "description" in user: - f.write(" --gecos \"%s\"" % user["description"]) - - if "groups" in user: - f.write(" --groups %s" % ",".join(user["groups"])) - - f.write("\n")
- - -
[docs]def write_ks_group(f, group):
-    """ Write kickstart group entry
-
-    :param f: kickstart file object
-    :type f: open file object
-    :param group: A blueprint group dictionary
-    :type group: dict
-
-    gid is optional
-    """
-    if "name" not in group:
-        raise RuntimeError("group entry requires a name")
-
-    f.write("group --name %s" % group["name"])
-    if "gid" in group:
-        f.write(" --gid %d" % int(group["gid"]))
-
-    f.write("\n")
- - -
[docs]def add_customizations(f, recipe): - """ Add customizations to the kickstart file - - :param f: kickstart file object - :type f: open file object - :param recipe: - :type recipe: Recipe object - :returns: None - :raises: RuntimeError if there was a problem writing to the kickstart - """ - if "customizations" not in recipe: - f.write('rootpw --lock\n') - return - customizations = recipe["customizations"] - - # allow customizations to be incorrectly specified as [[customizations]] instead of [customizations] - if isinstance(customizations, list): - customizations = customizations[0] - - if "hostname" in customizations: - f.write("network --hostname=%s\n" % customizations["hostname"]) - - # TODO - remove this, should use user section to define this - if "sshkey" in customizations: - # This is a list of entries - for sshkey in customizations["sshkey"]: - if "user" not in sshkey or "key" not in sshkey: - log.error("%s is incorrect, skipping", sshkey) - continue - f.write('sshkey --user %s "%s"\n' % (sshkey["user"], sshkey["key"])) - - # Creating a user also creates a group. Make a list of the names for later - user_groups = [] - # kickstart requires a rootpw line - wrote_rootpw = False - if "user" in customizations: - # only name is required, everything else is optional - for user in customizations["user"]: - if "name" not in user: - raise RuntimeError("user entry requires a name") - - # root is special, cannot use normal user command for it - if user["name"] == "root": - wrote_rootpw = write_ks_root(f, user) - continue - - write_ks_user(f, user) - user_groups.append(user["name"]) - - if "group" in customizations: - for group in customizations["group"]: - if group["name"] not in user_groups: - write_ks_group(f, group) - else: - log.warning("Skipping group %s, already created by user", group["name"]) - - # Lock the root account if no root user password has been specified - if not wrote_rootpw: - f.write('rootpw --lock\n')
- - -
[docs]def get_extra_pkgs(dbo, share_dir, compose_type): - """Return extra packages needed for the output type - - :param dbo: dnf base object - :type dbo: dnf.Base - :param share_dir: Path to the top level share directory - :type share_dir: str - :param compose_type: The type of output to create from the recipe - :type compose_type: str - :returns: List of package names (name only, not NEVRA) - :rtype: list - - Currently this is only needed by live-iso, it reads ./live/live-install.tmpl and - processes only the installpkg lines. It lists the packages needed to complete creation of the - iso using the templates such as x86.tmpl - - Keep in mind that the live-install.tmpl is shared between livemedia-creator and lorax-composer, - even though the results are applied differently. - """ - if compose_type != "live-iso": - return [] - - # get the arch information to pass to the runner - arch = ArchData(get_buildarch(dbo)) - defaults = DataHolder(basearch=arch.basearch) - templatedir = joinpaths(find_templates(share_dir), "live") - runner = LiveTemplateRunner(dbo, templatedir=templatedir, defaults=defaults) - runner.run("live-install.tmpl") - log.debug("extra pkgs = %s", runner.pkgs) - - return runner.pkgnames
- - -
[docs]def start_build(cfg, dnflock, gitlock, branch, recipe_name, compose_type, test_mode=0):
-    """ Start the build
-
-    :param cfg: Configuration object
-    :type cfg: ComposerConfig
-    :param dnflock: Lock and YumBase for depsolving
-    :type dnflock: YumLock
-    :param gitlock: Lock and git repo for reading the recipe
-    :type gitlock: GitLock
-    :param branch: The git branch to read the recipe from
-    :type branch: str
-    :param recipe_name: The name of the recipe to build
-    :type recipe_name: str
-    :param compose_type: The type of output to create from the recipe
-    :type compose_type: str
-    :returns: Unique ID for the build that can be used to track its status
-    :rtype: str
-    """
-    share_dir = cfg.get("composer", "share_dir")
-    lib_dir = cfg.get("composer", "lib_dir")
-
-    # Make sure compose_type is valid, only allow enabled types
-    type_enabled = dict(compose_types(share_dir)).get(compose_type)
-    if type_enabled is None:
-        raise RuntimeError("Invalid compose type (%s), must be one of %s" % (compose_type, [t for t, e in compose_types(share_dir)]))
-    if not type_enabled:
-        raise RuntimeError("Compose type '%s' is disabled on this architecture" % compose_type)
-
-    # Some image types (live-iso) need extra packages for composer to execute the output template
-    with dnflock.lock:
-        extra_pkgs = get_extra_pkgs(dnflock.dbo, share_dir, compose_type)
-    log.debug("Extra packages needed for %s: %s", compose_type, extra_pkgs)
-
-    with gitlock.lock:
-        (commit_id, recipe) = read_recipe_and_id(gitlock.repo, branch, recipe_name)
-
-    # Combine modules and packages and depsolve the list
-    module_nver = recipe.module_nver
-    package_nver = recipe.package_nver
-    package_nver.extend([(name, '*') for name in extra_pkgs])
-
-    projects = sorted(set(module_nver+package_nver), key=lambda p: p[0].lower())
-    deps = []
-    log.info("depsolving %s", recipe["name"])
-    try:
-        # This can possibly update repodata and reset the YumBase object.
-        with dnflock.lock_check:
-            (installed_size, deps) = projects_depsolve_with_size(dnflock.dbo, projects, recipe.group_names, with_core=False)
-    except ProjectsError as e:
-        log.error("start_build depsolve: %s", str(e))
-        raise RuntimeError("Problem depsolving %s: %s" % (recipe["name"], str(e)))
-
-    # Read the kickstart template for this type
-    ks_template_path = joinpaths(share_dir, "composer", compose_type) + ".ks"
-    ks_template = open(ks_template_path, "r").read()
-
-    # How much space will the packages in the default template take?
-    ks_version = makeVersion()
-    ks = KickstartParser(ks_version, errorsAreFatal=False, missingIncludeIsFatal=False)
-    ks.readKickstartFromString(ks_template+"\n%end\n")
-    pkgs = [(name, "*") for name in ks.handler.packages.packageList]
-    grps = [grp.name for grp in ks.handler.packages.groupList]
-    try:
-        with dnflock.lock:
-            (template_size, _) = projects_depsolve_with_size(dnflock.dbo, pkgs, grps, with_core=not ks.handler.packages.nocore)
-    except ProjectsError as e:
-        log.error("start_build depsolve: %s", str(e))
-        raise RuntimeError("Problem depsolving %s: %s" % (recipe["name"], str(e)))
-    log.debug("installed_size = %d, template_size=%d", installed_size, template_size)
-
-    # Minimum LMC disk size is 1GiB, and anaconda bumps the estimated size up by 10% (which doesn't always work),
-    # so add 20% headroom to the combined package size
-    installed_size = int((installed_size+template_size) * 1.2)
-    log.debug("/ partition size = %d", installed_size)
-
-    # Create the results directory
-    build_id = str(uuid4())
-    results_dir = joinpaths(lib_dir, "results", build_id)
-    os.makedirs(results_dir)
-
-    # Write the recipe commit hash
-    commit_path = joinpaths(results_dir, "COMMIT")
-    with open(commit_path, "w") as f:
-        f.write(commit_id)
-
-    # Write the original recipe
-    recipe_path = joinpaths(results_dir, "blueprint.toml")
-    with open(recipe_path, "w") as f:
-        f.write(recipe.toml())
-
-    # Write the frozen recipe
-    frozen_recipe = recipe.freeze(deps)
-    recipe_path = joinpaths(results_dir, "frozen.toml")
-    with open(recipe_path, "w") as f:
-        f.write(frozen_recipe.toml())
-
-    # Write out the dependencies to the results dir
-    deps_path = joinpaths(results_dir, "deps.toml")
-    with open(deps_path, "w") as f:
-        f.write(toml.dumps({"packages":deps}))
-
-    # Save a copy of the original kickstart
-    shutil.copy(ks_template_path, results_dir)
-
-    with dnflock.lock:
-        repos = list(dnflock.dbo.repos.iter_enabled())
-    if not repos:
-        raise RuntimeError("No enabled repos, canceling build.")
-
-    # Create the git rpms, if any, and return the path to the repo under results_dir
-    gitrpm_repo = create_gitrpm_repo(results_dir, recipe)
-
-    # Create the final kickstart with repos and package list
-    ks_path = joinpaths(results_dir, "final-kickstart.ks")
-    with open(ks_path, "w") as f:
-        ks_url = repo_to_ks(repos[0], "url")
-        log.debug("url = %s", ks_url)
-        f.write('url %s\n' % ks_url)
-        for idx, r in enumerate(repos[1:]):
-            ks_repo = repo_to_ks(r, "baseurl")
-            log.debug("repo composer-%s = %s", idx, ks_repo)
-            f.write('repo --name="composer-%s" %s\n' % (idx, ks_repo))
-
-        if gitrpm_repo:
-            log.debug("repo gitrpms = %s", gitrpm_repo)
-            f.write('repo --name="gitrpms" --baseurl="file://%s"\n' % gitrpm_repo)
-
-        # Setup the disk for booting
-        # TODO Add GPT and UEFI boot support
-        f.write('clearpart --all --initlabel\n')
-
-        # Write the root partition and its size in MB (rounded up)
-        f.write('part / --size=%d\n' % ceil(installed_size / 1024**2))
-
-        # Some customizations modify the template before writing it
-        f.write(customize_ks_template(ks_template, recipe))
-
-        for d in deps:
-            f.write(dep_nevra(d)+"\n")
-
-        # Include the rpms from the gitrpm repo directory
-        if gitrpm_repo:
-            for rpm in glob(os.path.join(gitrpm_repo, "*.rpm")):
-                f.write(os.path.basename(rpm)[:-4]+"\n")
-
-        f.write("%end\n")
-
-        # Other customizations can be appended to the kickstart
-        add_customizations(f, recipe)
-
-    # Setup the config to pass to novirt_install
-    log_dir = joinpaths(results_dir, "logs/")
-    cfg_args = compose_args(compose_type)
-
-    # Get the title, project, and release version from the host
-    if not os.path.exists("/etc/os-release"):
-        log.error("/etc/os-release is missing, cannot determine product or release version")
-    os_release = flatconfig("/etc/os-release")
-
-    log.debug("os_release = %s", dict(os_release.items()))
-
-    cfg_args["title"] = os_release.get("PRETTY_NAME", "")
-    cfg_args["project"] = os_release.get("NAME", "")
-    cfg_args["releasever"] = os_release.get("VERSION_ID", "")
-    cfg_args["volid"] = ""
-    cfg_args["extra_boot_args"] = get_kernel_append(recipe)
-
-    if "compression" not in cfg_args:
-        cfg_args["compression"] = "xz"
-
-    if "compress_args" not in cfg_args:
-        cfg_args["compress_args"] = []
-
-    cfg_args.update({
-        "ks": [ks_path],
-        "logfile": log_dir,
-        "timeout": 60,                          # 60 minute timeout
-    })
-    with open(joinpaths(results_dir, "config.toml"), "w") as f:
-        f.write(toml.dumps(cfg_args))
-
-    # Set the initial status
-    open(joinpaths(results_dir, "STATUS"), "w").write("WAITING")
-
-    # Set the test mode, if requested
-    if test_mode > 0:
-        open(joinpaths(results_dir, "TEST"), "w").write("%s" % test_mode)
-
-    write_timestamp(results_dir, TS_CREATED)
-    log.info("Adding %s (%s %s) to compose queue", build_id, recipe["name"], compose_type)
-    os.symlink(results_dir, joinpaths(lib_dir, "queue/new/", build_id))
-
-    return build_id
- -# Supported output types -
[docs]def compose_types(share_dir):
-    r""" Returns a list of tuples of the supported output types, and their state
-
-    The output types come from the kickstart names in /usr/share/lorax/composer/\*.ks
-
-    If they are disabled on the current arch their state is False. If enabled, it is True.
-    eg. [("alibaba", False), ("ext4-filesystem", True), ...]
-    """
-    # These are compose types that are not supported on an architecture. eg. hyper-v on s390
-    # If it is not listed, it is allowed
-    disable_map = {
-        "arm":     ["alibaba", "ami", "google", "hyper-v", "vhd", "vmdk"],
-        "armhfp":  ["alibaba", "ami", "google", "hyper-v", "vhd", "vmdk"],
-        "aarch64": ["alibaba", "google", "hyper-v", "vhd", "vmdk"],
-        "ppc":     ["alibaba", "ami", "google", "hyper-v", "vhd", "vmdk"],
-        "ppc64":   ["alibaba", "ami", "google", "hyper-v", "vhd", "vmdk"],
-        "ppc64le": ["alibaba", "ami", "google", "hyper-v", "vhd", "vmdk"],
-        "s390":    ["alibaba", "ami", "google", "hyper-v", "vhd", "vmdk"],
-        "s390x":   ["alibaba", "ami", "google", "hyper-v", "vhd", "vmdk"],
-    }
-
-    all_types = sorted([os.path.basename(ks)[:-3] for ks in glob(joinpaths(share_dir, "composer/*.ks"))])
-    arch_disabled = disable_map.get(os.uname().machine, [])
-
-    return [(t, t not in arch_disabled) for t in all_types]
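The arch filtering in ``compose_types()`` can be demonstrated standalone. The disable map entry and type list below are a small illustrative subset of the real tables, and the machine string is passed in explicitly instead of reading ``os.uname()``:

```python
# Illustrative subset of compose_types()' disable_map and type list.
DISABLE_MAP = {"s390x": ["alibaba", "ami", "google", "hyper-v", "vhd", "vmdk"]}
ALL_TYPES = ["alibaba", "ext4-filesystem", "qcow2", "vhd"]

def enabled_types(machine):
    # Types listed for this arch get state False, everything else True
    arch_disabled = DISABLE_MAP.get(machine, [])
    return [(t, t not in arch_disabled) for t in ALL_TYPES]

print(enabled_types("s390x"))
# [('alibaba', False), ('ext4-filesystem', True), ('qcow2', True), ('vhd', False)]
print(enabled_types("x86_64"))  # unlisted arch: everything enabled
```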
- -
[docs]def compose_args(compose_type): - """ Returns the settings to pass to novirt_install for the compose type - - :param compose_type: The type of compose to create, from `compose_types()` - :type compose_type: str - - This will return a dict of options that match the ArgumentParser options for livemedia-creator. - These are the ones the define the type of output, it's filename, etc. - Other options will be filled in by `make_compose()` - """ - _MAP = {"tar": {"make_iso": False, - "make_disk": False, - "make_fsimage": False, - "make_appliance": False, - "make_ami": False, - "make_tar": True, - "make_tar_disk": False, - "make_pxe_live": False, - "make_ostree_live": False, - "make_oci": False, - "make_vagrant": False, - "ostree": False, - "live_rootfs_keep_size": False, - "live_rootfs_size": 0, - "image_size_align": 0, - "image_type": False, # False instead of None because of TOML - "qemu_args": [], - "image_name": default_image_name("xz", "root.tar"), - "tar_disk_name": None, - "image_only": True, - "app_name": None, - "app_template": None, - "app_file": None, - "squashfs_only": False, - }, - "liveimg-tar": {"make_iso": False, - "make_disk": False, - "make_fsimage": False, - "make_appliance": False, - "make_ami": False, - "make_tar": True, - "make_tar_disk": False, - "make_pxe_live": False, - "make_ostree_live": False, - "make_oci": False, - "make_vagrant": False, - "ostree": False, - "live_rootfs_keep_size": False, - "live_rootfs_size": 0, - "image_size_align": 0, - "image_type": False, # False instead of None because of TOML - "qemu_args": [], - "image_name": default_image_name("xz", "root.tar"), - "tar_disk_name": None, - "image_only": True, - "app_name": None, - "app_template": None, - "app_file": None, - "squashfs_only": False, - }, - "live-iso": {"make_iso": True, - "make_disk": False, - "make_fsimage": False, - "make_appliance": False, - "make_ami": False, - "make_tar": False, - "make_tar_disk": False, - "make_pxe_live": False, - "make_ostree_live": False, 
- "make_oci": False, - "make_vagrant": False, - "ostree": False, - "live_rootfs_keep_size": False, - "live_rootfs_size": 0, - "image_size_align": 0, - "image_type": False, # False instead of None because of TOML - "qemu_args": [], - "image_name": "live.iso", - "tar_disk_name": None, - "fs_label": "Anaconda", # Live booting may expect this to be 'Anaconda' - "image_only": False, - "app_name": None, - "app_template": None, - "app_file": None, - "iso_only": True, - "iso_name": "live.iso", - "squashfs_only": False, - }, - "partitioned-disk": {"make_iso": False, - "make_disk": True, - "make_fsimage": False, - "make_appliance": False, - "make_ami": False, - "make_tar": False, - "make_tar_disk": False, - "make_pxe_live": False, - "make_ostree_live": False, - "make_oci": False, - "make_vagrant": False, - "ostree": False, - "live_rootfs_keep_size": False, - "live_rootfs_size": 0, - "image_size_align": 0, - "image_type": False, # False instead of None because of TOML - "qemu_args": [], - "image_name": "disk.img", - "tar_disk_name": None, - "fs_label": "", - "image_only": True, - "app_name": None, - "app_template": None, - "app_file": None, - "squashfs_only": False, - }, - "qcow2": {"make_iso": False, - "make_disk": True, - "make_fsimage": False, - "make_appliance": False, - "make_ami": False, - "make_tar": False, - "make_tar_disk": False, - "make_pxe_live": False, - "make_ostree_live": False, - "make_oci": False, - "make_vagrant": False, - "ostree": False, - "live_rootfs_keep_size": False, - "live_rootfs_size": 0, - "image_size_align": 0, - "image_type": "qcow2", - "qemu_args": [], - "image_name": "disk.qcow2", - "tar_disk_name": None, - "fs_label": "", - "image_only": True, - "app_name": None, - "app_template": None, - "app_file": None, - "squashfs_only": False, - }, - "ext4-filesystem": {"make_iso": False, - "make_disk": False, - "make_fsimage": True, - "make_appliance": False, - "make_ami": False, - "make_tar": False, - "make_tar_disk": False, - "make_pxe_live": False, - 
"make_ostree_live": False, - "make_oci": False, - "make_vagrant": False, - "ostree": False, - "live_rootfs_keep_size": False, - "live_rootfs_size": 0, - "image_size_align": 0, - "image_type": False, # False instead of None because of TOML - "qemu_args": [], - "image_name": "filesystem.img", - "tar_disk_name": None, - "fs_label": "", - "image_only": True, - "app_name": None, - "app_template": None, - "app_file": None, - "squashfs_only": False, - }, - "ami": {"make_iso": False, - "make_disk": True, - "make_fsimage": False, - "make_appliance": False, - "make_ami": False, - "make_tar": False, - "make_tar_disk": False, - "make_pxe_live": False, - "make_ostree_live": False, - "make_oci": False, - "make_vagrant": False, - "ostree": False, - "live_rootfs_keep_size": False, - "live_rootfs_size": 0, - "image_size_align": 0, - "image_type": False, - "qemu_args": [], - "image_name": "disk.ami", - "tar_disk_name": None, - "fs_label": "", - "image_only": True, - "app_name": None, - "app_template": None, - "app_file": None, - "squashfs_only": False, - }, - "vhd": {"make_iso": False, - "make_disk": True, - "make_fsimage": False, - "make_appliance": False, - "make_ami": False, - "make_tar": False, - "make_tar_disk": False, - "make_pxe_live": False, - "make_ostree_live": False, - "make_oci": False, - "make_vagrant": False, - "ostree": False, - "live_rootfs_keep_size": False, - "live_rootfs_size": 0, - "image_size_align": 0, - "image_type": "vpc", - "qemu_args": ["-o", "subformat=fixed,force_size"], - "image_name": "disk.vhd", - "tar_disk_name": None, - "fs_label": "", - "image_only": True, - "app_name": None, - "app_template": None, - "app_file": None, - "squashfs_only": False, - }, - "vmdk": {"make_iso": False, - "make_disk": True, - "make_fsimage": False, - "make_appliance": False, - "make_ami": False, - "make_tar": False, - "make_tar_disk": False, - "make_pxe_live": False, - "make_ostree_live": False, - "make_oci": False, - "make_vagrant": False, - "ostree": False, - 
"live_rootfs_keep_size": False, - "live_rootfs_size": 0, - "image_size_align": 0, - "image_type": "vmdk", - "qemu_args": [], - "image_name": "disk.vmdk", - "tar_disk_name": None, - "fs_label": "", - "image_only": True, - "app_name": None, - "app_template": None, - "app_file": None, - "squashfs_only": False, - }, - "openstack": {"make_iso": False, - "make_disk": True, - "make_fsimage": False, - "make_appliance": False, - "make_ami": False, - "make_tar": False, - "make_tar_disk": False, - "make_pxe_live": False, - "make_ostree_live": False, - "make_oci": False, - "make_vagrant": False, - "ostree": False, - "live_rootfs_keep_size": False, - "live_rootfs_size": 0, - "image_size_align": 0, - "image_type": "qcow2", - "qemu_args": [], - "image_name": "disk.qcow2", - "tar_disk_name": None, - "fs_label": "", - "image_only": True, - "app_name": None, - "app_template": None, - "app_file": None, - "squashfs_only": False, - }, - "google": {"make_iso": False, - "make_disk": True, - "make_fsimage": False, - "make_appliance": False, - "make_ami": False, - "make_tar": False, - "make_tar_disk": True, - "make_pxe_live": False, - "make_ostree_live": False, - "make_oci": False, - "make_vagrant": False, - "ostree": False, - "live_rootfs_keep_size": False, - "live_rootfs_size": 0, - "image_size_align": 1024, - "image_type": False, # False instead of None because of TOML - "qemu_args": [], - "image_name": "disk.tar.gz", - "tar_disk_name": "disk.raw", - "compression": "gzip", - "compress_args": ["-9"], - "fs_label": "", - "image_only": True, - "app_name": None, - "app_template": None, - "app_file": None, - "squashfs_only": False, - }, - "hyper-v": {"make_iso": False, - "make_disk": True, - "make_fsimage": False, - "make_appliance": False, - "make_ami": False, - "make_tar": False, - "make_tar_disk": False, - "make_pxe_live": False, - "make_ostree_live": False, - "make_oci": False, - "make_vagrant": False, - "ostree": False, - "live_rootfs_keep_size": False, - "live_rootfs_size": 0, - 
"image_size_align": 0, - "image_type": "vhdx", - "qemu_args": [], - "image_name": "disk.vhdx", - "tar_disk_name": None, - "fs_label": "", - "image_only": True, - "app_name": None, - "app_template": None, - "app_file": None, - "squashfs_only": False, - }, - "alibaba": {"make_iso": False, - "make_disk": True, - "make_fsimage": False, - "make_appliance": False, - "make_ami": False, - "make_tar": False, - "make_tar_disk": False, - "make_pxe_live": False, - "make_ostree_live": False, - "make_oci": False, - "make_vagrant": False, - "ostree": False, - "live_rootfs_keep_size": False, - "live_rootfs_size": 0, - "image_size_align": 0, - "image_type": "qcow2", - "qemu_args": [], - "image_name": "disk.qcow2", - "tar_disk_name": None, - "fs_label": "", - "image_only": True, - "app_name": None, - "app_template": None, - "app_file": None, - "squashfs_only": False, - }, - } - return _MAP[compose_type]
- -
[docs]def move_compose_results(cfg, results_dir): - """Move the final image to the results_dir and cleanup the unneeded compose files - - :param cfg: Build configuration - :type cfg: DataHolder - :param results_dir: Directory to put the results into - :type results_dir: str - """ - if cfg["make_tar"]: - shutil.move(joinpaths(cfg["result_dir"], cfg["image_name"]), results_dir) - elif cfg["make_iso"]: - # Output from live iso is always a boot.iso under images/, move and rename it - shutil.move(joinpaths(cfg["result_dir"], cfg["iso_name"]), joinpaths(results_dir, cfg["image_name"])) - elif cfg["make_disk"] or cfg["make_fsimage"]: - shutil.move(joinpaths(cfg["result_dir"], cfg["image_name"]), joinpaths(results_dir, cfg["image_name"])) - - - # Cleanup the compose directory, but only if it looks like a compose directory - if os.path.basename(cfg["result_dir"]) == "compose": - shutil.rmtree(cfg["result_dir"]) - else: - log.error("Incorrect compose directory, not cleaning up")
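The guard at the end of `move_compose_results()` only deletes the working directory when its basename is literally `compose`, so a misconfigured path is never recursively removed. A self-contained sketch of that safety check (the `safe_cleanup` name is hypothetical):

```python
import os
import shutil
import tempfile

def safe_cleanup(result_dir):
    """Remove result_dir only if it looks like a compose working directory.

    Mirrors the guard in move_compose_results(): anything with an unexpected
    basename is left alone rather than recursively deleted.
    """
    if os.path.basename(result_dir) == "compose":
        shutil.rmtree(result_dir)
        return True
    return False

work = tempfile.mkdtemp()
compose_dir = os.path.join(work, "compose")
os.makedirs(compose_dir)
print(safe_cleanup(compose_dir))  # True  (removed)
print(safe_cleanup(work))         # False (unexpected name, left alone)
shutil.rmtree(work)
```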
\ No newline at end of file
diff --git a/docs/html/_modules/pylorax/api/config.html b/docs/html/_modules/pylorax/api/config.html
deleted file mode 100644
index bf8360c7..00000000
--- a/docs/html/_modules/pylorax/api/config.html
+++ /dev/null
@@ -1,340 +0,0 @@
-pylorax.api.config — Lorax 35.0 documentation

Source code for pylorax.api.config

-#
-# Copyright (C) 2017  Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-import configparser
-import grp
-import os
-import pwd
-
-from pylorax.sysutils import joinpaths
-
-
[docs]class ComposerConfig(configparser.ConfigParser): -
[docs] def get_default(self, section, option, default): - try: - return self.get(section, option) - except configparser.Error: - return default
- - -
[docs]def configure(conf_file="/etc/lorax/composer.conf", root_dir="/", test_config=False): - """lorax-composer configuration - - :param conf_file: Path to the config file overriding the default settings - :type conf_file: str - :param root_dir: Directory to prepend to paths, defaults to / - :type root_dir: str - :param test_config: Set to True to skip reading conf_file - :type test_config: bool - :returns: Configuration - :rtype: ComposerConfig - """ - conf = ComposerConfig() - - # set defaults - conf.add_section("composer") - conf.set("composer", "share_dir", os.path.realpath(joinpaths(root_dir, "/usr/share/lorax/"))) - conf.set("composer", "lib_dir", os.path.realpath(joinpaths(root_dir, "/var/lib/lorax/composer/"))) - conf.set("composer", "repo_dir", os.path.realpath(joinpaths(root_dir, "/var/lib/lorax/composer/repos.d/"))) - conf.set("composer", "dnf_conf", os.path.realpath(joinpaths(root_dir, "/var/tmp/composer/dnf.conf"))) - conf.set("composer", "dnf_root", os.path.realpath(joinpaths(root_dir, "/var/tmp/composer/dnf/root/"))) - conf.set("composer", "cache_dir", os.path.realpath(joinpaths(root_dir, "/var/tmp/composer/cache/"))) - conf.set("composer", "tmp", os.path.realpath(joinpaths(root_dir, "/var/tmp/"))) - - conf.add_section("users") - conf.set("users", "root", "1") - - # Enable all available repo files by default - conf.add_section("repos") - conf.set("repos", "use_system_repos", "1") - conf.set("repos", "enabled", "*") - - conf.add_section("dnf") - - if not test_config: - # read the config file - if os.path.isfile(conf_file): - conf.read(conf_file) - - return conf
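`configure()` relies on `ComposerConfig.get_default()` to read optional settings without raising. A standalone sketch of that fallback behavior using only `configparser` (the `ExampleConfig` name is hypothetical):

```python
import configparser

class ExampleConfig(configparser.ConfigParser):
    """Sketch of ComposerConfig: return a default instead of raising when a
    section/option is missing."""
    def get_default(self, section, option, default):
        try:
            return self.get(section, option)
        except configparser.Error:
            return default

conf = ExampleConfig()
conf.add_section("composer")
conf.set("composer", "tmp", "/var/tmp/")
print(conf.get_default("composer", "tmp", "/tmp/"))         # /var/tmp/
print(conf.get_default("composer", "releasever", "unset"))  # unset
```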
- -
[docs]def make_owned_dir(p_dir, uid, gid): - """Make a directory and its parents, setting owner and group - - :param p_dir: path to directory to create - :type p_dir: string - :param uid: uid of owner - :type uid: int - :param gid: gid of owner - :type gid: int - :returns: list of errors - :rtype: list of str - - Check to make sure it does not have o+rw permissions and that it is owned by uid:gid - """ - errors = [] - if not os.path.isdir(p_dir): - # Make sure no o+rw permissions are set - orig_umask = os.umask(0o006) - os.makedirs(p_dir, 0o771) - os.chown(p_dir, uid, gid) - os.umask(orig_umask) - else: - p_stat = os.stat(p_dir) - if p_stat.st_mode & 0o006 != 0: - errors.append("Incorrect permissions on %s, no o+rw permissions are allowed." % p_dir) - - if p_stat.st_gid != gid or p_stat.st_uid != 0: - gr_name = grp.getgrgid(gid).gr_name - u_name = pwd.getpwuid(uid).pw_name - errors.append("%s should be owned by %s:%s" % (p_dir, u_name, gr_name)) - - return errors
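The permission test in `make_owned_dir()` masks the mode with `0o006`, which covers exactly the world read and write bits. A runnable sketch of that check in isolation (the `check_dir_perms` helper is hypothetical):

```python
import os
import tempfile

def check_dir_perms(p_dir):
    """Return an error string if p_dir grants o+rw, else None.

    Same bit test make_owned_dir() uses: 0o006 masks the world r/w bits.
    """
    p_stat = os.stat(p_dir)
    if p_stat.st_mode & 0o006 != 0:
        return "Incorrect permissions on %s, no o+rw permissions are allowed." % p_dir
    return None

d = tempfile.mkdtemp()
os.chmod(d, 0o771)
print(check_dir_perms(d))  # None: group rwx is fine, world has only execute
os.chmod(d, 0o777)
print(check_dir_perms(d) is None)  # False: world r/w is rejected
os.rmdir(d)
```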
- -
[docs]def make_dnf_dirs(conf, uid, gid): - """Make any missing dnf directories owned by user:group - - :param conf: The configuration to use - :type conf: ComposerConfig - :param uid: uid of owner - :type uid: int - :param gid: gid of owner - :type gid: int - :returns: list of errors - :rtype: list of str - """ - errors = [] - for p in ["dnf_conf", "repo_dir", "cache_dir", "dnf_root"]: - p_dir = os.path.abspath(conf.get("composer", p)) - if p == "dnf_conf": - p_dir = os.path.dirname(p_dir) - errors.extend(make_owned_dir(p_dir, uid, gid)) - return errors
- -
[docs]def make_queue_dirs(conf, gid): - """Make any missing queue directories - - :param conf: The configuration to use - :type conf: ComposerConfig - :param gid: Group ID that has access to the queue directories - :type gid: int - :returns: list of errors - :rtype: list of str - """ - errors = [] - lib_dir = conf.get("composer", "lib_dir") - for p in ["queue/run", "queue/new", "results"]: - p_dir = joinpaths(lib_dir, p) - errors.extend(make_owned_dir(p_dir, 0, gid)) - return errors
\ No newline at end of file
diff --git a/docs/html/_modules/pylorax/api/crossdomain.html b/docs/html/_modules/pylorax/api/crossdomain.html
deleted file mode 100644
index d8f653fe..00000000
--- a/docs/html/_modules/pylorax/api/crossdomain.html
+++ /dev/null
@@ -1,264 +0,0 @@
-pylorax.api.crossdomain — Lorax 31.7 documentation

Source code for pylorax.api.crossdomain

-#
-# Copyright (C) 2017  Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-
-# crossdomain decorator from - http://flask.pocoo.org/snippets/56/
-from datetime import timedelta
-from flask import make_response, request, current_app
-from functools import update_wrapper
-
-
-
[docs]def crossdomain(origin, methods=None, headers=None, - max_age=21600, attach_to_all=True, - automatic_options=True): - if methods is not None: - methods = ', '.join(sorted(x.upper() for x in methods)) - if headers is not None and not isinstance(headers, str): - headers = ', '.join(x.upper() for x in headers) - if not isinstance(origin, list): - origin = [origin] - if isinstance(max_age, timedelta): - max_age = int(max_age.total_seconds()) - - def get_methods(): - if methods is not None: - return methods - - options_resp = current_app.make_default_options_response() - return options_resp.headers['allow'] - - def decorator(f): - def wrapped_function(*args, **kwargs): - if automatic_options and request.method == 'OPTIONS': - resp = current_app.make_default_options_response() - else: - resp = make_response(f(*args, **kwargs)) - if not attach_to_all and request.method != 'OPTIONS': - return resp - - h = resp.headers - - h.extend([("Access-Control-Allow-Origin", orig) for orig in origin]) - h['Access-Control-Allow-Methods'] = get_methods() - h['Access-Control-Max-Age'] = str(max_age) - if headers is not None: - h['Access-Control-Allow-Headers'] = headers - return resp - - f.provide_automatic_options = False - f.required_methods = ['OPTIONS'] - return update_wrapper(wrapped_function, f) - return decorator
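Before building the CORS headers, `crossdomain()` normalizes its arguments: method/header lists are joined into comma-separated strings, a single origin is wrapped in a list, and a `timedelta` max-age becomes seconds. That normalization can be exercised on its own, without Flask (the `normalize_cors` name is hypothetical):

```python
from datetime import timedelta

def normalize_cors(methods, headers, origin, max_age):
    """Sketch of the argument normalization at the top of crossdomain()."""
    if methods is not None:
        methods = ', '.join(sorted(x.upper() for x in methods))
    if headers is not None and not isinstance(headers, str):
        headers = ', '.join(x.upper() for x in headers)
    if not isinstance(origin, list):
        origin = [origin]
    if isinstance(max_age, timedelta):
        max_age = int(max_age.total_seconds())
    return methods, headers, origin, max_age

m, h, o, a = normalize_cors(["post", "get"], ["X-Token"], "*", timedelta(hours=6))
print(m)  # GET, POST
print(a)  # 21600
```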
\ No newline at end of file
diff --git a/docs/html/_modules/pylorax/api/dnfbase.html b/docs/html/_modules/pylorax/api/dnfbase.html
deleted file mode 100644
index d566b473..00000000
--- a/docs/html/_modules/pylorax/api/dnfbase.html
+++ /dev/null
@@ -1,386 +0,0 @@
-pylorax.api.dnfbase — Lorax 35.0 documentation

Source code for pylorax.api.dnfbase

-#
-# Copyright (C) 2017-2018 Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-# pylint: disable=bad-preconf-access
-
-import logging
-log = logging.getLogger("lorax-composer")
-
-import dnf
-import dnf.logging
-from glob import glob
-import os
-import shutil
-from threading import Lock
-import time
-
-from pylorax import DEFAULT_PLATFORM_ID
-from pylorax.sysutils import flatconfig
-
-
[docs]class DNFLock(object): - """Hold the dnf.Base object and a Lock to control access to it. - - self.dbo is a property that returns the dnf.Base object, but it *may* change - from one call to the next if the upstream repositories have changed. - """ - def __init__(self, conf, expire_secs=6*60*60): - self._conf = conf - self._lock = Lock() - self.dbo = get_base_object(self._conf) - self._expire_secs = expire_secs - self._expire_time = time.time() + self._expire_secs - - @property - def lock(self): - """Check for repo updates (using expiration time) and return the lock - - If the repository has been updated, tear down the old dnf.Base and - create a new one. This is the only way to force dnf to use the new - metadata. - """ - if time.time() > self._expire_time: - return self.lock_check - return self._lock - - @property - def lock_check(self): - """Force a check for repo updates and return the lock - - Use this method sparingly, it removes the repodata and downloads a new copy every time. - """ - self._expire_time = time.time() + self._expire_secs - self.dbo.update_cache() - return self._lock
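`DNFLock.lock` is a time-gated property: accessing it past the expiration time triggers a cache refresh before handing back the same `Lock`. A minimal framework-free sketch of that scheme, with the refresh reduced to a callback (`ExpiringLock` and its short timings are hypothetical; the real default is six hours):

```python
import time
from threading import Lock

class ExpiringLock:
    """Sketch of DNFLock's expiry: the lock property runs a refresh callback
    once the expiration time has passed, then resets the timer."""
    def __init__(self, refresh, expire_secs=6*60*60):
        self._lock = Lock()
        self._refresh = refresh
        self._expire_secs = expire_secs
        self._expire_time = time.time() + expire_secs

    @property
    def lock(self):
        if time.time() > self._expire_time:
            self._expire_time = time.time() + self._expire_secs
            self._refresh()
        return self._lock

calls = []
lk = ExpiringLock(lambda: calls.append(1), expire_secs=0.05)
time.sleep(0.1)
with lk.lock:   # expired: refresh runs before the lock is returned
    pass
print(len(calls))  # 1
```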
- -
[docs]def get_base_object(conf): - """Get the DNF object with settings from the config file - - :param conf: configuration object - :type conf: ComposerParser - :returns: A DNF Base object - :rtype: dnf.Base - """ - cachedir = os.path.abspath(conf.get("composer", "cache_dir")) - dnfconf = os.path.abspath(conf.get("composer", "dnf_conf")) - dnfroot = os.path.abspath(conf.get("composer", "dnf_root")) - repodir = os.path.abspath(conf.get("composer", "repo_dir")) - - # Setup the config for the DNF Base object - dbo = dnf.Base() - dbc = dbo.conf -# TODO - Handle this -# dbc.logdir = logdir - dbc.installroot = dnfroot - if not os.path.isdir(dnfroot): - os.makedirs(dnfroot) - if not os.path.isdir(repodir): - os.makedirs(repodir) - - dbc.cachedir = cachedir - dbc.reposdir = [repodir] - dbc.install_weak_deps = False - dbc.prepend_installroot('persistdir') - # this is a weird 'AppendOption' thing that, when you set it, - # actually appends. Doing this adds 'nodocs' to the existing list - # of values, over in libdnf, it does not replace the existing values. 
- dbc.tsflags = ['nodocs'] - - if conf.get_default("dnf", "proxy", None): - dbc.proxy = conf.get("dnf", "proxy") - - if conf.has_option("dnf", "sslverify") and not conf.getboolean("dnf", "sslverify"): - dbc.sslverify = False - - # If the system repos are enabled read the dnf vars from /etc/dnf/vars/ - if not conf.has_option("repos", "use_system_repos") or conf.getboolean("repos", "use_system_repos"): - dbc.substitutions.update_from_etc("/") - log.info("dnf vars: %s", dbc.substitutions) - - _releasever = conf.get_default("composer", "releasever", None) - if not _releasever: - # Use the releasever of the host system - _releasever = dnf.rpm.detect_releasever("/") - log.info("releasever = %s", _releasever) - dbc.releasever = _releasever - - # DNF 3.2 needs to have module_platform_id set, otherwise depsolve won't work correctly - if not os.path.exists("/etc/os-release"): - log.warning("/etc/os-release is missing, cannot determine platform id, falling back to %s", DEFAULT_PLATFORM_ID) - platform_id = DEFAULT_PLATFORM_ID - else: - os_release = flatconfig("/etc/os-release") - platform_id = os_release.get("PLATFORM_ID", DEFAULT_PLATFORM_ID) - log.info("Using %s for module_platform_id", platform_id) - dbc.module_platform_id = platform_id - - # Make sure metadata is always current - dbc.metadata_expire = 0 - dbc.metadata_expire_filter = "never" - - # write the dnf configuration file - with open(dnfconf, "w") as f: - f.write(dbc.dump()) - - # dnf needs the repos all in one directory, composer uses repodir for this - # if system repos are supposed to be used, copy them into repodir, overwriting any previous copies - if not conf.has_option("repos", "use_system_repos") or conf.getboolean("repos", "use_system_repos"): - for repo_file in glob("/etc/yum.repos.d/*.repo"): - shutil.copy2(repo_file, repodir) - dbo.read_all_repos() - - # Remove any duplicate repo entries. These can cause problems with Anaconda, which will fail - # with space problems. 
- repos = sorted(list(r.id for r in dbo.repos.iter_enabled())) - seen = {"baseurl": [], "mirrorlist": [], "metalink": []} - for source_name in repos: - remove = False - repo = dbo.repos.get(source_name, None) - if repo is None: - log.warning("repo %s vanished while removing duplicates", source_name) - continue - if repo.baseurl: - if repo.baseurl[0] in seen["baseurl"]: - log.info("Removing duplicate repo: %s baseurl=%s", source_name, repo.baseurl[0]) - remove = True - else: - seen["baseurl"].append(repo.baseurl[0]) - elif repo.mirrorlist: - if repo.mirrorlist in seen["mirrorlist"]: - log.info("Removing duplicate repo: %s mirrorlist=%s", source_name, repo.mirrorlist) - remove = True - else: - seen["mirrorlist"].append(repo.mirrorlist) - elif repo.metalink: - if repo.metalink in seen["metalink"]: - log.info("Removing duplicate repo: %s metalink=%s", source_name, repo.metalink) - remove = True - else: - seen["metalink"].append(repo.metalink) - - if remove: - del dbo.repos[source_name] - - # Update the metadata from the enabled repos to speed up later operations - log.info("Updating repository metadata") - try: - dbo.fill_sack(load_system_repo=False) - dbo.read_comps() - dbo.update_cache() - except dnf.exceptions.Error as e: - log.error("Failed to update metadata: %s", str(e)) - raise RuntimeError("Fetching metadata failed: %s" % str(e)) - - return dbo
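The duplicate-removal loop at the end of `get_base_object()` keeps the first repo claiming a given baseurl, mirrorlist, or metalink and drops later ones. The same first-wins filter can be sketched over plain dicts instead of dnf repo objects (`dedupe_repos` and the example URLs are hypothetical):

```python
def dedupe_repos(repos):
    """First-wins dedupe, keyed on baseurl[0], mirrorlist, or metalink.

    repos: dict of id -> {"baseurl": [...], "mirrorlist": ..., "metalink": ...}
    """
    seen = {"baseurl": [], "mirrorlist": [], "metalink": []}
    keep = {}
    for rid in sorted(repos):
        repo = repos[rid]
        if repo.get("baseurl"):
            key, val = "baseurl", repo["baseurl"][0]
        elif repo.get("mirrorlist"):
            key, val = "mirrorlist", repo["mirrorlist"]
        elif repo.get("metalink"):
            key, val = "metalink", repo["metalink"]
        else:
            keep[rid] = repo
            continue
        if val in seen[key]:
            continue  # duplicate source, drop it
        seen[key].append(val)
        keep[rid] = repo
    return keep

repos = {
    "fedora":   {"baseurl": ["https://example.com/f35"]},
    "fedora-2": {"baseurl": ["https://example.com/f35"]},   # duplicate
    "updates":  {"mirrorlist": "https://example.com/mirrors"},
}
print(sorted(dedupe_repos(repos)))  # ['fedora', 'updates']
```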
\ No newline at end of file
diff --git a/docs/html/_modules/pylorax/api/flask_blueprint.html b/docs/html/_modules/pylorax/api/flask_blueprint.html
deleted file mode 100644
index fc737ac7..00000000
--- a/docs/html/_modules/pylorax/api/flask_blueprint.html
+++ /dev/null
@@ -1,254 +0,0 @@
-pylorax.api.flask_blueprint — Lorax 35.0 documentation

Source code for pylorax.api.flask_blueprint

-#
-# Copyright (C) 2019 Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-""" Flask Blueprints that support skipping routes
-
-When using Blueprints for API versioning you will usually want to fall back
-to the previous version's rules for routes that have no new behavior. To do
-this we add a 'skip_rule' list to the Blueprint's options dictionary. It lists
-all of the routes that you do not want to register.
-
-For example:
-    from pylorax.api.v0 import v0
-    from pylorax.api.v1 import v1
-
-    server.register_blueprint(v0, url_prefix="/api/v0/")
-    server.register_blueprint(v0, url_prefix="/api/v1/", skip_rules=["/blueprints/list"]
-    server.register_blueprint(v1, url_prefix="/api/v1/")
-
-This will register all of v0's routes under `/api/v0`, and all but `/blueprints/list` under /api/v1,
-and then register v1's version of `/blueprints/list` under `/api/v1`
-
-"""
-from flask import Blueprint
-from flask.blueprints import BlueprintSetupState
-
-
[docs]class BlueprintSetupStateSkip(BlueprintSetupState): - def __init__(self, blueprint, app, options, first_registration, skip_rules): - self._skip_rules = skip_rules - super(BlueprintSetupStateSkip, self).__init__(blueprint, app, options, first_registration) - -
[docs] def add_url_rule(self, rule, endpoint=None, view_func=None, **options): - if rule not in self._skip_rules: - super(BlueprintSetupStateSkip, self).add_url_rule(rule, endpoint, view_func, **options)
- -
[docs]class BlueprintSkip(Blueprint): - def __init__(self, *args, **kwargs): - super(BlueprintSkip, self).__init__(*args, **kwargs) - -
[docs] def make_setup_state(self, app, options, first_registration=False): - skip_rules = options.pop("skip_rules", []) - return BlueprintSetupStateSkip(self, app, options, first_registration, skip_rules)
-
\ No newline at end of file
diff --git a/docs/html/_modules/pylorax/api/gitrpm.html b/docs/html/_modules/pylorax/api/gitrpm.html
deleted file mode 100644
index a94fa96d..00000000
--- a/docs/html/_modules/pylorax/api/gitrpm.html
+++ /dev/null
@@ -1,422 +0,0 @@
-pylorax.api.gitrpm — Lorax 35.0 documentation

Source code for pylorax.api.gitrpm

-# Copyright (C) 2019 Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-""" Clone a git repository and package it as an rpm
-
-This module contains functions for cloning a git repo, creating a tar archive of
-the selected commit, branch, or tag, and packaging the files into an rpm that will
-be installed by anaconda when creating the image.
-"""
-import logging
-log = logging.getLogger("lorax-composer")
-
-import os
-from rpmfluff import SimpleRpmBuild
-import shutil
-import subprocess
-import tempfile
-import time
-
-from pylorax.sysutils import joinpaths
-
-
[docs]def get_repo_description(gitRepo): - """ Return a description including the git repo and reference - - :param gitRepo: A dict with the repository details - :type gitRepo: dict - :returns: A string with the git repo url and reference - :rtype: str - """ - return "Created from %s, reference '%s', on %s" % (gitRepo["repo"], gitRepo["ref"], time.ctime())
- -
[docs]class GitArchiveTarball: - """Create a git archive of the selected git repo and reference""" - def __init__(self, gitRepo): - self._gitRepo = gitRepo - self.sourceName = self._gitRepo["rpmname"]+".tar.xz" - -
[docs] def write_file(self, sourcesDir): - """ Create the tar archive - - :param sourcesDir: Path to use for creating the archive - :type sourcesDir: str - - This clones the git repository and creates a git archive from the specified reference. - The result is in RPMNAME.tar.xz under the sourcesDir - """ - # Clone the repository into a temporary location - cmd = ["git", "clone", self._gitRepo["repo"], joinpaths(sourcesDir, "gitrepo")] - log.debug(cmd) - try: - subprocess.check_output(cmd, stderr=subprocess.STDOUT) - except subprocess.CalledProcessError as e: - log.error("Failed to clone %s: %s", self._gitRepo["repo"], e.output) - raise RuntimeError("Failed to clone %s" % self._gitRepo["repo"]) - - oldcwd = os.getcwd() - try: - os.chdir(joinpaths(sourcesDir, "gitrepo")) - - # Configure archive to create a .tar.xz - cmd = ["git", "config", "tar.tar.xz.command", "xz -c"] - log.debug(cmd) - subprocess.check_call(cmd) - - cmd = ["git", "archive", "--prefix", self._gitRepo["rpmname"] + "/", "-o", joinpaths(sourcesDir, self.sourceName), self._gitRepo["ref"]] - log.debug(cmd) - try: - subprocess.check_output(cmd, stderr=subprocess.STDOUT) - except subprocess.CalledProcessError as e: - log.error("Failed to archive %s: %s", self._gitRepo["repo"], e.output) - raise RuntimeError('Failed to archive %s from ref "%s"' % (self._gitRepo["repo"], - self._gitRepo["ref"])) - finally: - # Cleanup even if there was an error - os.chdir(oldcwd) - shutil.rmtree(joinpaths(sourcesDir, "gitrepo"))
- -
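`write_file()` runs three git commands in sequence: clone the repo, configure `tar.tar.xz` output, and archive the chosen reference with the rpm name as prefix. A sketch that only *builds* those command lines, executing nothing (the `git_archive_cmds` helper is hypothetical):

```python
import os.path

def git_archive_cmds(git_repo, sources_dir):
    """Return the git command lines write_file() runs, in order.

    git_repo needs the "rpmname", "repo", and "ref" keys described above.
    """
    clone_dir = os.path.join(sources_dir, "gitrepo")
    archive = os.path.join(sources_dir, git_repo["rpmname"] + ".tar.xz")
    return [
        ["git", "clone", git_repo["repo"], clone_dir],
        ["git", "config", "tar.tar.xz.command", "xz -c"],
        ["git", "archive", "--prefix", git_repo["rpmname"] + "/",
         "-o", archive, git_repo["ref"]],
    ]

cmds = git_archive_cmds({"rpmname": "server-config",
                         "repo": "https://example.com/repo.git",
                         "ref": "v1.0"}, "/tmp/sources")
print(cmds[2][-1])  # v1.0
```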
[docs]class GitRpmBuild(SimpleRpmBuild): - """Build an rpm containing files from a git repository""" - def __init__(self, *args, **kwargs): - self._base_dir = None - super().__init__(*args, **kwargs) - -
[docs] def check(self): - raise NotImplementedError
- -
[docs] def get_base_dir(self): - """Place all the files under a temporary directory + rpmbuild/ - """ - if not self._base_dir: - self._base_dir = tempfile.mkdtemp(prefix="lorax-git-rpm.") - return joinpaths(self._base_dir, "rpmbuild")
- -
[docs] def cleanup_tmpdir(self): - """Remove the temporary directory and all of its contents - """ - if len(self._base_dir) < 5: - raise RuntimeError("Invalid base_dir: %s" % self.get_base_dir()) - - shutil.rmtree(self._base_dir)
- -
[docs] def clean(self): - """Remove the base directory from inside the tmpdir""" - if len(self.get_base_dir()) < 5: - raise RuntimeError("Invalid base_dir: %s" % self.get_base_dir()) - shutil.rmtree(self.get_base_dir(), ignore_errors=True)
- -
[docs] def add_git_tarball(self, gitRepo): - """Add a tar archive of a git repository to the rpm - - :param gitRepo: A dict with the repository details - :type gitRepo: dict - - This populates the rpm with the URL of the git repository, the summary - describing the repo, the description of the repository and reference used, - and sets up the rpm to install the archive contents into the destination - path. - """ - self.addUrl(gitRepo["repo"]) - self.add_summary(gitRepo["summary"]) - self.add_description(get_repo_description(gitRepo)) - self.addLicense("Unknown") - sourceIndex = self.add_source(GitArchiveTarball(gitRepo)) - self.section_build += "tar -xvf %s\n" % self.sources[sourceIndex].sourceName - dest = os.path.normpath(gitRepo["destination"]) - # Prevent double slash root - if dest == "/": - dest = "" - self.create_parent_dirs(dest) - self.section_install += "cp -r %s/. $RPM_BUILD_ROOT/%s\n" % (gitRepo["rpmname"], dest) - sub = self.get_subpackage(None) - if not dest: - # / is special, we don't want to include / itself, just what's under it - sub.section_files += "/*\n" - else: - sub.section_files += "%s/\n" % dest
- -
[docs]def make_git_rpm(gitRepo, dest): - """ Create an rpm from the specified git repo - - :param gitRepo: A dict with the repository details - :type gitRepo: dict - - This will clone the git repository, create an archive of the selected reference, - and build an rpm that will install the files from the repository under the destination - directory. The gitRepo dict should have the following fields:: - - rpmname: "server-config" - rpmversion: "1.0" - rpmrelease: "1" - summary: "Setup files for server deployment" - repo: "PATH OF GIT REPO TO CLONE" - ref: "v1.0" - destination: "/opt/server/" - - * rpmname: Name of the rpm to create, also used as the prefix name in the tar archive - * rpmversion: Version of the rpm, eg. "1.0.0" - * rpmrelease: Release of the rpm, eg. "1" - * summary: Summary string for the rpm - * repo: URL of the git repo to clone and create the archive from - * ref: Git reference to check out. eg. origin/branch-name, git tag, or git commit hash - * destination: Path to install the / of the git repo at when installing the rpm - """ - gitRpm = GitRpmBuild(gitRepo["rpmname"], gitRepo["rpmversion"], gitRepo["rpmrelease"], ["noarch"]) - try: - gitRpm.add_git_tarball(gitRepo) - gitRpm.do_make() - rpmfile = gitRpm.get_built_rpm("noarch") - shutil.move(rpmfile, dest) - except Exception as e: - log.error("Creating git repo rpm: %s", e) - raise RuntimeError("Creating git repo rpm: %s" % e) - finally: - gitRpm.cleanup_tmpdir() - - return os.path.basename(rpmfile)
- -# Create the git rpms, if any, and return the path to the repo under results_dir -
[docs]def create_gitrpm_repo(results_dir, recipe): - """Create a dnf repository with the rpms from the recipe - - :param results_dir: Path to create the repository under - :type results_dir: str - :param recipe: The recipe to get the repos.git entries from - :type recipe: Recipe - :returns: Path to the dnf repository or "" - :rtype: str - - This function creates a dnf repository directory at results_dir+"repo/", - creates rpms for all of the repos.git entries in the recipe, runs createrepo_c - on the dnf repository so that Anaconda can use it, and returns the path to the - repository to the caller. - """ - if "repos" not in recipe or "git" not in recipe["repos"]: - return "" - - gitrepo = joinpaths(results_dir, "repo/") - if not os.path.exists(gitrepo): - os.makedirs(gitrepo) - for r in recipe["repos"]["git"]: - make_git_rpm(r, gitrepo) - cmd = ["createrepo_c", gitrepo] - log.debug(cmd) - try: - subprocess.check_output(cmd, stderr=subprocess.STDOUT) - except subprocess.CalledProcessError as e: - log.error("Failed to create repo at %s: %s", gitrepo, e.output) - raise RuntimeError("Failed to create repo at %s" % gitrepo) - - return gitrepo
\ No newline at end of file
diff --git a/docs/html/_modules/pylorax/api/projects.html b/docs/html/_modules/pylorax/api/projects.html
deleted file mode 100644
index accccabf..00000000
--- a/docs/html/_modules/pylorax/api/projects.html
+++ /dev/null
@@ -1,897 +0,0 @@
- pylorax.api.projects — Lorax 35.0 documentation

Source code for pylorax.api.projects

-#
-# Copyright (C) 2017  Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-import logging
-log = logging.getLogger("lorax-composer")
-
-from configparser import ConfigParser
-import dnf
-from glob import glob
-import os
-import time
-
-from pylorax.api.bisect import insort_left
-from pylorax.sysutils import joinpaths
-
-TIME_FORMAT = "%Y-%m-%dT%H:%M:%S"
-
-
-
[docs]class ProjectsError(Exception): - pass
- - -
[docs]def api_time(t): - """Convert time since epoch to a string - - :param t: Seconds since epoch - :type t: int - :returns: Time string - :rtype: str - """ - return time.strftime(TIME_FORMAT, time.localtime(t))
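The conversion above is a thin wrapper over ``time.strftime``; a standalone sketch showing the timestamp format it produces:

```python
import time

TIME_FORMAT = "%Y-%m-%dT%H:%M:%S"

def api_time(t):
    """Convert seconds since the epoch to the API's timestamp string."""
    return time.strftime(TIME_FORMAT, time.localtime(t))

# The exact string depends on the local timezone, but it always has the
# fixed 19-character YYYY-MM-DDTHH:MM:SS shape and parses back cleanly.
stamp = api_time(1609459200)
time.strptime(stamp, TIME_FORMAT)
```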
- - -
[docs]def api_changelog(changelog): - """Convert the changelog to a string - - :param changelog: A list of time, author, string tuples. - :type changelog: tuple - :returns: The most recent changelog text or "" - :rtype: str - - This returns only the most recent changelog entry. - """ - try: - entry = changelog[0][2] - except IndexError: - entry = "" - return entry
- - -
[docs]def pkg_to_project(pkg): - """Extract the details from a hawkey.Package object - - :param pkg: hawkey.Package object with package details - :type pkg: hawkey.Package - :returns: A dict with the name, summary, description, and url. - :rtype: dict - - upstream_vcs is hard-coded to UPSTREAM_VCS - """ - return {"name": pkg.name, - "summary": pkg.summary, - "description": pkg.description, - "homepage": pkg.url, - "upstream_vcs": "UPSTREAM_VCS"}
- - -
[docs]def pkg_to_build(pkg): - """Extract the build details from a hawkey.Package object - - :param pkg: hawkey.Package object with package details - :type pkg: hawkey.Package - :returns: A dict with the build details, epoch, release, arch, build_time, changelog, ... - :rtype: dict - - metadata entries are hard-coded to {} - - Note that this only returns the build dict, it does not include the name, description, etc. - """ - return {"epoch": pkg.epoch, - "release": pkg.release, - "arch": pkg.arch, - "build_time": api_time(pkg.buildtime), - "changelog": "CHANGELOG_NEEDED", # XXX Not in hawkey.Package - "build_config_ref": "BUILD_CONFIG_REF", - "build_env_ref": "BUILD_ENV_REF", - "metadata": {}, - "source": {"license": pkg.license, - "version": pkg.version, - "source_ref": "SOURCE_REF", - "metadata": {}}}
- - -
[docs]def pkg_to_project_info(pkg): - """Extract the details from a hawkey.Package object - - :param pkg: hawkey.Package object with package details - :type pkg: hawkey.Package - :returns: A dict with the project details, as well as epoch, release, arch, build_time, changelog, ... - :rtype: dict - - metadata entries are hard-coded to {} - """ - return {"name": pkg.name, - "summary": pkg.summary, - "description": pkg.description, - "homepage": pkg.url, - "upstream_vcs": "UPSTREAM_VCS", - "builds": [pkg_to_build(pkg)]}
- - -
[docs]def pkg_to_dep(pkg): - """Extract the info from a hawkey.Package object - - :param pkg: A hawkey.Package object - :type pkg: hawkey.Package - :returns: A dict with name, epoch, version, release, arch - :rtype: dict - """ - return {"name": pkg.name, - "epoch": pkg.epoch, - "version": pkg.version, - "release": pkg.release, - "arch": pkg.arch}
- - -
[docs]def proj_to_module(proj): - """Extract the name from a project_info dict - - :param proj: dict with project details - :type proj: dict - :returns: A dict with name, and group_type - :rtype: dict - - group_type is hard-coded to "rpm" - """ - return {"name": proj["name"], - "group_type": "rpm"}
- - -
[docs]def dep_evra(dep): - """Return the epoch:version-release.arch for the dep - - :param dep: dependency dict - :type dep: dict - :returns: epoch:version-release.arch - :rtype: str - """ - if dep["epoch"] == 0: - return dep["version"]+"-"+dep["release"]+"."+dep["arch"] - else: - return str(dep["epoch"])+":"+dep["version"]+"-"+dep["release"]+"."+dep["arch"]
- -
[docs]def dep_nevra(dep): - """Return the name-epoch:version-release.arch""" - return dep["name"]+"-"+dep_evra(dep)
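The two helpers above are pure string formatting over a dependency dict; a self-contained sketch with the zero-epoch special case:

```python
def dep_evra(dep):
    """epoch:version-release.arch, omitting a zero epoch."""
    if dep["epoch"] == 0:
        return "%s-%s.%s" % (dep["version"], dep["release"], dep["arch"])
    return "%s:%s-%s.%s" % (dep["epoch"], dep["version"], dep["release"], dep["arch"])

def dep_nevra(dep):
    """name-epoch:version-release.arch"""
    return dep["name"] + "-" + dep_evra(dep)

# Example values only
bash = {"name": "bash", "epoch": 0, "version": "5.0", "release": "2.fc34", "arch": "x86_64"}
print(dep_nevra(bash))   # bash-5.0-2.fc34.x86_64
```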
- - -
[docs]def projects_list(dbo): - """Return a list of projects - - :param dbo: dnf base object - :type dbo: dnf.Base - :returns: List of project info dicts with name, summary, description, homepage, upstream_vcs - :rtype: list of dicts - """ - return projects_info(dbo, None)
- - -
[docs]def projects_info(dbo, project_names): - """Return details about specific projects - - :param dbo: dnf base object - :type dbo: dnf.Base - :param project_names: List of names of projects to get info about - :type project_names: str - :returns: List of project info dicts with pkg_to_project as well as epoch, version, release, etc. - :rtype: list of dicts - - If project_names is None it will return the full list of available packages - """ - if project_names: - pkgs = dbo.sack.query().available().filter(name__glob=project_names) - else: - pkgs = dbo.sack.query().available() - - # iterate over pkgs - # - if pkg.name isn't in the results yet, add pkg_to_project_info in sorted position - # - if pkg.name is already in results, get its builds. If the build for pkg is different - # in any way (version, arch, etc.) add it to the entry's builds list. If it is the same, - # skip it. - results = [] - results_names = {} - for p in pkgs: - if p.name.lower() not in results_names: - idx = insort_left(results, pkg_to_project_info(p), key=lambda p: p["name"].lower()) - results_names[p.name.lower()] = idx - else: - build = pkg_to_build(p) - if build not in results[results_names[p.name.lower()]]["builds"]: - results[results_names[p.name.lower()]]["builds"].append(build) - - return results
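The dedup-while-sorting loop above uses pylorax's own ``insort_left`` with a key function; the same pattern can be sketched with the stdlib ``bisect`` module. ``SimpleNamespace`` stands in for ``hawkey.Package``, and only name/version are tracked — a simplification, not the real implementation:

```python
import bisect
from types import SimpleNamespace

def projects_info_sketch(pkgs):
    """Group packages by case-insensitive name, keeping results sorted.

    Simplified stand-in for projects_info(): each result holds the
    project name and a list of distinct builds (here, just versions).
    """
    results = []   # sorted list of {"name": ..., "builds": [...]}
    keys = []      # parallel list of lowercase names for bisect
    index = {}     # lowercase name -> position in results
    for p in pkgs:
        key = p.name.lower()
        if key not in index:
            pos = bisect.bisect_left(keys, key)
            keys.insert(pos, key)
            results.insert(pos, {"name": p.name, "builds": [p.version]})
            # positions after the insert shift, so rebuild the index
            index = {k: i for i, k in enumerate(keys)}
        elif p.version not in results[index[key]]["builds"]:
            results[index[key]]["builds"].append(p.version)
    return results

pkgs = [SimpleNamespace(name="zsh", version="5.8"),
        SimpleNamespace(name="bash", version="5.0"),
        SimpleNamespace(name="bash", version="5.1")]
print([r["name"] for r in projects_info_sketch(pkgs)])   # ['bash', 'zsh']
```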
- -def _depsolve(dbo, projects, groups): - """Add projects to a new transaction - - :param dbo: dnf base object - :type dbo: dnf.Base - :param projects: The projects and version globs to find the dependencies for - :type projects: List of tuples - :param groups: The groups to include in dependency solving - :type groups: List of str - :returns: None - :rtype: None - :raises: ProjectsError if there was a problem installing something - """ - # This resets the transaction and updates the cache. - # It is important that the cache always be synchronized because Anaconda will grab its own copy - # and if that is different the NEVRAs will not match and the build will fail. - dbo.reset(goal=True) - install_errors = [] - for name in groups: - try: - dbo.group_install(name, ["mandatory", "default"]) - except dnf.exceptions.MarkingError as e: - install_errors.append(("Group %s" % (name), str(e))) - - for name, version in projects: - # Find the best package matching the name + version glob - # dnf can return multiple packages if it is in more than 1 repository - query = dbo.sack.query().filterm(provides__glob=name) - if version: - query.filterm(version__glob=version) - - query.filterm(latest=1) - if not query: - install_errors.append(("%s-%s" % (name, version), "No match")) - continue - sltr = dnf.selector.Selector(dbo.sack).set(pkg=query) - - # NOTE: dnf says in near future there will be a "goal" attribute of Base class - # so yes, we're using a 'private' attribute here on purpose and with permission. - dbo._goal.install(select=sltr, optional=False) - - if install_errors: - raise ProjectsError("The following package(s) had problems: %s" % ",".join(["%s (%s)" % (pattern, err) for pattern, err in install_errors])) - -
[docs]def projects_depsolve(dbo, projects, groups): - """Return the dependencies for a list of projects - - :param dbo: dnf base object - :type dbo: dnf.Base - :param projects: The projects to find the dependencies for - :type projects: List of Strings - :param groups: The groups to include in dependency solving - :type groups: List of str - :returns: NEVRA's of the project and its dependencies - :rtype: list of dicts - :raises: ProjectsError if there was a problem installing something - """ - _depsolve(dbo, projects, groups) - - try: - dbo.resolve() - except dnf.exceptions.DepsolveError as e: - raise ProjectsError("There was a problem depsolving %s: %s" % (projects, str(e))) - - if len(dbo.transaction) == 0: - return [] - - return sorted(map(pkg_to_dep, dbo.transaction.install_set), key=lambda p: p["name"].lower())
- - -
[docs]def estimate_size(packages, block_size=6144): - """Estimate the installed size of a package list - - :param packages: The packages to be installed - :type packages: list of hawkey.Package objects - :param block_size: The block size to use for rounding up file sizes. - :type block_size: int - :returns: The estimated size of installed packages - :rtype: int - - Estimating actual requirements is difficult without the actual file sizes, which - dnf doesn't provide access to. So use the file count and block size to estimate - a minimum size for each package. - """ - installed_size = 0 - for p in packages: - installed_size += len(p.files) * block_size - installed_size += p.installsize - return installed_size
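The estimate above rounds every file up to one block and adds the package's own installed size. A self-contained sketch with ``SimpleNamespace`` objects standing in for ``hawkey.Package`` (only ``.files`` and ``.installsize`` are consulted):

```python
from types import SimpleNamespace

def estimate_size(packages, block_size=6144):
    """Estimate installed size: one block per file, plus the package's
    reported installed size."""
    installed_size = 0
    for p in packages:
        installed_size += len(p.files) * block_size
        installed_size += p.installsize
    return installed_size

# Stand-in packages with example file lists and sizes
pkgs = [SimpleNamespace(files=["/usr/bin/a", "/usr/bin/b"], installsize=1000),
        SimpleNamespace(files=["/etc/c"], installsize=500)]
print(estimate_size(pkgs))   # 2*6144 + 1000 + 1*6144 + 500 = 19932
```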
- - -
[docs]def projects_depsolve_with_size(dbo, projects, groups, with_core=True): - """Return the dependencies and installed size for a list of projects - - :param dbo: dnf base object - :type dbo: dnf.Base - :param project_names: The projects to find the dependencies for - :type project_names: List of Strings - :param groups: The groups to include in dependency solving - :type groups: List of str - :returns: installed size and a list of NEVRA's of the project and its dependencies - :rtype: tuple of (int, list of dicts) - :raises: ProjectsError if there was a problem installing something - """ - _depsolve(dbo, projects, groups) - - if with_core: - dbo.group_install("core", ['mandatory', 'default', 'optional']) - - try: - dbo.resolve() - except dnf.exceptions.DepsolveError as e: - raise ProjectsError("There was a problem depsolving %s: %s" % (projects, str(e))) - - if len(dbo.transaction) == 0: - return (0, []) - - installed_size = estimate_size(dbo.transaction.install_set) - deps = sorted(map(pkg_to_dep, dbo.transaction.install_set), key=lambda p: p["name"].lower()) - return (installed_size, deps)
- - -
[docs]def modules_list(dbo, module_names): - """Return a list of modules - - :param dbo: dnf base object - :type dbo: dnf.Base - :param module_names: Names of the modules to list, or None for all - :type module_names: list of str - :returns: List of module information dicts - :rtype: list of dicts - - Modules don't exist in RHEL7 so this only returns projects - and sets the type to "rpm" - - """ - # TODO - Figure out what to do with this for Fedora 'modules' - return list(map(proj_to_module, projects_info(dbo, module_names)))
- -
[docs]def modules_info(dbo, module_names): - """Return details about a module, including dependencies - - :param dbo: dnf base object - :type dbo: dnf.Base - :param module_names: Names of the modules to get info about - :type module_names: str - :returns: List of dicts with module details and dependencies. - :rtype: list of dicts - """ - modules = projects_info(dbo, module_names) - - # Add the dependency info to each one - for module in modules: - module["dependencies"] = projects_depsolve(dbo, [(module["name"], "*.*")], []) - - return modules
- -
[docs]def dnf_repo_to_file_repo(repo): - """Return a string representation of a DNF Repo object suitable for writing to a .repo file - - :param repo: DNF Repository - :type repo: dnf.RepoDict - :returns: A string - :rtype: str - - The DNF Repo.dump() function does not produce a string that can be used as a dnf .repo file, - it outputs baseurl and gpgkey as python lists which DNF cannot read. So do this manually with - only the attributes we care about. - """ - repo_str = "[%s]\nname = %s\n" % (repo.id, repo.name) - if repo.metalink: - repo_str += "metalink = %s\n" % repo.metalink - elif repo.mirrorlist: - repo_str += "mirrorlist = %s\n" % repo.mirrorlist - elif repo.baseurl: - repo_str += "baseurl = %s\n" % repo.baseurl[0] - else: - raise RuntimeError("Repo has no baseurl, metalink, or mirrorlist") - - # proxy is optional - if repo.proxy: - repo_str += "proxy = %s\n" % repo.proxy - - repo_str += "sslverify = %s\n" % repo.sslverify - repo_str += "gpgcheck = %s\n" % repo.gpgcheck - if repo.gpgkey: - repo_str += "gpgkey = %s\n" % ",".join(repo.gpgkey) - - if repo.skip_if_unavailable: - repo_str += "skip_if_unavailable=1\n" - - return repo_str
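The serialization above can be exercised without a real DNF Repo object; a trimmed sketch (only the url-selection branches, ``SimpleNamespace`` stands in for the repo, and the metalink URL is an example):

```python
from types import SimpleNamespace

def repo_to_file(repo):
    """Emit the [id]/name/url lines of a .repo file (trimmed sketch of
    dnf_repo_to_file_repo; `repo` is a stand-in for a dnf repo object)."""
    repo_str = "[%s]\nname = %s\n" % (repo.id, repo.name)
    if repo.metalink:
        repo_str += "metalink = %s\n" % repo.metalink
    elif repo.mirrorlist:
        repo_str += "mirrorlist = %s\n" % repo.mirrorlist
    elif repo.baseurl:
        repo_str += "baseurl = %s\n" % repo.baseurl[0]
    else:
        raise RuntimeError("Repo has no baseurl, metalink, or mirrorlist")
    repo_str += "gpgcheck = %s\n" % repo.gpgcheck
    return repo_str

repo = SimpleNamespace(id="fedora", name="Fedora $releasever - $basearch",
                       metalink="https://mirrors.fedoraproject.org/metalink?repo=fedora-34",
                       mirrorlist=None, baseurl=[], gpgcheck=True)
print(repo_to_file(repo))
```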
- -
[docs]def repo_to_source(repo, system_source, api=1): - """Return a Weldr Source dict created from the DNF Repository - - :param repo: DNF Repository - :type repo: dnf.RepoDict - :param system_source: True if this source is an immutable system source - :type system_source: bool - :param api: Select which api version of the dict to return (default 1) - :type api: int - :returns: A dict with Weldr Source fields filled in - :rtype: dict - - Example:: - - { - "check_gpg": true, - "check_ssl": true, - "gpgkey_urls": [ - "file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-28-x86_64" - ], - "id": "fedora", - "name": "Fedora $releasever - $basearch", - "proxy": "http://proxy.brianlane.com:8123", - "system": true, - "type": "yum-metalink", - "url": "https://mirrors.fedoraproject.org/metalink?repo=fedora-28&arch=x86_64" - } - - The ``name`` field has changed in v1 of the API. - In v0 of the API ``name`` is the repo.id, in v1 it is the repo.name and a new field, - ``id`` has been added for the repo.id - - """ - if api==0: - source = {"name": repo.id, "system": system_source} - else: - source = {"id": repo.id, "name": repo.name, "system": system_source} - if repo.baseurl: - source["url"] = repo.baseurl[0] - source["type"] = "yum-baseurl" - elif repo.metalink: - source["url"] = repo.metalink - source["type"] = "yum-metalink" - elif repo.mirrorlist: - source["url"] = repo.mirrorlist - source["type"] = "yum-mirrorlist" - else: - raise RuntimeError("Repo has no baseurl, metalink, or mirrorlist") - - # proxy is optional - if repo.proxy: - source["proxy"] = repo.proxy - - if not repo.sslverify: - source["check_ssl"] = False - else: - source["check_ssl"] = True - - if not repo.gpgcheck: - source["check_gpg"] = False - else: - source["check_gpg"] = True - - if repo.gpgkey: - source["gpgkey_urls"] = list(repo.gpgkey) - - return source
- -
[docs]def source_to_repodict(source): - """Return a tuple suitable for use with dnf.add_new_repo - - :param source: A Weldr source dict - :type source: dict - :returns: A tuple of dnf.Repo attributes - :rtype: (str, list, dict) - - Return a tuple with (id, baseurl|(), kwargs) that can be used - with dnf.repos.add_new_repo - """ - kwargs = {} - if "id" in source: - # This is an API v1 source definition - repoid = source["id"] - if "name" in source: - kwargs["name"] = source["name"] - else: - repoid = source["name"] - - # This will allow errors to be raised so we can catch them - # without this they are logged, but the repo is silently disabled - kwargs["skip_if_unavailable"] = False - - if source["type"] == "yum-baseurl": - baseurl = [source["url"]] - elif source["type"] == "yum-metalink": - kwargs["metalink"] = source["url"] - baseurl = () - elif source["type"] == "yum-mirrorlist": - kwargs["mirrorlist"] = source["url"] - baseurl = () - - if "proxy" in source: - kwargs["proxy"] = source["proxy"] - - if source["check_ssl"]: - kwargs["sslverify"] = True - else: - kwargs["sslverify"] = False - - if source["check_gpg"]: - kwargs["gpgcheck"] = True - else: - kwargs["gpgcheck"] = False - - if "gpgkey_urls" in source: - kwargs["gpgkey"] = tuple(source["gpgkey_urls"]) - - return (repoid, baseurl, kwargs)
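The dict-to-tuple mapping above is pure data shuffling, so it can be sketched and run standalone. This is a simplified version (no gpgkey handling, proxy omitted); the source dict values are examples:

```python
def source_to_repodict(source):
    """Map a Weldr source dict to (repoid, baseurl, kwargs) — the shape
    dnf.repos.add_new_repo expects. Simplified sketch."""
    # Raise errors instead of silently disabling an unreachable repo
    kwargs = {"skip_if_unavailable": False}
    # API v1 sources carry an "id"; v0 sources only have "name"
    repoid = source.get("id", source.get("name"))
    baseurl = ()
    if source["type"] == "yum-baseurl":
        baseurl = [source["url"]]
    elif source["type"] == "yum-metalink":
        kwargs["metalink"] = source["url"]
    elif source["type"] == "yum-mirrorlist":
        kwargs["mirrorlist"] = source["url"]
    kwargs["sslverify"] = source["check_ssl"]
    kwargs["gpgcheck"] = source["check_gpg"]
    return (repoid, baseurl, kwargs)

src = {"id": "custom", "type": "yum-baseurl",
       "url": "https://example.com/repo/", "check_ssl": True, "check_gpg": False}
repoid, baseurl, kwargs = source_to_repodict(src)
print(repoid, baseurl)   # custom ['https://example.com/repo/']
```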
- - -
[docs]def source_to_repo(source, dnf_conf): - """Return a dnf Repo object created from a source dict - - :param source: A Weldr source dict - :type source: dict - :param dnf_conf: The dnf Config object - :type dnf_conf: dnf.conf - :returns: A dnf Repo object - :rtype: dnf.Repo - - Example:: - - { - "check_gpg": True, - "check_ssl": True, - "gpgkey_urls": [ - "file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-28-x86_64" - ], - "id": "fedora", - "name": "Fedora $releasever - $basearch", - "proxy": "http://proxy.brianlane.com:8123", - "system": True - "type": "yum-metalink", - "url": "https://mirrors.fedoraproject.org/metalink?repo=fedora-28&arch=x86_64" - } - - If the ``id`` field is included it is used for the repo id, otherwise ``name`` is used. - v0 of the API only used ``name``, v1 added the distinction between ``id`` and ``name``. - """ - repoid, baseurl, kwargs = source_to_repodict(source) - repo = dnf.repo.Repo(repoid, dnf_conf) - if baseurl: - repo.baseurl = baseurl - - # Apply the rest of the kwargs to the Repo object - for k, v in kwargs.items(): - setattr(repo, k, v) - - repo.enable() - - return repo
- -
[docs]def get_source_ids(source_path): - """Return a list of the source ids in a file - - :param source_path: Full path and filename of the source (yum repo) file - :type source_path: str - :returns: A list of source id strings - :rtype: list of str - """ - if not os.path.exists(source_path): - return [] - - cfg = ConfigParser() - cfg.read(source_path) - return cfg.sections()
- -
[docs]def get_repo_sources(source_glob): - """Return a list of sources from a directory of yum repositories - - :param source_glob: A glob to use to match the source files, including full path - :type source_glob: str - :returns: A list of the source ids in all of the matching files - :rtype: list of str - """ - sources = [] - for f in glob(source_glob): - sources.extend(get_source_ids(f)) - return sources
- -
[docs]def delete_repo_source(source_glob, source_id): - """Delete a source from a repo file - - :param source_glob: A glob of the repo sources to search - :type source_glob: str - :param source_id: The repo id to delete - :type source_id: str - :returns: None - :raises: ProjectsError if there was a problem - - A repo file may have multiple sources in it, delete only the selected source. - If it is the last one in the file, delete the file. - - WARNING: This will delete ANY source, the caller needs to ensure that a system - source_id isn't passed to it. - """ - found = False - for f in glob(source_glob): - try: - cfg = ConfigParser() - cfg.read(f) - if source_id in cfg.sections(): - found = True - cfg.remove_section(source_id) - # If there are other sections, rewrite the file without the deleted one - if len(cfg.sections()) > 0: - with open(f, "w") as cfg_file: - cfg.write(cfg_file) - else: - # No sections left, just delete the file - os.unlink(f) - except Exception as e: - raise ProjectsError("Problem deleting repo source %s: %s" % (source_id, str(e))) - if not found: - raise ProjectsError("source %s not found" % source_id)
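The delete-one-section-or-the-whole-file pattern above is plain ``ConfigParser`` work; a runnable sketch against a temporary file (file contents are examples):

```python
import os
import tempfile
from configparser import ConfigParser

def delete_source(path, source_id):
    """Drop one section from a repo file; rewrite the file if other
    sections remain, delete it if that was the last one. Sketch of the
    per-file logic inside delete_repo_source()."""
    cfg = ConfigParser()
    cfg.read(path)
    if source_id not in cfg.sections():
        raise RuntimeError("source %s not found" % source_id)
    cfg.remove_section(source_id)
    if cfg.sections():
        with open(path, "w") as f:
            cfg.write(f)
    else:
        os.unlink(path)

with tempfile.TemporaryDirectory() as tmp:
    repo_file = os.path.join(tmp, "custom.repo")
    with open(repo_file, "w") as f:
        f.write("[one]\nbaseurl = https://example.com/one/\n"
                "[two]\nbaseurl = https://example.com/two/\n")
    delete_source(repo_file, "one")   # "two" remains, file is rewritten
    assert os.path.exists(repo_file)
    delete_source(repo_file, "two")   # last section gone, file removed
    assert not os.path.exists(repo_file)
```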
- -
[docs]def new_repo_source(dbo, repoid, source, repo_dir): - """Add a new repo source from a Weldr source dict - - :param dbo: dnf base object - :type dbo: dnf.Base - :param repoid: The repo id (API v0 uses the name, v1 uses the id) - :type repoid: str - :param source: A Weldr source dict - :type source: dict - :returns: None - :raises: ... - - Make sure access to the dbo has been locked before calling this. - The `repoid` parameter will be the 'name' field for API v0, and the 'id' field for API v1 - - DNF variables will be substituted at load time, and on restart. - """ - try: - # Remove it from the RepoDict (NOTE that this isn't explicitly supported by the DNF API) - # If this repo already exists, delete it and replace it with the new one - repos = list(r.id for r in dbo.repos.iter_enabled()) - if repoid in repos: - del dbo.repos[repoid] - - # Add the repo and substitute any dnf variables - _, baseurl, kwargs = source_to_repodict(source) - log.debug("repoid=%s, baseurl=%s, kwargs=%s", repoid, baseurl, kwargs) - r = dbo.repos.add_new_repo(repoid, dbo.conf, baseurl, **kwargs) - r.enable() - - log.info("Updating repository metadata after adding %s", repoid) - dbo.fill_sack(load_system_repo=False) - dbo.read_comps() - - # Remove any previous sources with this id, ignore it if it isn't found - try: - delete_repo_source(joinpaths(repo_dir, "*.repo"), repoid) - except ProjectsError: - pass - - # Make sure the source id can't contain a path traversal by taking the basename - source_path = joinpaths(repo_dir, os.path.basename("%s.repo" % repoid)) - # Write the un-substituted version of the repo to disk - with open(source_path, "w") as f: - repo = source_to_repo(source, dbo.conf) - f.write(dnf_repo_to_file_repo(repo)) - except Exception as e: - log.error("(new_repo_source) adding %s failed: %s", repoid, str(e)) - - # Cleanup the mess, if loading it failed we don't want to leave it in memory - repos = list(r.id for r in dbo.repos.iter_enabled()) - if repoid in repos: - del dbo.repos[repoid] - - log.info("Updating repository metadata after adding %s failed", repoid) - dbo.fill_sack(load_system_repo=False) - dbo.read_comps() - - raise
\ No newline at end of file
diff --git a/docs/html/_modules/pylorax/api/queue.html b/docs/html/_modules/pylorax/api/queue.html
deleted file mode 100644
index 545d0d4e..00000000
--- a/docs/html/_modules/pylorax/api/queue.html
+++ /dev/null
@@ -1,1063 +0,0 @@
- pylorax.api.queue — Lorax 35.0 documentation

Source code for pylorax.api.queue

-# Copyright (C) 2018 Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-""" Functions to monitor compose queue and run anaconda"""
-import logging
-log = logging.getLogger("pylorax")
-program_log = logging.getLogger("program")
-dnf_log = logging.getLogger("dnf")
-
-import os
-import grp
-from glob import glob
-import multiprocessing as mp
-import pwd
-import shutil
-import subprocess
-from subprocess import Popen, PIPE
-import time
-
-from pylorax import find_templates
-from pylorax.api.compose import move_compose_results
-from pylorax.api.recipes import recipe_from_file
-from pylorax.api.timestamp import TS_CREATED, TS_STARTED, TS_FINISHED, write_timestamp, timestamp_dict
-import pylorax.api.toml as toml
-from pylorax.base import DataHolder
-from pylorax.creator import run_creator
-from pylorax.sysutils import joinpaths, read_tail
-
-from lifted.queue import create_upload, get_uploads, ready_upload, delete_upload
-
-
[docs]def check_queues(cfg): - """Check to make sure the new and run queue symlinks are correct - - :param cfg: Configuration settings - :type cfg: DataHolder - - Also check all of the existing results and make sure any with WAITING - set in STATUS have a symlink in queue/new/ - """ - # Remove broken symlinks from the new and run queues - queue_symlinks = glob(joinpaths(cfg.composer_dir, "queue/new/*")) + \ - glob(joinpaths(cfg.composer_dir, "queue/run/*")) - for link in queue_symlinks: - if not os.path.isdir(os.path.realpath(link)): - log.info("Removing broken symlink %s", link) - os.unlink(link) - - # Write FAILED to the STATUS of any run queue symlinks and remove them - for link in glob(joinpaths(cfg.composer_dir, "queue/run/*")): - log.info("Setting build %s to FAILED, and removing symlink from queue/run/", os.path.basename(link)) - open(joinpaths(link, "STATUS"), "w").write("FAILED\n") - os.unlink(link) - - # Check results STATUS messages - # - If STATUS is missing, set it to FAILED - # - RUNNING should be changed to FAILED - # - WAITING should have a symlink in the new queue - for link in glob(joinpaths(cfg.composer_dir, "results/*")): - if not os.path.exists(joinpaths(link, "STATUS")): - open(joinpaths(link, "STATUS"), "w").write("FAILED\n") - continue - - status = open(joinpaths(link, "STATUS")).read().strip() - if status == "RUNNING": - log.info("Setting build %s to FAILED", os.path.basename(link)) - open(joinpaths(link, "STATUS"), "w").write("FAILED\n") - elif status == "WAITING": - if not os.path.islink(joinpaths(cfg.composer_dir, "queue/new/", os.path.basename(link))): - log.info("Creating missing symlink to new build %s", os.path.basename(link)) - os.symlink(link, joinpaths(cfg.composer_dir, "queue/new/", os.path.basename(link)))
- -
[docs]def start_queue_monitor(cfg, uid, gid): - """Start the queue monitor as a mp process - - :param cfg: Configuration settings - :type cfg: ComposerConfig - :param uid: User ID that owns the queue - :type uid: int - :param gid: Group ID that owns the queue - :type gid: int - :returns: None - """ - lib_dir = cfg.get("composer", "lib_dir") - share_dir = cfg.get("composer", "share_dir") - tmp = cfg.get("composer", "tmp") - monitor_cfg = DataHolder(cfg=cfg, composer_dir=lib_dir, share_dir=share_dir, uid=uid, gid=gid, tmp=tmp) - p = mp.Process(target=monitor, args=(monitor_cfg,)) - p.daemon = True - p.start()
- -
[docs]def monitor(cfg): - """Monitor the queue for new compose requests - - :param cfg: Configuration settings - :type cfg: DataHolder - :returns: Does not return - - The queue has 2 subdirectories, new and run. When a compose is ready to be run - a symlink to the uniquely named results directory should be placed in ./queue/new/ - - When it is ready to be run (it is checked every 30 seconds or after a previous - compose is finished) the symlink will be moved into ./queue/run/ and a STATUS file - will be created in the results directory. - - STATUS can contain one of: WAITING, RUNNING, FINISHED, FAILED - - If the system is restarted while a compose is running it will move any old symlinks - from ./queue/run/ to ./queue/new/ and rerun them. - """ - def queue_sort(uuid): - """Sort the queue entries by their mtime, not their names""" - return os.stat(joinpaths(cfg.composer_dir, "queue/new", uuid)).st_mtime - - check_queues(cfg) - while True: - uuids = sorted(os.listdir(joinpaths(cfg.composer_dir, "queue/new")), key=queue_sort) - - # Pick the oldest and move it into ./run/ - if not uuids: - # No composes left to process, sleep for a bit - time.sleep(5) - else: - src = joinpaths(cfg.composer_dir, "queue/new", uuids[0]) - dst = joinpaths(cfg.composer_dir, "queue/run", uuids[0]) - try: - os.rename(src, dst) - except OSError: - # The symlink may vanish if uuid_cancel() has been called - continue - - # The anaconda logs are also copied into ./anaconda/ in this directory - os.makedirs(joinpaths(dst, "logs"), exist_ok=True) - - def open_handler(loggers, file_name): - handler = logging.FileHandler(joinpaths(dst, "logs", file_name)) - handler.setLevel(logging.DEBUG) - handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s: %(message)s")) - for logger in loggers: - logger.addHandler(handler) - return (handler, loggers) - - loggers = (((log, program_log, dnf_log), "combined.log"), - ((log,), "composer.log"), - ((program_log,), "program.log"), - ((dnf_log,), "dnf.log")) - handlers = [open_handler(loggers, file_name) for loggers, file_name in loggers] - - log.info("Starting new compose: %s", dst) - open(joinpaths(dst, "STATUS"), "w").write("RUNNING\n") - - try: - make_compose(cfg, os.path.realpath(dst)) - log.info("Finished building %s, results are in %s", dst, os.path.realpath(dst)) - open(joinpaths(dst, "STATUS"), "w").write("FINISHED\n") - write_timestamp(dst, TS_FINISHED) - - upload_cfg = cfg.cfg["upload"] - for upload in get_uploads(upload_cfg, uuid_get_uploads(cfg.cfg, uuids[0])): - log.info("Readying upload %s", upload.uuid) - uuid_ready_upload(cfg.cfg, uuids[0], upload.uuid) - except Exception: - import traceback - log.error("traceback: %s", traceback.format_exc()) - -# TODO - Write the error message to an ERROR-LOG file to include with the status -# log.error("Error running compose: %s", e) - open(joinpaths(dst, "STATUS"), "w").write("FAILED\n") - write_timestamp(dst, TS_FINISHED) - finally: - for handler, loggers in handlers: - for logger in loggers: - logger.removeHandler(handler) - handler.close() - - os.unlink(dst)
- -
[docs]def make_compose(cfg, results_dir): - """Run anaconda with the final-kickstart.ks from results_dir - - :param cfg: Configuration settings - :type cfg: DataHolder - :param results_dir: The directory containing the metadata and results for the build - :type results_dir: str - :returns: Nothing - :raises: May raise various exceptions - - This takes the final-kickstart.ks, and the settings in config.toml and runs Anaconda - in no-virt mode (directly on the host operating system). Exceptions should be caught - at the higer level. - - If there is a failure, the build artifacts will be cleaned up, and any logs will be - moved into logs/anaconda/ and their ownership will be set to the user from the cfg - object. - """ - - # Check on the ks's presence - ks_path = joinpaths(results_dir, "final-kickstart.ks") - if not os.path.exists(ks_path): - raise RuntimeError("Missing kickstart file at %s" % ks_path) - - # Load the compose configuration - cfg_path = joinpaths(results_dir, "config.toml") - if not os.path.exists(cfg_path): - raise RuntimeError("Missing config.toml for %s" % results_dir) - cfg_dict = toml.loads(open(cfg_path, "r").read()) - - # The keys in cfg_dict correspond to the arguments setup in livemedia-creator - # keys that define what to build should be setup in compose_args, and keys with - # defaults should be setup here. 
- - # Make sure that image_name contains no path components - cfg_dict["image_name"] = os.path.basename(cfg_dict["image_name"]) - - # Only support novirt installation, set some other defaults - cfg_dict["no_virt"] = True - cfg_dict["disk_image"] = None - cfg_dict["fs_image"] = None - cfg_dict["keep_image"] = False - cfg_dict["domacboot"] = False - cfg_dict["anaconda_args"] = "" - cfg_dict["proxy"] = "" - cfg_dict["armplatform"] = "" - cfg_dict["squashfs_args"] = None - - cfg_dict["lorax_templates"] = find_templates(cfg.share_dir) - cfg_dict["tmp"] = cfg.tmp - # Use default args for dracut - cfg_dict["dracut_conf"] = None - cfg_dict["dracut_args"] = None - - # TODO How to support other arches? - cfg_dict["arch"] = None - - # Compose things in a temporary directory inside the results directory - cfg_dict["result_dir"] = joinpaths(results_dir, "compose") - os.makedirs(cfg_dict["result_dir"]) - - install_cfg = DataHolder(**cfg_dict) - - # Some kludges for the 99-copy-logs %post, failure in it will crash the build - for f in ["/tmp/NOSAVE_INPUT_KS", "/tmp/NOSAVE_LOGS"]: - open(f, "w") - - # Placing a CANCEL file in the results directory will make execWithRedirect send anaconda a SIGTERM - def cancel_build(): - return os.path.exists(joinpaths(results_dir, "CANCEL")) - - log.debug("cfg = %s", install_cfg) - try: - test_path = joinpaths(results_dir, "TEST") - write_timestamp(results_dir, TS_STARTED) - if os.path.exists(test_path): - # Pretend to run the compose - time.sleep(5) - try: - test_mode = int(open(test_path, "r").read()) - except Exception: - test_mode = 1 - if test_mode == 1: - raise RuntimeError("TESTING FAILED compose") - else: - open(joinpaths(results_dir, install_cfg.image_name), "w").write("TEST IMAGE") - else: - run_creator(install_cfg, cancel_func=cancel_build) - - # Extract the results of the compose into results_dir and cleanup the compose directory - move_compose_results(install_cfg, results_dir) - finally: - # Make sure any remaining temporary 
directories are removed (eg. if there was an exception) - for d in glob(joinpaths(cfg.tmp, "lmc-*")): - if os.path.isdir(d): - shutil.rmtree(d) - elif os.path.isfile(d): - os.unlink(d) - - # Make sure that everything under the results directory is owned by the user - user = pwd.getpwuid(cfg.uid).pw_name - group = grp.getgrgid(cfg.gid).gr_name - log.debug("Install finished, chowning results to %s:%s", user, group) - subprocess.call(["chown", "-R", "%s:%s" % (user, group), results_dir])
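make_compose() above forces ``image_name`` through ``os.path.basename()`` before using it. A minimal sketch of what that sanitization buys (the file names here are invented for illustration):

```python
import os

# basename() drops every path component, so a crafted image_name cannot
# escape the results directory via "../" or an absolute path.
sanitized = [os.path.basename(n)
             for n in ("disk.qcow2", "../../etc/passwd", "/tmp/evil.img")]
print(sanitized)  # ['disk.qcow2', 'passwd', 'evil.img']
```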
- -
[docs]def get_compose_type(results_dir): - """Return the type of composition. - - :param results_dir: The directory containing the metadata and results for the build - :type results_dir: str - :returns: The type of compose (eg. 'tar') - :rtype: str - :raises: RuntimeError if no kickstart template can be found. - """ - # Should only be 2 kickstarts, the final-kickstart.ks and the template - t = [os.path.basename(ks)[:-3] for ks in glob(joinpaths(results_dir, "*.ks")) - if "final-kickstart" not in ks] - if len(t) != 1: - raise RuntimeError("Cannot find ks template for build %s" % os.path.basename(results_dir)) - return t[0]
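The detection rule above (exactly one ``*.ks`` besides ``final-kickstart.ks``) can be exercised on its own. This sketch duplicates the glob/filter logic against a throwaway directory; the directory layout is illustrative, not taken from a real build:

```python
import os
import tempfile
from glob import glob

def compose_type_from_dir(results_dir):
    # Same rule as get_compose_type(): the single *.ks file that is not
    # final-kickstart.ks names the compose type ("qcow2.ks" -> "qcow2").
    t = [os.path.basename(ks)[:-3] for ks in glob(os.path.join(results_dir, "*.ks"))
         if "final-kickstart" not in ks]
    if len(t) != 1:
        raise RuntimeError("Cannot find ks template for build %s" % os.path.basename(results_dir))
    return t[0]

results_dir = tempfile.mkdtemp()
for name in ("final-kickstart.ks", "qcow2.ks"):
    open(os.path.join(results_dir, name), "w").close()
print(compose_type_from_dir(results_dir))  # qcow2
```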
- -
[docs]def compose_detail(cfg, results_dir, api=1): - """Return details about the build. - - :param cfg: Configuration settings (required for api=1) - :type cfg: ComposerConfig - :param results_dir: The directory containing the metadata and results for the build - :type results_dir: str - :param api: Select which api version of the dict to return (default 1) - :type api: int - :returns: A dictionary with details about the compose - :rtype: dict - :raises: IOError if it cannot read the directory, STATUS, or blueprint file. - - The following details are included in the dict: - - * id - The uuid of the composition - * queue_status - The final status of the composition (FINISHED or FAILED) - * compose_type - The type of output generated (tar, iso, etc.) - * blueprint - Blueprint name - * version - Blueprint version - * image_size - Size of the image, if finished. 0 otherwise. - * uploads - For API v1 details about uploading the image are included - - Various timestamps are also included in the dict. These are all Unix UTC timestamps. - It is possible for these timestamps to not always exist, in which case they will be - None in Python (or null in JSON). 
The following timestamps are included: - - * job_created - When the user submitted the compose - * job_started - Anaconda started running - * job_finished - Job entered FINISHED or FAILED state - """ - build_id = os.path.basename(os.path.abspath(results_dir)) - status = open(joinpaths(results_dir, "STATUS")).read().strip() - blueprint = recipe_from_file(joinpaths(results_dir, "blueprint.toml")) - - compose_type = get_compose_type(results_dir) - - image_path = get_image_name(results_dir)[1] - if status == "FINISHED" and os.path.exists(image_path): - image_size = os.stat(image_path).st_size - else: - image_size = 0 - - times = timestamp_dict(results_dir) - - detail = {"id": build_id, - "queue_status": status, - "job_created": times.get(TS_CREATED), - "job_started": times.get(TS_STARTED), - "job_finished": times.get(TS_FINISHED), - "compose_type": compose_type, - "blueprint": blueprint["name"], - "version": blueprint["version"], - "image_size": image_size, - } - - if api == 1: - # Get uploads for this build_id - upload_uuids = uuid_get_uploads(cfg, build_id) - summaries = [upload.summary() for upload in get_uploads(cfg["upload"], upload_uuids)] - detail["uploads"] = summaries - return detail
- -
[docs]def queue_status(cfg, api=1): - """Return details about what is in the queue. - - :param cfg: Configuration settings - :type cfg: ComposerConfig - :param api: Select which api version of the dict to return (default 1) - :type api: int - :returns: A list of the new composes, and a list of the running composes - :rtype: dict - - This returns a dict with 2 lists. "new" is the list of uuids that are waiting to be built, - and "run" has the uuids that are being built (currently limited to 1 at a time). - """ - queue_dir = joinpaths(cfg.get("composer", "lib_dir"), "queue") - new_queue = [os.path.realpath(p) for p in glob(joinpaths(queue_dir, "new/*"))] - run_queue = [os.path.realpath(p) for p in glob(joinpaths(queue_dir, "run/*"))] - - new_details = [] - for n in new_queue: - try: - d = compose_detail(cfg, n, api) - except IOError: - continue - new_details.append(d) - - run_details = [] - for r in run_queue: - try: - d = compose_detail(cfg, r, api) - except IOError: - continue - run_details.append(d) - - return { - "new": new_details, - "run": run_details - }
- -
[docs]def uuid_status(cfg, uuid, api=1): - """Return the details of a specific UUID compose - - :param cfg: Configuration settings - :type cfg: ComposerConfig - :param uuid: The UUID of the build - :type uuid: str - :param api: Select which api version of the dict to return (default 1) - :type api: int - :returns: Details about the build - :rtype: dict or None - - Returns the same dict as `compose_detail()` - """ - uuid_dir = joinpaths(cfg.get("composer", "lib_dir"), "results", uuid) - try: - return compose_detail(cfg, uuid_dir, api) - except IOError: - return None
- -
[docs]def build_status(cfg, status_filter=None, api=1): - """Return the details of finished or failed builds - - :param cfg: Configuration settings - :type cfg: ComposerConfig - :param status_filter: What builds to return. None == all, "FINISHED", or "FAILED" - :type status_filter: str - :param api: Select which api version of the dict to return (default 1) - :type api: int - :returns: A list of the build details (from compose_detail) - :rtype: list of dicts - - This returns a list of build details for each of the matching builds on the - system. It does not return the status of builds that have not been finished. - Use queue_status() for those. - """ - if status_filter: - status_filter = [status_filter] - else: - status_filter = ["FINISHED", "FAILED"] - - results = [] - result_dir = joinpaths(cfg.get("composer", "lib_dir"), "results") - for build in glob(result_dir + "/*"): - log.debug("Checking status of build %s", build) - - try: - status = open(joinpaths(build, "STATUS"), "r").read().strip() - if status in status_filter: - results.append(compose_detail(cfg, build, api)) - except IOError: - pass - return results
- -def _upload_list_path(cfg, uuid): - """Return the path to the UPLOADS file - - :param cfg: Configuration settings - :type cfg: ComposerConfig - :param uuid: The UUID of the build - :type uuid: str - :returns: Path to the UPLOADS file listing the build's associated uploads - :rtype: str - :raises: RuntimeError if the uuid is not found - """ - results_dir = joinpaths(cfg.get("composer", "lib_dir"), "results", uuid) - if not os.path.isdir(results_dir): - raise RuntimeError(f'"{uuid}" is not a valid build uuid!') - return joinpaths(results_dir, "UPLOADS") - -
[docs]def uuid_schedule_upload(cfg, uuid, provider_name, image_name, settings): - """Schedule an upload of an image - - :param cfg: Configuration settings - :type cfg: ComposerConfig - :param uuid: The UUID of the build - :type uuid: str - :param provider_name: The name of the cloud provider, e.g. "azure" - :type provider_name: str - :param image_name: Path of the image to upload - :type image_name: str - :param settings: Settings to use for the selected provider - :type settings: dict - :returns: uuid of the upload - :rtype: str - :raises: RuntimeError if the uuid is not a valid build uuid - """ - status = uuid_status(cfg, uuid) - if status is None: - raise RuntimeError(f'"{uuid}" is not a valid build uuid!') - - upload = create_upload(cfg["upload"], provider_name, image_name, settings) - uuid_add_upload(cfg, uuid, upload.uuid) - return upload.uuid
- -
[docs]def uuid_get_uploads(cfg, uuid): - """Return the list of uploads for a build uuid - - :param cfg: Configuration settings - :type cfg: ComposerConfig - :param uuid: The UUID of the build - :type uuid: str - :returns: The upload UUIDs associated with the build UUID - :rtype: frozenset - """ - try: - with open(_upload_list_path(cfg, uuid)) as uploads_file: - return frozenset(uploads_file.read().split()) - except FileNotFoundError: - return frozenset()
- -
[docs]def uuid_add_upload(cfg, uuid, upload_uuid): - """Add an upload UUID to a build - - :param cfg: Configuration settings - :type cfg: ComposerConfig - :param uuid: The UUID of the build - :type uuid: str - :param upload_uuid: The UUID of the upload - :type upload_uuid: str - :returns: None - :rtype: None - """ - if upload_uuid not in uuid_get_uploads(cfg, uuid): - with open(_upload_list_path(cfg, uuid), "a") as uploads_file: - print(upload_uuid, file=uploads_file) - status = uuid_status(cfg, uuid) - if status and status["queue_status"] == "FINISHED": - uuid_ready_upload(cfg, uuid, upload_uuid)
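The UPLOADS file that ``_upload_list_path()`` points at is a plain whitespace-separated list of upload UUIDs. A self-contained sketch of the read/append cycle used by ``uuid_get_uploads()`` and ``uuid_add_upload()`` (the UUID strings are invented):

```python
import os
import tempfile

# One upload UUID per line; reads split on whitespace into a frozenset,
# and appends skip UUIDs that are already recorded.
uploads_path = os.path.join(tempfile.mkdtemp(), "UPLOADS")

def get_uploads():
    try:
        with open(uploads_path) as f:
            return frozenset(f.read().split())
    except FileNotFoundError:
        return frozenset()

def add_upload(upload_uuid):
    if upload_uuid not in get_uploads():
        with open(uploads_path, "a") as f:
            print(upload_uuid, file=f)

add_upload("aaaa-1111")
add_upload("bbbb-2222")
add_upload("aaaa-1111")           # duplicate, silently ignored
print(sorted(get_uploads()))      # ['aaaa-1111', 'bbbb-2222']
```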
- -
[docs]def uuid_remove_upload(cfg, upload_uuid): - """Remove an upload UUID from the build - - :param cfg: Configuration settings - :type cfg: ComposerConfig - :param upload_uuid: The UUID of the upload - :type upload_uuid: str - :returns: None - :rtype: None - :raises: RuntimeError if the upload_uuid is not found - """ - for build_uuid in (os.path.basename(b) for b in glob(joinpaths(cfg.get("composer", "lib_dir"), "results/*"))): - uploads = uuid_get_uploads(cfg, build_uuid) - if upload_uuid not in uploads: - continue - - uploads = uploads - frozenset((upload_uuid,)) - with open(_upload_list_path(cfg, build_uuid), "w") as uploads_file: - for upload in uploads: - print(upload, file=uploads_file) - return - - raise RuntimeError(f"{upload_uuid} is not a valid upload id!")
- -
[docs]def uuid_ready_upload(cfg, uuid, upload_uuid): - """Set an upload to READY if the build is in FINISHED state - - :param cfg: Configuration settings - :type cfg: ComposerConfig - :param uuid: The UUID of the build - :type uuid: str - :param upload_uuid: The UUID of the upload - :type upload_uuid: str - :returns: None - :rtype: None - :raises: RuntimeError if the build uuid is invalid or not in FINISHED state. - """ - status = uuid_status(cfg, uuid) - if not status: - raise RuntimeError(f"{uuid} is not a valid build id!") - if status["queue_status"] != "FINISHED": - raise RuntimeError(f"Build {uuid} is not finished!") - _, image_path = uuid_image(cfg, uuid) - ready_upload(cfg["upload"], upload_uuid, image_path)
- -
[docs]def uuid_cancel(cfg, uuid): - """Cancel a build and delete its results - - :param cfg: Configuration settings - :type cfg: ComposerConfig - :param uuid: The UUID of the build - :type uuid: str - :returns: True if it was canceled and deleted - :rtype: bool - - Only call this if the build status is WAITING or RUNNING - """ - cancel_path = joinpaths(cfg.get("composer", "lib_dir"), "results", uuid, "CANCEL") - if os.path.exists(cancel_path): - log.info("Cancel has already been requested for %s", uuid) - return False - - # This status can change (and probably will) while it is in the middle of doing this: - # It can move from WAITING -> RUNNING or it can move from RUNNING -> FINISHED|FAILED - - # If it is in WAITING remove the symlink and then check to make sure it didn't show up - # in the run queue - queue_dir = joinpaths(cfg.get("composer", "lib_dir"), "queue") - uuid_new = joinpaths(queue_dir, "new", uuid) - if os.path.exists(uuid_new): - try: - os.unlink(uuid_new) - except OSError: - # The symlink may vanish if the queue monitor started the build - pass - uuid_run = joinpaths(queue_dir, "run", uuid) - if not os.path.exists(uuid_run): - # Make sure the build is still in the waiting state - status = uuid_status(cfg, uuid) - if status is None or status["queue_status"] == "WAITING": - # Successfully removed it before the build started - return uuid_delete(cfg, uuid) - - # At this point the build has probably started. Write to the CANCEL file. - open(cancel_path, "w").write("\n") - - # Wait for status to move to FAILED or FINISHED - started = time.time() - while True: - status = uuid_status(cfg, uuid) - if status is None or status["queue_status"] == "FAILED": - break - elif status is not None and status["queue_status"] == "FINISHED": - # The build finished successfully, no point in deleting it now - return False - - # Is this taking too long? Exit anyway and try to cleanup. 
- if time.time() > started + (10 * 60): - log.error("Failed to cancel the build of %s", uuid) - break - - time.sleep(5) - - # Remove the partial results - uuid_delete(cfg, uuid)
- -
[docs]def uuid_delete(cfg, uuid): - """Delete all of the results from a compose - - :param cfg: Configuration settings - :type cfg: ComposerConfig - :param uuid: The UUID of the build - :type uuid: str - :returns: True if it was deleted - :rtype: bool - :raises: This will raise an error if the delete failed - """ - uuid_dir = joinpaths(cfg.get("composer", "lib_dir"), "results", uuid) - if not uuid_dir or len(uuid_dir) < 10: - raise RuntimeError("Directory length is too short: %s" % uuid_dir) - - for upload in get_uploads(cfg["upload"], uuid_get_uploads(cfg, uuid)): - delete_upload(cfg["upload"], upload.uuid) - - shutil.rmtree(uuid_dir) - return True
- -
[docs]def uuid_info(cfg, uuid, api=1): - """Return information about the composition - - :param cfg: Configuration settings - :type cfg: ComposerConfig - :param uuid: The UUID of the build - :type uuid: str - :returns: dictionary of information about the composition or None - :rtype: dict - :raises: RuntimeError if there was a problem - - This will return a dict with the following fields populated: - - * id - The uuid of the composition - * config - containing the configuration settings used to run Anaconda - * blueprint - The depsolved blueprint used to generate the kickstart - * commit - The (local) git commit hash for the blueprint used - * deps - The NEVRA of all of the dependencies used in the composition - * compose_type - The type of output generated (tar, iso, etc.) - * queue_status - The final status of the composition (FINISHED or FAILED) - """ - uuid_dir = joinpaths(cfg.get("composer", "lib_dir"), "results", uuid) - if not os.path.exists(uuid_dir): - return None - - # Load the compose configuration - cfg_path = joinpaths(uuid_dir, "config.toml") - if not os.path.exists(cfg_path): - raise RuntimeError("Missing config.toml for %s" % uuid) - cfg_dict = toml.loads(open(cfg_path, "r").read()) - - frozen_path = joinpaths(uuid_dir, "frozen.toml") - if not os.path.exists(frozen_path): - raise RuntimeError("Missing frozen.toml for %s" % uuid) - frozen_dict = toml.loads(open(frozen_path, "r").read()) - - deps_path = joinpaths(uuid_dir, "deps.toml") - if not os.path.exists(deps_path): - raise RuntimeError("Missing deps.toml for %s" % uuid) - deps_dict = toml.loads(open(deps_path, "r").read()) - - details = compose_detail(cfg, uuid_dir, api) - - commit_path = joinpaths(uuid_dir, "COMMIT") - if not os.path.exists(commit_path): - raise RuntimeError("Missing commit hash for %s" % uuid) - commit_id = open(commit_path, "r").read().strip() - - info = {"id": uuid, - "config": cfg_dict, - "blueprint": frozen_dict, - "commit": commit_id, - "deps": deps_dict, - 
"compose_type": details["compose_type"], - "queue_status": details["queue_status"], - "image_size": details["image_size"], - } - if api == 1: - upload_uuids = uuid_get_uploads(cfg, uuid) - summaries = [upload.summary() for upload in get_uploads(cfg["upload"], upload_uuids)] - info["uploads"] = summaries - return info
- -
[docs]def uuid_tar(cfg, uuid, metadata=False, image=False, logs=False): - """Return a tar of the build data - - :param cfg: Configuration settings - :type cfg: ComposerConfig - :param uuid: The UUID of the build - :type uuid: str - :param metadata: Set to true to include all the metadata needed to reproduce the build - :type metadata: bool - :param image: Set to true to include the output image - :type image: bool - :param logs: Set to true to include the logs from the build - :type logs: bool - :returns: A stream of bytes from tar - :rtype: A generator - :raises: RuntimeError if there was a problem (eg. missing config file) - - This yields an uncompressed tar's data to the caller. It streams the selected - data to the caller by returning the Popen stdout from the tar process. - """ - uuid_dir = joinpaths(cfg.get("composer", "lib_dir"), "results", uuid) - if not os.path.exists(uuid_dir): - raise RuntimeError("%s is not a valid build_id" % uuid) - - # Load the compose configuration - cfg_path = joinpaths(uuid_dir, "config.toml") - if not os.path.exists(cfg_path): - raise RuntimeError("Missing config.toml for %s" % uuid) - cfg_dict = toml.loads(open(cfg_path, "r").read()) - image_name = cfg_dict["image_name"] - - def include_file(f): - if f.endswith("/logs"): - return logs - if f.endswith(image_name): - return image - return metadata - filenames = [os.path.basename(f) for f in glob(joinpaths(uuid_dir, "*")) if include_file(f)] - - tar = Popen(["tar", "-C", uuid_dir, "-cf-"] + filenames, stdout=PIPE) - return tar.stdout
- -
[docs]def uuid_image(cfg, uuid): - """Return the filename and full path of the build's image file - - :param cfg: Configuration settings - :type cfg: ComposerConfig - :param uuid: The UUID of the build - :type uuid: str - :returns: The image filename and full path - :rtype: tuple of strings - :raises: RuntimeError if there was a problem (eg. invalid uuid, missing config file) - """ - uuid_dir = joinpaths(cfg.get("composer", "lib_dir"), "results", uuid) - return get_image_name(uuid_dir)
- -
[docs]def get_image_name(uuid_dir): - """Return the filename and full path of the build's image file - - :param uuid_dir: The directory containing the metadata and results for the build - :type uuid_dir: str - :returns: The image filename and full path - :rtype: tuple of strings - :raises: RuntimeError if there was a problem (eg. invalid uuid, missing config file) - """ - uuid = os.path.basename(os.path.abspath(uuid_dir)) - if not os.path.exists(uuid_dir): - raise RuntimeError("%s is not a valid build_id" % uuid) - - # Load the compose configuration - cfg_path = joinpaths(uuid_dir, "config.toml") - if not os.path.exists(cfg_path): - raise RuntimeError("Missing config.toml for %s" % uuid) - cfg_dict = toml.loads(open(cfg_path, "r").read()) - image_name = cfg_dict["image_name"] - - return (image_name, joinpaths(uuid_dir, image_name))
- -
[docs]def uuid_log(cfg, uuid, size=1024): - """Return `size` KiB from the end of the most currently relevant log for a - given compose - - :param cfg: Configuration settings - :type cfg: ComposerConfig - :param uuid: The UUID of the build - :type uuid: str - :param size: Number of KiB to read. Default is 1024 - :type size: int - :returns: Up to `size` KiB from the end of the log - :rtype: str - :raises: RuntimeError if there was a problem (eg. no log file available) - - This function will return the end of either the anaconda log, the packaging - log, or the combined composer logs, depending on the progress of the - compose. It tries to return lines from the end of the log; it will attempt - to start on a line boundary, and it may return less than `size` KiB. - """ - uuid_dir = joinpaths(cfg.get("composer", "lib_dir"), "results", uuid) - if not os.path.exists(uuid_dir): - raise RuntimeError("%s is not a valid build_id" % uuid) - - # While a build is running the logs will be in /tmp/anaconda.log and when it - # has finished they will be in the results directory - status = uuid_status(cfg, uuid) - if status is None: - raise RuntimeError("Status is missing for %s" % uuid) - - def get_log_path(): - # Try to return the most relevant log at any given time during the - # compose. If the compose is not running, return the composer log. - anaconda_log = "/tmp/anaconda.log" - packaging_log = "/tmp/packaging.log" - combined_log = joinpaths(uuid_dir, "logs", "combined.log") - if status["queue_status"] != "RUNNING" or not os.path.isfile(anaconda_log): - return combined_log - if not os.path.isfile(packaging_log): - return anaconda_log - try: - anaconda_mtime = os.stat(anaconda_log).st_mtime - packaging_mtime = os.stat(packaging_log).st_mtime - # If the packaging log exists and its last message is at least 15 - # seconds newer than the anaconda log, return the packaging log. 
- if packaging_mtime > anaconda_mtime + 15: - return packaging_log - return anaconda_log - except OSError: - # Return the combined log if anaconda_log or packaging_log disappear - return combined_log - try: - tail = read_tail(get_log_path(), size) - except OSError as e: - raise RuntimeError("No log available.") from e - return tail
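``read_tail()`` is defined elsewhere in pylorax; the following is only a sketch (an assumption, not the actual implementation) of the line-boundary behavior the docstring above describes, returning up to ``size`` KiB from the end of a file:

```python
import io
import os
import tempfile

def tail_kib(path, size=1024):
    # Hypothetical stand-in for read_tail(): read the last `size` KiB and,
    # if the file is larger than the window, drop the first (possibly
    # partial) line so the result starts on a line boundary.
    with open(path, "rb") as f:
        f.seek(0, io.SEEK_END)
        end = f.tell()
        f.seek(max(0, end - size * 1024))
        data = f.read()
    if end > size * 1024:
        data = data.split(b"\n", 1)[-1]
    return data.decode("utf-8", errors="replace")

path = os.path.join(tempfile.mkdtemp(), "combined.log")
with open(path, "w") as f:
    f.write("line one\nline two\nline three\n")
print(tail_kib(path))  # whole file fits in the window, returned unchanged
```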
-
\ No newline at end of file
diff --git a/docs/html/_modules/pylorax/api/recipes.html b/docs/html/_modules/pylorax/api/recipes.html
deleted file mode 100644
index 98ad6aa6..00000000
--- a/docs/html/_modules/pylorax/api/recipes.html
+++ /dev/null
@@ -1,1476 +0,0 @@
-pylorax.api.recipes — Lorax 35.0 documentation
Source code for pylorax.api.recipes

-#
-# Copyright (C) 2017-2019  Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-
-import gi
-gi.require_version("Ggit", "1.0")
-from gi.repository import Ggit as Git
-from gi.repository import Gio
-from gi.repository import GLib
-
-import os
-import semantic_version as semver
-
-from pylorax.api.projects import dep_evra
-from pylorax.base import DataHolder
-from pylorax.sysutils import joinpaths
-import pylorax.api.toml as toml
-
-
-
[docs]class CommitTimeValError(Exception): - pass
- -
[docs]class RecipeFileError(Exception): - pass
- -
[docs]class RecipeError(Exception): - pass
- - -
[docs]class Recipe(dict): - """A Recipe of packages and modules - - This is a subclass of dict that enforces the constructor arguments - and adds a .filename property to return the recipe's filename, - and a .toml() function to return the recipe as a TOML string. - """ - def __init__(self, name, description, version, modules, packages, groups, customizations=None, gitrepos=None): - # Check that version is empty or semver compatible - if version: - semver.Version(version) - - # Make sure modules, packages, and groups are listed by their case-insensitive names - if modules is not None: - modules = sorted(modules, key=lambda m: m["name"].lower()) - if packages is not None: - packages = sorted(packages, key=lambda p: p["name"].lower()) - if groups is not None: - groups = sorted(groups, key=lambda g: g["name"].lower()) - - # Only support [[repos.git]] for now - if gitrepos is not None: - repos = {"git": sorted(gitrepos, key=lambda g: g["repo"].lower())} - else: - repos = None - dict.__init__(self, name=name, - description=description, - version=version, - modules=modules, - packages=packages, - groups=groups, - customizations=customizations, - repos=repos) - - # We don't want customizations=None to show up in the TOML so remove it - if customizations is None: - del self["customizations"] - - # Don't include empty repos or repos.git - if repos is None or not repos["git"]: - del self["repos"] - - @property - def package_names(self): - """Return the names of the packages""" - return [p["name"] for p in self["packages"] or []] - - @property - def package_nver(self): - """Return the names and version globs of the packages""" - return [(p["name"], p["version"]) for p in self["packages"] or []] - - @property - def module_names(self): - """Return the names of the modules""" - return [m["name"] for m in self["modules"] or []] - - @property - def module_nver(self): - """Return the names and version globs of the modules""" - return [(m["name"], m["version"]) for m in 
self["modules"] or []] - - @property - def group_names(self): - """Return the names of the groups. Groups do not have versions.""" - return map(lambda g: g["name"], self["groups"] or []) - - @property - def filename(self): - """Return the Recipe's filename - - Replaces spaces in the name with '-' and appends .toml - """ - return recipe_filename(self.get("name")) - -
[docs] def toml(self): - """Return the Recipe in TOML format""" - return toml.dumps(self)
- -
[docs] def bump_version(self, old_version=None): - """semver recipe version number bump - - :param old_version: An optional old version number - :type old_version: str - :returns: The new version number or None - :rtype: str - :raises: ValueError - - If neither have a version, 0.0.1 is returned - If there is no old version the new version is checked and returned - If there is no new version, but there is a old one, bump its patch level - If the old and new versions are the same, bump the patch level - If they are different, check and return the new version - """ - new_version = self.get("version") - if not new_version and not old_version: - self["version"] = "0.0.1" - - elif new_version and not old_version: - semver.Version(new_version) - self["version"] = new_version - - elif not new_version or new_version == old_version: - new_version = str(semver.Version(old_version).next_patch()) - self["version"] = new_version - - else: - semver.Version(new_version) - self["version"] = new_version - - # Return the new version - return str(semver.Version(self["version"]))
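The branches of ``bump_version()`` reduce to a small decision table. A standalone sketch of that table, substituting a hand-rolled ``next_patch`` for the ``semantic_version`` call (plain ``X.Y.Z`` strings only, an assumption made for brevity):

```python
def next_patch(version):
    # Minimal stand-in for semantic_version's next_patch(); no prerelease
    # or build-metadata handling.
    major, minor, patch = (int(p) for p in version.split("."))
    return "%d.%d.%d" % (major, minor, patch + 1)

def bump_version(new_version, old_version):
    # The decision table from Recipe.bump_version()
    if not new_version and not old_version:
        return "0.0.1"                      # neither has a version
    if new_version and not old_version:
        return new_version                  # no old version: keep the new one
    if not new_version or new_version == old_version:
        return next_patch(old_version)      # same or missing: bump patch level
    return new_version                      # different: keep the new version

print(bump_version(None, None))        # 0.0.1
print(bump_version("0.3.1", "0.3.1"))  # 0.3.2
```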
- -
[docs] def freeze(self, deps): - """ Return a new Recipe with full module and package NEVRA - - :param deps: A list of dependency NEVRA to use to fill in the modules and packages - :type deps: list(dict) - :returns: A new Recipe object - :rtype: Recipe - """ - module_names = self.module_names - package_names = self.package_names - group_names = self.group_names - - new_modules = [] - new_packages = [] - new_groups = [] - for dep in deps: - if dep["name"] in package_names: - new_packages.append(RecipePackage(dep["name"], dep_evra(dep))) - elif dep["name"] in module_names: - new_modules.append(RecipeModule(dep["name"], dep_evra(dep))) - elif dep["name"] in group_names: - new_groups.append(RecipeGroup(dep["name"])) - if "customizations" in self: - customizations = self["customizations"] - else: - customizations = None - if "repos" in self and "git" in self["repos"]: - gitrepos = self["repos"]["git"] - else: - gitrepos = None - - return Recipe(self["name"], self["description"], self["version"], - new_modules, new_packages, new_groups, customizations, gitrepos)
- -
[docs]class RecipeModule(dict): - def __init__(self, name, version): - dict.__init__(self, name=name, version=version)
- -
[docs]class RecipePackage(RecipeModule): - pass
- -
[docs]class RecipeGroup(dict): - def __init__(self, name): - dict.__init__(self, name=name)
- -
[docs]def NewRecipeGit(toml_dict): - """Create a RecipeGit object from fields in a TOML dict - - :param rpmname: Name of the rpm to create, also used as the prefix name in the tar archive - :type rpmname: str - :param rpmversion: Version of the rpm, eg. "1.0.0" - :type rpmversion: str - :param rpmrelease: Release of the rpm, eg. "1" - :type rpmrelease: str - :param summary: Summary string for the rpm - :type summary: str - :param repo: URL of the git repo to clone and create the archive from - :type repo: str - :param ref: Git reference to check out. eg. origin/branch-name, git tag, or git commit hash - :type ref: str - :param destination: Path to install the / of the git repo at when installing the rpm - :type destination: str - :returns: A populated RecipeGit object - :rtype: RecipeGit - - The TOML should look like this:: - - [[repos.git]] - rpmname="server-config" - rpmversion="1.0" - rpmrelease="1" - summary="Setup files for server deployment" - repo="PATH OF GIT REPO TO CLONE" - ref="v1.0" - destination="/opt/server/" - - Note that the repo path supports anything that git supports, file://, https://, http:// - - Currently there is no support for authentication - """ - return RecipeGit(toml_dict.get("rpmname"), - toml_dict.get("rpmversion"), - toml_dict.get("rpmrelease"), - toml_dict.get("summary", ""), - toml_dict.get("repo"), - toml_dict.get("ref"), - toml_dict.get("destination"))
- -
[docs]class RecipeGit(dict): - def __init__(self, rpmname, rpmversion, rpmrelease, summary, repo, ref, destination): - dict.__init__(self, rpmname=rpmname, rpmversion=rpmversion, rpmrelease=rpmrelease, - summary=summary, repo=repo, ref=ref, destination=destination)
- -
[docs]def recipe_from_file(recipe_path): - """Return a recipe file as a Recipe object - - :param recipe_path: Path to the recipe file - :type recipe_path: str - :returns: A Recipe object - :rtype: Recipe - """ - with open(recipe_path, 'rb') as f: - return recipe_from_toml(f.read())
- -
[docs]def recipe_from_toml(recipe_str): - """Create a Recipe object from a toml string. - - :param recipe_str: The Recipe TOML string - :type recipe_str: str - :returns: A Recipe object - :rtype: Recipe - :raises: TomlError - """ - recipe_dict = toml.loads(recipe_str) - return recipe_from_dict(recipe_dict)
- -
[docs]def check_required_list(lst, fields): - """Check a list of dicts for required fields - - :param lst: A list of dicts with fields - :type lst: list of dict - :param fields: A list of field name strings - :type fields: list of str - :returns: A list of error strings - :rtype: list of str - """ - errors = [] - for i, m in enumerate(lst): - m_errs = [] - errors.extend(check_list_case(fields, m.keys(), prefix="%d " % (i+1))) - for f in fields: - if f not in m: - m_errs.append("'%s'" % f) - if m_errs: - errors.append("%d is missing %s" % (i+1, ", ".join(m_errs))) - return errors
- -
[docs]def check_list_case(expected_keys, recipe_keys, prefix=""): - """Check the case of the recipe keys - - :param expected_keys: A list of expected key strings - :type expected_keys: list of str - :param recipe_keys: A list of the recipe's key strings - :type recipe_keys: list of str - :returns: list of errors - :rtype: list of str - """ - errors = [] - for k in recipe_keys: - if k in expected_keys: - continue - if k.lower() in expected_keys: - errors.append(prefix + "%s should be %s" % (k, k.lower())) - return errors
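``check_list_case()`` flags only keys whose sole problem is capitalization; keys that are unknown altogether pass through for other checks to report. Reproducing the function verbatim to show that behavior:

```python
def check_list_case(expected_keys, recipe_keys, prefix=""):
    # Copied from the module above: report keys that would match an
    # expected key if lowercased.
    errors = []
    for k in recipe_keys:
        if k in expected_keys:
            continue
        if k.lower() in expected_keys:
            errors.append(prefix + "%s should be %s" % (k, k.lower()))
    return errors

print(check_list_case(["name", "version"], ["Name", "version", "extra"]))
# ['Name should be name'] -- 'extra' is not a case problem, so it is skipped
```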
- -
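``check_list_case()`` is also self-contained, so its behavior is easy to demonstrate directly. A re-implementation sketch:

```python
def check_list_case(expected_keys, recipe_keys, prefix=""):
    """Flag recipe keys that match an expected key except for case."""
    errors = []
    for k in recipe_keys:
        if k in expected_keys:
            continue
        if k.lower() in expected_keys:
            errors.append(prefix + "%s should be %s" % (k, k.lower()))
    return errors

print(check_list_case(["name", "version"], ["Name", "version", "extra"]))
# ['Name should be name']
```

Note that a completely unknown key like ``"extra"`` is not flagged here; this check only catches case mismatches, and missing-field detection is left to ``check_required_list()``.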
def check_recipe_dict(recipe_dict):
    """Check a dict before using it to create a new Recipe

    :param recipe_dict: A plain dict of the recipe
    :type recipe_dict: dict
    :returns: A list of error strings, empty if the dict is ok
    :rtype: list of str

    This checks a dict to make sure required fields are present,
    that optional fields are correct, and that other optional fields
    are of the correct format, when included.

    All of the errors are collected and returned so that the caller can raise
    a single RecipeError with a string that can be presented to users.
    """
    errors = []

    # Check for wrong case of top level keys
    top_keys = ["name", "description", "version", "modules", "packages", "groups", "repos", "customizations"]
    errors.extend(check_list_case(top_keys, recipe_dict.keys()))

    if "name" not in recipe_dict:
        errors.append("Missing 'name'")
    if "description" not in recipe_dict:
        errors.append("Missing 'description'")
    if "version" in recipe_dict:
        try:
            semver.Version(recipe_dict["version"])
        except ValueError:
            errors.append("Invalid 'version', must use Semantic Versioning")

    # Examine all the modules
    if recipe_dict.get("modules"):
        module_errors = check_required_list(recipe_dict["modules"], ["name", "version"])
        if module_errors:
            errors.append("'modules' errors:\n%s" % "\n".join(module_errors))

    # Examine all the packages
    if recipe_dict.get("packages"):
        package_errors = check_required_list(recipe_dict["packages"], ["name", "version"])
        if package_errors:
            errors.append("'packages' errors:\n%s" % "\n".join(package_errors))

    if recipe_dict.get("groups"):
        groups_errors = check_required_list(recipe_dict["groups"], ["name"])
        if groups_errors:
            errors.append("'groups' errors:\n%s" % "\n".join(groups_errors))

    if recipe_dict.get("repos") and recipe_dict.get("repos").get("git"):
        repos_errors = check_required_list(recipe_dict.get("repos").get("git"),
                                           ["rpmname", "rpmversion", "rpmrelease", "summary", "repo", "ref", "destination"])
        if repos_errors:
            errors.append("'repos.git' errors:\n%s" % "\n".join(repos_errors))

    # No customizations to check, exit now
    c = recipe_dict.get("customizations")
    if not c:
        return errors

    # Make sure to catch empty sections by testing for keywords, not just looking at .get() result.
    if "kernel" in c:
        errors.extend(check_list_case(["append"], c["kernel"].keys(), prefix="kernel "))
        if "append" not in c.get("kernel", []):
            errors.append("'customizations.kernel': missing append field.")

    if "sshkey" in c:
        sshkey_errors = check_required_list(c.get("sshkey"), ["user", "key"])
        if sshkey_errors:
            errors.append("'customizations.sshkey' errors:\n%s" % "\n".join(sshkey_errors))

    if "user" in c:
        user_errors = check_required_list(c.get("user"), ["name"])
        if user_errors:
            errors.append("'customizations.user' errors:\n%s" % "\n".join(user_errors))

    if "group" in c:
        group_errors = check_required_list(c.get("group"), ["name"])
        if group_errors:
            errors.append("'customizations.group' errors:\n%s" % "\n".join(group_errors))

    if "timezone" in c:
        errors.extend(check_list_case(["timezone", "ntpservers"], c["timezone"].keys(), prefix="timezone "))
        if not c.get("timezone"):
            errors.append("'customizations.timezone': missing timezone or ntpservers fields.")

    if "locale" in c:
        errors.extend(check_list_case(["languages", "keyboard"], c["locale"].keys(), prefix="locale "))
        if not c.get("locale"):
            errors.append("'customizations.locale': missing languages or keyboard fields.")

    if "firewall" in c:
        errors.extend(check_list_case(["ports"], c["firewall"].keys(), prefix="firewall "))
        if not c.get("firewall"):
            errors.append("'customizations.firewall': missing ports field or services section.")

    if "services" in c.get("firewall", []):
        errors.extend(check_list_case(["enabled", "disabled"], c["firewall"]["services"].keys(), prefix="firewall.services "))
        if not c.get("firewall").get("services"):
            errors.append("'customizations.firewall.services': missing enabled or disabled fields.")

    if "services" in c:
        errors.extend(check_list_case(["enabled", "disabled"], c["services"].keys(), prefix="services "))
        if not c.get("services"):
            errors.append("'customizations.services': missing enabled or disabled fields.")

    return errors
- -
def recipe_from_dict(recipe_dict):
    """Create a Recipe object from a plain dict.

    :param recipe_dict: A plain dict of the recipe
    :type recipe_dict: dict
    :returns: A Recipe object
    :rtype: Recipe
    :raises: RecipeError
    """
    errors = check_recipe_dict(recipe_dict)
    if errors:
        msg = "\n".join(errors)
        raise RecipeError(msg)

    # Make RecipeModule objects from the toml
    # The TOML may not have modules or packages in it. Use empty lists in this case
    try:
        if recipe_dict.get("modules"):
            modules = [RecipeModule(m.get("name"), m.get("version")) for m in recipe_dict["modules"]]
        else:
            modules = []
        if recipe_dict.get("packages"):
            packages = [RecipePackage(p.get("name"), p.get("version")) for p in recipe_dict["packages"]]
        else:
            packages = []
        if recipe_dict.get("groups"):
            groups = [RecipeGroup(g.get("name")) for g in recipe_dict["groups"]]
        else:
            groups = []
        if recipe_dict.get("repos") and recipe_dict.get("repos").get("git"):
            gitrepos = [NewRecipeGit(r) for r in recipe_dict["repos"]["git"]]
        else:
            gitrepos = []
        name = recipe_dict["name"]
        description = recipe_dict["description"]
        version = recipe_dict.get("version", None)
        customizations = recipe_dict.get("customizations", None)

        # [customizations] was incorrectly documented at first, so we have to support using it
        # as [[customizations]] by grabbing the first element.
        if isinstance(customizations, list):
            customizations = customizations[0]

    except KeyError as e:
        raise RecipeError("There was a problem parsing the recipe: %s" % str(e))

    return Recipe(name, description, version, modules, packages, groups, customizations, gitrepos)
- -
def gfile(path):
    """Convert a string path to GFile for use with Git"""
    return Gio.file_new_for_path(path)
- -
def recipe_filename(name):
    """Return the toml filename for a recipe

    Replaces spaces with '-' and appends '.toml'
    """
    # XXX Raise an error if this is empty?
    return name.replace(" ", "-") + ".toml"
- -
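The space-to-dash mapping in ``recipe_filename()`` is trivial to check in isolation:

```python
def recipe_filename(name):
    # Replace spaces with '-' and append '.toml', as the module does
    return name.replace(" ", "-") + ".toml"

print(recipe_filename("http server"))   # http-server.toml
```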
def head_commit(repo, branch):
    """Get the branch's HEAD Commit Object

    :param repo: Open repository
    :type repo: Git.Repository
    :param branch: Branch name
    :type branch: str
    :returns: Branch's head commit
    :rtype: Git.Commit
    :raises: Can raise errors from Ggit
    """
    branch_obj = repo.lookup_branch(branch, Git.BranchType.LOCAL)
    commit_id = branch_obj.get_target()
    return repo.lookup(commit_id, Git.Commit)
- -
def prepare_commit(repo, branch, builder):
    """Prepare for a commit

    :param repo: Open repository
    :type repo: Git.Repository
    :param branch: Branch name
    :type branch: str
    :param builder: instance of TreeBuilder
    :type builder: TreeBuilder
    :returns: (Tree, Sig, Ref)
    :rtype: tuple
    :raises: Can raise errors from Ggit
    """
    tree_id = builder.write()
    tree = repo.lookup(tree_id, Git.Tree)
    sig = Git.Signature.new_now("bdcs-api-server", "user-email")
    ref = "refs/heads/%s" % branch
    return (tree, sig, ref)
- -
def open_or_create_repo(path):
    """Open an existing repo, or create a new one

    :param path: path to recipe directory
    :type path: string
    :returns: A repository object
    :rtype: Git.Repository
    :raises: Can raise errors from Ggit

    A bare git repo will be created in the git directory of the specified path.
    If a repo already exists it will be opened and returned instead of
    creating a new one.
    """
    Git.init()
    git_path = joinpaths(path, "git")
    if os.path.exists(joinpaths(git_path, "HEAD")):
        return Git.Repository.open(gfile(git_path))

    repo = Git.Repository.init_repository(gfile(git_path), True)

    # Make an initial empty commit
    sig = Git.Signature.new_now("bdcs-api-server", "user-email")
    tree_id = repo.get_index().write_tree()
    tree = repo.lookup(tree_id, Git.Tree)
    repo.create_commit("HEAD", sig, sig, "UTF-8", "Initial Recipe repository commit", tree, [])
    return repo
- -
def write_commit(repo, branch, filename, message, content):
    """Make a new commit to a repository's branch

    :param repo: Open repository
    :type repo: Git.Repository
    :param branch: Branch name
    :type branch: str
    :param filename: full path of the file to add
    :type filename: str
    :param message: The commit message
    :type message: str
    :param content: The data to write
    :type content: str
    :returns: OId of the new commit
    :rtype: Git.OId
    :raises: Can raise errors from Ggit
    """
    try:
        parent_commit = head_commit(repo, branch)
    except GLib.GError:
        # Branch doesn't exist, make a new one based on master
        master_head = head_commit(repo, "master")
        repo.create_branch(branch, master_head, 0)
        parent_commit = head_commit(repo, branch)

    blob_id = repo.create_blob_from_buffer(content.encode("UTF-8"))

    # Use treebuilder to make a new entry for this filename and blob
    parent_tree = parent_commit.get_tree()
    builder = repo.create_tree_builder_from_tree(parent_tree)
    builder.insert(filename, blob_id, Git.FileMode.BLOB)
    (tree, sig, ref) = prepare_commit(repo, branch, builder)
    return repo.create_commit(ref, sig, sig, "UTF-8", message, tree, [parent_commit])
- -
def read_commit_spec(repo, spec):
    """Return the raw content of the blob specified by the spec

    :param repo: Open repository
    :type repo: Git.Repository
    :param spec: Git revparse spec
    :type spec: str
    :returns: Contents of the commit
    :rtype: str
    :raises: Can raise errors from Ggit

    eg. To read the README file from master the spec is "master:README"
    """
    commit_id = repo.revparse(spec).get_id()
    blob = repo.lookup(commit_id, Git.Blob)
    return blob.get_raw_content()
- -
def read_commit(repo, branch, filename, commit=None):
    """Return the contents of a file on a specific branch or commit.

    :param repo: Open repository
    :type repo: Git.Repository
    :param branch: Branch name
    :type branch: str
    :param filename: filename to read
    :type filename: str
    :param commit: Optional commit hash
    :type commit: str
    :returns: The commit id, and the contents of the commit
    :rtype: tuple(str, str)
    :raises: Can raise errors from Ggit

    If no commit is passed the master:filename is returned, otherwise it will be
    commit:filename
    """
    if not commit:
        # Find the most recent commit for filename on the selected branch
        commits = list_commits(repo, branch, filename, 1)
        if not commits:
            raise RecipeError("No commits for %s on the %s branch." % (filename, branch))
        commit = commits[0].commit
    return (commit, read_commit_spec(repo, "%s:%s" % (commit, filename)))
- -
def read_recipe_commit(repo, branch, recipe_name, commit=None):
    """Read a recipe commit from git and return a Recipe object

    :param repo: Open repository
    :type repo: Git.Repository
    :param branch: Branch name
    :type branch: str
    :param recipe_name: Recipe name to read
    :type recipe_name: str
    :param commit: Optional commit hash
    :type commit: str
    :returns: A Recipe object
    :rtype: Recipe
    :raises: Can raise errors from Ggit

    If no commit is passed the master:filename is returned, otherwise it will be
    commit:filename
    """
    if not repo_file_exists(repo, branch, recipe_filename(recipe_name)):
        raise RecipeFileError("Unknown blueprint")

    (_, recipe_toml) = read_commit(repo, branch, recipe_filename(recipe_name), commit)
    return recipe_from_toml(recipe_toml)
- -
def read_recipe_and_id(repo, branch, recipe_name, commit=None):
    """Read a recipe commit and its id from git

    :param repo: Open repository
    :type repo: Git.Repository
    :param branch: Branch name
    :type branch: str
    :param recipe_name: Recipe name to read
    :type recipe_name: str
    :param commit: Optional commit hash
    :type commit: str
    :returns: The commit id, and a Recipe object
    :rtype: tuple(str, Recipe)
    :raises: Can raise errors from Ggit

    If no commit is passed the master:filename is returned, otherwise it will be
    commit:filename
    """
    (commit_id, recipe_toml) = read_commit(repo, branch, recipe_filename(recipe_name), commit)
    return (commit_id, recipe_from_toml(recipe_toml))
- -
def list_branch_files(repo, branch):
    """Return a sorted list of the files on the branch HEAD

    :param repo: Open repository
    :type repo: Git.Repository
    :param branch: Branch name
    :type branch: str
    :returns: A sorted list of the filenames
    :rtype: list(str)
    :raises: Can raise errors from Ggit
    """
    commit = head_commit(repo, branch).get_id().to_string()
    return list_commit_files(repo, commit)
- -
def list_commit_files(repo, commit):
    """Return a sorted list of the files on a commit

    :param repo: Open repository
    :type repo: Git.Repository
    :param commit: The commit hash to list
    :type commit: str
    :returns: A sorted list of the filenames
    :rtype: list(str)
    :raises: Can raise errors from Ggit
    """
    commit_id = Git.OId.new_from_string(commit)
    commit_obj = repo.lookup(commit_id, Git.Commit)
    tree = commit_obj.get_tree()
    return sorted([tree.get(i).get_name() for i in range(0, tree.size())])
- -
def delete_recipe(repo, branch, recipe_name):
    """Delete a recipe from a branch.

    :param repo: Open repository
    :type repo: Git.Repository
    :param branch: Branch name
    :type branch: str
    :param recipe_name: Recipe name to delete
    :type recipe_name: str
    :returns: OId of the new commit
    :rtype: Git.OId
    :raises: Can raise errors from Ggit
    """
    return delete_file(repo, branch, recipe_filename(recipe_name))
- -
def delete_file(repo, branch, filename):
    """Delete a file from a branch.

    :param repo: Open repository
    :type repo: Git.Repository
    :param branch: Branch name
    :type branch: str
    :param filename: filename to delete
    :type filename: str
    :returns: OId of the new commit
    :rtype: Git.OId
    :raises: Can raise errors from Ggit
    """
    parent_commit = head_commit(repo, branch)
    parent_tree = parent_commit.get_tree()
    builder = repo.create_tree_builder_from_tree(parent_tree)
    builder.remove(filename)
    (tree, sig, ref) = prepare_commit(repo, branch, builder)
    message = "Recipe %s deleted" % filename
    return repo.create_commit(ref, sig, sig, "UTF-8", message, tree, [parent_commit])
- -
def revert_recipe(repo, branch, recipe_name, commit):
    """Revert the contents of a recipe to that of a previous commit

    :param repo: Open repository
    :type repo: Git.Repository
    :param branch: Branch name
    :type branch: str
    :param recipe_name: Recipe name to revert
    :type recipe_name: str
    :param commit: Commit hash
    :type commit: str
    :returns: OId of the new commit
    :rtype: Git.OId
    :raises: Can raise errors from Ggit
    """
    return revert_file(repo, branch, recipe_filename(recipe_name), commit)
- -
def revert_file(repo, branch, filename, commit):
    """Revert the contents of a file to that of a previous commit

    :param repo: Open repository
    :type repo: Git.Repository
    :param branch: Branch name
    :type branch: str
    :param filename: filename to revert
    :type filename: str
    :param commit: Commit hash
    :type commit: str
    :returns: OId of the new commit
    :rtype: Git.OId
    :raises: Can raise errors from Ggit
    """
    commit_id = Git.OId.new_from_string(commit)
    commit_obj = repo.lookup(commit_id, Git.Commit)
    revert_tree = commit_obj.get_tree()
    entry = revert_tree.get_by_name(filename)
    blob_id = entry.get_id()
    parent_commit = head_commit(repo, branch)

    # Use treebuilder to modify the tree
    parent_tree = parent_commit.get_tree()
    builder = repo.create_tree_builder_from_tree(parent_tree)
    builder.insert(filename, blob_id, Git.FileMode.BLOB)
    (tree, sig, ref) = prepare_commit(repo, branch, builder)
    commit_hash = commit_id.to_string()
    message = "%s reverted to commit %s" % (filename, commit_hash)
    return repo.create_commit(ref, sig, sig, "UTF-8", message, tree, [parent_commit])
- -
def commit_recipe(repo, branch, recipe):
    """Commit a recipe to a branch

    :param repo: Open repository
    :type repo: Git.Repository
    :param branch: Branch name
    :type branch: str
    :param recipe: Recipe to commit
    :type recipe: Recipe
    :returns: OId of the new commit
    :rtype: Git.OId
    :raises: Can raise errors from Ggit
    """
    try:
        old_recipe = read_recipe_commit(repo, branch, recipe["name"])
        old_version = old_recipe["version"]
    except Exception:
        old_version = None

    recipe.bump_version(old_version)
    recipe_toml = recipe.toml()
    message = "Recipe %s, version %s saved." % (recipe["name"], recipe["version"])
    return write_commit(repo, branch, recipe.filename, message, recipe_toml)
- -
def commit_recipe_file(repo, branch, filename):
    """Commit a recipe file to a branch

    :param repo: Open repository
    :type repo: Git.Repository
    :param branch: Branch name
    :type branch: str
    :param filename: Path to the recipe file to commit
    :type filename: str
    :returns: OId of the new commit
    :rtype: Git.OId
    :raises: Can raise errors from Ggit or RecipeFileError
    """
    try:
        recipe = recipe_from_file(filename)
    except IOError:
        raise RecipeFileError

    return commit_recipe(repo, branch, recipe)
- -
def commit_recipe_directory(repo, branch, directory):
    r"""Commit all \*.toml files from a directory, if they aren't already in git.

    :param repo: Open repository
    :type repo: Git.Repository
    :param branch: Branch name
    :type branch: str
    :param directory: The directory of \*.toml recipes to commit
    :type directory: str
    :returns: None
    :raises: Can raise errors from Ggit or RecipeFileError

    Files with Toml or RecipeFileErrors will be skipped, and the remainder will
    be tried.
    """
    dir_files = set([e for e in os.listdir(directory) if e.endswith(".toml")])
    branch_files = set(list_branch_files(repo, branch))
    new_files = dir_files.difference(branch_files)

    for f in new_files:
        # Skip files with errors, but try the others
        try:
            commit_recipe_file(repo, branch, joinpaths(directory, f))
        except (RecipeError, RecipeFileError, toml.TomlError):
            pass
- -
def tag_recipe_commit(repo, branch, recipe_name):
    """Tag a file's most recent commit

    :param repo: Open repository
    :type repo: Git.Repository
    :param branch: Branch name
    :type branch: str
    :param recipe_name: Recipe name to tag
    :type recipe_name: str
    :returns: Tag id or None if it failed.
    :rtype: Git.OId
    :raises: Can raise errors from Ggit

    Uses tag_file_commit()
    """
    if not repo_file_exists(repo, branch, recipe_filename(recipe_name)):
        raise RecipeFileError("Unknown blueprint")

    return tag_file_commit(repo, branch, recipe_filename(recipe_name))
- -
def tag_file_commit(repo, branch, filename):
    """Tag a file's most recent commit

    :param repo: Open repository
    :type repo: Git.Repository
    :param branch: Branch name
    :type branch: str
    :param filename: Filename to tag
    :type filename: str
    :returns: Tag id or None if it failed.
    :rtype: Git.OId
    :raises: Can raise errors from Ggit

    This uses git tags, of the form `refs/tags/<branch>/<filename>/r<revision>`
    Only the most recent recipe commit can be tagged to prevent out of order tagging.
    Revisions start at 1 and increment for each new commit that is tagged.
    If the commit has already been tagged it will return false.
    """
    file_commits = list_commits(repo, branch, filename)
    if not file_commits:
        return None

    # Find the most recently tagged version (may not be one) and add 1 to it.
    for details in file_commits:
        if details.revision is not None:
            new_revision = details.revision + 1
            break
    else:
        new_revision = 1

    name = "%s/%s/r%d" % (branch, filename, new_revision)
    sig = Git.Signature.new_now("bdcs-api-server", "user-email")
    commit_id = Git.OId.new_from_string(file_commits[0].commit)
    commit = repo.lookup(commit_id, Git.Commit)
    return repo.create_tag(name, commit, sig, name, Git.CreateFlags.NONE)
- -
def find_commit_tag(repo, branch, filename, commit_id):
    """Find the tag that matches the commit_id

    :param repo: Open repository
    :type repo: Git.Repository
    :param branch: Branch name
    :type branch: str
    :param filename: filename to check
    :type filename: str
    :param commit_id: The commit id to check
    :type commit_id: Git.OId
    :returns: The tag or None if there isn't one
    :rtype: str or None

    There should be only 1 tag pointing to a commit, but there may not
    be a tag at all.

    The tag will look like: 'refs/tags/<branch>/<filename>/r<revision>'
    """
    pattern = "%s/%s/r*" % (branch, filename)
    tags = [t for t in repo.list_tags_match(pattern) if is_commit_tag(repo, commit_id, t)]
    if len(tags) != 1:
        return None
    else:
        return tags[0]
- -
def is_commit_tag(repo, commit_id, tag):
    """Check to see if a tag points to a specific commit.

    :param repo: Open repository
    :type repo: Git.Repository
    :param commit_id: The commit id to check
    :type commit_id: Git.OId
    :param tag: The tag to check
    :type tag: str
    :returns: True if the tag points to the commit, False otherwise
    :rtype: bool
    """
    ref = repo.lookup_reference("refs/tags/" + tag)
    tag_id = ref.get_target()
    tag = repo.lookup(tag_id, Git.Tag)
    target_id = tag.get_target_id()
    return commit_id.compare(target_id) == 0
- -
def get_revision_from_tag(tag):
    """Return the revision number from a tag

    :param tag: The tag to extract the revision from
    :type tag: str
    :returns: The integer revision or None
    :rtype: int or None

    The revision is the part after the r in 'branch/filename/rXXX'
    """
    if tag is None:
        return None
    try:
        return int(tag.rsplit('r', 2)[-1])
    except (ValueError, IndexError):
        return None
- -
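The `branch/filename/rXXX` tag scheme can be parsed the same way outside of git. A sketch of `get_revision_from_tag()` plus the next-revision computation that `tag_file_commit()` performs over the commit list (newest first):

```python
def get_revision_from_tag(tag):
    """Return the integer revision after the final 'r', or None."""
    if tag is None:
        return None
    try:
        return int(tag.rsplit('r', 2)[-1])
    except (ValueError, IndexError):
        return None

def next_revision(revisions):
    """First tagged revision found (newest commit first) plus one, else 1."""
    for rev in revisions:
        if rev is not None:
            return rev + 1
    return 1

print(get_revision_from_tag("master/http-server.toml/r3"))   # 3
print(next_revision([None, 3]))                               # 4
```

Note that `rsplit('r', 2)` splits on the letter 'r' from the right, so filenames containing 'r' still parse correctly as long as the tag ends in `/r<number>`.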
class CommitDetails(DataHolder):
    def __init__(self, commit, timestamp, message, revision=None):
        DataHolder.__init__(self,
                            commit = commit,
                            timestamp = timestamp,
                            message = message,
                            revision = revision)
- -
def list_commits(repo, branch, filename, limit=0):
    """List the commit history of a file on a branch.

    :param repo: Open repository
    :type repo: Git.Repository
    :param branch: Branch name
    :type branch: str
    :param filename: filename to list commits for
    :type filename: str
    :param limit: Number of commits to return (0=all)
    :type limit: int
    :returns: A list of commit details
    :rtype: list(CommitDetails)
    :raises: Can raise errors from Ggit
    """
    revwalk = Git.RevisionWalker.new(repo)
    branch_ref = "refs/heads/%s" % branch
    revwalk.push_ref(branch_ref)

    commits = []
    while True:
        commit_id = revwalk.next()
        if not commit_id:
            break
        commit = repo.lookup(commit_id, Git.Commit)

        parents = commit.get_parents()
        # No parents? Must be the first commit.
        if parents.get_size() == 0:
            continue

        tree = commit.get_tree()
        # Is the filename in this tree? If not, move on.
        if not tree.get_by_name(filename):
            continue

        # Is filename different in all of the parent commits?
        parent_commits = list(map(parents.get, range(0, parents.get_size())))
        is_diff = all([is_parent_diff(repo, filename, tree, pc) for pc in parent_commits])
        # No changes from parents, skip it.
        if not is_diff:
            continue

        tag = find_commit_tag(repo, branch, filename, commit.get_id())
        try:
            commits.append(get_commit_details(commit, get_revision_from_tag(tag)))
            if limit and len(commits) >= limit:
                break
        except CommitTimeValError:
            # Skip any commits that have trouble converting the time
            # TODO - log details about this failure
            pass

    # These will be in reverse time sort order thanks to revwalk
    return commits
- -
def get_commit_details(commit, revision=None):
    """Return the details about a specific commit.

    :param commit: The commit to get details from
    :type commit: Git.Commit
    :param revision: Optional commit revision
    :type revision: int
    :returns: Details about the commit
    :rtype: CommitDetails
    :raises: CommitTimeValError or Ggit exceptions
    """
    message = commit.get_message()
    commit_str = commit.get_id().to_string()
    sig = commit.get_committer()

    datetime = sig.get_time()
    # XXX What do we do with timezone?
    _timezone = sig.get_time_zone()
    time_str = datetime.format_iso8601()
    if not time_str:
        raise CommitTimeValError

    return CommitDetails(commit_str, time_str, message, revision)
- -
def is_parent_diff(repo, filename, tree, parent):
    """Check to see if the commit is different from its parents

    :param repo: Open repository
    :type repo: Git.Repository
    :param filename: filename to compare
    :type filename: str
    :param tree: The commit's tree
    :type tree: Git.Tree
    :param parent: The commit's parent commit
    :type parent: Git.Commit
    :returns: True if filename in the commit is different from its parents
    :rtype: bool
    """
    diff_opts = Git.DiffOptions.new()
    diff_opts.set_pathspec([filename])
    diff = Git.Diff.new_tree_to_tree(repo, parent.get_tree(), tree, diff_opts)
    return diff.get_num_deltas() > 0
- -
def find_field_value(field, value, lst):
    """Find a field matching value in the list of dicts.

    :param field: field to search for
    :type field: str
    :param value: value to match in the field
    :type value: str
    :param lst: List of dict's with field
    :type lst: list of dict
    :returns: First dict with matching field:value, or None
    :rtype: dict or None

    Used to return a specific entry from a list that looks like this:

        [{"name": "one", "attr": "green"}, ...]

    find_field_value("name", "one", lst) will return the matching dict.
    """
    for d in lst:
        if d.get(field) and d.get(field) == value:
            return d
    return None
- -
def find_name(name, lst):
    """Find the dict matching the name in a list and return it.

    :param name: Name to search for
    :type name: str
    :param lst: List of dict's with "name" field
    :type lst: list of dict
    :returns: First dict with matching name, or None
    :rtype: dict or None

    This is just a wrapper for find_field_value with field set to "name"
    """
    return find_field_value("name", name, lst)
- -
def find_recipe_obj(path, recipe, default=None):
    """Find a recipe object

    :param path: A list of dict field names
    :type path: list of str
    :param recipe: The recipe to search
    :type recipe: Recipe
    :param default: The value to return if it is not found
    :type default: Any

    Return the object found by applying the path to the dicts in the recipe, or
    return the default if it doesn't exist.

    eg. {"customizations": {"hostname": "foo", "users": [...]}}

    find_recipe_obj(["customizations", "hostname"], recipe, "")
    """
    o = recipe
    try:
        for p in path:
            if not o.get(p):
                return default
            o = o.get(p)
    except AttributeError:
        return default

    return o
- -
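The nested-dict traversal in `find_recipe_obj()` works on any plain dict, so it can be demonstrated without a Recipe object. A quick sketch (the recipe data here is hypothetical):

```python
def find_recipe_obj(path, recipe, default=None):
    # Walk nested dicts by key, falling back to default on any miss
    o = recipe
    try:
        for p in path:
            if not o.get(p):
                return default
            o = o.get(p)
    except AttributeError:
        return default
    return o

recipe = {"customizations": {"hostname": "web", "firewall": {"ports": ["22:tcp"]}}}
print(find_recipe_obj(["customizations", "hostname"], recipe, ""))          # web
print(find_recipe_obj(["customizations", "kernel", "append"], recipe, ""))  # (the "" default)
```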
def diff_lists(title, field, old_items, new_items):
    """Return the differences between two lists of dicts.

    :param title: Title of the entry
    :type title: str
    :param field: Field to use as the key for comparisons
    :type field: str
    :param old_items: List of item dicts with "name" field
    :type old_items: list(dict)
    :param new_items: List of item dicts with "name" field
    :type new_items: list(dict)
    :returns: List of diff dicts with old/new entries
    :rtype: list(dict)
    """
    diffs = []
    old_fields = set(m[field] for m in old_items)
    new_fields = set(m[field] for m in new_items)

    added_items = new_fields.difference(old_fields)
    added_items = sorted(added_items, key=lambda n: n.lower())

    removed_items = old_fields.difference(new_fields)
    removed_items = sorted(removed_items, key=lambda n: n.lower())

    same_items = old_fields.intersection(new_fields)
    same_items = sorted(same_items, key=lambda n: n.lower())

    for v in added_items:
        diffs.append({"old": None,
                      "new": {title: find_field_value(field, v, new_items)}})

    for v in removed_items:
        diffs.append({"old": {title: find_field_value(field, v, old_items)},
                      "new": None})

    for v in same_items:
        old_item = find_field_value(field, v, old_items)
        new_item = find_field_value(field, v, new_items)
        if old_item != new_item:
            diffs.append({"old": {title: old_item},
                          "new": {title: new_item}})

    return diffs
- -
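`diff_lists()` only needs plain lists of dicts, so the old/new diff structure it produces can be shown directly. A condensed sketch using dict lookups in place of `find_field_value()` (the package data is illustrative):

```python
def diff_lists(title, field, old_items, new_items):
    # Compare two lists of dicts keyed on `field`, emitting {"old":..., "new":...}
    # entries: additions first, then removals, then changed items
    old_map = {m[field]: m for m in old_items}
    new_map = {m[field]: m for m in new_items}
    diffs = []
    for name in sorted(new_map.keys() - old_map.keys(), key=str.lower):
        diffs.append({"old": None, "new": {title: new_map[name]}})
    for name in sorted(old_map.keys() - new_map.keys(), key=str.lower):
        diffs.append({"old": {title: old_map[name]}, "new": None})
    for name in sorted(old_map.keys() & new_map.keys(), key=str.lower):
        if old_map[name] != new_map[name]:
            diffs.append({"old": {title: old_map[name]}, "new": {title: new_map[name]}})
    return diffs

old = [{"name": "bash", "version": "5.0"}]
new = [{"name": "bash", "version": "5.1"}, {"name": "tmux", "version": "*"}]
print(diff_lists("Package", "name", old, new))
```

This yields one "added" entry for tmux and one "changed" entry for bash, the same shape `recipe_diff()` consumes.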
def customizations_diff(old_recipe, new_recipe):
    """Diff the customizations sections from two versions of a recipe
    """
    diffs = []
    old_keys = set(old_recipe.get("customizations", {}).keys())
    new_keys = set(new_recipe.get("customizations", {}).keys())

    added_keys = new_keys.difference(old_keys)
    added_keys = sorted(added_keys, key=lambda n: n.lower())

    removed_keys = old_keys.difference(new_keys)
    removed_keys = sorted(removed_keys, key=lambda n: n.lower())

    same_keys = old_keys.intersection(new_keys)
    same_keys = sorted(same_keys, key=lambda n: n.lower())

    for v in added_keys:
        diffs.append({"old": None,
                      "new": {"Customizations."+v: new_recipe["customizations"][v]}})

    for v in removed_keys:
        diffs.append({"old": {"Customizations."+v: old_recipe["customizations"][v]},
                      "new": None})

    for v in same_keys:
        if new_recipe["customizations"][v] == old_recipe["customizations"][v]:
            continue

        if isinstance(new_recipe["customizations"][v], list):
            # Lists of dicts need to use diff_lists
            # sshkey uses 'user', user and group use 'name'
            if "user" in new_recipe["customizations"][v][0]:
                field_name = "user"
            elif "name" in new_recipe["customizations"][v][0]:
                field_name = "name"
            else:
                raise RuntimeError("%s list has unrecognized key, not 'name' or 'user'" % "customizations."+v)

            diffs.extend(diff_lists("Customizations."+v, field_name,
                                    old_recipe["customizations"][v],
                                    new_recipe["customizations"][v]))
        else:
            diffs.append({"old": {"Customizations."+v: old_recipe["customizations"][v]},
                          "new": {"Customizations."+v: new_recipe["customizations"][v]}})

    return diffs
- - -
def recipe_diff(old_recipe, new_recipe):
    """Diff two versions of a recipe

    :param old_recipe: The old version of the recipe
    :type old_recipe: Recipe
    :param new_recipe: The new version of the recipe
    :type new_recipe: Recipe
    :returns: A list of diff dict entries with old/new
    :rtype: list(dict)
    """
    diffs = []
    # These cannot be added or removed, just different
    for element in ["name", "description", "version"]:
        if old_recipe[element] != new_recipe[element]:
            diffs.append({"old": {element.title(): old_recipe[element]},
                          "new": {element.title(): new_recipe[element]}})

    # These lists always exist
    diffs.extend(diff_lists("Module", "name", old_recipe["modules"], new_recipe["modules"]))
    diffs.extend(diff_lists("Package", "name", old_recipe["packages"], new_recipe["packages"]))
    diffs.extend(diff_lists("Group", "name", old_recipe["groups"], new_recipe["groups"]))

    # The customizations section can contain a number of different types
    diffs.extend(customizations_diff(old_recipe, new_recipe))

    # repos contains keys that are lists (eg. [[repos.git]])
    diffs.extend(diff_lists("Repos.git", "rpmname",
                            find_recipe_obj(["repos", "git"], old_recipe, []),
                            find_recipe_obj(["repos", "git"], new_recipe, [])))

    return diffs
- -
def repo_file_exists(repo, branch, filename):
    """Return True if the filename exists on the branch

    :param repo: Open repository
    :type repo: Git.Repository
    :param branch: Branch name
    :type branch: str
    :param filename: Filename to check
    :type filename: str
    :returns: True if the filename exists on the HEAD of the branch, False otherwise.
    :rtype: bool
    """
    commit = head_commit(repo, branch).get_id().to_string()
    commit_id = Git.OId.new_from_string(commit)
    commit_obj = repo.lookup(commit_id, Git.Commit)
    tree = commit_obj.get_tree()
    return tree.get_by_name(filename) is not None
-
- -
- -
- - -
-
- -
- -
- - - - - - - - - - - - \ No newline at end of file diff --git a/docs/html/_modules/pylorax/api/server.html b/docs/html/_modules/pylorax/api/server.html deleted file mode 100644 index 3a4c5a69..00000000 --- a/docs/html/_modules/pylorax/api/server.html +++ /dev/null @@ -1,303 +0,0 @@ - - - - - - - - - - - pylorax.api.server — Lorax 35.0 documentation - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Source code for pylorax.api.server

-#
-# Copyright (C) 2017-2019 Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-import logging
-log = logging.getLogger("lorax-composer")
-
-from collections import namedtuple
-from flask import Flask, jsonify, redirect, send_from_directory
-from glob import glob
-import os
-import werkzeug
-
-from pylorax import vernum
-from pylorax.api.errors import HTTP_ERROR
-from pylorax.api.v0 import v0_api
-from pylorax.api.v1 import v1_api
-from pylorax.sysutils import joinpaths
-
-GitLock = namedtuple("GitLock", ["repo", "lock", "dir"])
-
-server = Flask(__name__)
-
-__all__ = ["server", "GitLock"]
-
-@server.route('/')
-def server_root():
-    return redirect("/api/docs/")
-
-@server.route("/api/docs/")
-@server.route("/api/docs/<path:path>")
-def api_docs(path=None):
-    # Find the html docs, falling back to the installed copy when not
-    # running from the source tree
-    docs_path = os.path.abspath(joinpaths(os.path.dirname(__file__), "../../../docs/html"))
-    if not os.path.exists(docs_path):
-        docs_path = glob("/usr/share/doc/lorax-*/html/")[0]
-
-    if not path:
-        path="index.html"
-    return send_from_directory(docs_path, path)
-
-@server.route("/api/status")
-def api_status():
-    """
-    `/api/status`
-    ^^^^^^^^^^^^^^^^
-    Return the status of the API Server::
-
-          { "api": "1",
-            "build": "devel",
-            "db_supported": true,
-            "db_version": "0",
-            "schema_version": "0",
-            "backend": "lorax-composer",
-            "msgs": []}
-
-    The 'msgs' field is a list of strings describing startup problems or status
-    that should be displayed to the user. e.g. if the compose templates are not
-    depsolving properly the errors will be in 'msgs'.
-    """
-    return jsonify(backend="lorax-composer",
-                   build=vernum,
-                   api="1",
-                   db_version="0",
-                   schema_version="0",
-                   db_supported=True,
-                   msgs=server.config["TEMPLATE_ERRORS"])
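A client consuming this route needs only stdlib JSON handling. A sketch parsing a payload in the shape documented above (the literal values here are the docstring's example, not live server output):

```python
import json

# Example payload in the shape documented for GET /api/status
raw = """{"api": "1", "build": "devel", "db_supported": true,
          "db_version": "0", "schema_version": "0",
          "backend": "lorax-composer", "msgs": []}"""

status = json.loads(raw)

# Startup problems, if any, arrive as strings in "msgs" and should be shown to the user
for msg in status["msgs"]:
    print("server warning:", msg)

print(status["backend"])  # lorax-composer
```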
-
-@server.errorhandler(werkzeug.exceptions.HTTPException)
-def bad_request(error):
-    return jsonify(status=False, errors=[{ "id": HTTP_ERROR, "code": error.code, "msg": error.name }]), error.code
-
-# Register the v0 API on /api/v0/
-server.register_blueprint(v0_api, url_prefix="/api/v0/")
-
-# Register the v1 API on /api/v1/
-# Use v0 routes by default
-skip_rules = [
-    "/compose",
-    "/compose/queue",
-    "/compose/finished",
-    "/compose/failed",
-    "/compose/status/<uuids>",
-    "/compose/info/<uuid>",
-    "/projects/source/info/<source_names>",
-    "/projects/source/new",
-]
-server.register_blueprint(v0_api, url_prefix="/api/v1/", skip_rules=skip_rules)
-server.register_blueprint(v1_api, url_prefix="/api/v1/")
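The registration above layers the v1 API on top of v0: every v0 route is reused under `/api/v1/` except the paths in `skip_rules`, which v1 overrides. A dict-based sketch of that override pattern (illustrative only, not Flask; the handler names are made up):

```python
# Default route table (stand-in for the v0 blueprint)
v0_routes = {"/blueprints/list": "v0_list", "/compose": "v0_compose"}

# Routes that v1 re-implements (stand-in for v1_api)
v1_overrides = {"/compose": "v1_compose"}

# Register v0 under the v1 prefix, skipping the overridden rules,
# then register the v1 handlers on top
skip_rules = set(v1_overrides)
v1_routes = {path: handler for path, handler in v0_routes.items() if path not in skip_rules}
v1_routes.update(v1_overrides)

print(v1_routes)  # {'/blueprints/list': 'v0_list', '/compose': 'v1_compose'}
```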
-
\ No newline at end of file
diff --git a/docs/html/_modules/pylorax/api/timestamp.html b/docs/html/_modules/pylorax/api/timestamp.html
deleted file mode 100644
index 16aea740..00000000
--- a/docs/html/_modules/pylorax/api/timestamp.html
+++ /dev/null
@@ -1,251 +0,0 @@
-pylorax.api.timestamp — Lorax 35.0 documentation

Source code for pylorax.api.timestamp

-#
-# Copyright (C) 2018  Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-
-import time
-
-from pylorax.sysutils import joinpaths
-import pylorax.api.toml as toml
-
-TS_CREATED  = "created"
-TS_STARTED  = "started"
-TS_FINISHED = "finished"
-
-
-def write_timestamp(destdir, ty):
-    path = joinpaths(destdir, "times.toml")
-
-    try:
-        contents = toml.loads(open(path, "r").read())
-    except IOError:
-        contents = toml.loads("")
-
-    if ty == TS_CREATED:
-        contents[TS_CREATED] = time.time()
-    elif ty == TS_STARTED:
-        contents[TS_STARTED] = time.time()
-    elif ty == TS_FINISHED:
-        contents[TS_FINISHED] = time.time()
-
-    with open(path, "w") as f:
-        f.write(toml.dumps(contents))
- -
-def timestamp_dict(destdir):
-    path = joinpaths(destdir, "times.toml")
-
-    try:
-        return toml.loads(open(path, "r").read())
-    except IOError:
-        return toml.loads("")
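The update logic of `write_timestamp` can be sketched on an in-memory dict (the real function persists the dict to `times.toml` through the toml wrapper; `record` is an illustrative stand-in):

```python
import time

TS_CREATED = "created"
TS_STARTED = "started"
TS_FINISHED = "finished"

def record(contents, ty):
    # Mirror write_timestamp's branches: only the three known keys are written
    if ty in (TS_CREATED, TS_STARTED, TS_FINISHED):
        contents[ty] = time.time()
    return contents

times = {}
record(times, TS_CREATED)
record(times, TS_STARTED)
record(times, "bogus")      # unknown types are silently ignored
print(sorted(times))        # ['created', 'started']
```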
\ No newline at end of file
diff --git a/docs/html/_modules/pylorax/api/toml.html b/docs/html/_modules/pylorax/api/toml.html
deleted file mode 100644
index e391ab1a..00000000
--- a/docs/html/_modules/pylorax/api/toml.html
+++ /dev/null
@@ -1,242 +0,0 @@
-pylorax.api.toml — Lorax 35.0 documentation

Source code for pylorax.api.toml

-#
-# Copyright (C) 2019  Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-
-import toml
-
-
-class TomlError(toml.TomlDecodeError):
-    pass
-
-def loads(s):
-    if isinstance(s, bytes):
-        s = s.decode('utf-8')
-    try:
-        return toml.loads(s)
-    except toml.TomlDecodeError as e:
-        raise TomlError(e.msg, e.doc, e.pos)
-
-def dumps(o):
-    # strip the result, because `toml.dumps` adds a lot of newlines
-    return toml.dumps(o, encoder=toml.TomlEncoder(dict)).strip()
-
-def load(file):
-    try:
-        return toml.load(file)
-    except toml.TomlDecodeError as e:
-        raise TomlError(e.msg, e.doc, e.pos)
-
-def dump(o, file):
-    return toml.dump(o, file)
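The point of this module is the wrap-and-reraise pattern: callers catch one module-local `TomlError` instead of the third-party parser's exception type. A sketch of the same pattern, illustrated with `int` parsing so it is self-contained (the `ParseError`/`parse_int` names are made up for the example):

```python
class ParseError(ValueError):
    """Module-local error type, standing in for TomlError above"""

def parse_int(s):
    # Same wrap-and-reraise shape as loads()/load() above
    try:
        return int(s)
    except ValueError as e:
        raise ParseError(str(e))

try:
    parse_int("not-a-number")
except ParseError as e:
    caught = str(e)

print(caught)
```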
\ No newline at end of file
diff --git a/docs/html/_modules/pylorax/api/utils.html b/docs/html/_modules/pylorax/api/utils.html
deleted file mode 100644
index e968ae30..00000000
--- a/docs/html/_modules/pylorax/api/utils.html
+++ /dev/null
@@ -1,249 +0,0 @@
-pylorax.api.utils — Lorax 35.0 documentation

Source code for pylorax.api.utils

-#
-# Copyright (C) 2019  Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-""" API utility functions
-"""
-from pylorax.api.recipes import RecipeError, RecipeFileError, read_recipe_commit
-
-
-def take_limits(iterable, offset, limit):
-    """ Apply offset and limit to an iterable object
-
-    :param iterable: The object to limit
-    :type iterable: iter
-    :param offset: The number of items to skip
-    :type offset: int
-    :param limit: The total number of items to return
-    :type limit: int
-    :returns: A subset of the iterable
-    """
-    return iterable[offset:][:limit]
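The slicing above skips `offset` items and then caps the result at `limit`, which is how the list routes below paginate. A self-contained demonstration:

```python
def take_limits(iterable, offset, limit):
    # Skip `offset` items, then return at most `limit` of the rest
    return iterable[offset:][:limit]

blueprints = ["atlas", "development", "glusterfs", "http-server", "jboss", "kubernetes"]
page = take_limits(blueprints, 2, 3)
print(page)  # ['glusterfs', 'http-server', 'jboss']
```

Out-of-range offsets simply produce an empty list rather than raising, which keeps the API's pagination forgiving.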
- -
-def blueprint_exists(api, branch, blueprint_name):
-    """Return True if the blueprint exists
-
-    :param api: flask object
-    :type api: Flask
-    :param branch: Branch name
-    :type branch: str
-    :param blueprint_name: Blueprint name to read
-    :type blueprint_name: str
-    :returns: True if the blueprint exists on the branch, False otherwise
-    :rtype: bool
-    """
-    try:
-        with api.config["GITLOCK"].lock:
-            read_recipe_commit(api.config["GITLOCK"].repo, branch, blueprint_name)
-
-        return True
-    except (RecipeError, RecipeFileError):
-        return False
\ No newline at end of file
diff --git a/docs/html/_modules/pylorax/api/v0.html b/docs/html/_modules/pylorax/api/v0.html
deleted file mode 100644
index 8f62fb54..00000000
--- a/docs/html/_modules/pylorax/api/v0.html
+++ /dev/null
@@ -1,2197 +0,0 @@
-pylorax.api.v0 — Lorax 35.0 documentation

Source code for pylorax.api.v0

-#
-# Copyright (C) 2017-2019  Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-""" Setup v0 of the API server
-
-v0_api() must be called to setup the API routes for Flask
-
-Status Responses
-----------------
-
-Some requests only return a status/error response.
-
-  The response will be a status response with `status` set to true, or an
-  error response with it set to false and an error message included.
-
-  Example response::
-
-      {
-        "status": true
-      }
-
-  Error response::
-
-      {
-        "errors": ["ggit-error: Failed to remove entry. File isn't in the tree - jboss.toml (-1)"],
-        "status": false
-      }
-
-API Routes
-----------
-
-All of the blueprints routes support the optional `branch` argument. If it is not
-used then the API will use the `master` branch for blueprints. If you want to create
-a new branch use the `new` or `workspace` routes with ?branch=<branch-name> to
-store the new blueprint on the new branch.
-"""
-
-import logging
-log = logging.getLogger("lorax-composer")
-
-import os
-from flask import jsonify, request, Response, send_file
-from flask import current_app as api
-
-from pylorax.sysutils import joinpaths
-from pylorax.api.checkparams import checkparams
-from pylorax.api.compose import start_build, compose_types
-from pylorax.api.errors import *                               # pylint: disable=wildcard-import,unused-wildcard-import
-from pylorax.api.flask_blueprint import BlueprintSkip
-from pylorax.api.projects import projects_list, projects_info, projects_depsolve
-from pylorax.api.projects import modules_list, modules_info, ProjectsError, repo_to_source
-from pylorax.api.projects import get_repo_sources, delete_repo_source, new_repo_source
-from pylorax.api.queue import queue_status, build_status, uuid_delete, uuid_status, uuid_info
-from pylorax.api.queue import uuid_tar, uuid_image, uuid_cancel, uuid_log
-from pylorax.api.recipes import list_branch_files, read_recipe_commit, recipe_filename, list_commits
-from pylorax.api.recipes import recipe_from_dict, recipe_from_toml, commit_recipe, delete_recipe, revert_recipe
-from pylorax.api.recipes import tag_recipe_commit, recipe_diff, RecipeFileError
-from pylorax.api.regexes import VALID_API_STRING, VALID_BLUEPRINT_NAME
-import pylorax.api.toml as toml
-from pylorax.api.utils import take_limits, blueprint_exists
-from pylorax.api.workspace import workspace_read, workspace_write, workspace_delete, workspace_exists
-
-# The API functions don't actually get called by any code here
-# pylint: disable=unused-variable
-
-# Create the v0 routes Blueprint with skip_routes support
-v0_api = BlueprintSkip("v0_routes", __name__)
-
-
-@v0_api.route("/blueprints/list")
-def v0_blueprints_list():
-    """List the available blueprints on a branch.
-
-    **/api/v0/blueprints/list**
-
-    List the available blueprints::
-
-        { "limit": 20,
-          "offset": 0,
-          "blueprints": [
-            "atlas",
-            "development",
-            "glusterfs",
-            "http-server",
-            "jboss",
-            "kubernetes" ],
-          "total": 6 }
-    """
-    branch = request.args.get("branch", "master")
-    if VALID_API_STRING.match(branch) is None:
-        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in branch argument"}]), 400
-
-    try:
-        limit = int(request.args.get("limit", "20"))
-        offset = int(request.args.get("offset", "0"))
-    except ValueError as e:
-        return jsonify(status=False, errors=[{"id": BAD_LIMIT_OR_OFFSET, "msg": str(e)}]), 400
-
-    with api.config["GITLOCK"].lock:
-        blueprints = [f[:-5] for f in list_branch_files(api.config["GITLOCK"].repo, branch)]
-    limited_blueprints = take_limits(blueprints, offset, limit)
-    return jsonify(blueprints=limited_blueprints, limit=limit, offset=offset, total=len(blueprints))
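The limit/offset handling above turns bad query values into a 400 error response instead of raising. A Flask-free sketch of that parsing step (`parse_limits` and the `"BadLimitOrOffset"` label are illustrative; the real code uses the `BAD_LIMIT_OR_OFFSET` error id):

```python
def parse_limits(args):
    # Convert the limit/offset query strings to ints, surfacing bad
    # values as an error dict instead of an exception
    try:
        return int(args.get("limit", "20")), int(args.get("offset", "0")), None
    except ValueError as e:
        return None, None, {"id": "BadLimitOrOffset", "msg": str(e)}

limit, offset, err = parse_limits({"limit": "5"})
print(limit, offset, err)  # 5 0 None

_, _, err = parse_limits({"offset": "ten"})
print(err["id"])  # BadLimitOrOffset
```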
- -
-@v0_api.route("/blueprints/info", defaults={'blueprint_names': ""})
-@v0_api.route("/blueprints/info/<blueprint_names>")
-@checkparams([("blueprint_names", "", "no blueprint names given")])
-def v0_blueprints_info(blueprint_names):
-    """Return the contents of the blueprint, or a list of blueprints
-
-    **/api/v0/blueprints/info/<blueprint_names>[?format=<json|toml>]**
-
-    Return the JSON representation of the blueprint. This includes 3 top level
-    objects: `changes`, which lists whether or not the workspace differs from the
-    most recent commit; `blueprints`, which holds the JSON representation of the
-    blueprint; and `errors`, which lists any errors, like non-existent blueprints.
-
-    By default the response is JSON, but if `?format=toml` is included in the URL's
-    arguments it will return the response as the blueprint's raw TOML content.
-    *Unless* there is an error, which will only return a 400 and a standard error
-    `Status Responses`_.
-
-    If there is an error when JSON is requested the successful blueprints and the
-    errors will both be returned.
-
-    Example of json response::
-
-        {
-          "changes": [
-            {
-              "changed": false,
-              "name": "glusterfs"
-            }
-          ],
-          "errors": [],
-          "blueprints": [
-            {
-              "description": "An example GlusterFS server with samba",
-              "modules": [
-                {
-                  "name": "glusterfs",
-                  "version": "3.7.*"
-                },
-                {
-                  "name": "glusterfs-cli",
-                  "version": "3.7.*"
-                }
-              ],
-              "name": "glusterfs",
-              "packages": [
-                {
-                  "name": "2ping",
-                  "version": "3.2.1"
-                },
-                {
-                  "name": "samba",
-                  "version": "4.2.*"
-                }
-              ],
-              "version": "0.0.6"
-            }
-          ]
-        }
-
-    Error example::
-
-        {
-          "changes": [],
-          "errors": ["ggit-error: the path 'missing.toml' does not exist in the given tree (-3)"],
-          "blueprints": []
-        }
-    """
-    if any(VALID_BLUEPRINT_NAME.match(blueprint_name) is None for blueprint_name in blueprint_names.split(',')):
-        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400
-
-    branch = request.args.get("branch", "master")
-    if VALID_API_STRING.match(branch) is None:
-        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in branch argument"}]), 400
-
-    out_fmt = request.args.get("format", "json")
-    if VALID_API_STRING.match(out_fmt) is None:
-        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in format argument"}]), 400
-
-    blueprints = []
-    changes = []
-    errors = []
-    for blueprint_name in [n.strip() for n in blueprint_names.split(",")]:
-        exceptions = []
-        # Get the workspace version (if it exists)
-        try:
-            with api.config["GITLOCK"].lock:
-                ws_blueprint = workspace_read(api.config["GITLOCK"].repo, branch, blueprint_name)
-        except Exception as e:
-            ws_blueprint = None
-            exceptions.append(str(e))
-            log.error("(v0_blueprints_info) %s", str(e))
-
-        # Get the git version (if it exists)
-        try:
-            with api.config["GITLOCK"].lock:
-                git_blueprint = read_recipe_commit(api.config["GITLOCK"].repo, branch, blueprint_name)
-        except RecipeFileError as e:
-            # Adding an exception would be redundant, skip it
-            git_blueprint = None
-            log.error("(v0_blueprints_info) %s", str(e))
-        except Exception as e:
-            git_blueprint = None
-            exceptions.append(str(e))
-            log.error("(v0_blueprints_info) %s", str(e))
-
-        if not ws_blueprint and not git_blueprint:
-            # Neither blueprint, return an error
-            errors.append({"id": UNKNOWN_BLUEPRINT, "msg": "%s: %s" % (blueprint_name, ", ".join(exceptions))})
-        elif ws_blueprint and not git_blueprint:
-            # No git blueprint, return the workspace blueprint
-            changes.append({"name":blueprint_name, "changed":True})
-            blueprints.append(ws_blueprint)
-        elif not ws_blueprint and git_blueprint:
-            # No workspace blueprint, no change, return the git blueprint
-            changes.append({"name":blueprint_name, "changed":False})
-            blueprints.append(git_blueprint)
-        else:
-            # Both exist, maybe changed, return the workspace blueprint
-            changes.append({"name":blueprint_name, "changed":ws_blueprint != git_blueprint})
-            blueprints.append(ws_blueprint)
-
-    # Sort all the results by case-insensitive blueprint name
-    changes = sorted(changes, key=lambda c: c["name"].lower())
-    blueprints = sorted(blueprints, key=lambda r: r["name"].lower())
-
-    if out_fmt == "toml":
-        if errors:
-            # If there are errors they need to be reported, use JSON and 400 for this
-            return jsonify(status=False, errors=errors), 400
-        else:
-            # With TOML output we just want to dump the raw blueprint, skipping the rest.
-            return "\n\n".join([r.toml() for r in blueprints])
-    else:
-        return jsonify(changes=changes, blueprints=blueprints, errors=errors)
- -
-@v0_api.route("/blueprints/changes", defaults={'blueprint_names': ""})
-@v0_api.route("/blueprints/changes/<blueprint_names>")
-@checkparams([("blueprint_names", "", "no blueprint names given")])
-def v0_blueprints_changes(blueprint_names):
-    """Return the changes to a blueprint or list of blueprints
-
-    **/api/v0/blueprints/changes/<blueprint_names>[?offset=0&limit=20]**
-
-    Return the commits to a blueprint. By default it returns the first 20 commits;
-    this can be changed by passing `offset` and/or `limit`. The response will include
-    the commit hash, summary, timestamp, and optionally the revision number. The
-    commit hash can be passed to `/api/v0/blueprints/diff/` to retrieve the exact
-    changes.
-
-    Example::
-
-        {
-          "errors": [],
-          "limit": 20,
-          "offset": 0,
-          "blueprints": [
-            {
-              "changes": [
-                {
-                  "commit": "e083921a7ed1cf2eec91ad12b9ad1e70ef3470be",
-                  "message": "blueprint glusterfs, version 0.0.6 saved.",
-                  "revision": null,
-                  "timestamp": "2017-11-23T00:18:13Z"
-                },
-                {
-                  "commit": "cee5f4c20fc33ea4d54bfecf56f4ad41ad15f4f3",
-                  "message": "blueprint glusterfs, version 0.0.5 saved.",
-                  "revision": null,
-                  "timestamp": "2017-11-11T01:00:28Z"
-                },
-                {
-                  "commit": "29b492f26ed35d80800b536623bafc51e2f0eff2",
-                  "message": "blueprint glusterfs, version 0.0.4 saved.",
-                  "revision": null,
-                  "timestamp": "2017-11-11T00:28:30Z"
-                },
-                {
-                  "commit": "03374adbf080fe34f5c6c29f2e49cc2b86958bf2",
-                  "message": "blueprint glusterfs, version 0.0.3 saved.",
-                  "revision": null,
-                  "timestamp": "2017-11-10T23:15:52Z"
-                },
-                {
-                  "commit": "0e08ecbb708675bfabc82952599a1712a843779d",
-                  "message": "blueprint glusterfs, version 0.0.2 saved.",
-                  "revision": null,
-                  "timestamp": "2017-11-10T23:14:56Z"
-                },
-                {
-                  "commit": "3e11eb87a63d289662cba4b1804a0947a6843379",
-                  "message": "blueprint glusterfs, version 0.0.1 saved.",
-                  "revision": null,
-                  "timestamp": "2017-11-08T00:02:47Z"
-                }
-              ],
-              "name": "glusterfs",
-              "total": 6
-            }
-          ]
-        }
-    """
-    if any(VALID_BLUEPRINT_NAME.match(blueprint_name) is None for blueprint_name in blueprint_names.split(',')):
-        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400
-
-    branch = request.args.get("branch", "master")
-    if VALID_API_STRING.match(branch) is None:
-        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in branch argument"}]), 400
-
-    try:
-        limit = int(request.args.get("limit", "20"))
-        offset = int(request.args.get("offset", "0"))
-    except ValueError as e:
-        return jsonify(status=False, errors=[{"id": BAD_LIMIT_OR_OFFSET, "msg": str(e)}]), 400
-
-    blueprints = []
-    errors = []
-    for blueprint_name in [n.strip() for n in blueprint_names.split(",")]:
-        filename = recipe_filename(blueprint_name)
-        try:
-            with api.config["GITLOCK"].lock:
-                commits = list_commits(api.config["GITLOCK"].repo, branch, filename)
-        except Exception as e:
-            errors.append({"id": BLUEPRINTS_ERROR, "msg": "%s: %s" % (blueprint_name, str(e))})
-            log.error("(v0_blueprints_changes) %s", str(e))
-        else:
-            if commits:
-                limited_commits = take_limits(commits, offset, limit)
-                blueprints.append({"name":blueprint_name, "changes":limited_commits, "total":len(commits)})
-            else:
-                # no commits means there is no blueprint in the branch
-                errors.append({"id": UNKNOWN_BLUEPRINT, "msg": "%s" % blueprint_name})
-
-    blueprints = sorted(blueprints, key=lambda r: r["name"].lower())
-
-    return jsonify(blueprints=blueprints, errors=errors, offset=offset, limit=limit)
- -
-@v0_api.route("/blueprints/new", methods=["POST"])
-def v0_blueprints_new():
-    """Commit a new blueprint
-
-    **POST /api/v0/blueprints/new**
-
-    Create a new blueprint, or update an existing blueprint. This supports both JSON and TOML
-    for the blueprint format. The blueprint should be in the body of the request with the
-    `Content-Type` header set to either `application/json` or `text/x-toml`.
-
-    The response will be a status response with `status` set to true, or an
-    error response with it set to false and an error message included.
-    """
-    branch = request.args.get("branch", "master")
-    if VALID_API_STRING.match(branch) is None:
-        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in branch argument"}]), 400
-
-    try:
-        if request.headers['Content-Type'] == "text/x-toml":
-            blueprint = recipe_from_toml(request.data)
-        else:
-            blueprint = recipe_from_dict(request.get_json(cache=False))
-
-        if VALID_BLUEPRINT_NAME.match(blueprint["name"]) is None:
-            return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400
-
-        with api.config["GITLOCK"].lock:
-            commit_recipe(api.config["GITLOCK"].repo, branch, blueprint)
-
-            # Read the blueprint with new version and write it to the workspace
-            blueprint = read_recipe_commit(api.config["GITLOCK"].repo, branch, blueprint["name"])
-            workspace_write(api.config["GITLOCK"].repo, branch, blueprint)
-    except Exception as e:
-        log.error("(v0_blueprints_new) %s", str(e))
-        return jsonify(status=False, errors=[{"id": BLUEPRINTS_ERROR, "msg": str(e)}]), 400
-    else:
-        return jsonify(status=True)
- -
-@v0_api.route("/blueprints/delete", defaults={'blueprint_name': ""}, methods=["DELETE"])
-@v0_api.route("/blueprints/delete/<blueprint_name>", methods=["DELETE"])
-@checkparams([("blueprint_name", "", "no blueprint name given")])
-def v0_blueprints_delete(blueprint_name):
-    """Delete a blueprint from git
-
-    **DELETE /api/v0/blueprints/delete/<blueprint_name>**
-
-    Delete a blueprint. The blueprint is deleted from the branch, and will no longer
-    be listed by the `list` route. A blueprint can be undeleted using the `undo` route
-    to revert to a previous commit. This will also delete the workspace copy of the
-    blueprint.
-
-    The response will be a status response with `status` set to true, or an
-    error response with it set to false and an error message included.
-    """
-    if VALID_BLUEPRINT_NAME.match(blueprint_name) is None:
-        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400
-
-    branch = request.args.get("branch", "master")
-    if VALID_API_STRING.match(branch) is None:
-        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in branch argument"}]), 400
-
-    try:
-        with api.config["GITLOCK"].lock:
-            workspace_delete(api.config["GITLOCK"].repo, branch, blueprint_name)
-            delete_recipe(api.config["GITLOCK"].repo, branch, blueprint_name)
-    except Exception as e:
-        log.error("(v0_blueprints_delete) %s", str(e))
-        return jsonify(status=False, errors=[{"id": BLUEPRINTS_ERROR, "msg": str(e)}]), 400
-    else:
-        return jsonify(status=True)
- -
-@v0_api.route("/blueprints/workspace", methods=["POST"])
-def v0_blueprints_workspace():
-    """Write a blueprint to the workspace
-
-    **POST /api/v0/blueprints/workspace**
-
-    Write a blueprint to the temporary workspace. This works exactly the same as `new` except
-    that it does not create a commit. JSON and TOML bodies are supported.
-
-    The workspace is meant to be used as a temporary blueprint storage for clients.
-    It will be read by the `info` and `diff` routes if it is different from the
-    most recent commit.
-
-    The response will be a status response with `status` set to true, or an
-    error response with it set to false and an error message included.
-    """
-    branch = request.args.get("branch", "master")
-    if VALID_API_STRING.match(branch) is None:
-        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in branch argument"}]), 400
-
-    try:
-        if request.headers['Content-Type'] == "text/x-toml":
-            blueprint = recipe_from_toml(request.data)
-        else:
-            blueprint = recipe_from_dict(request.get_json(cache=False))
-
-        if VALID_BLUEPRINT_NAME.match(blueprint["name"]) is None:
-            return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400
-
-        with api.config["GITLOCK"].lock:
-            workspace_write(api.config["GITLOCK"].repo, branch, blueprint)
-    except Exception as e:
-        log.error("(v0_blueprints_workspace) %s", str(e))
-        return jsonify(status=False, errors=[{"id": BLUEPRINTS_ERROR, "msg": str(e)}]), 400
-    else:
-        return jsonify(status=True)
- -
-@v0_api.route("/blueprints/workspace", defaults={'blueprint_name': ""}, methods=["DELETE"])
-@v0_api.route("/blueprints/workspace/<blueprint_name>", methods=["DELETE"])
-@checkparams([("blueprint_name", "", "no blueprint name given")])
-def v0_blueprints_delete_workspace(blueprint_name):
-    """Delete a blueprint from the workspace
-
-    **DELETE /api/v0/blueprints/workspace/<blueprint_name>**
-
-    Remove the temporary workspace copy of a blueprint. The `info` route will now
-    return the most recent commit of the blueprint. Any changes that were in the
-    workspace will be lost.
-
-    The response will be a status response with `status` set to true, or an
-    error response with it set to false and an error message included.
-    """
-    if VALID_BLUEPRINT_NAME.match(blueprint_name) is None:
-        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400
-
-    branch = request.args.get("branch", "master")
-    if VALID_API_STRING.match(branch) is None:
-        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in branch argument"}]), 400
-
-    try:
-        with api.config["GITLOCK"].lock:
-            if not workspace_exists(api.config["GITLOCK"].repo, branch, blueprint_name):
-                raise Exception("Unknown blueprint: %s" % blueprint_name)
-
-            workspace_delete(api.config["GITLOCK"].repo, branch, blueprint_name)
-    except Exception as e:
-        log.error("(v0_blueprints_delete_workspace) %s", str(e))
-        return jsonify(status=False, errors=[{"id": BLUEPRINTS_ERROR, "msg": str(e)}]), 400
-    else:
-        return jsonify(status=True)
- -
-@v0_api.route("/blueprints/undo", defaults={'blueprint_name': "", 'commit': ""}, methods=["POST"])
-@v0_api.route("/blueprints/undo/<blueprint_name>", defaults={'commit': ""}, methods=["POST"])
-@v0_api.route("/blueprints/undo/<blueprint_name>/<commit>", methods=["POST"])
-@checkparams([("blueprint_name", "", "no blueprint name given"),
-              ("commit", "", "no commit ID given")])
-def v0_blueprints_undo(blueprint_name, commit):
-    """Undo changes to a blueprint by reverting to a previous commit.
-
-    **POST /api/v0/blueprints/undo/<blueprint_name>/<commit>**
-
-    This will revert the blueprint to a previous commit. The commit hash from the `changes`
-    route can be used in this request.
-
-    The response will be a status response with `status` set to true, or an
-    error response with it set to false and an error message included.
-    """
-    if VALID_BLUEPRINT_NAME.match(blueprint_name) is None:
-        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400
-
-    if VALID_BLUEPRINT_NAME.match(commit) is None:
-        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400
-
-    branch = request.args.get("branch", "master")
-    if VALID_API_STRING.match(branch) is None:
-        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in branch argument"}]), 400
-
-    try:
-        with api.config["GITLOCK"].lock:
-            revert_recipe(api.config["GITLOCK"].repo, branch, blueprint_name, commit)
-
-            # Read the new recipe and write it to the workspace
-            blueprint = read_recipe_commit(api.config["GITLOCK"].repo, branch, blueprint_name)
-            workspace_write(api.config["GITLOCK"].repo, branch, blueprint)
-    except Exception as e:
-        log.error("(v0_blueprints_undo) %s", str(e))
-        return jsonify(status=False, errors=[{"id": UNKNOWN_COMMIT, "msg": str(e)}]), 400
-    else:
-        return jsonify(status=True)
- -
-@v0_api.route("/blueprints/tag", defaults={'blueprint_name': ""}, methods=["POST"])
-@v0_api.route("/blueprints/tag/<blueprint_name>", methods=["POST"])
-@checkparams([("blueprint_name", "", "no blueprint name given")])
-def v0_blueprints_tag(blueprint_name):
-    """Tag a blueprint's latest blueprint commit as a 'revision'
-
-    **POST /api/v0/blueprints/tag/<blueprint_name>**
-
-    Tag a blueprint as a new release. This uses git tags with a special format.
-    `refs/tags/<branch>/<filename>/r<revision>`. Only the most recent blueprint commit
-    can be tagged. Revisions start at 1 and increment for each new tag
-    (per-blueprint). If the commit has already been tagged it will return false.
-
-    The response will be a status response with `status` set to true, or an
-    error response with it set to false and an error message included.
-    """
-    if VALID_BLUEPRINT_NAME.match(blueprint_name) is None:
-        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400
-
-    branch = request.args.get("branch", "master")
-    if VALID_API_STRING.match(branch) is None:
-        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in branch argument"}]), 400
-
-    try:
-        with api.config["GITLOCK"].lock:
-            tag_recipe_commit(api.config["GITLOCK"].repo, branch, blueprint_name)
-    except RecipeFileError as e:
-        log.error("(v0_blueprints_tag) %s", str(e))
-        return jsonify(status=False, errors=[{"id": UNKNOWN_BLUEPRINT, "msg": str(e)}]), 400
-    except Exception as e:
-        log.error("(v0_blueprints_tag) %s", str(e))
-        return jsonify(status=False, errors=[{"id": BLUEPRINTS_ERROR, "msg": str(e)}]), 400
-    else:
-        return jsonify(status=True)
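The tag format described in the docstring can be built up mechanically; a small sketch (the `tag_ref` helper is hypothetical, illustrating the `refs/tags/<branch>/<filename>/r<revision>` layout, not the actual tagging code):

```python
def tag_ref(branch, filename, revision):
    # Build the special tag name format used for blueprint revisions
    return "refs/tags/%s/%s/r%d" % (branch, filename, revision)

print(tag_ref("master", "http-server.toml", 1))  # refs/tags/master/http-server.toml/r1
```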
- -
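The per-blueprint revision numbering described in the docstring can be sketched as a pure function. `next_revision_tag` is a hypothetical helper, not part of the API; it only illustrates how the `refs/tags/<branch>/<filename>/r<revision>` format increments independently for each blueprint:

```python
def next_revision_tag(branch, filename, existing_tags):
    """Compute the next per-blueprint revision tag.

    Tags follow refs/tags/<branch>/<filename>/r<revision>; revisions start
    at 1 and increment independently for each blueprint.
    """
    prefix = "refs/tags/%s/%s/r" % (branch, filename)
    revs = [int(t[len(prefix):]) for t in existing_tags if t.startswith(prefix)]
    return prefix + str(max(revs, default=0) + 1)

print(next_revision_tag("master", "http-server.toml", []))
# refs/tags/master/http-server.toml/r1
print(next_revision_tag("master", "http-server.toml",
                        ["refs/tags/master/http-server.toml/r1",
                         "refs/tags/master/other.toml/r7"]))
# refs/tags/master/http-server.toml/r2
```

Note how `other.toml`'s `r7` tag does not affect `http-server.toml`'s next revision.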
@v0_api.route("/blueprints/diff", defaults={'blueprint_name': "", 'from_commit': "", 'to_commit': ""})
@v0_api.route("/blueprints/diff/<blueprint_name>", defaults={'from_commit': "", 'to_commit': ""})
@v0_api.route("/blueprints/diff/<blueprint_name>/<from_commit>", defaults={'to_commit': ""})
@v0_api.route("/blueprints/diff/<blueprint_name>/<from_commit>/<to_commit>")
@checkparams([("blueprint_name", "", "no blueprint name given"),
              ("from_commit", "", "no from commit ID given"),
              ("to_commit", "", "no to commit ID given")])
def v0_blueprints_diff(blueprint_name, from_commit, to_commit):
    """Return the differences between two commits of a blueprint

    **/api/v0/blueprints/diff/<blueprint_name>/<from_commit>/<to_commit>**

    Return the differences between two commits, or the workspace. The commit hash
    from the `changes` response can be used here, or several special strings:

    - NEWEST will select the newest git commit. This works for `from_commit` or `to_commit`
    - WORKSPACE will select the workspace copy. This can only be used in `to_commit`

    eg. `/api/v0/blueprints/diff/glusterfs/NEWEST/WORKSPACE` will return the differences
    between the most recent git commit and the contents of the workspace.

    Each entry in the response's diff object contains the old blueprint value and the new one.
    If old is null and new is set, then it was added.
    If new is null and old is set, then it was removed.
    If both are set, then it was changed.

    The old/new entries will have the name of the blueprint field that was changed. This
    can be one of: Name, Description, Version, Module, or Package.
    The contents for these will be the old/new values for them.

    In the example below the version was changed and the ping package was added.

    Example::

        {
            "diff": [
                {
                    "new": {
                        "Version": "0.0.6"
                    },
                    "old": {
                        "Version": "0.0.5"
                    }
                },
                {
                    "new": {
                        "Package": {
                            "name": "ping",
                            "version": "3.2.1"
                        }
                    },
                    "old": null
                }
            ]
        }
    """
    for s in [blueprint_name, from_commit, to_commit]:
        if VALID_API_STRING.match(s) is None:
            return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400

    branch = request.args.get("branch", "master")
    if VALID_API_STRING.match(branch) is None:
        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in branch argument"}]), 400

    if not blueprint_exists(api, branch, blueprint_name):
        return jsonify(status=False, errors=[{"id": UNKNOWN_BLUEPRINT, "msg": "Unknown blueprint name: %s" % blueprint_name}])

    try:
        if from_commit == "NEWEST":
            with api.config["GITLOCK"].lock:
                old_blueprint = read_recipe_commit(api.config["GITLOCK"].repo, branch, blueprint_name)
        else:
            with api.config["GITLOCK"].lock:
                old_blueprint = read_recipe_commit(api.config["GITLOCK"].repo, branch, blueprint_name, from_commit)
    except Exception as e:
        log.error("(v0_blueprints_diff) %s", str(e))
        return jsonify(status=False, errors=[{"id": UNKNOWN_COMMIT, "msg": str(e)}]), 400

    try:
        if to_commit == "WORKSPACE":
            with api.config["GITLOCK"].lock:
                new_blueprint = workspace_read(api.config["GITLOCK"].repo, branch, blueprint_name)
            # If there is no workspace, use the newest commit instead
            if not new_blueprint:
                with api.config["GITLOCK"].lock:
                    new_blueprint = read_recipe_commit(api.config["GITLOCK"].repo, branch, blueprint_name)
        elif to_commit == "NEWEST":
            with api.config["GITLOCK"].lock:
                new_blueprint = read_recipe_commit(api.config["GITLOCK"].repo, branch, blueprint_name)
        else:
            with api.config["GITLOCK"].lock:
                new_blueprint = read_recipe_commit(api.config["GITLOCK"].repo, branch, blueprint_name, to_commit)
    except Exception as e:
        log.error("(v0_blueprints_diff) %s", str(e))
        return jsonify(status=False, errors=[{"id": UNKNOWN_COMMIT, "msg": str(e)}]), 400

    diff = recipe_diff(old_blueprint, new_blueprint)
    return jsonify(diff=diff)

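The old/new null convention in the diff response can be decoded mechanically. This is a minimal client-side sketch (the function name is ours, not part of the API) showing how a consumer would label each entry:

```python
def classify_diff_entry(entry):
    """Label one diff entry using the old/new null convention:
    old null + new set -> added; new null + old set -> removed; both -> changed."""
    if entry["old"] is None and entry["new"] is not None:
        return "added"
    if entry["new"] is None and entry["old"] is not None:
        return "removed"
    return "changed"

# The two entries from the docstring example above
diff = [
    {"old": {"Version": "0.0.5"}, "new": {"Version": "0.0.6"}},
    {"old": None, "new": {"Package": {"name": "ping", "version": "3.2.1"}}},
]
print([classify_diff_entry(e) for e in diff])  # ['changed', 'added']
```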
@v0_api.route("/blueprints/freeze", defaults={'blueprint_names': ""})
@v0_api.route("/blueprints/freeze/<blueprint_names>")
@checkparams([("blueprint_names", "", "no blueprint names given")])
def v0_blueprints_freeze(blueprint_names):
    """Return the blueprint with the exact modules and packages selected by depsolve

    **/api/v0/blueprints/freeze/<blueprint_names>**

    Return a JSON representation of the blueprint with the package and module versions set
    to the exact versions chosen by depsolving the blueprint.

    Example::

        {
            "errors": [],
            "blueprints": [
                {
                    "blueprint": {
                        "description": "An example GlusterFS server with samba",
                        "modules": [
                            {
                                "name": "glusterfs",
                                "version": "3.8.4-18.4.el7.x86_64"
                            },
                            {
                                "name": "glusterfs-cli",
                                "version": "3.8.4-18.4.el7.x86_64"
                            }
                        ],
                        "name": "glusterfs",
                        "packages": [
                            {
                                "name": "ping",
                                "version": "2:3.2.1-2.el7.noarch"
                            },
                            {
                                "name": "samba",
                                "version": "4.6.2-8.el7.x86_64"
                            }
                        ],
                        "version": "0.0.6"
                    }
                }
            ]
        }
    """
    if any(VALID_BLUEPRINT_NAME.match(blueprint_name) is None for blueprint_name in blueprint_names.split(',')):
        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400

    branch = request.args.get("branch", "master")
    if VALID_API_STRING.match(branch) is None:
        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in branch argument"}]), 400

    out_fmt = request.args.get("format", "json")
    if VALID_API_STRING.match(out_fmt) is None:
        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in format argument"}]), 400

    blueprints = []
    errors = []
    for blueprint_name in [n.strip() for n in sorted(blueprint_names.split(","), key=lambda n: n.lower())]:
        # Get the workspace version of the blueprint (if it exists)
        blueprint = None
        try:
            with api.config["GITLOCK"].lock:
                blueprint = workspace_read(api.config["GITLOCK"].repo, branch, blueprint_name)
        except Exception:
            pass

        if not blueprint:
            # No workspace version, get the git version (if it exists)
            try:
                with api.config["GITLOCK"].lock:
                    blueprint = read_recipe_commit(api.config["GITLOCK"].repo, branch, blueprint_name)
            except RecipeFileError as e:
                # adding an error here would be redundant, skip it
                log.error("(v0_blueprints_freeze) %s", str(e))
            except Exception as e:
                errors.append({"id": BLUEPRINTS_ERROR, "msg": "%s: %s" % (blueprint_name, str(e))})
                log.error("(v0_blueprints_freeze) %s", str(e))

        # No blueprint found, skip it.
        if not blueprint:
            errors.append({"id": UNKNOWN_BLUEPRINT, "msg": "%s: blueprint_not_found" % blueprint_name})
            continue

        # Combine modules and packages and depsolve the list
        # TODO include the version/glob in the depsolving
        module_nver = blueprint.module_nver
        package_nver = blueprint.package_nver
        projects = sorted(set(module_nver+package_nver), key=lambda p: p[0].lower())
        deps = []
        try:
            with api.config["DNFLOCK"].lock:
                deps = projects_depsolve(api.config["DNFLOCK"].dbo, projects, blueprint.group_names)
        except ProjectsError as e:
            errors.append({"id": BLUEPRINTS_ERROR, "msg": "%s: %s" % (blueprint_name, str(e))})
            log.error("(v0_blueprints_freeze) %s", str(e))

        blueprints.append({"blueprint": blueprint.freeze(deps)})

    if out_fmt == "toml":
        # With TOML output we just want to dump the raw blueprint, skipping the rest.
        return "\n\n".join([e["blueprint"].toml() for e in blueprints])
    else:
        return jsonify(blueprints=blueprints, errors=errors)

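Both `freeze` and `depsolve` combine a blueprint's module and package `(name, version)` pairs into one deduplicated, case-insensitively sorted project list before depsolving. A self-contained sketch of that combining step:

```python
def combined_projects(module_nver, package_nver):
    """Deduplicate (name, version) pairs and sort case-insensitively by name,
    mirroring: sorted(set(module_nver + package_nver), key=lambda p: p[0].lower())"""
    return sorted(set(module_nver + package_nver), key=lambda p: p[0].lower())

mods = [("glusterfs", "3.8.4-18.4.el7.x86_64")]
pkgs = [("Samba", "4.6.2-8.el7.x86_64"),
        ("glusterfs", "3.8.4-18.4.el7.x86_64")]   # duplicate of the module entry
print(combined_projects(mods, pkgs))
# [('glusterfs', '3.8.4-18.4.el7.x86_64'), ('Samba', '4.6.2-8.el7.x86_64')]
```

The duplicate `glusterfs` pair collapses to one entry, and `Samba` sorts after it despite the capital letter because the sort key is lowercased.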
@v0_api.route("/blueprints/depsolve", defaults={'blueprint_names': ""})
@v0_api.route("/blueprints/depsolve/<blueprint_names>")
@checkparams([("blueprint_names", "", "no blueprint names given")])
def v0_blueprints_depsolve(blueprint_names):
    """Return the dependencies for a blueprint

    **/api/v0/blueprints/depsolve/<blueprint_names>**

    Depsolve the blueprint using dnf, return the blueprint used, and the NEVRAs of the packages
    chosen to satisfy the blueprint's requirements. The response will include a list of results,
    with the full dependency list in `dependencies`, the NEVRAs for the blueprint's direct modules
    and packages in `modules`, and any error will be in `errors`.

    Example::

        {
            "errors": [],
            "blueprints": [
                {
                    "dependencies": [
                        {
                            "arch": "noarch",
                            "epoch": "0",
                            "name": "2ping",
                            "release": "2.el7",
                            "version": "3.2.1"
                        },
                        {
                            "arch": "x86_64",
                            "epoch": "0",
                            "name": "acl",
                            "release": "12.el7",
                            "version": "2.2.51"
                        },
                        {
                            "arch": "x86_64",
                            "epoch": "0",
                            "name": "audit-libs",
                            "release": "3.el7",
                            "version": "2.7.6"
                        },
                        {
                            "arch": "x86_64",
                            "epoch": "0",
                            "name": "avahi-libs",
                            "release": "17.el7",
                            "version": "0.6.31"
                        },
                        ...
                    ],
                    "modules": [
                        {
                            "arch": "noarch",
                            "epoch": "0",
                            "name": "2ping",
                            "release": "2.el7",
                            "version": "3.2.1"
                        },
                        {
                            "arch": "x86_64",
                            "epoch": "0",
                            "name": "glusterfs",
                            "release": "18.4.el7",
                            "version": "3.8.4"
                        },
                        ...
                    ],
                    "blueprint": {
                        "description": "An example GlusterFS server with samba",
                        "modules": [
                            {
                                "name": "glusterfs",
                                "version": "3.7.*"
                            },
                            ...
                }
            }
        ]
    }
    """
    if any(VALID_BLUEPRINT_NAME.match(blueprint_name) is None for blueprint_name in blueprint_names.split(',')):
        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400

    branch = request.args.get("branch", "master")
    if VALID_API_STRING.match(branch) is None:
        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in branch argument"}]), 400

    blueprints = []
    errors = []
    for blueprint_name in [n.strip() for n in sorted(blueprint_names.split(","), key=lambda n: n.lower())]:
        # Get the workspace version of the blueprint (if it exists)
        blueprint = None
        try:
            with api.config["GITLOCK"].lock:
                blueprint = workspace_read(api.config["GITLOCK"].repo, branch, blueprint_name)
        except Exception:
            pass

        if not blueprint:
            # No workspace version, get the git version (if it exists)
            try:
                with api.config["GITLOCK"].lock:
                    blueprint = read_recipe_commit(api.config["GITLOCK"].repo, branch, blueprint_name)
            except RecipeFileError as e:
                # adding an error here would be redundant, skip it
                log.error("(v0_blueprints_depsolve) %s", str(e))
            except Exception as e:
                errors.append({"id": BLUEPRINTS_ERROR, "msg": "%s: %s" % (blueprint_name, str(e))})
                log.error("(v0_blueprints_depsolve) %s", str(e))

        # No blueprint found, skip it.
        if not blueprint:
            errors.append({"id": UNKNOWN_BLUEPRINT, "msg": "%s: blueprint not found" % blueprint_name})
            continue

        # Combine modules and packages and depsolve the list
        # TODO include the version/glob in the depsolving
        module_nver = blueprint.module_nver
        package_nver = blueprint.package_nver
        projects = sorted(set(module_nver+package_nver), key=lambda p: p[0].lower())
        deps = []
        try:
            with api.config["DNFLOCK"].lock:
                deps = projects_depsolve(api.config["DNFLOCK"].dbo, projects, blueprint.group_names)
        except ProjectsError as e:
            errors.append({"id": BLUEPRINTS_ERROR, "msg": "%s: %s" % (blueprint_name, str(e))})
            log.error("(v0_blueprints_depsolve) %s", str(e))

        # Get the NEVRA's of the modules and projects, add as "modules"
        modules = []
        for dep in deps:
            if dep["name"] in projects:
                modules.append(dep)
        modules = sorted(modules, key=lambda m: m["name"].lower())

        blueprints.append({"blueprint": blueprint, "dependencies": deps, "modules": modules})

    return jsonify(blueprints=blueprints, errors=errors)

@v0_api.route("/projects/list")
def v0_projects_list():
    """List all of the available projects/packages

    **/api/v0/projects/list[?offset=0&limit=20]**

    List all of the available projects. By default this returns the first 20 items,
    but this can be changed by setting the `offset` and `limit` arguments.

    Example::

        {
            "limit": 20,
            "offset": 0,
            "projects": [
                {
                    "description": "0 A.D. (pronounced \"zero ey-dee\") is a ...",
                    "homepage": "http://play0ad.com",
                    "name": "0ad",
                    "summary": "Cross-Platform RTS Game of Ancient Warfare",
                    "upstream_vcs": "UPSTREAM_VCS"
                },
                ...
            ],
            "total": 21770
        }
    """
    try:
        limit = int(request.args.get("limit", "20"))
        offset = int(request.args.get("offset", "0"))
    except ValueError as e:
        return jsonify(status=False, errors=[{"id": BAD_LIMIT_OR_OFFSET, "msg": str(e)}]), 400

    try:
        with api.config["DNFLOCK"].lock:
            available = projects_list(api.config["DNFLOCK"].dbo)
    except ProjectsError as e:
        log.error("(v0_projects_list) %s", str(e))
        return jsonify(status=False, errors=[{"id": PROJECTS_ERROR, "msg": str(e)}]), 400

    projects = take_limits(available, offset, limit)
    return jsonify(projects=projects, offset=offset, limit=limit, total=len(available))

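`take_limits` is the paging helper shared by the list routes. A plausible sketch of it, assuming it is a plain slice over the already-sorted results (the real implementation lives in the projects module):

```python
def take_limits(items, offset, limit):
    """Paging helper matching the ?offset= and ?limit= query arguments:
    return at most `limit` items starting at `offset`."""
    return items[offset:offset + limit]

available = ["0ad", "0ad-data", "0install", "2048-cli", "2ping"]
print(take_limits(available, 0, 2))   # ['0ad', '0ad-data']
print(take_limits(available, 3, 20))  # ['2048-cli', '2ping'] - limit past the end is safe
```

Note that the route still reports `total=len(available)`, so a client can keep paging until `offset >= total`.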
@v0_api.route("/projects/info", defaults={'project_names': ""})
@v0_api.route("/projects/info/<project_names>")
@checkparams([("project_names", "", "no project names given")])
def v0_projects_info(project_names):
    """Return detailed information about the listed projects

    **/api/v0/projects/info/<project_names>**

    Return information about the comma-separated list of projects. It includes the description
    of the package along with the list of available builds.

    Example::

        {
            "projects": [
                {
                    "builds": [
                        {
                            "arch": "x86_64",
                            "build_config_ref": "BUILD_CONFIG_REF",
                            "build_env_ref": "BUILD_ENV_REF",
                            "build_time": "2017-03-01T08:39:23",
                            "changelog": "- restore incremental backups correctly, files ...",
                            "epoch": "2",
                            "metadata": {},
                            "release": "32.el7",
                            "source": {
                                "license": "GPLv3+",
                                "metadata": {},
                                "source_ref": "SOURCE_REF",
                                "version": "1.26"
                            }
                        }
                    ],
                    "description": "The GNU tar program saves many ...",
                    "homepage": "http://www.gnu.org/software/tar/",
                    "name": "tar",
                    "summary": "A GNU file archiving program",
                    "upstream_vcs": "UPSTREAM_VCS"
                }
            ]
        }
    """
    if VALID_API_STRING.match(project_names) is None:
        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400

    try:
        with api.config["DNFLOCK"].lock:
            projects = projects_info(api.config["DNFLOCK"].dbo, project_names.split(","))
    except ProjectsError as e:
        log.error("(v0_projects_info) %s", str(e))
        return jsonify(status=False, errors=[{"id": PROJECTS_ERROR, "msg": str(e)}]), 400

    if not projects:
        msg = "one of the requested projects does not exist: %s" % project_names
        log.error("(v0_projects_info) %s", msg)
        return jsonify(status=False, errors=[{"id": UNKNOWN_PROJECT, "msg": msg}]), 400

    return jsonify(projects=projects)

@v0_api.route("/projects/depsolve", defaults={'project_names': ""})
@v0_api.route("/projects/depsolve/<project_names>")
@checkparams([("project_names", "", "no project names given")])
def v0_projects_depsolve(project_names):
    """Return the dependencies for the listed projects

    **/api/v0/projects/depsolve/<project_names>**

    Depsolve the comma-separated list of projects and return the list of NEVRAs needed
    to satisfy the request.

    Example::

        {
            "projects": [
                {
                    "arch": "noarch",
                    "epoch": "0",
                    "name": "basesystem",
                    "release": "7.el7",
                    "version": "10.0"
                },
                {
                    "arch": "x86_64",
                    "epoch": "0",
                    "name": "bash",
                    "release": "28.el7",
                    "version": "4.2.46"
                },
                {
                    "arch": "x86_64",
                    "epoch": "0",
                    "name": "filesystem",
                    "release": "21.el7",
                    "version": "3.2"
                },
                ...
            ]
        }
    """
    if VALID_API_STRING.match(project_names) is None:
        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400

    try:
        with api.config["DNFLOCK"].lock:
            deps = projects_depsolve(api.config["DNFLOCK"].dbo, [(n, "*") for n in project_names.split(",")], [])
    except ProjectsError as e:
        log.error("(v0_projects_depsolve) %s", str(e))
        return jsonify(status=False, errors=[{"id": PROJECTS_ERROR, "msg": str(e)}]), 400

    if not deps:
        msg = "one of the requested projects does not exist: %s" % project_names
        log.error("(v0_projects_depsolve) %s", msg)
        return jsonify(status=False, errors=[{"id": UNKNOWN_PROJECT, "msg": msg}]), 400

    return jsonify(projects=deps)

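Each depsolve result carries the NEVRA fields as separate keys. A small client-side sketch (the helper name is ours) showing how one entry maps back to the conventional `name-epoch:version-release.arch` string:

```python
def nevra(dep):
    """Format one depsolve result dict as a NEVRA string."""
    return "%(name)s-%(epoch)s:%(version)s-%(release)s.%(arch)s" % dep

# One entry from the example response above
dep = {"arch": "x86_64", "epoch": "0", "name": "bash",
       "release": "28.el7", "version": "4.2.46"}
print(nevra(dep))  # bash-0:4.2.46-28.el7.x86_64
```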
@v0_api.route("/projects/source/list")
def v0_projects_source_list():
    """Return the list of source names

    **/api/v0/projects/source/list**

    Return the list of repositories used for depsolving and installing packages.

    Example::

        {
            "sources": [
                "fedora",
                "fedora-cisco-openh264",
                "fedora-updates-testing",
                "fedora-updates"
            ]
        }
    """
    with api.config["DNFLOCK"].lock:
        repos = list(api.config["DNFLOCK"].dbo.repos.iter_enabled())
    sources = sorted([r.id for r in repos])
    return jsonify(sources=sources)

@v0_api.route("/projects/source/info", defaults={'source_names': ""})
@v0_api.route("/projects/source/info/<source_names>")
@checkparams([("source_names", "", "no source names given")])
def v0_projects_source_info(source_names):
    """Return detailed info about the list of sources

    **/api/v0/projects/source/info/<source-names>**

    Return information about the comma-separated list of source names, or all of the
    sources if '*' is passed. Note that general globbing is not supported, only '*'.

    Immutable system sources will have the "system" field set to true. User-added sources
    will have it set to false. System sources cannot be changed or deleted.

    Example::

        {
            "errors": [],
            "sources": {
                "fedora": {
                    "check_gpg": true,
                    "check_ssl": true,
                    "gpgkey_urls": [
                        "file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-28-x86_64"
                    ],
                    "name": "fedora",
                    "proxy": "http://proxy.brianlane.com:8123",
                    "system": true,
                    "type": "yum-metalink",
                    "url": "https://mirrors.fedoraproject.org/metalink?repo=fedora-28&arch=x86_64"
                }
            }
        }
    """
    if VALID_API_STRING.match(source_names) is None:
        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400

    out_fmt = request.args.get("format", "json")
    if VALID_API_STRING.match(out_fmt) is None:
        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in format argument"}]), 400

    # Return info on all of the sources
    if source_names == "*":
        with api.config["DNFLOCK"].lock:
            source_names = ",".join(r.id for r in api.config["DNFLOCK"].dbo.repos.iter_enabled())

    sources = {}
    errors = []
    system_sources = get_repo_sources("/etc/yum.repos.d/*.repo")
    for source in source_names.split(","):
        with api.config["DNFLOCK"].lock:
            repo = api.config["DNFLOCK"].dbo.repos.get(source, None)
        if not repo:
            errors.append({"id": UNKNOWN_SOURCE, "msg": "%s is not a valid source" % source})
            continue
        sources[repo.id] = repo_to_source(repo, repo.id in system_sources, api=0)

    if out_fmt == "toml" and not errors:
        # With TOML output we just want to dump the raw sources, skipping the errors
        return toml.dumps(sources)
    elif out_fmt == "toml" and errors:
        # TOML requested, but there was an error
        return jsonify(status=False, errors=errors), 400
    else:
        return jsonify(sources=sources, errors=errors)

@v0_api.route("/projects/source/new", methods=["POST"])
def v0_projects_source_new():
    """Add a new package source, or change an existing one

    **POST /api/v0/projects/source/new**

    Add (or change) a source for use when depsolving blueprints and composing images.

    The ``proxy`` and ``gpgkey_urls`` entries are optional. All of the others are required. The supported
    types for the urls are:

    * ``yum-baseurl`` is a URL to a yum repository.
    * ``yum-mirrorlist`` is a URL for a mirrorlist.
    * ``yum-metalink`` is a URL for a metalink.

    If ``check_ssl`` is true the https certificates must be valid. If they are self-signed you can either set
    this to false, or add your Certificate Authority to the host system.

    If ``check_gpg`` is true the GPG key must either be installed on the host system, or ``gpgkey_urls``
    should point to it.

    You can edit an existing source (other than system sources), by doing a POST
    of the new version of the source. It will overwrite the previous one.

    Example::

        {
            "name": "custom-source-1",
            "url": "https://url/path/to/repository/",
            "type": "yum-baseurl",
            "check_ssl": true,
            "check_gpg": true,
            "gpgkey_urls": [
                "https://url/path/to/gpg-key"
            ]
        }
    """
    if request.headers['Content-Type'] == "text/x-toml":
        source = toml.loads(request.data)
    else:
        source = request.get_json(cache=False)

    system_sources = get_repo_sources("/etc/yum.repos.d/*.repo")
    if source["name"] in system_sources:
        return jsonify(status=False, errors=[{"id": SYSTEM_SOURCE, "msg": "%s is a system source, it cannot be changed." % source["name"]}]), 400

    try:
        # Add it to the RepoDict (NOTE that this isn't explicitly supported by the DNF API)
        with api.config["DNFLOCK"].lock:
            repo_dir = api.config["COMPOSER_CFG"].get("composer", "repo_dir")
            new_repo_source(api.config["DNFLOCK"].dbo, source["name"], source, repo_dir)
    except Exception as e:
        return jsonify(status=False, errors=[{"id": PROJECTS_ERROR, "msg": str(e)}]), 400

    return jsonify(status=True)

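The docstring says `proxy` and `gpgkey_urls` are optional and everything else is required. A client can pre-check a source body before POSTing it; this is a hypothetical helper, not part of the API, and the required-key list is inferred from the docstring and example:

```python
# Required keys inferred from the route's docstring; proxy and
# gpgkey_urls are documented as optional, so they are not listed.
REQUIRED_SOURCE_KEYS = ["name", "url", "type", "check_ssl", "check_gpg"]

def missing_source_fields(source):
    """Return the required keys absent from a new-source request body."""
    return [k for k in REQUIRED_SOURCE_KEYS if k not in source]

src = {"name": "custom-source-1", "url": "https://url/path/to/repository/",
       "type": "yum-baseurl", "check_ssl": True, "check_gpg": True}
print(missing_source_fields(src))           # []
print(missing_source_fields({"name": "x"})) # ['url', 'type', 'check_ssl', 'check_gpg']
```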
@v0_api.route("/projects/source/delete", defaults={'source_name': ""}, methods=["DELETE"])
@v0_api.route("/projects/source/delete/<source_name>", methods=["DELETE"])
@checkparams([("source_name", "", "no source name given")])
def v0_projects_source_delete(source_name):
    """Delete the named source and return a status response

    **DELETE /api/v0/projects/source/delete/<source-name>**

    Delete a user added source. This will fail if a system source is passed to
    it.

    The response will be a status response with `status` set to true, or an
    error response with it set to false and an error message included.
    """
    if VALID_API_STRING.match(source_name) is None:
        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400

    system_sources = get_repo_sources("/etc/yum.repos.d/*.repo")
    if source_name in system_sources:
        return jsonify(status=False, errors=[{"id": SYSTEM_SOURCE, "msg": "%s is a system source, it cannot be deleted." % source_name}]), 400
    share_dir = api.config["COMPOSER_CFG"].get("composer", "repo_dir")
    try:
        # Remove the file entry for the source
        delete_repo_source(joinpaths(share_dir, "*.repo"), source_name)

        # Remove it from the RepoDict (NOTE that this isn't explicitly supported by the DNF API)
        with api.config["DNFLOCK"].lock:
            if source_name in api.config["DNFLOCK"].dbo.repos:
                del api.config["DNFLOCK"].dbo.repos[source_name]
                log.info("Updating repository metadata after removing %s", source_name)
                api.config["DNFLOCK"].dbo.fill_sack(load_system_repo=False)
                api.config["DNFLOCK"].dbo.read_comps()

    except ProjectsError as e:
        log.error("(v0_projects_source_delete) %s", str(e))
        return jsonify(status=False, errors=[{"id": UNKNOWN_SOURCE, "msg": str(e)}]), 400

    return jsonify(status=True)

@v0_api.route("/modules/list")
@v0_api.route("/modules/list/<module_names>")
def v0_modules_list(module_names=None):
    """List available modules, filtering by module_names

    **/api/v0/modules/list[?offset=0&limit=20]**

    Return a list of all of the available modules. This includes the name and the
    group_type, which is always "rpm" for lorax-composer. By default this returns
    the first 20 items. This can be changed by setting the `offset` and `limit`
    arguments.

    Example::

        {
            "limit": 20,
            "modules": [
                {
                    "group_type": "rpm",
                    "name": "0ad"
                },
                {
                    "group_type": "rpm",
                    "name": "0ad-data"
                },
                {
                    "group_type": "rpm",
                    "name": "0install"
                },
                {
                    "group_type": "rpm",
                    "name": "2048-cli"
                },
                ...
            ],
            "total": 21770
        }

    **/api/v0/modules/list/<module_names>[?offset=0&limit=20]**

    Return the list of comma-separated modules. Output is the same as `/modules/list`

    Example::

        {
            "limit": 20,
            "modules": [
                {
                    "group_type": "rpm",
                    "name": "tar"
                }
            ],
            "offset": 0,
            "total": 1
        }
    """
    if module_names and VALID_API_STRING.match(module_names) is None:
        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400

    try:
        limit = int(request.args.get("limit", "20"))
        offset = int(request.args.get("offset", "0"))
    except ValueError as e:
        return jsonify(status=False, errors=[{"id": BAD_LIMIT_OR_OFFSET, "msg": str(e)}]), 400

    if module_names:
        module_names = module_names.split(",")

    try:
        with api.config["DNFLOCK"].lock:
            available = modules_list(api.config["DNFLOCK"].dbo, module_names)
    except ProjectsError as e:
        log.error("(v0_modules_list) %s", str(e))
        return jsonify(status=False, errors=[{"id": MODULES_ERROR, "msg": str(e)}]), 400

    if module_names and not available:
        msg = "one of the requested modules does not exist: %s" % module_names
        log.error("(v0_modules_list) %s", msg)
        return jsonify(status=False, errors=[{"id": UNKNOWN_MODULE, "msg": msg}]), 400

    modules = take_limits(available, offset, limit)
    return jsonify(modules=modules, offset=offset, limit=limit, total=len(available))

@v0_api.route("/modules/info", defaults={'module_names': ""})
@v0_api.route("/modules/info/<module_names>")
@checkparams([("module_names", "", "no module names given")])
def v0_modules_info(module_names):
    """Return detailed information about the listed modules

    **/api/v0/modules/info/<module_names>**

    Return the module's dependencies, and the information about the module.

    Example::

        {
            "modules": [
                {
                    "dependencies": [
                        {
                            "arch": "noarch",
                            "epoch": "0",
                            "name": "basesystem",
                            "release": "7.el7",
                            "version": "10.0"
                        },
                        {
                            "arch": "x86_64",
                            "epoch": "0",
                            "name": "bash",
                            "release": "28.el7",
                            "version": "4.2.46"
                        },
                        ...
                    ],
                    "description": "The GNU tar program saves ...",
                    "homepage": "http://www.gnu.org/software/tar/",
                    "name": "tar",
                    "summary": "A GNU file archiving program",
                    "upstream_vcs": "UPSTREAM_VCS"
                }
            ]
        }
    """
    if VALID_API_STRING.match(module_names) is None:
        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400
    try:
        with api.config["DNFLOCK"].lock:
            modules = modules_info(api.config["DNFLOCK"].dbo, module_names.split(","))
    except ProjectsError as e:
        log.error("(v0_modules_info) %s", str(e))
        return jsonify(status=False, errors=[{"id": MODULES_ERROR, "msg": str(e)}]), 400

    if not modules:
        msg = "one of the requested modules does not exist: %s" % module_names
        log.error("(v0_modules_info) %s", msg)
        return jsonify(status=False, errors=[{"id": UNKNOWN_MODULE, "msg": msg}]), 400

    return jsonify(modules=modules)

@v0_api.route("/compose", methods=["POST"])
def v0_compose_start():
    """Start a compose

    The body of the post should have these fields:
      blueprint_name - The blueprint name from /blueprints/list/
      compose_type   - The type of output to create, from /compose/types
      branch         - Optional, defaults to master, selects the git branch to use for the blueprint.

    **POST /api/v0/compose**

    Start a compose. The content type should be 'application/json' and the body of the POST
    should look like this

    Example::

        {
            "blueprint_name": "http-server",
            "compose_type": "tar",
            "branch": "master"
        }

    Pass it the name of the blueprint, the type of output (from '/api/v0/compose/types'), and the
    blueprint branch to use. 'branch' is optional and will default to master. It will create a new
    build and add it to the queue. It returns the build uuid and a status if it succeeds

    Example::

        {
            "build_id": "e6fa6db4-9c81-4b70-870f-a697ca405cdf",
            "status": true
        }
    """
    # Passing ?test=1 will generate a fake FAILED compose.
    # Passing ?test=2 will generate a fake FINISHED compose.
    try:
        test_mode = int(request.args.get("test", "0"))
    except ValueError:
        test_mode = 0

    compose = request.get_json(cache=False)

    errors = []
    if not compose:
        return jsonify(status=False, errors=[{"id": MISSING_POST, "msg": "Missing POST body"}]), 400

    if "blueprint_name" not in compose:
        errors.append({"id": UNKNOWN_BLUEPRINT, "msg": "No 'blueprint_name' in the JSON request"})
    else:
        blueprint_name = compose["blueprint_name"]

    if "branch" not in compose or not compose["branch"]:
        branch = "master"
    else:
        branch = compose["branch"]

    if "compose_type" not in compose:
        errors.append({"id": BAD_COMPOSE_TYPE, "msg": "No 'compose_type' in the JSON request"})
    else:
        compose_type = compose["compose_type"]

    # Only validate the blueprint name if it was actually included in the request
    if not errors:
        if VALID_BLUEPRINT_NAME.match(blueprint_name) is None:
            errors.append({"id": INVALID_CHARS, "msg": "Invalid characters in API path"})

        if not blueprint_exists(api, branch, blueprint_name):
            errors.append({"id": UNKNOWN_BLUEPRINT, "msg": "Unknown blueprint name: %s" % blueprint_name})

    if errors:
        return jsonify(status=False, errors=errors), 400

    try:
        build_id = start_build(api.config["COMPOSER_CFG"], api.config["DNFLOCK"], api.config["GITLOCK"],
                               branch, blueprint_name, compose_type, test_mode)
    except Exception as e:
        if "Invalid compose type" in str(e):
            return jsonify(status=False, errors=[{"id": BAD_COMPOSE_TYPE, "msg": str(e)}]), 400
        else:
            return jsonify(status=False, errors=[{"id": BUILD_FAILED, "msg": str(e)}]), 400

    return jsonify(status=True, build_id=build_id)

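The field checks the route performs before queueing a build can be sketched as a standalone validator. The function name is ours and the messages mirror the ones in the handler; it only covers the presence checks and the `branch` default, not the blueprint-existence lookup:

```python
def validate_compose_request(compose):
    """Collect the same missing-field errors the route reports, and apply
    the 'branch' default (absent or empty falls back to master)."""
    errors = []
    if "blueprint_name" not in compose:
        errors.append("No 'blueprint_name' in the JSON request")
    if "compose_type" not in compose:
        errors.append("No 'compose_type' in the JSON request")
    branch = compose.get("branch") or "master"
    return errors, branch

errs, branch = validate_compose_request({"blueprint_name": "http-server",
                                         "compose_type": "tar"})
print(errs, branch)  # [] master

errs, branch = validate_compose_request({"branch": ""})
print(errs, branch)  # two missing-field errors, branch falls back to master
```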
@v0_api.route("/compose/types")
def v0_compose_types():
    """Return the list of enabled output types

    (only enabled types are returned)

    **/api/v0/compose/types**

    Returns the list of supported output types that are valid for use with 'POST /api/v0/compose'

    Example::

        {
            "types": [
                {
                    "enabled": true,
                    "name": "tar"
                }
            ]
        }
    """
    share_dir = api.config["COMPOSER_CFG"].get("composer", "share_dir")
    return jsonify(types=[{"name": t, "enabled": e} for t, e in compose_types(share_dir)])

@v0_api.route("/compose/queue")
def v0_compose_queue():
    """Return the status of the new and running queues

    **/api/v0/compose/queue**

    Return the status of the build queue. It includes information about the builds waiting,
    and the build that is running.

    Example::

        {
            "new": [
                {
                    "id": "45502a6d-06e8-48a5-a215-2b4174b3614b",
                    "blueprint": "glusterfs",
                    "queue_status": "WAITING",
                    "job_created": 1517362647.4570868,
                    "version": "0.0.6"
                },
                {
                    "id": "6d292bd0-bec7-4825-8d7d-41ef9c3e4b73",
                    "blueprint": "kubernetes",
                    "queue_status": "WAITING",
                    "job_created": 1517362659.0034983,
                    "version": "0.0.1"
                }
            ],
            "run": [
                {
                    "id": "745712b2-96db-44c0-8014-fe925c35e795",
                    "blueprint": "glusterfs",
                    "queue_status": "RUNNING",
                    "job_created": 1517362633.7965999,
                    "job_started": 1517362633.8001345,
                    "version": "0.0.6"
                }
            ]
        }
    """
    return jsonify(queue_status(api.config["COMPOSER_CFG"], api=0))

@v0_api.route("/compose/finished")
def v0_compose_finished():
    """Return the list of finished composes

    **/api/v0/compose/finished**

    Return the details on all of the finished composes on the system.

    Example::

        {
            "finished": [
                {
                    "id": "70b84195-9817-4b8a-af92-45e380f39894",
                    "blueprint": "glusterfs",
                    "queue_status": "FINISHED",
                    "job_created": 1517351003.8210032,
                    "job_started": 1517351003.8230415,
                    "job_finished": 1517359234.1003145,
                    "version": "0.0.6"
                },
                {
                    "id": "e695affd-397f-4af9-9022-add2636e7459",
                    "blueprint": "glusterfs",
                    "queue_status": "FINISHED",
                    "job_created": 1517362289.7193348,
                    "job_started": 1517362289.9751132,
                    "job_finished": 1517363500.1234567,
                    "version": "0.0.6"
                }
            ]
        }
    """
    return jsonify(finished=build_status(api.config["COMPOSER_CFG"], "FINISHED", api=0))

@v0_api.route("/compose/failed")
def v0_compose_failed():
    """Return the list of failed composes

    **/api/v0/compose/failed**

    Return the details on all of the failed composes on the system.

    Example::

        {
            "failed": [
                {
                    "id": "8c8435ef-d6bd-4c68-9bf1-a2ef832e6b1a",
                    "blueprint": "http-server",
                    "queue_status": "FAILED",
                    "job_created": 1517523249.9301329,
                    "job_started": 1517523249.9314211,
                    "job_finished": 1517523255.5623411,
                    "version": "0.0.2"
                }
            ]
        }
    """
    return jsonify(failed=build_status(api.config["COMPOSER_CFG"], "FAILED", api=0))

-@v0_api.route("/compose/status", defaults={'uuids': ""})
-@v0_api.route("/compose/status/<uuids>")
-@checkparams([("uuids", "", "no UUIDs given")])
-def v0_compose_status(uuids):
-    """Return the status of the listed uuids
-
-    **/api/v0/compose/status/<uuids>[?blueprint=<blueprint_name>&status=<compose_status>&type=<compose_type>]**
-
-    Return the details for each of the comma-separated list of uuids. A uuid of '*' will return
-    details for all composes.
-
-    Example::
-
-        {
-            "uuids": [
-                {
-                    "id": "8c8435ef-d6bd-4c68-9bf1-a2ef832e6b1a",
-                    "blueprint": "http-server",
-                    "queue_status": "FINISHED",
-                    "job_created": 1517523644.2384307,
-                    "job_started": 1517523644.2551234,
-                    "job_finished": 1517523689.9864314,
-                    "version": "0.0.2"
-                },
-                {
-                    "id": "45502a6d-06e8-48a5-a215-2b4174b3614b",
-                    "blueprint": "glusterfs",
-                    "queue_status": "FINISHED",
-                    "job_created": 1517363442.188399,
-                    "job_started": 1517363442.325324,
-                    "job_finished": 1517363451.653621,
-                    "version": "0.0.6"
-                }
-            ]
-        }
-    """
-    if VALID_API_STRING.match(uuids) is None:
-        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400
-
-    blueprint = request.args.get("blueprint", None)
-    status = request.args.get("status", None)
-    compose_type = request.args.get("type", None)
-
-    # Check the arguments for invalid characters
-    for a in [blueprint, status, compose_type]:
-        if a is not None and VALID_API_STRING.match(a) is None:
-            return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400
-
-    results = []
-    errors = []
-
-    if uuids.strip() == '*':
-        queue_status_dict = queue_status(api.config["COMPOSER_CFG"], api=0)
-        queue_new = queue_status_dict["new"]
-        queue_running = queue_status_dict["run"]
-        candidates = queue_new + queue_running + build_status(api.config["COMPOSER_CFG"], api=0)
-    else:
-        candidates = []
-        for uuid in [n.strip().lower() for n in uuids.split(",")]:
-            details = uuid_status(api.config["COMPOSER_CFG"], uuid, api=0)
-            if details is None:
-                errors.append({"id": UNKNOWN_UUID, "msg": "%s is not a valid build uuid" % uuid})
-            else:
-                candidates.append(details)
-
-    for details in candidates:
-        if blueprint is not None and details['blueprint'] != blueprint:
-            continue
-
-        if status is not None and details['queue_status'] != status:
-            continue
-
-        if compose_type is not None and details['compose_type'] != compose_type:
-            continue
-
-        results.append(details)
-
-    return jsonify(uuids=results, errors=errors)
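The query-string filtering in ``v0_compose_status`` is simple enough to restate stand-alone: a candidate is kept only if it matches every filter that was actually supplied. A sketch of that logic (``filter_composes`` is a hypothetical name; it uses ``.get()`` for ``compose_type`` since queue entries may lack that field):

```python
def filter_composes(candidates, blueprint=None, status=None, compose_type=None):
    """Keep only the compose detail dicts matching every supplied filter."""
    results = []
    for details in candidates:
        if blueprint is not None and details["blueprint"] != blueprint:
            continue
        if status is not None and details["queue_status"] != status:
            continue
        if compose_type is not None and details.get("compose_type") != compose_type:
            continue
        results.append(details)
    return results
```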
- -
-@v0_api.route("/compose/cancel", defaults={'uuid': ""}, methods=["DELETE"])
-@v0_api.route("/compose/cancel/<uuid>", methods=["DELETE"])
-@checkparams([("uuid", "", "no UUID given")])
-def v0_compose_cancel(uuid):
-    """Cancel a running compose and delete its results directory
-
-    **DELETE /api/v0/compose/cancel/<uuid>**
-
-    Cancel the build, if it is not finished, and delete the results. It will return a
-    status of True if it is successful.
-
-    Example::
-
-        {
-            "status": true,
-            "uuid": "03397f8d-acff-4cdb-bd31-f629b7a948f5"
-        }
-    """
-    if VALID_API_STRING.match(uuid) is None:
-        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400
-
-    status = uuid_status(api.config["COMPOSER_CFG"], uuid, api=0)
-    if status is None:
-        return jsonify(status=False, errors=[{"id": UNKNOWN_UUID, "msg": "%s is not a valid build uuid" % uuid}]), 400
-
-    if status["queue_status"] not in ["WAITING", "RUNNING"]:
-        return jsonify(status=False, errors=[{"id": BUILD_IN_WRONG_STATE, "msg": "Build %s is not in WAITING or RUNNING." % uuid}])
-
-    try:
-        uuid_cancel(api.config["COMPOSER_CFG"], uuid)
-    except Exception as e:
-        return jsonify(status=False, errors=[{"id": COMPOSE_ERROR, "msg": "%s: %s" % (uuid, str(e))}]), 400
-    else:
-        return jsonify(status=True, uuid=uuid)
- -
-@v0_api.route("/compose/delete", defaults={'uuids': ""}, methods=["DELETE"])
-@v0_api.route("/compose/delete/<uuids>", methods=["DELETE"])
-@checkparams([("uuids", "", "no UUIDs given")])
-def v0_compose_delete(uuids):
-    """Delete the compose results for the listed uuids
-
-    **DELETE /api/v0/compose/delete/<uuids>**
-
-    Delete the list of comma-separated uuids from the compose results.
-
-    Example::
-
-        {
-            "errors": [],
-            "uuids": [
-                {
-                    "status": true,
-                    "uuid": "ae1bf7e3-7f16-4c9f-b36e-3726a1093fd0"
-                }
-            ]
-        }
-    """
-    if VALID_API_STRING.match(uuids) is None:
-        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400
-
-    results = []
-    errors = []
-    for uuid in [n.strip().lower() for n in uuids.split(",")]:
-        status = uuid_status(api.config["COMPOSER_CFG"], uuid, api=0)
-        if status is None:
-            errors.append({"id": UNKNOWN_UUID, "msg": "%s is not a valid build uuid" % uuid})
-        elif status["queue_status"] not in ["FINISHED", "FAILED"]:
-            errors.append({"id": BUILD_IN_WRONG_STATE, "msg": "Build %s is not in FINISHED or FAILED." % uuid})
-        else:
-            try:
-                uuid_delete(api.config["COMPOSER_CFG"], uuid)
-            except Exception as e:
-                errors.append({"id": COMPOSE_ERROR, "msg": "%s: %s" % (uuid, str(e))})
-            else:
-                results.append({"uuid": uuid, "status": True})
-    return jsonify(uuids=results, errors=errors)
- -
-@v0_api.route("/compose/info", defaults={'uuid': ""})
-@v0_api.route("/compose/info/<uuid>")
-@checkparams([("uuid", "", "no UUID given")])
-def v0_compose_info(uuid):
-    """Return detailed info about a compose
-
-    **/api/v0/compose/info/<uuid>**
-
-    Get detailed information about the compose. The returned JSON string will
-    contain the following information:
-
-    * id - The uuid of the composition
-    * config - containing the configuration settings used to run Anaconda
-    * blueprint - The depsolved blueprint used to generate the kickstart
-    * commit - The (local) git commit hash for the blueprint used
-    * deps - The NEVRA of all of the dependencies used in the composition
-    * compose_type - The type of output generated (tar, iso, etc.)
-    * queue_status - The final status of the composition (FINISHED or FAILED)
-
-    Example::
-
-        {
-            "commit": "7078e521a54b12eae31c3fd028680da7a0815a4d",
-            "compose_type": "tar",
-            "config": {
-                "anaconda_args": "",
-                "armplatform": "",
-                "compress_args": [],
-                "compression": "xz",
-                "image_name": "root.tar.xz",
-                ...
-            },
-            "deps": {
-                "packages": [
-                    {
-                        "arch": "x86_64",
-                        "epoch": "0",
-                        "name": "acl",
-                        "release": "14.el7",
-                        "version": "2.2.51"
-                    }
-                ]
-            },
-            "id": "c30b7d80-523b-4a23-ad52-61b799739ce8",
-            "queue_status": "FINISHED",
-            "blueprint": {
-                "description": "An example kubernetes master",
-                ...
-            }
-        }
-    """
-    if VALID_API_STRING.match(uuid) is None:
-        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400
-
-    try:
-        info = uuid_info(api.config["COMPOSER_CFG"], uuid, api=0)
-    except Exception as e:
-        return jsonify(status=False, errors=[{"id": COMPOSE_ERROR, "msg": str(e)}]), 400
-
-    if info is None:
-        return jsonify(status=False, errors=[{"id": UNKNOWN_UUID, "msg": "%s is not a valid build uuid" % uuid}]), 400
-    else:
-        return jsonify(**info)
- -
-@v0_api.route("/compose/metadata", defaults={'uuid': ""})
-@v0_api.route("/compose/metadata/<uuid>")
-@checkparams([("uuid", "", "no UUID given")])
-def v0_compose_metadata(uuid):
-    """Return a tar of the metadata for the build
-
-    **/api/v0/compose/metadata/<uuid>**
-
-    Returns a .tar of the metadata used for the build. This includes all the
-    information needed to reproduce the build, including the final kickstart
-    populated with repository and package NEVRA.
-
-    The mime type is set to 'application/x-tar' and the filename is set to
-    UUID-metadata.tar
-
-    The .tar is uncompressed, but is not large.
-    """
-    if VALID_API_STRING.match(uuid) is None:
-        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400
-
-    status = uuid_status(api.config["COMPOSER_CFG"], uuid, api=0)
-    if status is None:
-        return jsonify(status=False, errors=[{"id": UNKNOWN_UUID, "msg": "%s is not a valid build uuid" % uuid}]), 400
-    if status["queue_status"] not in ["FINISHED", "FAILED"]:
-        return jsonify(status=False, errors=[{"id": BUILD_IN_WRONG_STATE, "msg": "Build %s not in FINISHED or FAILED state." % uuid}]), 400
-    else:
-        return Response(uuid_tar(api.config["COMPOSER_CFG"], uuid, metadata=True, image=False, logs=False),
-                        mimetype="application/x-tar",
-                        headers=[("Content-Disposition", "attachment; filename=%s-metadata.tar;" % uuid)],
-                        direct_passthrough=True)
- -
-@v0_api.route("/compose/results", defaults={'uuid': ""})
-@v0_api.route("/compose/results/<uuid>")
-@checkparams([("uuid", "", "no UUID given")])
-def v0_compose_results(uuid):
-    """Return a tar of the metadata and the results for the build
-
-    **/api/v0/compose/results/<uuid>**
-
-    Returns a .tar of the metadata, logs, and output image of the build. This
-    includes all the information needed to reproduce the build, including the
-    final kickstart populated with repository and package NEVRA. The output image
-    is already in compressed form so the returned tar is not compressed.
-
-    The mime type is set to 'application/x-tar' and the filename is set to
-    UUID.tar
-    """
-    if VALID_API_STRING.match(uuid) is None:
-        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400
-
-    status = uuid_status(api.config["COMPOSER_CFG"], uuid, api=0)
-    if status is None:
-        return jsonify(status=False, errors=[{"id": UNKNOWN_UUID, "msg": "%s is not a valid build uuid" % uuid}]), 400
-    elif status["queue_status"] not in ["FINISHED", "FAILED"]:
-        return jsonify(status=False, errors=[{"id": BUILD_IN_WRONG_STATE, "msg": "Build %s not in FINISHED or FAILED state." % uuid}]), 400
-    else:
-        return Response(uuid_tar(api.config["COMPOSER_CFG"], uuid, metadata=True, image=True, logs=True),
-                        mimetype="application/x-tar",
-                        headers=[("Content-Disposition", "attachment; filename=%s.tar;" % uuid)],
-                        direct_passthrough=True)
- -
-@v0_api.route("/compose/logs", defaults={'uuid': ""})
-@v0_api.route("/compose/logs/<uuid>")
-@checkparams([("uuid", "", "no UUID given")])
-def v0_compose_logs(uuid):
-    """Return a tar of the logs for the build
-
-    **/api/v0/compose/logs/<uuid>**
-
-    Returns a .tar of the anaconda build logs. The tar is not compressed, but is
-    not large.
-
-    The mime type is set to 'application/x-tar' and the filename is set to
-    UUID-logs.tar
-    """
-    if VALID_API_STRING.match(uuid) is None:
-        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400
-
-    status = uuid_status(api.config["COMPOSER_CFG"], uuid, api=0)
-    if status is None:
-        return jsonify(status=False, errors=[{"id": UNKNOWN_UUID, "msg": "%s is not a valid build uuid" % uuid}]), 400
-    elif status["queue_status"] not in ["FINISHED", "FAILED"]:
-        return jsonify(status=False, errors=[{"id": BUILD_IN_WRONG_STATE, "msg": "Build %s not in FINISHED or FAILED state." % uuid}]), 400
-    else:
-        return Response(uuid_tar(api.config["COMPOSER_CFG"], uuid, metadata=False, image=False, logs=True),
-                        mimetype="application/x-tar",
-                        headers=[("Content-Disposition", "attachment; filename=%s-logs.tar;" % uuid)],
-                        direct_passthrough=True)
- -
-@v0_api.route("/compose/image", defaults={'uuid': ""})
-@v0_api.route("/compose/image/<uuid>")
-@checkparams([("uuid", "", "no UUID given")])
-def v0_compose_image(uuid):
-    """Return the output image for the build
-
-    **/api/v0/compose/image/<uuid>**
-
-    Returns the output image from the build. The filename is set to the filename
-    from the build with the UUID as a prefix. eg. UUID-root.tar.xz or UUID-boot.iso.
-    """
-    if VALID_API_STRING.match(uuid) is None:
-        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400
-
-    status = uuid_status(api.config["COMPOSER_CFG"], uuid, api=0)
-    if status is None:
-        return jsonify(status=False, errors=[{"id": UNKNOWN_UUID, "msg": "%s is not a valid build uuid" % uuid}]), 400
-    elif status["queue_status"] not in ["FINISHED", "FAILED"]:
-        return jsonify(status=False, errors=[{"id": BUILD_IN_WRONG_STATE, "msg": "Build %s not in FINISHED or FAILED state." % uuid}]), 400
-    else:
-        image_name, image_path = uuid_image(api.config["COMPOSER_CFG"], uuid)
-
-        # Make sure it really exists
-        if not os.path.exists(image_path):
-            return jsonify(status=False, errors=[{"id": BUILD_MISSING_FILE, "msg": "Build %s is missing image file %s" % (uuid, image_name)}]), 400
-
-        # Make the image name unique
-        image_name = uuid + "-" + image_name
-        # XXX - Will mime type guessing work for all our output?
-        return send_file(image_path, as_attachment=True, attachment_filename=image_name, add_etags=False)
- -
-@v0_api.route("/compose/log", defaults={'uuid': ""})
-@v0_api.route("/compose/log/<uuid>")
-@checkparams([("uuid", "", "no UUID given")])
-def v0_compose_log_tail(uuid):
-    """Return the tail of the most currently relevant log
-
-    **/api/v0/compose/log/<uuid>[?size=KiB]**
-
-    Returns the end of either the anaconda log, the packaging log, or the
-    composer logs, depending on the progress of the compose. The size
-    parameter is optional and defaults to 1 MiB if it is not included. The
-    returned data is raw text from the end of the log file, starting on a
-    line boundary.
-
-    Example::
-
-        12:59:24,222 INFO anaconda: Running Thread: AnaConfigurationThread (140629395244800)
-        12:59:24,223 INFO anaconda: Configuring installed system
-        12:59:24,912 INFO anaconda: Configuring installed system
-        12:59:24,912 INFO anaconda: Creating users
-        12:59:24,913 INFO anaconda: Clearing libuser.conf at /tmp/libuser.Dyy8Gj
-        12:59:25,154 INFO anaconda: Creating users
-        12:59:25,155 INFO anaconda: Configuring addons
-        12:59:25,155 INFO anaconda: Configuring addons
-        12:59:25,155 INFO anaconda: Generating initramfs
-        12:59:49,467 INFO anaconda: Generating initramfs
-        12:59:49,467 INFO anaconda: Running post-installation scripts
-        12:59:49,467 INFO anaconda: Running kickstart %%post script(s)
-        12:59:50,782 INFO anaconda: All kickstart %%post script(s) have been run
-        12:59:50,782 INFO anaconda: Running post-installation scripts
-        12:59:50,784 INFO anaconda: Thread Done: AnaConfigurationThread (140629395244800)
-    """
-    if VALID_API_STRING.match(uuid) is None:
-        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400
-
-    try:
-        size = int(request.args.get("size", "1024"))
-    except ValueError as e:
-        return jsonify(status=False, errors=[{"id": COMPOSE_ERROR, "msg": str(e)}]), 400
-
-    status = uuid_status(api.config["COMPOSER_CFG"], uuid, api=0)
-    if status is None:
-        return jsonify(status=False, errors=[{"id": UNKNOWN_UUID, "msg": "%s is not a valid build uuid" % uuid}]), 400
-    elif status["queue_status"] == "WAITING":
-        return jsonify(status=False, errors=[{"id": BUILD_IN_WRONG_STATE, "msg": "Build %s has not started yet. No logs to view" % uuid}])
-    try:
-        return Response(uuid_log(api.config["COMPOSER_CFG"], uuid, size), direct_passthrough=True)
-    except RuntimeError as e:
-        return jsonify(status=False, errors=[{"id": COMPOSE_ERROR, "msg": str(e)}]), 400
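The docstring above says the returned data is the end of the log file, starting on a line boundary. A sketch of that tail behavior (``tail_on_line_boundary`` is a hypothetical stand-in for what ``uuid_log`` is documented to do; the real implementation may differ in details):

```python
import os

def tail_on_line_boundary(path, size=1024 * 1024):
    """Return up to `size` bytes from the end of a file, starting on a
    line boundary, as text. Defaults to 1 MiB, matching the API default."""
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)
        end = f.tell()
        f.seek(max(0, end - size))
        data = f.read()
    # If we did not read the whole file, drop the (possibly partial) first line
    if end > size and b"\n" in data:
        data = data.split(b"\n", 1)[1]
    return data.decode("utf-8", errors="replace")
```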
\ No newline at end of file
diff --git a/docs/html/_modules/pylorax/api/v1.html b/docs/html/_modules/pylorax/api/v1.html
deleted file mode 100644
index 75879329..00000000
--- a/docs/html/_modules/pylorax/api/v1.html
+++ /dev/null
@@ -1,1242 +0,0 @@
-pylorax.api.v1 — Lorax 35.0 documentation

Source code for pylorax.api.v1

-#
-# Copyright (C) 2019  Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-""" Setup v1 of the API server
-
-"""
-import logging
-log = logging.getLogger("lorax-composer")
-
-from flask import jsonify, request
-from flask import current_app as api
-
-from lifted.queue import get_upload, reset_upload, cancel_upload, delete_upload
-from lifted.providers import list_providers, resolve_provider, load_profiles, validate_settings, save_settings
-from lifted.providers import load_settings, delete_profile
-from pylorax.api.checkparams import checkparams
-from pylorax.api.compose import start_build
-from pylorax.api.errors import BAD_COMPOSE_TYPE, BUILD_FAILED, INVALID_CHARS, MISSING_POST, PROJECTS_ERROR
-from pylorax.api.errors import SYSTEM_SOURCE, UNKNOWN_BLUEPRINT, UNKNOWN_SOURCE, UNKNOWN_UUID, UPLOAD_ERROR
-from pylorax.api.errors import COMPOSE_ERROR
-from pylorax.api.flask_blueprint import BlueprintSkip
-from pylorax.api.queue import queue_status, build_status, uuid_status, uuid_schedule_upload, uuid_remove_upload
-from pylorax.api.queue import uuid_info
-from pylorax.api.projects import get_repo_sources, repo_to_source
-from pylorax.api.projects import new_repo_source
-from pylorax.api.regexes import VALID_API_STRING, VALID_BLUEPRINT_NAME
-import pylorax.api.toml as toml
-from pylorax.api.utils import blueprint_exists
-
-
-# Create the v1 routes Blueprint with skip_routes support
-v1_api = BlueprintSkip("v1_routes", __name__)
-
-
-@v1_api.route("/projects/source/info", defaults={'source_ids': ""})
-@v1_api.route("/projects/source/info/<source_ids>")
-@checkparams([("source_ids", "", "no source names given")])
-def v1_projects_source_info(source_ids):
-    """Return detailed info about the list of sources
-
-    **/api/v1/projects/source/info/<source-ids>**
-
-    Return information about the comma-separated list of source ids. Or all of the
-    sources if '*' is passed. Note that general globbing is not supported, only '*'.
-
-    Immutable system sources will have the "system" field set to true. User added sources
-    will have it set to false. System sources cannot be changed or deleted.
-
-    Example::
-
-        {
-            "errors": [],
-            "sources": {
-                "fedora": {
-                    "check_gpg": true,
-                    "check_ssl": true,
-                    "gpgkey_urls": [
-                        "file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-28-x86_64"
-                    ],
-                    "id": "fedora",
-                    "name": "Fedora $releasever - $basearch",
-                    "proxy": "http://proxy.brianlane.com:8123",
-                    "system": true,
-                    "type": "yum-metalink",
-                    "url": "https://mirrors.fedoraproject.org/metalink?repo=fedora-28&arch=x86_64"
-                }
-            }
-        }
-
-    In v0 the ``name`` field was used for the id (a short name for the repo). In v1 ``name`` changed
-    to ``id`` and ``name`` is now used for the longer descriptive name of the repository.
-    """
-    if VALID_API_STRING.match(source_ids) is None:
-        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400
-
-    out_fmt = request.args.get("format", "json")
-    if VALID_API_STRING.match(out_fmt) is None:
-        return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in format argument"}]), 400
-
-    # Return info on all of the sources
-    if source_ids == "*":
-        with api.config["DNFLOCK"].lock:
-            source_ids = ",".join(r.id for r in api.config["DNFLOCK"].dbo.repos.iter_enabled())
-
-    sources = {}
-    errors = []
-    system_sources = get_repo_sources("/etc/yum.repos.d/*.repo")
-    for source in source_ids.split(","):
-        with api.config["DNFLOCK"].lock:
-            repo = api.config["DNFLOCK"].dbo.repos.get(source, None)
-        if not repo:
-            errors.append({"id": UNKNOWN_SOURCE, "msg": "%s is not a valid source" % source})
-            continue
-        sources[repo.id] = repo_to_source(repo, repo.id in system_sources, api=1)
-
-    if out_fmt == "toml" and not errors:
-        # With TOML output we just want to dump the raw sources, skipping the errors
-        return toml.dumps(sources)
-    elif out_fmt == "toml" and errors:
-        # TOML requested, but there was an error
-        return jsonify(status=False, errors=errors), 400
-    else:
-        return jsonify(sources=sources, errors=errors)
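The v0→v1 ``name``/``id`` change described in the docstring can be made concrete with a hypothetical converter (not part of the API) that upgrades a v0-shaped source dict to the v1 shape:

```python
def v0_source_to_v1(source, description=None):
    """Convert a v0 source dict (where `name` held the short repo id) to the
    v1 shape, where `id` is the short name and `name` is a long description."""
    v1 = dict(source)                 # do not mutate the caller's dict
    v1["id"] = v1.pop("name")
    v1["name"] = description if description is not None else v1["id"]
    return v1
```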
- -
-@v1_api.route("/projects/source/new", methods=["POST"])
-def v1_projects_source_new():
-    """Add a new package source. Or change an existing one
-
-    **POST /api/v1/projects/source/new**
-
-    Add (or change) a source for use when depsolving blueprints and composing images.
-
-    The ``proxy`` and ``gpgkey_urls`` entries are optional. All of the others are required. The supported
-    types for the urls are:
-
-    * ``yum-baseurl`` is a URL to a yum repository.
-    * ``yum-mirrorlist`` is a URL for a mirrorlist.
-    * ``yum-metalink`` is a URL for a metalink.
-
-    If ``check_ssl`` is true the https certificates must be valid. If they are self-signed you can either set
-    this to false, or add your Certificate Authority to the host system.
-
-    If ``check_gpg`` is true the GPG key must either be installed on the host system, or ``gpgkey_urls``
-    should point to it.
-
-    You can edit an existing source (other than system sources), by doing a POST
-    of the new version of the source. It will overwrite the previous one.
-
-    Example::
-
-        {
-            "id": "custom-source-1",
-            "name": "Custom Package Source #1",
-            "url": "https://url/path/to/repository/",
-            "type": "yum-baseurl",
-            "check_ssl": true,
-            "check_gpg": true,
-            "gpgkey_urls": [
-                "https://url/path/to/gpg-key"
-            ]
-        }
-
-    In v0 the ``name`` field was used for the id (a short name for the repo). In v1 ``name`` changed
-    to ``id`` and ``name`` is now used for the longer descriptive name of the repository.
-    """
-    if request.headers['Content-Type'] == "text/x-toml":
-        source = toml.loads(request.data)
-    else:
-        source = request.get_json(cache=False)
-
-    # Check for id in source, return error if not
-    if "id" not in source:
-        return jsonify(status=False, errors=[{"id": UNKNOWN_SOURCE, "msg": "'id' field is missing from API v1 request."}]), 400
-
-    system_sources = get_repo_sources("/etc/yum.repos.d/*.repo")
-    if source["id"] in system_sources:
-        return jsonify(status=False, errors=[{"id": SYSTEM_SOURCE, "msg": "%s is a system source, it cannot be changed." % source["id"]}]), 400
-
-    try:
-        # Remove it from the RepoDict (NOTE that this isn't explicitly supported by the DNF API)
-        with api.config["DNFLOCK"].lock:
-            repo_dir = api.config["COMPOSER_CFG"].get("composer", "repo_dir")
-            new_repo_source(api.config["DNFLOCK"].dbo, source["id"], source, repo_dir)
-    except Exception as e:
-        return jsonify(status=False, errors=[{"id": PROJECTS_ERROR, "msg": str(e)}]), 400
-
-    return jsonify(status=True)
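The required and optional fields described in the docstring can be checked client-side before POSTing. A sketch with hypothetical helper names (``check_source_payload`` is not part of the API; the required set and valid types are taken from the docstring above):

```python
REQUIRED_SOURCE_KEYS = {"id", "name", "url", "type", "check_ssl", "check_gpg"}
VALID_SOURCE_TYPES = {"yum-baseurl", "yum-mirrorlist", "yum-metalink"}

def check_source_payload(source):
    """Return a list of problems with a /projects/source/new payload.
    `proxy` and `gpgkey_urls` are optional; everything else is required."""
    problems = []
    missing = REQUIRED_SOURCE_KEYS - set(source)
    if missing:
        problems.append("missing fields: %s" % ", ".join(sorted(missing)))
    if source.get("type") not in VALID_SOURCE_TYPES:
        problems.append("unknown type: %r" % source.get("type"))
    return problems
```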
- -
-@v1_api.route("/compose", methods=["POST"])
-def v1_compose_start():
-    """Start a compose
-
-    The body of the post should have these fields:
-      blueprint_name - The blueprint name from /blueprints/list/
-      compose_type - The type of output to create, from /compose/types
-      branch - Optional, defaults to master, selects the git branch to use for the blueprint.
-
-    **POST /api/v1/compose**
-
-    Start a compose. The content type should be 'application/json' and the body of the POST
-    should look like this. The "upload" object is optional.
-
-    The upload object can specify either a pre-existing profile to use (as returned by
-    `/uploads/providers`) or one-time use settings for the provider.
-
-    Example with upload profile::
-
-        {
-            "blueprint_name": "http-server",
-            "compose_type": "tar",
-            "branch": "master",
-            "upload": {
-                "image_name": "My Image",
-                "provider": "azure",
-                "profile": "production-azure-settings"
-            }
-        }
-
-    Example with upload settings::
-
-        {
-            "blueprint_name": "http-server",
-            "compose_type": "tar",
-            "branch": "master",
-            "upload": {
-                "image_name": "My Image",
-                "provider": "azure",
-                "settings": {
-                    "resource_group": "SOMEBODY",
-                    "storage_account_name": "ONCE",
-                    "storage_container": "TOLD",
-                    "location": "ME",
-                    "subscription_id": "THE",
-                    "client_id": "WORLD",
-                    "secret": "IS",
-                    "tenant": "GONNA"
-                }
-            }
-        }
-
-    Pass it the name of the blueprint, the type of output (from
-    '/api/v1/compose/types'), and the blueprint branch to use. 'branch' is
-    optional and will default to master. It will create a new build and add
-    it to the queue. It returns the build uuid and a status if it succeeds.
-    If an "upload" is given, it will schedule an upload to run when the build
-    finishes.
-
-    Example response::
-
-        {
-            "build_id": "e6fa6db4-9c81-4b70-870f-a697ca405cdf",
-            "upload_id": "572eb0d0-5348-4600-9666-14526ba628bb",
-            "status": true
-        }
-    """
-    # Passing ?test=1 will generate a fake FAILED compose.
-    # Passing ?test=2 will generate a fake FINISHED compose.
-    try:
-        test_mode = int(request.args.get("test", "0"))
-    except ValueError:
-        test_mode = 0
-
-    compose = request.get_json(cache=False)
-
-    errors = []
-    if not compose:
-        return jsonify(status=False, errors=[{"id": MISSING_POST, "msg": "Missing POST body"}]), 400
-
-    if "blueprint_name" not in compose:
-        errors.append({"id": UNKNOWN_BLUEPRINT, "msg": "No 'blueprint_name' in the JSON request"})
-    else:
-        blueprint_name = compose["blueprint_name"]
-
-    if "branch" not in compose or not compose["branch"]:
-        branch = "master"
-    else:
-        branch = compose["branch"]
-
-    if "compose_type" not in compose:
-        errors.append({"id": BAD_COMPOSE_TYPE, "msg": "No 'compose_type' in the JSON request"})
-    else:
-        compose_type = compose["compose_type"]
-
-    if VALID_BLUEPRINT_NAME.match(blueprint_name) is None:
-        errors.append({"id": INVALID_CHARS, "msg": "Invalid characters in API path"})
-
-    if not blueprint_exists(api, branch, blueprint_name):
-        errors.append({"id": UNKNOWN_BLUEPRINT, "msg": "Unknown blueprint name: %s" % blueprint_name})
-
-    if "upload" in compose:
-        try:
-            image_name = compose["upload"]["image_name"]
-
-            if "profile" in compose["upload"]:
-                # Load a specific profile for this provider
-                profile = compose["upload"]["profile"]
-                provider_name = compose["upload"]["provider"]
-                settings = load_settings(api.config["COMPOSER_CFG"]["upload"], provider_name, profile)
-            else:
-                provider_name = compose["upload"]["provider"]
-                settings = compose["upload"]["settings"]
-        except KeyError as e:
-            errors.append({"id": UPLOAD_ERROR, "msg": f'Missing parameter {str(e)}!'})
-        try:
-            provider = resolve_provider(api.config["COMPOSER_CFG"]["upload"], provider_name)
-            if "supported_types" in provider and compose_type not in provider["supported_types"]:
-                raise RuntimeError(f'Type "{compose_type}" is not supported by provider "{provider_name}"!')
-            validate_settings(api.config["COMPOSER_CFG"]["upload"], provider_name, settings, image_name)
-        except Exception as e:
-            errors.append({"id": UPLOAD_ERROR, "msg": str(e)})
-
-    if errors:
-        return jsonify(status=False, errors=errors), 400
-
-    try:
-        build_id = start_build(api.config["COMPOSER_CFG"], api.config["DNFLOCK"], api.config["GITLOCK"],
-                               branch, blueprint_name, compose_type, test_mode)
-    except Exception as e:
-        if "Invalid compose type" in str(e):
-            return jsonify(status=False, errors=[{"id": BAD_COMPOSE_TYPE, "msg": str(e)}]), 400
-        else:
-            return jsonify(status=False, errors=[{"id": BUILD_FAILED, "msg": str(e)}]), 400
-
-    if "upload" in compose:
-        upload_id = uuid_schedule_upload(
-            api.config["COMPOSER_CFG"],
-            build_id,
-            provider_name,
-            image_name,
-            settings
-        )
-    else:
-        upload_id = ""
-
-    return jsonify(status=True, build_id=build_id, upload_id=upload_id)
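A client building the POST body for ``/api/v1/compose`` could use a small helper like this (hypothetical, mirroring the docstring's examples and the documented default of the ``master`` branch):

```python
def compose_request(blueprint_name, compose_type, branch="master", upload=None):
    """Build the JSON body for POST /api/v1/compose; `upload` is optional."""
    body = {
        "blueprint_name": blueprint_name,
        "compose_type": compose_type,
        "branch": branch,
    }
    if upload is not None:
        body["upload"] = upload
    return body
```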
- -
-@v1_api.route("/compose/queue")
-def v1_compose_queue():
-    """Return the status of the new and running queues
-
-    **/api/v1/compose/queue**
-
-    Return the status of the build queue. It includes information about the builds waiting,
-    and the build that is running.
-
-    Example::
-
-        {
-            "new": [
-                {
-                    "id": "45502a6d-06e8-48a5-a215-2b4174b3614b",
-                    "blueprint": "glusterfs",
-                    "queue_status": "WAITING",
-                    "job_created": 1517362647.4570868,
-                    "version": "0.0.6"
-                },
-                {
-                    "id": "6d292bd0-bec7-4825-8d7d-41ef9c3e4b73",
-                    "blueprint": "kubernetes",
-                    "queue_status": "WAITING",
-                    "job_created": 1517362659.0034983,
-                    "version": "0.0.1"
-                }
-            ],
-            "run": [
-                {
-                    "id": "745712b2-96db-44c0-8014-fe925c35e795",
-                    "blueprint": "glusterfs",
-                    "queue_status": "RUNNING",
-                    "job_created": 1517362633.7965999,
-                    "job_started": 1517362633.8001345,
-                    "version": "0.0.6",
-                    "uploads": [
-                        {
-                            "creation_time": 1568150660.524401,
-                            "image_name": "glusterfs server",
-                            "image_path": null,
-                            "provider_name": "azure",
-                            "settings": {
-                                "client_id": "need",
-                                "location": "need",
-                                "resource_group": "group",
-                                "secret": "need",
-                                "storage_account_name": "need",
-                                "storage_container": "need",
-                                "subscription_id": "need",
-                                "tenant": "need"
-                            },
-                            "status": "WAITING",
-                            "uuid": "21898dfd-9ac9-4e22-bb1d-7f12d0129e65"
-                        }
-                    ]
-                }
-            ]
-        }
-    """
-    return jsonify(queue_status(api.config["COMPOSER_CFG"], api=1))
- -
-@v1_api.route("/compose/finished")
-def v1_compose_finished():
-    """Return the list of finished composes
-
-    **/api/v1/compose/finished**
-
-    Return the details on all of the finished composes on the system.
-
-    Example::
-
-        {
-            "finished": [
-                {
-                    "id": "70b84195-9817-4b8a-af92-45e380f39894",
-                    "blueprint": "glusterfs",
-                    "queue_status": "FINISHED",
-                    "job_created": 1517351003.8210032,
-                    "job_started": 1517351003.8230415,
-                    "job_finished": 1517359234.1003145,
-                    "version": "0.0.6"
-                },
-                {
-                    "id": "e695affd-397f-4af9-9022-add2636e7459",
-                    "blueprint": "glusterfs",
-                    "queue_status": "FINISHED",
-                    "job_created": 1517362289.7193348,
-                    "job_started": 1517362289.9751132,
-                    "job_finished": 1517363500.1234567,
-                    "version": "0.0.6",
-                    "uploads": [
-                        {
-                            "creation_time": 1568150660.524401,
-                            "image_name": "glusterfs server",
-                            "image_path": "/var/lib/lorax/composer/results/e695affd-397f-4af9-9022-add2636e7459/disk.vhd",
-                            "provider_name": "azure",
-                            "settings": {
-                                "client_id": "need",
-                                "location": "need",
-                                "resource_group": "group",
-                                "secret": "need",
-                                "storage_account_name": "need",
-                                "storage_container": "need",
-                                "subscription_id": "need",
-                                "tenant": "need"
-                            },
-                            "status": "WAITING",
-                            "uuid": "21898dfd-9ac9-4e22-bb1d-7f12d0129e65"
-                        }
-                    ]
-                }
-            ]
-        }
-    """
-    return jsonify(finished=build_status(api.config["COMPOSER_CFG"], "FINISHED", api=1))
- -
-@v1_api.route("/compose/failed")
-def v1_compose_failed():
-    """Return the list of failed composes
-
-    **/api/v1/compose/failed**
-
-    Return the details on all of the failed composes on the system.
-
-    Example::
-
-        {
-            "failed": [
-                {
-                    "id": "8c8435ef-d6bd-4c68-9bf1-a2ef832e6b1a",
-                    "blueprint": "http-server",
-                    "queue_status": "FAILED",
-                    "job_created": 1517523249.9301329,
-                    "job_started": 1517523249.9314211,
-                    "job_finished": 1517523255.5623411,
-                    "version": "0.0.2",
-                    "uploads": [
-                        {
-                            "creation_time": 1568150660.524401,
-                            "image_name": "http-server",
-                            "image_path": null,
-                            "provider_name": "azure",
-                            "settings": {
-                                "client_id": "need",
-                                "location": "need",
-                                "resource_group": "group",
-                                "secret": "need",
-                                "storage_account_name": "need",
-                                "storage_container": "need",
-                                "subscription_id": "need",
-                                "tenant": "need"
-                            },
-                            "status": "WAITING",
-                            "uuid": "21898dfd-9ac9-4e22-bb1d-7f12d0129e65"
-                        }
-                    ]
-                }
-            ]
-        }
-    """
-    return jsonify(failed=build_status(api.config["COMPOSER_CFG"], "FAILED", api=1))
- -
[docs]@v1_api.route("/compose/status", defaults={'uuids': ""}) -@v1_api.route("/compose/status/<uuids>") -@checkparams([("uuids", "", "no UUIDs given")]) -def v1_compose_status(uuids): - """Return the status of the listed uuids - - **/api/v1/compose/status/<uuids>[?blueprint=<blueprint_name>&status=<compose_status>&type=<compose_type>]** - - Return the details for each of the comma-separated list of uuids. A uuid of '*' will return - details for all composes. - - Example:: - - { - "uuids": [ - { - "id": "8c8435ef-d6bd-4c68-9bf1-a2ef832e6b1a", - "blueprint": "http-server", - "queue_status": "FINISHED", - "job_created": 1517523644.2384307, - "job_started": 1517523644.2551234, - "job_finished": 1517523689.9864314, - "version": "0.0.2" - }, - { - "id": "45502a6d-06e8-48a5-a215-2b4174b3614b", - "blueprint": "glusterfs", - "queue_status": "FINISHED", - "job_created": 1517363442.188399, - "job_started": 1517363442.325324, - "job_finished": 1517363451.653621, - "version": "0.0.6", - "uploads": [ - { - "creation_time": 1568150660.524401, - "image_name": "glusterfs server", - "image_path": null, - "provider_name": "azure", - "settings": { - "client_id": "need", - "location": "need", - "resource_group": "group", - "secret": "need", - "storage_account_name": "need", - "storage_container": "need", - "subscription_id": "need", - "tenant": "need" - }, - "status": "WAITING", - "uuid": "21898dfd-9ac9-4e22-bb1d-7f12d0129e65" - } - ] - } - ] - } - """ - if VALID_API_STRING.match(uuids) is None: - return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400 - - blueprint = request.args.get("blueprint", None) - status = request.args.get("status", None) - compose_type = request.args.get("type", None) - - results = [] - errors = [] - - if uuids.strip() == '*': - queue_status_dict = queue_status(api.config["COMPOSER_CFG"], api=1) - queue_new = queue_status_dict["new"] - queue_running = queue_status_dict["run"] - candidates = queue_new + 
queue_running + build_status(api.config["COMPOSER_CFG"], api=1) - else: - candidates = [] - for uuid in [n.strip().lower() for n in uuids.split(",")]: - details = uuid_status(api.config["COMPOSER_CFG"], uuid, api=1) - if details is None: - errors.append({"id": UNKNOWN_UUID, "msg": "%s is not a valid build uuid" % uuid}) - else: - candidates.append(details) - - for details in candidates: - if blueprint is not None and details['blueprint'] != blueprint: - continue - - if status is not None and details['queue_status'] != status: - continue - - if compose_type is not None and details['compose_type'] != compose_type: - continue - - results.append(details) - - return jsonify(uuids=results, errors=errors)
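The query-parameter filtering this endpoint performs can be sketched as a standalone helper (`filter_composes` is a hypothetical name for illustration; the real handler inlines this loop):

```python
def filter_composes(composes, blueprint=None, status=None, compose_type=None):
    """Keep only the compose detail dicts matching every given filter.

    Mirrors the blueprint/status/type filtering done by
    /api/v1/compose/status; a filter of None means "match anything".
    """
    results = []
    for details in composes:
        if blueprint is not None and details["blueprint"] != blueprint:
            continue
        if status is not None and details["queue_status"] != status:
            continue
        if compose_type is not None and details.get("compose_type") != compose_type:
            continue
        results.append(details)
    return results
```

All filters are combined with AND, so `?blueprint=http-server&status=FINISHED` returns only finished composes of that blueprint.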
- -
[docs]@v1_api.route("/compose/info", defaults={'uuid': ""}) -@v1_api.route("/compose/info/<uuid>") -@checkparams([("uuid", "", "no UUID given")]) -def v1_compose_info(uuid): - """Return detailed info about a compose - - **/api/v1/compose/info/<uuid>** - - Get detailed information about the compose. The returned JSON string will - contain the following information: - - * id - The uuid of the composition - * config - The configuration settings used to run Anaconda - * blueprint - The depsolved blueprint used to generate the kickstart - * commit - The (local) git commit hash for the blueprint used - * deps - The NEVRA of all of the dependencies used in the composition - * compose_type - The type of output generated (tar, iso, etc.) - * queue_status - The final status of the composition (FINISHED or FAILED) - - Example:: - - { - "commit": "7078e521a54b12eae31c3fd028680da7a0815a4d", - "compose_type": "tar", - "config": { - "anaconda_args": "", - "armplatform": "", - "compress_args": [], - "compression": "xz", - "image_name": "root.tar.xz", - ... - }, - "deps": { - "packages": [ - { - "arch": "x86_64", - "epoch": "0", - "name": "acl", - "release": "14.el7", - "version": "2.2.51" - } - ] - }, - "id": "c30b7d80-523b-4a23-ad52-61b799739ce8", - "queue_status": "FINISHED", - "blueprint": { - "description": "An example kubernetes master", - ...
- }, - "uploads": [ - { - "creation_time": 1568150660.524401, - "image_name": "glusterfs server", - "image_path": "/var/lib/lorax/composer/results/c30b7d80-523b-4a23-ad52-61b799739ce8/disk.vhd", - "provider_name": "azure", - "settings": { - "client_id": "need", - "location": "need", - "resource_group": "group", - "secret": "need", - "storage_account_name": "need", - "storage_container": "need", - "subscription_id": "need", - "tenant": "need" - }, - "status": "FAILED", - "uuid": "21898dfd-9ac9-4e22-bb1d-7f12d0129e65" - } - ] - } - """ - if VALID_API_STRING.match(uuid) is None: - return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400 - - try: - info = uuid_info(api.config["COMPOSER_CFG"], uuid, api=1) - except Exception as e: - return jsonify(status=False, errors=[{"id": COMPOSE_ERROR, "msg": str(e)}]), 400 - - if info is None: - return jsonify(status=False, errors=[{"id": UNKNOWN_UUID, "msg": "%s is not a valid build uuid" % uuid}]), 400 - else: - return jsonify(**info)
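The `deps` entries above can be flattened into conventional NEVRA strings (`name-epoch:version-release.arch`). This small helper is an illustration only, not part of the pylorax API:

```python
def nevra(pkg):
    """Format a dependency dict (as returned under deps.packages)
    as a NEVRA string: name-epoch:version-release.arch."""
    return "{name}-{epoch}:{version}-{release}.{arch}".format(**pkg)
```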
- -
[docs]@v1_api.route("/compose/uploads/schedule", defaults={'compose_uuid': ""}, methods=["POST"]) -@v1_api.route("/compose/uploads/schedule/<compose_uuid>", methods=["POST"]) -@checkparams([("compose_uuid", "", "no compose UUID given")]) -def v1_compose_uploads_schedule(compose_uuid): - """Schedule an upload of a compose to a given cloud provider - - **POST /api/v1/compose/uploads/schedule/<compose_uuid>** - - The body can specify either a pre-existing profile to use (as returned by - `/upload/providers`) or one-time use settings for the provider. - - Example with upload profile:: - - { - "image_name": "My Image", - "provider": "azure", - "profile": "production-azure-settings" - } - - Example with upload settings:: - - { - "image_name": "My Image", - "provider": "azure", - "settings": { - "resource_group": "SOMEBODY", - "storage_account_name": "ONCE", - "storage_container": "TOLD", - "location": "ME", - "subscription_id": "THE", - "client_id": "WORLD", - "secret": "IS", - "tenant": "GONNA" - } - } - - Example response:: - - { - "status": true, - "upload_id": "572eb0d0-5348-4600-9666-14526ba628bb" - } - """ - if VALID_API_STRING.match(compose_uuid) is None: - error = {"id": INVALID_CHARS, "msg": "Invalid characters in API path"} - return jsonify(status=False, errors=[error]), 400 - - parsed = request.get_json(cache=False) - if not parsed: - return jsonify(status=False, errors=[{"id": MISSING_POST, "msg": "Missing POST body"}]), 400 - - try: - image_name = parsed["image_name"] - provider_name = parsed["provider"] - if "profile" in parsed: - # Load a specific profile for this provider - profile = parsed["profile"] - settings = load_settings(api.config["COMPOSER_CFG"]["upload"], provider_name, profile) - else: - settings = parsed["settings"] - except KeyError as e: - error = {"id": UPLOAD_ERROR, "msg": f'Missing parameter {str(e)}!'} - return jsonify(status=False, errors=[error]), 400 - try: - compose_type = uuid_status(api.config["COMPOSER_CFG"],
compose_uuid)["compose_type"] - provider = resolve_provider(api.config["COMPOSER_CFG"]["upload"], provider_name) - if "supported_types" in provider and compose_type not in provider["supported_types"]: - raise RuntimeError( - f'Type "{compose_type}" is not supported by provider "{provider_name}"!' - ) - except Exception as e: - return jsonify(status=False, errors=[{"id": UPLOAD_ERROR, "msg": str(e)}]), 400 - - try: - upload_id = uuid_schedule_upload( - api.config["COMPOSER_CFG"], - compose_uuid, - provider_name, - image_name, - settings - ) - except RuntimeError as e: - return jsonify(status=False, errors=[{"id": UPLOAD_ERROR, "msg": str(e)}]), 400 - return jsonify(status=True, upload_id=upload_id)
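The compatibility check between a compose type and a provider can be sketched on its own; the provider dict shape (optional `supported_types` list) is taken from the handler above, and the function name is hypothetical:

```python
def check_supported_type(provider, provider_name, compose_type):
    """Raise RuntimeError if the provider declares supported_types and
    the compose type is not among them (mirrors the handler's check).
    Providers without a supported_types key accept any type."""
    if "supported_types" in provider and compose_type not in provider["supported_types"]:
        raise RuntimeError(
            f'Type "{compose_type}" is not supported by provider "{provider_name}"!'
        )
```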
- -
[docs]@v1_api.route("/upload/delete", defaults={"upload_uuid": ""}, methods=["DELETE"]) -@v1_api.route("/upload/delete/<upload_uuid>", methods=["DELETE"]) -@checkparams([("upload_uuid", "", "no upload UUID given")]) -def v1_compose_uploads_delete(upload_uuid): - """Delete an upload and disassociate it from its compose - - **DELETE /api/v1/upload/delete/<upload_uuid>** - - Example response:: - - { - "status": true, - "upload_id": "572eb0d0-5348-4600-9666-14526ba628bb" - } - """ - if VALID_API_STRING.match(upload_uuid) is None: - error = {"id": INVALID_CHARS, "msg": "Invalid characters in API path"} - return jsonify(status=False, errors=[error]), 400 - - try: - uuid_remove_upload(api.config["COMPOSER_CFG"], upload_uuid) - delete_upload(api.config["COMPOSER_CFG"]["upload"], upload_uuid) - except RuntimeError as error: - return jsonify(status=False, errors=[{"id": UPLOAD_ERROR, "msg": str(error)}]) - return jsonify(status=True, upload_id=upload_uuid)
- -
[docs]@v1_api.route("/upload/info", defaults={"upload_uuid": ""}) -@v1_api.route("/upload/info/<upload_uuid>") -@checkparams([("upload_uuid", "", "no UUID given")]) -def v1_upload_info(upload_uuid): - """Returns information about a given upload - - **GET /api/v1/upload/info/<upload_uuid>** - - Example response:: - - { - "status": true, - "upload": { - "creation_time": 1565620940.069004, - "image_name": "My Image", - "image_path": "/var/lib/lorax/composer/results/b6218e8f-0fa2-48ec-9394-f5c2918544c4/disk.vhd", - "provider_name": "azure", - "settings": { - "resource_group": "SOMEBODY", - "storage_account_name": "ONCE", - "storage_container": "TOLD", - "location": "ME", - "subscription_id": "THE", - "client_id": "WORLD", - "secret": "IS", - "tenant": "GONNA" - }, - "status": "FAILED", - "uuid": "b637c411-9d9d-4279-b067-6c8d38e3b211" - } - } - """ - if VALID_API_STRING.match(upload_uuid) is None: - return jsonify(status=False, errors=[{"id": INVALID_CHARS, "msg": "Invalid characters in API path"}]), 400 - - try: - upload = get_upload(api.config["COMPOSER_CFG"]["upload"], upload_uuid).summary() - except RuntimeError as error: - return jsonify(status=False, errors=[{"id": UPLOAD_ERROR, "msg": str(error)}]) - return jsonify(status=True, upload=upload)
- -
[docs]@v1_api.route("/upload/log", defaults={"upload_uuid": ""}) -@v1_api.route("/upload/log/<upload_uuid>") -@checkparams([("upload_uuid", "", "no UUID given")]) -def v1_upload_log(upload_uuid): - """Returns an upload's log - - **GET /api/v1/upload/log/<upload_uuid>** - - Example response:: - - { - "status": true, - "upload_id": "b637c411-9d9d-4279-b067-6c8d38e3b211", - "log": "< PLAY [localhost] >..." - } - """ - if VALID_API_STRING.match(upload_uuid) is None: - error = {"id": INVALID_CHARS, "msg": "Invalid characters in API path"} - return jsonify(status=False, errors=[error]), 400 - - try: - upload = get_upload(api.config["COMPOSER_CFG"]["upload"], upload_uuid) - except RuntimeError as error: - return jsonify(status=False, errors=[{"id": UPLOAD_ERROR, "msg": str(error)}]) - return jsonify(status=True, upload_id=upload_uuid, log=upload.upload_log)
- -
[docs]@v1_api.route("/upload/reset", defaults={"upload_uuid": ""}, methods=["POST"]) -@v1_api.route("/upload/reset/<upload_uuid>", methods=["POST"]) -@checkparams([("upload_uuid", "", "no UUID given")]) -def v1_upload_reset(upload_uuid): - """Reset an upload so it can be attempted again - - **POST /api/v1/upload/reset/<upload_uuid>** - - Optionally pass in a new image name and/or new settings. - - Example request:: - - { - "image_name": "My renamed image", - "settings": { - "resource_group": "ROLL", - "storage_account_name": "ME", - "storage_container": "I", - "location": "AIN'T", - "subscription_id": "THE", - "client_id": "SHARPEST", - "secret": "TOOL", - "tenant": "IN" - } - } - - Example response:: - - { - "status": true, - "upload_id": "c75d5d62-9d26-42fc-a8ef-18bb14679fc7" - } - """ - if VALID_API_STRING.match(upload_uuid) is None: - error = {"id": INVALID_CHARS, "msg": "Invalid characters in API path"} - return jsonify(status=False, errors=[error]), 400 - - parsed = request.get_json(cache=False) - image_name = parsed.get("image_name") if parsed else None - settings = parsed.get("settings") if parsed else None - - try: - reset_upload(api.config["COMPOSER_CFG"]["upload"], upload_uuid, image_name, settings) - except RuntimeError as error: - return jsonify(status=False, errors=[{"id": UPLOAD_ERROR, "msg": str(error)}]) - return jsonify(status=True, upload_id=upload_uuid)
- -
[docs]@v1_api.route("/upload/cancel", defaults={"upload_uuid": ""}, methods=["DELETE"]) -@v1_api.route("/upload/cancel/<upload_uuid>", methods=["DELETE"]) -@checkparams([("upload_uuid", "", "no UUID given")]) -def v1_upload_cancel(upload_uuid): - """Cancel an upload that is either queued or in progress - - **DELETE /api/v1/upload/cancel/<upload_uuid>** - - Example response:: - - { - "status": true, - "upload_id": "037a3d56-b421-43e9-9935-c98350c89996" - } - """ - if VALID_API_STRING.match(upload_uuid) is None: - error = {"id": INVALID_CHARS, "msg": "Invalid characters in API path"} - return jsonify(status=False, errors=[error]), 400 - - try: - cancel_upload(api.config["COMPOSER_CFG"]["upload"], upload_uuid) - except RuntimeError as error: - return jsonify(status=False, errors=[{"id": UPLOAD_ERROR, "msg": str(error)}]) - return jsonify(status=True, upload_id=upload_uuid)
- -
[docs]@v1_api.route("/upload/providers") -def v1_upload_providers(): - """Return the information about all upload providers, including their - display names, expected settings, and saved profiles. Refer to the - `resolve_provider` function. - - **GET /api/v1/upload/providers** - - Example response:: - - { - "providers": { - "azure": { - "display": "Azure", - "profiles": { - "default": { - "client_id": "example", - ... - } - }, - "settings-info": { - "client_id": { - "display": "Client ID", - "placeholder": "", - "regex": "", - "type": "string" - }, - ... - }, - "supported_types": ["vhd"] - }, - ... - } - } - """ - - ucfg = api.config["COMPOSER_CFG"]["upload"] - - provider_names = list_providers(ucfg) - - def get_provider_info(provider_name): - provider = resolve_provider(ucfg, provider_name) - provider["profiles"] = load_profiles(ucfg, provider_name) - return provider - - providers = {provider_name: get_provider_info(provider_name) - for provider_name in provider_names} - return jsonify(status=True, providers=providers)
- -
[docs]@v1_api.route("/upload/providers/save", methods=["POST"]) -def v1_providers_save(): - """Save provider settings as a profile for later use - - **POST /api/v1/upload/providers/save** - - Example request:: - - { - "provider": "azure", - "profile": "my-profile", - "settings": { - "resource_group": "SOMEBODY", - "storage_account_name": "ONCE", - "storage_container": "TOLD", - "location": "ME", - "subscription_id": "THE", - "client_id": "WORLD", - "secret": "IS", - "tenant": "GONNA" - } - } - - Saving to an existing profile will overwrite it. - - Example response:: - - { - "status": true - } - """ - parsed = request.get_json(cache=False) - - if parsed is None: - return jsonify(status=False, errors=[{"id": MISSING_POST, "msg": "Missing POST body"}]), 400 - - try: - provider_name = parsed["provider"] - profile = parsed["profile"] - settings = parsed["settings"] - except KeyError as e: - error = {"id": UPLOAD_ERROR, "msg": f'Missing parameter {str(e)}!'} - return jsonify(status=False, errors=[error]), 400 - try: - save_settings(api.config["COMPOSER_CFG"]["upload"], provider_name, profile, settings) - except Exception as e: - error = {"id": UPLOAD_ERROR, "msg": str(e)} - return jsonify(status=False, errors=[error]) - return jsonify(status=True)
- -
[docs]@v1_api.route("/upload/providers/delete", defaults={"provider_name": "", "profile": ""}, methods=["DELETE"]) -@v1_api.route("/upload/providers/delete/<provider_name>/<profile>", methods=["DELETE"]) -@checkparams([("provider_name", "", "no provider name given"), ("profile", "", "no profile given")]) -def v1_providers_delete(provider_name, profile): - """Delete a provider's profile settings - - **DELETE /api/v1/upload/providers/delete/<provider_name>/<profile>** - - Example response:: - - { - "status": true - } - """ - if None in (VALID_API_STRING.match(provider_name), VALID_API_STRING.match(profile)): - error = {"id": INVALID_CHARS, "msg": "Invalid characters in API path"} - return jsonify(status=False, errors=[error]), 400 - - try: - delete_profile(api.config["COMPOSER_CFG"]["upload"], provider_name, profile) - except Exception as e: - error = {"id": UPLOAD_ERROR, "msg": str(e)} - return jsonify(status=False, errors=[error]) - return jsonify(status=True)
-
- - - - - - - - - - - - \ No newline at end of file diff --git a/docs/html/_modules/pylorax/api/workspace.html b/docs/html/_modules/pylorax/api/workspace.html deleted file mode 100644 index 0d6cc6e3..00000000 --- a/docs/html/_modules/pylorax/api/workspace.html +++ /dev/null @@ -1,329 +0,0 @@ - - - - - - - - - - - pylorax.api.workspace — Lorax 35.0 documentation - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Source code for pylorax.api.workspace

-#
-# Copyright (C) 2017  Red Hat, Inc.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-#
-import os
-
-from pylorax.api.recipes import recipe_filename, recipe_from_toml, RecipeFileError
-from pylorax.sysutils import joinpaths
-
-
-
[docs]def workspace_dir(repo, branch): - """Create the workspace's path from a Repository and branch - - :param repo: Open repository - :type repo: Git.Repository - :param branch: Branch name - :type branch: str - :returns: The path to the branch's workspace directory - :rtype: str - - """ - repo_path = repo.get_location().get_path() - return joinpaths(repo_path, "workspace", branch)
- - -
[docs]def workspace_read(repo, branch, recipe_name): - """Read a Recipe from the branch's workspace - - :param repo: Open repository - :type repo: Git.Repository - :param branch: Branch name - :type branch: str - :param recipe_name: The name of the recipe - :type recipe_name: str - :returns: The workspace copy of the recipe, or None if it doesn't exist - :rtype: Recipe or None - :raises: RecipeFileError - """ - ws_dir = workspace_dir(repo, branch) - if not os.path.isdir(ws_dir): - os.makedirs(ws_dir) - filename = joinpaths(ws_dir, recipe_filename(recipe_name)) - if not os.path.exists(filename): - return None - try: - with open(filename, 'rb') as f: - recipe = recipe_from_toml(f.read().decode("UTF-8")) - except IOError: - raise RecipeFileError - return recipe
- - -
[docs]def workspace_write(repo, branch, recipe): - """Write a recipe to the workspace - - :param repo: Open repository - :type repo: Git.Repository - :param branch: Branch name - :type branch: str - :param recipe: The recipe to write to the workspace - :type recipe: Recipe - :returns: None - :raises: IO related errors - """ - ws_dir = workspace_dir(repo, branch) - if not os.path.isdir(ws_dir): - os.makedirs(ws_dir) - filename = joinpaths(ws_dir, recipe.filename) - open(filename, 'wb').write(recipe.toml().encode("UTF-8"))
- - -
[docs]def workspace_filename(repo, branch, recipe_name): - """Return the path and filename of the workspace recipe - - :param repo: Open repository - :type repo: Git.Repository - :param branch: Branch name - :type branch: str - :param recipe_name: The name of the recipe - :type recipe_name: str - :returns: workspace recipe path and filename - :rtype: str - """ - ws_dir = workspace_dir(repo, branch) - return joinpaths(ws_dir, recipe_filename(recipe_name))
- - -
[docs]def workspace_exists(repo, branch, recipe_name): - """Return True if the workspace recipe exists - - :param repo: Open repository - :type repo: Git.Repository - :param branch: Branch name - :type branch: str - :param recipe_name: The name of the recipe - :type recipe_name: str - :returns: True if the file exists - :rtype: bool - """ - return os.path.exists(workspace_filename(repo, branch, recipe_name))
- - -
[docs]def workspace_delete(repo, branch, recipe_name): - """Delete the recipe from the workspace - - :param repo: Open repository - :type repo: Git.Repository - :param branch: Branch name - :type branch: str - :param recipe_name: The name of the recipe - :type recipe_name: str - :returns: None - :raises: IO related errors - """ - filename = workspace_filename(repo, branch, recipe_name) - if os.path.exists(filename): - os.unlink(filename)
-
- - - - - - - - - - - - \ No newline at end of file diff --git a/docs/html/_modules/pylorax/base.html b/docs/html/_modules/pylorax/base.html index 885e0dc9..6ce6b2b9 100644 --- a/docs/html/_modules/pylorax/base.html +++ b/docs/html/_modules/pylorax/base.html @@ -1,38 +1,38 @@ - - + - + - + + + pylorax.base — Lorax 35.1 documentation + + + + + + - pylorax.base — Lorax 35.0 documentation - + - - - - - - @@ -58,7 +58,7 @@
- 35.0 + 35.1
@@ -75,6 +75,7 @@ + + @@ -130,11 +131,13 @@ + +
  • @@ -478,7 +481,7 @@ # Make sure the process is really finished (it should be, since it was started from a subprocess call) # and then remove the pid file. if os.path.exists("/var/run/anaconda.pid"): - # lorax-composer runs anaconda using unshare so the pid is always 1 + # anaconda may be started using unshare so the pid is always 1 if open("/var/run/anaconda.pid").read().strip() == "1": os.unlink("/var/run/anaconda.pid") @@ -843,20 +846,25 @@

composer-cli

-
-
Authors
-

Brian C. Lane <bcl@redhat.com>

-
-
-

composer-cli is an interactive tool for use with a WELDR API server, -managing blueprints, exploring available packages, and building new images. As -of Fedora 34, osbuild-composer <https://osbuild.org> is the recommended -server.

-

It requires the server to be installed on the local system, and the user -running it needs to be a member of the weldr group.

-
-

composer-cli cmdline arguments

-

Lorax Composer commandline tool

-

-
usage: composer-cli [-h] [-j] [-s SOCKET] [--log LOG] [-a APIVER] [--test TESTMODE] [-V] ...
-
-
-
-

Positional Arguments

-
-
args
-
-
-
-
-

Named Arguments

-
-
-j, --json
-

Output the raw JSON response instead of the normal output.

-

Default: False

-
-
-s, --socket
-

Path to the socket file to listen on

-

Default: "/run/weldr/api.socket"

-
-
--log
-

Path to logfile (./composer-cli.log)

-
-
-a, --api
-

API Version to use

-

Default: "1"

-
-
--test
-

Pass test mode to compose. 1=Mock compose with fail. 2=Mock compose with finished.

-

Default: 0

-
-
-V
-

show program's version number and exit

-

Default: False

-
-
-
-

-
compose start [--size XXXX] <BLUEPRINT> <TYPE> [<IMAGE-NAME> <PROVIDER> <PROFILE> | <IMAGE-NAME> <PROFILE.TOML>]

Start a compose using the selected blueprint and output type. Optionally start an upload. ---size is supported by osbuild-composer, and is in MiB.

-
-
compose start-ostree [--size XXXX] [--parent PARENT] [--ref REF] [--url url] <BLUEPRINT> <TYPE> [<IMAGE-NAME> <PROFILE.TOML>]

Start an ostree compose using the selected blueprint and output type. Optionally start an upload. This command -is only supported by osbuild-composer. --size is in MiB.

-
-
compose types

List the supported output types.

-
-
compose status

List the status of all running and finished composes.

-
-
compose list [waiting|running|finished|failed]

List basic information about composes.

-
-
compose log <UUID> [<SIZE>]

Show the last SIZE kB of the compose log.

-
-
compose cancel <UUID>

Cancel a running compose and delete any intermediate results.

-
-
compose delete <UUID,...>

Delete the listed compose results.

-
-
compose info <UUID>

Show detailed information on the compose.

-
-
compose metadata <UUID>

Download the metadata used to create the compose to <uuid>-metadata.tar

-
-
compose logs <UUID>

Download the compose logs to <uuid>-logs.tar

-
-
compose results <UUID>

Download all of the compose results; metadata, logs, and image to <uuid>.tar

-
-
compose image <UUID>

Download the output image from the compose. Filename depends on the type.

-
-
blueprints list

List the names of the available blueprints.

-
-
blueprints show <BLUEPRINT,...>

Display the blueprint in TOML format.

-
-
blueprints changes <BLUEPRINT,...>

Display the changes for each blueprint.

-
-
blueprints diff <BLUEPRINT> <FROM-COMMIT> <TO-COMMIT>

Display the differences between 2 versions of a blueprint. -FROM-COMMIT can be a commit hash or NEWEST -TO-COMMIT can be a commit hash, NEWEST, or WORKSPACE

-
-
blueprints save <BLUEPRINT,...>

Save the blueprint to a file, <BLUEPRINT>.toml

-
-
blueprints delete <BLUEPRINT>

Delete a blueprint from the server

-
-
blueprints depsolve <BLUEPRINT,...>

Display the packages needed to install the blueprint.

-
-
blueprints push <BLUEPRINT>

Push a blueprint TOML file to the server.

-
-
blueprints freeze <BLUEPRINT,...>

Display the frozen blueprint's modules and packages.

-
-
blueprints freeze show <BLUEPRINT,...>

Display the frozen blueprint in TOML format.

-
-
blueprints freeze save <BLUEPRINT,...>

Save the frozen blueprint to a file, <blueprint-name>.frozen.toml.

-
-
blueprints tag <BLUEPRINT>

Tag the most recent blueprint commit as a release.

-
-
blueprints undo <BLUEPRINT> <COMMIT>

Undo changes to a blueprint by reverting to the selected commit.

-
-
blueprints workspace <BLUEPRINT>

Push the blueprint TOML to the temporary workspace storage.

-
-
modules list

List the available modules.

-
-
projects list

List the available projects.

-
-
projects info <PROJECT,...>

Show details about the listed projects.

-
-
sources list

List the available sources

-
-
sources info <SOURCE-NAME,...>

Details about the source.

-
-
sources add <SOURCE.TOML>

Add a package source to the server.

-
-
sources change <SOURCE.TOML>

Change an existing source

-
-
sources delete <SOURCE-NAME>

Delete a package source.

-
-
-

status show Show API server status.

-
-
upload info <UPLOAD-UUID>

Details about an upload

-
-
upload start <BUILD-UUID> <IMAGE-NAME> [<PROVIDER> <PROFILE>|<PROFILE.TOML>]

Upload a build image to the selected provider.

-
-
upload log <UPLOAD-UUID>

Show the upload log

-
-
upload cancel <UPLOAD-UUID>

Cancel an upload that is queued or in progress

-
-
upload delete <UPLOAD-UUID>

Delete the upload and remove it from the build

-
-
upload reset <UPLOAD-UUID>

Reset the upload so that it can be tried again

-
-
providers list <PROVIDER>

List the available providers, or list the <provider's> available profiles

-
-
providers show <PROVIDER> <PROFILE>

show the details of a specific provider's profile

-
-
providers push <PROFILE.TOML>

Add a new profile, or overwrite an existing one

-
-
providers save <PROVIDER> <PROFILE>

Save the profile's details to a TOML file named <PROFILE>.toml

-
-
providers delete <PROVIDER> <PROFILE>

Delete a profile from a provider

-
-
-

-
-
-

Edit a Blueprint

-

Start out by listing the available blueprints using composer-cli blueprints -list, pick one and save it to the local directory by running composer-cli -blueprints save http-server.

-

Edit the file (it will be saved with a .toml extension) and change the -description, add a package or module to it. Send it back to the server by -running composer-cli blueprints push http-server.toml. You can verify that it was -saved by viewing the changelog - composer-cli blueprints changes http-server.

-

See the Example Blueprint for an example.

-
-
-

Build an image

-

Build a qcow2 disk image from this blueprint by running composer-cli -compose start http-server qcow2. It will print a UUID that you can use to -keep track of the build. You can also cancel the build if needed.

-

The available image types are displayed by composer-cli compose types. -Currently these consist of: alibaba, ami, ext4-filesystem, google, hyper-v, -live-iso, openstack, partitioned-disk, qcow2, tar, vhd, vmdk

-

You can optionally start an upload of the finished image, see Image Uploads for -more information.

-
-
-

Monitor the build status

-

Monitor it using composer-cli compose status, which will show the status of -all the builds on the system. You can view the end of the anaconda build logs -once it is in the RUNNING state using composer-cli compose log UUID -where UUID is the UUID returned by the start command.

-

Once the build is in the FINISHED state you can download the image.

-
-
-

Download the image

-

Downloading the final image is done with composer-cli compose image UUID, which will -save the qcow2 image as UUID-disk.qcow2. You can then use it to boot a VM like this:

-
qemu-kvm --name test-image -m 1024 -hda ./UUID-disk.qcow2
-
-
-
-
-

Image Uploads

-

composer-cli can upload the images to a number of services, including AWS, -OpenStack, and vSphere. The upload can be started when the build is finished, -by using composer-cli compose start ... or an existing image can be uploaded -with composer-cli upload start .... In order to access the service you need -to pass authentication details to composer-cli using a TOML file, or reference -a previously saved profile.

-
-

Note

-

With osbuild-composer you can only specify upload targets during -the compose process.

-
-
-
-

Providers

-

Providers are the services with Ansible playbook support under -/usr/share/lorax/lifted/providers/; you will need to gather some provider-specific -information in order to authenticate with each one. You can view the -required fields using composer-cli providers template <PROVIDER>, e.g. for AWS -you would run:

-
composer-cli upload template aws
-
-
-

The output looks like this:

-
provider = "aws"
-
-[settings]
-aws_access_key = "AWS Access Key"
-aws_bucket = "AWS Bucket"
-aws_region = "AWS Region"
-aws_secret_key = "AWS Secret Key"
-
-
-

Save this into an aws-credentials.toml file and use it when running start.

-
-

AWS

-

The access key and secret key can be created by going to the -IAM->Users->Security Credentials section and creating a new access key. The -secret key will only be shown when it is first created so make sure to record -it in a secure place. The region should be the region that you want to use the -AMI in, and the bucket can be an existing bucket, or a new one, following the -normal AWS bucket naming rules. It will be created if it doesn't already exist.

-

When uploading the image it is first uploaded to the s3 bucket, and then -converted to an AMI. If the conversion is successful the s3 object will be -deleted. If it fails, re-trying after correcting the problem will re-use the -object if you have not deleted it in the meantime, speeding up the process.

-
-
-
-

Profiles

-

Profiles store the authentication settings associated with a specific provider. -Providers can have multiple profiles, as long as their names are unique. For -example, you may have one profile for testing and another for production -uploads.

-

Profiles are created by pushing the provider settings template to the server using -composer-cli providers push <PROFILE.TOML> where PROFILE.TOML is the same as the -provider template, but with the addition of a profile field. For example, an AWS -profile named test-uploads would look like this:

-
provider = "aws"
-profile = "test-uploads"
-
-[settings]
-aws_access_key = "AWS Access Key"
-aws_bucket = "AWS Bucket"
-aws_region = "AWS Region"
-aws_secret_key = "AWS Secret Key"
-
-
-

You can view the profile by using composer-cli providers show aws test-uploads.

Build an image and upload results
---------------------------------

If you have a profile named ``test-uploads``::

    composer-cli compose start example-http-server ami "http image" aws test-uploads

Or if you have the settings stored in a TOML file::

    composer-cli compose start example-http-server ami "http image" aws-settings.toml

It will return the UUID of the image build, and the UUID of the upload. Once
the build has finished successfully it will start the upload process, which you
can monitor with ``composer-cli upload info <UPLOAD-UUID>``.

You can also view the upload logs from the Ansible playbook with::

    composer-cli upload log <UPLOAD-UUID>

The type of the image must match the type supported by the provider.

Upload an existing image
------------------------

You can upload previously built images, as long as they are in the ``FINISHED`` state, using
``composer-cli upload start ...``. If you have a profile named ``test-uploads``::

    composer-cli upload start <UUID> "http-image" aws test-uploads

Or if you have the settings stored in a TOML file::

    composer-cli upload start <UUID> "http-image" aws-settings.toml

This will output the UUID of the upload, which can then be used to monitor the status in the same way
described above.

Debugging
---------

There are a couple of arguments that can be helpful when debugging problems.
These are only meant for debugging and should not be used to script access to
the API. If you need to do that you can communicate with it directly in the
language of your choice.

``--json`` will return the server's response as a nicely formatted json output
instead of printing what the command would usually print.

``--test=1`` will cause a compose start to start creating an image, and then
end with a failed state.

``--test=2`` will cause a compose to start and then end with a finished state,
without actually composing anything.

Blueprint Reference
-------------------

Blueprints are simple text files in TOML format that describe
which packages, and what versions, to install into the image. They can also define a limited set
of customizations to make to the final image.

A basic blueprint looks like this::

    name = "base"
    description = "A base system with bash"
    version = "0.0.1"

    [[packages]]
    name = "bash"
    version = "4.4.*"

The ``name`` field is the name of the blueprint. It can contain spaces, but they will be converted to ``-``
when it is written to disk. It should be short and descriptive.
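The space-to-dash conversion can be sketched in the shell (the blueprint name here is hypothetical; only the conversion itself is the documented behavior):

```shell
# A blueprint named "http server" is saved to disk as "http-server.toml";
# tr swaps each space for a dash, just as the server does for the filename.
name="http server"
filename="$(echo "$name" | tr ' ' '-').toml"
echo "$filename"
```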

``description`` can be a longer description of the blueprint, it is only used for display purposes.

``version`` is a semver compatible version number. If
a new blueprint is uploaded with the same ``version`` the server will
automatically bump the PATCH level of the ``version``. If the ``version``
doesn't match it will be used as is. eg. Uploading a blueprint with ``version``
set to ``0.1.0`` when the existing blueprint ``version`` is ``0.0.1`` will
result in the new blueprint being stored as ``version`` ``0.1.0``.

[[packages]] and [[modules]]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

These entries describe the package names and matching version glob to be installed into the image.

The names must match the names exactly, and the versions can be an exact match
or a filesystem-like glob of the version using ``*`` wildcards and ``?``
character matching.

.. note::
    Currently there are no differences between ``packages`` and ``modules``
    in ``osbuild-composer``. Both are treated like an rpm package dependency.

For example, to install ``tmux-2.9a`` and ``openssh-server-8.*``, you would add
this to your blueprint::

    [[packages]]
    name = "tmux"
    version = "2.9a"

    [[packages]]
    name = "openssh-server"
    version = "8.*"
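The version globs use the same ``*`` and ``?`` rules as shell filename patterns, so shell ``case`` matching can illustrate them (the version strings below are only examples):

```shell
# Filesystem-like glob matching, as used for blueprint version fields.
# `*` matches any run of characters, `?` matches exactly one.
match() {
    # usage: match VERSION PATTERN -> prints "yes" or "no"
    case "$1" in
        $2) echo "yes" ;;
        *)  echo "no" ;;
    esac
}
match "8.4p1" "8.*"    # yes
match "2.9a"  "2.9?"   # yes
match "7.9"   "8.*"    # no
```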
[[groups]]
~~~~~~~~~~

The ``groups`` entries describe a group of packages to be installed into the image. Package groups are
defined in the repository metadata. Each group has a descriptive name used primarily for display
in user interfaces, and an ID more commonly used in kickstart files. Here, the ID is the expected
way of listing a group.

Groups have three different ways of categorizing their packages: mandatory, default, and optional.
For purposes of blueprints, mandatory and default packages will be installed. There is no mechanism
for selecting optional packages.

For example, if you want to install the ``anaconda-tools`` group you would add this to your
blueprint::

    [[groups]]
    name = "anaconda-tools"

``groups`` is a TOML list, so each group needs to be listed separately, like ``packages`` but with
no version number.

Customizations
~~~~~~~~~~~~~~

The ``[customizations]`` section can be used to configure the hostname of the final image. eg.::

    [customizations]
    hostname = "baseimage"

This is optional and may be left out to use the defaults.

[customizations.kernel]
^^^^^^^^^^^^^^^^^^^^^^^

This allows you to append arguments to the bootloader's kernel commandline. This will not have any
effect on ``tar`` or ``ext4-filesystem`` images since they do not include a bootloader.

For example::

    [customizations.kernel]
    append = "nosmt=force"
[[customizations.sshkey]]
^^^^^^^^^^^^^^^^^^^^^^^^^

Set an existing user's ssh key in the final image::

    [[customizations.sshkey]]
    user = "root"
    key = "PUBLIC SSH KEY"

The key will be added to the user's ``authorized_keys`` file.

.. warning::
    ``key`` expects the entire content of ``~/.ssh/id_rsa.pub``

[[customizations.user]]
^^^^^^^^^^^^^^^^^^^^^^^

Add a user to the image, and/or set their ssh key.
All fields for this section are optional except for the ``name``, here is a complete example::

    [[customizations.user]]
    name = "admin"
    description = "Administrator account"
    password = "$6$CHO2$3rN8eviE2t50lmVyBYihTgVRHcaecmeCk31L..."
    key = "PUBLIC SSH KEY"
    home = "/srv/widget/"
    shell = "/usr/bin/bash"
    groups = ["widget", "users", "wheel"]
    uid = 1200
    gid = 1200

If the password starts with ``$6$``, ``$5$``, or ``$2b$`` it will be stored as
an encrypted password. Otherwise it will be treated as a plain text password.
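One way to produce such an encrypted password is with ``openssl passwd`` (this tool choice is an assumption; any SHA-512 crypt generator produces a usable ``$6$`` hash):

```shell
# Generate a SHA-512 crypt hash suitable for the blueprint's password field.
# "PlainPassword" is a placeholder -- substitute your own secret.
crypted=$(openssl passwd -6 "PlainPassword")
echo "password = \"$crypted\""
```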

.. warning::
    ``key`` expects the entire content of ``~/.ssh/id_rsa.pub``

[[customizations.group]]
^^^^^^^^^^^^^^^^^^^^^^^^

Add a group to the image. ``name`` is required and ``gid`` is optional::

    [[customizations.group]]
    name = "widget"
    gid = 1130
[customizations.timezone]
^^^^^^^^^^^^^^^^^^^^^^^^^

Customizing the timezone and the NTP servers to use for the system::

    [customizations.timezone]
    timezone = "US/Eastern"
    ntpservers = ["0.north-america.pool.ntp.org", "1.north-america.pool.ntp.org"]

The values supported by ``timezone`` can be listed by running ``timedatectl list-timezones``.

If no timezone is setup the system will default to using UTC. The ntp servers are also
optional and will default to using the distribution defaults, which are fine for most uses.

In some image types there are already NTP servers setup, eg. Google cloud image, and they
cannot be overridden because they are required to boot in the selected environment. But the
timezone will be updated to the one selected in the blueprint.

[customizations.locale]
^^^^^^^^^^^^^^^^^^^^^^^

Customize the locale settings for the system::

    [customizations.locale]
    languages = ["en_US.UTF-8"]
    keyboard = "us"

The values supported by ``languages`` can be listed by running ``localectl list-locales`` from
the command line.

The values supported by ``keyboard`` can be listed by running ``localectl list-keymaps`` from
the command line.

Multiple languages can be added. The first one becomes the
primary, and the others are added as secondary. One or the other of ``languages``
or ``keyboard`` must be included (or both) in the section.

[customizations.firewall]
^^^^^^^^^^^^^^^^^^^^^^^^^

By default the firewall blocks all access except for services that enable their ports explicitly,
like ``sshd``. This command can be used to open other ports or services. Ports are configured using
the ``port:protocol`` format::

    [customizations.firewall]
    ports = ["22:tcp", "80:tcp", "imap:tcp", "53:tcp", "53:udp"]

Numeric ports, or their names from ``/etc/services``, can be used in the ``ports`` enabled/disabled lists.

The blueprint settings extend any existing settings in the image templates, so if ``sshd`` is
already enabled it will extend the list of ports with the ones listed by the blueprint.

If the distribution uses ``firewalld`` you can specify services listed by ``firewall-cmd --get-services``
in a ``customizations.firewall.services`` section::

    [customizations.firewall.services]
    enabled = ["ftp", "ntp", "dhcp"]
    disabled = ["telnet"]

Remember that the ``firewall.services`` are different from the names in ``/etc/services``.

Both are optional, if they are not used leave them out or set them to an empty list ``[]``. If you
only want the default firewall setup this section can be omitted from the blueprint.

.. note::
    The Google and OpenStack templates explicitly disable the firewall for their environment.
    This cannot be overridden by the blueprint.

[customizations.services]
^^^^^^^^^^^^^^^^^^^^^^^^^

This section can be used to control which services are enabled at boot time.
Some image types already have services enabled or disabled in order for the
image to work correctly, and cannot be overridden. eg. ``ami`` requires
``sshd``, ``chronyd``, and ``cloud-init``. Without them the image will not
boot. Blueprint services are added to, not replacing, the list already in the
templates, if any.

The service names are systemd service units. You may specify any systemd unit
file accepted by ``systemctl enable``, eg. ``cockpit.socket``::

    [customizations.services]
    enabled = ["sshd", "cockpit.socket", "httpd"]
    disabled = ["postfix", "telnetd"]
[[repos.git]]
~~~~~~~~~~~~~

.. note::
    Currently ``osbuild-composer`` does not support ``repos.git``

The ``[[repos.git]]`` entries are used to add files from a git repository
to the created image. The repository is cloned, the specified ``ref`` is checked out,
and an rpm is created to install the files to a destination path. The rpm includes a summary
with the details of the repository and reference used to create it. The rpm is also included in the
image build metadata.

To create an rpm named ``server-config-1.0-1.noarch.rpm`` you would add this to your blueprint::

    [[repos.git]]
    rpmname = "server-config"
    rpmversion = "1.0"
    rpmrelease = "1"
    summary = "Setup files for server deployment"
    repo = "PATH OF GIT REPO TO CLONE"
    ref = "v1.0"
    destination = "/opt/server/"

* ``rpmname`` -- Name of the rpm to create, also used as the prefix name in the tar archive
* ``rpmversion`` -- Version of the rpm, eg. "1.0.0"
* ``rpmrelease`` -- Release of the rpm, eg. "1"
* ``summary`` -- Summary string for the rpm
* ``repo`` -- URL of the git repo to clone and create the archive from
* ``ref`` -- Git reference to check out. eg. origin/branch-name, git tag, or git commit hash
* ``destination`` -- Path to install the ``/`` of the git repo at when installing the rpm

An rpm will be created with the contents of the git repository referenced, with the files
being installed under ``/opt/server/`` in this case.

``ref`` can be any valid git reference for use with ``git archive``. eg. to use the head
of a branch set it to ``origin/branch-name``, a tag name, or a commit hash.

Note that the repository is cloned in full each time a build is started, so pointing to a
repository with a large amount of history may take a while to clone and use a significant
amount of disk space. The clone is temporary and is removed once the rpm is created.
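Conceptually the server clones the repository and archives it at the given ``ref``; a rough local sketch of that clone-and-archive step (the throwaway repository and file names here are invented for illustration):

```shell
# Sketch of what a [[repos.git]] entry does with the repository
# (local stand-in repo; real builds clone the URL given in `repo`).
set -e
repo=$(mktemp -d)
git init -q "$repo"
echo "server config" > "$repo/app.conf"
git -C "$repo" add app.conf
git -C "$repo" -c user.name=doc -c user.email=doc@example.com commit -qm "initial"
git -C "$repo" tag v1.0
# The tree at ref v1.0 becomes the rpm payload installed under `destination`;
# list the archived paths, prefixed like the tar archive the rpm is built from.
git -C "$repo" archive --prefix=server-config-1.0/ v1.0 | tar -tf -
```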

Example Blueprint
-----------------

This example blueprint will install the ``tmux``, ``git``, and ``vim-enhanced``
packages. It will set the ``root`` ssh key, add the ``widget`` and ``admin``
users as well as a ``students`` group::

    name = "example-custom-base"
    description = "A base system with customizations"
    version = "0.0.1"

    [[packages]]
    name = "tmux"
    version = "*"

    [[packages]]
    name = "git"
    version = "*"

    [[packages]]
    name = "vim-enhanced"
    version = "*"

    [customizations]
    hostname = "custombase"

    [[customizations.sshkey]]
    user = "root"
    key = "A SSH KEY FOR ROOT"

    [[customizations.user]]
    name = "widget"
    description = "Widget process user account"
    home = "/srv/widget/"
    shell = "/usr/bin/false"
    groups = ["dialout", "users"]

    [[customizations.user]]
    name = "admin"
    description = "Widget admin account"
    password = "$6$CHO2$3rN8eviE2t50lmVyBYihTgVRHcaecmeCk31LeOUleVK/R/aeWVHVZDi26zAH.o0ywBKH9Tc0/wm7sW/q39uyd1"
    home = "/srv/widget/"
    shell = "/usr/bin/bash"
    groups = ["widget", "users", "students"]
    uid = 1200

    [[customizations.user]]
    name = "plain"
    password = "simple plain password"

    [[customizations.user]]
    name = "bart"
    key = "SSH KEY FOR BART"
    groups = ["students"]

    [[customizations.group]]
    name = "widget"

    [[customizations.group]]
    name = "students"
composer.cli package
--------------------

Submodules
~~~~~~~~~~

composer.cli.blueprints module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
composer.cli.blueprints.blueprints_changes(socket_path, api_version, args, show_json=False)
    Display the changes for each of the blueprints

    Parameters:

    * socket_path (str) -- Path to the Unix socket to use for API communication
    * api_version (str) -- Version of the API to talk to. eg. "0"
    * args (list of str) -- List of remaining arguments from the cmdline
    * show_json (bool) -- Set to True to show the JSON output instead of the human readable output

    ``blueprints changes <blueprint,...>`` Display the changes for each blueprint.

composer.cli.blueprints.blueprints_cmd(opts)
    Process blueprints commands

    Parameters:

    * opts (argparse.Namespace) -- Cmdline arguments

    Returns: Value to return from sys.exit()

    Return type: int

    This dispatches the blueprints commands to a function

composer.cli.blueprints.blueprints_delete(socket_path, api_version, args, show_json=False)
    Delete a blueprint from the server

    Parameters:

    * socket_path (str) -- Path to the Unix socket to use for API communication
    * api_version (str) -- Version of the API to talk to. eg. "0"
    * args (list of str) -- List of remaining arguments from the cmdline
    * show_json (bool) -- Set to True to show the JSON output instead of the human readable output

    ``delete <blueprint>`` Delete a blueprint from the server

composer.cli.blueprints.blueprints_depsolve(socket_path, api_version, args, show_json=False)
    Display the packages needed to install the blueprint

    Parameters:

    * socket_path (str) -- Path to the Unix socket to use for API communication
    * api_version (str) -- Version of the API to talk to. eg. "0"
    * args (list of str) -- List of remaining arguments from the cmdline
    * show_json (bool) -- Set to True to show the JSON output instead of the human readable output

    ``blueprints depsolve <blueprint,...>`` Display the packages needed to install the blueprint.

composer.cli.blueprints.blueprints_diff(socket_path, api_version, args, show_json=False)
    Display the differences between 2 versions of a blueprint

    Parameters:

    * socket_path (str) -- Path to the Unix socket to use for API communication
    * api_version (str) -- Version of the API to talk to. eg. "0"
    * args (list of str) -- List of remaining arguments from the cmdline
    * show_json (bool) -- Set to True to show the JSON output instead of the human readable output

    ``blueprints diff <blueprint-name> <from-commit> <to-commit>`` Display the differences between 2 versions of a blueprint.

    * ``<from-commit>`` -- Commit hash or NEWEST
    * ``<to-commit>`` -- Commit hash, NEWEST, or WORKSPACE

composer.cli.blueprints.blueprints_freeze(socket_path, api_version, args, show_json=False)
    Handle the blueprints freeze commands

    Parameters:

    * socket_path (str) -- Path to the Unix socket to use for API communication
    * api_version (str) -- Version of the API to talk to. eg. "0"
    * args (list of str) -- List of remaining arguments from the cmdline
    * show_json (bool) -- Set to True to show the JSON output instead of the human readable output

    * ``blueprints freeze <blueprint,...>`` -- Display the frozen blueprint's modules and packages.
    * ``blueprints freeze show <blueprint,...>`` -- Display the frozen blueprint in TOML format.
    * ``blueprints freeze save <blueprint,...>`` -- Save the frozen blueprint to a file, ``<blueprint-name>.frozen.toml``.
composer.cli.blueprints.blueprints_freeze_save(socket_path, api_version, args, show_json=False)
    Save the frozen blueprint to a TOML file

    Parameters:

    * socket_path (str) -- Path to the Unix socket to use for API communication
    * api_version (str) -- Version of the API to talk to. eg. "0"
    * args (list of str) -- List of remaining arguments from the cmdline
    * show_json (bool) -- Set to True to show the JSON output instead of the human readable output

    ``blueprints freeze save <blueprint,...>`` Save the frozen blueprint to a file, ``<blueprint-name>.frozen.toml``.

composer.cli.blueprints.blueprints_freeze_show(socket_path, api_version, args, show_json=False)
    Show the frozen blueprint in TOML format

    Parameters:

    * socket_path (str) -- Path to the Unix socket to use for API communication
    * api_version (str) -- Version of the API to talk to. eg. "0"
    * args (list of str) -- List of remaining arguments from the cmdline
    * show_json (bool) -- Set to True to show the JSON output instead of the human readable output

    ``blueprints freeze show <blueprint,...>`` Display the frozen blueprint in TOML format.

composer.cli.blueprints.blueprints_list(socket_path, api_version, args, show_json=False)
    Output the list of available blueprints

    Parameters:

    * socket_path (str) -- Path to the Unix socket to use for API communication
    * api_version (str) -- Version of the API to talk to. eg. "0"
    * args (list of str) -- List of remaining arguments from the cmdline
    * show_json (bool) -- Set to True to show the JSON output instead of the human readable output

    ``blueprints list``

composer.cli.blueprints.blueprints_push(socket_path, api_version, args, show_json=False)
    Push a blueprint TOML file to the server, updating the blueprint

    Parameters:

    * socket_path (str) -- Path to the Unix socket to use for API communication
    * api_version (str) -- Version of the API to talk to. eg. "0"
    * args (list of str) -- List of remaining arguments from the cmdline
    * show_json (bool) -- Set to True to show the JSON output instead of the human readable output

    ``push <blueprint>`` Push a blueprint TOML file to the server.

composer.cli.blueprints.blueprints_save(socket_path, api_version, args, show_json=False)
    Save the blueprint to a TOML file

    Parameters:

    * socket_path (str) -- Path to the Unix socket to use for API communication
    * api_version (str) -- Version of the API to talk to. eg. "0"
    * args (list of str) -- List of remaining arguments from the cmdline
    * show_json (bool) -- Set to True to show the JSON output instead of the human readable output

    ``blueprints save <blueprint,...>`` Save the blueprint to a file, ``<blueprint-name>.toml``
composer.cli.blueprints.blueprints_show(socket_path, api_version, args, show_json=False)
    Show the blueprints, in TOML format

    Parameters:

    * socket_path (str) -- Path to the Unix socket to use for API communication
    * api_version (str) -- Version of the API to talk to. eg. "0"
    * args (list of str) -- List of remaining arguments from the cmdline
    * show_json (bool) -- Set to True to show the JSON output instead of the human readable output

    ``blueprints show <blueprint,...>`` Display the blueprint in TOML format.

    Multiple blueprints will be separated by a blank line.

composer.cli.blueprints.blueprints_tag(socket_path, api_version, args, show_json=False)
    Tag the most recent blueprint commit as a release

    Parameters:

    * socket_path (str) -- Path to the Unix socket to use for API communication
    * api_version (str) -- Version of the API to talk to. eg. "0"
    * args (list of str) -- List of remaining arguments from the cmdline
    * show_json (bool) -- Set to True to show the JSON output instead of the human readable output

    ``blueprints tag <blueprint>`` Tag the most recent blueprint commit as a release.

composer.cli.blueprints.blueprints_undo(socket_path, api_version, args, show_json=False)
    Undo changes to a blueprint

    Parameters:

    * socket_path (str) -- Path to the Unix socket to use for API communication
    * api_version (str) -- Version of the API to talk to. eg. "0"
    * args (list of str) -- List of remaining arguments from the cmdline
    * show_json (bool) -- Set to True to show the JSON output instead of the human readable output

    ``blueprints undo <blueprint> <commit>`` Undo changes to a blueprint by reverting to the selected commit.

composer.cli.blueprints.blueprints_workspace(socket_path, api_version, args, show_json=False)
    Push the blueprint TOML to the temporary workspace storage

    Parameters:

    * socket_path (str) -- Path to the Unix socket to use for API communication
    * api_version (str) -- Version of the API to talk to. eg. "0"
    * args (list of str) -- List of remaining arguments from the cmdline
    * show_json (bool) -- Set to True to show the JSON output instead of the human readable output

    ``blueprints workspace <blueprint>`` Push the blueprint TOML to the temporary workspace storage.

composer.cli.blueprints.dict_names(lst)
    Return a comma-separated list of the dict's name/user fields

    Parameters:

    * d (dict) -- key/values

    Returns: String of the dict's keys and values

    Return type: str

    eg. ``root, norm``

composer.cli.blueprints.prettyCommitDetails(change, indent=4)
    Print the blueprint's change in a nice way

    Parameters:

    * change (dict) -- The individual blueprint change dict
    * indent (int) -- Number of spaces to indent

composer.cli.blueprints.pretty_dict(d)
    Return the dict as a human readable single line

    Parameters:

    * d (dict) -- key/values

    Returns: String of the dict's keys and values

    Return type: str

    eg. ``key="str", key="str1,str2", ...``

composer.cli.blueprints.pretty_diff_entry(diff)
    Generate a nice diff entry string.

    Parameters:

    * diff (dict) -- Difference entry dict

    Returns: Nice string

composer.cli.cmdline module
~~~~~~~~~~~~~~~~~~~~~~~~~~~

composer.cli.cmdline.composer_cli_parser()
    Return the ArgumentParser for composer-cli

composer.cli.compose module
~~~~~~~~~~~~~~~~~~~~~~~~~~~

composer.cli.compose.compose_cancel(socket_path, api_version, args, show_json=False, testmode=0, api=None)
    Cancel a running compose

    Parameters:

    * socket_path (str) -- Path to the Unix socket to use for API communication
    * api_version (str) -- Version of the API to talk to. eg. "0"
    * args (list of str) -- List of remaining arguments from the cmdline
    * show_json (bool) -- Set to True to show the JSON output instead of the human readable output
    * testmode (int) -- unused in this function

    ``compose cancel <uuid>``

    This will cancel a running compose. It does nothing if the compose has finished.

composer.cli.compose.compose_cmd(opts)
    Process compose commands

    Parameters:

    * opts (argparse.Namespace) -- Cmdline arguments

    Returns: Value to return from sys.exit()

    Return type: int

    This dispatches the compose commands to a function

    compose_cmd expects api to be passed. eg. ``{"version": 1, "backend": "lorax-composer"}``

composer.cli.compose.compose_delete(socket_path, api_version, args, show_json=False, testmode=0, api=None)
    Delete a finished compose's results

    Parameters:

    * socket_path (str) -- Path to the Unix socket to use for API communication
    * api_version (str) -- Version of the API to talk to. eg. "0"
    * args (list of str) -- List of remaining arguments from the cmdline
    * show_json (bool) -- Set to True to show the JSON output instead of the human readable output
    * testmode (int) -- unused in this function

    ``compose delete <uuid,...>``

    Delete the listed compose results. It will only delete results for composes that have finished
    or failed, not a running compose.

composer.cli.compose.compose_image(socket_path, api_version, args, show_json=False, testmode=0, api=None)
    Download the compose's output image

    Parameters:

    * socket_path (str) -- Path to the Unix socket to use for API communication
    * api_version (str) -- Version of the API to talk to. eg. "0"
    * args (list of str) -- List of remaining arguments from the cmdline
    * show_json (bool) -- Set to True to show the JSON output instead of the human readable output
    * testmode (int) -- unused in this function

    ``compose image <uuid>``

    This downloads only the result image, saving it as the image name, which depends on the type
    of compose that was selected.
composer.cli.compose.compose_info(socket_path, api_version, args, show_json=False, testmode=0, api=None)
    Return detailed information about the compose

    Parameters:

    * socket_path (str) -- Path to the Unix socket to use for API communication
    * api_version (str) -- Version of the API to talk to. eg. "0"
    * args (list of str) -- List of remaining arguments from the cmdline
    * show_json (bool) -- Set to True to show the JSON output instead of the human readable output
    * testmode (int) -- unused in this function

    ``compose info <uuid>``

    This returns information about the compose, including the blueprint and the dependencies.

composer.cli.compose.compose_list(socket_path, api_version, args, show_json=False, testmode=0, api=None)
    Return a simple list of compose identifiers

composer.cli.compose.compose_log(socket_path, api_version, args, show_json=False, testmode=0, api=None)
    Show the last part of the compose log

    Parameters:

    * socket_path (str) -- Path to the Unix socket to use for API communication
    * api_version (str) -- Version of the API to talk to. eg. "0"
    * args (list of str) -- List of remaining arguments from the cmdline
    * show_json (bool) -- Set to True to show the JSON output instead of the human readable output
    * testmode (int) -- unused in this function

    ``compose log <uuid> [<size>kB]``

    This will display the last 1kB of the compose's log file. Can be used to follow progress
    during the build.

composer.cli.compose.compose_logs(socket_path, api_version, args, show_json=False, testmode=0, api=None)
    Download a tar of the compose's logs

    Parameters:

    * socket_path (str) -- Path to the Unix socket to use for API communication
    * api_version (str) -- Version of the API to talk to. eg. "0"
    * args (list of str) -- List of remaining arguments from the cmdline
    * show_json (bool) -- Set to True to show the JSON output instead of the human readable output
    * testmode (int) -- unused in this function

    ``compose logs <uuid>``

    Saves the logs as ``uuid-logs.tar``

composer.cli.compose.compose_metadata(socket_path, api_version, args, show_json=False, testmode=0, api=None)
    Download a tar file of the compose's metadata

    Parameters:

    * socket_path (str) -- Path to the Unix socket to use for API communication
    * api_version (str) -- Version of the API to talk to. eg. "0"
    * args (list of str) -- List of remaining arguments from the cmdline
    * show_json (bool) -- Set to True to show the JSON output instead of the human readable output
    * testmode (int) -- unused in this function

    ``compose metadata <uuid>``

    Saves the metadata as ``uuid-metadata.tar``

composer.cli.compose.compose_ostree(socket_path, api_version, args, show_json=False, testmode=0, api=None)
    Start a new ostree compose using the selected blueprint and type

    Parameters:

    * socket_path (str) -- Path to the Unix socket to use for API communication
    * api_version (str) -- Version of the API to talk to. eg. "0"
    * args (list of str) -- List of remaining arguments from the cmdline
    * show_json (bool) -- Set to True to show the JSON output instead of the human readable output
    * testmode (int) -- Set to 1 to simulate a failed compose, set to 2 to simulate a finished one.
    * api (dict) -- Details about the API server, "version" and "backend"

    ``compose start-ostree [--size XXXX] [--parent PARENT] [--ref REF] [--url URL] <BLUEPRINT> <TYPE> [<IMAGE-NAME> <PROFILE.TOML>]``

composer.cli.compose.compose_results(socket_path, api_version, args, show_json=False, testmode=0, api=None)
    Download a tar file of the compose's results

    Parameters:

    * socket_path (str) -- Path to the Unix socket to use for API communication
    * api_version (str) -- Version of the API to talk to. eg. "0"
    * args (list of str) -- List of remaining arguments from the cmdline
    * show_json (bool) -- Set to True to show the JSON output instead of the human readable output
    * testmode (int) -- unused in this function

    ``compose results <uuid>``

    The results include the metadata, output image, and logs.
    It is saved as ``uuid.tar``
-composer.cli.compose.compose_start(socket_path, api_version, args, show_json=False, testmode=0, api=None)[source]
-

Start a new compose using the selected blueprint and type

-
-
Parameters
-
    -
  • socket_path (str) -- Path to the Unix socket to use for API communication

  • -
  • api_version (str) -- Version of the API to talk to. eg. "0"

  • -
  • args (list of str) -- List of remaining arguments from the cmdline

  • -
  • show_json (bool) -- Set to True to show the JSON output instead of the human readable output

  • -
  • testmode (int) -- Set to 1 to simulate a failed compose, set to 2 to simulate a finished one.

  • -
  • api (dict) -- Details about the API server, "version" and "backend"

  • -
-
-
-

compose start [--size XXX] <blueprint-name> <compose-type> [<image-name> <provider> <profile> | <image-name> <profile.toml>]

-
- -
-
-composer.cli.compose.compose_status(socket_path, api_version, args, show_json=False, testmode=0, api=None)[source]
-

Return the status of all known composes

-
-
Parameters
-
    -
  • socket_path (str) -- Path to the Unix socket to use for API communication

  • -
  • api_version (str) -- Version of the API to talk to. eg. "0"

  • -
  • args (list of str) -- List of remaining arguments from the cmdline

  • -
  • show_json (bool) -- Set to True to show the JSON output instead of the human readable output

  • -
  • testmode (int) -- unused in this function

  • -
-
-
-

This doesn't map directly to an API command, it combines the results from queue, finished, -and failed so raw JSON output is not available.

-
- -
-
composer.cli.compose.compose_types(socket_path, api_version, args, show_json=False, testmode=0, api=None)[source]

Return information about the supported compose types

Parameters

  • socket_path (str) -- Path to the Unix socket to use for API communication

  • api_version (str) -- Version of the API to talk to. eg. "0"

  • args (list of str) -- List of remaining arguments from the cmdline

  • show_json (bool) -- Set to True to show the JSON output instead of the human readable output

  • testmode (int) -- unused in this function

Add additional details to types that are known to composer-cli. Raw JSON output does not include this extra information.
composer.cli.compose.get_parent(args)[source]

Return optional --parent argument, and remaining args

Parameters

args (list of strings) -- list of arguments

Returns

(args, parent)

Return type

tuple
composer.cli.compose.get_ref(args)[source]

Return optional --ref argument, and remaining args

Parameters

args (list of strings) -- list of arguments

Returns

(args, ref)

Return type

tuple
composer.cli.compose.get_size(args)[source]

Return optional --size argument, and remaining args

Parameters

args (list of strings) -- list of arguments

Returns

(args, size)

Return type

tuple

  • check size argument for int

  • check other args for --size in wrong place

  • raise error? Or just return 0?

  • no size returns 0 in size

  • multiply by 1024**2 to make it easier on users to specify large sizes
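The notes above can be condensed into a small helper. This is a hypothetical sketch of the documented behavior (arguments in MiB, converted to bytes, 0 when --size is absent), not the actual composer code:

```python
def get_size(args):
    """Sketch: pull '--size N' out of args and return (remaining_args, bytes).

    The user passes a size in MiB; multiply by 1024**2 so large sizes are
    easy to specify. A missing --size returns 0 for the size.
    """
    if "--size" not in args:
        return (args, 0)
    i = args.index("--size")
    size = int(args[i + 1]) * 1024**2
    return (args[:i] + args[i + 2:], size)
```

For example, `get_size(["--size", "100", "http-server", "qcow2"])` returns `(["http-server", "qcow2"], 104857600)`.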
composer.cli.compose.get_url(args)[source]

Return optional --url argument, and remaining args

Parameters

args (list of strings) -- list of arguments

Returns

(args, url)

Return type

tuple

composer.cli.help module

composer.cli.modules module

composer.cli.modules.modules_cmd(opts)[source]

Process modules commands

Parameters

opts (argparse.Namespace) -- Cmdline arguments

Returns

Value to return from sys.exit()

Return type

int

composer.cli.projects module

composer.cli.projects.projects_cmd(opts)[source]

Process projects commands

Parameters

opts (argparse.Namespace) -- Cmdline arguments

Returns

Value to return from sys.exit()

Return type

int

composer.cli.projects.projects_info(socket_path, api_version, args, show_json=False)[source]

Output info on a list of projects

Parameters

  • socket_path (str) -- Path to the Unix socket to use for API communication

  • api_version (str) -- Version of the API to talk to. eg. "0"

  • args (list of str) -- List of remaining arguments from the cmdline

  • show_json (bool) -- Set to True to show the JSON output instead of the human readable output

projects info <project,...>

composer.cli.projects.projects_list(socket_path, api_version, args, show_json=False)[source]

Output the list of available projects

Parameters

  • socket_path (str) -- Path to the Unix socket to use for API communication

  • api_version (str) -- Version of the API to talk to. eg. "0"

  • args (list of str) -- List of remaining arguments from the cmdline

  • show_json (bool) -- Set to True to show the JSON output instead of the human readable output

projects list

composer.cli.providers module

composer.cli.providers.providers_cmd(opts)[source]

Process providers commands

Parameters

opts (argparse.Namespace) -- Cmdline arguments

Returns

Value to return from sys.exit()

Return type

int

This dispatches the providers commands to a function

composer.cli.providers.providers_delete(socket_path, api_version, args, show_json=False, testmode=0)[source]

Delete a profile from a provider

Parameters

  • socket_path (str) -- Path to the Unix socket to use for API communication

  • api_version (str) -- Version of the API to talk to. eg. "0"

  • args (list of str) -- List of remaining arguments from the cmdline

  • show_json (bool) -- Set to True to show the JSON output instead of the human readable output

  • testmode (int) -- unused in this function

providers delete <provider> <profile>

composer.cli.providers.providers_info(socket_path, api_version, args, show_json=False, testmode=0)[source]

Show information about each provider

Parameters

  • socket_path (str) -- Path to the Unix socket to use for API communication

  • api_version (str) -- Version of the API to talk to. eg. "0"

  • args (list of str) -- List of remaining arguments from the cmdline

  • show_json (bool) -- Set to True to show the JSON output instead of the human readable output

  • testmode (int) -- unused in this function

providers info <PROVIDER>

composer.cli.providers.providers_list(socket_path, api_version, args, show_json=False, testmode=0)[source]

Return the list of providers

Parameters

  • socket_path (str) -- Path to the Unix socket to use for API communication

  • api_version (str) -- Version of the API to talk to. eg. "0"

  • args (list of str) -- List of remaining arguments from the cmdline

  • show_json (bool) -- Set to True to show the JSON output instead of the human readable output

  • testmode (int) -- unused in this function

providers list

composer.cli.providers.providers_push(socket_path, api_version, args, show_json=False, testmode=0)[source]

Add a new provider profile or overwrite an existing one

Parameters

  • socket_path (str) -- Path to the Unix socket to use for API communication

  • api_version (str) -- Version of the API to talk to. eg. "0"

  • args (list of str) -- List of remaining arguments from the cmdline

  • show_json (bool) -- Set to True to show the JSON output instead of the human readable output

  • testmode (int) -- unused in this function

providers push <profile.toml>

composer.cli.providers.providers_save(socket_path, api_version, args, show_json=False, testmode=0)[source]

Save a provider's profile to a TOML file

Parameters

  • socket_path (str) -- Path to the Unix socket to use for API communication

  • api_version (str) -- Version of the API to talk to. eg. "0"

  • args (list of str) -- List of remaining arguments from the cmdline

  • show_json (bool) -- Set to True to show the JSON output instead of the human readable output

  • testmode (int) -- unused in this function

providers save <provider> <profile>

composer.cli.providers.providers_show(socket_path, api_version, args, show_json=False, testmode=0)[source]

Return details about a provider

Parameters

  • socket_path (str) -- Path to the Unix socket to use for API communication

  • api_version (str) -- Version of the API to talk to. eg. "0"

  • args (list of str) -- List of remaining arguments from the cmdline

  • show_json (bool) -- Set to True to show the JSON output instead of the human readable output

  • testmode (int) -- unused in this function

providers show <provider> <profile>

composer.cli.providers.providers_template(socket_path, api_version, args, show_json=False, testmode=0)[source]

Return a TOML template for setting the provider's fields

Parameters

  • socket_path (str) -- Path to the Unix socket to use for API communication

  • api_version (str) -- Version of the API to talk to. eg. "0"

  • args (list of str) -- List of remaining arguments from the cmdline

  • show_json (bool) -- Set to True to show the JSON output instead of the human readable output

  • testmode (int) -- unused in this function

providers template <provider>

composer.cli.sources module

composer.cli.sources.sources_add(socket_path, api_version, args, show_json=False)[source]

Add or change a source

Parameters

  • socket_path (str) -- Path to the Unix socket to use for API communication

  • api_version (str) -- Version of the API to talk to. eg. "0"

  • args (list of str) -- List of remaining arguments from the cmdline

  • show_json (bool) -- Set to True to show the JSON output instead of the human readable output

sources add <source.toml>

composer.cli.sources.sources_cmd(opts)[source]

Process sources commands

Parameters

opts (argparse.Namespace) -- Cmdline arguments

Returns

Value to return from sys.exit()

Return type

int

composer.cli.sources.sources_delete(socket_path, api_version, args, show_json=False)[source]

Delete a source

Parameters

  • socket_path (str) -- Path to the Unix socket to use for API communication

  • api_version (str) -- Version of the API to talk to. eg. "0"

  • args (list of str) -- List of remaining arguments from the cmdline

  • show_json (bool) -- Set to True to show the JSON output instead of the human readable output

sources delete <source-name>

composer.cli.sources.sources_info(socket_path, api_version, args, show_json=False)[source]

Output info on a list of sources

Parameters

  • socket_path (str) -- Path to the Unix socket to use for API communication

  • api_version (str) -- Version of the API to talk to. eg. "0"

  • args (list of str) -- List of remaining arguments from the cmdline

  • show_json (bool) -- Set to True to show the JSON output instead of the human readable output

sources info <source-name>

composer.cli.sources.sources_list(socket_path, api_version, args, show_json=False)[source]

Output the list of available sources

Parameters

  • socket_path (str) -- Path to the Unix socket to use for API communication

  • api_version (str) -- Version of the API to talk to. eg. "0"

  • args (list of str) -- List of remaining arguments from the cmdline

  • show_json (bool) -- Set to True to show the JSON output instead of the human readable output

sources list

composer.cli.status module

composer.cli.status.status_cmd(opts)[source]

Process status commands

Parameters

opts (argparse.Namespace) -- Cmdline arguments

Returns

Value to return from sys.exit()

Return type

int

composer.cli.upload module

composer.cli.upload.upload_cancel(socket_path, api_version, args, show_json=False, testmode=0)[source]

Cancel the queued or running upload

Parameters

  • socket_path (str) -- Path to the Unix socket to use for API communication

  • api_version (str) -- Version of the API to talk to. eg. "0"

  • args (list of str) -- List of remaining arguments from the cmdline

  • show_json (bool) -- Set to True to show the JSON output instead of the human readable output

  • testmode (int) -- unused in this function

upload cancel <build-uuid>

composer.cli.upload.upload_cmd(opts)[source]

Process upload commands

Parameters

opts (argparse.Namespace) -- Cmdline arguments

Returns

Value to return from sys.exit()

Return type

int

This dispatches the upload commands to a function

composer.cli.upload.upload_delete(socket_path, api_version, args, show_json=False, testmode=0)[source]

Delete an upload and remove it from the build

Parameters

  • socket_path (str) -- Path to the Unix socket to use for API communication

  • api_version (str) -- Version of the API to talk to. eg. "0"

  • args (list of str) -- List of remaining arguments from the cmdline

  • show_json (bool) -- Set to True to show the JSON output instead of the human readable output

  • testmode (int) -- unused in this function

upload delete <build-uuid>

composer.cli.upload.upload_info(socket_path, api_version, args, show_json=False, testmode=0)[source]

Return detailed information about the upload

Parameters

  • socket_path (str) -- Path to the Unix socket to use for API communication

  • api_version (str) -- Version of the API to talk to. eg. "0"

  • args (list of str) -- List of remaining arguments from the cmdline

  • show_json (bool) -- Set to True to show the JSON output instead of the human readable output

  • testmode (int) -- unused in this function

upload info <uuid>

This returns information about the upload, including uuid, name, status, service, and image.

composer.cli.upload.upload_list(socket_path, api_version, args, show_json=False, testmode=0)[source]

Return the composes and their associated upload uuids and status

Parameters

  • socket_path (str) -- Path to the Unix socket to use for API communication

  • api_version (str) -- Version of the API to talk to. eg. "0"

  • args (list of str) -- List of remaining arguments from the cmdline

  • show_json (bool) -- Set to True to show the JSON output instead of the human readable output

  • testmode (int) -- unused in this function

upload list

composer.cli.upload.upload_log(socket_path, api_version, args, show_json=False, testmode=0)[source]

Return the upload log

Parameters

  • socket_path (str) -- Path to the Unix socket to use for API communication

  • api_version (str) -- Version of the API to talk to. eg. "0"

  • args (list of str) -- List of remaining arguments from the cmdline

  • show_json (bool) -- Set to True to show the JSON output instead of the human readable output

  • testmode (int) -- unused in this function

upload log <build-uuid>

composer.cli.upload.upload_reset(socket_path, api_version, args, show_json=False, testmode=0)[source]

Reset the upload and execute it again

Parameters

  • socket_path (str) -- Path to the Unix socket to use for API communication

  • api_version (str) -- Version of the API to talk to. eg. "0"

  • args (list of str) -- List of remaining arguments from the cmdline

  • show_json (bool) -- Set to True to show the JSON output instead of the human readable output

  • testmode (int) -- unused in this function

upload reset <build-uuid>

composer.cli.upload.upload_start(socket_path, api_version, args, show_json=False, testmode=0)[source]

Start an upload of a build's image

Parameters

  • socket_path (str) -- Path to the Unix socket to use for API communication

  • api_version (str) -- Version of the API to talk to. eg. "0"

  • args (list of str) -- List of remaining arguments from the cmdline

  • show_json (bool) -- Set to True to show the JSON output instead of the human readable output

  • testmode (int) -- unused in this function

upload start <build-uuid> <image-name> [<provider> <profile> | <profile.toml>]

composer.cli.utilities module

composer.cli.utilities.argify(args)[source]

Take a list of human args and return a flat list of the individual items

Parameters

args (list of str) -- list of strings with possible commas and spaces

Returns

List of all the items

Return type

list of str

Examples:

["one,two", "three", ",four", ",five,"] returns ["one", "two", "three", "four", "five"]
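The documented behavior can be sketched as a small helper; this is a hypothetical reimplementation based on the example above, not the actual composer code:

```python
def argify(args):
    """Sketch: split comma- and space-separated items into one flat list,
    dropping any empty entries produced by stray commas."""
    items = []
    for arg in args:
        # Treat commas like spaces, then split on whitespace
        items.extend(part for part in arg.replace(",", " ").split() if part)
    return items
```

For example, `argify(["one,two", "three", ",four", ",five,"])` returns `["one", "two", "three", "four", "five"]`, matching the documented example.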
composer.cli.utilities.frozen_toml_filename(blueprint_name)[source]

Convert a blueprint name into a frozen TOML filename

Parameters

blueprint_name (str) -- The blueprint's name

Returns

The blueprint name with ' ' converted to - and .frozen.toml appended

Return type

str
composer.cli.utilities.get_arg(args, name, argtype=None)[source]

Return optional value from args, and remaining args

Parameters

  • args (list of strings) -- list of arguments

  • name (string) -- The argument to remove from the args list

  • argtype (type) -- Type to use for checking the argument value

Returns

(args, value)

Return type

tuple

This removes the optional argument and value from the argument list, returns the new list, and the value of the argument.
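The contract described above can be sketched like this; it is a hypothetical reimplementation for illustration, not the composer source:

```python
def get_arg(args, name, argtype=None):
    """Sketch: remove 'name value' from args and return (remaining_args, value).

    Returns (args, None) when the argument is absent; applies argtype (e.g. int)
    to the value when one is given.
    """
    if name not in args:
        return (args, None)
    i = args.index(name)
    if i + 1 >= len(args):
        raise RuntimeError("%s is missing a value" % name)
    value = args[i + 1] if argtype is None else argtype(args[i + 1])
    return (args[:i] + args[i + 2:], value)
```

The compose helpers documented earlier (get_parent, get_ref, get_size, get_url) all follow this same remove-and-return pattern.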
composer.cli.utilities.handle_api_result(result, show_json=False)[source]

Log any errors, return the correct value

Parameters

result (dict) -- JSON result from the http query

Returns

(rc, should_exit_now)

Return type

tuple

Return the correct rc for the program (0 or 1), and whether or not to continue processing the results.
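A minimal sketch of this contract, assuming the server reports failures as a false "status" plus an "errors" list (the exact response shape is an assumption, not taken from the source):

```python
def handle_api_result(result, show_json=False):
    """Sketch: map an API result dict to (rc, should_exit_now).

    Assumed shape: {"status": bool, "errors": [...]} -- success keeps
    processing with rc 0; errors are logged and stop processing with rc 1.
    """
    if result.get("status", False):
        return (0, False)
    errors = result.get("errors", [])
    for err in errors:
        print("ERROR: %s" % err)
    return (1, True) if errors else (0, False)
```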
composer.cli.utilities.packageNEVRA(pkg)[source]

Return the package info as a NEVRA

Parameters

pkg (dict) -- The package details

Returns

name-[epoch:]version-release-arch

Return type

str
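The documented format can be sketched as follows; the dict keys are assumptions based on the parameter description, and the helper name is hypothetical:

```python
def package_nevra(pkg):
    """Sketch of the documented name-[epoch:]version-release-arch format:
    the epoch is only included when it is non-zero."""
    if pkg.get("epoch"):
        return "%(name)s-%(epoch)s:%(version)s-%(release)s-%(arch)s" % pkg
    return "%(name)s-%(version)s-%(release)s-%(arch)s" % pkg
```

For example, a package dict with epoch 0 formats as `bash-5.1-2.fc34-x86_64`, and with epoch 1 as `bash-1:5.1-2.fc34-x86_64`.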
composer.cli.utilities.toml_filename(blueprint_name)[source]

Convert a blueprint name into a filename.toml

Parameters

blueprint_name (str) -- The blueprint's name

Returns

The blueprint name with ' ' converted to - and .toml appended

Return type

str
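The documented conversion is a one-liner; this sketch reimplements it for illustration:

```python
def toml_filename(blueprint_name):
    """Sketch: spaces become dashes, '.toml' is appended."""
    return blueprint_name.replace(" ", "-") + ".toml"
```

For example, `toml_filename("http server")` returns `"http-server.toml"`.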
Module contents

composer.cli.main(opts)[source]

Main program execution

Parameters

opts (argparse.Namespace) -- Cmdline arguments

composer package

- -
-

Submodules

-
-
-

composer.http_client module

-
-
composer.http_client.api_url(api_version, url)[source]

Return the versioned path to the API route

Parameters

  • api_version (str) -- The version of the API to talk to. eg. "0"

  • url (str) -- The API route to talk to

Returns

The full url to use for the route and API version

Return type

str
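A plausible sketch of this helper; the exact "/api/v<version>" prefix is an assumption about the WELDR route layout, not taken from the source:

```python
def api_url(api_version, url):
    """Sketch: prepend the assumed versioned API prefix to a route."""
    return "/api/v%s%s" % (api_version, url)
```

For example, `api_url("0", "/blueprints/list")` would produce `"/api/v0/blueprints/list"` under this assumption.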
composer.http_client.append_query(url, query)[source]

Add a query argument to a URL

The query should be of the form "param1=what&param2=ever", i.e., no leading '?'. The new query data will be appended to any existing query string.

Parameters

  • url (str) -- The original URL

  • query (str) -- The query to append

Returns

The new URL with the query argument included

Return type

str
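The documented behavior reduces to choosing the right separator; a sketch:

```python
def append_query(url, query):
    """Sketch: append with '&' if the URL already has a query string,
    otherwise start one with '?'."""
    sep = "&" if "?" in url else "?"
    return url + sep + query
```

For example, appending `"limit=0"` to a bare route adds `?limit=0`, while appending it to a route that already has `?offset=20` adds `&limit=0`.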
composer.http_client.delete_url_json(socket_path, url)[source]

Send a DELETE request to the url and return JSON response

Parameters

  • socket_path (str) -- Path to the Unix socket to use for API communication

  • url (str) -- URL to send DELETE to

Returns

The json response from the server

Return type

dict

composer.http_client.download_file(socket_path, url, progress=True)[source]

Download a file, saving it to the CWD with the included filename

Parameters

  • socket_path (str) -- Path to the Unix socket to use for API communication

  • url (str) -- URL to send POST to
composer.http_client.get_filename(headers)[source]

Get the filename from the response header

Parameters

headers -- The urllib3 response headers

Raises

RuntimeError if it cannot find a filename in the header

Returns

Filename from content-disposition header

Return type

str
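The idea can be sketched with a regular expression over the Content-Disposition value; the exact 'attachment; filename=NAME' shape is an assumption:

```python
import re

def get_filename(headers):
    """Sketch: pull the filename out of a Content-Disposition header dict,
    raising RuntimeError when no filename is present."""
    disposition = headers.get("content-disposition", "")
    m = re.search(r"filename=([^;]+)", disposition)
    if m is None:
        raise RuntimeError("No filename in content-disposition header")
    return m.group(1).strip('" ')
```

This is how a downloaded image can end up saved as, e.g., UUID-disk.qcow2 in the current directory.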
composer.http_client.get_url_json(socket_path, url)[source]

Return the JSON results of a GET request

Parameters

  • socket_path (str) -- Path to the Unix socket to use for API communication

  • url (str) -- URL to request

Returns

The json response from the server

Return type

dict
composer.http_client.get_url_json_unlimited(socket_path, url, total_fn=None)[source]

Return the JSON results of a GET request

For URLs that use offset/limit arguments, this command will fetch all results for the given request.

Parameters

  • socket_path (str) -- Path to the Unix socket to use for API communication

  • url (str) -- URL to request

Returns

The json response from the server

Return type

dict
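The offset/limit loop described above can be sketched independently of the HTTP layer; here `fetch(offset, limit)` is a hypothetical stand-in for the GET request and is assumed to return a dict with "total" and "results" keys:

```python
def get_all_results(fetch, limit=20):
    """Sketch: keep requesting pages until all 'total' results are collected."""
    page = fetch(0, limit)
    total = page["total"]
    results = list(page["results"])
    while len(results) < total:
        # Use the number collected so far as the next offset
        results.extend(fetch(len(results), limit)["results"])
    return results
```

With 45 available results and a limit of 20, this makes three requests (offsets 0, 20, and 40) and returns all 45 items.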
composer.http_client.get_url_raw(socket_path, url)[source]

Return the raw results of a GET request

Parameters

  • socket_path (str) -- Path to the Unix socket to use for API communication

  • url (str) -- URL to request

Returns

The raw response from the server

Return type

str
composer.http_client.post_url(socket_path, url, body)[source]

POST raw data to the URL

Parameters

  • socket_path (str) -- Path to the Unix socket to use for API communication

  • url (str) -- URL to send POST to

  • body (str) -- The data for the body of the POST

Returns

The json response from the server

Return type

dict

composer.http_client.post_url_json(socket_path, url, body)[source]

POST some JSON data to the URL

Parameters

  • socket_path (str) -- Path to the Unix socket to use for API communication

  • url (str) -- URL to send POST to

  • body (str) -- The data for the body of the POST

Returns

The json response from the server

Return type

dict

composer.http_client.post_url_toml(socket_path, url, body)[source]

POST a TOML string to the URL

Parameters

  • socket_path (str) -- Path to the Unix socket to use for API communication

  • url (str) -- URL to send POST to

  • body (str) -- The data for the body of the POST

Returns

The json response from the server

Return type

dict
composer.unix_socket module

class composer.unix_socket.UnixHTTPConnection(socket_path, timeout=300)[source]

Bases: http.client.HTTPConnection, object

connect()[source]

Connect to the host and port specified in __init__.

class composer.unix_socket.UnixHTTPConnectionPool(socket_path, timeout=300)[source]

Bases: urllib3.connectionpool.HTTPConnectionPool
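The core idea behind these classes is an HTTP connection that dials a Unix socket instead of a TCP host/port; a minimal sketch (the details are assumptions, not the composer source):

```python
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """Sketch: an HTTPConnection whose connect() opens an AF_UNIX socket
    to socket_path instead of a TCP connection."""
    def __init__(self, socket_path, timeout=300):
        # The "localhost" host is only used for the Host: header
        super().__init__("localhost", timeout=timeout)
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.settimeout(self.timeout)
        self.sock.connect(self.socket_path)
```

Once connected, the usual `request()`/`getresponse()` flow works unchanged, which is what lets composer-cli talk to the WELDR API server over its Unix socket.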
Module contents
livemedia-creator
      livemedia-creator cmdline arguments

      Create Live Install Media

      -
usage: livemedia-creator [-h] (--make-iso | --make-disk | --make-fsimage | --make-appliance | --make-ami | --make-tar | --make-tar-disk | --make-pxe-live | --make-ostree-live | --make-oci | --make-vagrant)
                         [--iso ISO] [--iso-only] [--iso-name ISO_NAME] [--ks KS] [--image-only] [--no-virt] [--proxy PROXY] [--anaconda-arg ANACONDA_ARGS] [--armplatform ARMPLATFORM] [--location LOCATION]
                         [--logfile LOGFILE] [--lorax-templates LORAX_TEMPLATES] [--tmp TMP] [--resultdir RESULT_DIR] [--macboot] [--nomacboot] [--extra-boot-args EXTRA_BOOT_ARGS] [--disk-image DISK_IMAGE]
                         [--keep-image] [--fs-image FS_IMAGE] [--image-name IMAGE_NAME] [--tar-disk-name TAR_DISK_NAME] [--fs-label FS_LABEL] [--image-size-align IMAGE_SIZE_ALIGN] [--image-type IMAGE_TYPE]
                         [--qemu-arg QEMU_ARGS] [--qcow2] [--qcow2-arg QEMU_ARGS] [--compression COMPRESSION] [--compress-arg COMPRESS_ARGS] [--app-name APP_NAME] [--app-template APP_TEMPLATE]
                         [--app-file APP_FILE] [--ram MEMORY] [--vcpus VCPUS] [--vnc VNC] [--arch ARCH] [--kernel-args KERNEL_ARGS] [--ovmf-path OVMF_PATH] [--virt-uefi] [--no-kvm] [--with-rng WITH_RNG]
                         [--dracut-conf DRACUT_CONF] [--dracut-arg DRACUT_ARGS] [--live-rootfs-size LIVE_ROOTFS_SIZE] [--live-rootfs-keep-size] [--oci-config OCI_CONFIG] [--oci-runtime OCI_RUNTIME]
                         [--vagrant-metadata VAGRANT_METADATA] [--vagrantfile VAGRANTFILE] [--project PROJECT] [--releasever RELEASEVER] [--volid VOLID] [--squashfs-only] [--timeout TIMEOUT] [-V]
       

    lorax-composer

lorax-composer has been replaced by the osbuild-composer WELDR API server, which implements more features (eg. ostree, image uploads, etc.). You can still use composer-cli and cockpit-composer with osbuild-composer. See the documentation or the osbuild website for more information.
Lorax

      lorax cmdline arguments

      Create the Anaconda boot.iso

usage: lorax [-h] -p PRODUCT -v VERSION -r RELEASE [-s REPOSITORY] [--repo REPOSITORY] [-m REPOSITORY] [-t VARIANT] [-b URL] [--isfinal] [-c CONFIGFILE] [--proxy HOST] [-i PACKAGE] [-e PACKAGE]
             [--buildarch ARCH] [--volid VOLID] [--macboot] [--nomacboot] [--noupgrade] [--logfile LOGFILE] [--tmp TMP] [--cachedir CACHEDIR] [--workdir WORKDIR] [--force] [--add-template ADD_TEMPLATES]
             [--add-template-var ADD_TEMPLATE_VARS] [--add-arch-template ADD_ARCH_TEMPLATES] [--add-arch-template-var ADD_ARCH_TEMPLATE_VARS] [--noverify] [--sharedir SHAREDIR] [--enablerepo [repo]]
             [--disablerepo [repo]] [--rootfs-size ROOTFS_SIZE] [--noverifyssl] [--dnfplugin DNFPLUGINS] [--squashfs-only] [--skip-branding] [--dracut-conf DRACUT_CONF] [--dracut-arg DRACUT_ARGS] [-V]
             OUTPUTDIR
       
You can add your own repos with -s and packages with higher NVRs will override the ones in the distribution repositories.

Under ./results/ will be the release tree files: .discinfo, .treeinfo, everything that goes onto the boot.iso, the pxeboot directory, and the boot.iso under ./results/images/.

Branding

      @@ -528,29 +531,29 @@ should) select the specific template directory by passing - - - - - - + +
      -

      - © Copyright 2018, Red Hat, Inc. + © Copyright 2018, Red Hat, Inc..

      - Built with Sphinx using a theme provided by Read the Docs. + + + + Built with Sphinx using a + + theme + + provided by Read the Docs. -
    @@ -559,7 +562,6 @@ should) select the specific template directory by passing jQuery(function () { SphinxRtdTheme.Navigation.enable(true); diff --git a/docs/html/mkksiso.html b/docs/html/mkksiso.html index 454441f3..d5ca78b4 100644 --- a/docs/html/mkksiso.html +++ b/docs/html/mkksiso.html @@ -1,42 +1,42 @@ - - + - + - + + + mkksiso — Lorax 35.1 documentation + + + + + + - mkksiso — Lorax 35.0 documentation - + - - - - - - - + @@ -60,7 +60,7 @@
    - 35.0 + 35.1
    @@ -77,6 +77,7 @@ + + @@ -143,18 +144,20 @@ + +
    - @@ -339,7 +342,6 @@ will pass.

    - - - - - - - + @@ -60,7 +60,7 @@
    - 35.0 + 35.1
    @@ -77,6 +77,7 @@ + + @@ -136,18 +136,20 @@ + +
    @@ -254,7 +230,6 @@ - - - - - - @@ -60,7 +60,7 @@
    - 35.0 + 35.1
    @@ -77,6 +77,7 @@ + + @@ -132,18 +133,20 @@ + +
    -

    - © Copyright 2018, Red Hat, Inc. + © Copyright 2018, Red Hat, Inc..

    - Built with Sphinx using a theme provided by Read the Docs. + + + + Built with Sphinx using a + + theme + + provided by Read the Docs. - @@ -217,7 +220,6 @@ command or the installpkgs paramater of jQuery(function () { SphinxRtdTheme.Navigation.enable(true); diff --git a/docs/html/py-modindex.html b/docs/html/py-modindex.html index 181f6fe9..7beeaf03 100644 --- a/docs/html/py-modindex.html +++ b/docs/html/py-modindex.html @@ -1,38 +1,38 @@ - - + - + - + + + Python Module Index — Lorax 35.1 documentation + + + + + + - Python Module Index — Lorax 35.0 documentation - + - - - - - - @@ -61,7 +61,7 @@
    - 35.0 + 35.1
    @@ -78,6 +78,7 @@ +
    + @@ -133,11 +134,13 @@ + +
      -
    • Docs »
    • +
    • »
    • Python Module Index
    • @@ -158,185 +161,105 @@

Python Module Index

c
    composer
    composer.cli
    composer.cli.blueprints
    composer.cli.cmdline
    composer.cli.compose
    composer.cli.help
    composer.cli.modules
    composer.cli.projects
    composer.cli.providers
    composer.cli.sources
    composer.cli.status
    composer.cli.upload
    composer.cli.utilities
    composer.http_client
    composer.unix_socket

p
    pylorax
    pylorax.base
    pylorax.buildstamp
    pylorax.cmdline
    pylorax.creator
    pylorax.decorators
    pylorax.discinfo
    pylorax.dnfbase
    pylorax.dnfhelper
    pylorax.executils
    pylorax.imgutils
    pylorax.installer
    pylorax.ltmpl
    pylorax.monitor
    pylorax.mount
    pylorax.output
    pylorax.sysutils
    pylorax.treebuilder
    pylorax.treeinfo

pylorax.api package

Submodules

pylorax.api.bisect module

pylorax.api.bisect.insort_left(a, x, key=None, lo=0, hi=None)
    Insert item x in list a, and keep it sorted assuming a is sorted.

    :param a: sorted list
    :type a: list
    :param x: item to insert into the list
    :type x: object
    :param key: function to use to compare items in the list
    :type key: function
    :returns: index where the item was inserted
    :rtype: int

    If x is already in a, insert it to the left of the leftmost x.
    Optional args lo (default 0) and hi (default len(a)) bound the
    slice of a to be searched.

    This is a modified version of bisect.insort_left that can use a
    function for the compare, and returns the index position where it
    was inserted.
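The behavior described above can be sketched as follows (a hypothetical, simplified reimplementation for illustration, not the actual pylorax source):

```python
def insort_left(a, x, key=None, lo=0, hi=None):
    # Hypothetical, simplified reimplementation for illustration; the
    # real pylorax.api.bisect version may differ in detail.
    if hi is None:
        hi = len(a)
    keyfn = key or (lambda v: v)
    # Binary search for the leftmost insertion point, comparing by key.
    while lo < hi:
        mid = (lo + hi) // 2
        if keyfn(a[mid]) < keyfn(x):
            lo = mid + 1
        else:
            hi = mid
    a.insert(lo, x)
    return lo

commits = [{"time": 1}, {"time": 5}]
idx = insort_left(commits, {"time": 3}, key=lambda c: c["time"])
```

The key function makes it usable for keeping lists of dicts (e.g. changelog entries) sorted without defining comparison methods on the items.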

pylorax.api.checkparams module

pylorax.api.checkparams.checkparams(tuples)

pylorax.api.cmdline module

pylorax.api.cmdline.lorax_composer_parser()
    Return the ArgumentParser for lorax-composer.

pylorax.api.compose module

Setup for composing an image.

Adding New Output Types

The new output type must add a kickstart template to ./share/composer/ where the
name of the kickstart (without the trailing .ks) matches the entry in compose_args.

The kickstart should not have any url or repo entries, these will be added at build
time. The %packages section should be the last thing, and while it can contain mandatory
packages required by the output type, it should not have the trailing %end because the
package NEVRAs will be appended to it at build time.

compose_args should have a name matching the kickstart, and it should set the novirt_install
parameters needed to generate the desired output. Other types should be set to False.
pylorax.api.compose.add_customizations(f, recipe)
    Add customizations to the kickstart file.

    :param f: kickstart file object
    :type f: open file object
    :param recipe: Recipe object
    :returns: None
    :raises: RuntimeError if there was a problem writing to the kickstart

pylorax.api.compose.bootloader_append(line, kernel_append)
    Insert the kernel_append string into the --append argument.

    :param line: The bootloader ... line
    :type line: str
    :param kernel_append: The arguments to append to the --append section
    :type kernel_append: str

    Using pykickstart to process the line is the best way to make sure it
    is parsed correctly, and re-assembled for inclusion in the final kickstart.

pylorax.api.compose.compose_args(compose_type)
    Returns the settings to pass to novirt_install for the compose type.

    :param compose_type: The type of compose to create, from compose_types()
    :type compose_type: str

    This will return a dict of options that match the ArgumentParser options for
    livemedia-creator. These are the ones that define the type of output, its
    filename, etc. Other options will be filled in by make_compose().

pylorax.api.compose.compose_types(share_dir)
    Returns a list of tuples of the supported output types, and their state.

    The output types come from the kickstart names in /usr/share/lorax/composer/*ks

    If they are disabled on the current arch their state is False. If enabled, it
    is True. eg. [("alibaba", False), ("ext4-filesystem", True), ...]
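The shape of a compose_args entry can be sketched like this (a hypothetical illustration: the option names below stand in for livemedia-creator's novirt_install options and are not copied from the pylorax source):

```python
# Hypothetical sketch of the compose_args mapping described above: one
# dict of novirt_install-style options per output type, with the output
# types the template does not produce set to False.
def compose_args(compose_type):
    _MAP = {
        "qcow2": {
            "make_disk": True,        # the output this template produces
            "make_iso": False,        # all other output types stay False
            "make_tar": False,
            "image_name": "disk.qcow2",
        },
    }
    return _MAP[compose_type]

settings = compose_args("qcow2")
```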
pylorax.api.compose.customize_ks_template(ks_template, recipe)
    Customize the kickstart template and return it.

    :param ks_template: The kickstart template
    :type ks_template: str
    :param recipe: Recipe object

    Apply customizations to existing template commands, or add defaults for ones
    that are missing and required.

    Apply customizations.kernel.append to the bootloader argument in the template.
    Add the bootloader line if it is missing.

    Add a default timezone if needed. It does NOT replace an existing timezone entry.

pylorax.api.compose.firewall_cmd(line, settings)
    Update the firewall line with the new ports and services.

    :param line: The firewall ... line
    :type line: str
    :param settings: A dict with the list of services and ports to enable and disable
    :type settings: dict

    Using pykickstart to process the line is the best way to make sure it
    is parsed correctly, and re-assembled for inclusion in the final kickstart.

pylorax.api.compose.get_default_services(recipe)
    Get the default string for services, based on the recipe.

    :param recipe: The recipe
    :returns: string with "services" or ""
    :rtype: str

    When no services have been selected we don't need to add anything to the
    kickstart, so return an empty string. Otherwise return "services", which will
    be updated with the settings.

pylorax.api.compose.get_extra_pkgs(dbo, share_dir, compose_type)
    Return extra packages needed for the output type.

    :param dbo: dnf base object
    :type dbo: dnf.Base
    :param share_dir: Path to the top level share directory
    :type share_dir: str
    :param compose_type: The type of output to create from the recipe
    :type compose_type: str
    :returns: List of package names (name only, not NEVRA)
    :rtype: list

    Currently this is only needed by live-iso; it reads ./live/live-install.tmpl and
    processes only the installpkg lines. It lists the packages needed to complete
    creation of the iso using the templates such as x86.tmpl.

    Keep in mind that live-install.tmpl is shared between livemedia-creator and
    lorax-composer, even though the results are applied differently.
pylorax.api.compose.get_firewall_settings(recipe)
    Return the customizations.firewall settings.

    :param recipe: The recipe
    :type recipe: Recipe object
    :returns: A dict of settings
    :rtype: dict

pylorax.api.compose.get_kernel_append(recipe)
    Return the customizations.kernel append value.

    :param recipe: The recipe
    :type recipe: Recipe object
    :returns: append value or empty string
    :rtype: str

pylorax.api.compose.get_keyboard_layout(recipe)
    Return the customizations.locale.keyboard list.

    :param recipe: The recipe
    :type recipe: Recipe object
    :returns: The keyboard layout string
    :rtype: str

pylorax.api.compose.get_languages(recipe)
    Return the customizations.locale.languages list.

    :param recipe: The recipe
    :type recipe: Recipe object
    :returns: list of language strings
    :rtype: list

pylorax.api.compose.get_services(recipe)
    Return the customizations.services settings.

    :param recipe: The recipe
    :type recipe: Recipe object
    :returns: A dict of settings
    :rtype: dict

pylorax.api.compose.get_timezone_settings(recipe)
    Return the customizations.timezone dict.

    :param recipe: The recipe
    :type recipe: Recipe object
    :returns: A dict of settings
    :rtype: dict
pylorax.api.compose.keyboard_cmd(line, layout)
    Update the keyboard line with the layout.

    :param line: The keyboard ... line
    :type line: str
    :param layout: The keyboard layout
    :type layout: str

    Using pykickstart to process the line is the best way to make sure it
    is parsed correctly, and re-assembled for inclusion in the final kickstart.

pylorax.api.compose.lang_cmd(line, languages)
    Update the lang line with the languages.

    :param line: The lang ... line
    :type line: str
    :param languages: The list of languages
    :type languages: list

    Using pykickstart to process the line is the best way to make sure it
    is parsed correctly, and re-assembled for inclusion in the final kickstart.

pylorax.api.compose.move_compose_results(cfg, results_dir)
    Move the final image to the results_dir and clean up the unneeded compose files.

    :param cfg: Build configuration
    :type cfg: DataHolder
    :param results_dir: Directory to put the results into
    :type results_dir: str

pylorax.api.compose.repo_to_ks(r, url='url')
    Return a kickstart line with the correct args.

    :param r: DNF repository information
    :type r: dnf.Repo
    :param url: "url" or "baseurl" to use for the baseurl parameter
    :type url: str
    :returns: kickstart command arguments for the url/repo command
    :rtype: str

    Set url to "baseurl" if it is a repo, leave it as "url" for the installation url.
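The idea behind repo_to_ks can be sketched like this (hypothetical: the dict fields below are illustrative stand-ins for dnf.Repo attributes, and the real function handles more options):

```python
# Build the argument portion of a kickstart url/repo command from
# repository information (simplified sketch, not the pylorax source).
def repo_to_ks(repo, url="url"):
    args = []
    if repo.get("baseurl"):
        args.append('--%s="%s"' % (url, repo["baseurl"][0]))
    elif repo.get("metalink"):
        args.append('--metalink="%s"' % repo["metalink"])
    elif repo.get("mirrorlist"):
        args.append('--mirrorlist="%s"' % repo["mirrorlist"])
    if repo.get("proxy"):
        args.append('--proxy="%s"' % repo["proxy"])
    return " ".join(args)

line = "repo --name=custom " + repo_to_ks(
    {"baseurl": ["https://example.com/repo/"]}, url="baseurl")
```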
pylorax.api.compose.services_cmd(line, settings)
    Update the services line with additional services to enable/disable.

    :param line: The services ... line
    :type line: str
    :param settings: A dict with the list of services to enable and disable
    :type settings: dict

    Using pykickstart to process the line is the best way to make sure it
    is parsed correctly, and re-assembled for inclusion in the final kickstart.

pylorax.api.compose.start_build(cfg, dnflock, gitlock, branch, recipe_name, compose_type, test_mode=0)
    Start the build.

    :param cfg: Configuration object
    :type cfg: ComposerConfig
    :param dnflock: Lock and YumBase for depsolving
    :type dnflock: YumLock
    :param recipe_name: The recipe to build
    :type recipe_name: str
    :param compose_type: The type of output to create from the recipe
    :type compose_type: str
    :returns: Unique ID for the build that can be used to track its status
    :rtype: str

pylorax.api.compose.test_templates(dbo, share_dir)
    Try depsolving each of the templates and report any errors.

    :param dbo: dnf base object
    :type dbo: dnf.Base
    :returns: List of template types and errors
    :rtype: List of errors

    Return a list of templates and errors encountered, or an empty list.
pylorax.api.compose.timezone_cmd(line, settings)
    Update the timezone line with the settings.

    :param line: The timezone ... line
    :type line: str
    :param settings: A dict with the timezone and/or an ntpservers list
    :type settings: dict

    Using pykickstart to process the line is the best way to make sure it
    is parsed correctly, and re-assembled for inclusion in the final kickstart.

pylorax.api.compose.write_ks_group(f, group)
    Write a kickstart group entry.

    :param f: kickstart file object
    :type f: open file object
    :param group: A blueprint group dictionary

    gid is optional.

pylorax.api.compose.write_ks_root(f, user)
    Write the kickstart root password and sshkey entry.

    :param f: kickstart file object
    :type f: open file object
    :param user: A blueprint user dictionary
    :type user: dict
    :returns: True if it wrote a rootpw command to the kickstart
    :rtype: bool

    If the entry contains an ssh key, use sshkey to write it.
    If it contains a password, use rootpw to set it.

    root cannot be used with the user command, so only key and password are
    supported for root.

pylorax.api.compose.write_ks_user(f, user)
    Write a kickstart user and sshkey entry.

    :param f: kickstart file object
    :type f: open file object
    :param user: A blueprint user dictionary
    :type user: dict

    If the entry contains an ssh key, use sshkey to write it.
    All of the user fields are optional except name; write out a kickstart user
    entry with whatever options are relevant.

pylorax.api.config module

class pylorax.api.config.ComposerConfig(defaults=None, dict_type=<class 'dict'>, allow_no_value=False, *, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True, default_section='DEFAULT', interpolation=<object object>, converters=<object object>)
    Bases: configparser.ConfigParser

    get_default(section, option, default)

pylorax.api.config.configure(conf_file='/etc/lorax/composer.conf', root_dir='/', test_config=False)
    lorax-composer configuration.

    :param conf_file: Path to the config file overriding the default settings
    :type conf_file: str
    :param root_dir: Directory to prepend to paths, defaults to /
    :type root_dir: str
    :param test_config: Set to True to skip reading conf_file
    :type test_config: bool
    :returns: Configuration
    :rtype: ComposerConfig

pylorax.api.config.make_dnf_dirs(conf, uid, gid)
    Make any missing dnf directories owned by user:group.

    :param conf: The configuration to use
    :type conf: ComposerConfig
    :param uid: uid of owner
    :type uid: int
    :param gid: gid of owner
    :type gid: int
    :returns: list of errors
    :rtype: list of str

pylorax.api.config.make_owned_dir(p_dir, uid, gid)
    Make a directory and its parents, setting owner and group.

    :param p_dir: path to the directory to create
    :type p_dir: string
    :param uid: uid of owner
    :type uid: int
    :param gid: gid of owner
    :type gid: int
    :returns: list of errors
    :rtype: list of str

    Check to make sure it does not have o+rw permissions and that it is owned by uid:gid.

pylorax.api.config.make_queue_dirs(conf, gid)
    Make any missing queue directories.

    :param conf: The configuration to use
    :type conf: ComposerConfig
    :param gid: Group ID that has access to the queue directories
    :type gid: int
    :returns: list of errors
    :rtype: list of str
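A get_default() helper on a ConfigParser subclass can be sketched like this (a hypothetical minimal reimplementation, not the pylorax source):

```python
from configparser import ConfigParser

# Minimal sketch: return the option's value when present, otherwise the
# supplied default, instead of raising NoSectionError/NoOptionError.
class ComposerConfig(ConfigParser):
    def get_default(self, section, option, default):
        try:
            return self.get(section, option)
        except Exception:
            return default

conf = ComposerConfig()
conf.read_string("[composer]\nshare_dir = /usr/share/lorax\n")
value = conf.get_default("composer", "share_dir", "/tmp")
fallback = conf.get_default("composer", "missing", "fallback")
```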

pylorax.api.dnfbase module

class pylorax.api.dnfbase.DNFLock(conf, expire_secs=21600)
    Bases: object

    Hold the dnf.Base object and a Lock to control access to it.

    self.dbo is a property that returns the dnf.Base object, but it may change
    from one call to the next if the upstream repositories have changed.

    property lock
        Check for repo updates (using the expiration time) and return the lock.

        If the repository has been updated, tear down the old dnf.Base and
        create a new one. This is the only way to force dnf to use the new
        metadata.

    property lock_check
        Force a check for repo updates and return the lock.

        Use this method sparingly, it removes the repodata and downloads a new
        copy every time.

pylorax.api.dnfbase.get_base_object(conf)
    Get the DNF object with settings from the config file.

    :param conf: configuration object
    :type conf: ComposerConfig
    :returns: A DNF Base object
    :rtype: dnf.Base

pylorax.api.errors module

pylorax.api.flask_blueprint module

Flask Blueprints that support skipping routes.

When using Blueprints for API versioning you will usually want to fall back
to the previous version's rules for routes that have no new behavior. To do
this we add a 'skip_rules' list to the Blueprint's options dictionary. It lists
all of the routes that you do not want to register.

For example::

    from pylorax.api.v0 import v0
    from pylorax.api.v1 import v1

    server.register_blueprint(v0, url_prefix="/api/v0/")
    server.register_blueprint(v0, url_prefix="/api/v1/", skip_rules=["/blueprints/list"])
    server.register_blueprint(v1, url_prefix="/api/v1/")

This will register all of v0's routes under /api/v0, and all but /blueprints/list
under /api/v1, and then register v1's version of /blueprints/list under /api/v1.

class pylorax.api.flask_blueprint.BlueprintSetupStateSkip(blueprint, app, options, first_registration, skip_rules)
    Bases: flask.blueprints.BlueprintSetupState

    add_url_rule(rule, endpoint=None, view_func=None, **options)
        A helper method to register a rule (and optionally a view function)
        with the application. The endpoint is automatically prefixed with the
        blueprint's name.

class pylorax.api.flask_blueprint.BlueprintSkip(*args, **kwargs)
    Bases: flask.blueprints.Blueprint

    make_setup_state(app, options, first_registration=False)
        Creates an instance of a BlueprintSetupState() object that is later
        passed to the register callback functions. Subclasses can override
        this to return a subclass of the setup state.

pylorax.api.gitrpm module

Clone a git repository and package it as an rpm.

This module contains functions for cloning a git repo, creating a tar archive of
the selected commit, branch, or tag, and packaging the files into an rpm that will
be installed by anaconda when creating the image.

class pylorax.api.gitrpm.GitArchiveTarball(gitRepo)
    Bases: object

    Create a git archive of the selected git repo and reference.

    write_file(sourcesDir)
        Create the tar archive.

        :param sourcesDir: Path to use for creating the archive
        :type sourcesDir: str

        This clones the git repository and creates a git archive from the
        specified reference. The result is in RPMNAME.tar.xz under the sourcesDir.

class pylorax.api.gitrpm.GitRpmBuild(*args, **kwargs)
    Bases: rpmfluff.rpmbuild.SimpleRpmBuild

    Build an rpm containing files from a git repository.

    add_git_tarball(gitRepo)
        Add a tar archive of a git repository to the rpm.

        :param gitRepo: A dict with the repository details
        :type gitRepo: dict

        This populates the rpm with the URL of the git repository, the summary
        describing the repo, the description of the repository and reference used,
        and sets up the rpm to install the archive contents into the destination
        path.

    check()

    clean()
        Remove the base directory from inside the tmpdir.

    cleanup_tmpdir()
        Remove the temporary directory and all of its contents.

    get_base_dir()
        Place all the files under a temporary directory + rpmbuild/

pylorax.api.gitrpm.create_gitrpm_repo(results_dir, recipe)
    Create a dnf repository with the rpms from the recipe.

    :param results_dir: Path to create the repository under
    :type results_dir: str
    :param recipe: The recipe to get the repos.git entries from
    :type recipe: Recipe
    :returns: Path to the dnf repository or ""
    :rtype: str

    This function creates a dnf repository directory at results_dir+"repo/",
    creates rpms for all of the repos.git entries in the recipe, runs createrepo_c
    on the dnf repository so that Anaconda can use it, and returns the path to the
    repository to the caller.

pylorax.api.gitrpm.get_repo_description(gitRepo)
    Return a description including the git repo and reference.

    :param gitRepo: A dict with the repository details
    :type gitRepo: dict
    :returns: A string with the git repo url and reference
    :rtype: str

pylorax.api.gitrpm.make_git_rpm(gitRepo, dest)
    Create an rpm from the specified git repo.

    :param gitRepo: A dict with the repository details
    :type gitRepo: dict

    This will clone the git repository, create an archive of the selected reference,
    and build an rpm that will install the files from the repository under the
    destination directory. The gitRepo dict should have the following fields::

        rpmname: "server-config"
        rpmversion: "1.0"
        rpmrelease: "1"
        summary: "Setup files for server deployment"
        repo: "PATH OF GIT REPO TO CLONE"
        ref: "v1.0"
        destination: "/opt/server/"

    * rpmname: Name of the rpm to create, also used as the prefix name in the tar archive
    * rpmversion: Version of the rpm, eg. "1.0.0"
    * rpmrelease: Release of the rpm, eg. "1"
    * summary: Summary string for the rpm
    * repo: URL of the git repo to clone and create the archive from
    * ref: Git reference to check out, eg. origin/branch-name, a git tag, or a git commit hash
    * destination: Path to install the / of the git repo at when installing the rpm
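In a blueprint these fields are carried in a ``[[repos.git]]`` TOML entry; a sketch with illustrative values (the repo URL and paths below are placeholders, not from the source)::

    [[repos.git]]
    rpmname = "server-config"
    rpmversion = "1.0"
    rpmrelease = "1"
    summary = "Setup files for server deployment"
    repo = "https://example.com/server-config.git"
    ref = "v1.0"
    destination = "/opt/server/"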

pylorax.api.projects module

exception pylorax.api.projects.ProjectsError
    Bases: Exception

pylorax.api.projects.api_changelog(changelog)
    Convert the changelog to a string.

    :param changelog: A list of (time, author, string) tuples
    :type changelog: tuple
    :returns: The most recent changelog text or ""
    :rtype: str

    This returns only the most recent changelog entry.

pylorax.api.projects.api_time(t)
    Convert time since epoch to a string.

    :param t: Seconds since epoch
    :type t: int
    :returns: Time string
    :rtype: str

pylorax.api.projects.delete_repo_source(source_glob, source_id)
    Delete a source from a repo file.

    :param source_glob: A glob of the repo sources to search
    :type source_glob: str
    :param source_id: The repo id to delete
    :type source_id: str
    :returns: None
    :raises: ProjectsError if there was a problem

    A repo file may have multiple sources in it; delete only the selected source.
    If it is the last one in the file, delete the file.

    WARNING: This will delete ANY source, the caller needs to ensure that a system
    source_id isn't passed to it.
pylorax.api.projects.dep_evra(dep)
    Return the epoch:version-release.arch for the dep.

    :param dep: dependency dict
    :type dep: dict
    :returns: epoch:version-release.arch
    :rtype: str

pylorax.api.projects.dep_nevra(dep)
    Return the name-epoch:version-release.arch.
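The EVRA formatting can be sketched like this (hypothetical: the dict fields mirror hawkey-style attributes, and the handling of a zero epoch is an assumption):

```python
# Format epoch:version-release.arch from a dependency dict
# (simplified sketch, not the pylorax source).
def dep_evra(dep):
    if dep["epoch"] == 0:
        # Assumption: omit a zero epoch, as NEVRA strings usually do.
        return "%s-%s.%s" % (dep["version"], dep["release"], dep["arch"])
    return "%d:%s-%s.%s" % (dep["epoch"], dep["version"],
                            dep["release"], dep["arch"])

evra = dep_evra({"epoch": 1, "version": "1.2",
                 "release": "3.fc35", "arch": "x86_64"})
```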
pylorax.api.projects.dnf_repo_to_file_repo(repo)
    Return a string representation of a DNF Repo object suitable for writing to a
    .repo file.

    :param repo: DNF Repository
    :type repo: dnf.RepoDict
    :returns: A string
    :rtype: str

    The DNF Repo.dump() function does not produce a string that can be used as a
    dnf .repo file, it outputs baseurl and gpgkey as python lists which DNF cannot
    read. So do this manually with only the attributes we care about.

pylorax.api.projects.estimate_size(packages, block_size=6144)
    Estimate the installed size of a package list.

    :param packages: The packages to be installed
    :type packages: list of hawkey.Package objects
    :param block_size: The block size to use for rounding up file sizes
    :type block_size: int
    :returns: The estimated size of installed packages
    :rtype: int

    Estimating actual requirements is difficult without the actual file sizes, which
    dnf doesn't provide access to. So use the file count and block size to estimate
    a minimum size for each package.
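The minimum-size estimate described above can be sketched as follows (hypothetical: real packages are hawkey.Package objects, represented here as dicts with just the field the estimate needs):

```python
# Without per-file sizes, assume each file in a package occupies at
# least one filesystem block (simplified sketch, not the pylorax source).
def estimate_size(packages, block_size=6144):
    return sum(len(pkg["files"]) * block_size for pkg in packages)

pkgs = [{"files": ["/usr/bin/a", "/usr/share/a.conf"]},
        {"files": ["/usr/bin/b"]}]
size = estimate_size(pkgs)
```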
pylorax.api.projects.get_repo_sources(source_glob)
    Return a list of sources from a directory of yum repositories.

    :param source_glob: A glob to use to match the source files, including the full path
    :type source_glob: str
    :returns: A list of the source ids in all of the matching files
    :rtype: list of str

pylorax.api.projects.get_source_ids(source_path)
    Return a list of the source ids in a file.

    :param source_path: Full path and filename of the source (yum repo) file
    :type source_path: str
    :returns: A list of source id strings
    :rtype: list of str
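Since yum .repo files are INI-style, the source ids are just the section names; the idea can be sketched like this (a hypothetical variant that takes the file contents as a string for illustration, where the real function reads from a path):

```python
from configparser import ConfigParser

# Parse a .repo file's contents and return its section names, which are
# the source ids (simplified sketch, not the pylorax source).
def get_source_ids_from_string(repo_text):
    cfg = ConfigParser()
    cfg.read_string(repo_text)
    return cfg.sections()

ids = get_source_ids_from_string(
    "[fedora]\nname=Fedora\n\n[fedora-updates]\nname=Fedora Updates\n")
```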
pylorax.api.projects.modules_info(dbo, module_names)

   Return details about a module, including dependencies

   Parameters:
      * dbo (dnf.Base) -- dnf base object

      * module_names (str) -- Names of the modules to get info about

   Returns:
      List of dicts with module details and dependencies.

   Return type:
      list of dicts
pylorax.api.projects.modules_list(dbo, module_names)

   Return a list of modules

   Parameters:
      * dbo (dnf.Base) -- dnf base object

      * offset (int) -- Number of modules to skip

      * limit (int) -- Maximum number of modules to return

   Returns:
      List of module information and total count

   Return type:
      tuple of a list of dicts and an int

   Modules don't exist in RHEL7 so this only returns projects and sets the
   type to "rpm"
pylorax.api.projects.new_repo_source(dbo, repoid, source, repo_dir)

   Add a new repo source from a Weldr source dict

   Parameters:
      * dbo (dnf.Base) -- dnf base object

      * repoid (str) -- The repo id (API v0 uses the name, v1 uses the id)

      * source (dict) -- A Weldr source dict

   Returns:
      None

   Raises:
      ...

   Make sure access to the dbo has been locked before calling this.
   The repoid parameter will be the 'name' field for API v0, and the 'id'
   field for API v1.

   DNF variables will be substituted at load time, and on restart.
pylorax.api.projects.pkg_to_build(pkg)

   Extract the build details from a hawkey.Package object

   Parameters:
      pkg (hawkey.Package) -- hawkey.Package object with package details

   Returns:
      A dict with the build details, epoch, release, arch, build_time, changelog, ...

   Return type:
      dict

   metadata entries are hard-coded to {}

   Note that this only returns the build dict, it does not include the name,
   description, etc.
pylorax.api.projects.pkg_to_dep(pkg)

   Extract the info from a hawkey.Package object

   Parameters:
      pkg (hawkey.Package) -- A hawkey.Package object

   Returns:
      A dict with name, epoch, version, release, arch

   Return type:
      dict
pylorax.api.projects.pkg_to_project(pkg)

   Extract the details from a hawkey.Package object

   Parameters:
      pkg (hawkey.Package) -- hawkey.Package object with package details

   Returns:
      A dict with the name, summary, description, and url.

   Return type:
      dict

   upstream_vcs is hard-coded to UPSTREAM_VCS
pylorax.api.projects.pkg_to_project_info(pkg)

   Extract the details from a hawkey.Package object

   Parameters:
      pkg (hawkey.Package) -- hawkey.Package object with package details

   Returns:
      A dict with the project details, as well as epoch, release, arch, build_time, changelog, ...

   Return type:
      dict

   metadata entries are hard-coded to {}
pylorax.api.projects.proj_to_module(proj)

   Extract the name from a project_info dict

   Parameters:
      proj (dict) -- dict with package details

   Returns:
      A dict with name, and group_type

   Return type:
      dict

   group_type is hard-coded to "rpm"
pylorax.api.projects.projects_depsolve(dbo, projects, groups)

   Return the dependencies for a list of projects

   Parameters:
      * dbo (dnf.Base) -- dnf base object

      * projects (list of str) -- The projects to find the dependencies for

      * groups (list of str) -- The groups to include in dependency solving

   Returns:
      NEVRAs of the project and its dependencies

   Return type:
      list of dicts

   Raises:
      ProjectsError if there was a problem installing something
pylorax.api.projects.projects_depsolve_with_size(dbo, projects, groups, with_core=True)

   Return the dependencies and installed size for a list of projects

   Parameters:
      * dbo (dnf.Base) -- dnf base object

      * project_names (list of str) -- The projects to find the dependencies for

      * groups (list of str) -- The groups to include in dependency solving

   Returns:
      installed size and a list of NEVRAs of the project and its dependencies

   Return type:
      tuple of (int, list of dicts)

   Raises:
      ProjectsError if there was a problem installing something
pylorax.api.projects.projects_info(dbo, project_names)

   Return details about specific projects

   Parameters:
      * dbo (dnf.Base) -- dnf base object

      * project_names (str) -- List of names of projects to get info about

   Returns:
      List of project info dicts with pkg_to_project as well as epoch, version, release, etc.

   Return type:
      list of dicts

   If project_names is None it will return the full list of available packages
pylorax.api.projects.projects_list(dbo)

   Return a list of projects

   Parameters:
      dbo (dnf.Base) -- dnf base object

   Returns:
      List of project info dicts with name, summary, description, homepage, upstream_vcs

   Return type:
      list of dicts
pylorax.api.projects.repo_to_source(repo, system_source, api=1)

   Return a Weldr Source dict created from the DNF Repository

   Parameters:
      * repo (dnf.RepoDict) -- DNF Repository

      * system_source (bool) -- True if this source is an immutable system source

      * api (int) -- Select which api version of the dict to return (default 1)

   Returns:
      A dict with Weldr Source fields filled in

   Return type:
      dict

   Example:

      {
          "check_gpg": true,
          "check_ssl": true,
          "gpgkey_urls": [
              "file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-28-x86_64"
          ],
          "id": "fedora",
          "name": "Fedora $releasever - $basearch",
          "proxy": "http://proxy.brianlane.com:8123",
          "system": true,
          "type": "yum-metalink",
          "url": "https://mirrors.fedoraproject.org/metalink?repo=fedora-28&arch=x86_64"
      }

   The name field has changed in v1 of the API. In v0 of the API name is the
   repo.id, in v1 it is the repo.name and a new field, id, has been added for
   the repo.id
pylorax.api.projects.source_to_repo(source, dnf_conf)

   Return a dnf Repo object created from a source dict

   Parameters:
      * source (dict) -- A Weldr source dict

      * dnf_conf (dnf.conf) -- The dnf Config object

   Returns:
      A dnf Repo object

   Return type:
      dnf.Repo

   Example:

      {
          "check_gpg": True,
          "check_ssl": True,
          "gpgkey_urls": [
              "file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-28-x86_64"
          ],
          "id": "fedora",
          "name": "Fedora $releasever - $basearch",
          "proxy": "http://proxy.brianlane.com:8123",
          "system": True,
          "type": "yum-metalink",
          "url": "https://mirrors.fedoraproject.org/metalink?repo=fedora-28&arch=x86_64"
      }

   If the id field is included it is used for the repo id, otherwise name is used.
   v0 of the API only used name, v1 added the distinction between id and name.
pylorax.api.projects.source_to_repodict(source)

   Return a tuple suitable for use with dnf.add_new_repo

   Parameters:
      source (dict) -- A Weldr source dict

   Returns:
      A tuple of dnf.Repo attributes

   Return type:
      (str, list, dict)

   Return a tuple with (id, baseurl|(), kwargs) that can be used with
   dnf.repos.add_new_repo
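A sketch of how such a tuple might be assembled from a Weldr source dict. The field handling here is an assumption based on the source dicts shown above, not the exact upstream logic:

```python
def source_to_repodict(source):
    """Build a (repoid, baseurls, kwargs) tuple from a Weldr source dict."""
    # v1 sources carry an "id" field; v0 sources only had "name"
    repoid = source.get("id", source["name"])
    if source["type"] == "yum-baseurl":
        return (repoid, [source["url"]], {})
    # metalink and mirrorlist urls are passed through kwargs instead of baseurl
    key = "metalink" if source["type"] == "yum-metalink" else "mirrorlist"
    return (repoid, (), {key: source["url"]})
```

The tuple then unpacks directly into a call such as `dbo.repos.add_new_repo(repoid, dnf_conf, baseurl, **kwargs)`.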
pylorax.api.queue module

   Functions to monitor compose queue and run anaconda
pylorax.api.queue.build_status(cfg, status_filter=None, api=1)

   Return the details of finished or failed builds

   Parameters:
      * cfg (ComposerConfig) -- Configuration settings

      * status_filter (str) -- What builds to return. None == all, "FINISHED", or "FAILED"

      * api (int) -- Select which api version of the dict to return (default 1)

   Returns:
      A list of the build details (from compose_detail)

   Return type:
      list of dicts

   This returns a list of build details for each of the matching builds on the
   system. It does not return the status of builds that have not been finished.
   Use queue_status() for those.
pylorax.api.queue.check_queues(cfg)

   Check to make sure the new and run queue symlinks are correct

   Parameters:
      cfg (DataHolder) -- Configuration settings

   Also check all of the existing results and make sure any with WAITING
   set in STATUS have a symlink in queue/new/
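The WAITING re-link step can be sketched like this. The directory layout (a `results/` directory of build dirs plus `queue/new/` symlinks) follows the description above, but the helper name and exact paths are assumptions for illustration:

```python
import os

def requeue_waiting(lib_dir):
    """Re-link any WAITING result into queue/new/ if its symlink is missing.

    Hypothetical helper: the real check_queues() also validates the run queue.
    """
    new_dir = os.path.join(lib_dir, "queue", "new")
    results_dir = os.path.join(lib_dir, "results")
    for build_id in os.listdir(results_dir):
        status_path = os.path.join(results_dir, build_id, "STATUS")
        link_path = os.path.join(new_dir, build_id)
        if not os.path.exists(status_path) or os.path.islink(link_path):
            continue
        with open(status_path) as f:
            if f.read().strip() == "WAITING":
                # Re-create the symlink so the monitor picks the build up again
                os.symlink(os.path.join(results_dir, build_id), link_path)
```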
pylorax.api.queue.compose_detail(cfg, results_dir, api=1)

   Return details about the build.

   Parameters:
      * cfg (ComposerConfig) -- Configuration settings (required for api=1)

      * results_dir (str) -- The directory containing the metadata and results for the build

      * api (int) -- Select which api version of the dict to return (default 1)

   Returns:
      A dictionary with details about the compose

   Return type:
      dict

   Raises:
      IOError if it cannot read the directory, STATUS, or blueprint file.

   The following details are included in the dict:

      * id - The uuid of the composition

      * queue_status - The final status of the composition (FINISHED or FAILED)

      * compose_type - The type of output generated (tar, iso, etc.)

      * blueprint - Blueprint name

      * version - Blueprint version

      * image_size - Size of the image, if finished. 0 otherwise.

      * uploads - For API v1 details about uploading the image are included

   Various timestamps are also included in the dict. These are all Unix UTC
   timestamps. It is possible for these timestamps to not always exist, in which
   case they will be None in Python (or null in JSON). The following timestamps
   are included:

      * job_created - When the user submitted the compose

      * job_started - Anaconda started running

      * job_finished - Job entered FINISHED or FAILED state
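An illustrative compose_detail() result for a finished qcow2 build; every value here is hypothetical, invented only to show the shape of the fields listed above:

```python
# Hypothetical compose_detail() result (all values made up for illustration)
detail = {
    "id": "8c8435ef-d6bd-4805-b648-1e01e78b6d3f",   # uuid of the composition
    "queue_status": "FINISHED",
    "compose_type": "qcow2",
    "blueprint": "http-server",
    "version": "0.0.2",
    "image_size": 2147483648,                        # bytes; 0 until finished
    "job_created": 1587404313.4221,                  # Unix UTC timestamps,
    "job_started": 1587404313.5807,                  # None if the stage was
    "job_finished": 1587404588.1424,                 # never reached
}
```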
pylorax.api.queue.get_compose_type(results_dir)

   Return the type of composition.

   Parameters:
      results_dir (str) -- The directory containing the metadata and results for the build

   Returns:
      The type of compose (eg. 'tar')

   Return type:
      str

   Raises:
      RuntimeError if no kickstart template can be found.
pylorax.api.queue.get_image_name(uuid_dir)

   Return the filename and full path of the build's image file

   Parameters:
      uuid (str) -- The UUID of the build

   Returns:
      The image filename and full path

   Return type:
      tuple of strings

   Raises:
      RuntimeError if there was a problem (eg. invalid uuid, missing config file)
pylorax.api.queue.make_compose(cfg, results_dir)

   Run anaconda with the final-kickstart.ks from results_dir

   Parameters:
      * cfg (DataHolder) -- Configuration settings

      * results_dir (str) -- The directory containing the metadata and results for the build

   Returns:
      Nothing

   Raises:
      May raise various exceptions

   This takes the final-kickstart.ks, and the settings in config.toml and runs
   Anaconda in no-virt mode (directly on the host operating system). Exceptions
   should be caught at the higher level.

   If there is a failure, the build artifacts will be cleaned up, and any logs
   will be moved into logs/anaconda/ and their ownership will be set to the
   user from the cfg object.
pylorax.api.queue.monitor(cfg)

   Monitor the queue for new compose requests

   Parameters:
      cfg (DataHolder) -- Configuration settings

   Returns:
      Does not return

   The queue has 2 subdirectories, new and run. When a compose is ready to be
   run a symlink to the uniquely named results directory should be placed in
   ./queue/new/

   When it is ready to be run (it is checked every 30 seconds or after a
   previous compose is finished) the symlink will be moved into ./queue/run/
   and a STATUS file will be created in the results directory.

   STATUS can contain one of: WAITING, RUNNING, FINISHED, FAILED

   If the system is restarted while a compose is running it will move any old
   symlinks from ./queue/run/ to ./queue/new/ and rerun them.
pylorax.api.queue.queue_status(cfg, api=1)

   Return details about what is in the queue.

   Parameters:
      * cfg (ComposerConfig) -- Configuration settings

      * api (int) -- Select which api version of the dict to return (default 1)

   Returns:
      A list of the new composes, and a list of the running composes

   Return type:
      dict

   This returns a dict with 2 lists. "new" is the list of uuids that are
   waiting to be built, and "run" has the uuids that are being built (currently
   limited to 1 at a time).
pylorax.api.queue.start_queue_monitor(cfg, uid, gid)

   Start the queue monitor as a multiprocessing process

   Parameters:
      * cfg (ComposerConfig) -- Configuration settings

      * uid (int) -- User ID that owns the queue

      * gid (int) -- Group ID that owns the queue

   Returns:
      None
pylorax.api.queue.uuid_add_upload(cfg, uuid, upload_uuid)

   Add an upload UUID to a build

   Parameters:
      * cfg (ComposerConfig) -- Configuration settings

      * uuid (str) -- The UUID of the build

      * upload_uuid (str) -- The UUID of the upload

   Returns:
      None

   Return type:
      None
pylorax.api.queue.uuid_cancel(cfg, uuid)

   Cancel a build and delete its results

   Parameters:
      * cfg (ComposerConfig) -- Configuration settings

      * uuid (str) -- The UUID of the build

   Returns:
      True if it was canceled and deleted

   Return type:
      bool

   Only call this if the build status is WAITING or RUNNING
pylorax.api.queue.uuid_delete(cfg, uuid)

   Delete all of the results from a compose

   Parameters:
      * cfg (ComposerConfig) -- Configuration settings

      * uuid (str) -- The UUID of the build

   Returns:
      True if it was deleted

   Return type:
      bool

   Raises:
      This will raise an error if the delete failed
pylorax.api.queue.uuid_get_uploads(cfg, uuid)

   Return the list of uploads for a build uuid

   Parameters:
      * cfg (ComposerConfig) -- Configuration settings

      * uuid (str) -- The UUID of the build

   Returns:
      The upload UUIDs associated with the build UUID

   Return type:
      frozenset
pylorax.api.queue.uuid_image(cfg, uuid)

   Return the filename and full path of the build's image file

   Parameters:
      * cfg (ComposerConfig) -- Configuration settings

      * uuid (str) -- The UUID of the build

   Returns:
      The image filename and full path

   Return type:
      tuple of strings

   Raises:
      RuntimeError if there was a problem (eg. invalid uuid, missing config file)
pylorax.api.queue.uuid_info(cfg, uuid, api=1)

   Return information about the composition

   Parameters:
      * cfg (ComposerConfig) -- Configuration settings

      * uuid (str) -- The UUID of the build

   Returns:
      dictionary of information about the composition or None

   Return type:
      dict

   Raises:
      RuntimeError if there was a problem

   This will return a dict with the following fields populated:

      * id - The uuid of the composition

      * config - containing the configuration settings used to run Anaconda

      * blueprint - The depsolved blueprint used to generate the kickstart

      * commit - The (local) git commit hash for the blueprint used

      * deps - The NEVRA of all of the dependencies used in the composition

      * compose_type - The type of output generated (tar, iso, etc.)

      * queue_status - The final status of the composition (FINISHED or FAILED)
pylorax.api.queue.uuid_log(cfg, uuid, size=1024)

   Return size KiB from the end of the most currently relevant log for a
   given compose

   Parameters:
      * cfg (ComposerConfig) -- Configuration settings

      * uuid (str) -- The UUID of the build

      * size (int) -- Number of KiB to read. Default is 1024

   Returns:
      Up to size KiB from the end of the log

   Return type:
      str

   Raises:
      RuntimeError if there was a problem (eg. no log file available)

   This function will return the end of either the anaconda log, the packaging
   log, or the combined composer logs, depending on the progress of the
   compose. It tries to return lines from the end of the log, it will attempt
   to start on a line boundary, and it may return less than size KiB.
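The line-boundary tail behavior can be sketched like this, operating on an open binary file object rather than a compose uuid (the real function also works in KiB and picks which log file to read):

```python
def tail_log(f, size=1024):
    """Return up to `size` bytes from the end of `f`, starting on a line boundary.

    Sketch of uuid_log()'s tail-reading step only.
    """
    f.seek(0, 2)                 # seek to the end to learn the length
    end = f.tell()
    start = max(0, end - size)
    f.seek(start)
    if start > 0:
        f.readline()             # drop the partial first line
    return f.read()
```

Because the partial leading line is dropped, the result can be shorter than the requested size.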
pylorax.api.queue.uuid_ready_upload(cfg, uuid, upload_uuid)

   Set an upload to READY if the build is in FINISHED state

   Parameters:
      * cfg (ComposerConfig) -- Configuration settings

      * uuid (str) -- The UUID of the build

      * upload_uuid (str) -- The UUID of the upload

   Returns:
      None

   Return type:
      None

   Raises:
      RuntimeError if the build uuid is invalid or not in FINISHED state.
pylorax.api.queue.uuid_remove_upload(cfg, upload_uuid)

   Remove an upload UUID from the build

   Parameters:
      * cfg (ComposerConfig) -- Configuration settings

      * upload_uuid (str) -- The UUID of the upload

   Returns:
      None

   Return type:
      None

   Raises:
      RuntimeError if the upload_uuid is not found
pylorax.api.queue.uuid_schedule_upload(cfg, uuid, provider_name, image_name, settings)

   Schedule an upload of an image

   Parameters:
      * cfg (ComposerConfig) -- Configuration settings

      * uuid (str) -- The UUID of the build

      * provider_name (str) -- The name of the cloud provider, e.g. "azure"

      * image_name (str) -- Path of the image to upload

      * settings (dict) -- Settings to use for the selected provider

   Returns:
      uuid of the upload

   Return type:
      str

   Raises:
      RuntimeError if the uuid is not a valid build uuid
pylorax.api.queue.uuid_status(cfg, uuid, api=1)

   Return the details of a specific UUID compose

   Parameters:
      * cfg (ComposerConfig) -- Configuration settings

      * uuid (str) -- The UUID of the build

      * api (int) -- Select which api version of the dict to return (default 1)

   Returns:
      Details about the build

   Return type:
      dict or None

   Returns the same dict as compose_detail()
pylorax.api.queue.uuid_tar(cfg, uuid, metadata=False, image=False, logs=False)

   Return a tar of the build data

   Parameters:
      * cfg (ComposerConfig) -- Configuration settings

      * uuid (str) -- The UUID of the build

      * metadata (bool) -- Set to true to include all the metadata needed to reproduce the build

      * image (bool) -- Set to true to include the output image

      * logs (bool) -- Set to true to include the logs from the build

   Returns:
      A stream of bytes from tar

   Return type:
      A generator

   Raises:
      RuntimeError if there was a problem (eg. missing config file)

   This yields an uncompressed tar's data to the caller. It streams the
   selected data to the caller by returning the Popen stdout from the tar
   process.
pylorax.api.recipes module

class pylorax.api.recipes.CommitDetails(commit, timestamp, message, revision=None)

   Bases: pylorax.base.DataHolder

exception pylorax.api.recipes.CommitTimeValError

   Bases: Exception
pylorax.api.recipes.NewRecipeGit(toml_dict)

   Create a RecipeGit object from fields in a TOML dict

   Parameters:
      * rpmname (str) -- Name of the rpm to create, also used as the prefix name in the tar archive

      * rpmversion (str) -- Version of the rpm, eg. "1.0.0"

      * rpmrelease (str) -- Release of the rpm, eg. "1"

      * summary (str) -- Summary string for the rpm

      * repo (str) -- URL of the git repo to clone and create the archive from

      * ref (str) -- Git reference to check out. eg. origin/branch-name, git tag, or git commit hash

      * destination (str) -- Path to install the / of the git repo at when installing the rpm

   Returns:
      A populated RecipeGit object

   Return type:
      RecipeGit

   The TOML should look like this:

      [[repos.git]]
      rpmname="server-config"
      rpmversion="1.0"
      rpmrelease="1"
      summary="Setup files for server deployment"
      repo="PATH OF GIT REPO TO CLONE"
      ref="v1.0"
      destination="/opt/server/"

   Note that the repo path supports anything that git supports, file://, https://, http://

   Currently there is no support for authentication
class pylorax.api.recipes.Recipe(name, description, version, modules, packages, groups, customizations=None, gitrepos=None)

   Bases: dict

   A Recipe of packages and modules

   This is a subclass of dict that enforces the constructor arguments
   and adds a .filename property to return the recipe's filename,
   and a .toml() function to return the recipe as a TOML string.
   bump_version(old_version=None)

      semver recipe version number bump

      Parameters:
         old_version (str) -- An optional old version number

      Returns:
         The new version number or None

      Return type:
         str

      Raises:
         ValueError

      If neither has a version, 0.0.1 is returned.
      If there is no old version the new version is checked and returned.
      If there is no new version, but there is an old one, bump its patch level.
      If the old and new versions are the same, bump the patch level.
      If they are different, check and return the new version.
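Those rules can be sketched as a standalone function (patch-level bumping only; the real method validates versions, e.g. via a semver library, which is omitted here):

```python
def bump_version(new_version, old_version=None):
    """Apply the semver bump rules described above."""
    def bump_patch(version):
        major, minor, patch = (int(part) for part in version.split("."))
        return "%d.%d.%d" % (major, minor, patch + 1)

    if not new_version and not old_version:
        return "0.0.1"                  # neither has a version
    if not old_version:
        return new_version              # nothing to compare against
    if not new_version or new_version == old_version:
        return bump_patch(old_version)  # bump the old version's patch level
    return new_version                  # versions differ: keep the new one
```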
   property filename

      Return the Recipe's filename

      Replaces spaces in the name with '-' and appends .toml
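As a one-line sketch of that documented behavior (standalone function for illustration, not the property itself):

```python
def recipe_filename(name):
    # Spaces become '-' and the .toml extension is appended
    return name.replace(" ", "-") + ".toml"
```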
   freeze(deps)

      Return a new Recipe with full module and package NEVRA

      Parameters:
         deps (list) -- A list of dependency NEVRA to use to fill in the modules and packages

      Returns:
         A new Recipe object

      Return type:
         Recipe
   property group_names

      Return the names of the groups. Groups do not have versions.

   property module_names

      Return the names of the modules

   property module_nver

      Return the names and version globs of the modules

   property package_names

      Return the names of the packages

   property package_nver

      Return the names and version globs of the packages

   toml()

      Return the Recipe in TOML format
exception pylorax.api.recipes.RecipeError

   Bases: Exception

exception pylorax.api.recipes.RecipeFileError

   Bases: Exception

class pylorax.api.recipes.RecipeGit(rpmname, rpmversion, rpmrelease, summary, repo, ref, destination)

   Bases: dict

class pylorax.api.recipes.RecipeGroup(name)

   Bases: dict

class pylorax.api.recipes.RecipeModule(name, version)

   Bases: dict

class pylorax.api.recipes.RecipePackage(name, version)

   Bases: pylorax.api.recipes.RecipeModule
pylorax.api.recipes.check_list_case(expected_keys, recipe_keys, prefix='')

   Check the case of the recipe keys

   Parameters:
      * expected_keys (list of str) -- A list of expected key strings

      * recipe_keys (list of str) -- A list of the recipe's key strings

   Returns:
      list of errors

   Return type:
      list of str
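A sketch of that case check; the exact error-message wording is an assumption, only the shape of the result matches the documentation:

```python
def check_list_case(expected_keys, recipe_keys, prefix=""):
    """Return an error string for each recipe key whose case is wrong."""
    # Map lowercase forms to the expected spelling
    expected = {key.lower(): key for key in expected_keys}
    errors = []
    for key in recipe_keys:
        wanted = expected.get(key.lower())
        if wanted is not None and key != wanted:
            errors.append("%s%s should be %s" % (prefix, key, wanted))
    return errors
```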
pylorax.api.recipes.check_recipe_dict(recipe_dict)

   Check a dict before using it to create a new Recipe

   Parameters:
      recipe_dict (dict) -- A plain dict of the recipe

   Returns:
      True if dict is ok

   Return type:
      bool

   Raises:
      RecipeError

   This checks a dict to make sure required fields are present,
   that optional fields are correct, and that other optional fields
   are of the correct format, when included.

   This collects all of the errors and returns a single RecipeError with
   a string that can be presented to users.
      -pylorax.api.recipes.check_required_list(lst, fields)[source]
      -

      Check a list of dicts for required fields

      -
      -
      Parameters
      -
        -
      • lst (list of dict) -- A list of dicts with fields

      • -
      • fields (list of str) -- A list of field name strings

      • -
      -
      -
      Returns
      -

      A list of error strings

      -
      -
      Return type
      -

      list of str
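A minimal sketch of this check, assuming the documented contract (one error string per missing field). The error message format is a hypothetical choice, not the actual pylorax wording:

```python
def check_required_list(lst, fields):
    """Sketch: collect an error string for each dict missing a required field."""
    errors = []
    for i, entry in enumerate(lst):
        for field in fields:
            if field not in entry:
                # Identify the offending entry by its position in the list.
                errors.append(f"{i}: missing field '{field}'")
    return errors
```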

pylorax.api.recipes.commit_recipe(repo, branch, recipe)[source]

    Commit a recipe to a branch

    Parameters
        • repo (Git.Repository) -- Open repository
        • branch (str) -- Branch name
        • recipe (Recipe) -- Recipe to commit

    Returns
        OId of the new commit

    Return type
        Git.OId

    Raises
        Can raise errors from Ggit

pylorax.api.recipes.commit_recipe_directory(repo, branch, directory)[source]

    Commit all *.toml files from a directory, if they aren't already in git.

    Parameters
        • repo (Git.Repository) -- Open repository
        • branch (str) -- Branch name
        • directory (str) -- The directory of *.toml recipes to commit

    Returns
        None

    Raises
        Can raise errors from Ggit or RecipeFileError

    Files with TomlErrors or RecipeFileErrors will be skipped, and the remainder will be tried.

pylorax.api.recipes.commit_recipe_file(repo, branch, filename)[source]

    Commit a recipe file to a branch

    Parameters
        • repo (Git.Repository) -- Open repository
        • branch (str) -- Branch name
        • filename (str) -- Path to the recipe file to commit

    Returns
        OId of the new commit

    Return type
        Git.OId

    Raises
        Can raise errors from Ggit or RecipeFileError

pylorax.api.recipes.customizations_diff(old_recipe, new_recipe)[source]

    Diff the customizations sections from two versions of a recipe
pylorax.api.recipes.delete_file(repo, branch, filename)[source]

    Delete a file from a branch.

    Parameters
        • repo (Git.Repository) -- Open repository
        • branch (str) -- Branch name
        • filename (str) -- Filename to delete

    Returns
        OId of the new commit

    Return type
        Git.OId

    Raises
        Can raise errors from Ggit

pylorax.api.recipes.delete_recipe(repo, branch, recipe_name)[source]

    Delete a recipe from a branch.

    Parameters
        • repo (Git.Repository) -- Open repository
        • branch (str) -- Branch name
        • recipe_name (str) -- Recipe name to delete

    Returns
        OId of the new commit

    Return type
        Git.OId

    Raises
        Can raise errors from Ggit

pylorax.api.recipes.diff_lists(title, field, old_items, new_items)[source]

    Return the differences between two lists of dicts.

    Parameters
        • title (str) -- Title of the entry
        • field (str) -- Field to use as the key for comparisons
        • old_items (list(dict)) -- List of item dicts with "name" field
        • new_items (list(dict)) -- List of item dicts with "name" field

    Returns
        List of diff dicts with old/new entries

    Return type
        list(dict)
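The shape of the diff entries (old/new pairs keyed by a title, with null for added or removed items, as shown later in the v0_blueprints_diff example) can be sketched as follows. This is an illustrative implementation under those assumptions, not the actual pylorax code:

```python
def diff_lists(title, field, old_items, new_items):
    """Sketch: pair entries by `field` and emit {"old": ..., "new": ...} diffs."""
    old_by_key = {item[field]: item for item in old_items}
    new_by_key = {item[field]: item for item in new_items}
    diffs = []
    # Walk the union of keys so additions and removals both show up.
    for key in sorted(old_by_key.keys() | new_by_key.keys()):
        old = old_by_key.get(key)
        new = new_by_key.get(key)
        if old != new:
            diffs.append({"old": {title: old} if old else None,
                          "new": {title: new} if new else None})
    return diffs
```

An added item produces an entry with "old" set to None, matching the null/new convention in the route examples.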

pylorax.api.recipes.find_commit_tag(repo, branch, filename, commit_id)[source]

    Find the tag that matches the commit_id

    Parameters
        • repo (Git.Repository) -- Open repository
        • branch (str) -- Branch name
        • filename (str) -- Filename to check
        • commit_id (Git.OId) -- The commit id to check

    Returns
        The tag or None if there isn't one

    Return type
        str or None

    There should be only one tag pointing to a commit, but there may not be a tag at all.

    The tag will look like: 'refs/tags/<branch>/<filename>/r<revision>'

pylorax.api.recipes.find_field_value(field, value, lst)[source]

    Find a field matching value in the list of dicts.

    Parameters
        • field (str) -- Field to search for
        • value (str) -- Value to match in the field
        • lst (list of dict) -- List of dicts with the field

    Returns
        First dict with matching field:value, or None

    Return type
        dict or None

    Used to return a specific entry from a list that looks like this:

        [{"name": "one", "attr": "green"}, ...]

    find_field_value("name", "one", lst) will return the matching dict.
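The documented behavior is a straightforward linear search; a minimal sketch (not the actual pylorax source):

```python
def find_field_value(field, value, lst):
    """Sketch: return the first dict whose `field` equals `value`, else None."""
    for entry in lst:
        if entry.get(field) == value:
            return entry
    return None
```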

pylorax.api.recipes.find_name(name, lst)[source]

    Find the dict matching the name in a list and return it.

    Parameters
        • name (str) -- Name to search for
        • lst (list of dict) -- List of dicts with a "name" field

    Returns
        First dict with matching name, or None

    Return type
        dict or None

    This is just a wrapper for find_field_value with field set to "name"

pylorax.api.recipes.find_recipe_obj(path, recipe, default=None)[source]

    Find a recipe object

    Parameters
        • path (list of str) -- A list of dict field names
        • recipe (Recipe) -- The recipe to search
        • default (Any) -- The value to return if it is not found

    Return the object found by applying the path to the dicts in the recipe, or return the default if it doesn't exist.

    eg. {"customizations": {"hostname": "foo", "users": [...]}}

    find_recipe_obj(["customizations", "hostname"], recipe, "")
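The path lookup described above amounts to walking nested dicts key by key. A minimal sketch of that traversal, operating on a plain dict in place of a Recipe object (the actual pylorax implementation may differ):

```python
def find_recipe_obj(path, recipe, default=None):
    """Sketch: walk nested dicts by the list of keys, return default on a miss."""
    obj = recipe
    for key in path:
        # Stop as soon as the path leaves dict territory or a key is absent.
        if not isinstance(obj, dict) or key not in obj:
            return default
        obj = obj[key]
    return obj
```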

pylorax.api.recipes.get_commit_details(commit, revision=None)[source]

    Return the details about a specific commit.

    Parameters
        • commit (Git.Commit) -- The commit to get details from
        • revision (int) -- Optional commit revision

    Returns
        Details about the commit

    Return type
        CommitDetails

    Raises
        CommitTimeValError or Ggit exceptions

pylorax.api.recipes.get_revision_from_tag(tag)[source]

    Return the revision number from a tag

    Parameters
        tag (str) -- The tag to extract the revision from

    Returns
        The integer revision or None

    Return type
        int or None

    The revision is the part after the r in 'branch/filename/rXXX'
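Given the documented tag layout, extracting the revision is a matter of splitting on the final '/r' and parsing the remainder as an integer. A minimal sketch under that assumption (not the actual pylorax source):

```python
def get_revision_from_tag(tag):
    """Sketch: pull the integer after the final '/r' of a tag like 'master/http-server.toml/r3'."""
    if tag is None:
        return None
    try:
        return int(tag.rsplit("/r", 1)[1])
    except (IndexError, ValueError):
        # No '/r' segment, or the suffix is not an integer.
        return None
```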

pylorax.api.recipes.gfile(path)[source]

    Convert a string path to a GFile for use with Git

pylorax.api.recipes.head_commit(repo, branch)[source]

    Get the branch's HEAD Commit Object

    Parameters
        • repo (Git.Repository) -- Open repository
        • branch (str) -- Branch name

    Returns
        Branch's head commit

    Return type
        Git.Commit

    Raises
        Can raise errors from Ggit

pylorax.api.recipes.is_commit_tag(repo, commit_id, tag)[source]

    Check to see if a tag points to a specific commit.

    Parameters
        • repo (Git.Repository) -- Open repository
        • commit_id (Git.OId) -- The commit id to check
        • tag (str) -- The tag to check

    Returns
        True if the tag points to the commit, False otherwise

    Return type
        bool

pylorax.api.recipes.is_parent_diff(repo, filename, tree, parent)[source]

    Check to see if the commit is different from its parent

    Parameters
        • repo (Git.Repository) -- Open repository
        • filename (str) -- Filename to check
        • tree (Git.Tree) -- The commit's tree
        • parent (Git.Commit) -- The commit's parent commit

    Returns
        True if the filename in the commit is different from its parent

    Return type
        bool
pylorax.api.recipes.list_branch_files(repo, branch)[source]

    Return a sorted list of the files on the branch HEAD

    Parameters
        • repo (Git.Repository) -- Open repository
        • branch (str) -- Branch name

    Returns
        A sorted list of the filenames

    Return type
        list(str)

    Raises
        Can raise errors from Ggit

pylorax.api.recipes.list_commit_files(repo, commit)[source]

    Return a sorted list of the files on a commit

    Parameters
        • repo (Git.Repository) -- Open repository
        • commit (str) -- The commit hash to list

    Returns
        A sorted list of the filenames

    Return type
        list(str)

    Raises
        Can raise errors from Ggit

pylorax.api.recipes.list_commits(repo, branch, filename, limit=0)[source]

    List the commit history of a file on a branch.

    Parameters
        • repo (Git.Repository) -- Open repository
        • branch (str) -- Branch name
        • filename (str) -- Filename to list the history of
        • limit (int) -- Number of commits to return (0=all)

    Returns
        A list of commit details

    Return type
        list(CommitDetails)

    Raises
        Can raise errors from Ggit
pylorax.api.recipes.open_or_create_repo(path)[source]

    Open an existing repo, or create a new one

    Parameters
        path (string) -- Path to the recipe directory

    Returns
        A repository object

    Return type
        Git.Repository

    Raises
        Can raise errors from Ggit

    A bare git repo will be created in the git directory of the specified path. If a repo already exists it will be opened and returned instead of creating a new one.

pylorax.api.recipes.prepare_commit(repo, branch, builder)[source]

    Prepare for a commit

    Parameters
        • repo (Git.Repository) -- Open repository
        • branch (str) -- Branch name
        • builder (TreeBuilder) -- Instance of TreeBuilder

    Returns
        (Tree, Sig, Ref)

    Return type
        tuple

    Raises
        Can raise errors from Ggit
pylorax.api.recipes.read_commit(repo, branch, filename, commit=None)[source]

    Return the contents of a file on a specific branch or commit.

    Parameters
        • repo (Git.Repository) -- Open repository
        • branch (str) -- Branch name
        • filename (str) -- Filename to read
        • commit (str) -- Optional commit hash

    Returns
        The commit id, and the contents of the commit

    Return type
        tuple(str, str)

    Raises
        Can raise errors from Ggit

    If no commit is passed the master:filename is returned, otherwise it will be commit:filename

pylorax.api.recipes.read_commit_spec(repo, spec)[source]

    Return the raw content of the blob specified by the spec

    Parameters
        • repo (Git.Repository) -- Open repository
        • spec (str) -- Git revparse spec

    Returns
        Contents of the commit

    Return type
        str

    Raises
        Can raise errors from Ggit

    eg. To read the README file from master the spec is "master:README"
pylorax.api.recipes.read_recipe_and_id(repo, branch, recipe_name, commit=None)[source]

    Read a recipe commit and its id from git

    Parameters
        • repo (Git.Repository) -- Open repository
        • branch (str) -- Branch name
        • recipe_name (str) -- Recipe name to read
        • commit (str) -- Optional commit hash

    Returns
        The commit id, and a Recipe object

    Return type
        tuple(str, Recipe)

    Raises
        Can raise errors from Ggit

    If no commit is passed the master:filename is returned, otherwise it will be commit:filename

pylorax.api.recipes.read_recipe_commit(repo, branch, recipe_name, commit=None)[source]

    Read a recipe commit from git and return a Recipe object

    Parameters
        • repo (Git.Repository) -- Open repository
        • branch (str) -- Branch name
        • recipe_name (str) -- Recipe name to read
        • commit (str) -- Optional commit hash

    Returns
        A Recipe object

    Return type
        Recipe

    Raises
        Can raise errors from Ggit

    If no commit is passed the master:filename is returned, otherwise it will be commit:filename
pylorax.api.recipes.recipe_diff(old_recipe, new_recipe)[source]

    Diff two versions of a recipe

    Parameters
        • old_recipe (Recipe) -- The old version of the recipe
        • new_recipe (Recipe) -- The new version of the recipe

    Returns
        A list of diff dict entries with old/new

    Return type
        list(dict)

pylorax.api.recipes.recipe_filename(name)[source]

    Return the toml filename for a recipe

    Replaces spaces with '-' and appends '.toml'
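The docstring fully specifies the transformation, so a sketch is a one-liner (shown here as an illustration, not the literal pylorax source):

```python
def recipe_filename(name):
    """Sketch: map a recipe name to its on-disk TOML filename."""
    return name.replace(" ", "-") + ".toml"
```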

pylorax.api.recipes.recipe_from_dict(recipe_dict)[source]

    Create a Recipe object from a plain dict.

    Parameters
        recipe_dict (dict) -- A plain dict of the recipe

    Returns
        A Recipe object

    Return type
        Recipe

    Raises
        RecipeError

pylorax.api.recipes.recipe_from_file(recipe_path)[source]

    Return a recipe file as a Recipe object

    Parameters
        recipe_path (str) -- Path to the recipe file

    Returns
        A Recipe object

    Return type
        Recipe

pylorax.api.recipes.recipe_from_toml(recipe_str)[source]

    Create a Recipe object from a toml string.

    Parameters
        recipe_str (str) -- The Recipe TOML string

    Returns
        A Recipe object

    Return type
        Recipe

    Raises
        TomlError
pylorax.api.recipes.repo_file_exists(repo, branch, filename)[source]

    Return True if the filename exists on the branch

    Parameters
        • repo (Git.Repository) -- Open repository
        • branch (str) -- Branch name
        • filename (str) -- Filename to check

    Returns
        True if the filename exists on the HEAD of the branch, False otherwise.

    Return type
        bool
pylorax.api.recipes.revert_file(repo, branch, filename, commit)[source]

    Revert the contents of a file to that of a previous commit

    Parameters
        • repo (Git.Repository) -- Open repository
        • branch (str) -- Branch name
        • filename (str) -- Filename to revert
        • commit (str) -- Commit hash

    Returns
        OId of the new commit

    Return type
        Git.OId

    Raises
        Can raise errors from Ggit

pylorax.api.recipes.revert_recipe(repo, branch, recipe_name, commit)[source]

    Revert the contents of a recipe to that of a previous commit

    Parameters
        • repo (Git.Repository) -- Open repository
        • branch (str) -- Branch name
        • recipe_name (str) -- Recipe name to revert
        • commit (str) -- Commit hash

    Returns
        OId of the new commit

    Return type
        Git.OId

    Raises
        Can raise errors from Ggit

pylorax.api.recipes.tag_file_commit(repo, branch, filename)[source]

    Tag a file's most recent commit

    Parameters
        • repo (Git.Repository) -- Open repository
        • branch (str) -- Branch name
        • filename (str) -- Filename to tag

    Returns
        Tag id, or None if it failed.

    Return type
        Git.OId

    Raises
        Can raise errors from Ggit

    This uses git tags of the form refs/tags/<branch>/<filename>/r<revision>. Only the most recent recipe commit can be tagged, to prevent out of order tagging. Revisions start at 1 and increment for each new commit that is tagged. If the commit has already been tagged it will return None.
pylorax.api.recipes.tag_recipe_commit(repo, branch, recipe_name)[source]

    Tag a recipe's most recent commit

    Parameters
        • repo (Git.Repository) -- Open repository
        • branch (str) -- Branch name
        • recipe_name (str) -- Recipe name to tag

    Returns
        Tag id, or None if it failed.

    Return type
        Git.OId

    Raises
        Can raise errors from Ggit

    Uses tag_file_commit()

pylorax.api.recipes.write_commit(repo, branch, filename, message, content)[source]

    Make a new commit to a repository's branch

    Parameters
        • repo (Git.Repository) -- Open repository
        • branch (str) -- Branch name
        • filename (str) -- Full path of the file to add
        • message (str) -- The commit message
        • content (str) -- The data to write

    Returns
        OId of the new commit

    Return type
        Git.OId

    Raises
        Can raise errors from Ggit

pylorax.api.regexes module

pylorax.api.server module

class pylorax.api.server.GitLock(repo, lock, dir)

    Bases: tuple

    dir

        Alias for field number 2

    lock

        Alias for field number 1

    repo

        Alias for field number 0

pylorax.api.timestamp module

pylorax.api.timestamp.timestamp_dict(destdir)[source]

pylorax.api.timestamp.write_timestamp(destdir, ty)[source]

pylorax.api.toml module

exception pylorax.api.toml.TomlError(msg, doc, pos)[source]

    Bases: toml.decoder.TomlDecodeError

pylorax.api.toml.dump(o, file)[source]

pylorax.api.toml.dumps(o)[source]

pylorax.api.toml.load(file)[source]

pylorax.api.toml.loads(s)[source]

pylorax.api.utils module

API utility functions

pylorax.api.utils.blueprint_exists(api, branch, blueprint_name)[source]

    Return True if the blueprint exists

    Parameters
        • api (Flask) -- Flask object
        • branch (str) -- Branch name
        • blueprint_name (str) -- Blueprint name to check

pylorax.api.utils.take_limits(iterable, offset, limit)[source]

    Apply offset and limit to an iterable object

    Parameters
        • iterable (iter) -- The object to limit
        • offset (int) -- The number of items to skip
        • limit (int) -- The total number of items to return

    Returns
        A subset of the iterable
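This is the pagination helper behind the offset/limit query arguments on the list routes. A minimal sketch of the documented behavior (the actual implementation may iterate lazily rather than materializing a list):

```python
def take_limits(iterable, offset, limit):
    """Sketch: skip `offset` items, then return at most `limit` items as a list."""
    return list(iterable)[offset:offset + limit]
```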

pylorax.api.v0 module

Setup v0 of the API server

v0_api() must be called to setup the API routes for Flask

Status Responses

Some requests only return a status/error response.

The response will be a status response with status set to true, or an error response with it set to false and an error message included.

Example response:

    {
      "status": true
    }

Error response:

    {
      "errors": ["ggit-error: Failed to remove entry. File isn't in the tree - jboss.toml (-1)"],
      "status": false
    }

API Routes

All of the blueprints routes support the optional branch argument. If it is not used then the API will use the master branch for blueprints. If you want to create a new branch use the new or workspace routes with ?branch=<branch-name> to store the new blueprint on the new branch.

pylorax.api.v0.v0_blueprints_changes(blueprint_names)[source]

    Return the changes to a blueprint or list of blueprints

    /api/v0/blueprints/changes/<blueprint_names>[?offset=0&limit=20]

    Return the commits to a blueprint. By default it returns the first 20 commits, this can be changed by passing offset and/or limit. The response will include the commit hash, summary, timestamp, and optionally the revision number. The commit hash can be passed to /api/v0/blueprints/diff/ to retrieve the exact changes.

    Example:

        {
          "errors": [],
          "limit": 20,
          "offset": 0,
          "blueprints": [
            {
              "changes": [
                {
                  "commit": "e083921a7ed1cf2eec91ad12b9ad1e70ef3470be",
                  "message": "blueprint glusterfs, version 0.0.6 saved.",
                  "revision": null,
                  "timestamp": "2017-11-23T00:18:13Z"
                },
                {
                  "commit": "cee5f4c20fc33ea4d54bfecf56f4ad41ad15f4f3",
                  "message": "blueprint glusterfs, version 0.0.5 saved.",
                  "revision": null,
                  "timestamp": "2017-11-11T01:00:28Z"
                },
                {
                  "commit": "29b492f26ed35d80800b536623bafc51e2f0eff2",
                  "message": "blueprint glusterfs, version 0.0.4 saved.",
                  "revision": null,
                  "timestamp": "2017-11-11T00:28:30Z"
                },
                {
                  "commit": "03374adbf080fe34f5c6c29f2e49cc2b86958bf2",
                  "message": "blueprint glusterfs, version 0.0.3 saved.",
                  "revision": null,
                  "timestamp": "2017-11-10T23:15:52Z"
                },
                {
                  "commit": "0e08ecbb708675bfabc82952599a1712a843779d",
                  "message": "blueprint glusterfs, version 0.0.2 saved.",
                  "revision": null,
                  "timestamp": "2017-11-10T23:14:56Z"
                },
                {
                  "commit": "3e11eb87a63d289662cba4b1804a0947a6843379",
                  "message": "blueprint glusterfs, version 0.0.1 saved.",
                  "revision": null,
                  "timestamp": "2017-11-08T00:02:47Z"
                }
              ],
              "name": "glusterfs",
              "total": 6
            }
          ]
        }
pylorax.api.v0.v0_blueprints_delete(blueprint_name)[source]

    Delete a blueprint from git

    DELETE /api/v0/blueprints/delete/<blueprint_name>

    Delete a blueprint. The blueprint is deleted from the branch, and will no longer be listed by the list route. A blueprint can be undeleted using the undo route to revert to a previous commit. This will also delete the workspace copy of the blueprint.

    The response will be a status response with status set to true, or an error response with it set to false and an error message included.

pylorax.api.v0.v0_blueprints_delete_workspace(blueprint_name)[source]

    Delete a blueprint from the workspace

    DELETE /api/v0/blueprints/workspace/<blueprint_name>

    Remove the temporary workspace copy of a blueprint. The info route will now return the most recent commit of the blueprint. Any changes that were in the workspace will be lost.

    The response will be a status response with status set to true, or an error response with it set to false and an error message included.
pylorax.api.v0.v0_blueprints_depsolve(blueprint_names)[source]

    Return the dependencies for a blueprint

    /api/v0/blueprints/depsolve/<blueprint_names>

    Depsolve the blueprint using yum, return the blueprint used, and the NEVRAs of the packages chosen to satisfy the blueprint's requirements. The response will include a list of results, with the full dependency list in dependencies, the NEVRAs for the blueprint's direct modules and packages in modules, and any error will be in errors.

    Example:

        {
          "errors": [],
          "blueprints": [
            {
              "dependencies": [
                {
                  "arch": "noarch",
                  "epoch": "0",
                  "name": "2ping",
                  "release": "2.el7",
                  "version": "3.2.1"
                },
                {
                  "arch": "x86_64",
                  "epoch": "0",
                  "name": "acl",
                  "release": "12.el7",
                  "version": "2.2.51"
                },
                {
                  "arch": "x86_64",
                  "epoch": "0",
                  "name": "audit-libs",
                  "release": "3.el7",
                  "version": "2.7.6"
                },
                {
                  "arch": "x86_64",
                  "epoch": "0",
                  "name": "avahi-libs",
                  "release": "17.el7",
                  "version": "0.6.31"
                },
                ...
              ],
              "modules": [
                {
                  "arch": "noarch",
                  "epoch": "0",
                  "name": "2ping",
                  "release": "2.el7",
                  "version": "3.2.1"
                },
                {
                  "arch": "x86_64",
                  "epoch": "0",
                  "name": "glusterfs",
                  "release": "18.4.el7",
                  "version": "3.8.4"
                },
                ...
              ],
              "blueprint": {
                "description": "An example GlusterFS server with samba",
                "modules": [
                  {
                    "name": "glusterfs",
                    "version": "3.7.*"
                  },
                  ...
              }
            }
          ]
        }
      - -
      -
      -pylorax.api.v0.v0_blueprints_diff(blueprint_name, from_commit, to_commit)[source]
      -

      Return the differences between two commits of a blueprint

/api/v0/blueprints/diff/<blueprint_name>/<from_commit>/<to_commit>

Return the differences between two commits, or the workspace. The commit hash from the changes response can be used here, or several special strings:

• NEWEST will select the newest git commit. This works for from_commit or to_commit

• WORKSPACE will select the workspace copy. This can only be used in to_commit

eg. /api/v0/blueprints/diff/glusterfs/NEWEST/WORKSPACE will return the differences between the most recent git commit and the contents of the workspace.

Each entry in the response's diff object contains the old blueprint value and the new one. If old is null and new is set, then it was added. If new is null and old is set, then it was removed. If both are set, then it was changed.

The old/new entries will have the name of the blueprint field that was changed. This can be one of: Name, Description, Version, Module, or Package. The contents for these will be the old/new values for them.

In the example below the version was changed and the ping package was added.

Example:

{
  "diff": [
    {
      "new": {
        "Version": "0.0.6"
      },
      "old": {
        "Version": "0.0.5"
      }
    },
    {
      "new": {
        "Package": {
          "name": "ping",
          "version": "3.2.1"
        }
      },
      "old": null
    }
  ]
}
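The added/removed/changed rules above can be sketched as a small helper. This is an illustrative client-side function, not part of the pylorax API:

```python
def classify_diff_entry(entry):
    """Classify a blueprint diff entry using the old/new rules:
    old null + new set -> added; new null + old set -> removed;
    both set -> changed."""
    old, new = entry.get("old"), entry.get("new")
    if old is None and new is not None:
        return "added"
    if new is None and old is not None:
        return "removed"
    return "changed"
```

Applied to the example response above, the Version entry classifies as "changed" and the ping Package entry as "added".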
pylorax.api.v0.v0_blueprints_freeze(blueprint_names)

      Return the blueprint with the exact modules and packages selected by depsolve

/api/v0/blueprints/freeze/<blueprint_names>

Return a JSON representation of the blueprint with the package and module versions set to the exact versions chosen by depsolving the blueprint.

Example:

{
  "errors": [],
  "blueprints": [
    {
      "blueprint": {
        "description": "An example GlusterFS server with samba",
        "modules": [
          {
            "name": "glusterfs",
            "version": "3.8.4-18.4.el7.x86_64"
          },
          {
            "name": "glusterfs-cli",
            "version": "3.8.4-18.4.el7.x86_64"
          }
        ],
        "name": "glusterfs",
        "packages": [
          {
            "name": "ping",
            "version": "2:3.2.1-2.el7.noarch"
          },
          {
            "name": "samba",
            "version": "4.6.2-8.el7.x86_64"
          }
        ],
        "version": "0.0.6"
      }
    }
  ]
}
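The frozen version strings above follow the RPM epoch:version-release.arch layout, with the epoch omitted when it is zero. Splitting one apart can be sketched like this (an illustrative helper, not part of pylorax):

```python
def split_evra(evra):
    """Split an 'epoch:version-release.arch' string like the frozen
    package versions above. The epoch may be omitted (defaults to 0)."""
    epoch = "0"
    if ":" in evra:
        epoch, evra = evra.split(":", 1)
    rest, arch = evra.rsplit(".", 1)       # arch is the last dotted field
    version, release = rest.split("-", 1)  # RPM forbids '-' in version
    return {"epoch": epoch, "version": version, "release": release, "arch": arch}
```

For example, "2:3.2.1-2.el7.noarch" splits into epoch "2", version "3.2.1", release "2.el7", and arch "noarch".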
pylorax.api.v0.v0_blueprints_info(blueprint_names)

      Return the contents of the blueprint, or a list of blueprints

/api/v0/blueprints/info/<blueprint_names>[?format=<json|toml>]

Return the JSON representation of the blueprint. This includes 3 top level objects: changes, which lists whether or not the workspace is different from the most recent commit; blueprints, which lists the JSON representation of the blueprint; and errors, which will list any errors, like non-existent blueprints.

By default the response is JSON, but if ?format=toml is included in the URL's arguments it will return the response as the blueprint's raw TOML content. Unless there is an error, in which case it will only return a 400 and a standard error Status Response.

If there is an error when JSON is requested, the successful blueprints and the errors will both be returned.

Example of JSON response:

{
  "changes": [
    {
      "changed": false,
      "name": "glusterfs"
    }
  ],
  "errors": [],
  "blueprints": [
    {
      "description": "An example GlusterFS server with samba",
      "modules": [
        {
          "name": "glusterfs",
          "version": "3.7.*"
        },
        {
          "name": "glusterfs-cli",
          "version": "3.7.*"
        }
      ],
      "name": "glusterfs",
      "packages": [
        {
          "name": "2ping",
          "version": "3.2.1"
        },
        {
          "name": "samba",
          "version": "4.2.*"
        }
      ],
      "version": "0.0.6"
    }
  ]
}

      Error example:

{
  "changes": [],
  "errors": ["ggit-error: the path 'missing.toml' does not exist in the given tree (-3)"],
  "blueprints": []
}
pylorax.api.v0.v0_blueprints_list()

      List the available blueprints on a branch.

/api/v0/blueprints/list

List the available blueprints:

{
  "limit": 20,
  "offset": 0,
  "blueprints": [
    "atlas",
    "development",
    "glusterfs",
    "http-server",
    "jboss",
    "kubernetes"
  ],
  "total": 6
}
pylorax.api.v0.v0_blueprints_new()

      Commit a new blueprint

POST /api/v0/blueprints/new

Create a new blueprint, or update an existing blueprint. This supports both JSON and TOML for the blueprint format. The blueprint should be in the body of the request with the Content-Type header set to either application/json or text/x-toml.

The response will be a status response with status set to true, or an error response with it set to false and an error message included.
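Choosing the right Content-Type from a blueprint file name can be sketched as follows. This is an illustrative client-side helper, not part of the pylorax API:

```python
def blueprint_content_type(filename):
    """Pick the Content-Type for POST /api/v0/blueprints/new based on
    the blueprint file's extension, per the description above."""
    if filename.endswith(".toml"):
        return "text/x-toml"
    if filename.endswith(".json"):
        return "application/json"
    raise ValueError("unknown blueprint format: %s" % filename)
```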
pylorax.api.v0.v0_blueprints_tag(blueprint_name)

      Tag a blueprint's latest blueprint commit as a 'revision'

POST /api/v0/blueprints/tag/<blueprint_name>

Tag a blueprint as a new release. This uses git tags with a special format: refs/tags/<branch>/<filename>/r<revision>. Only the most recent blueprint commit can be tagged. Revisions start at 1 and increment for each new tag (per-blueprint). If the commit has already been tagged it will return false.

The response will be a status response with status set to true, or an error response with it set to false and an error message included.
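The tag format described above can be sketched as a one-line builder. This is an illustrative helper (the glusterfs.toml filename is just an example), not part of pylorax:

```python
def revision_tag(branch, filename, revision):
    """Build a git tag name in the refs/tags/<branch>/<filename>/r<revision>
    format used for blueprint revisions."""
    return "refs/tags/%s/%s/r%d" % (branch, filename, revision)
```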
pylorax.api.v0.v0_blueprints_undo(blueprint_name, commit)

      Undo changes to a blueprint by reverting to a previous commit.

POST /api/v0/blueprints/undo/<blueprint_name>/<commit>

This will revert the blueprint to a previous commit. The commit hash from the changes route can be used in this request.

The response will be a status response with status set to true, or an error response with it set to false and an error message included.
pylorax.api.v0.v0_blueprints_workspace()

      Write a blueprint to the workspace

POST /api/v0/blueprints/workspace

Write a blueprint to the temporary workspace. This works exactly the same as new except that it does not create a commit. JSON and TOML bodies are supported.

The workspace is meant to be used as a temporary blueprint storage for clients. It will be read by the info and diff routes if it is different from the most recent commit.

The response will be a status response with status set to true, or an error response with it set to false and an error message included.
pylorax.api.v0.v0_compose_cancel(uuid)

      Cancel a running compose and delete its results directory

DELETE /api/v0/compose/cancel/<uuid>

Cancel the build, if it is not finished, and delete the results. It will return a status of True if it is successful.

Example:

{
  "status": true,
  "uuid": "03397f8d-acff-4cdb-bd31-f629b7a948f5"
}
pylorax.api.v0.v0_compose_delete(uuids)

      Delete the compose results for the listed uuids

DELETE /api/v0/compose/delete/<uuids>

Delete the list of comma-separated uuids from the compose results.

Example:

{
  "errors": [],
  "uuids": [
    {
      "status": true,
      "uuid": "ae1bf7e3-7f16-4c9f-b36e-3726a1093fd0"
    }
  ]
}
pylorax.api.v0.v0_compose_failed()

      Return the list of failed composes

/api/v0/compose/failed

Return the details on all of the failed composes on the system.

Example:

{
  "failed": [
    {
      "id": "8c8435ef-d6bd-4c68-9bf1-a2ef832e6b1a",
      "blueprint": "http-server",
      "queue_status": "FAILED",
      "job_created": 1517523249.9301329,
      "job_started": 1517523249.9314211,
      "job_finished": 1517523255.5623411,
      "version": "0.0.2"
    }
  ]
}
pylorax.api.v0.v0_compose_finished()

      Return the list of finished composes

/api/v0/compose/finished

Return the details on all of the finished composes on the system.

Example:

{
  "finished": [
    {
      "id": "70b84195-9817-4b8a-af92-45e380f39894",
      "blueprint": "glusterfs",
      "queue_status": "FINISHED",
      "job_created": 1517351003.8210032,
      "job_started": 1517351003.8230415,
      "job_finished": 1517359234.1003145,
      "version": "0.0.6"
    },
    {
      "id": "e695affd-397f-4af9-9022-add2636e7459",
      "blueprint": "glusterfs",
      "queue_status": "FINISHED",
      "job_created": 1517362289.7193348,
      "job_started": 1517362289.9751132,
      "job_finished": 1517363500.1234567,
      "version": "0.0.6"
    }
  ]
}
pylorax.api.v0.v0_compose_image(uuid)

      Return the output image for the build

/api/v0/compose/image/<uuid>

Returns the output image from the build. The filename is set to the filename from the build with the UUID as a prefix. eg. UUID-root.tar.xz or UUID-boot.iso.
pylorax.api.v0.v0_compose_info(uuid)

      Return detailed info about a compose

/api/v0/compose/info/<uuid>

Get detailed information about the compose. The returned JSON string will contain the following information:

• id - The uuid of the composition

• config - containing the configuration settings used to run Anaconda

• blueprint - The depsolved blueprint used to generate the kickstart

• commit - The (local) git commit hash for the blueprint used

• deps - The NEVRA of all of the dependencies used in the composition

• compose_type - The type of output generated (tar, iso, etc.)

• queue_status - The final status of the composition (FINISHED or FAILED)

Example:

{
  "commit": "7078e521a54b12eae31c3fd028680da7a0815a4d",
  "compose_type": "tar",
  "config": {
    "anaconda_args": "",
    "armplatform": "",
    "compress_args": [],
    "compression": "xz",
    "image_name": "root.tar.xz",
    ...
  },
  "deps": {
    "packages": [
      {
        "arch": "x86_64",
        "epoch": "0",
        "name": "acl",
        "release": "14.el7",
        "version": "2.2.51"
      }
    ]
  },
  "id": "c30b7d80-523b-4a23-ad52-61b799739ce8",
  "queue_status": "FINISHED",
  "blueprint": {
    "description": "An example kubernetes master",
    ...
  }
}
pylorax.api.v0.v0_compose_log_tail(uuid)

      Return the tail of the most currently relevant log

/api/v0/compose/log/<uuid>[?size=KiB]

Returns the end of either the anaconda log, the packaging log, or the composer logs, depending on the progress of the compose. The size parameter is optional and defaults to 1 MiB if it is not included. The returned data is raw text from the end of the log file, starting on a line boundary.

Example:

12:59:24,222 INFO anaconda: Running Thread: AnaConfigurationThread (140629395244800)
12:59:24,223 INFO anaconda: Configuring installed system
12:59:24,912 INFO anaconda: Configuring installed system
12:59:24,912 INFO anaconda: Creating users
12:59:24,913 INFO anaconda: Clearing libuser.conf at /tmp/libuser.Dyy8Gj
12:59:25,154 INFO anaconda: Creating users
12:59:25,155 INFO anaconda: Configuring addons
12:59:25,155 INFO anaconda: Configuring addons
12:59:25,155 INFO anaconda: Generating initramfs
12:59:49,467 INFO anaconda: Generating initramfs
12:59:49,467 INFO anaconda: Running post-installation scripts
12:59:49,467 INFO anaconda: Running kickstart %%post script(s)
12:59:50,782 INFO anaconda: All kickstart %%post script(s) have been run
12:59:50,782 INFO anaconda: Running post-installation scripts
12:59:50,784 INFO anaconda: Thread Done: AnaConfigurationThread (140629395244800)
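The "starting on a line boundary" behavior described above can be sketched like this. This is a simplified stand-in for illustration, not the server's actual implementation:

```python
def tail_text(text, size):
    """Return up to `size` characters from the end of `text`, starting
    on a line boundary (any partial first line is dropped)."""
    if len(text) <= size:
        return text
    chunk = text[-size:]
    # Skip the partial line at the start of the chunk, if any.
    newline = chunk.find("\n")
    if newline == -1:
        return ""
    return chunk[newline + 1:]
```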
pylorax.api.v0.v0_compose_logs(uuid)

Return a tar of the logs for the build

/api/v0/compose/logs/<uuid>

Returns a .tar of the anaconda build logs. The tar is not compressed, but is not large.

The mime type is set to 'application/x-tar' and the filename is set to UUID-logs.tar
pylorax.api.v0.v0_compose_metadata(uuid)

      Return a tar of the metadata for the build

/api/v0/compose/metadata/<uuid>

Returns a .tar of the metadata used for the build. This includes all the information needed to reproduce the build, including the final kickstart populated with repository and package NEVRA.

The mime type is set to 'application/x-tar' and the filename is set to UUID-metadata.tar

The .tar is uncompressed, but is not large.
pylorax.api.v0.v0_compose_queue()

      Return the status of the new and running queues

/api/v0/compose/queue

Return the status of the build queue. It includes information about the builds waiting, and the build that is running.

Example:

{
  "new": [
    {
      "id": "45502a6d-06e8-48a5-a215-2b4174b3614b",
      "blueprint": "glusterfs",
      "queue_status": "WAITING",
      "job_created": 1517362647.4570868,
      "version": "0.0.6"
    },
    {
      "id": "6d292bd0-bec7-4825-8d7d-41ef9c3e4b73",
      "blueprint": "kubernetes",
      "queue_status": "WAITING",
      "job_created": 1517362659.0034983,
      "version": "0.0.1"
    }
  ],
  "run": [
    {
      "id": "745712b2-96db-44c0-8014-fe925c35e795",
      "blueprint": "glusterfs",
      "queue_status": "RUNNING",
      "job_created": 1517362633.7965999,
      "job_started": 1517362633.8001345,
      "version": "0.0.6"
    }
  ]
}
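The job_created and job_started fields above are Unix timestamps, so a client can compute how long a build waited in the queue. An illustrative helper, not part of pylorax:

```python
def wait_time(entry):
    """Seconds a queue entry spent waiting before its build started,
    from the job_created/job_started timestamps in the queue response."""
    return entry["job_started"] - entry["job_created"]
```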
pylorax.api.v0.v0_compose_results(uuid)

      Return a tar of the metadata and the results for the build

/api/v0/compose/results/<uuid>

Returns a .tar of the metadata, logs, and output image of the build. This includes all the information needed to reproduce the build, including the final kickstart populated with repository and package NEVRA. The output image is already in compressed form so the returned tar is not compressed.

The mime type is set to 'application/x-tar' and the filename is set to UUID.tar
pylorax.api.v0.v0_compose_start()

      Start a compose

The body of the post should have these fields:

• blueprint_name - The blueprint name from /blueprints/list/

• compose_type - The type of output to create, from /compose/types

• branch - Optional, defaults to master, selects the git branch to use for the blueprint.

POST /api/v0/compose

Start a compose. The content type should be 'application/json' and the body of the POST should look like this:

Example:

{
  "blueprint_name": "http-server",
  "compose_type": "tar",
  "branch": "master"
}

Pass it the name of the blueprint, the type of output (from '/api/v0/compose/types'), and the blueprint branch to use. 'branch' is optional and will default to master. It will create a new build and add it to the queue. It returns the build uuid and a status if it succeeds.

Example:

{
  "build_id": "e6fa6db4-9c81-4b70-870f-a697ca405cdf",
  "status": true
}
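Building the POST body described above can be sketched in a few lines. This is an illustrative client-side helper, not part of the pylorax API:

```python
import json

def compose_request(blueprint_name, compose_type, branch="master"):
    """Build the JSON body for POST /api/v0/compose, with branch
    defaulting to master as described above."""
    return json.dumps({
        "blueprint_name": blueprint_name,
        "compose_type": compose_type,
        "branch": branch,
    })
```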
pylorax.api.v0.v0_compose_status(uuids)

      Return the status of the listed uuids

/api/v0/compose/status/<uuids>[?blueprint=<blueprint_name>&status=<compose_status>&type=<compose_type>]

Return the details for each of the comma-separated list of uuids. A uuid of '*' will return details for all composes.

Example:

{
  "uuids": [
    {
      "id": "8c8435ef-d6bd-4c68-9bf1-a2ef832e6b1a",
      "blueprint": "http-server",
      "queue_status": "FINISHED",
      "job_created": 1517523644.2384307,
      "job_started": 1517523644.2551234,
      "job_finished": 1517523689.9864314,
      "version": "0.0.2"
    },
    {
      "id": "45502a6d-06e8-48a5-a215-2b4174b3614b",
      "blueprint": "glusterfs",
      "queue_status": "FINISHED",
      "job_created": 1517363442.188399,
      "job_started": 1517363442.325324,
      "job_finished": 1517363451.653621,
      "version": "0.0.6"
    }
  ]
}
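Assembling the comma-separated uuid list and the optional filter arguments into a request path can be sketched like this (an illustrative helper, not part of pylorax):

```python
from urllib.parse import urlencode

def status_url(uuids, blueprint=None, status=None, compose_type=None):
    """Build a /api/v0/compose/status request path with the optional
    blueprint/status/type filters described above."""
    path = "/api/v0/compose/status/" + ",".join(uuids)
    params = [("blueprint", blueprint), ("status", status), ("type", compose_type)]
    query = urlencode([(k, v) for k, v in params if v is not None])
    return path + "?" + query if query else path
```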
pylorax.api.v0.v0_compose_types()

      Return the list of enabled output types

(only enabled types are returned)

/api/v0/compose/types

Returns the list of supported output types that are valid for use with 'POST /api/v0/compose'

Example:

{
  "types": [
    {
      "enabled": true,
      "name": "tar"
    }
  ]
}
pylorax.api.v0.v0_modules_info(module_names)

      Return detailed information about the listed modules

/api/v0/modules/info/<module_names>

Return the module's dependencies, and the information about the module.

Example:

{
  "modules": [
    {
      "dependencies": [
        {
          "arch": "noarch",
          "epoch": "0",
          "name": "basesystem",
          "release": "7.el7",
          "version": "10.0"
        },
        {
          "arch": "x86_64",
          "epoch": "0",
          "name": "bash",
          "release": "28.el7",
          "version": "4.2.46"
        },
        ...
      ],
      "description": "The GNU tar program saves ...",
      "homepage": "http://www.gnu.org/software/tar/",
      "name": "tar",
      "summary": "A GNU file archiving program",
      "upstream_vcs": "UPSTREAM_VCS"
    }
  ]
}
pylorax.api.v0.v0_modules_list(module_names=None)

      List available modules, filtering by module_names

/api/v0/modules/list[?offset=0&limit=20]

Return a list of all of the available modules. This includes the name and the group_type, which is always "rpm" for lorax-composer. By default this returns the first 20 items. This can be changed by setting the offset and limit arguments.

Example:

{
  "limit": 20,
  "modules": [
    {
      "group_type": "rpm",
      "name": "0ad"
    },
    {
      "group_type": "rpm",
      "name": "0ad-data"
    },
    {
      "group_type": "rpm",
      "name": "0install"
    },
    {
      "group_type": "rpm",
      "name": "2048-cli"
    },
    ...
  ],
  "total": 21770
}
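Walking the full module list with the offset/limit arguments described above can be sketched as a paging generator. This is an illustrative client-side helper, not part of pylorax; `fetch` stands in for whatever function performs the HTTP request and decodes the JSON:

```python
def iter_modules(fetch, limit=20):
    """Page through /api/v0/modules/list style responses.
    `fetch(offset, limit)` must return the decoded JSON response."""
    offset = 0
    while True:
        page = fetch(offset, limit)
        for module in page["modules"]:
            yield module
        offset += len(page["modules"])
        # Stop on an empty page or once every item has been seen.
        if not page["modules"] or offset >= page["total"]:
            break
```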

      /api/v0/modules/list/<module_names>[?offset=0&limit=20]

Return the list of comma-separated modules. Output is the same as /modules/list

Example:

{
  "limit": 20,
  "modules": [
    {
      "group_type": "rpm",
      "name": "tar"
    }
  ],
  "offset": 0,
  "total": 1
}
pylorax.api.v0.v0_projects_depsolve(project_names)

      Return detailed information about the listed projects

/api/v0/projects/depsolve/<project_names>

Depsolve the comma-separated list of projects and return the list of NEVRAs needed to satisfy the request.

Example:

{
  "projects": [
    {
      "arch": "noarch",
      "epoch": "0",
      "name": "basesystem",
      "release": "7.el7",
      "version": "10.0"
    },
    {
      "arch": "x86_64",
      "epoch": "0",
      "name": "bash",
      "release": "28.el7",
      "version": "4.2.46"
    },
    {
      "arch": "x86_64",
      "epoch": "0",
      "name": "filesystem",
      "release": "21.el7",
      "version": "3.2"
    },
    ...
  ]
}
pylorax.api.v0.v0_projects_info(project_names)

      Return detailed information about the listed projects

/api/v0/projects/info/<project_names>

Return information about the comma-separated list of projects. It includes the description of the package along with the list of available builds.

Example:

{
  "projects": [
    {
      "builds": [
        {
          "arch": "x86_64",
          "build_config_ref": "BUILD_CONFIG_REF",
          "build_env_ref": "BUILD_ENV_REF",
          "build_time": "2017-03-01T08:39:23",
          "changelog": "- restore incremental backups correctly, files ...",
          "epoch": "2",
          "metadata": {},
          "release": "32.el7",
          "source": {
            "license": "GPLv3+",
            "metadata": {},
            "source_ref": "SOURCE_REF",
            "version": "1.26"
          }
        }
      ],
      "description": "The GNU tar program saves many ...",
      "homepage": "http://www.gnu.org/software/tar/",
      "name": "tar",
      "summary": "A GNU file archiving program",
      "upstream_vcs": "UPSTREAM_VCS"
    }
  ]
}
pylorax.api.v0.v0_projects_list()

      List all of the available projects/packages

/api/v0/projects/list[?offset=0&limit=20]

List all of the available projects. By default this returns the first 20 items, but this can be changed by setting the offset and limit arguments.

Example:

{
  "limit": 20,
  "offset": 0,
  "projects": [
    {
      "description": "0 A.D. (pronounced \"zero ey-dee\") is a ...",
      "homepage": "http://play0ad.com",
      "name": "0ad",
      "summary": "Cross-Platform RTS Game of Ancient Warfare",
      "upstream_vcs": "UPSTREAM_VCS"
    },
    ...
  ],
  "total": 21770
}
pylorax.api.v0.v0_projects_source_delete(source_name)

      Delete the named source and return a status response

DELETE /api/v0/projects/source/delete/<source-name>

Delete a user added source. This will fail if a system source is passed to it.

The response will be a status response with status set to true, or an error response with it set to false and an error message included.
pylorax.api.v0.v0_projects_source_info(source_names)

      Return detailed info about the list of sources

/api/v0/projects/source/info/<source-names>

Return information about the comma-separated list of source names, or all of the sources if '*' is passed. Note that general globbing is not supported, only '*'.

Immutable system sources will have the "system" field set to true. User added sources will have it set to false. System sources cannot be changed or deleted.

Example:

{
  "errors": [],
  "sources": {
    "fedora": {
      "check_gpg": true,
      "check_ssl": true,
      "gpgkey_urls": [
        "file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-28-x86_64"
      ],
      "name": "fedora",
      "proxy": "http://proxy.brianlane.com:8123",
      "system": true,
      "type": "yum-metalink",
      "url": "https://mirrors.fedoraproject.org/metalink?repo=fedora-28&arch=x86_64"
    }
  }
}
pylorax.api.v0.v0_projects_source_list()

      Return the list of source names

/api/v0/projects/source/list

Return the list of repositories used for depsolving and installing packages.

Example:

{
  "sources": [
    "fedora",
    "fedora-cisco-openh264",
    "fedora-updates-testing",
    "fedora-updates"
  ]
}
pylorax.api.v0.v0_projects_source_new()

Add a new package source, or change an existing one

POST /api/v0/projects/source/new

Add (or change) a source for use when depsolving blueprints and composing images.

The proxy and gpgkey_urls entries are optional. All of the others are required. The supported types for the urls are:

• yum-baseurl is a URL to a yum repository.

• yum-mirrorlist is a URL for a mirrorlist.

• yum-metalink is a URL for a metalink.

If check_ssl is true the https certificates must be valid. If they are self-signed you can either set this to false, or add your Certificate Authority to the host system.

If check_gpg is true the GPG key must either be installed on the host system, or gpgkey_urls should point to it.

You can edit an existing source (other than system sources), by doing a POST of the new version of the source. It will overwrite the previous one.

Example:

{
    "name": "custom-source-1",
    "url": "https://url/path/to/repository/",
    "type": "yum-baseurl",
    "check_ssl": true,
    "check_gpg": true,
    "gpgkey_urls": [
        "https://url/path/to/gpg-key"
    ]
}
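The required/optional field rules above can be checked client-side before POSTing. This is an illustrative sketch of that validation, not the server's actual logic:

```python
VALID_TYPES = {"yum-baseurl", "yum-mirrorlist", "yum-metalink"}
REQUIRED = {"name", "url", "type", "check_ssl", "check_gpg"}

def check_source(source):
    """Return a list of problems with a source definition, following
    the required fields and url types described above."""
    errors = ["missing field: %s" % f for f in sorted(REQUIRED - set(source))]
    if source.get("type") not in VALID_TYPES:
        errors.append("unknown type: %r" % source.get("type"))
    return errors
```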

      pylorax.api.v1 module

Setup v1 of the API server

pylorax.api.v1.v1_compose_failed()

      Return the list of failed composes

/api/v1/compose/failed

Return the details on all of the failed composes on the system.

Example:

{
  "failed": [
    {
      "id": "8c8435ef-d6bd-4c68-9bf1-a2ef832e6b1a",
      "blueprint": "http-server",
      "queue_status": "FAILED",
      "job_created": 1517523249.9301329,
      "job_started": 1517523249.9314211,
      "job_finished": 1517523255.5623411,
      "version": "0.0.2",
      "uploads": [
          {
              "creation_time": 1568150660.524401,
              "image_name": "http-server",
              "image_path": null,
              "provider_name": "azure",
              "settings": {
                  "client_id": "need",
                  "location": "need",
                  "resource_group": "group",
                  "secret": "need",
                  "storage_account_name": "need",
                  "storage_container": "need",
                  "subscription_id": "need",
                  "tenant": "need"
              },
              "status": "WAITING",
              "uuid": "21898dfd-9ac9-4e22-bb1d-7f12d0129e65"
          }
      ]
    }
  ]
}
pylorax.api.v1.v1_compose_finished()

      Return the list of finished composes

/api/v1/compose/finished

Return the details on all of the finished composes on the system.

Example:

{
  "finished": [
    {
      "id": "70b84195-9817-4b8a-af92-45e380f39894",
      "blueprint": "glusterfs",
      "queue_status": "FINISHED",
      "job_created": 1517351003.8210032,
      "job_started": 1517351003.8230415,
      "job_finished": 1517359234.1003145,
      "version": "0.0.6"
    },
    {
      "id": "e695affd-397f-4af9-9022-add2636e7459",
      "blueprint": "glusterfs",
      "queue_status": "FINISHED",
      "job_created": 1517362289.7193348,
      "job_started": 1517362289.9751132,
      "job_finished": 1517363500.1234567,
      "version": "0.0.6",
      "uploads": [
          {
              "creation_time": 1568150660.524401,
              "image_name": "glusterfs server",
              "image_path": "/var/lib/lorax/composer/results/e695affd-397f-4af9-9022-add2636e7459/disk.vhd",
              "provider_name": "azure",
              "settings": {
                  "client_id": "need",
                  "location": "need",
                  "resource_group": "group",
                  "secret": "need",
                  "storage_account_name": "need",
                  "storage_container": "need",
                  "subscription_id": "need",
                  "tenant": "need"
              },
              "status": "WAITING",
              "uuid": "21898dfd-9ac9-4e22-bb1d-7f12d0129e65"
          }
      ]
    }
  ]
}
      - -
      -
pylorax.api.v1.v1_compose_info(uuid)[source]

    Return detailed info about a compose

    /api/v1/compose/info/<uuid>

    Get detailed information about the compose. The returned JSON string will
    contain the following information:

    • id - The uuid of the composition
    • config - containing the configuration settings used to run Anaconda
    • blueprint - The depsolved blueprint used to generate the kickstart
    • commit - The (local) git commit hash for the blueprint used
    • deps - The NEVRA of all of the dependencies used in the composition
    • compose_type - The type of output generated (tar, iso, etc.)
    • queue_status - The final status of the composition (FINISHED or FAILED)

    Example:

    {
      "commit": "7078e521a54b12eae31c3fd028680da7a0815a4d",
      "compose_type": "tar",
      "config": {
        "anaconda_args": "",
        "armplatform": "",
        "compress_args": [],
        "compression": "xz",
        "image_name": "root.tar.xz",
        ...
      },
      "deps": {
        "packages": [
          {
            "arch": "x86_64",
            "epoch": "0",
            "name": "acl",
            "release": "14.el7",
            "version": "2.2.51"
          }
        ]
      },
      "id": "c30b7d80-523b-4a23-ad52-61b799739ce8",
      "queue_status": "FINISHED",
      "blueprint": {
        "description": "An example kubernetes master",
        ...
      },
      "uploads": [
          {
              "creation_time": 1568150660.524401,
              "image_name": "glusterfs server",
              "image_path": "/var/lib/lorax/composer/results/c30b7d80-523b-4a23-ad52-61b799739ce8/disk.vhd",
              "provider_name": "azure",
              "settings": {
                  "client_id": "need",
                  "location": "need",
                  "resource_group": "group",
                  "secret": "need",
                  "storage_account_name": "need",
                  "storage_container": "need",
                  "subscription_id": "need",
                  "tenant": "need"
              },
              "status": "FAILED",
              "uuid": "21898dfd-9ac9-4e22-bb1d-7f12d0129e65"
          }
      ]
    }
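The queue_status field is what a client watches to decide when a compose is done. A minimal client-side polling sketch (the fetch_info callable stands in for a real HTTP GET of /api/v1/compose/info/<uuid>; the helper name and stubbed responses are illustrative, not part of the API):

```python
def wait_for_compose(fetch_info, uuid, max_polls=10):
    """Poll compose info until the compose leaves the queue.

    fetch_info(uuid) is any callable returning the parsed JSON from
    /api/v1/compose/info/<uuid>; a real client would wrap an HTTP request.
    """
    for _ in range(max_polls):
        info = fetch_info(uuid)
        # FINISHED and FAILED are the two terminal states documented above
        if info["queue_status"] in ("FINISHED", "FAILED"):
            return info["queue_status"]
    return "TIMEOUT"

# Stand-in for a real server: first poll sees RUNNING, second sees FINISHED
responses = iter([{"queue_status": "RUNNING"}, {"queue_status": "FINISHED"}])
status = wait_for_compose(lambda uuid: next(responses),
                          "c30b7d80-523b-4a23-ad52-61b799739ce8")
```

A real client would also add a sleep between polls; it is omitted here to keep the sketch self-contained.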
pylorax.api.v1.v1_compose_queue()[source]

    Return the status of the new and running queues

    /api/v1/compose/queue

    Return the status of the build queue. It includes information about the builds waiting,
    and the build that is running.

    Example:

    {
      "new": [
        {
          "id": "45502a6d-06e8-48a5-a215-2b4174b3614b",
          "blueprint": "glusterfs",
          "queue_status": "WAITING",
          "job_created": 1517362647.4570868,
          "version": "0.0.6"
        },
        {
          "id": "6d292bd0-bec7-4825-8d7d-41ef9c3e4b73",
          "blueprint": "kubernetes",
          "queue_status": "WAITING",
          "job_created": 1517362659.0034983,
          "version": "0.0.1"
        }
      ],
      "run": [
        {
          "id": "745712b2-96db-44c0-8014-fe925c35e795",
          "blueprint": "glusterfs",
          "queue_status": "RUNNING",
          "job_created": 1517362633.7965999,
          "job_started": 1517362633.8001345,
          "version": "0.0.6",
          "uploads": [
              {
                  "creation_time": 1568150660.524401,
                  "image_name": "glusterfs server",
                  "image_path": null,
                  "provider_name": "azure",
                  "settings": {
                      "client_id": "need",
                      "location": "need",
                      "resource_group": "group",
                      "secret": "need",
                      "storage_account_name": "need",
                      "storage_container": "need",
                      "subscription_id": "need",
                      "tenant": "need"
                  },
                  "status": "WAITING",
                  "uuid": "21898dfd-9ac9-4e22-bb1d-7f12d0129e65"
              }
          ]
        }
      ]
    }
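The "new" and "run" lists in the response can be summarized client-side, for example to show how many builds are waiting and which blueprints are building. A sketch over the documented shape (the queue_summary helper is illustrative, not part of the API):

```python
def queue_summary(queue):
    """Summarize a /api/v1/compose/queue response: count of waiting
    builds and the blueprint names currently building."""
    return {
        "waiting": len(queue.get("new", [])),
        "running": [c["blueprint"] for c in queue.get("run", [])],
    }

# Trimmed version of the example response above
example = {
    "new": [
        {"id": "45502a6d-06e8-48a5-a215-2b4174b3614b",
         "blueprint": "glusterfs", "queue_status": "WAITING"},
        {"id": "6d292bd0-bec7-4825-8d7d-41ef9c3e4b73",
         "blueprint": "kubernetes", "queue_status": "WAITING"},
    ],
    "run": [
        {"id": "745712b2-96db-44c0-8014-fe925c35e795",
         "blueprint": "glusterfs", "queue_status": "RUNNING"},
    ],
}
summary = queue_summary(example)
```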
pylorax.api.v1.v1_compose_start()[source]

    Start a compose

    The body of the post should have these fields:

        blueprint_name - The blueprint name from /blueprints/list/
        compose_type - The type of output to create, from /compose/types
        branch - Optional, defaults to master, selects the git branch to use for the blueprint.

    POST /api/v1/compose

    Start a compose. The content type should be 'application/json' and the body of the POST
    should look like this. The "upload" object is optional.

    The upload object can specify either a pre-existing profile to use (as returned by
    /uploads/providers) or one-time use settings for the provider.

    Example with upload profile:

    {
      "blueprint_name": "http-server",
      "compose_type": "tar",
      "branch": "master",
      "upload": {
        "image_name": "My Image",
        "provider": "azure",
        "profile": "production-azure-settings"
      }
    }

    Example with upload settings:

    {
      "blueprint_name": "http-server",
      "compose_type": "tar",
      "branch": "master",
      "upload": {
        "image_name": "My Image",
        "provider": "azure",
        "settings": {
          "resource_group": "SOMEBODY",
          "storage_account_name": "ONCE",
          "storage_container": "TOLD",
          "location": "ME",
          "subscription_id": "THE",
          "client_id": "WORLD",
          "secret": "IS",
          "tenant": "GONNA"
        }
      }
    }

    Pass it the name of the blueprint, the type of output (from
    '/api/v1/compose/types'), and the blueprint branch to use. 'branch' is
    optional and will default to master. It will create a new build and add
    it to the queue. It returns the build uuid and a status if it succeeds.
    If an "upload" is given, it will schedule an upload to run when the build
    finishes.

    Example response:

    {
      "build_id": "e6fa6db4-9c81-4b70-870f-a697ca405cdf",
      "upload_id": "572eb0d0-5348-4600-9666-14526ba628bb",
      "status": true
    }
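The request body above is plain JSON and can be assembled client-side before POSTing. A minimal sketch (the compose_start_body helper is illustrative; only the field names come from the API):

```python
import json

def compose_start_body(blueprint_name, compose_type, branch="master", upload=None):
    """Build the JSON body for POST /api/v1/compose.

    upload, when given, should hold either a "profile" name or one-time
    "settings" for the provider, as documented above.
    """
    body = {
        "blueprint_name": blueprint_name,
        "compose_type": compose_type,
        "branch": branch,  # optional in the API; defaults to master
    }
    if upload is not None:
        body["upload"] = upload
    return json.dumps(body)

payload = compose_start_body(
    "http-server", "tar",
    upload={"image_name": "My Image",
            "provider": "azure",
            "profile": "production-azure-settings"})
```

The resulting string would be sent with a Content-Type of 'application/json'.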
pylorax.api.v1.v1_compose_status(uuids)[source]

    Return the status of the listed uuids

    /api/v1/compose/status/<uuids>[?blueprint=<blueprint_name>&status=<compose_status>&type=<compose_type>]

    Return the details for each of the comma-separated list of uuids. A uuid of '*' will return
    details for all composes.

    Example:

    {
      "uuids": [
        {
          "id": "8c8435ef-d6bd-4c68-9bf1-a2ef832e6b1a",
          "blueprint": "http-server",
          "queue_status": "FINISHED",
          "job_created": 1517523644.2384307,
          "job_started": 1517523644.2551234,
          "job_finished": 1517523689.9864314,
          "version": "0.0.2"
        },
        {
          "id": "45502a6d-06e8-48a5-a215-2b4174b3614b",
          "blueprint": "glusterfs",
          "queue_status": "FINISHED",
          "job_created": 1517363442.188399,
          "job_started": 1517363442.325324,
          "job_finished": 1517363451.653621,
          "version": "0.0.6",
          "uploads": [
              {
                  "creation_time": 1568150660.524401,
                  "image_name": "glusterfs server",
                  "image_path": null,
                  "provider_name": "azure",
                  "settings": {
                      "client_id": "need",
                      "location": "need",
                      "resource_group": "group",
                      "secret": "need",
                      "storage_account_name": "need",
                      "storage_container": "need",
                      "subscription_id": "need",
                      "tenant": "need"
                  },
                  "status": "WAITING",
                  "uuid": "21898dfd-9ac9-4e22-bb1d-7f12d0129e65"
              }
          ]
        }
      ]
    }
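The ?blueprint= and ?status= query parameters filter the returned list server-side; the same filtering can be expressed over the response shape, which also shows what those parameters match against. A sketch (filter_composes is an illustrative helper, not an API call):

```python
def filter_composes(status_response, blueprint=None, status=None):
    """Client-side equivalent of the ?blueprint= and ?status= filters
    on /api/v1/compose/status/<uuids>: return matching compose ids."""
    out = []
    for c in status_response["uuids"]:
        if blueprint is not None and c["blueprint"] != blueprint:
            continue
        if status is not None and c["queue_status"] != status:
            continue
        out.append(c["id"])
    return out

# Trimmed version of the example response above
example = {"uuids": [
    {"id": "8c8435ef-d6bd-4c68-9bf1-a2ef832e6b1a",
     "blueprint": "http-server", "queue_status": "FINISHED"},
    {"id": "45502a6d-06e8-48a5-a215-2b4174b3614b",
     "blueprint": "glusterfs", "queue_status": "FINISHED"},
]}
ids = filter_composes(example, blueprint="glusterfs")
```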
pylorax.api.v1.v1_compose_uploads_delete(upload_uuid)[source]

    Delete an upload and disassociate it from its compose

    DELETE /api/v1/upload/delete/<upload_uuid>

    Example response:

    {
      "status": true,
      "upload_id": "572eb0d0-5348-4600-9666-14526ba628bb"
    }
pylorax.api.v1.v1_compose_uploads_schedule(compose_uuid)[source]

    Schedule an upload of a compose to a given cloud provider

    POST /api/v1/uploads/schedule/<compose_uuid>

    The body can specify either a pre-existing profile to use (as returned by
    /uploads/providers) or one-time use settings for the provider.

    Example with upload profile:

    {
      "image_name": "My Image",
      "provider": "azure",
      "profile": "production-azure-settings"
    }

    Example with upload settings:

    {
      "image_name": "My Image",
      "provider": "azure",
      "settings": {
        "resource_group": "SOMEBODY",
        "storage_account_name": "ONCE",
        "storage_container": "TOLD",
        "location": "ME",
        "subscription_id": "THE",
        "client_id": "WORLD",
        "secret": "IS",
        "tenant": "GONNA"
      }
    }

    Example response:

    {
      "status": true,
      "upload_id": "572eb0d0-5348-4600-9666-14526ba628bb"
    }
pylorax.api.v1.v1_projects_source_info(source_ids)[source]

    Return detailed info about the list of sources

    /api/v1/projects/source/info/<source-ids>

    Return information about the comma-separated list of source ids. Or all of the
    sources if '*' is passed. Note that general globbing is not supported, only '*'.

    Immutable system sources will have the "system" field set to true. User added sources
    will have it set to false. System sources cannot be changed or deleted.

    Example:

    {
      "errors": [],
      "sources": {
        "fedora": {
          "check_gpg": true,
          "check_ssl": true,
          "gpgkey_urls": [
            "file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-28-x86_64"
          ],
          "id": "fedora",
          "name": "Fedora $releasever - $basearch",
          "proxy": "http://proxy.brianlane.com:8123",
          "system": true,
          "type": "yum-metalink",
          "url": "https://mirrors.fedoraproject.org/metalink?repo=fedora-28&arch=x86_64"
        }
      }
    }

    In v0 the name field was used for the id (a short name for the repo). In v1 name changed
    to id and name is now used for the longer descriptive name of the repository.
pylorax.api.v1.v1_projects_source_new()[source]

    Add a new package source. Or change an existing one

    POST /api/v1/projects/source/new

    Add (or change) a source for use when depsolving blueprints and composing images.

    The proxy and gpgkey_urls entries are optional. All of the others are required. The supported
    types for the urls are:

    • yum-baseurl is a URL to a yum repository.
    • yum-mirrorlist is a URL for a mirrorlist.
    • yum-metalink is a URL for a metalink.

    If check_ssl is true the https certificates must be valid. If they are self-signed you can either set
    this to false, or add your Certificate Authority to the host system.

    If check_gpg is true the GPG key must either be installed on the host system, or gpgkey_urls
    should point to it.

    You can edit an existing source (other than system sources), by doing a POST
    of the new version of the source. It will overwrite the previous one.

    Example:

    {
        "id": "custom-source-1",
        "name": "Custom Package Source #1",
        "url": "https://url/path/to/repository/",
        "type": "yum-baseurl",
        "check_ssl": true,
        "check_gpg": true,
        "gpgkey_urls": [
            "https://url/path/to/gpg-key"
        ]
    }

    In v0 the name field was used for the id (a short name for the repo). In v1 name changed
    to id and name is now used for the longer descriptive name of the repository.
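Since most of the fields are required and only three url types are accepted, a client can sanity-check a source definition before POSTing it. A sketch based on the rules above (validate_source is an illustrative helper, not part of the API):

```python
# Field and type rules as documented for /api/v1/projects/source/new
REQUIRED = {"id", "name", "url", "type", "check_ssl", "check_gpg"}
VALID_TYPES = {"yum-baseurl", "yum-mirrorlist", "yum-metalink"}

def validate_source(source):
    """Return a sorted list of problem fields in a v1 source definition,
    or an empty list if it looks POSTable. proxy and gpgkey_urls are
    optional and not checked."""
    missing = REQUIRED - set(source)
    if missing:
        return sorted(missing)
    if source["type"] not in VALID_TYPES:
        return ["type"]
    return []

src = {"id": "custom-source-1",
       "name": "Custom Package Source #1",
       "url": "https://url/path/to/repository/",
       "type": "yum-baseurl",
       "check_ssl": True,
       "check_gpg": True}
problems = validate_source(src)
```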
pylorax.api.v1.v1_providers_delete(provider_name, profile)[source]

    Delete a provider's profile settings

    DELETE /api/v1/upload/providers/delete/<provider_name>/<profile>

    Example response:

    {
      "status": true
    }
pylorax.api.v1.v1_providers_save()[source]

    Save provider settings as a profile for later use

    POST /api/v1/upload/providers/save

    Example request:

    {
      "provider": "azure",
      "profile": "my-profile",
      "settings": {
        "resource_group": "SOMEBODY",
        "storage_account_name": "ONCE",
        "storage_container": "TOLD",
        "location": "ME",
        "subscription_id": "THE",
        "client_id": "WORLD",
        "secret": "IS",
        "tenant": "GONNA"
      }
    }

    Saving to an existing profile will overwrite it.

    Example response:

    {
      "status": true
    }
pylorax.api.v1.v1_upload_cancel(upload_uuid)[source]

    Cancel an upload that is either queued or in progress

    DELETE /api/v1/upload/cancel/<upload_uuid>

    Example response:

    {
      "status": true,
      "upload_id": "037a3d56-b421-43e9-9935-c98350c89996"
    }
pylorax.api.v1.v1_upload_info(upload_uuid)[source]

    Returns information about a given upload

    GET /api/v1/upload/info/<upload_uuid>

    Example response:

    {
      "status": true,
      "upload": {
        "creation_time": 1565620940.069004,
        "image_name": "My Image",
        "image_path": "/var/lib/lorax/composer/results/b6218e8f-0fa2-48ec-9394-f5c2918544c4/disk.vhd",
        "provider_name": "azure",
        "settings": {
          "resource_group": "SOMEBODY",
          "storage_account_name": "ONCE",
          "storage_container": "TOLD",
          "location": "ME",
          "subscription_id": "THE",
          "client_id": "WORLD",
          "secret": "IS",
          "tenant": "GONNA"
        },
        "status": "FAILED",
        "uuid": "b637c411-9d9d-4279-b067-6c8d38e3b211"
      }
    }
pylorax.api.v1.v1_upload_log(upload_uuid)[source]

    Returns an upload's log

    GET /api/v1/upload/log/<upload_uuid>

    Example response:

    {
      "status": true,
      "upload_id": "b637c411-9d9d-4279-b067-6c8d38e3b211",
      "log": "< PLAY [localhost] >..."
    }
pylorax.api.v1.v1_upload_providers()[source]

    Return the information about all upload providers, including their
    display names, expected settings, and saved profiles. Refer to the
    resolve_provider function.

    GET /api/v1/upload/providers

    Example response:

    {
      "providers": {
        "azure": {
          "display": "Azure",
          "profiles": {
            "default": {
              "client_id": "example",
              ...
            }
          },
          "settings-info": {
            "client_id": {
              "display": "Client ID",
              "placeholder": "",
              "regex": "",
              "type": "string"
            },
            ...
          },
          "supported_types": ["vhd"]
        },
        ...
      }
    }
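The "settings-info" map tells a client which setting names a provider expects, for example to build an input form or check a settings payload. A sketch over the documented shape (provider_settings is an illustrative helper, and the trimmed example data is not a real response):

```python
def provider_settings(providers_response, provider):
    """Pull the expected setting names for one provider out of a
    /api/v1/upload/providers response."""
    info = providers_response["providers"][provider]
    return sorted(info["settings-info"])

# Trimmed, hypothetical version of the example response above
example = {"providers": {"azure": {
    "display": "Azure",
    "profiles": {"default": {"client_id": "example"}},
    "settings-info": {
        "client_id": {"display": "Client ID", "placeholder": "",
                      "regex": "", "type": "string"},
        "secret": {"display": "Secret", "placeholder": "",
                   "regex": "", "type": "string"},
    },
    "supported_types": ["vhd"],
}}}
names = provider_settings(example, "azure")
```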
pylorax.api.v1.v1_upload_reset(upload_uuid)[source]

    Reset an upload so it can be attempted again

    POST /api/v1/upload/reset/<upload_uuid>

    Optionally pass in a new image name and/or new settings.

    Example request:

    {
      "image_name": "My renamed image",
      "settings": {
        "resource_group": "ROLL",
        "storage_account_name": "ME",
        "storage_container": "I",
        "location": "AIN'T",
        "subscription_id": "THE",
        "client_id": "SHARPEST",
        "secret": "TOOL",
        "tenant": "IN"
      }
    }

    Example response:

    {
      "status": true,
      "upload_id": "c75d5d62-9d26-42fc-a8ef-18bb14679fc7"
    }
pylorax.api.workspace module

pylorax.api.workspace.workspace_delete(repo, branch, recipe_name)[source]

    Delete the recipe from the workspace

    Parameters
        • repo (Git.Repository) -- Open repository
        • branch (str) -- Branch name
        • recipe_name (str) -- The name of the recipe

    Returns
        None

    Raises
        IO related errors
pylorax.api.workspace.workspace_dir(repo, branch)[source]

    Create the workspace's path from a Repository and branch

    Parameters
        • repo (Git.Repository) -- Open repository
        • branch (str) -- Branch name

    Returns
        The path to the branch's workspace directory

    Return type
        str
pylorax.api.workspace.workspace_exists(repo, branch, recipe_name)[source]

    Return True if the workspace recipe exists

    Parameters
        • repo (Git.Repository) -- Open repository
        • branch (str) -- Branch name
        • recipe_name (str) -- The name of the recipe

    Returns
        True if the file exists

    Return type
        bool
pylorax.api.workspace.workspace_filename(repo, branch, recipe_name)[source]

    Return the path and filename of the workspace recipe

    Parameters
        • repo (Git.Repository) -- Open repository
        • branch (str) -- Branch name
        • recipe_name (str) -- The name of the recipe

    Returns
        workspace recipe path and filename

    Return type
        str
pylorax.api.workspace.workspace_read(repo, branch, recipe_name)[source]

    Read a Recipe from the branch's workspace

    Parameters
        • repo (Git.Repository) -- Open repository
        • branch (str) -- Branch name
        • recipe_name (str) -- The name of the recipe

    Returns
        The workspace copy of the recipe, or None if it doesn't exist

    Return type
        Recipe or None

    Raises
        RecipeFileError
pylorax.api.workspace.workspace_write(repo, branch, recipe)[source]

    Write a recipe to the workspace

    Parameters
        • repo (Git.Repository) -- Open repository
        • branch (str) -- Branch name
        • recipe (Recipe) -- The recipe to write to the workspace

    Returns
        None

    Raises
        IO related errors
Module contents
      - - - - - - - - - - - - \ No newline at end of file diff --git a/docs/html/pylorax.html b/docs/html/pylorax.html index 855cbc32..64f771e7 100644 --- a/docs/html/pylorax.html +++ b/docs/html/pylorax.html @@ -1,41 +1,41 @@ - - + - + - + + + pylorax package — Lorax 35.1 documentation + + + + + + - pylorax package — Lorax 35.0 documentation - + - - - - - - - + @@ -59,7 +59,7 @@
      - 35.0 + 35.1
      @@ -76,6 +76,7 @@ + + @@ -157,11 +157,13 @@ + +
        -
      • Docs »
      • +
      • »
      • src »
      • @@ -170,7 +172,7 @@
      • - + View page source @@ -193,42 +195,42 @@

        pylorax.base module

        -class pylorax.base.BaseLoraxClass[source]
        +class pylorax.base.BaseLoraxClass[source]

        Bases: object

        -pcritical(msg, fobj=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>)[source]
        +pcritical(msg, fobj=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>)[source]
        -pdebug(msg, fobj=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>)[source]
        +pdebug(msg, fobj=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>)[source]
        -perror(msg, fobj=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>)[source]
        +perror(msg, fobj=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>)[source]
        -pinfo(msg, fobj=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>)[source]
        +pinfo(msg, fobj=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>)[source]
        -pwarning(msg, fobj=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>)[source]
        +pwarning(msg, fobj=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>)[source]
        -class pylorax.base.DataHolder(**kwargs)[source]
        +class pylorax.base.DataHolder(**kwargs)[source]

        Bases: dict

        -copy() → a shallow copy of D[source]
        +copy()a shallow copy of D[source]
        @@ -238,11 +240,11 @@

        pylorax.buildstamp module

        -class pylorax.buildstamp.BuildStamp(product, version, bugurl, isfinal, buildarch, variant='')[source]
        +class pylorax.buildstamp.BuildStamp(product, version, bugurl, isfinal, buildarch, variant='')[source]

        Bases: object

        -write(outfile)[source]
        +write(outfile)[source]
        @@ -252,13 +254,13 @@

        pylorax.cmdline module

        -pylorax.cmdline.lmc_parser(dracut_default='')[source]
        +pylorax.cmdline.lmc_parser(dracut_default='')[source]

        Return a ArgumentParser object for live-media-creator.

        -pylorax.cmdline.lorax_parser(dracut_default='')[source]
        +pylorax.cmdline.lorax_parser(dracut_default='')[source]

        Return the ArgumentParser for lorax

        @@ -267,7 +269,7 @@

        pylorax.creator module

        -class pylorax.creator.FakeDNF(conf)[source]
        +class pylorax.creator.FakeDNF(conf)[source]

        Bases: object

        A minimal DNF object suitable for passing to RuntimeBuilder

lmc uses RuntimeBuilder to run the arch specific iso creation @@ -275,14 +277,14 @@ templates, so the installroot config value is the important part of this. Everything else should be a nop.

        -reset()[source]
        +reset()[source]
        -pylorax.creator.calculate_disk_size(opts, ks)[source]
        +pylorax.creator.calculate_disk_size(opts, ks)[source]

        Calculate the disk size from the kickstart

        Parameters
        @@ -303,7 +305,7 @@ this. Everything else should be a nop.

        -pylorax.creator.check_kickstart(ks, opts)[source]
        +pylorax.creator.check_kickstart(ks, opts)[source]

        Check the parsed kickstart object for errors

        Parameters
        @@ -323,7 +325,7 @@ this. Everything else should be a nop.

        -pylorax.creator.create_pxe_config(template, images_dir, live_image_name, add_args=None)[source]
        +pylorax.creator.create_pxe_config(template, images_dir, live_image_name, add_args=None)[source]

        Create template for pxe to live configuration

        Parameters
        @@ -338,7 +340,7 @@ this. Everything else should be a nop.

        -pylorax.creator.dracut_args(opts)[source]
        +pylorax.creator.dracut_args(opts)[source]

        Return a list of the args to pass to dracut

        Return the default argument list unless one of the dracut cmdline arguments has been used.

        @@ -346,7 +348,7 @@ has been used.

        -pylorax.creator.find_ostree_root(phys_root)[source]
        +pylorax.creator.find_ostree_root(phys_root)[source]

        Find root of ostree deployment

        Parameters
        @@ -366,7 +368,7 @@ has been used.

        -pylorax.creator.get_arch(mount_dir)[source]
        +pylorax.creator.get_arch(mount_dir)[source]

        Get the kernel arch

        Returns
        @@ -380,7 +382,7 @@ has been used.

        -pylorax.creator.is_image_mounted(disk_img)[source]
        +pylorax.creator.is_image_mounted(disk_img)[source]

        Check to see if the disk_img is mounted

        Returns
        @@ -394,7 +396,7 @@ has been used.

        -pylorax.creator.make_appliance(disk_img, name, template, outfile, networks=None, ram=1024, vcpus=1, arch=None, title='Linux', project='Linux', releasever='34')[source]
        +pylorax.creator.make_appliance(disk_img, name, template, outfile, networks=None, ram=1024, vcpus=1, arch=None, title='Linux', project='Linux', releasever='34')[source]

        Generate an appliance description file

        Parameters
        @@ -417,7 +419,7 @@ has been used.

        -pylorax.creator.make_image(opts, ks, cancel_func=None)[source]
        +pylorax.creator.make_image(opts, ks, cancel_func=None)[source]

        Install to a disk image

        Parameters
        @@ -439,7 +441,7 @@ has been used.

        -pylorax.creator.make_live_images(opts, work_dir, disk_img)[source]
        +pylorax.creator.make_live_images(opts, work_dir, disk_img)[source]

Create live images from a directory or rootfs image

        Parameters
        @@ -463,7 +465,7 @@ it will return None and log the error.

        -pylorax.creator.make_livecd(opts, mount_dir, work_dir)[source]
        +pylorax.creator.make_livecd(opts, mount_dir, work_dir)[source]

        Take the content from the disk image and make a livecd out of it

        Parameters
        @@ -487,7 +489,7 @@ root=live:CDLABEL=<volid> rd.live.image

      • -pylorax.creator.make_runtime(opts, mount_dir, work_dir, size=None)[source]
        +pylorax.creator.make_runtime(opts, mount_dir, work_dir, size=None)[source]

        Make the squashfs image from a directory

        Parameters
        @@ -509,7 +511,7 @@ root=live:CDLABEL=<volid> rd.live.image

        -pylorax.creator.mount_boot_part_over_root(img_mount)[source]
        +pylorax.creator.mount_boot_part_over_root(img_mount)[source]

        Mount boot partition to /boot of root fs mounted in img_mount

        Used for OSTree so it finds deployment configurations on live rootfs

        param img_mount: object with mounted disk image root partition @@ -518,7 +520,7 @@ type img_mount: imgutils.PartitionMount

        -pylorax.creator.rebuild_initrds_for_live(opts, sys_root_dir, results_dir)[source]
        +pylorax.creator.rebuild_initrds_for_live(opts, sys_root_dir, results_dir)[source]

Rebuild initrds for pxe live image (root=live:http://)

        Parameters
        @@ -533,7 +535,7 @@ type img_mount: imgutils.PartitionMount

        -pylorax.creator.run_creator(opts, cancel_func=None)[source]
        +pylorax.creator.run_creator(opts, cancel_func=None)[source]

        Run the image creator process

        Parameters
        @@ -556,7 +558,7 @@ See the cmdline --help for livemedia-creator for the possible options

        -pylorax.creator.squashfs_args(opts)[source]
        +pylorax.creator.squashfs_args(opts)[source]

        Returns the compression type and args to use when making squashfs

        Parameters
        @@ -576,7 +578,7 @@ See the cmdline --help for livemedia-creator for the possible options

        pylorax.decorators module

        -pylorax.decorators.singleton(cls)[source]
        +pylorax.decorators.singleton(cls)[source]
      @@ -584,11 +586,11 @@ See the cmdline --help for livemedia-creator for the possible options

      pylorax.discinfo module

      -class pylorax.discinfo.DiscInfo(release, basearch)[source]
      +class pylorax.discinfo.DiscInfo(release, basearch)[source]

      Bases: object

      -write(outfile)[source]
      +write(outfile)[source]
      @@ -598,7 +600,7 @@ See the cmdline --help for livemedia-creator for the possible options

      pylorax.dnfbase module

pylorax.dnfbase.get_dnf_base_object(installroot, sources, mirrorlists=None, repos=None, enablerepos=None, disablerepos=None, tempdir='/var/tmp', proxy=None, releasever='34', cachedir=None, logdir=None, sslverify=True, dnfplugins=None)[source]

      Create a dnf Base object and setup the repositories and installroot

      Parameters
If cachedir is None a dnf.cache directory is created inside tmpdir

      pylorax.dnfhelper module

class pylorax.dnfhelper.LoraxDownloadCallback[source]

Bases: DownloadProgress

end(payload, status, msg)[source]

progress(payload, done)[source]

start(total_files, total_size, total_drpms=0)[source]

class pylorax.dnfhelper.LoraxRpmCallback[source]

      Bases: TransactionProgress

error(message)[source]

progress(package, action, ti_done, ti_total, ts_done, ts_total)[source]

      pylorax.executils module

class pylorax.executils.ExecProduct(rc, stdout, stderr)[source]

Bases: object

pylorax.executils.augmentEnv()[source]

pylorax.executils.execReadlines(command, argv, stdin=None, root='/', env_prune=None, filter_stderr=False, callback=<function <lambda>>, env_add=None, reset_handlers=True, reset_lang=True)[source]

      Execute an external command and return the line output of the command in real-time.

This method assumes that there is a reasonably low delay between the end of output and the process exiting.

This returns an iterator with the lines from the command until it has finished

pylorax.executils.execWithCapture(command, argv, stdin=None, root='/', log_output=True, filter_stderr=False, raise_err=False, callback=None, env_add=None, reset_handlers=True, reset_lang=True)[source]

      Run an external program and capture standard out and err.

      Parameters
pylorax.executils.execWithRedirect(command, argv, stdin=None, stdout=None, root='/', env_prune=None, log_output=True, binary_output=False, raise_err=False, callback=None, env_add=None, reset_handlers=True, reset_lang=True)[source]

      Run an external program and redirect the output to a file.

      Parameters

pylorax.executils.runcmd(cmd, **kwargs)[source]

      run execWithRedirect with raise_err=True

pylorax.executils.runcmd_output(cmd, **kwargs)[source]

      run execWithCapture with raise_err=True
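The two wrappers above boil down to running a command and raising on a non-zero exit. Roughly equivalent behaviour with the standard library, as a sketch only — the pylorax versions add logging, chroot, and environment handling:

```python
import subprocess

def runcmd_output_sketch(cmd):
    # Like runcmd_output in spirit: run the command, raise
    # CalledProcessError on failure, return captured stdout.
    return subprocess.run(cmd, check=True,
                          capture_output=True, text=True).stdout
```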

pylorax.executils.setenv(name, value)[source]

      Set an environment variable to be used by child processes.

This method does not modify os.environ for the running process, which is not thread-safe. If setenv has already been called for a particular variable name, the old value is overwritten.
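A minimal sketch of the pattern described: variables are recorded in a module-level dict instead of mutating os.environ, and are merged into a copy of the environment when a child process is launched. The names here are illustrative, not the actual pylorax internals.

```python
import os

# Module-level store of variables destined only for child processes
_child_env = {}

def setenv(name, value):
    # Later calls for the same name overwrite the old value
    _child_env[name] = value

def augmented_env():
    # Merge stored variables into a *copy* of os.environ; the running
    # process's environment is never modified.
    env = os.environ.copy()
    env.update(_child_env)
    return env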

pylorax.executils.startProgram(argv, root='/', stdin=None, stdout=-1, stderr=-2, env_prune=None, env_add=None, reset_handlers=True, reset_lang=True, **kwargs)[source]

      Start an external program and return the Popen object.

The root and reset_handlers arguments are handled by passing a preexec_fn argument to subprocess.Popen, but an additional preexec_fn can still be specified and will be run. The one that is noted here will be run last.

      pylorax.imgutils module

class pylorax.imgutils.DMDev(dev, size, name=None)[source]

Bases: object

class pylorax.imgutils.LoopDev(filename, size=None)[source]

Bases: object

class pylorax.imgutils.Mount(dev, opts='', mnt=None)[source]

Bases: object

class pylorax.imgutils.PartitionMount(disk_img, mount_ok=None, submount=None)[source]

      Bases: object

      Mount a partitioned image file using kpartx

pylorax.imgutils.compress(command, root, outfile, compression='xz', compressargs=None)[source]

Make a compressed archive of the given rootdir or file. command is a list of the archiver commands to run. compression should be "xz", "gzip", "lzma", "bzip2", or None. compressargs will be used on the compression commandline.

pylorax.imgutils.copytree(src, dest, preserve=True)[source]

Copy a tree of files using cp -a, thus preserving modes, timestamps, links, acls, sparse files, xattrs, selinux contexts, etc. If preserve is False, uses cp -R (useful for modeless filesystems). raises CalledProcessError if copy fails.

pylorax.imgutils.default_image_name(compression, basename)[source]

      Return a default image name with the correct suffix for the compression type.

      Parameters
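A sketch of the compression-to-suffix mapping such a helper implies. The exact suffix table is an assumption for illustration, not taken from the pylorax source.

```python
def default_image_name_sketch(compression, basename):
    # Map each supported compression type to its conventional file suffix;
    # unknown or absent compression leaves the basename unchanged.
    suffixes = {"xz": ".xz", "gzip": ".gz", "bzip2": ".bz2", "lzma": ".lzma"}
    return basename + suffixes.get(compression, "")
```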

pylorax.imgutils.dm_attach(dev, size, name=None)[source]

      Attach a devicemapper device to the given device, with the given size. If name is None, a random name will be chosen. Returns the device name. raises CalledProcessError if dmsetup fails.


pylorax.imgutils.dm_detach(dev)[source]

      Detach the named devicemapper device. Returns False if dmsetup fails.

pylorax.imgutils.do_grafts(grafts, dest, preserve=True)[source]

      Copy each of the items listed in grafts into dest. If the key ends with '/' it's assumed to be a directory which should be created, otherwise just the leading directories will be created.


pylorax.imgutils.estimate_size(rootdir, graft=None, fstype=None, blocksize=4096, overhead=256)[source]

pylorax.imgutils.get_loop_name(path)[source]

      Return the loop device associated with the path. Raises RuntimeError if more than one loop is associated

pylorax.imgutils.kpartx_disk_img(disk_img)[source]

      Attach a disk image's partitions to /dev/loopX using kpartx

      Parameters

pylorax.imgutils.loop_attach(outfile)[source]

      Attach a loop device to the given file. Return the loop device name.

On rare occasions it appears that the device never shows up; some experiments seem to indicate that it may be a race with another process using /dev/loop* devices.

pylorax.imgutils.loop_detach(loopdev)[source]

Detach the given loop device. Return False on failure.

pylorax.imgutils.loop_waitfor(loop_dev, outfile)[source]

      Make sure the loop device is attached to the outfile.

It seems that on rare occasions losetup can return before the /dev/loopX is ready for use, causing problems with mkfs. This tries to make sure that the loop device really is associated with the backing file before continuing.

pylorax.imgutils.mkbtrfsimg(rootdir, outfile, size=None, label='', mountargs='', graft=None)[source]

pylorax.imgutils.mkcpio(root, outfile, compression='xz', compressargs=None)[source]

pylorax.imgutils.mkdosimg(rootdir, outfile, size=None, label='', mountargs='shortname=winnt,umask=0077', graft=None)[source]

pylorax.imgutils.mkext4img(rootdir, outfile, size=None, label='', mountargs='', graft=None)[source]

pylorax.imgutils.mkfsimage(fstype, rootdir, outfile, size=None, mkfsargs=None, mountargs='', graft=None)[source]

Generic filesystem image creation function. fstype should be a filesystem type - "mkfs.${fstype}" must exist. graft should be a dict: {"some/path/in/image": "local/file/or/dir"}. Will raise CalledProcessError if something goes wrong.

pylorax.imgutils.mkfsimage_from_disk(diskimage, fsimage, img_size=None, label='Anaconda')[source]

      Copy the / partition of a partitioned disk image to an un-partitioned disk image.


pylorax.imgutils.mkhfsimg(rootdir, outfile, size=None, label='', mountargs='', graft=None)[source]

pylorax.imgutils.mkqcow2(outfile, size, options=None)[source]

      use qemu-img to create a file of the given size. options is a list of options passed to qemu-img

Default format is qcow2, override by passing "-f", fmt in options.

pylorax.imgutils.mkqemu_img(outfile, size, options=None)[source]

      use qemu-img to create a file of the given size. options is a list of options passed to qemu-img

Default format is qcow2, override by passing "-f", fmt in options.

pylorax.imgutils.mkrootfsimg(rootdir, outfile, label, size=2, sysroot='')[source]

      Make rootfs image from a directory

      Parameters

pylorax.imgutils.mksparse(outfile, size)[source]

      use os.ftruncate to create a sparse file of the given size.
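The description above maps directly onto os.ftruncate; a self-contained sketch of the technique (not the pylorax source itself):

```python
import os

def mksparse_sketch(outfile, size):
    # Truncating a freshly opened file to `size` allocates no data blocks,
    # producing a sparse file of the requested apparent size.
    with open(outfile, "wb") as fd:
        os.ftruncate(fd.fileno(), size)
```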

pylorax.imgutils.mksquashfs(rootdir, outfile, compression='default', compressargs=None)[source]

      Make a squashfs image containing the given rootdir.

pylorax.imgutils.mktar(root, outfile, compression='xz', compressargs=None, selinux=True)[source]

pylorax.imgutils.mount(dev, opts='', mnt=None)[source]

Mount the given device at the given mountpoint, using the given opts. opts should be a comma-separated string of mount options. If mnt is none, a temporary directory will be created and its path will be returned instead. raises CalledProcessError if mount fails.

pylorax.imgutils.round_to_blocks(size, blocksize)[source]

      If size isn't a multiple of blocksize, round up to the next multiple
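The rounding described is simple integer arithmetic; a sketch of the behaviour:

```python
def round_to_blocks_sketch(size, blocksize):
    # Round size up to the next multiple of blocksize,
    # leaving exact multiples unchanged.
    diff = size % blocksize
    if diff:
        size += blocksize - diff
    return size
```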

pylorax.imgutils.umount(mnt, lazy=False, maxretry=3, retrysleep=1.0, delete=True)[source]

      Unmount the given mountpoint. If lazy is True, do a lazy umount (-l). If the mount was a temporary dir created by mount, it will be deleted. raises CalledProcessError if umount fails.


pylorax.installer module

exception pylorax.installer.InstallError[source]

      Bases: Exception

class pylorax.installer.QEMUInstall(opts, iso, ks_paths, disk_img, img_size=2048, kernel_args=None, memory=1024, vcpus=None, vnc=None, arch=None, cancel_func=None, virtio_host='127.0.0.1', virtio_port=6080, image_type=None, boot_uefi=False, ovmf_path=None)[source]

      Bases: object

      Run qemu using an iso and a kickstart

QEMU_CMDS = {'aarch64': 'qemu-system-aarch64', 'arm': 'qemu-system-arm', 'i386': 'qemu-system-i386', 'ppc64le': 'qemu-system-ppc64', 'x86_64': 'qemu-system-x86_64'}

pylorax.installer.anaconda_cleanup(dirinstall_path)[source]

      Cleanup any leftover mounts from anaconda

      Parameters

pylorax.installer.append_initrd(initrd, files)[source]

      Append files to an initrd.

      Parameters

pylorax.installer.create_vagrant_metadata(path, size=0)[source]

      Create a default Vagrant metadata.json file

      Parameters

pylorax.installer.find_free_port(start=5900, end=5999, host='127.0.0.1')[source]

      Return first free port in range.

      Parameters
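A common way to find the first free port in a range is to try binding each one in turn; a sketch of that approach, not the pylorax implementation:

```python
import socket

def find_free_port_sketch(start=5900, end=5999, host="127.0.0.1"):
    # The first port that binds successfully is free; return None
    # if every port in the range is taken.
    for port in range(start, end + 1):
        try:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.bind((host, port))
            s.close()
            return port
        except OSError:
            continue
    return None
```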

pylorax.installer.novirt_cancel_check(cancel_funcs, proc)[source]

      Check to see if there has been an error in the logs

      Parameters
When an error is detected the process is terminated and this returns True

pylorax.installer.novirt_install(opts, disk_img, disk_size, cancel_func=None, tar_img=None)[source]

      Use Anaconda to install to a disk image

      Parameters

pylorax.installer.update_vagrant_metadata(path, size)[source]

      Update the Vagrant metadata.json file

      Parameters

pylorax.installer.virt_install(opts, install_log, disk_img, disk_size, cancel_func=None, tar_img=None)[source]

      Use qemu to install to a disk image

      Parameters

pylorax.ltmpl module

class pylorax.ltmpl.LiveTemplateRunner(dbo, fatalerrors=True, templatedir=None, defaults=None)[source]

      Bases: pylorax.ltmpl.TemplateRunner

      This class parses and executes a limited Lorax template. Sample usage:

It is used to get the list of packages needed to build the live-iso output.

installpkg(*pkgs)[source]
      installpkg [--required|--optional] [--except PKGGLOB [--except PKGGLOB ...]] PKGGLOB [PKGGLOB ...]

Request installation of all packages matching the given globs. Note that this is just a request - nothing is actually installed until the 'run_pkg_transaction' command is given.

class pylorax.ltmpl.LoraxTemplate(directories=None)[source]

Bases: object

parse(template_file, variables)[source]

class pylorax.ltmpl.LoraxTemplateRunner(inroot, outroot, dbo=None, fatalerrors=True, templatedir=None, defaults=None)[source]

      Bases: pylorax.ltmpl.TemplateRunner

      This class parses and executes Lorax templates. Sample usage:


append(filename, data)[source]
      append FILE STRING

Append STRING (followed by a newline character) to FILE. Python character escape sequences ('\n', '\t', etc.) will be converted to the appropriate characters. Example: append /etc/resolv.conf ""

chmod(fileglob, mode)[source]
      chmod FILEGLOB OCTALMODE

      Change the mode of all the files matching FILEGLOB to OCTALMODE.


copy(src, dest)[source]
      copy SRC DEST

Copy SRC to DEST. If DEST is a directory, SRC will be copied inside it. If DEST doesn't exist, SRC will be copied to a file with that name, if the path leading to it exists.

createaddrsize(addr, src, dest)[source]
      createaddrsize INITRD_ADDRESS INITRD ADDRSIZE

      Create the initrd.addrsize file required in LPAR boot process.


hardlink(src, dest)[source]
      hardlink SRC DEST

      Create a hardlink at DEST which is linked to SRC.


install(srcglob, dest)[source]
      install SRC DEST

Copy the given file (or files, if a glob is used) from the input tree to the given destination in the output tree. Example: install /usr/share/myconfig/grub.conf.in /boot/grub.conf

installimg(*args)[source]
      installimg [--xz|--gzip|--bzip2|--lzma] [-ARG|--ARG=OPTION] SRCDIR DESTFILE

      Create a compressed cpio archive of the contents of SRCDIR and place it in DESTFILE.

The default compression is xz -9.

installinitrd(section, src, dest)[source]
      installinitrd SECTION SRC DEST

      Same as installkernel, but for "initrd".


installkernel(section, src, dest)[source]
      installkernel SECTION SRC DEST

Install the kernel from SRC in the input tree to DEST in the output tree, and then add an item to the treeinfo data store, in the named SECTION: treeinfo SECTION kernel DEST

installpkg(*pkgs)[source]
      installpkg [--required|--optional] [--except PKGGLOB [--except PKGGLOB ...]] PKGGLOB [PKGGLOB ...]

Request installation of all packages matching the given globs. Note that this is just a request - nothing is actually installed until the 'run_pkg_transaction' command is given.

      -installupgradeinitrd(section, src, dest)[source]
      +installupgradeinitrd(section, src, dest)[source]
      installupgradeinitrd SECTION SRC DEST

      Same as installkernel, but for "upgrade".


log(msg)[source]
      log MESSAGE

      Emit the given log message. Be sure to put it in quotes!


mkdir(*dirs)[source]
      mkdir DIR [DIR ...]

      Create the named DIR(s). Will create leading directories as needed.


move(src, dest)[source]
      move SRC DEST

      Move SRC to DEST.


remove(*fileglobs)[source]
      remove FILEGLOB [FILEGLOB ...]

      Remove all the named files or directories. Will not raise exceptions if the file(s) are not found.


removefrom(pkg, *globs)[source]
      removefrom PKGGLOB [--allbut] FILEGLOB [FILEGLOB...]

Remove all files matching the given file globs from the package (or packages) named. Example: removefrom xfsprogs --allbut /sbin/*

removekmod(*globs)[source]
      removekmod GLOB [GLOB...] [--allbut] KEEPGLOB [KEEPGLOB...]

      Remove all files and directories matching the given file globs from the kernel modules directory.

Example: removekmod drivers/char --allbut virtio_console hw_random

removepkg(*pkgs)[source]
      removepkg PKGGLOB [PKGGLOB...]

      Delete the named package(s).

Files are deleted, but directories are left behind.

replace(pat, repl, *fileglobs)[source]
      replace PATTERN REPLACEMENT FILEGLOB [FILEGLOB ...]

      Find-and-replace the given PATTERN (Python-style regex) with the given REPLACEMENT string for each of the files listed.

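The find-and-replace behaviour described above can be sketched with re.sub applied in place to every file matching the globs — an illustration of the semantics, not the template runner's actual code:

```python
import glob
import re

def replace_sketch(pat, repl, *fileglobs):
    # Apply the regex replacement in place to every matching file
    for fileglob in fileglobs:
        for path in glob.glob(fileglob):
            with open(path) as f:
                text = f.read()
            with open(path, "w") as f:
                f.write(re.sub(pat, repl, text))
```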

run_pkg_transaction()[source]

      Actually install all the packages requested by previous 'installpkg' commands.

runcmd(*cmdlist)[source]
      runcmd CMD [ARG ...]

      Run the given command with the given arguments.

NOTE: All paths given MUST be COMPLETE, ABSOLUTE PATHS to the file or files mentioned.

symlink(target, dest)[source]
      symlink SRC DEST

      Create a symlink at DEST which points to SRC.


systemctl(cmd, *units)[source]
      systemctl [enable|disable|mask] UNIT [UNIT...]

      Enable, disable, or mask the given systemd units.

Example: systemctl mask fedora-storage-init.service fedora-configure.service

treeinfo(section, key, *valuetoks)[source]
      treeinfo SECTION KEY ARG [ARG ...]

Add an item to the treeinfo data store. The given SECTION will have a new item added where KEY = ARG ARG ...

class pylorax.ltmpl.TemplateRunner(fatalerrors=True, templatedir=None, defaults=None, builtins=None)[source]

      Bases: object

      This class parses and executes Lorax templates. Sample usage:


run(templatefile, **variables)[source]

pylorax.ltmpl.brace_expand(s)[source]

pylorax.ltmpl.rexists(pathname, root='')[source]

pylorax.ltmpl.rglob(pathname, root='/', fatal=False)[source]

pylorax.ltmpl.split_and_expand(line)[source]

pylorax.monitor module

class pylorax.monitor.LogMonitor(log_path=None, host='localhost', port=0, timeout=None, log_request_handler_class=<class 'pylorax.monitor.LogRequestHandler'>)[source]

      Bases: object

      Setup a server to monitor the logs output by the installation

This needs to be running before the virt-install runs; it expects there to be a listener on the port used for the virtio log port.

shutdown()[source]

      Force shutdown of the monitoring thread


class pylorax.monitor.LogRequestHandler(request, client_address, server)[source]

      Bases: socketserver.BaseRequestHandler

      Handle monitoring and saving the logfiles from the virtual install

Incoming data is written to self.server.log_path and each line is checked for patterns that would indicate that the installation failed. self.server.log_error is set True when this happens.

finish()[source]

handle()[source]

      Write incoming data to a logfile and check for errors

      Split incoming data into lines and check for any Tracebacks or other errors that indicate that the install failed.


iserror(line)[source]

      Check a line to see if it contains an error indicating installation failure

      Parameters

re_tests = ['packaging: base repo .* not valid', 'packaging: .* requires .*']

setup()[source]

      Start writing to self.server.log_path

simple_tests = ['Traceback (', 'traceback script(s) have been run', 'Out of memory:', 'Call Trace:', 'insufficient disk space:', 'Not enough disk space to download the packages', 'error populating transaction after', 'crashed on signal', 'packaging: Missed: NoSuchPackage', 'packaging: Installation failed', 'The following error occurred while installing.  This is a fatal error', 'Error in POSTIN scriptlet in rpm package']

class pylorax.monitor.LogServer(log_path, *args, **kwargs)[source]
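Given the simple_tests and re_tests patterns listed above, the iserror check amounts to a substring scan plus a regex scan over each log line. A sketch consistent with those attributes (the pattern lists here are abbreviated):

```python
import re

# Abbreviated from the simple_tests / re_tests attributes shown above
simple_tests = ['Traceback (', 'Out of memory:', 'Call Trace:',
                'insufficient disk space:']
re_tests = ['packaging: base repo .* not valid', 'packaging: .* requires .*']

def iserror_sketch(line):
    # A line indicates failure if it contains any known error substring
    # or matches any of the error regexes.
    if any(t in line for t in simple_tests):
        return True
    return any(re.search(t, line) for t in re_tests)
```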

      Bases: socketserver.TCPServer

      A TCP Server that listens for log data

log_check()[source]

      Check to see if an error has been found in the log

      Returns

timeout = 60

      pylorax.mount module

class pylorax.mount.IsoMountpoint(iso_path, initrd_path=None)[source]

      Bases: object

      Mount the iso and check to make sure the vmlinuz and initrd.img files exist

Also check the iso for a stage2 image and set a flag and extract the iso's label.

      stage2 can be either LiveOS/squashfs.img or images/install.img

get_iso_label()[source]

      Get the iso's label using isoinfo

      Sets self.label if one is found

umount()[source]

      Unmount the iso

      @@ -1844,37 +1846,37 @@ iso's label.

pylorax.sysutils module

pylorax.sysutils.chmod_(path, mode, recursive=False)

pylorax.sysutils.chown_(path, user=None, group=None, recursive=False)

pylorax.sysutils.joinpaths(*args, **kwargs)

pylorax.sysutils.linktree(src, dst)

pylorax.sysutils.remove(target)

pylorax.sysutils.replace(fname, find, sub)

pylorax.sysutils.touch(fname)
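The sysutils helpers are thin wrappers over standard library calls. A minimal sketch of three of them, based only on the documented signatures (the real implementations handle more edge cases, and joinpaths also accepts keyword arguments):

```python
import os
import re

def joinpaths(*args):
    # Join path components into one normalized path
    # (illustrative only; the real pylorax version takes **kwargs too).
    return os.path.normpath(os.path.sep.join(args))

def replace(fname, find, sub):
    # sed-like in-place substitution: replace every regex match of
    # `find` with `sub` in the file (illustrative sketch).
    with open(fname, "r") as f:
        text = f.read()
    with open(fname, "w") as f:
        f.write(re.sub(find, sub, text))

def touch(fname):
    # Create an empty file, or update its mtime if it already exists.
    with open(fname, "a"):
        os.utime(fname, None)
```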

pylorax.treebuilder module

class pylorax.treebuilder.RuntimeBuilder(product, arch, dbo, templatedir=None, installpkgs=None, excludepkgs=None, add_templates=None, add_template_vars=None, skip_branding=False)

Bases: object

Builds the anaconda runtime image.

cleanup()

Remove unneeded packages and files with runtime-cleanup.tmpl

create_ext4_runtime(outfile='/var/tmp/squashfs.img', compression='xz', compressargs=None, size=2)

Create a squashfs compressed ext4 runtime

create_squashfs_runtime(outfile='/var/tmp/squashfs.img', compression='xz', compressargs=None, size=2)

Create a plain squashfs runtime

finished()

Done using RuntimeBuilder

Close the dnf base object

generate_module_data()

install()

Install packages and do initial setup with runtime-install.tmpl

postinstall()

Do some post-install setup work with runtime-postinstall.tmpl

verify()

Ensure that contents of the installroot can run

writepkglists(pkglistdir)

debugging data: write out lists of package contents

writepkgsizes(pkgsizefile)

debugging data: write a big list of pkg sizes
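writepkgsizes() is debugging output: one package per line with its size. The output format can be sketched as below, assuming a plain name-to-bytes mapping as input (a hypothetical simplification; the real method gathers sizes from the dnf transaction itself):

```python
def writepkgsizes(pkgsizefile, pkg_sizes):
    # Write one "name size" line per package, sorted by name.
    # pkg_sizes is assumed to be a dict mapping package name -> bytes.
    with open(pkgsizefile, "w") as f:
        for name in sorted(pkg_sizes):
            f.write("{} {}\n".format(name, pkg_sizes[name]))
```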

class pylorax.treebuilder.TreeBuilder(product, arch, inroot, outroot, runtime, isolabel, domacboot=True, doupgrade=True, templatedir=None, add_templates=None, add_template_vars=None, workdir=None, extra_boot_args='')

Bases: object

Builds the arch-specific boot images. inroot should be the installtree root (the newly-built runtime dir)

build()

copy_dracut_hooks(hooks)

Copy the hook scripts in hooks into the installroot's /tmp/ and return a list of commands to pass to dracut when creating the initramfs

property dracut_hooks_path

Return the path to the lorax dracut hooks scripts

Use the configured share dir if it is setup, otherwise default to /usr/share/lorax/dracut_hooks

implantisomd5()

property kernels

rebuild_initrds(add_args=None, backup='', prefix='')

Rebuild all the initrds in the tree. If backup is specified, each initrd will be renamed with backup as a suffix before rebuilding. If backup is empty, the existing initrd files will be overwritten.

pylorax.treebuilder.findkernels(root='/', kdir='boot')

pylorax.treebuilder.generate_module_info(moddir, outfile=None)

pylorax.treebuilder.string_lower(string)

Return a lowercase string.

pylorax.treebuilder.udev_escape(label)
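udev_escape() makes a filesystem label safe for use in /dev/disk/by-label/ paths. udev encodes unsafe bytes as \xNN hex escapes; the sketch below illustrates that scheme, though the exact whitelist of safe characters here is an assumption, not pylorax's:

```python
def udev_escape(label):
    # Replace any character outside a whitelisted set with the \xNN hex
    # escape of its UTF-8 bytes, in the style udev uses for device links.
    # The whitelist below is an illustrative assumption.
    safe = set("abcdefghijklmnopqrstuvwxyz"
               "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
               "0123456789#+-.:=@_")
    out = []
    for ch in label:
        if ch in safe:
            out.append(ch)
        else:
            out.extend("\\x%02x" % b for b in ch.encode("utf-8"))
    return "".join(out)
```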

pylorax.treeinfo module

class pylorax.treeinfo.TreeInfo(product, version, variant, basearch, packagedir='')

Bases: object

add_section(section, data)

write(outfile)
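A .treeinfo file is ini-style metadata describing the install tree, so TreeInfo can be sketched with configparser. The [general] keys below mirror common .treeinfo fields, but the exact set pylorax writes is an assumption:

```python
import configparser

class TreeInfo:
    # Illustrative sketch: collect sections of key/value data and
    # write them out as an ini-style .treeinfo file.
    def __init__(self, product, version, variant, basearch, packagedir=""):
        self.c = configparser.ConfigParser()
        self.c["general"] = {
            "family": product,
            "version": version,
            "variant": variant,
            "arch": basearch,
            "packagedir": packagedir,
        }

    def add_section(self, section, data):
        # data is a dict of key/value pairs for the named section
        self.c[section] = data

    def write(self, outfile):
        with open(outfile, "w") as f:
            self.c.write(f)
```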

Module contents

class pylorax.ArchData(buildarch)

Bases: pylorax.base.DataHolder

bcj_arch = {'arm': 'arm', 'armhfp': 'arm', 'i386': 'x86', 'ppc64le': 'powerpc', 'x86_64': 'x86'}

lib64_arches = ('x86_64', 'ppc64le', 's390x', 'ia64', 'aarch64')

class pylorax.Lorax

Bases: pylorax.base.BaseLoraxClass

configure(conf_file='/etc/lorax/lorax.conf')

init_file_logging(logdir, logname='pylorax.log')

init_stream_logging()

run(dbo, product, version, release, variant='', bugurl='', isfinal=False, workdir=None, outputdir=None, buildarch=None, volid=None, domacboot=True, doupgrade=True, remove_temp=False, installpkgs=None, excludepkgs=None, size=2, add_templates=None, add_template_vars=None, add_arch_templates=None, add_arch_template_vars=None, verify=True, user_dracut_args=None, squashfs_only=False, skip_branding=False)

property templatedir

Find the template directory.

Pick the first directory under sharedir/templates.d/ if it exists. Otherwise use the sharedir.

pylorax.find_templates(templatedir='/usr/share/lorax')

Find the templates to use.

If templatedir isn't set, /usr/share/lorax/templates.d/ is searched and the lowest numbered directory entry is returned.

pylorax.get_buildarch(dbo)

pylorax.log_selinux_state()

Log the current state of selinux

pylorax.setup_logging(logfile, theLogger)

Setup the various logs
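The find_templates() lookup — prefer the lowest numbered entry under templates.d/, fall back to the share directory itself — can be sketched as below. This is a sketch of the documented behavior, not pylorax's actual implementation:

```python
import os

def find_templates(templatedir="/usr/share/lorax"):
    # If templatedir contains a templates.d/ directory, return its
    # lowest numbered subdirectory; otherwise return templatedir itself.
    # A lexical sort is adequate for the zero-padded NN-name convention.
    tdir = os.path.join(templatedir, "templates.d")
    if os.path.isdir(tdir):
        entries = sorted(e for e in os.listdir(tdir)
                         if os.path.isdir(os.path.join(tdir, e)))
        if entries:
            return os.path.join(tdir, entries[0])
    return templatedir
```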
