Remove lorax-composer, it has been replaced by osbuild-composer

Remove the code, related files, and tests.
This commit is contained in:
Brian C. Lane 2020-09-30 14:53:29 -07:00
parent 506d5d9ebd
commit 7616a10373
121 changed files with 19 additions and 21220 deletions

View File

@ -5,7 +5,7 @@ mandir ?= $(PREFIX)/share/man
DOCKER ?= podman
DOCS_VERSION ?= next
RUN_TESTS ?= ci
BACKEND ?= lorax-composer
BACKEND ?= osbuild-composer
PKGNAME = lorax
VERSION = $(shell awk '/Version:/ { print $$2 }' $(PKGNAME).spec)
@ -47,14 +47,12 @@ install: all
check:
@echo "*** Running pylint ***"
PYTHONPATH=$(PYTHONPATH):./src/ ./tests/pylint/runpylint.py
@echo "*** Running yamllint ***"
./tests/lint-playbooks.sh
test:
@echo "*** Running tests ***"
PYTHONPATH=$(PYTHONPATH):./src/ $(PYTHON) -m pytest -v --cov-branch \
--cov=pylorax --cov=lifted --cov=composer \
./tests/pylorax/ ./tests/composer/ ./tests/lifted/
--cov=pylorax --cov=composer \
./tests/pylorax/ ./tests/composer/
coverage3 report -m
[ -f "/usr/bin/coveralls" ] && [ -n "$(COVERALLS_REPO_TOKEN)" ] && coveralls || echo

View File

@ -265,7 +265,6 @@ latex_documents = [
man_pages = [
('lorax', 'lorax', u'Lorax Documentation', [u'Weldr Team'], 1),
('livemedia-creator', 'livemedia-creator', u'Live Media Creator Documentation', [u'Weldr Team'], 1),
('lorax-composer', 'lorax-composer', u'Lorax Composer Documentation', [u'Weldr Team'], 1),
('composer-cli', 'composer-cli', u'Composer Cmdline Utility Documentation', [u'Weldr Team'], 1),
('mkksiso', 'mkksiso', u'Make Kickstart ISO Utility Documentation', [u'Weldr Team'], 1),
]

View File

@ -1,535 +1,8 @@
lorax-composer
==============
:Authors:
Brian C. Lane <bcl@redhat.com>
``lorax-composer`` is a WELDR API server that allows you to build disk images using
`Blueprints`_ to describe the package versions to be installed into the image.
It is compatible with the Weldr project's bdcs-api REST protocol. More
information on Weldr can be found `on the Weldr blog <http://www.weldr.io>`_.
Behind the scenes it uses `livemedia-creator <livemedia-creator.html>`_ and
`Anaconda <https://anaconda-installer.readthedocs.io/en/latest/>`_ to handle the
installation and configuration of the images.
.. note::
``lorax-composer`` is now deprecated. It is being replaced by the
``osbuild-composer`` WELDR API server which implements more features (eg.
ostree, image uploads, etc.) You can still use ``composer-cli`` and
``cockpit-composer`` with ``osbuild-composer``. See the documentation or
the `osbuild website <https://www.osbuild.org/>`_ for more information.
Important Things To Note
------------------------
* As of version 30.7 SELinux can be set to Enforcing. The current state is
logged for debugging purposes and if there are SELinux denials they should
be reported as a bug.
* All image types lock the root account, except for live-iso. You will need to either
use one of the `Customizations`_ methods for setting an ssh key/password, install a
package that creates a user, or use something like `cloud-init` to set up access at
boot time.
Installation
------------
The best way to install ``lorax-composer`` is to use ``sudo dnf install
lorax-composer composer-cli``. This will set up the weldr user and install the
systemd socket activation service. You will then need to enable it with ``sudo
systemctl enable lorax-composer.socket && sudo systemctl start
lorax-composer.socket``. This will leave the server off until the first request
is made. Systemd will then launch the server and it will remain running until
the system is rebooted. This will cause some delay in responding to the first
request from the UI or `composer-cli`.
.. note::
If you want lorax-composer to respond immediately to the first request you can
start and enable `lorax-composer.service` instead of `lorax-composer.socket`
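Putting the steps above together, a typical setup (on a distribution that still
ships these packages) looks like this::

    sudo dnf install lorax-composer composer-cli
    sudo systemctl enable lorax-composer.socket
    sudo systemctl start lorax-composer.socket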
Quickstart
----------
1. Create a ``weldr`` user and group by running ``useradd weldr``
2. Remove any pre-existing socket directory with ``rm -rf /run/weldr/``
A new directory with correct permissions will be created the first time the server runs.
3. Enable the socket activation with ``systemctl enable lorax-composer.socket
&& sudo systemctl start lorax-composer.socket``.
NOTE: You can also run it directly with ``lorax-composer /path/to/blueprints``. However,
``lorax-composer`` does not react well to being started both on the command line and via
socket activation at the same time. It is therefore recommended that you run it directly
on the command line only for testing or development purposes. For real use or development
of other projects that simply use the API, you should stick to socket activation only.
The ``/path/to/blueprints/`` directory is where the blueprints' git repo will
be created, and all the blueprints created with the ``/api/v0/blueprints/new``
route will be stored. If there are blueprint ``.toml`` files in the top level
of the directory they will be imported into the blueprint git storage when
``lorax-composer`` starts.
Logs
----
Logs are stored under ``/var/log/lorax-composer/`` and include all console
messages as well as extra debugging info and API requests.
Security
--------
Some security-related issues that you should be aware of before running ``lorax-composer``:
* One of the API server threads needs to retain root privileges in order to run Anaconda.
* Only allow authorized users access to the ``weldr`` group and socket.
Since Anaconda kickstarts are used, there is the possibility that a user could
inject commands into a blueprint that would result in the kickstart executing
arbitrary code on the host. Only authorized users should be allowed to build
images using ``lorax-composer``.
lorax-composer cmdline arguments
--------------------------------
.. argparse::
:ref: pylorax.api.cmdline.lorax_composer_parser
:prog: lorax-composer
How it Works
------------
The server runs as root, and as ``weldr``. Communication with it is via a unix
domain socket (``/run/weldr/api.socket`` by default). The directory and socket
are owned by ``root:weldr`` so that any user in the ``weldr`` group can use the API
to control ``lorax-composer``.
At startup the server will check for the correct permissions and
ownership of a pre-existing directory, or it will create a new one if it
doesn't exist. The socket path and group owner's name can be changed from the
cmdline by passing it the ``--socket`` and ``--group`` arguments.
It will then drop root privileges for the API thread and run as the ``weldr``
user. The queue and compose thread still runs as root because it needs to be
able to mount/umount files and run Anaconda.
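For example, a sketch of starting it by hand with a non-default socket path and
group (both values here are illustrative, and remember the caveat above about
mixing command line startup with socket activation)::

    lorax-composer --socket /run/weldr/test.socket --group wheel /path/to/blueprints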
Composing Images
----------------
The `welder-web <https://github.com/weldr/welder-web/>`_ GUI project can be used to construct
blueprints and create composes using a web browser.
Or use the command line with `composer-cli <composer-cli.html>`_.
Blueprints
----------
Blueprints are simple text files in `TOML <https://github.com/toml-lang/toml>`_ format that describe
which packages, and what versions, to install into the image. They can also define a limited set
of customizations to make to the final image.
Example blueprints can be found in the ``lorax-composer`` `test suite
<https://github.com/weldr/lorax/tree/master/tests/pylorax/blueprints/>`_, with a simple one
looking like this::
name = "base"
description = "A base system with bash"
version = "0.0.1"
[[packages]]
name = "bash"
version = "4.4.*"
The ``name`` field is the name of the blueprint. It can contain spaces, but they will be converted to ``-``
when it is written to disk. It should be short and descriptive.
``description`` can be a longer description of the blueprint; it is only used for display purposes.
``version`` is a `semver compatible <https://semver.org/>`_ version number. If
a new blueprint is uploaded with the same ``version`` the server will
automatically bump the PATCH level of the ``version``. If the ``version``
doesn't match it will be used as is. eg. Uploading a blueprint with ``version``
set to ``0.1.0`` when the existing blueprint ``version`` is ``0.0.1`` will
result in the new blueprint being stored as ``version 0.1.0``.
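For example, uploading a blueprint from the command line with `composer-cli
<composer-cli.html>`_ (assuming it was saved as ``base.toml``) will trigger
this version handling::

    composer-cli blueprints push base.toml
    composer-cli blueprints show base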
[[packages]] and [[modules]]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
These entries describe the package names and matching version glob to be installed into the image.
The names must match the package names exactly, and the versions can be an exact match
or a filesystem-like glob of the version using ``*`` wildcards and ``?``
character matching.
NOTE: Currently there are no differences between ``packages`` and ``modules``
in ``lorax-composer``. Both are treated like an rpm package dependency.
For example, to install ``tmux-2.9a`` and ``openssh-server-8.*``, you would add
this to your blueprint::
[[packages]]
name = "tmux"
version = "2.9a"
[[packages]]
name = "openssh-server"
version = "8.*"
[[groups]]
~~~~~~~~~~
The ``groups`` entries describe a group of packages to be installed into the image. Package groups are
defined in the repository metadata. Each group has a descriptive name used primarily for display
in user interfaces and an ID more commonly used in kickstart files. Here, the ID is the expected
way of listing a group.
Groups have three different ways of categorizing their packages: mandatory, default, and optional.
For purposes of blueprints, mandatory and default packages will be installed. There is no mechanism
for selecting optional packages.
For example, if you want to install the ``anaconda-tools`` group you would add this to your
blueprint::
[[groups]]
name="anaconda-tools"
``groups`` is a TOML list, so each group needs to be listed separately, like ``packages`` but with
no version number.
Customizations
~~~~~~~~~~~~~~
The ``[customizations]`` section can be used to configure the hostname of the final image. eg.::
[customizations]
hostname = "baseimage"
This is optional and may be left out to use the defaults.
[customizations.kernel]
***********************
This allows you to append arguments to the bootloader's kernel commandline. This will not have any
effect on ``tar`` or ``ext4-filesystem`` images since they do not include a bootloader.
For example::
[customizations.kernel]
append = "nosmt=force"
[[customizations.sshkey]]
*************************
Set an existing user's ssh key in the final image::
[[customizations.sshkey]]
user = "root"
key = "PUBLIC SSH KEY"
The key will be added to the user's authorized_keys file.
.. warning::
``key`` expects the entire content of ``~/.ssh/id_rsa.pub``
[[customizations.user]]
***********************
Add a user to the image, and/or set their ssh key.
All fields for this section are optional except for the ``name``; here is a complete example::
[[customizations.user]]
name = "admin"
description = "Administrator account"
password = "$6$CHO2$3rN8eviE2t50lmVyBYihTgVRHcaecmeCk31L..."
key = "PUBLIC SSH KEY"
home = "/srv/widget/"
shell = "/usr/bin/bash"
groups = ["widget", "users", "wheel"]
uid = 1200
gid = 1200
If the password starts with ``$6$``, ``$5$``, or ``$2b$`` it will be stored as
an encrypted password. Otherwise it will be treated as a plain text password.
.. warning::
``key`` expects the entire content of ``~/.ssh/id_rsa.pub``
[[customizations.group]]
************************
Add a group to the image. ``name`` is required and ``gid`` is optional::
[[customizations.group]]
name = "widget"
gid = 1130
[customizations.timezone]
*************************
Customizing the timezone and the NTP servers to use for the system::
[customizations.timezone]
timezone = "US/Eastern"
ntpservers = ["0.north-america.pool.ntp.org", "1.north-america.pool.ntp.org"]
The values supported by ``timezone`` can be listed by running ``timedatectl list-timezones``.
If no timezone is set up the system will default to using `UTC`. The ntp servers are also
optional and will default to using the distribution defaults which are fine for most uses.
In some image types there are already NTP servers set up, eg. Google cloud image, and they
cannot be overridden because they are required to boot in the selected environment. But the
timezone will be updated to the one selected in the blueprint.
[customizations.locale]
***********************
Customize the locale settings for the system::
[customizations.locale]
languages = ["en_US.UTF-8"]
keyboard = "us"
The values supported by ``languages`` can be listed by running ``localectl list-locales`` from
the command line.
The values supported by ``keyboard`` can be listed by running ``localectl list-keymaps`` from
the command line.
Multiple languages can be added. The first one becomes the
primary, and the others are added as secondary. One or the other of ``languages``
or ``keyboard`` must be included (or both) in the section.
[customizations.firewall]
*************************
By default the firewall blocks all access except for services that enable their ports explicitly,
like ``sshd``. This section can be used to open other ports or services. Ports are configured using
the port:protocol format::
[customizations.firewall]
ports = ["22:tcp", "80:tcp", "imap:tcp", "53:tcp", "53:udp"]
Numeric ports, or their names from ``/etc/services``, can be used in the ``ports`` enabled/disabled lists.
The blueprint settings extend any existing settings in the image templates, so if ``sshd`` is
already enabled it will extend the list of ports with the ones listed by the blueprint.
If the distribution uses ``firewalld`` you can specify services listed by ``firewall-cmd --get-services``
in a ``customizations.firewall.services`` section::
[customizations.firewall.services]
enabled = ["ftp", "ntp", "dhcp"]
disabled = ["telnet"]
Remember that the ``firewall.services`` are different from the names in ``/etc/services``.
Both are optional; if they are not used, leave them out or set them to an empty list ``[]``. If you
only want the default firewall setup this section can be omitted from the blueprint.
NOTE: The ``Google`` and ``OpenStack`` templates explicitly disable the firewall for their environment.
This cannot be overridden by the blueprint.
[customizations.services]
*************************
This section can be used to control which services are enabled at boot time.
Some image types already have services enabled or disabled in order for the
image to work correctly, and cannot be overridden. eg. ``ami`` requires
``sshd``, ``chronyd``, and ``cloud-init``. Without them the image will not
boot. Blueprint services are added to, not replacing, the list already in the
templates, if any.
The service names are systemd service units. You may specify any systemd unit
file accepted by ``systemctl enable`` eg. ``cockpit.socket``::
[customizations.services]
enabled = ["sshd", "cockpit.socket", "httpd"]
disabled = ["postfix", "telnetd"]
[[repos.git]]
~~~~~~~~~~~~~
The ``[[repos.git]]`` entries are used to add files from a `git <https://git-scm.com/>`_
repository to the created image. The repository is cloned, the specified ``ref`` is checked out
and an rpm is created to install the files to a ``destination`` path. The rpm includes a summary
with the details of the repository and reference used to create it. The rpm is also included in the
image build metadata.
To create an rpm named ``server-config-1.0-1.noarch.rpm`` you would add this to your blueprint::
[[repos.git]]
rpmname="server-config"
rpmversion="1.0"
rpmrelease="1"
summary="Setup files for server deployment"
repo="PATH OF GIT REPO TO CLONE"
ref="v1.0"
destination="/opt/server/"
* rpmname: Name of the rpm to create, also used as the prefix name in the tar archive
* rpmversion: Version of the rpm, eg. "1.0.0"
* rpmrelease: Release of the rpm, eg. "1"
* summary: Summary string for the rpm
* repo: URL of the git repo to clone and create the archive from
* ref: Git reference to check out. eg. origin/branch-name, git tag, or git commit hash
* destination: Path to install the / of the git repo at when installing the rpm
An rpm will be created with the contents of the git repository referenced, with the files
being installed under ``/opt/server/`` in this case.
``ref`` can be any valid git reference for use with ``git archive``. eg. to use the head
of a branch set it to ``origin/branch-name``, a tag name, or a commit hash.
Note that the repository is cloned in full each time a build is started, so pointing to a
repository with a large amount of history may take a while to clone and use a significant
amount of disk space. The clone is temporary and is removed once the rpm is created.
Adding Output Types
-------------------
``livemedia-creator`` supports a large number of output types, and only some of
these are currently available via ``lorax-composer``. To add a new output type to
lorax-composer a kickstart file needs to be added to ``./share/composer/``. The
name of the kickstart is what will be used by the ``/compose/types`` route, and the
``compose_type`` field of the POST to start a compose. It also needs to have
code added to the :py:func:`pylorax.api.compose.compose_args` function. The
``_MAP`` entry in this function defines what lorax-composer will pass to
:py:func:`pylorax.installer.novirt_install` when it runs the compose. When the
compose is finished the output files need to be copied out of the build
directory (``/var/lib/lorax/composer/results/<UUID>/compose/``);
:py:func:`pylorax.api.compose.move_compose_results` handles this for each type.
You should move them instead of copying to save space.
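Once the kickstart and ``_MAP`` entry are in place the new type will show up in
the ``/compose/types`` route, and it can be checked and used from
``composer-cli`` (the blueprint name here is illustrative)::

    composer-cli compose types
    composer-cli compose start example-server partitioned-disk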
If the new output type does not have support in livemedia-creator it should be
added there first. This will make the output available to the widest number of
users.
Example: Add partitioned disk support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Partitioned disk support is something that livemedia-creator already supports
via the ``--make-disk`` cmdline argument. To add this to lorax-composer it
needs 3 things:
* A ``partitioned-disk.ks`` file in ``./share/composer/``
* A new entry in the _MAP in :py:func:`pylorax.api.compose.compose_args`
* Add a bit of code to :py:func:`pylorax.api.compose.move_compose_results` to move the disk image from
the compose directory to the results directory.
The ``partitioned-disk.ks`` is pretty similar to the example minimal kickstart
in ``./docs/fedora-minimal.ks``. You should remove the ``url`` and ``repo``
commands; they will be added by the compose process. Make sure the bootloader
packages are included in the ``%packages`` section at the end of the kickstart,
and you will want to leave off the ``%end`` so that the compose can append the
list of packages from the blueprint.
The new ``_MAP`` entry should be a copy of one of the existing entries, but with ``make_disk`` set
to ``True``. Make sure that none of the other ``make_*`` options are ``True``. The ``image_name`` is
what the name of the final image will be.
``move_compose_results()`` can be as simple as moving the output file into
the results directory, or it could do some post-processing on it. The end of
the function should always clean up the ``./compose/`` directory, removing any
unneeded extra files. This is especially true for the ``live-iso`` since it produces
the contents of the iso as well as the boot.iso itself.
Package Sources
---------------
By default lorax-composer uses the host's configured repositories. It copies
the ``*.repo`` files from ``/etc/yum.repos.d/`` into
``/var/lib/lorax/composer/repos.d/`` at startup; these are immutable system
repositories and cannot be deleted or changed. If you want to add additional
repos you can put them into ``/var/lib/lorax/composer/repos.d/`` or use the
``/api/v0/projects/source/*`` API routes to create them.
The new source can be added by doing a POST to the ``/api/v0/projects/source/new``
route using JSON (with `Content-Type` header set to `application/json`) or TOML
(with it set to `text/x-toml`). The format of the source looks like this (in
TOML)::
name = "custom-source-1"
url = "https://url/path/to/repository/"
type = "yum-baseurl"
proxy = "https://proxy-url/"
check_ssl = true
check_gpg = true
gpgkey_urls = ["https://url/path/to/gpg-key"]
The ``proxy`` and ``gpgkey_urls`` entries are optional. All of the others are required. The supported
types for the urls are:
* ``yum-baseurl`` is a URL to a yum repository.
* ``yum-mirrorlist`` is a URL for a mirrorlist.
* ``yum-metalink`` is a URL for a metalink.
If ``check_ssl`` is true the https certificates must be valid. If they are self-signed you can either set
this to false, or add your Certificate Authority to the host system.
If ``check_gpg`` is true the GPG key must either be installed on the host system, or ``gpgkey_urls``
should point to it.
You can edit an existing source (other than system sources) by doing a POST to the ``new`` route
with the new version of the source. It will overwrite the previous one.
A list of existing sources is available from ``/api/v0/projects/source/list``, and detailed info
on a source can be retrieved with the ``/api/v0/projects/source/info/<source-name>`` route. By default
it returns JSON but it can also return TOML if ``?format=toml`` is added to the request.
Non-system sources can be deleted by doing a ``DELETE`` request to the
``/api/v0/projects/source/delete/<source-name>`` route.
The documentation for the source API routes can be `found here <pylorax.api.html#api-v0-projects-source-list>`_.
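As a concrete sketch, the source routes can be exercised with ``curl`` over the
unix socket (assuming curl 7.40 or later for ``--unix-socket``; the source file
name is illustrative)::

    # Add (or overwrite) a source from a TOML file
    curl --unix-socket /run/weldr/api.socket \
         -H "Content-Type: text/x-toml" \
         --data-binary @custom-source-1.toml \
         http://localhost/api/v0/projects/source/new

    # List, inspect, and delete sources
    curl --unix-socket /run/weldr/api.socket http://localhost/api/v0/projects/source/list
    curl --unix-socket /run/weldr/api.socket "http://localhost/api/v0/projects/source/info/custom-source-1?format=toml"
    curl --unix-socket /run/weldr/api.socket -X DELETE http://localhost/api/v0/projects/source/delete/custom-source-1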
The configured sources are used for all blueprint depsolve operations, and for composing images.
When adding additional sources you must make sure that the packages in the source do not
conflict with any other package sources, otherwise depsolving will fail.
DVD ISO Package Source
~~~~~~~~~~~~~~~~~~~~~~
In some situations you may want the system to *only* use a DVD iso as the package
source, not the repos from the network. ``lorax-composer`` and ``anaconda``
understand ``file://`` URLs so you can mount an iso on the host, and replace the
system repo files with a configuration file pointing to the DVD.
* Stop the ``lorax-composer.service`` if it is running
* Move the repo files in ``/etc/yum.repos.d/`` someplace safe
* Create a new ``iso.repo`` file in ``/etc/yum.repos.d/``::
[iso]
name=iso
baseurl=file:///mnt/iso/
enabled=1
gpgcheck=1
gpgkey=file:///mnt/iso/RPM-GPG-KEY-redhat-release
* Remove all the cached repo files from ``/var/lib/lorax/composer/repos/``
* Restart the ``lorax-composer.service``
* Check the output of ``composer-cli status show`` for any output-type-specific depsolve errors.
For example, the DVD usually does not include ``grub2-efi-*-cdboot-*`` so the live-iso image
type will not be available.
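A minimal transcript of these steps might look like this (the iso path and
backup directory are illustrative)::

    systemctl stop lorax-composer.service
    mkdir -p /root/repos.d-backup && mv /etc/yum.repos.d/*.repo /root/repos.d-backup/
    mkdir -p /mnt/iso && mount -o loop,ro /path/to/dvd.iso /mnt/iso
    # create /etc/yum.repos.d/iso.repo as shown above
    rm -rf /var/lib/lorax/composer/repos/*
    systemctl start lorax-composer.service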
If you want to *add* the DVD source to the existing sources you can do that by
mounting the iso and creating a source file to point to it as described in the
`Package Sources`_ documentation. In that case there is no need to remove the other
sources from ``/etc/yum.repos.d/`` or clear the cached repos.
``lorax-composer`` has been replaced by the ``osbuild-composer`` WELDR API
server which implements more features (eg. ostree, image uploads, etc.) You
can still use ``composer-cli`` and ``cockpit-composer`` with
``osbuild-composer``. See the documentation or the `osbuild website
<https://www.osbuild.org/>`_ for more information.

View File

@ -1,750 +0,0 @@
.\" Man page generated from reStructuredText.
.
.TH "LORAX-COMPOSER" "1" "Sep 08, 2020" "34.0" "Lorax"
.SH NAME
lorax-composer \- Lorax Composer Documentation
.
.nr rst2man-indent-level 0
.
.de1 rstReportMargin
\\$1 \\n[an-margin]
level \\n[rst2man-indent-level]
level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
-
\\n[rst2man-indent0]
\\n[rst2man-indent1]
\\n[rst2man-indent2]
..
.de1 INDENT
.\" .rstReportMargin pre:
. RS \\$1
. nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin]
. nr rst2man-indent-level +1
.\" .rstReportMargin post:
..
.de UNINDENT
. RE
.\" indent \\n[an-margin]
.\" old: \\n[rst2man-indent\\n[rst2man-indent-level]]
.nr rst2man-indent-level -1
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.INDENT 0.0
.TP
.B Authors
Brian C. Lane <\fI\%bcl@redhat.com\fP>
.UNINDENT
.sp
\fBlorax\-composer\fP is a WELDR API server that allows you to build disk images using
\fI\%Blueprints\fP to describe the package versions to be installed into the image.
It is compatible with the Weldr project\(aqs bdcs\-api REST protocol. More
information on Weldr can be found \fI\%on the Weldr blog\fP\&.
.sp
Behind the scenes it uses \fI\%livemedia\-creator\fP and
\fI\%Anaconda\fP to handle the
installation and configuration of the images.
.sp
\fBNOTE:\fP
.INDENT 0.0
.INDENT 3.5
\fBlorax\-composer\fP is now deprecated. It is being replaced by the
\fBosbuild\-composer\fP WELDR API server which implements more features (eg.
ostree, image uploads, etc.) You can still use \fBcomposer\-cli\fP and
\fBcockpit\-composer\fP with \fBosbuild\-composer\fP\&. See the documentation or
the \fI\%osbuild website\fP for more information.
.UNINDENT
.UNINDENT
.SH IMPORTANT THINGS TO NOTE
.INDENT 0.0
.IP \(bu 2
As of version 30.7 SELinux can be set to Enforcing. The current state is
logged for debugging purposes and if there are SELinux denials they should
be reported as a bug.
.IP \(bu 2
All image types lock the root account, except for live\-iso. You will need to either
use one of the \fI\%Customizations\fP methods for setting an ssh key/password, install a
package that creates a user, or use something like \fIcloud\-init\fP to set up access at
boot time.
.UNINDENT
.SH INSTALLATION
.sp
The best way to install \fBlorax\-composer\fP is to use \fBsudo dnf install
lorax\-composer composer\-cli\fP\&. This will set up the weldr user and install the
systemd socket activation service. You will then need to enable it with \fBsudo
systemctl enable lorax\-composer.socket && sudo systemctl start
lorax\-composer.socket\fP\&. This will leave the server off until the first request
is made. Systemd will then launch the server and it will remain running until
the system is rebooted. This will cause some delay in responding to the first
request from the UI or \fIcomposer\-cli\fP\&.
.sp
\fBNOTE:\fP
.INDENT 0.0
.INDENT 3.5
If you want lorax\-composer to respond immediately to the first request you can
start and enable \fIlorax\-composer.service\fP instead of \fIlorax\-composer.socket\fP
.UNINDENT
.UNINDENT
.SH QUICKSTART
.INDENT 0.0
.IP 1. 3
Create a \fBweldr\fP user and group by running \fBuseradd weldr\fP
.IP 2. 3
Remove any pre\-existing socket directory with \fBrm \-rf /run/weldr/\fP
A new directory with correct permissions will be created the first time the server runs.
.IP 3. 3
Enable the socket activation with \fBsystemctl enable lorax\-composer.socket
&& sudo systemctl start lorax\-composer.socket\fP\&.
.UNINDENT
.sp
NOTE: You can also run it directly with \fBlorax\-composer /path/to/blueprints\fP\&. However,
\fBlorax\-composer\fP does not react well to being started both on the command line and via
socket activation at the same time. It is therefore recommended that you run it directly
on the command line only for testing or development purposes. For real use or development
of other projects that simply use the API, you should stick to socket activation only.
.sp
The \fB/path/to/blueprints/\fP directory is where the blueprints\(aq git repo will
be created, and all the blueprints created with the \fB/api/v0/blueprints/new\fP
route will be stored. If there are blueprint \fB\&.toml\fP files in the top level
of the directory they will be imported into the blueprint git storage when
\fBlorax\-composer\fP starts.
.SH LOGS
.sp
Logs are stored under \fB/var/log/lorax\-composer/\fP and include all console
messages as well as extra debugging info and API requests.
.SH SECURITY
.sp
Some security\-related issues that you should be aware of before running \fBlorax\-composer\fP:
.INDENT 0.0
.IP \(bu 2
One of the API server threads needs to retain root privileges in order to run Anaconda.
.IP \(bu 2
Only allow authorized users access to the \fBweldr\fP group and socket.
.UNINDENT
.sp
Since Anaconda kickstarts are used, there is the possibility that a user could
inject commands into a blueprint that would result in the kickstart executing
arbitrary code on the host. Only authorized users should be allowed to build
images using \fBlorax\-composer\fP\&.
.SH LORAX-COMPOSER CMDLINE ARGUMENTS
.sp
Lorax Composer API Server
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
usage: lorax\-composer [\-h] [\-\-socket SOCKET] [\-\-user USER] [\-\-group GROUP] [\-\-log LOG] [\-\-mockfiles MOCKFILES] [\-\-sharedir SHAREDIR] [\-V] [\-c CONFIG] [\-\-releasever STRING] [\-\-tmp TMP] [\-\-proxy PROXY] [\-\-no\-system\-repos] BLUEPRINTS
.ft P
.fi
.UNINDENT
.UNINDENT
.SS Positional Arguments
.INDENT 0.0
.TP
.B BLUEPRINTS
Path to the blueprints
.UNINDENT
.SS Named Arguments
.INDENT 0.0
.TP
.B \-\-socket
Path to the socket file to listen on
.sp
Default: "/run/weldr/api.socket"
.TP
.B \-\-user
User to use for reduced permissions
.sp
Default: "root"
.TP
.B \-\-group
Group to set ownership of the socket to
.sp
Default: "weldr"
.TP
.B \-\-log
Path to logfile (/var/log/lorax\-composer/composer.log)
.sp
Default: "/var/log/lorax\-composer/composer.log"
.TP
.B \-\-mockfiles
Path to JSON files used for /api/mock/ paths (/var/tmp/bdcs\-mockfiles/)
.sp
Default: "/var/tmp/bdcs\-mockfiles/"
.TP
.B \-\-sharedir
Directory containing all the templates. Overrides config file sharedir
.TP
.B \-V
show program\(aqs version number and exit
.sp
Default: False
.TP
.B \-c, \-\-config
Path to lorax\-composer configuration file.
.sp
Default: "/etc/lorax/composer.conf"
.TP
.B \-\-releasever
Release version to use for $releasever in dnf repository urls
.TP
.B \-\-tmp
Top level temporary directory
.sp
Default: "/var/tmp"
.TP
.B \-\-proxy
Set proxy for DNF, overrides configuration file setting.
.TP
.B \-\-no\-system\-repos
Do not copy over system repos from /etc/yum.repos.d/ at startup
.sp
Default: False
.UNINDENT
.SH HOW IT WORKS
.sp
The server runs as root, and as \fBweldr\fP\&. Communication with it is via a unix
domain socket (\fB/run/weldr/api.socket\fP by default). The directory and socket
are owned by \fBroot:weldr\fP so that any user in the \fBweldr\fP group can use the API
to control \fBlorax\-composer\fP\&.
.sp
At startup the server will check for the correct permissions and
ownership of a pre\-existing directory, or it will create a new one if it
doesn\(aqt exist. The socket path and group owner\(aqs name can be changed from the
cmdline by passing it the \fB\-\-socket\fP and \fB\-\-group\fP arguments.
.sp
It will then drop root privileges for the API thread and run as the \fBweldr\fP
user. The queue and compose thread still runs as root because it needs to be
able to mount/umount files and run Anaconda.
.SH COMPOSING IMAGES
.sp
The \fI\%welder\-web\fP GUI project can be used to construct
blueprints and create composes using a web browser.
.sp
Or use the command line with \fI\%composer\-cli\fP\&.
.SH BLUEPRINTS
.sp
Blueprints are simple text files in \fI\%TOML\fP format that describe
which packages, and what versions, to install into the image. They can also define a limited set
of customizations to make to the final image.
.sp
Example blueprints can be found in the \fBlorax\-composer\fP \fI\%test suite\fP, with a simple one
looking like this:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
name = "base"
description = "A base system with bash"
version = "0.0.1"
[[packages]]
name = "bash"
version = "4.4.*"
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
The \fBname\fP field is the name of the blueprint. It can contain spaces, but they will be converted to \fB\-\fP
when it is written to disk. It should be short and descriptive.
.sp
\fBdescription\fP can be a longer description of the blueprint; it is only used for display purposes.
.sp
\fBversion\fP is a \fI\%semver compatible\fP version number. If
a new blueprint is uploaded with the same \fBversion\fP the server will
automatically bump the PATCH level of the \fBversion\fP\&. If the \fBversion\fP
doesn\(aqt match it will be used as is. eg. Uploading a blueprint with \fBversion\fP
set to \fB0.1.0\fP when the existing blueprint \fBversion\fP is \fB0.0.1\fP will
result in the new blueprint being stored as \fBversion 0.1.0\fP\&.
.SS [[packages]] and [[modules]]
.sp
These entries describe the package names and matching version glob to be installed into the image.
.sp
The names must match the package names exactly, and the versions can be an exact match
or a filesystem\-like glob of the version using \fB*\fP wildcards and \fB?\fP
character matching.
.sp
NOTE: Currently there are no differences between \fBpackages\fP and \fBmodules\fP
in \fBlorax\-composer\fP\&. Both are treated like an rpm package dependency.
.sp
For example, to install \fBtmux\-2.9a\fP and \fBopenssh\-server\-8.*\fP, you would add
this to your blueprint:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
[[packages]]
name = "tmux"
version = "2.9a"
[[packages]]
name = "openssh\-server"
version = "8.*"
.ft P
.fi
.UNINDENT
.UNINDENT
.SS [[groups]]
.sp
The \fBgroups\fP entries describe a group of packages to be installed into the image. Package groups are
defined in the repository metadata. Each group has a descriptive name used primarily for display
in user interfaces and an ID more commonly used in kickstart files. Here, the ID is the expected
way of listing a group.
.sp
Groups have three different ways of categorizing their packages: mandatory, default, and optional.
For purposes of blueprints, mandatory and default packages will be installed. There is no mechanism
for selecting optional packages.
.sp
For example, if you want to install the \fBanaconda\-tools\fP group you would add this to your
blueprint:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
[[groups]]
name="anaconda\-tools"
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
\fBgroups\fP is a TOML list, so each group needs to be listed separately, like \fBpackages\fP but with
no version number.
.SS Customizations
.sp
The \fB[customizations]\fP section can be used to configure the hostname of the final image. eg.:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
[customizations]
hostname = "baseimage"
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
This is optional and may be left out to use the defaults.
.SS [customizations.kernel]
.sp
This allows you to append arguments to the bootloader\(aqs kernel commandline. This will not have any
effect on \fBtar\fP or \fBext4\-filesystem\fP images since they do not include a bootloader.
.sp
For example:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
[customizations.kernel]
append = "nosmt=force"
.ft P
.fi
.UNINDENT
.UNINDENT
.SS [[customizations.sshkey]]
.sp
Set an existing user\(aqs ssh key in the final image:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
[[customizations.sshkey]]
user = "root"
key = "PUBLIC SSH KEY"
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
The key will be added to the user\(aqs authorized_keys file.
.sp
\fBWARNING:\fP
.INDENT 0.0
.INDENT 3.5
\fBkey\fP expects the entire content of \fB~/.ssh/id_rsa.pub\fP
.UNINDENT
.UNINDENT
.SS [[customizations.user]]
.sp
Add a user to the image, and/or set their ssh key.
All fields for this section are optional except for the \fBname\fP; here is a complete example:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
[[customizations.user]]
name = "admin"
description = "Administrator account"
password = "$6$CHO2$3rN8eviE2t50lmVyBYihTgVRHcaecmeCk31L..."
key = "PUBLIC SSH KEY"
home = "/srv/widget/"
shell = "/usr/bin/bash"
groups = ["widget", "users", "wheel"]
uid = 1200
gid = 1200
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
If the password starts with \fB$6$\fP, \fB$5$\fP, or \fB$2b$\fP it will be stored as
an encrypted password. Otherwise it will be treated as a plain text password.
.sp
\fBWARNING:\fP
.INDENT 0.0
.INDENT 3.5
\fBkey\fP expects the entire content of \fB~/.ssh/id_rsa.pub\fP
.UNINDENT
.UNINDENT
.SS [[customizations.group]]
.sp
Add a group to the image. \fBname\fP is required and \fBgid\fP is optional:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
[[customizations.group]]
name = "widget"
gid = 1130
.ft P
.fi
.UNINDENT
.UNINDENT
.SS [customizations.timezone]
.sp
Customizing the timezone and the NTP servers to use for the system:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
[customizations.timezone]
timezone = "US/Eastern"
ntpservers = ["0.north\-america.pool.ntp.org", "1.north\-america.pool.ntp.org"]
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
The values supported by \fBtimezone\fP can be listed by running \fBtimedatectl list\-timezones\fP\&.
.sp
If no timezone is set up the system will default to using \fIUTC\fP\&. The ntp servers are also
optional and will default to using the distribution defaults which are fine for most uses.
.sp
In some image types there are already NTP servers set up, eg. Google cloud image, and they
cannot be overridden because they are required to boot in the selected environment. But the
timezone will be updated to the one selected in the blueprint.
.SS [customizations.locale]
.sp
Customize the locale settings for the system:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
[customizations.locale]
languages = ["en_US.UTF\-8"]
keyboard = "us"
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
The values supported by \fBlanguages\fP can be listed by running \fBlocalectl list\-locales\fP from
the command line.
.sp
The values supported by \fBkeyboard\fP can be listed by running \fBlocalectl list\-keymaps\fP from
the command line.
.sp
Multiple languages can be added. The first one becomes the
primary, and the others are added as secondary. One or the other of \fBlanguages\fP
or \fBkeyboard\fP must be included (or both) in the section.
.SS [customizations.firewall]
.sp
By default the firewall blocks all access except for services that enable their ports explicitly,
like \fBsshd\fP\&. This section can be used to open other ports or services. Ports are configured using
the port:protocol format:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
[customizations.firewall]
ports = ["22:tcp", "80:tcp", "imap:tcp", "53:tcp", "53:udp"]
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
Numeric ports, or their names from \fB/etc/services\fP, can be used in the \fBports\fP enabled/disabled lists.
.sp
The blueprint settings extend any existing settings in the image templates, so if \fBsshd\fP is
already enabled it will extend the list of ports with the ones listed by the blueprint.
.sp
If the distribution uses \fBfirewalld\fP you can specify services listed by \fBfirewall\-cmd \-\-get\-services\fP
in a \fBcustomizations.firewall.services\fP section:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
[customizations.firewall.services]
enabled = ["ftp", "ntp", "dhcp"]
disabled = ["telnet"]
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
Remember that the \fBfirewall.services\fP are different from the names in \fB/etc/services\fP\&.
.sp
Both are optional; if they are not used, leave them out or set them to an empty list \fB[]\fP\&. If you
only want the default firewall setup this section can be omitted from the blueprint.
.sp
NOTE: The \fBGoogle\fP and \fBOpenStack\fP templates explicitly disable the firewall for their environment.
This cannot be overridden by the blueprint.
.SS [customizations.services]
.sp
This section can be used to control which services are enabled at boot time.
Some image types already have services enabled or disabled in order for the
image to work correctly, and cannot be overridden. eg. \fBami\fP requires
\fBsshd\fP, \fBchronyd\fP, and \fBcloud\-init\fP\&. Without them the image will not
boot. Blueprint services are added to, not replacing, the list already in the
templates, if any.
.sp
The service names are systemd service units. You may specify any systemd unit
file accepted by \fBsystemctl enable\fP eg. \fBcockpit.socket\fP:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
[customizations.services]
enabled = ["sshd", "cockpit.socket", "httpd"]
disabled = ["postfix", "telnetd"]
.ft P
.fi
.UNINDENT
.UNINDENT
.SS [[repos.git]]
.sp
The \fB[[repos.git]]\fP entries are used to add files from a \fI\%git repository\fP
to the created image. The repository is cloned, the specified \fBref\fP is checked out
and an rpm is created to install the files to a \fBdestination\fP path. The rpm includes a summary
with the details of the repository and reference used to create it. The rpm is also included in the
image build metadata.
.sp
To create an rpm named \fBserver\-config\-1.0\-1.noarch.rpm\fP you would add this to your blueprint:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
[[repos.git]]
rpmname="server\-config"
rpmversion="1.0"
rpmrelease="1"
summary="Setup files for server deployment"
repo="PATH OF GIT REPO TO CLONE"
ref="v1.0"
destination="/opt/server/"
.ft P
.fi
.UNINDENT
.UNINDENT
.INDENT 0.0
.IP \(bu 2
rpmname: Name of the rpm to create, also used as the prefix name in the tar archive
.IP \(bu 2
rpmversion: Version of the rpm, eg. "1.0.0"
.IP \(bu 2
rpmrelease: Release of the rpm, eg. "1"
.IP \(bu 2
summary: Summary string for the rpm
.IP \(bu 2
repo: URL of the git repo to clone and create the archive from
.IP \(bu 2
ref: Git reference to check out. eg. origin/branch\-name, git tag, or git commit hash
.IP \(bu 2
destination: Path to install the / of the git repo at when installing the rpm
.UNINDENT
.sp
An rpm will be created with the contents of the git repository referenced, with the files
being installed under \fB/opt/server/\fP in this case.
.sp
\fBref\fP can be any valid git reference for use with \fBgit archive\fP\&. eg. to use the head
of a branch set it to \fBorigin/branch\-name\fP, a tag name, or a commit hash.
.sp
Note that the repository is cloned in full each time a build is started, so pointing to a
repository with a large amount of history may take a while to clone and use a significant
amount of disk space. The clone is temporary and is removed once the rpm is created.
.SH ADDING OUTPUT TYPES
.sp
\fBlivemedia\-creator\fP supports a large number of output types, and only some of
these are currently available via \fBlorax\-composer\fP\&. To add a new output type to
lorax\-composer a kickstart file needs to be added to \fB\&./share/composer/\fP\&. The
name of the kickstart is what will be used by the \fB/compose/types\fP route, and the
\fBcompose_type\fP field of the POST to start a compose. It also needs to have
code added to the \fBpylorax.api.compose.compose_args()\fP function. The
\fB_MAP\fP entry in this function defines what lorax\-composer will pass to
\fBpylorax.installer.novirt_install()\fP when it runs the compose. When the
compose is finished the output files need to be copied out of the build
directory (\fB/var/lib/lorax/composer/results/<UUID>/compose/\fP);
\fBpylorax.api.compose.move_compose_results()\fP handles this for each type.
You should move them instead of copying to save space.
.sp
If the new output type does not have support in livemedia\-creator it should be
added there first. This will make the output available to the widest number of
users.
.SS Example: Add partitioned disk support
.sp
Partitioned disk support is something that livemedia\-creator already supports
via the \fB\-\-make\-disk\fP cmdline argument. To add this to lorax\-composer it
needs 3 things:
.INDENT 0.0
.IP \(bu 2
A \fBpartitioned\-disk.ks\fP file in \fB\&./share/composer/\fP
.IP \(bu 2
A new entry in the _MAP in \fBpylorax.api.compose.compose_args()\fP
.IP \(bu 2
Add a bit of code to \fBpylorax.api.compose.move_compose_results()\fP to move the disk image from
the compose directory to the results directory.
.UNINDENT
.sp
The \fBpartitioned\-disk.ks\fP is pretty similar to the example minimal kickstart
in \fB\&./docs/fedora\-minimal.ks\fP\&. You should remove the \fBurl\fP and \fBrepo\fP
commands; they will be added by the compose process. Make sure the bootloader
packages are included in the \fB%packages\fP section at the end of the kickstart,
and you will want to leave off the \fB%end\fP so that the compose can append the
list of packages from the blueprint.
.sp
The new \fB_MAP\fP entry should be a copy of one of the existing entries, but with \fBmake_disk\fP set
to \fBTrue\fP\&. Make sure that none of the other \fBmake_*\fP options are \fBTrue\fP\&. The \fBimage_name\fP is
what the name of the final image will be.
.sp
\fBmove_compose_results()\fP can be as simple as moving the output file into
the results directory, or it could do some post\-processing on it. The end of
the function should always clean up the \fB\&./compose/\fP directory, removing any
unneeded extra files. This is especially true for the \fBlive\-iso\fP since it produces
the contents of the iso as well as the boot.iso itself.
.SH PACKAGE SOURCES
.sp
By default lorax\-composer uses the host\(aqs configured repositories. It copies
the \fB*.repo\fP files from \fB/etc/yum.repos.d/\fP into
\fB/var/lib/lorax/composer/repos.d/\fP at startup; these are immutable system
repositories and cannot be deleted or changed. If you want to add additional
repos you can put them into \fB/var/lib/lorax/composer/repos.d/\fP or use the
\fB/api/v0/projects/source/*\fP API routes to create them.
.sp
The new source can be added by doing a POST to the \fB/api/v0/projects/source/new\fP
route using JSON (with \fIContent\-Type\fP header set to \fIapplication/json\fP) or TOML
(with it set to \fItext/x\-toml\fP). The format of the source looks like this (in
TOML):
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
name = "custom\-source\-1"
url = "https://url/path/to/repository/"
type = "yum\-baseurl"
proxy = "https://proxy\-url/"
check_ssl = true
check_gpg = true
gpgkey_urls = ["https://url/path/to/gpg\-key"]
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
The \fBproxy\fP and \fBgpgkey_urls\fP entries are optional. All of the others are required. The supported
types for the urls are:
.INDENT 0.0
.IP \(bu 2
\fByum\-baseurl\fP is a URL to a yum repository.
.IP \(bu 2
\fByum\-mirrorlist\fP is a URL for a mirrorlist.
.IP \(bu 2
\fByum\-metalink\fP is a URL for a metalink.
.UNINDENT
.sp
If \fBcheck_ssl\fP is true the https certificates must be valid. If they are self\-signed you can either set
this to false, or add your Certificate Authority to the host system.
.sp
If \fBcheck_gpg\fP is true the GPG key must either be installed on the host system, or \fBgpgkey_urls\fP
should point to it.
.sp
You can edit an existing source (other than system sources) by doing a POST to the \fBnew\fP route
with the new version of the source. It will overwrite the previous one.
.sp
A list of existing sources is available from \fB/api/v0/projects/source/list\fP, and detailed info
on a source can be retrieved with the \fB/api/v0/projects/source/info/<source\-name>\fP route. By default
it returns JSON but it can also return TOML if \fB?format=toml\fP is added to the request.
.sp
Non\-system sources can be deleted by doing a \fBDELETE\fP request to the
\fB/api/v0/projects/source/delete/<source\-name>\fP route.
.sp
The documentation for the source API routes can be \fI\%found here\fP\&.
.sp
The configured sources are used for all blueprint depsolve operations, and for composing images.
When adding additional sources you must make sure that the packages in the source do not
conflict with any other package sources, otherwise depsolving will fail.
.SS DVD ISO Package Source
.sp
In some situations you may want the system to \fIonly\fP use a DVD iso as the package
source, not the repos from the network. \fBlorax\-composer\fP and \fBanaconda\fP
understand \fBfile://\fP URLs so you can mount an iso on the host, and replace the
system repo files with a configuration file pointing to the DVD.
.INDENT 0.0
.IP \(bu 2
Stop the \fBlorax\-composer.service\fP if it is running
.IP \(bu 2
Move the repo files in \fB/etc/yum.repos.d/\fP someplace safe
.IP \(bu 2
Create a new \fBiso.repo\fP file in \fB/etc/yum.repos.d/\fP:
.INDENT 2.0
.INDENT 3.5
.sp
.nf
.ft C
[iso]
name=iso
baseurl=file:///mnt/iso/
enabled=1
gpgcheck=1
gpgkey=file:///mnt/iso/RPM\-GPG\-KEY\-redhat\-release
.ft P
.fi
.UNINDENT
.UNINDENT
.IP \(bu 2
Remove all the cached repo files from \fB/var/lib/lorax/composer/repos/\fP
.IP \(bu 2
Restart the \fBlorax\-composer.service\fP
.IP \(bu 2
Check the output of \fBcomposer\-cli status show\fP for any output\-type\-specific depsolve errors.
For example, the DVD usually does not include \fBgrub2\-efi\-*\-cdboot\-*\fP so the live\-iso image
type will not be available.
.UNINDENT
.sp
If you want to \fIadd\fP the DVD source to the existing sources you can do that by
mounting the iso and creating a source file to point to it as described in the
\fI\%Package Sources\fP documentation. In that case there is no need to remove the other
sources from \fB/etc/yum.repos.d/\fP or clear the cached repos.
.SH AUTHOR
Weldr Team
.SH COPYRIGHT
2018, Red Hat, Inc.
.\" Generated by docutils manpage writer.
.

View File

@ -1 +0,0 @@
# lorax-composer configuration file

View File

@ -94,8 +94,7 @@ Summary: Lorax html documentation
Requires: lorax = %{version}-%{release}
%description docs
Includes the full html documentation for lorax, livemedia-creator, lorax-composer and the
pylorax library.
Includes the full html documentation for lorax, livemedia-creator, and the pylorax library.
%package lmc-virt
Summary: livemedia-creator libvirt dependencies
@ -132,42 +131,6 @@ Provides: lorax-templates = %{version}-%{release}
Lorax templates for creating the boot.iso and live isos are placed in
/usr/share/lorax/templates.d/99-generic
%package composer
Summary: Lorax Image Composer API Server
# For Sphinx documentation build
BuildRequires: python3-flask python3-gobject libgit2-glib python3-toml python3-semantic_version
Requires: lorax = %{version}-%{release}
Requires(pre): /usr/bin/getent
Requires(pre): /usr/sbin/groupadd
Requires(pre): /usr/sbin/useradd
Requires: python3-toml
Requires: python3-semantic_version
Requires: libgit2
Requires: libgit2-glib
Requires: python3-flask
Requires: python3-gevent
Requires: anaconda-tui >= 29.19-1
Requires: qemu-img
Requires: tar
Requires: python3-rpmfluff
Requires: git
Requires: xz
Requires: createrepo_c
Requires: python3-ansible-runner
# For AWS playbook support
Requires: python3-boto3
%{?systemd_requires}
BuildRequires: systemd
# Implements the weldr API
Provides: weldr
%description composer
lorax-composer provides a REST API for building images using lorax.
%package -n composer-cli
Summary: A command line tool for use with the lorax-composer API server
@ -188,29 +151,6 @@ build images, etc. from the command line.
rm -rf $RPM_BUILD_ROOT
make DESTDIR=$RPM_BUILD_ROOT mandir=%{_mandir} install
# Install example blueprints from the test suite.
# This path MUST match the lorax-composer.service blueprint path.
mkdir -p $RPM_BUILD_ROOT/var/lib/lorax/composer/blueprints/
for bp in example-http-server.toml example-development.toml example-atlas.toml; do
cp ./tests/pylorax/blueprints/$bp $RPM_BUILD_ROOT/var/lib/lorax/composer/blueprints/
done
%pre composer
getent group weldr >/dev/null 2>&1 || groupadd -r weldr >/dev/null 2>&1 || :
getent passwd weldr >/dev/null 2>&1 || useradd -r -g weldr -d / -s /sbin/nologin -c "User for lorax-composer" weldr >/dev/null 2>&1 || :
%post composer
%systemd_post lorax-composer.service
%systemd_post lorax-composer.socket
%preun composer
%systemd_preun lorax-composer.service
%systemd_preun lorax-composer.socket
%postun composer
%systemd_postun_with_restart lorax-composer.service
%systemd_postun_with_restart lorax-composer.socket
%files
%defattr(-,root,root,-)
%license COPYING
@ -245,22 +185,6 @@ getent passwd weldr >/dev/null 2>&1 || useradd -r -g weldr -d / -s /sbin/nologin
%dir %{_datadir}/lorax/templates.d
%{_datadir}/lorax/templates.d/*
%files composer
%config(noreplace) %{_sysconfdir}/lorax/composer.conf
%{python3_sitelib}/pylorax/api/*
%{python3_sitelib}/lifted/*
%{_sbindir}/lorax-composer
%{_unitdir}/lorax-composer.service
%{_unitdir}/lorax-composer.socket
%dir %{_datadir}/lorax/composer
%{_datadir}/lorax/composer/*
%{_datadir}/lorax/lifted/*
%{_tmpfilesdir}/lorax-composer.conf
%dir %attr(0771, root, weldr) %{_sharedstatedir}/lorax/composer/
%dir %attr(0771, root, weldr) %{_sharedstatedir}/lorax/composer/blueprints/
%attr(0771, weldr, weldr) %{_sharedstatedir}/lorax/composer/blueprints/*
%{_mandir}/man1/lorax-composer.1*
%files -n composer-cli
%{_bindir}/composer-cli
%{python3_sitelib}/composer/*

View File

@ -7,11 +7,7 @@ import sys
# config file
data_files = [("/etc/lorax", ["etc/lorax.conf"]),
("/etc/lorax", ["etc/composer.conf"]),
("/usr/lib/systemd/system", ["systemd/lorax-composer.service",
"systemd/lorax-composer.socket"]),
("/usr/lib/tmpfiles.d/", ["systemd/lorax-composer.conf",
"systemd/lorax.conf"])]
("/usr/lib/tmpfiles.d/", ["systemd/lorax.conf"])]
# shared files
for root, dnames, fnames in os.walk("share"):
@ -21,8 +17,7 @@ for root, dnames, fnames in os.walk("share"):
# executable
data_files.append(("/usr/sbin", ["src/sbin/lorax", "src/sbin/mkefiboot",
"src/sbin/livemedia-creator", "src/sbin/lorax-composer",
"src/sbin/mkksiso"]))
"src/sbin/livemedia-creator", "src/sbin/mkksiso"]))
data_files.append(("/usr/bin", ["src/bin/image-minimizer",
"src/bin/mk-s390-cdboot",
"src/bin/composer-cli"]))
@ -48,7 +43,7 @@ setup(name="lorax",
url="http://www.github.com/weldr/lorax/",
download_url="http://www.github.com/weldr/lorax/releases/",
license="GPLv2+",
packages=["pylorax", "pylorax.api", "composer", "composer.cli", "lifted"],
packages=["pylorax", "composer", "composer.cli"],
package_dir={"" : "src"},
data_files=data_files
)

View File

@ -1,44 +0,0 @@
# Lorax Composer partitioned disk output kickstart template
# Firewall configuration
firewall --enabled
# NOTE: The root account is locked by default
# Network information
network --bootproto=dhcp --onboot=on --activate
# NOTE: keyboard and lang can be replaced by blueprint customizations.locale settings
# System keyboard
keyboard --xlayouts=us --vckeymap=us
# System language
lang en_US.UTF-8
# SELinux configuration
selinux --enforcing
# Installation logging level
logging --level=info
# Shutdown after installation
shutdown
# System bootloader configuration
bootloader --location=mbr
# Basic services
services --enabled=sshd,cloud-init
%post
# Remove random-seed
rm /var/lib/systemd/random-seed
# Clear /etc/machine-id
rm /etc/machine-id
touch /etc/machine-id
# Remove the rescue kernel and image to save space
rm -f /boot/*-rescue*
%end
%packages
kernel
selinux-policy-targeted
cloud-init
# NOTE lorax-composer will add the blueprint packages below here, including the final %end

View File

@ -1,51 +0,0 @@
# Lorax Composer AMI output kickstart template
# Firewall configuration
firewall --enabled
# NOTE: The root account is locked by default
# Network information
network --bootproto=dhcp --onboot=on --activate
# NOTE: keyboard and lang can be replaced by blueprint customizations.locale settings
# System keyboard
keyboard --xlayouts=us --vckeymap=us
# System language
lang en_US.UTF-8
# SELinux configuration
selinux --enforcing
# Installation logging level
logging --level=info
# Shutdown after installation
shutdown
# System bootloader configuration
bootloader --location=mbr --append="no_timer_check console=ttyS0,115200n8 console=tty1 net.ifnames=0"
# Add platform specific partitions
reqpart --add-boot
# Basic services
services --enabled=sshd,chronyd,cloud-init
%post
# Remove random-seed
rm /var/lib/systemd/random-seed
# Clear /etc/machine-id
rm /etc/machine-id
touch /etc/machine-id
# tell cloud-init to create the ec2-user account
sed -i 's/cloud-user/ec2-user/' /etc/cloud/cloud.cfg
# Remove the rescue kernel and image to save space
rm -f /boot/*-rescue*
%end
%packages
kernel
selinux-policy-targeted
chrony
cloud-init
# NOTE lorax-composer will add the recipe packages below here, including the final %end

View File

@ -1,42 +0,0 @@
# Lorax Composer filesystem output kickstart template
# Firewall configuration
firewall --enabled
# NOTE: The root account is locked by default
# Network information
network --bootproto=dhcp --onboot=on --activate
# NOTE: keyboard and lang can be replaced by blueprint customizations.locale settings
# System keyboard
keyboard --xlayouts=us --vckeymap=us
# System language
lang en_US.UTF-8
# SELinux configuration
selinux --enforcing
# Installation logging level
logging --level=info
# Shutdown after installation
shutdown
# System bootloader configuration (unpartitioned fs image doesn't use a bootloader)
bootloader --location=none
%post
# Remove random-seed
rm /var/lib/systemd/random-seed
# Clear /etc/machine-id
rm /etc/machine-id
touch /etc/machine-id
# Remove the rescue kernel and image to save space
rm -f /boot/*-rescue*
%end
# NOTE Do NOT add any other sections after %packages
%packages --nocore
# Packages requires to support this output format go here
policycoreutils
selinux-policy-targeted
kernel
# NOTE lorax-composer will add the blueprint packages below here, including the final %end

View File

@ -1,78 +0,0 @@
# Lorax Composer partitioned disk output kickstart template
# Firewall configuration
firewall --disabled
# NOTE: The root account is locked by default
# Network information
network --bootproto=dhcp --onboot=on --mtu=1460 --noipv6 --activate
# NOTE: keyboard and lang can be replaced by blueprint customizations.locale settings
# System keyboard
keyboard --xlayouts=us --vckeymap=us
# System language
lang en_US.UTF-8
# SELinux configuration
selinux --enforcing
# Installation logging level
logging --level=info
# Shutdown after installation
shutdown
# System timezone
timezone --ntpservers metadata.google.internal UTC
# System bootloader configuration
bootloader --location=mbr --append="console=ttyS0,38400n8d"
# Add platform specific partitions
reqpart --add-boot
services --disabled=irqbalance
%post
# Remove random-seed
rm /var/lib/systemd/random-seed
# Clear /etc/machine-id
rm /etc/machine-id
touch /etc/machine-id
# Remove the rescue kernel and image to save space
rm -f /boot/*-rescue*
# Replace the ssh configuration
cat > /etc/ssh/sshd_config << EOF
# Disable PasswordAuthentication as ssh keys are more secure.
PasswordAuthentication no
# Disable root login, using sudo provides better auditing.
PermitRootLogin no
PermitTunnel no
AllowTcpForwarding yes
X11Forwarding no
# Google Compute Engine times out connections after 10 minutes of inactivity. Keep alive
# ssh connections by sending a packet every 7 minutes.
ClientAliveInterval 420
EOF
cat > /etc/ssh/ssh_config << EOF
Host *
Protocol 2
ForwardAgent no
ForwardX11 no
HostbasedAuthentication no
StrictHostKeyChecking no
Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc
Tunnel no
# Google Compute Engine times out connections after 10 minutes of inactivity.
# Keep alive ssh connections by sending a packet every 7 minutes.
ServerAliveInterval 420
EOF
%end
%packages
kernel
selinux-policy-targeted
# NOTE lorax-composer will add the blueprint packages below here, including the final %end


@ -1,59 +0,0 @@
# Lorax Composer VHD (Azure, Hyper-V) output kickstart template
# Firewall configuration
firewall --enabled
# NOTE: The root account is locked by default
# Network information
network --bootproto=dhcp --onboot=on --activate
# NOTE: keyboard and lang can be replaced by blueprint customizations.locale settings
# System keyboard
keyboard --xlayouts=us --vckeymap=us
# System language
lang en_US.UTF-8
# SELinux configuration
selinux --enforcing
# Installation logging level
logging --level=info
# Shutdown after installation
shutdown
# System bootloader configuration
bootloader --location=mbr --append="no_timer_check console=ttyS0,115200n8 earlyprintk=ttyS0,115200 rootdelay=300 net.ifnames=0"
# Add platform specific partitions
reqpart --add-boot
# Basic services
services --enabled=sshd,chronyd
%post
# Remove random-seed
rm /var/lib/systemd/random-seed
# Clear /etc/machine-id
rm /etc/machine-id
touch /etc/machine-id
# Remove the rescue kernel and image to save space
rm -f /boot/*-rescue*
# Add Hyper-V modules into initramfs
cat > /etc/dracut.conf.d/10-hyperv.conf << EOF
add_drivers+=" hv_vmbus hv_netvsc hv_storvsc "
EOF
# Regenerate the initramfs image
dracut -f -v --persistent-policy by-uuid
%end
%addon com_redhat_kdump --disable
%end
%packages
kernel
selinux-policy-targeted
chrony
hyperv-daemons
# NOTE lorax-composer will add the recipe packages below here, including the final %end


@ -1,374 +0,0 @@
# Lorax Composer Live ISO output kickstart template
# Firewall configuration
firewall --enabled --service=mdns
# X Window System configuration information
xconfig --startxonboot
# Root password is removed for live-iso
rootpw --plaintext removethispw
# Network information
network --bootproto=dhcp --device=link --activate
# NOTE: keyboard and lang can be replaced by blueprint customizations.locale settings
# System keyboard
keyboard --xlayouts=us --vckeymap=us
# System language
lang en_US.UTF-8
# SELinux configuration
selinux --enforcing
# Installation logging level
logging --level=info
# Shutdown after installation
shutdown
# System services
services --disabled="network,sshd" --enabled="NetworkManager"
# System bootloader configuration
bootloader --location=none
%post
# FIXME: it'd be better to get this installed from a package
cat > /etc/rc.d/init.d/livesys << EOF
#!/bin/bash
#
# live: Init script for live image
#
# chkconfig: 345 00 99
# description: Init script for live image.
### BEGIN INIT INFO
# X-Start-Before: display-manager
### END INIT INFO
. /etc/init.d/functions
if ! strstr "\`cat /proc/cmdline\`" rd.live.image || [ "\$1" != "start" ]; then
exit 0
fi
if [ -e /.liveimg-configured ] ; then
configdone=1
fi
exists() {
which \$1 >/dev/null 2>&1 || return
\$*
}
livedir="LiveOS"
for arg in \`cat /proc/cmdline\` ; do
if [ "\${arg##rd.live.dir=}" != "\${arg}" ]; then
livedir=\${arg##rd.live.dir=}
return
fi
if [ "\${arg##live_dir=}" != "\${arg}" ]; then
livedir=\${arg##live_dir=}
return
fi
done
# enable swaps unless requested otherwise
swaps=\`blkid -t TYPE=swap -o device\`
if ! strstr "\`cat /proc/cmdline\`" noswap && [ -n "\$swaps" ] ; then
for s in \$swaps ; do
action "Enabling swap partition \$s" swapon \$s
done
fi
if ! strstr "\`cat /proc/cmdline\`" noswap && [ -f /run/initramfs/live/\${livedir}/swap.img ] ; then
action "Enabling swap file" swapon /run/initramfs/live/\${livedir}/swap.img
fi
mountPersistentHome() {
# support label/uuid
if [ "\${homedev##LABEL=}" != "\${homedev}" -o "\${homedev##UUID=}" != "\${homedev}" ]; then
homedev=\`/sbin/blkid -o device -t "\$homedev"\`
fi
# if we're given a file rather than a blockdev, loopback it
if [ "\${homedev##mtd}" != "\${homedev}" ]; then
# mtd devs don't have a block device but get magic-mounted with -t jffs2
mountopts="-t jffs2"
elif [ ! -b "\$homedev" ]; then
loopdev=\`losetup -f\`
if [ "\${homedev##/run/initramfs/live}" != "\${homedev}" ]; then
action "Remounting live store r/w" mount -o remount,rw /run/initramfs/live
fi
losetup \$loopdev \$homedev
homedev=\$loopdev
fi
# if it's encrypted, we need to unlock it
if [ "\$(/sbin/blkid -s TYPE -o value \$homedev 2>/dev/null)" = "crypto_LUKS" ]; then
echo
echo "Setting up encrypted /home device"
plymouth ask-for-password --command="cryptsetup luksOpen \$homedev EncHome"
homedev=/dev/mapper/EncHome
fi
# and finally do the mount
mount \$mountopts \$homedev /home
# if we have /home under what's passed for persistent home, then
# we should make that the real /home. useful for mtd device on olpc
if [ -d /home/home ]; then mount --bind /home/home /home ; fi
[ -x /sbin/restorecon ] && /sbin/restorecon /home
if [ -d /home/liveuser ]; then USERADDARGS="-M" ; fi
}
findPersistentHome() {
for arg in \`cat /proc/cmdline\` ; do
if [ "\${arg##persistenthome=}" != "\${arg}" ]; then
homedev=\${arg##persistenthome=}
return
fi
done
}
if strstr "\`cat /proc/cmdline\`" persistenthome= ; then
findPersistentHome
elif [ -e /run/initramfs/live/\${livedir}/home.img ]; then
homedev=/run/initramfs/live/\${livedir}/home.img
fi
# if we have a persistent /home, then we want to go ahead and mount it
if ! strstr "\`cat /proc/cmdline\`" nopersistenthome && [ -n "\$homedev" ] ; then
action "Mounting persistent /home" mountPersistentHome
fi
if [ -n "\$configdone" ]; then
exit 0
fi
# add fedora user with no passwd
action "Adding live user" useradd \$USERADDARGS -c "Live System User" liveuser
passwd -d liveuser > /dev/null
usermod -aG wheel liveuser > /dev/null
# Remove root password lock
passwd -d root > /dev/null
# turn off firstboot for livecd boots
systemctl --no-reload disable firstboot-text.service 2> /dev/null || :
systemctl --no-reload disable firstboot-graphical.service 2> /dev/null || :
systemctl stop firstboot-text.service 2> /dev/null || :
systemctl stop firstboot-graphical.service 2> /dev/null || :
# don't use prelink on a running live image
sed -i 's/PRELINKING=yes/PRELINKING=no/' /etc/sysconfig/prelink &>/dev/null || :
# turn off mdmonitor by default
systemctl --no-reload disable mdmonitor.service 2> /dev/null || :
systemctl --no-reload disable mdmonitor-takeover.service 2> /dev/null || :
systemctl stop mdmonitor.service 2> /dev/null || :
systemctl stop mdmonitor-takeover.service 2> /dev/null || :
# don't enable the gnome-settings-daemon packagekit plugin
gsettings set org.gnome.software download-updates 'false' || :
# don't start cron/at as they tend to spawn things which are
# disk intensive that are painful on a live image
systemctl --no-reload disable crond.service 2> /dev/null || :
systemctl --no-reload disable atd.service 2> /dev/null || :
systemctl stop crond.service 2> /dev/null || :
systemctl stop atd.service 2> /dev/null || :
# turn off abrtd on a live image
systemctl --no-reload disable abrtd.service 2> /dev/null || :
systemctl stop abrtd.service 2> /dev/null || :
# Don't sync the system clock when running live (RHBZ #1018162)
sed -i 's/rtcsync//' /etc/chrony.conf
# Mark things as configured
touch /.liveimg-configured
# add static hostname to work around xauth bug
# https://bugzilla.redhat.com/show_bug.cgi?id=679486
echo "localhost" > /etc/hostname
EOF
# bah, hal starts way too late
cat > /etc/rc.d/init.d/livesys-late << EOF
#!/bin/bash
#
# live: Late init script for live image
#
# chkconfig: 345 99 01
# description: Late init script for live image.
. /etc/init.d/functions
if ! strstr "\`cat /proc/cmdline\`" rd.live.image || [ "\$1" != "start" ] || [ -e /.liveimg-late-configured ] ; then
exit 0
fi
exists() {
which \$1 >/dev/null 2>&1 || return
\$*
}
touch /.liveimg-late-configured
# read some variables out of /proc/cmdline
for o in \`cat /proc/cmdline\` ; do
case \$o in
ks=*)
ks="--kickstart=\${o#ks=}"
;;
xdriver=*)
xdriver="\${o#xdriver=}"
;;
esac
done
# if liveinst or textinst is given, start anaconda
if strstr "\`cat /proc/cmdline\`" liveinst ; then
plymouth --quit
/usr/sbin/liveinst \$ks
fi
if strstr "\`cat /proc/cmdline\`" textinst ; then
plymouth --quit
/usr/sbin/liveinst --text \$ks
fi
# configure X, allowing user to override xdriver
if [ -n "\$xdriver" ]; then
cat > /etc/X11/xorg.conf.d/00-xdriver.conf <<FOE
Section "Device"
Identifier "Videocard0"
Driver "\$xdriver"
EndSection
FOE
fi
EOF
chmod 755 /etc/rc.d/init.d/livesys
/sbin/restorecon /etc/rc.d/init.d/livesys
/sbin/chkconfig --add livesys
chmod 755 /etc/rc.d/init.d/livesys-late
/sbin/restorecon /etc/rc.d/init.d/livesys-late
/sbin/chkconfig --add livesys-late
# enable tmpfs for /tmp
systemctl enable tmp.mount
# make it so that we don't do writing to the overlay for things which
# are just tmpdirs/caches
# note https://bugzilla.redhat.com/show_bug.cgi?id=1135475
cat >> /etc/fstab << EOF
vartmp /var/tmp tmpfs defaults 0 0
EOF
# work around for poor key import UI in PackageKit
rm -f /var/lib/rpm/__db*
releasever=$(rpm -q --qf '%{version}\n' --whatprovides system-release)
basearch=$(uname -i)
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
echo "Packages within this LiveCD"
rpm -qa
# Note that running rpm recreates the rpm db files which aren't needed or wanted
rm -f /var/lib/rpm/__db*
# go ahead and pre-make the man -k cache (#455968)
/usr/bin/mandb
# make sure there aren't core files lying around
rm -f /core*
# convince readahead not to collect
# FIXME: for systemd
echo 'File created by kickstart. See systemd-update-done.service(8).' \
| tee /etc/.updated >/var/.updated
# Remove random-seed
rm /var/lib/systemd/random-seed
# Remove the rescue kernel and image to save space
# Installation will recreate these on the target
rm -f /boot/*-rescue*
%end
%post
cat >> /etc/rc.d/init.d/livesys << EOF
# disable updates plugin
cat >> /usr/share/glib-2.0/schemas/org.gnome.software.gschema.override << FOE
[org.gnome.software]
download-updates=false
FOE
# don't autostart gnome-software session service
rm -f /etc/xdg/autostart/gnome-software-service.desktop
# disable the gnome-software shell search provider
cat >> /usr/share/gnome-shell/search-providers/org.gnome.Software-search-provider.ini << FOE
DefaultDisabled=true
FOE
# don't run gnome-initial-setup
mkdir ~liveuser/.config
touch ~liveuser/.config/gnome-initial-setup-done
# make the installer show up
if [ -f /usr/share/applications/liveinst.desktop ]; then
# Show harddisk install in shell dash
sed -i -e 's/NoDisplay=true/NoDisplay=false/' /usr/share/applications/liveinst.desktop
# need to move it to anaconda.desktop to make shell happy
mv /usr/share/applications/liveinst.desktop /usr/share/applications/anaconda.desktop
cat >> /usr/share/glib-2.0/schemas/org.gnome.shell.gschema.override << FOE
[org.gnome.shell]
favorite-apps=['firefox.desktop', 'evolution.desktop', 'rhythmbox.desktop', 'shotwell.desktop', 'org.gnome.Nautilus.desktop', 'anaconda.desktop']
FOE
# Make the welcome screen show up
if [ -f /usr/share/anaconda/gnome/fedora-welcome.desktop ]; then
mkdir -p ~liveuser/.config/autostart
cp /usr/share/anaconda/gnome/fedora-welcome.desktop /usr/share/applications/
cp /usr/share/anaconda/gnome/fedora-welcome.desktop ~liveuser/.config/autostart/
fi
# Copy Anaconda branding in place
if [ -d /usr/share/lorax/product/usr/share/anaconda ]; then
cp -a /usr/share/lorax/product/* /
fi
fi
# rebuild schema cache with any overrides we installed
glib-compile-schemas /usr/share/glib-2.0/schemas
# set up auto-login
cat > /etc/gdm/custom.conf << FOE
[daemon]
AutomaticLoginEnable=True
AutomaticLogin=liveuser
FOE
# Turn off PackageKit-command-not-found while uninstalled
if [ -f /etc/PackageKit/CommandNotFound.conf ]; then
sed -i -e 's/^SoftwareSourceSearch=true/SoftwareSourceSearch=false/' /etc/PackageKit/CommandNotFound.conf
fi
# make sure to set the right permissions and selinux contexts
chown -R liveuser:liveuser /home/liveuser/
restorecon -R /home/liveuser/
EOF
%end
# NOTE Do NOT add any other sections after %packages
%packages
# Packages required to support this output format go here
isomd5sum
kernel
dracut-config-generic
dracut-live
system-logos
selinux-policy-targeted
# no longer in @core since 2018-10, but needed for livesys script
initscripts
chkconfig
# NOTE lorax-composer will add the blueprint packages below here, including the final %end


@ -1,47 +0,0 @@
# Lorax Composer tar output kickstart template
# Add kernel and grub2 for use with anaconda's kickstart liveimg command
# Firewall configuration
firewall --enabled
# NOTE: The root account is locked by default
# Network information
network --bootproto=dhcp --onboot=on --activate
# NOTE: keyboard and lang can be replaced by blueprint customizations.locale settings
# System keyboard
keyboard --xlayouts=us --vckeymap=us
# System language
lang en_US.UTF-8
# SELinux configuration
selinux --enforcing
# Installation logging level
logging --level=info
# Shutdown after installation
shutdown
# System bootloader configuration (tar doesn't need a bootloader)
bootloader --location=none
%post
# Remove random-seed
rm /var/lib/systemd/random-seed
# Clear /etc/machine-id
rm /etc/machine-id
touch /etc/machine-id
# Remove the rescue kernel and image to save space
rm -f /boot/*-rescue*
%end
# NOTE Do NOT add any other sections after %packages
%packages --nocore
# Packages required to support this output format go here
policycoreutils
selinux-policy-targeted
# Packages needed for liveimg
kernel
grub2
grub2-tools
# NOTE lorax-composer will add the blueprint packages below here, including the final %end


@ -1,49 +0,0 @@
# Lorax Composer openstack output kickstart template
# Firewall configuration
firewall --disabled
# NOTE: The root account is locked by default
# Network information
network --bootproto=dhcp --onboot=on --activate
# NOTE: keyboard and lang can be replaced by blueprint customizations.locale settings
# System keyboard
keyboard --xlayouts=us --vckeymap=us
# System language
lang en_US.UTF-8
# SELinux configuration
selinux --enforcing
# Installation logging level
logging --level=info
# Shutdown after installation
shutdown
# System bootloader configuration
bootloader --location=mbr --append="no_timer_check console=ttyS0,115200n8 console=tty1 net.ifnames=0"
# Add platform specific partitions
reqpart --add-boot
# Start sshd and cloud-init at boot time
services --enabled=sshd,cloud-init,cloud-init-local,cloud-config,cloud-final
%post
# Remove random-seed
rm /var/lib/systemd/random-seed
# Clear /etc/machine-id
rm /etc/machine-id
touch /etc/machine-id
# Remove the rescue kernel and image to save space
rm -f /boot/*-rescue*
%end
%packages
kernel
selinux-policy-targeted
# Make sure virt guest agents are installed
qemu-guest-agent
spice-vdagent
cloud-init
# NOTE lorax-composer will add the blueprint packages below here, including the final %end


@ -1,41 +0,0 @@
# Lorax Composer partitioned disk output kickstart template
# Firewall configuration
firewall --enabled
# NOTE: The root account is locked by default
# Network information
network --bootproto=dhcp --onboot=on --activate
# NOTE: keyboard and lang can be replaced by blueprint customizations.locale settings
# System keyboard
keyboard --xlayouts=us --vckeymap=us
# System language
lang en_US.UTF-8
# SELinux configuration
selinux --enforcing
# Installation logging level
logging --level=info
# Shutdown after installation
shutdown
# System bootloader configuration
bootloader --location=mbr
# Add platform specific partitions
reqpart --add-boot
%post
# Remove random-seed
rm /var/lib/systemd/random-seed
# Clear /etc/machine-id
rm /etc/machine-id
touch /etc/machine-id
# Remove the rescue kernel and image to save space
rm -f /boot/*-rescue*
%end
%packages
kernel
selinux-policy-targeted
# NOTE lorax-composer will add the blueprint packages below here, including the final %end


@ -1,45 +0,0 @@
# Lorax Composer qcow2 output kickstart template
# Firewall configuration
firewall --enabled
# NOTE: The root account is locked by default
# Network information
network --bootproto=dhcp --onboot=on --activate
# NOTE: keyboard and lang can be replaced by blueprint customizations.locale settings
# System keyboard
keyboard --xlayouts=us --vckeymap=us
# System language
lang en_US.UTF-8
# SELinux configuration
selinux --enforcing
# Installation logging level
logging --level=info
# Shutdown after installation
shutdown
# System bootloader configuration
bootloader --location=mbr
# Add platform specific partitions
reqpart --add-boot
%post
# Remove random-seed
rm /var/lib/systemd/random-seed
# Clear /etc/machine-id
rm /etc/machine-id
touch /etc/machine-id
# Remove the rescue kernel and image to save space
rm -f /boot/*-rescue*
%end
%packages
kernel
selinux-policy-targeted
# Make sure virt guest agents are installed
qemu-guest-agent
spice-vdagent
# NOTE lorax-composer will add the blueprint packages below here, including the final %end


@ -1,41 +0,0 @@
# Lorax Composer tar output kickstart template
# Firewall configuration
firewall --enabled
# NOTE: The root account is locked by default
# Network information
network --bootproto=dhcp --onboot=on --activate
# NOTE: keyboard and lang can be replaced by blueprint customizations.locale settings
# System keyboard
keyboard --xlayouts=us --vckeymap=us
# System language
lang en_US.UTF-8
# SELinux configuration
selinux --enforcing
# Installation logging level
logging --level=info
# Shutdown after installation
shutdown
# System bootloader configuration (tar doesn't need a bootloader)
bootloader --location=none
%post
# Remove random-seed
rm /var/lib/systemd/random-seed
# Clear /etc/machine-id
rm /etc/machine-id
touch /etc/machine-id
# Remove the rescue kernel and image to save space
rm -f /boot/*-rescue*
%end
# NOTE Do NOT add any other sections after %packages
%packages --nocore
# Packages required to support this output format go here
policycoreutils
selinux-policy-targeted
# NOTE lorax-composer will add the blueprint packages below here, including the final %end


@ -1,89 +0,0 @@
# Lorax Composer VHD (Azure, Hyper-V) output kickstart template
# Firewall configuration
firewall --enabled
# NOTE: The root account is locked by default
# Network information
network --bootproto=dhcp --onboot=on --activate
# NOTE: keyboard and lang can be replaced by blueprint customizations.locale settings
# System keyboard
keyboard --xlayouts=us --vckeymap=us
# System language
lang en_US.UTF-8
# SELinux configuration
selinux --enforcing
# Installation logging level
logging --level=info
# Shutdown after installation
shutdown
# System bootloader configuration
bootloader --location=mbr --append="no_timer_check console=ttyS0,115200n8 earlyprintk=ttyS0,115200 rootdelay=300 net.ifnames=0"
# Add platform specific partitions
reqpart --add-boot
# Basic services
services --enabled=sshd,chronyd,waagent,cloud-init,cloud-init-local,cloud-config,cloud-final
%post
# Remove random-seed
rm /var/lib/systemd/random-seed
# Clear /etc/machine-id
rm /etc/machine-id
touch /etc/machine-id
# Remove the rescue kernel and image to save space
rm -f /boot/*-rescue*
# This file is required by waagent in RHEL, but compatible with NetworkManager
cat > /etc/sysconfig/network-scripts/ifcfg-eth0 << EOF
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
TYPE=Ethernet
USERCTL=yes
PEERDNS=yes
IPV6INIT=no
EOF
# Restrict cloud-init to Azure datasource
cat > /etc/cloud/cloud.cfg.d/91-azure_datasource.cfg << EOF
# Azure Data Source config
datasource_list: [ Azure ]
datasource:
Azure:
apply_network_config: False
EOF
# Setup waagent to work with cloud-init
sed -i 's/Provisioning.Enabled=y/Provisioning.Enabled=n/g' /etc/waagent.conf
sed -i 's/Provisioning.UseCloudInit=n/Provisioning.UseCloudInit=y/g' /etc/waagent.conf
# Add Hyper-V modules into initramfs
cat > /etc/dracut.conf.d/10-hyperv.conf << EOF
add_drivers+=" hv_vmbus hv_netvsc hv_storvsc "
EOF
# Regenerate the initramfs image
dracut -f -v --persistent-policy by-uuid
%end
%addon com_redhat_kdump --disable
%end
%packages
kernel
selinux-policy-targeted
chrony
WALinuxAgent
python3
net-tools
cloud-init
cloud-utils-growpart
gdisk
# NOTE lorax-composer will add the blueprint packages below here, including the final %end


@ -1,47 +0,0 @@
# Lorax Composer vmdk kickstart template
# Firewall configuration
firewall --enabled
# NOTE: The root account is locked by default
# Network information
network --bootproto=dhcp --onboot=on --activate
# NOTE: keyboard and lang can be replaced by blueprint customizations.locale settings
# System keyboard
keyboard --xlayouts=us --vckeymap=us
# System language
lang en_US.UTF-8
# SELinux configuration
selinux --enforcing
# Installation logging level
logging --level=info
# Shutdown after installation
shutdown
# System bootloader configuration
bootloader --location=mbr
# Add platform specific partitions
reqpart --add-boot
# Basic services
services --enabled=sshd,chronyd,vmtoolsd
%post
# Remove random-seed
rm /var/lib/systemd/random-seed
# Clear /etc/machine-id
rm /etc/machine-id
touch /etc/machine-id
# Remove the rescue kernel and image to save space
rm -f /boot/*-rescue*
%end
%packages
kernel
selinux-policy-targeted
chrony
open-vm-tools
# NOTE lorax-composer will add the blueprint packages below here, including the final %end


@ -1,258 +0,0 @@
#!/usr/bin/python
# Copyright (C) 2019 Red Hat, Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: ec2_snapshot_import
short_description: Imports a disk into an EBS snapshot
description:
- Imports a disk into an EBS snapshot
version_added: "2.10"
options:
description:
description:
- description of the import snapshot task
required: false
type: str
format:
description:
- The format of the disk image being imported.
required: true
type: str
url:
description:
- The URL to the Amazon S3-based disk image being imported. It can be either an https URL (https://..) or an Amazon S3 URL (s3://..).
Either C(url) or C(s3_bucket) and C(s3_key) are required.
required: false
type: str
s3_bucket:
description:
- The name of the S3 bucket where the disk image is located.
- C(s3_bucket) and C(s3_key) are required together if C(url) is not used.
required: false
type: str
s3_key:
description:
- The file name of the disk image.
- C(s3_bucket) and C(s3_key) are required together if C(url) is not used.
required: false
type: str
encrypted:
description:
- Whether or not the destination Snapshot should be encrypted.
type: bool
default: 'no'
kms_key_id:
description:
- KMS key id used to encrypt snapshot. If not specified, defaults to EBS Customer Master Key (CMK) for that account.
required: false
type: str
role_name:
description:
- The name of the role to use when not using the default role, 'vmimport'.
required: false
type: str
wait:
description:
- wait for the snapshot to be ready
type: bool
required: false
default: yes
wait_timeout:
description:
- how long before wait gives up, in seconds
- specify 0 to wait forever
required: false
type: int
default: 900
tags:
description:
- A hash/dictionary of tags to add to the new Snapshot; '{"key":"value"}' and '{"key":"value","key":"value"}'
required: false
type: dict
author: "Brian C. Lane (@bcl)"
extends_documentation_fragment:
- aws
- ec2
'''
EXAMPLES = '''
# Import an S3 object as a snapshot
ec2_snapshot_import:
description: simple-http-server
format: raw
s3_bucket: mybucket
s3_key: server-image.ami
wait: yes
tags:
Name: Snapshot-Name
'''
RETURN = '''
snapshot_id:
description: id of the created snapshot
returned: when snapshot is created
type: str
sample: "snap-1234abcd"
description:
description: description of snapshot
returned: when snapshot is created
type: str
sample: "simple-http-server"
format:
description: format of the disk image being imported
returned: when snapshot is created
type: str
sample: "raw"
disk_image_size:
description: size of the disk image being imported, in bytes.
returned: when snapshot is created
type: float
sample: 3836739584.0
user_bucket:
description: S3 bucket with the image to import
returned: when snapshot is created
type: dict
sample: {
"s3_bucket": "mybucket",
"s3_key": "server-image.ami"
}
status:
description: status of the import operation
returned: when snapshot is created
type: str
sample: "completed"
'''
import time
from ansible.module_utils.aws.core import AnsibleAWSModule
from ansible.module_utils.ec2 import camel_dict_to_snake_dict
try:
import botocore
except ImportError:
pass
def wait_for_import_snapshot(connection, wait_timeout, import_task_id):
params = {
'ImportTaskIds': [import_task_id]
}
start_time = time.time()
while True:
status = connection.describe_import_snapshot_tasks(**params)
# What are the valid status values?
if len(status['ImportSnapshotTasks']) > 1:
raise RuntimeError("Should only be 1 Import Snapshot Task with this id.")
task = status['ImportSnapshotTasks'][0]
if task['SnapshotTaskDetail']['Status'] in ['completed']:
return status
if time.time() - start_time > wait_timeout:
raise RuntimeError('Wait timeout exceeded (%s sec)' % wait_timeout)
time.sleep(5)
def import_snapshot(module, connection):
description = module.params.get('description')
image_format = module.params.get('format')
url = module.params.get('url')
s3_bucket = module.params.get('s3_bucket')
s3_key = module.params.get('s3_key')
encrypted = module.params.get('encrypted')
kms_key_id = module.params.get('kms_key_id')
role_name = module.params.get('role_name')
wait = module.params.get('wait')
wait_timeout = module.params.get('wait_timeout')
tags = module.params.get('tags')
if module.check_mode:
module.exit_json(changed=True, msg="IMPORT operation skipped - running in check mode")
try:
params = {
'Description': description,
'DiskContainer': {
'Description': description,
'Format': image_format,
},
'Encrypted': encrypted
}
if url:
params['DiskContainer']['Url'] = url
else:
params['DiskContainer']['UserBucket'] = {
'S3Bucket': s3_bucket,
'S3Key': s3_key
}
if kms_key_id:
params['KmsKeyId'] = kms_key_id
if role_name:
params['RoleName'] = role_name
task = connection.import_snapshot(**params)
import_task_id = task['ImportTaskId']
detail = task['SnapshotTaskDetail']
if wait:
status = wait_for_import_snapshot(connection, wait_timeout, import_task_id)
detail = status['ImportSnapshotTasks'][0]['SnapshotTaskDetail']
if tags:
connection.create_tags(
Resources=[detail["SnapshotId"]],
Tags=[{'Key': k, 'Value': v} for k, v in tags.items()]
)
except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError, RuntimeError) as e:
module.fail_json_aws(e, msg="Error importing image")
module.exit_json(changed=True, **camel_dict_to_snake_dict(detail))
def snapshot_import_ansible_module():
argument_spec = dict(
description=dict(default=''),
wait=dict(type='bool', default=True),
wait_timeout=dict(type='int', default=900),
format=dict(required=True),
url=dict(),
s3_bucket=dict(),
s3_key=dict(),
encrypted=dict(type='bool', default=False),
kms_key_id=dict(),
role_name=dict(),
tags=dict(type='dict')
)
return AnsibleAWSModule(
argument_spec=argument_spec,
supports_check_mode=True,
mutually_exclusive=[['s3_bucket', 'url']],
required_one_of=[['s3_bucket', 'url']],
required_together=[['s3_bucket', 's3_key']]
)
def main():
module = snapshot_import_ansible_module()
connection = module.client('ec2')
import_snapshot(module, connection)
if __name__ == '__main__':
main()
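For reference, the import flow this module wraps can also be driven with boto3 directly. A minimal sketch, assuming AWS credentials and a region are configured in the environment; the bucket, key, and description values are hypothetical:
import time
import boto3
ec2 = boto3.client("ec2")
# Start the import, mirroring the DiskContainer that import_snapshot() builds above.
task = ec2.import_snapshot(
    Description="simple-http-server",
    DiskContainer={
        "Description": "simple-http-server",
        "Format": "raw",
        "UserBucket": {"S3Bucket": "mybucket", "S3Key": "server-image.ami"},
    },
)
task_id = task["ImportTaskId"]
# Poll until the task reports "completed", as wait_for_import_snapshot() does.
while True:
    tasks = ec2.describe_import_snapshot_tasks(ImportTaskIds=[task_id])
    detail = tasks["ImportSnapshotTasks"][0]["SnapshotTaskDetail"]
    if detail["Status"] == "completed":
        break
    time.sleep(5)
print("snapshot id:", detail["SnapshotId"])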


@ -1,94 +0,0 @@
- hosts: localhost
tasks:
- name: Make sure bucket exists
aws_s3:
bucket: "{{ aws_bucket }}"
mode: create
aws_access_key: "{{ aws_access_key }}"
aws_secret_key: "{{ aws_secret_key }}"
region: "{{ aws_region }}"
register: bucket_facts
- fail:
msg: "Bucket creation failed"
when:
- bucket_facts.msg != "Bucket created successfully"
- bucket_facts.msg != "Bucket already exists."
- name: Make sure vmimport role exists
iam_role_facts:
name: vmimport
aws_access_key: "{{ aws_access_key }}"
aws_secret_key: "{{ aws_secret_key }}"
region: "{{ aws_region }}"
register: role_facts
- fail:
msg: "Role vmimport doesn't exist"
when: role_facts.iam_roles | length < 1
- name: Make sure the AMI name isn't already in use
ec2_ami_facts:
filters:
name: "{{ image_name }}"
aws_access_key: "{{ aws_access_key }}"
aws_secret_key: "{{ aws_secret_key }}"
region: "{{ aws_region }}"
register: ami_facts
- fail:
msg: "An AMI named {{ image_name }} already exists"
when: ami_facts.images | length > 0
- stat:
path: "{{ image_path }}"
register: image_stat
- set_fact:
image_id: "{{ image_name }}-{{ image_stat['stat']['checksum'] }}.ami"
- name: Upload the .ami image to an s3 bucket
aws_s3:
bucket: "{{ aws_bucket }}"
src: "{{ image_path }}"
object: "{{ image_id }}"
mode: put
overwrite: different
aws_access_key: "{{ aws_access_key }}"
aws_secret_key: "{{ aws_secret_key }}"
region: "{{ aws_region }}"
- name: Import a snapshot from an AMI stored as an s3 object
ec2_snapshot_import:
description: "{{ image_name }}"
format: raw
s3_bucket: "{{ aws_bucket }}"
s3_key: "{{ image_id }}"
aws_access_key: "{{ aws_access_key }}"
aws_secret_key: "{{ aws_secret_key }}"
region: "{{ aws_region }}"
wait: yes
tags:
Name: "{{ image_name }}"
register: import_facts
- fail:
msg: "Import of image from s3 failed"
when:
- import_facts.status != "completed"
- name: Register the snapshot as an AMI
ec2_ami:
name: "{{ image_name }}"
state: present
virtualization_type: hvm
root_device_name: /dev/sda1
device_mapping:
- device_name: /dev/sda1
snapshot_id: "{{ import_facts.snapshot_id }}"
aws_access_key: "{{ aws_access_key }}"
aws_secret_key: "{{ aws_secret_key }}"
region: "{{ aws_region }}"
wait: yes
register: register_facts
- fail:
msg: "Registering snapshot as an AMI failed"
when:
- register_facts.msg != "AMI creation operation complete."
- name: Delete the s3 object used for the snapshot/AMI
aws_s3:
bucket: "{{ aws_bucket }}"
object: "{{ image_id }}"
mode: delobj
aws_access_key: "{{ aws_access_key }}"
aws_secret_key: "{{ aws_secret_key }}"
region: "{{ aws_region }}"
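The variables this playbook consumes are the AWS provider settings (see the provider.toml below) plus the image_name and image_path values that lifted injects. A minimal sketch of driving it with ansible_runner, the same interface Upload.execute() uses further down; all values are hypothetical and the playbook path assumes the usual /usr/share/lorax share directory:
from ansible_runner.interface import run as ansible_run
settings = {"aws_access_key": "AKIA...", "aws_secret_key": "...",
            "aws_region": "us-east-1", "aws_bucket": "mybucket"}
runner = ansible_run(
    playbook="/usr/share/lorax/lifted/providers/aws/playbook.yaml",
    extravars={**settings, "image_name": "my-server",
               "image_path": "/var/tmp/disk.ami"},
    verbosity=2,
)
print(runner.status)  # "successful" on a clean run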


@ -1,29 +0,0 @@
display = "AWS"
supported_types = [
"ami",
]
[settings-info.aws_access_key]
display = "AWS Access Key"
type = "string"
placeholder = ""
regex = ''
[settings-info.aws_secret_key]
display = "AWS Secret Key"
type = "string"
placeholder = ""
regex = ''
[settings-info.aws_region]
display = "AWS Region"
type = "string"
placeholder = ""
regex = ''
[settings-info.aws_bucket]
display = "AWS Bucket"
type = "string"
placeholder = ""
regex = ''


@ -1,4 +0,0 @@
- hosts: localhost
connection: local
tasks:
- pause: seconds=30


@ -1,5 +0,0 @@
display = "Dummy"
supported_types = []
[settings-info]
# This provider has no settings.


@ -1,20 +0,0 @@
- hosts: localhost
connection: local
tasks:
- stat:
path: "{{ image_path }}"
register: image_stat
- set_fact:
image_id: "{{ image_name }}-{{ image_stat['stat']['checksum'] }}.qcow2"
- name: Upload image to OpenStack
os_image:
auth:
auth_url: "{{ auth_url }}"
username: "{{ username }}"
password: "{{ password }}"
project_name: "{{ project_name }}"
os_user_domain_name: "{{ user_domain_name }}"
os_project_domain_name: "{{ project_domain_name }}"
name: "{{ image_id }}"
filename: "{{ image_path }}"
is_public: "{{ is_public }}"


@ -1,45 +0,0 @@
display = "OpenStack"
supported_types = [
"qcow2",
]
[settings-info.auth_url]
display = "Authentication URL"
type = "string"
placeholder = ""
regex = ''
[settings-info.username]
display = "Username"
type = "string"
placeholder = ""
regex = ''
[settings-info.password]
display = "Password"
type = "string"
placeholder = ""
regex = ''
[settings-info.project_name]
display = "Project name"
type = "string"
placeholder = ""
regex = ''
[settings-info.user_domain_name]
display = "User domain name"
type = "string"
placeholder = ""
regex = ''
[settings-info.project_domain_name]
display = "Project domain name"
type = "string"
placeholder = ""
regex = ''
[settings-info.is_public]
display = "Allow public access"
type = "boolean"


@ -1,17 +0,0 @@
- hosts: localhost
connection: local
tasks:
- stat:
path: "{{ image_path }}"
register: image_stat
- set_fact:
image_id: "{{ image_name }}-{{ image_stat['stat']['checksum'] }}.vmdk"
- name: Upload image to vSphere
vsphere_copy:
login: "{{ username }}"
password: "{{ password }}"
host: "{{ host }}"
datacenter: "{{ datacenter }}"
datastore: "{{ datastore }}"
src: "{{ image_path }}"
path: "{{ folder }}/{{ image_id }}"


@ -1,42 +0,0 @@
display = "vSphere"
supported_types = [
"vmdk",
]
[settings-info.datacenter]
display = "Datacenter"
type = "string"
placeholder = ""
regex = ''
[settings-info.datastore]
display = "Datastore"
type = "string"
placeholder = ""
regex = ''
[settings-info.host]
display = "Host"
type = "string"
placeholder = ""
regex = ''
[settings-info.folder]
display = "Folder"
type = "string"
placeholder = ""
regex = ''
[settings-info.username]
display = "Username"
type = "string"
placeholder = ""
regex = ''
[settings-info.password]
display = "Password"
type = "string"
placeholder = ""
regex = ''


@ -1,16 +0,0 @@
#
# Copyright (C) 2019 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#


@ -1,35 +0,0 @@
#
# Copyright (C) 2019 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
from pylorax.sysutils import joinpaths
def configure(conf):
"""Add lifted settings to the configuration
:param conf: configuration object
:type conf: ComposerConfig
:returns: None
This uses the composer.share_dir and composer.lib_dir as the base
directories for the settings.
"""
share_dir = conf.get("composer", "share_dir")
lib_dir = conf.get("composer", "lib_dir")
conf.add_section("upload")
conf.set("upload", "providers_dir", joinpaths(share_dir, "/lifted/providers/"))
conf.set("upload", "queue_dir", joinpaths(lib_dir, "/upload/queue/"))
conf.set("upload", "settings_dir", joinpaths(lib_dir, "/upload/settings/"))
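With the documented default lib_dir of /var/lib/lorax (see the Upload docstring later in this change) and the usual share_dir of /usr/share/lorax, both assumptions here, configure() yields roughly these paths:
import os
share_dir = "/usr/share/lorax"  # assumed default
lib_dir = "/var/lib/lorax"      # assumed default
print(os.path.join(share_dir, "lifted/providers/"))  # upload.providers_dir
print(os.path.join(lib_dir, "upload/queue/"))        # upload.queue_dir
print(os.path.join(lib_dir, "upload/settings/"))     # upload.settings_dir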


@ -1,245 +0,0 @@
#
# Copyright (C) 2019 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
from glob import glob
import os
import re
import stat
import pylorax.api.toml as toml
def _get_profile_path(ucfg, provider_name, profile, exists=True):
"""Helper to return the directory and path for a provider's profile file
:param ucfg: upload config
:type ucfg: object
:param provider_name: the name of the cloud provider, e.g. "azure"
:type provider_name: str
:param profile: the name of the profile to save
:type profile: str != ""
:returns: Full path of the profile .toml file
:rtype: str
:raises: ValueError when passed invalid settings or an invalid profile name
:raises: RuntimeError when the provider or profile couldn't be found
"""
# Make sure no path elements are present
profile = os.path.basename(profile)
provider_name = os.path.basename(provider_name)
if not profile:
raise ValueError("Profile name cannot be empty!")
if not provider_name:
raise ValueError("Provider name cannot be empty!")
directory = os.path.join(ucfg["settings_dir"], provider_name)
# create the settings directory if it doesn't exist
os.makedirs(directory, exist_ok=True)
path = os.path.join(directory, f"{profile}.toml")
if exists and not os.path.isfile(path):
raise RuntimeError(f'Couldn\'t find profile "{profile}"!')
return os.path.abspath(path)
def resolve_provider(ucfg, provider_name):
"""Get information about the specified provider as defined in that
provider's `provider.toml`, including the provider's display name and expected
settings.
At a minimum, each setting has a display name (that likely differs from its
snake_case name) and a type. Currently, there are two types of settings:
string and boolean. String settings can optionally have a "placeholder"
value for use on the front end and a "regex" for making sure that a value
follows an expected pattern.
:param ucfg: upload config
:type ucfg: object
:param provider_name: the name of the provider to look for
:type provider_name: str
:raises: RuntimeError when the provider couldn't be found
:returns: the provider
:rtype: dict
"""
# Make sure no path elements are present
provider_name = os.path.basename(provider_name)
path = os.path.join(ucfg["providers_dir"], provider_name, "provider.toml")
try:
with open(path) as provider_file:
provider = toml.load(provider_file)
except OSError as error:
raise RuntimeError(f'Couldn\'t find provider "{provider_name}"!') from error
return provider
def load_profiles(ucfg, provider_name):
"""Return all settings profiles associated with a provider
:param ucfg: upload config
:type ucfg: object
:param provider_name: name a provider to find profiles for
:type provider_name: str
:returns: a dict of settings dicts, keyed by profile name
:rtype: dict
"""
# Make sure no path elements are present
provider_name = os.path.basename(provider_name)
def load_path(path):
with open(path) as file:
return toml.load(file)
def get_name(path):
return os.path.splitext(os.path.basename(path))[0]
paths = glob(os.path.join(ucfg["settings_dir"], provider_name, "*"))
return {get_name(path): load_path(path) for path in paths}
def resolve_playbook_path(ucfg, provider_name):
"""Given a provider's name, return the path to its playbook
:param ucfg: upload config
:type ucfg: object
:param provider_name: the name of the provider to find the playbook for
:type provider_name: str
:raises: RuntimeError when the provider couldn't be found
:returns: the path to the playbook
:rtype: str
"""
# Make sure no path elements are present
provider_name = os.path.basename(provider_name)
path = os.path.join(ucfg["providers_dir"], provider_name, "playbook.yaml")
if not os.path.isfile(path):
raise RuntimeError(f'Couldn\'t find playbook for "{provider_name}"!')
return path
def list_providers(ucfg):
"""List the names of the available upload providers
:param ucfg: upload config
:type ucfg: object
:returns: a list of all available provider_names
:rtype: list of str
"""
paths = glob(os.path.join(ucfg["providers_dir"], "*"))
return sorted(os.path.basename(path) for path in paths)
def validate_settings(ucfg, provider_name, settings, image_name=None):
"""Raise a ValueError if any settings are invalid
:param ucfg: upload config
:type ucfg: object
:param provider_name: the name of the provider to validate the settings against
:type provider_name: str
:param settings: the settings to validate
:type settings: dict
:param image_name: optionally check whether an image_name is valid
:type image_name: str
:raises: ValueError when the passed settings are invalid
:raises: RuntimeError when provider_name can't be found
"""
if image_name == "":
raise ValueError("Image name cannot be empty!")
type_map = {"string": str, "boolean": bool}
settings_info = resolve_provider(ucfg, provider_name)["settings-info"]
for key, value in settings.items():
if key not in settings_info:
raise ValueError(f'Received unexpected setting: "{key}"!')
setting_type = settings_info[key]["type"]
correct_type = type_map[setting_type]
if not isinstance(value, correct_type):
raise ValueError(
f'Expected a {correct_type} for "{key}", received a {type(value)}!'
)
if setting_type == "string" and "regex" in settings_info[key]:
if not re.match(settings_info[key]["regex"], value):
raise ValueError(f'Value "{value}" is invalid for setting "{key}"!')
def save_settings(ucfg, provider_name, profile, settings):
"""Save (and overwrite) settings for a given provider
:param ucfg: upload config
:type ucfg: object
:param provider_name: the name of the cloud provider, e.g. "azure"
:type provider_name: str
:param profile: the name of the profile to save
:type profile: str != ""
:param settings: settings to save for that provider
:type settings: dict
:raises: ValueError when passed invalid settings or an invalid profile name
"""
path = _get_profile_path(ucfg, provider_name, profile, exists=False)
validate_settings(ucfg, provider_name, settings, image_name=None)
# touch the TOML file if it doesn't exist
if not os.path.isfile(path):
open(path, "a").close()
# make sure settings files aren't readable by others, as they will contain
# sensitive credentials
current = stat.S_IMODE(os.lstat(path).st_mode)
os.chmod(path, current & ~stat.S_IROTH)
with open(path, "w") as settings_file:
toml.dump(settings, settings_file)
def load_settings(ucfg, provider_name, profile):
"""Load settings for a provider's profile
:param ucfg: upload config
:type ucfg: object
:param provider_name: the name of the cloud provider, e.g. "azure"
:type provider_name: str
:param profile: the name of the profile to load
:type profile: str != ""
:returns: The profile settings for the selected provider
:rtype: dict
:raises: ValueError when passed invalid settings or an invalid profile name
:raises: RuntimeError when the provider or profile couldn't be found
:raises: ValueError when the passed settings are invalid
This also calls validate_settings on the loaded settings, potentially
raising an error if the saved settings are invalid.
"""
path = _get_profile_path(ucfg, provider_name, profile)
with open(path) as file:
settings = toml.load(file)
validate_settings(ucfg, provider_name, settings)
return settings
def delete_profile(ucfg, provider_name, profile):
"""Delete a provider's profile settings file
:param ucfg: upload config
:type ucfg: object
:param provider_name: the name of the cloud provider, e.g. "azure"
:type provider_name: str
:param profile: the name of the profile to delete
:type profile: str != ""
:raises: ValueError when passed invalid settings or an invalid profile name
:raises: RuntimeError when the provider or profile couldn't be found
"""
path = _get_profile_path(ucfg, provider_name, profile)
if os.path.exists(path):
os.unlink(path)
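Taken together these helpers form the provider settings API. A minimal sketch of the round trip, where the ucfg dict stands in for the real upload configuration and the paths and values are hypothetical:
from lifted.providers import (delete_profile, list_providers, load_settings,
                              resolve_provider, save_settings)
ucfg = {"providers_dir": "/usr/share/lorax/lifted/providers/",
        "settings_dir": "/var/lib/lorax/upload/settings/"}
print(list_providers(ucfg))                      # e.g. ['aws', 'dummy', 'openstack', 'vsphere']
info = resolve_provider(ucfg, "aws")["settings-info"]
settings = {key: "example" for key in info}      # the AWS settings are all strings
save_settings(ucfg, "aws", "default", settings)  # validated, then written with world-read stripped
assert load_settings(ucfg, "aws", "default") == settings
delete_profile(ucfg, "aws", "default")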


@ -1,269 +0,0 @@
#
# Copyright (C) 2019 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
from functools import partial
from glob import glob
import logging
import multiprocessing
# We use a multiprocessing Pool for uploads so that we can cancel them with a
# simple SIGINT, which should bubble down to subprocesses.
from multiprocessing import Pool
# multiprocessing.dummy is to threads as multiprocessing is to processes.
# Since daemonic processes can't have children, we use a thread to monitor the
# upload pool.
from multiprocessing.dummy import Process
from operator import attrgetter
import os
import stat
import time
import pylorax.api.toml as toml
from lifted.upload import Upload
from lifted.providers import resolve_playbook_path, validate_settings
# the maximum number of simultaneous uploads
SIMULTANEOUS_UPLOADS = 1
log = logging.getLogger("lifted")
multiprocessing.log_to_stderr().setLevel(logging.INFO)
def _get_queue_path(ucfg):
path = ucfg["queue_dir"]
# create the upload_queue directory if it doesn't exist
os.makedirs(path, exist_ok=True)
return path
def _get_upload_path(ucfg, uuid, write=False):
# Make sure no path elements are present
uuid = os.path.basename(uuid)
path = os.path.join(_get_queue_path(ucfg), f"{uuid}.toml")
if write and not os.path.exists(path):
open(path, "a").close()
if os.path.exists(path):
# make sure uploads aren't readable by others, as they will contain
# sensitive credentials
current = stat.S_IMODE(os.lstat(path).st_mode)
os.chmod(path, current & ~stat.S_IROTH)
return path
def _list_upload_uuids(ucfg):
paths = glob(os.path.join(_get_queue_path(ucfg), "*"))
return [os.path.splitext(os.path.basename(path))[0] for path in paths]
def _write_upload(ucfg, upload):
with open(_get_upload_path(ucfg, upload.uuid, write=True), "w") as upload_file:
toml.dump(upload.serializable(), upload_file)
def _write_callback(ucfg):
return partial(_write_upload, ucfg)
def get_upload(ucfg, uuid, ignore_missing=False, ignore_corrupt=False):
"""Get an Upload object by UUID
:param ucfg: upload config
:type ucfg: object
:param uuid: UUID of the upload to get
:type uuid: str
:param ignore_missing: if True, don't raise a RuntimeError when the specified upload is missing, instead just return None
:type ignore_missing: bool
:param ignore_corrupt: if True, don't raise a RuntimeError when the specified upload could not be deserialized, instead just return None
:type ignore_corrupt: bool
:returns: the upload object or None
:rtype: Upload or None
:raises: RuntimeError
"""
try:
with open(_get_upload_path(ucfg, uuid), "r") as upload_file:
return Upload(**toml.load(upload_file))
except FileNotFoundError as error:
if not ignore_missing:
raise RuntimeError(f"Could not find upload {uuid}!") from error
except toml.TomlError as error:
if not ignore_corrupt:
raise RuntimeError(f"Could not parse upload {uuid}!") from error
def get_uploads(ucfg, uuids):
"""Gets a list of Upload objects from a list of upload UUIDs, ignoring
missing or corrupt uploads
:param ucfg: upload config
:type ucfg: object
:param uuids: list of upload UUIDs to get
:type uuids: list of str
:returns: a list of the uploads that were successfully deserialized
:rtype: list of Upload
"""
uploads = (
get_upload(ucfg, uuid, ignore_missing=True, ignore_corrupt=True)
for uuid in uuids
)
return list(filter(None, uploads))
def get_all_uploads(ucfg):
"""Get a list of all stored Upload objects
:param ucfg: upload config
:type ucfg: object
:returns: a list of all stored upload objects
:rtype: list of Upload
"""
return get_uploads(ucfg, _list_upload_uuids(ucfg))
def create_upload(ucfg, provider_name, image_name, settings):
"""Creates a new upload
:param ucfg: upload config
:type ucfg: object
:param provider_name: the name of the cloud provider to upload to, e.g. "azure"
:type provider_name: str
:param image_name: what to name the image in the cloud
:type image_name: str
:param settings: settings to pass to the upload, specific to the cloud provider
:type settings: dict
:returns: the created upload object
:rtype: Upload
"""
validate_settings(ucfg, provider_name, settings, image_name)
return Upload(
provider_name=provider_name,
playbook_path=resolve_playbook_path(ucfg, provider_name),
image_name=image_name,
settings=settings,
status_callback=_write_callback(ucfg),
)
def ready_upload(ucfg, uuid, image_path):
"""Pass an image_path to an upload and mark it ready to execute
:param ucfg: upload config
:type ucfg: object
:param uuid: the UUID of the upload to mark ready
:type uuid: str
:param image_path: the path of the image to pass to the upload
:type image_path: str
"""
get_upload(ucfg, uuid).ready(image_path, _write_callback(ucfg))
def reset_upload(ucfg, uuid, new_image_name=None, new_settings=None):
"""Reset an upload so it can be attempted again
:param ucfg: upload config
:type ucfg: object
:param uuid: the UUID of the upload to reset
:type uuid: str
:param new_image_name: optionally update the upload's image_name
:type new_image_name: str
:param new_settings: optionally update the upload's settings
:type new_settings: dict
"""
upload = get_upload(ucfg, uuid)
validate_settings(
ucfg,
upload.provider_name,
new_settings or upload.settings,
new_image_name or upload.image_name,
)
if new_image_name:
upload.image_name = new_image_name
if new_settings:
upload.settings = new_settings
upload.reset(_write_callback(ucfg))
def cancel_upload(ucfg, uuid):
"""Cancel an upload
:param ucfg: the compose config
:type ucfg: ComposerConfig
:param uuid: the UUID of the upload to cancel
:type uuid: str
"""
get_upload(ucfg, uuid).cancel(_write_callback(ucfg))
def delete_upload(ucfg, uuid):
"""Delete an upload
:param ucfg: the compose config
:type ucfg: ComposerConfig
:param uuid: the UUID of the upload to delete
:type uuid: str
"""
upload = get_upload(ucfg, uuid)
if upload and upload.is_cancellable():
upload.cancel()
os.remove(_get_upload_path(ucfg, uuid))
def start_upload_monitor(ucfg):
"""Start a thread that manages the upload queue
:param ucfg: the compose config
:type ucfg: ComposerConfig
"""
process = Process(target=_monitor, args=(ucfg,))
process.daemon = True
process.start()
def _monitor(ucfg):
log.info("Started upload monitor.")
for upload in get_all_uploads(ucfg):
# Set abandoned uploads to FAILED
if upload.status == "RUNNING":
upload.set_status("FAILED", _write_callback(ucfg))
pool = Pool(processes=SIMULTANEOUS_UPLOADS)
pool_uuids = set()
def remover(uuid):
return lambda _: pool_uuids.remove(uuid)
while True:
# Every second, scoop up READY uploads from the filesystem and throw
# them in the pool
all_uploads = get_all_uploads(ucfg)
for upload in sorted(all_uploads, key=attrgetter("creation_time")):
ready = upload.status == "READY"
if ready and upload.uuid not in pool_uuids:
log.info("Starting upload %s...", upload.uuid)
pool_uuids.add(upload.uuid)
callback = remover(upload.uuid)
pool.apply_async(
upload.execute,
(_write_callback(ucfg),),
callback=callback,
error_callback=callback,
)
time.sleep(1)
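A sketch of the queue lifecycle for a single upload, with ucfg keys as produced by lifted.config.configure() and hypothetical settings and paths:
from lifted.queue import (create_upload, get_upload, ready_upload,
                          start_upload_monitor)
ucfg = {"providers_dir": "/usr/share/lorax/lifted/providers/",
        "settings_dir": "/var/lib/lorax/upload/settings/",
        "queue_dir": "/var/lib/lorax/upload/queue/"}
start_upload_monitor(ucfg)  # daemon thread that feeds the worker pool
settings = {"aws_access_key": "AKIA...", "aws_secret_key": "...",
            "aws_region": "us-east-1", "aws_bucket": "mybucket"}
upload = create_upload(ucfg, "aws", "my-server", settings)  # WAITING, persisted as TOML
ready_upload(ucfg, upload.uuid, "/var/tmp/disk.ami")        # READY; the monitor runs it
print(get_upload(ucfg, upload.uuid).status)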


@ -1,212 +0,0 @@
#
# Copyright (C) 2019 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
from datetime import datetime
import logging
from multiprocessing import current_process
import os
import signal
from uuid import uuid4
from ansible_runner.interface import run as ansible_run
from ansible_runner.exceptions import AnsibleRunnerException
log = logging.getLogger("lifted")
class Upload:
"""Represents an upload of an image to a cloud provider. Instances of this
class are serialized as TOML and stored in the upload queue directory,
which is /var/lib/lorax/upload/queue/ by default"""
def __init__(
self,
uuid=None,
provider_name=None,
playbook_path=None,
image_name=None,
settings=None,
creation_time=None,
upload_log=None,
upload_pid=None,
image_path=None,
status_callback=None,
status=None,
):
self.uuid = uuid or str(uuid4())
self.provider_name = provider_name
self.playbook_path = playbook_path
self.image_name = image_name
self.settings = settings
self.creation_time = creation_time or datetime.now().timestamp()
self.upload_log = upload_log or ""
self.upload_pid = upload_pid
self.image_path = image_path
if status:
self.status = status
else:
self.set_status("WAITING", status_callback)
def _log(self, message, callback=None):
"""Logs something to the upload log with an optional callback
:param message: the object to log
:type message: object
:param callback: a function of the form callback(self)
:type callback: function
"""
if message:
messages = str(message).splitlines()
# Log multi-line messages as individual log lines
for m in messages:
log.info(m)
self.upload_log += f"{message}\n"
if callback:
callback(self)
def serializable(self):
"""Returns a representation of the object as a dict for serialization
:returns: the object's __dict__
:rtype: dict
"""
return self.__dict__
def summary(self):
"""Return a dict with useful information about the upload
:returns: upload information
:rtype: dict
"""
return {
"uuid": self.uuid,
"status": self.status,
"provider_name": self.provider_name,
"image_name": self.image_name,
"image_path": self.image_path,
"creation_time": self.creation_time,
"settings": self.settings,
}
def set_status(self, status, status_callback=None):
"""Sets the status of the upload with an optional callback
:param status: the new status
:type status: str
:param status_callback: a function of the form callback(self)
:type status_callback: function
"""
self._log("Setting status to %s" % status)
self.status = status
if status_callback:
status_callback(self)
def ready(self, image_path, status_callback):
"""Provide an image_path and mark the upload as ready to execute
:param image_path: path of the image to upload
:type image_path: str
:param status_callback: a function of the form callback(self)
:type status_callback: function
"""
self._log("Setting image_path to %s" % image_path)
self.image_path = image_path
if self.status == "WAITING":
self.set_status("READY", status_callback)
def reset(self, status_callback):
"""Reset the upload so it can be attempted again
:param status_callback: a function of the form callback(self)
:type status_callback: function
"""
if self.is_cancellable():
raise RuntimeError(f"Can't reset, status is {self.status}!")
if not self.image_path:
raise RuntimeError("Can't reset, no image supplied yet!")
# self.error = None
self._log("Resetting state")
self.set_status("READY", status_callback)
def is_cancellable(self):
"""Is the upload in a cancellable state?
:returns: whether the upload is cancellable
:rtype: bool
"""
return self.status in ("WAITING", "READY", "RUNNING")
def cancel(self, status_callback=None):
"""Cancel the upload. Sends a SIGINT to self.upload_pid.
:param status_callback: a function of the form callback(self)
:type status_callback: function
"""
if not self.is_cancellable():
raise RuntimeError(f"Can't cancel, status is already {self.status}!")
if self.upload_pid:
os.kill(self.upload_pid, signal.SIGINT)
self.set_status("CANCELLED", status_callback)
def execute(self, status_callback=None):
"""Execute the upload. Meant to be called from a dedicated process so
that the upload can be cancelled by sending a SIGINT to
self.upload_pid.
:param status_callback: a function of the form callback(self)
:type status_callback: function
"""
if self.status != "READY":
raise RuntimeError("This upload is not ready!")
try:
self.upload_pid = current_process().pid
self.set_status("RUNNING", status_callback)
self._log("Executing playbook.yml")
# NOTE: event_handler doesn't seem to be called for playbook errors
logger = lambda e: self._log(e["stdout"], status_callback)
runner = ansible_run(
playbook=self.playbook_path,
extravars={
**self.settings,
"image_name": self.image_name,
"image_path": self.image_path,
},
event_handler=logger,
verbosity=2,
)
# Try logging events and stats -- but they may not exist, so catch the error
try:
for e in runner.events:
self._log("%s" % dir(e), status_callback)
self._log("%s" % runner.stats, status_callback)
except AnsibleRunnerException:
self._log("%s" % runner.stdout.read(), status_callback)
if runner.status == "successful":
self.set_status("FINISHED", status_callback)
else:
self.set_status("FAILED", status_callback)
except Exception:
import traceback
log.error(traceback.format_exc(limit=2))
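# A hedged usage sketch, not part of the original file: how an Upload moves
# through its states. The provider name, image name, and callback are all
# hypothetical example values; no playbook is actually executed here.
#
#     def print_status(upload):
#         print(upload.uuid, upload.status)
#
#     u = Upload(provider_name="example", image_name="disk.qcow2",
#                settings={}, status_callback=print_status)    # -> WAITING
#     u.ready("/var/tmp/disk.qcow2", print_status)             # -> READY
#     u.cancel(print_status)                                   # -> CANCELLED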

View File

@ -1,17 +0,0 @@
#
# lorax-composer API server
#
# Copyright (C) 2017 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

View File

@ -1,49 +0,0 @@
#
# Copyright (C) 2018 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
def insort_left(a, x, key=None, lo=0, hi=None):
"""Insert item x in list a, and keep it sorted assuming a is sorted.
:param a: sorted list
:type a: list
:param x: item to insert into the list
:type x: object
:param key: Function to use to compare items in the list
:type key: function
:returns: index where the item was inserted
:rtype: int
If x is already in a, insert it to the left of the leftmost x.
Optional args lo (default 0) and hi (default len(a)) bound the
slice of a to be searched.
This is a modified version of bisect.insort_left that can use a
function for the compare, and returns the index position where it
was inserted.
"""
if key is None:
key = lambda i: i
if lo < 0:
raise ValueError('lo must be non-negative')
if hi is None:
hi = len(a)
while lo < hi:
mid = (lo+hi)//2
if key(a[mid]) < key(x): lo = mid+1
else: hi = mid
a.insert(lo, x)
return lo
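# A small usage sketch, not part of the original file: keep a list of dicts
# sorted case-insensitively by "name" while inserting (the package names are
# example values).
#
#     projects = [{"name": "bash"}, {"name": "Vim"}]
#     idx = insort_left(projects, {"name": "GCC"}, key=lambda p: p["name"].lower())
#     # projects is now bash, GCC, Vim and idx == 1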

View File

@ -1,44 +0,0 @@
#
# Copyright (C) 2018 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
import logging
log = logging.getLogger("lorax-composer")
from flask import jsonify
from functools import update_wrapper
# A decorator for checking the parameters provided to the API route implementing
# functions. The tuples parameter is a list of tuples. Each tuple is the string
# name of a parameter ("blueprint_name", not blueprint_name), the value it's set
# to by flask if the caller did not provide it, and a message to be returned to
# the user.
#
# If the parameter is set to its default, the error message is returned. Otherwise,
# the decorated function is called and its return value is returned.
def checkparams(tuples):
def decorator(f):
def wrapped_function(*args, **kwargs):
for tup in tuples:
if kwargs[tup[0]] == tup[1]:
log.error("(%s) %s", f.__name__, tup[2])
return jsonify(status=False, errors=[tup[2]]), 400
return f(*args, **kwargs)
return update_wrapper(wrapped_function, f)
return decorator
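# A hedged usage sketch, not part of the original file; the Blueprint, routes,
# and error message are assumptions (and flask's Blueprint is assumed to be
# imported). flask fills blueprint_name with "" when the caller omits it, and
# checkparams turns that into a 400 response.
#
#     api = Blueprint("v0", __name__)
#
#     @api.route("/blueprints/info", defaults={"blueprint_name": ""})
#     @api.route("/blueprints/info/<blueprint_name>")
#     @checkparams([("blueprint_name", "", "no blueprint name given")])
#     def blueprint_info(blueprint_name):
#         return jsonify(status=True, name=blueprint_name)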

View File

@ -1,63 +0,0 @@
#
# cmdline.py
#
# Copyright (C) 2018 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
import os
import sys
import argparse
from pylorax import vernum
DEFAULT_USER = "root"
DEFAULT_GROUP = "weldr"
version = "{0}-{1}".format(os.path.basename(sys.argv[0]), vernum)
def lorax_composer_parser():
""" Return the ArgumentParser for lorax-composer"""
parser = argparse.ArgumentParser(description="Lorax Composer API Server",
fromfile_prefix_chars="@")
parser.add_argument("--socket", default="/run/weldr/api.socket", metavar="SOCKET",
help="Path to the socket file to listen on")
parser.add_argument("--user", default=DEFAULT_USER, metavar="USER",
help="User to use for reduced permissions")
parser.add_argument("--group", default=DEFAULT_GROUP, metavar="GROUP",
help="Group to set ownership of the socket to")
parser.add_argument("--log", dest="logfile", default="/var/log/lorax-composer/composer.log", metavar="LOG",
help="Path to logfile (/var/log/lorax-composer/composer.log)")
parser.add_argument("--mockfiles", default="/var/tmp/bdcs-mockfiles/", metavar="MOCKFILES",
help="Path to JSON files used for /api/mock/ paths (/var/tmp/bdcs-mockfiles/)")
parser.add_argument("--sharedir", type=os.path.abspath, metavar="SHAREDIR",
help="Directory containing all the templates. Overrides config file sharedir")
parser.add_argument("-V", action="store_true", dest="showver",
help="show program's version number and exit")
parser.add_argument("-c", "--config", default="/etc/lorax/composer.conf", metavar="CONFIG",
help="Path to lorax-composer configuration file.")
parser.add_argument("--releasever", default=None, metavar="STRING",
help="Release version to use for $releasever in dnf repository urls")
parser.add_argument("--tmp", default="/var/tmp",
help="Top level temporary directory")
parser.add_argument("--proxy", default=None, metavar="PROXY",
help="Set proxy for DNF, overrides configuration file setting.")
parser.add_argument("--no-system-repos", action="store_true", default=False,
help="Do not copy over system repos from /etc/yum.repos.d/ at startup")
parser.add_argument("BLUEPRINTS", metavar="BLUEPRINTS",
help="Path to the blueprints")
return parser
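# A minimal sketch, not part of the original file: parse the default options
# plus the required blueprints path (the path is an example value).
#
#     opts = lorax_composer_parser().parse_args(["/var/lib/lorax/composer/blueprints/"])
#     print(opts.socket)      # /run/weldr/api.socket
#     print(opts.BLUEPRINTS)  # /var/lib/lorax/composer/blueprints/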

File diff suppressed because it is too large

View File

@ -1,140 +0,0 @@
#
# Copyright (C) 2017 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
import configparser
import grp
import os
import pwd
from pylorax.sysutils import joinpaths
class ComposerConfig(configparser.ConfigParser):
def get_default(self, section, option, default):
try:
return self.get(section, option)
except configparser.Error:
return default
def configure(conf_file="/etc/lorax/composer.conf", root_dir="/", test_config=False):
"""lorax-composer configuration
:param conf_file: Path to the config file overriding the default settings
:type conf_file: str
:param root_dir: Directory to prepend to paths, defaults to /
:type root_dir: str
:param test_config: Set to True to skip reading conf_file
:type test_config: bool
:returns: Configuration
:rtype: ComposerConfig
"""
conf = ComposerConfig()
# set defaults
conf.add_section("composer")
conf.set("composer", "share_dir", os.path.realpath(joinpaths(root_dir, "/usr/share/lorax/")))
conf.set("composer", "lib_dir", os.path.realpath(joinpaths(root_dir, "/var/lib/lorax/composer/")))
conf.set("composer", "repo_dir", os.path.realpath(joinpaths(root_dir, "/var/lib/lorax/composer/repos.d/")))
conf.set("composer", "dnf_conf", os.path.realpath(joinpaths(root_dir, "/var/tmp/composer/dnf.conf")))
conf.set("composer", "dnf_root", os.path.realpath(joinpaths(root_dir, "/var/tmp/composer/dnf/root/")))
conf.set("composer", "cache_dir", os.path.realpath(joinpaths(root_dir, "/var/tmp/composer/cache/")))
conf.set("composer", "tmp", os.path.realpath(joinpaths(root_dir, "/var/tmp/")))
conf.add_section("users")
conf.set("users", "root", "1")
# Enable all available repo files by default
conf.add_section("repos")
conf.set("repos", "use_system_repos", "1")
conf.set("repos", "enabled", "*")
conf.add_section("dnf")
if not test_config:
# read the config file
if os.path.isfile(conf_file):
conf.read(conf_file)
return conf
def make_owned_dir(p_dir, uid, gid):
"""Make a directory and its parents, setting owner and group
:param p_dir: path to directory to create
:type p_dir: string
:param uid: uid of owner
:type uid: int
:param gid: gid of owner
:type gid: int
:returns: list of errors
:rtype: list of str
Check to make sure it does not have o+rw permissions and that it is owned by uid:gid
"""
errors = []
if not os.path.isdir(p_dir):
# Make sure no o+rw permissions are set
orig_umask = os.umask(0o006)
os.makedirs(p_dir, 0o771)
os.chown(p_dir, uid, gid)
os.umask(orig_umask)
else:
p_stat = os.stat(p_dir)
if p_stat.st_mode & 0o006 != 0:
errors.append("Incorrect permissions on %s, no o+rw permissions are allowed." % p_dir)
if p_stat.st_gid != gid or p_stat.st_uid != 0:
gr_name = grp.getgrgid(gid).gr_name
u_name = pwd.getpwuid(uid).pw_name
errors.append("%s should be owned by %s:%s" % (p_dir, u_name, gr_name))
return errors
def make_dnf_dirs(conf, uid, gid):
"""Make any missing dnf directories owned by user:group
:param conf: The configuration to use
:type conf: ComposerConfig
:param uid: uid of owner
:type uid: int
:param gid: gid of owner
:type gid: int
:returns: list of errors
:rtype: list of str
"""
errors = []
for p in ["dnf_conf", "repo_dir", "cache_dir", "dnf_root"]:
p_dir = os.path.abspath(conf.get("composer", p))
if p == "dnf_conf":
p_dir = os.path.dirname(p_dir)
errors.extend(make_owned_dir(p_dir, uid, gid))
return errors
def make_queue_dirs(conf, gid):
"""Make any missing queue directories
:param conf: The configuration to use
:type conf: ComposerConfig
:param gid: Group ID that has access to the queue directories
:type gid: int
:returns: list of errors
:rtype: list of str
"""
errors = []
lib_dir = conf.get("composer", "lib_dir")
for p in ["queue/run", "queue/new", "results"]:
p_dir = joinpaths(lib_dir, p)
errors.extend(make_owned_dir(p_dir, 0, gid))
return errors
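# A minimal sketch, not part of the original file: read back the defaults
# without touching /etc/lorax/composer.conf (test_config=True skips it).
#
#     conf = configure(test_config=True)
#     conf.get("composer", "lib_dir")          # /var/lib/lorax/composer
#     conf.get_default("dnf", "proxy", None)   # None unless configured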

View File

@ -1,186 +0,0 @@
#
# Copyright (C) 2017-2018 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# pylint: disable=bad-preconf-access
import logging
log = logging.getLogger("lorax-composer")
import dnf
import dnf.logging
from glob import glob
import os
import shutil
from threading import Lock
import time
from pylorax import DEFAULT_PLATFORM_ID
from pylorax.sysutils import flatconfig
class DNFLock(object):
"""Hold the dnf.Base object and a Lock to control access to it.
self.dbo holds the dnf.Base object, but it *may* change
from one call to the next if the upstream repositories have changed.
"""
def __init__(self, conf, expire_secs=6*60*60):
self._conf = conf
self._lock = Lock()
self.dbo = get_base_object(self._conf)
self._expire_secs = expire_secs
self._expire_time = time.time() + self._expire_secs
@property
def lock(self):
"""Check for repo updates (using expiration time) and return the lock
If the repository has been updated, tear down the old dnf.Base and
create a new one. This is the only way to force dnf to use the new
metadata.
"""
if time.time() > self._expire_time:
return self.lock_check
return self._lock
@property
def lock_check(self):
"""Force a check for repo updates and return the lock
Use this method sparingly, it removes the repodata and downloads a new copy every time.
"""
self._expire_time = time.time() + self._expire_secs
self.dbo.update_cache()
return self._lock
def get_base_object(conf):
"""Get the DNF object with settings from the config file
:param conf: configuration object
:type conf: ComposerParser
:returns: A DNF Base object
:rtype: dnf.Base
"""
cachedir = os.path.abspath(conf.get("composer", "cache_dir"))
dnfconf = os.path.abspath(conf.get("composer", "dnf_conf"))
dnfroot = os.path.abspath(conf.get("composer", "dnf_root"))
repodir = os.path.abspath(conf.get("composer", "repo_dir"))
# Setup the config for the DNF Base object
dbo = dnf.Base()
dbc = dbo.conf
# TODO - Handle this
# dbc.logdir = logdir
dbc.installroot = dnfroot
if not os.path.isdir(dnfroot):
os.makedirs(dnfroot)
if not os.path.isdir(repodir):
os.makedirs(repodir)
dbc.cachedir = cachedir
dbc.reposdir = [repodir]
dbc.install_weak_deps = False
dbc.prepend_installroot('persistdir')
# tsflags is a weird 'AppendOption' thing: when you set it, it actually
# appends. Doing this adds 'nodocs' to the existing list of values over in
# libdnf; it does not replace the existing values.
dbc.tsflags = ['nodocs']
if conf.get_default("dnf", "proxy", None):
dbc.proxy = conf.get("dnf", "proxy")
if conf.has_option("dnf", "sslverify") and not conf.getboolean("dnf", "sslverify"):
dbc.sslverify = False
# If the system repos are enabled read the dnf vars from /etc/dnf/vars/
if not conf.has_option("repos", "use_system_repos") or conf.getboolean("repos", "use_system_repos"):
dbc.substitutions.update_from_etc("/")
log.info("dnf vars: %s", dbc.substitutions)
_releasever = conf.get_default("composer", "releasever", None)
if not _releasever:
# Use the releasever of the host system
_releasever = dnf.rpm.detect_releasever("/")
log.info("releasever = %s", _releasever)
dbc.releasever = _releasever
# DNF 3.2 needs to have module_platform_id set, otherwise depsolve won't work correctly
if not os.path.exists("/etc/os-release"):
log.warning("/etc/os-release is missing, cannot determine platform id, falling back to %s", DEFAULT_PLATFORM_ID)
platform_id = DEFAULT_PLATFORM_ID
else:
os_release = flatconfig("/etc/os-release")
platform_id = os_release.get("PLATFORM_ID", DEFAULT_PLATFORM_ID)
log.info("Using %s for module_platform_id", platform_id)
dbc.module_platform_id = platform_id
# Make sure metadata is always current
dbc.metadata_expire = 0
dbc.metadata_expire_filter = "never"
# write the dnf configuration file
with open(dnfconf, "w") as f:
f.write(dbc.dump())
# dnf needs the repos all in one directory, composer uses repodir for this
# if system repos are supposed to be used, copy them into repodir, overwriting any previous copies
if not conf.has_option("repos", "use_system_repos") or conf.getboolean("repos", "use_system_repos"):
for repo_file in glob("/etc/yum.repos.d/*.repo"):
shutil.copy2(repo_file, repodir)
dbo.read_all_repos()
# Remove any duplicate repo entries. These can cause problems with Anaconda,
# which will fail with disk space errors.
repos = sorted(list(r.id for r in dbo.repos.iter_enabled()))
seen = {"baseurl": [], "mirrorlist": [], "metalink": []}
for source_name in repos:
remove = False
repo = dbo.repos.get(source_name, None)
if repo is None:
log.warning("repo %s vanished while removing duplicates", source_name)
continue
if repo.baseurl:
if repo.baseurl[0] in seen["baseurl"]:
log.info("Removing duplicate repo: %s baseurl=%s", source_name, repo.baseurl[0])
remove = True
else:
seen["baseurl"].append(repo.baseurl[0])
elif repo.mirrorlist:
if repo.mirrorlist in seen["mirrorlist"]:
log.info("Removing duplicate repo: %s mirrorlist=%s", source_name, repo.mirrorlist)
remove = True
else:
seen["mirrorlist"].append(repo.mirrorlist)
elif repo.metalink:
if repo.metalink in seen["metalink"]:
log.info("Removing duplicate repo: %s metalink=%s", source_name, repo.metalink)
remove = True
else:
seen["metalink"].append(repo.metalink)
if remove:
del dbo.repos[source_name]
# Update the metadata from the enabled repos to speed up later operations
log.info("Updating repository metadata")
try:
dbo.fill_sack(load_system_repo=False)
dbo.read_comps()
dbo.update_cache()
except dnf.exceptions.Error as e:
log.error("Failed to update metadata: %s", str(e))
raise RuntimeError("Fetching metadata failed: %s" % str(e))
return dbo
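# A hedged usage sketch, not part of the original file: serialize access to
# the shared dnf.Base object through DNFLock before querying it ("bash" is an
# example package name, and conf is assumed to be a ComposerConfig as
# returned by configure()).
#
#     dnflock = DNFLock(conf)
#     with dnflock.lock:
#         pkgs = list(dnflock.dbo.sack.query().available().filter(name="bash"))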

View File

@ -1,84 +0,0 @@
#
# lorax-composer API server
#
# Copyright (C) 2018 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# HTTP errors
HTTP_ERROR = "HTTPError"
# Returned from the API when either an invalid compose type is given, or no
# compose type is given.
BAD_COMPOSE_TYPE = "BadComposeType"
# Returned from the API when ?limit= or ?offset= is given something that does
# not convert into an integer.
BAD_LIMIT_OR_OFFSET = "BadLimitOrOffset"
# Returned from the API for all other errors from a /blueprints/* route.
BLUEPRINTS_ERROR = "BlueprintsError"
# Returned from the API for any other error resulting from /compose failing.
BUILD_FAILED = "BuildFailed"
# Returned from the API when it expected a build to be in a state other than
# what it currently is. This most often happens when asking for results from
# a build that is not yet done.
BUILD_IN_WRONG_STATE = "BuildInWrongState"
# Returned from the API when some file is requested that is not present - a log
# file, the compose results, etc.
BUILD_MISSING_FILE = "BuildMissingFile"
# Returned from the API for all other errors from a /compose/* route.
COMPOSE_ERROR = "ComposeError"
# Returned from the API for all errors from a /upload/* route.
UPLOAD_ERROR = "UploadError" # TODO these errors should be more specific
# Returned from the API when invalid characters are used in a route path or in
# some identifier.
INVALID_CHARS = "InvalidChars"
# Returned from the API when /compose is called without the POST body telling it
# what to compose.
MISSING_POST = "MissingPost"
# Returned from the API for all other errors from a /modules/* route.
MODULES_ERROR = "ModulesError"
# Returned from the API for all other errors from a /projects/* route.
PROJECTS_ERROR = "ProjectsError"
# Returned from the API when someone tries to modify an immutable system source.
SYSTEM_SOURCE = "SystemSource"
# Returned from the API when a blueprint that was requested does not exist.
UNKNOWN_BLUEPRINT = "UnknownBlueprint"
# Returned from the API when a commit that was requested does not exist.
UNKNOWN_COMMIT = "UnknownCommit"
# Returned from the API when a module that was requested does not exist.
UNKNOWN_MODULE = "UnknownModule"
# Returned from the API when a project that was requested does not exist.
UNKNOWN_PROJECT = "UnknownProject"
# Returned from the API when a source that was requested does not exist.
UNKNOWN_SOURCE = "UnknownSource"
# Returned from the API when a UUID that was requested does not exist.
UNKNOWN_UUID = "UnknownUUID"

View File

@ -1,54 +0,0 @@
#
# Copyright (C) 2019 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
""" Flask Blueprints that support skipping routes
When using Blueprints for API versioning you will usually want to fall back
to the previous version's rules for routes that have no new behavior. To do
this we add a 'skip_rules' list to the Blueprint's options dictionary. It lists
all of the routes that you do not want to register.
For example:
from pylorax.api.v0 import v0
from pylorax.api.v1 import v1
server.register_blueprint(v0, url_prefix="/api/v0/")
server.register_blueprint(v0, url_prefix="/api/v1/", skip_rules=["/blueprints/list"]
server.register_blueprint(v1, url_prefix="/api/v1/")
This will register all of v0's routes under `/api/v0`, and all but `/blueprints/list` under /api/v1,
and then register v1's version of `/blueprints/list` under `/api/v1`
"""
from flask import Blueprint
from flask.blueprints import BlueprintSetupState
class BlueprintSetupStateSkip(BlueprintSetupState):
def __init__(self, blueprint, app, options, first_registration, skip_rules):
self._skip_rules = skip_rules
super(BlueprintSetupStateSkip, self).__init__(blueprint, app, options, first_registration)
def add_url_rule(self, rule, endpoint=None, view_func=None, **options):
if rule not in self._skip_rules:
super(BlueprintSetupStateSkip, self).add_url_rule(rule, endpoint, view_func, **options)
class BlueprintSkip(Blueprint):
def __init__(self, *args, **kwargs):
super(BlueprintSkip, self).__init__(*args, **kwargs)
def make_setup_state(self, app, options, first_registration=False):
skip_rules = options.pop("skip_rules", [])
return BlueprintSetupStateSkip(self, app, options, first_registration, skip_rules)

View File

@ -1,222 +0,0 @@
# Copyright (C) 2019 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
""" Clone a git repository and package it as an rpm
This module contains functions for cloning a git repo, creating a tar archive of
the selected commit, branch, or tag, and packaging the files into an rpm that will
be installed by anaconda when creating the image.
"""
import logging
log = logging.getLogger("lorax-composer")
import os
from rpmfluff import SimpleRpmBuild
import shutil
import subprocess
import tempfile
import time
from pylorax.sysutils import joinpaths
def get_repo_description(gitRepo):
""" Return a description including the git repo and reference
:param gitRepo: A dict with the repository details
:type gitRepo: dict
:returns: A string with the git repo url and reference
:rtype: str
"""
return "Created from %s, reference '%s', on %s" % (gitRepo["repo"], gitRepo["ref"], time.ctime())
class GitArchiveTarball:
"""Create a git archive of the selected git repo and reference"""
def __init__(self, gitRepo):
self._gitRepo = gitRepo
self.sourceName = self._gitRepo["rpmname"]+".tar.xz"
def write_file(self, sourcesDir):
""" Create the tar archive
:param sourcesDir: Path to use for creating the archive
:type sourcesDir: str
This clones the git repository and creates a git archive from the specified reference.
The result is in RPMNAME.tar.xz under the sourcesDir
"""
# Clone the repository into a temporary location
cmd = ["git", "clone", self._gitRepo["repo"], joinpaths(sourcesDir, "gitrepo")]
log.debug(cmd)
try:
subprocess.check_output(cmd, stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as e:
log.error("Failed to clone %s: %s", self._gitRepo["repo"], e.output)
raise RuntimeError("Failed to clone %s" % self._gitRepo["repo"])
oldcwd = os.getcwd()
try:
os.chdir(joinpaths(sourcesDir, "gitrepo"))
# Configure archive to create a .tar.xz
cmd = ["git", "config", "tar.tar.xz.command", "xz -c"]
log.debug(cmd)
subprocess.check_call(cmd)
cmd = ["git", "archive", "--prefix", self._gitRepo["rpmname"] + "/", "-o", joinpaths(sourcesDir, self.sourceName), self._gitRepo["ref"]]
log.debug(cmd)
try:
subprocess.check_output(cmd, stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as e:
log.error("Failed to archive %s: %s", self._gitRepo["repo"], e.output)
raise RuntimeError('Failed to archive %s from ref "%s"' % (self._gitRepo["repo"],
self._gitRepo["ref"]))
finally:
# Cleanup even if there was an error
os.chdir(oldcwd)
shutil.rmtree(joinpaths(sourcesDir, "gitrepo"))
class GitRpmBuild(SimpleRpmBuild):
"""Build an rpm containing files from a git repository"""
def __init__(self, *args, **kwargs):
self._base_dir = None
super().__init__(*args, **kwargs)
def check(self):
raise NotImplementedError
def get_base_dir(self):
"""Place all the files under a temporary directory + rpmbuild/
"""
if not self._base_dir:
self._base_dir = tempfile.mkdtemp(prefix="lorax-git-rpm.")
return joinpaths(self._base_dir, "rpmbuild")
def cleanup_tmpdir(self):
"""Remove the temporary directory and all of its contents
"""
if len(self._base_dir) < 5:
raise RuntimeError("Invalid base_dir: %s" % self.get_base_dir())
shutil.rmtree(self._base_dir)
def clean(self):
"""Remove the base directory from inside the tmpdir"""
if len(self.get_base_dir()) < 5:
raise RuntimeError("Invalid base_dir: %s" % self.get_base_dir())
shutil.rmtree(self.get_base_dir(), ignore_errors=True)
def add_git_tarball(self, gitRepo):
"""Add a tar archive of a git repository to the rpm
:param gitRepo: A dict with the repository details
:type gitRepo: dict
This populates the rpm with the URL of the git repository, the summary
describing the repo, the description of the repository and reference used,
and sets up the rpm to install the archive contents into the destination
path.
"""
self.addUrl(gitRepo["repo"])
self.add_summary(gitRepo["summary"])
self.add_description(get_repo_description(gitRepo))
self.addLicense("Unknown")
sourceIndex = self.add_source(GitArchiveTarball(gitRepo))
self.section_build += "tar -xvf %s\n" % self.sources[sourceIndex].sourceName
dest = os.path.normpath(gitRepo["destination"])
# Prevent double slash root
if dest == "/":
dest = ""
self.create_parent_dirs(dest)
self.section_install += "cp -r %s/. $RPM_BUILD_ROOT/%s\n" % (gitRepo["rpmname"], dest)
sub = self.get_subpackage(None)
if not dest:
# / is special, we don't want to include / itself, just what's under it
sub.section_files += "/*\n"
else:
sub.section_files += "%s/\n" % dest
def make_git_rpm(gitRepo, dest):
""" Create an rpm from the specified git repo
:param gitRepo: A dict with the repository details
:type gitRepo: dict
This will clone the git repository, create an archive of the selected reference,
and build an rpm that will install the files from the repository under the destination
directory. The gitRepo dict should have the following fields::
rpmname: "server-config"
rpmversion: "1.0"
rpmrelease: "1"
summary: "Setup files for server deployment"
repo: "PATH OF GIT REPO TO CLONE"
ref: "v1.0"
destination: "/opt/server/"
* rpmname: Name of the rpm to create, also used as the prefix name in the tar archive
* rpmversion: Version of the rpm, eg. "1.0.0"
* rpmrelease: Release of the rpm, eg. "1"
* summary: Summary string for the rpm
* repo: URL of the git repo to clone and create the archive from
* ref: Git reference to check out. eg. origin/branch-name, git tag, or git commit hash
* destination: Path to install the / of the git repo at when installing the rpm
"""
gitRpm = GitRpmBuild(gitRepo["rpmname"], gitRepo["rpmversion"], gitRepo["rpmrelease"], ["noarch"])
try:
gitRpm.add_git_tarball(gitRepo)
gitRpm.do_make()
rpmfile = gitRpm.get_built_rpm("noarch")
shutil.move(rpmfile, dest)
except Exception as e:
log.error("Creating git repo rpm: %s", e)
raise RuntimeError("Creating git repo rpm: %s" % e)
finally:
gitRpm.cleanup_tmpdir()
return os.path.basename(rpmfile)
# Create the git rpms, if any, and return the path to the repo under results_dir
def create_gitrpm_repo(results_dir, recipe):
"""Create a dnf repository with the rpms from the recipe
:param results_dir: Path to create the repository under
:type results_dir: str
:param recipe: The recipe to get the repos.git entries from
:type recipe: Recipe
:returns: Path to the dnf repository or ""
:rtype: str
This function creates a dnf repository directory at results_dir+"repo/",
creates rpms for all of the repos.git entries in the recipe, runs createrepo_c
on the dnf repository so that Anaconda can use it, and returns the path to the
repository to the caller.
"""
if "repos" not in recipe or "git" not in recipe["repos"]:
return ""
gitrepo = joinpaths(results_dir, "repo/")
if not os.path.exists(gitrepo):
os.makedirs(gitrepo)
for r in recipe["repos"]["git"]:
make_git_rpm(r, gitrepo)
cmd = ["createrepo_c", gitrepo]
log.debug(cmd)
try:
subprocess.check_output(cmd, stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as e:
log.error("Failed to create repo at %s: %s", gitrepo, e.output)
raise RuntimeError("Failed to create repo at %s" % gitrepo)
return gitrepo
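# A hedged sketch, not part of the original file: build one rpm from a git
# repo. Every field value below is an example, the repo URL must point at a
# real clonable repository, and /tmp/repo must exist.
#
#     example_repo = {
#         "rpmname": "server-config",
#         "rpmversion": "1.0",
#         "rpmrelease": "1",
#         "summary": "Setup files for server deployment",
#         "repo": "file:///path/to/server-config.git",
#         "ref": "v1.0",
#         "destination": "/opt/server/",
#     }
#     rpm_name = make_git_rpm(example_repo, "/tmp/repo")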

View File

@ -1,697 +0,0 @@
#
# Copyright (C) 2017 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
import logging
log = logging.getLogger("lorax-composer")
from configparser import ConfigParser
import dnf
from glob import glob
import os
import time
from pylorax.api.bisect import insort_left
from pylorax.sysutils import joinpaths
TIME_FORMAT = "%Y-%m-%dT%H:%M:%S"
class ProjectsError(Exception):
pass
def api_time(t):
"""Convert time since epoch to a string
:param t: Seconds since epoch
:type t: int
:returns: Time string
:rtype: str
"""
return time.strftime(TIME_FORMAT, time.localtime(t))
def api_changelog(changelog):
"""Convert the changelog to a string
:param changelog: A list of time, author, string tuples.
:type changelog: list of tuples
:returns: The most recent changelog text or ""
:rtype: str
This returns only the most recent changelog entry.
"""
try:
entry = changelog[0][2]
except IndexError:
entry = ""
return entry
def pkg_to_project(pkg):
"""Extract the details from a hawkey.Package object
:param pkg: hawkey.Package object with package details
:type pkg: hawkey.Package
:returns: A dict with the name, summary, description, and url.
:rtype: dict
upstream_vcs is hard-coded to UPSTREAM_VCS
"""
return {"name": pkg.name,
"summary": pkg.summary,
"description": pkg.description,
"homepage": pkg.url,
"upstream_vcs": "UPSTREAM_VCS"}
def pkg_to_build(pkg):
"""Extract the build details from a hawkey.Package object
:param pkg: hawkey.Package object with package details
:type pkg: hawkey.Package
:returns: A dict with the build details, epoch, release, arch, build_time, changelog, ...
:rtype: dict
metadata entries are hard-coded to {}
Note that this only returns the build dict, it does not include the name, description, etc.
"""
return {"epoch": pkg.epoch,
"release": pkg.release,
"arch": pkg.arch,
"build_time": api_time(pkg.buildtime),
"changelog": "CHANGELOG_NEEDED", # XXX Not in hawkey.Package
"build_config_ref": "BUILD_CONFIG_REF",
"build_env_ref": "BUILD_ENV_REF",
"metadata": {},
"source": {"license": pkg.license,
"version": pkg.version,
"source_ref": "SOURCE_REF",
"metadata": {}}}
def pkg_to_project_info(pkg):
"""Extract the details from a hawkey.Package object
:param pkg: hawkey.Package object with package details
:type pkg: hawkey.Package
:returns: A dict with the project details, as well as epoch, release, arch, build_time, changelog, ...
:rtype: dict
metadata entries are hard-coded to {}
"""
return {"name": pkg.name,
"summary": pkg.summary,
"description": pkg.description,
"homepage": pkg.url,
"upstream_vcs": "UPSTREAM_VCS",
"builds": [pkg_to_build(pkg)]}
def pkg_to_dep(pkg):
"""Extract the info from a hawkey.Package object
:param pkg: A hawkey.Package object
:type pkg: hawkey.Package
:returns: A dict with name, epoch, version, release, arch
:rtype: dict
"""
return {"name": pkg.name,
"epoch": pkg.epoch,
"version": pkg.version,
"release": pkg.release,
"arch": pkg.arch}
def proj_to_module(proj):
"""Extract the name from a project_info dict
:param proj: dict with project details
:type proj: dict
:returns: A dict with name, and group_type
:rtype: dict
group_type is hard-coded to "rpm"
"""
return {"name": proj["name"],
"group_type": "rpm"}
def dep_evra(dep):
"""Return the epoch:version-release.arch for the dep
:param dep: dependency dict
:type dep: dict
:returns: epoch:version-release.arch
:rtype: str
"""
if dep["epoch"] == 0:
return dep["version"]+"-"+dep["release"]+"."+dep["arch"]
else:
return str(dep["epoch"])+":"+dep["version"]+"-"+dep["release"]+"."+dep["arch"]
def dep_nevra(dep):
"""Return the name-epoch:version-release.arch"""
return dep["name"]+"-"+dep_evra(dep)
def projects_list(dbo):
"""Return a list of projects
:param dbo: dnf base object
:type dbo: dnf.Base
:returns: List of project info dicts with name, summary, description, homepage, upstream_vcs
:rtype: list of dicts
"""
return projects_info(dbo, None)
def projects_info(dbo, project_names):
"""Return details about specific projects
:param dbo: dnf base object
:type dbo: dnf.Base
:param project_names: List of names of projects to get info about
:type project_names: list of str
:returns: List of project info dicts with pkg_to_project as well as epoch, version, release, etc.
:rtype: list of dicts
If project_names is None it will return the full list of available packages
"""
if project_names:
pkgs = dbo.sack.query().available().filter(name__glob=project_names)
else:
pkgs = dbo.sack.query().available()
# iterate over pkgs
# - if pkg.name isn't in the results yet, add pkg_to_project_info in sorted position
# - if pkg.name is already in results, get its builds. If the build for pkg is different
# in any way (version, arch, etc.) add it to the entry's builds list. If it is the same,
# skip it.
results = []
results_names = {}
for p in pkgs:
if p.name.lower() not in results_names:
idx = insort_left(results, pkg_to_project_info(p), key=lambda p: p["name"].lower())
results_names[p.name.lower()] = idx
else:
build = pkg_to_build(p)
if build not in results[results_names[p.name.lower()]]["builds"]:
results[results_names[p.name.lower()]]["builds"].append(build)
return results
def _depsolve(dbo, projects, groups):
"""Add projects to a new transaction
:param dbo: dnf base object
:type dbo: dnf.Base
:param projects: The projects and version globs to find the dependencies for
:type projects: List of tuples
:param groups: The groups to include in dependency solving
:type groups: List of str
:returns: None
:rtype: None
:raises: ProjectsError if there was a problem installing something
"""
# This resets the transaction and updates the cache.
# It is important that the cache always be synchronized because Anaconda will grab its own copy
# and if that is different the NEVRAs will not match and the build will fail.
dbo.reset(goal=True)
install_errors = []
for name in groups:
try:
dbo.group_install(name, ["mandatory", "default"])
except dnf.exceptions.MarkingError as e:
install_errors.append(("Group %s" % (name), str(e)))
for name, version in projects:
# Find the best package matching the name + version glob
# dnf can return multiple packages if it is in more than 1 repository
query = dbo.sack.query().filterm(provides__glob=name)
if version:
query.filterm(version__glob=version)
query.filterm(latest=1)
if not query:
install_errors.append(("%s-%s" % (name, version), "No match"))
continue
sltr = dnf.selector.Selector(dbo.sack).set(pkg=query)
# NOTE: dnf says in near future there will be a "goal" attribute of Base class
# so yes, we're using a 'private' attribute here on purpose and with permission.
dbo._goal.install(select=sltr, optional=False)
if install_errors:
raise ProjectsError("The following package(s) had problems: %s" % ",".join(["%s (%s)" % (pattern, err) for pattern, err in install_errors]))
def projects_depsolve(dbo, projects, groups):
"""Return the dependencies for a list of projects
:param dbo: dnf base object
:type dbo: dnf.Base
:param projects: The projects to find the dependencies for
:type projects: List of tuples
:param groups: The groups to include in dependency solving
:type groups: List of str
:returns: NEVRA's of the project and its dependencies
:rtype: list of dicts
:raises: ProjectsError if there was a problem installing something
"""
_depsolve(dbo, projects, groups)
try:
dbo.resolve()
except dnf.exceptions.DepsolveError as e:
raise ProjectsError("There was a problem depsolving %s: %s" % (projects, str(e)))
if len(dbo.transaction) == 0:
return []
return sorted(map(pkg_to_dep, dbo.transaction.install_set), key=lambda p: p["name"].lower())
def estimate_size(packages, block_size=6144):
"""Estimate the installed size of a package list
:param packages: The packages to be installed
:type packages: list of hawkey.Package objects
:param block_size: The block size to use for rounding up file sizes.
:type block_size: int
:returns: The estimated size of installed packages
:rtype: int
Estimating actual requirements is difficult without the actual file sizes, which
dnf doesn't provide access to. So use the file count and block size to estimate
a minimum size for each package.
"""
installed_size = 0
for p in packages:
installed_size += len(p.files) * block_size
installed_size += p.installsize
return installed_size
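# A worked example, not part of the original file: a package with 100 files
# and an installsize of 2 MiB is estimated at
#     100 * 6144 + 2097152 = 2711552 bytes
# so the per-file block padding adds roughly 600 KiB of headroom.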
def projects_depsolve_with_size(dbo, projects, groups, with_core=True):
"""Return the dependencies and installed size for a list of projects
:param dbo: dnf base object
:type dbo: dnf.Base
:param projects: The projects to find the dependencies for
:type projects: List of tuples
:param groups: The groups to include in dependency solving
:type groups: List of str
:param with_core: Include the "core" package group in the depsolve (default True)
:type with_core: bool
:returns: installed size and a list of NEVRA's of the project and its dependencies
:rtype: tuple of (int, list of dicts)
:raises: ProjectsError if there was a problem installing something
"""
_depsolve(dbo, projects, groups)
if with_core:
dbo.group_install("core", ['mandatory', 'default', 'optional'])
try:
dbo.resolve()
except dnf.exceptions.DepsolveError as e:
raise ProjectsError("There was a problem depsolving %s: %s" % (projects, str(e)))
if len(dbo.transaction) == 0:
return (0, [])
installed_size = estimate_size(dbo.transaction.install_set)
deps = sorted(map(pkg_to_dep, dbo.transaction.install_set), key=lambda p: p["name"].lower())
return (installed_size, deps)
def modules_list(dbo, module_names):
"""Return a list of modules
:param dbo: dnf base object
:type dbo: dnf.Base
:param module_names: Names of the modules to list, or None for all
:type module_names: list of str
:returns: List of module information
:rtype: list of dicts
Modules don't exist in RHEL7 so this only returns projects
and sets the type to "rpm"
"""
# TODO - Figure out what to do with this for Fedora 'modules'
return list(map(proj_to_module, projects_info(dbo, module_names)))
def modules_info(dbo, module_names):
"""Return details about a module, including dependencies
:param dbo: dnf base object
:type dbo: dnf.Base
:param module_names: Names of the modules to get info about
:type module_names: list of str
:returns: List of dicts with module details and dependencies.
:rtype: list of dicts
"""
modules = projects_info(dbo, module_names)
# Add the dependency info to each one
for module in modules:
module["dependencies"] = projects_depsolve(dbo, [(module["name"], "*.*")], [])
return modules
def dnf_repo_to_file_repo(repo):
"""Return a string representation of a DNF Repo object suitable for writing to a .repo file
:param repo: DNF Repository
:type repo: dnf.RepoDict
:returns: A string
:rtype: str
The DNF Repo.dump() function does not produce a string that can be used as a dnf .repo file,
it outputs baseurl and gpgkey as python lists which DNF cannot read. So do this manually with
only the attributes we care about.
"""
repo_str = "[%s]\nname = %s\n" % (repo.id, repo.name)
if repo.metalink:
repo_str += "metalink = %s\n" % repo.metalink
elif repo.mirrorlist:
repo_str += "mirrorlist = %s\n" % repo.mirrorlist
elif repo.baseurl:
repo_str += "baseurl = %s\n" % repo.baseurl[0]
else:
raise RuntimeError("Repo has no baseurl, metalink, or mirrorlist")
# proxy is optional
if repo.proxy:
repo_str += "proxy = %s\n" % repo.proxy
repo_str += "sslverify = %s\n" % repo.sslverify
repo_str += "gpgcheck = %s\n" % repo.gpgcheck
if repo.gpgkey:
repo_str += "gpgkey = %s\n" % ",".join(repo.gpgkey)
if repo.skip_if_unavailable:
repo_str += "skip_if_unavailable=1\n"
return repo_str
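# Example output sketch, not part of the original file (values illustrative):
#
#     [fedora]
#     name = Fedora 32 - x86_64
#     metalink = https://mirrors.fedoraproject.org/metalink?repo=fedora-32&arch=x86_64
#     sslverify = True
#     gpgcheck = True
#     gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-32-x86_64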
def repo_to_source(repo, system_source, api=1):
"""Return a Weldr Source dict created from the DNF Repository
:param repo: DNF Repository
:type repo: dnf.RepoDict
:param system_source: True if this source is an immutable system source
:type system_source: bool
:param api: Select which api version of the dict to return (default 1)
:type api: int
:returns: A dict with Weldr Source fields filled in
:rtype: dict
Example::
{
"check_gpg": true,
"check_ssl": true,
"gpgkey_url": [
"file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-28-x86_64"
],
"id": "fedora",
"name": "Fedora $releasever - $basearch",
"proxy": "http://proxy.brianlane.com:8123",
"system": true
"type": "yum-metalink",
"url": "https://mirrors.fedoraproject.org/metalink?repo=fedora-28&arch=x86_64"
}
The ``name`` field has changed in v1 of the API.
In v0 of the API ``name`` is the repo.id; in v1 it is the repo.name, and a new
field, ``id``, has been added for the repo.id
"""
if api==0:
source = {"name": repo.id, "system": system_source}
else:
source = {"id": repo.id, "name": repo.name, "system": system_source}
if repo.baseurl:
source["url"] = repo.baseurl[0]
source["type"] = "yum-baseurl"
elif repo.metalink:
source["url"] = repo.metalink
source["type"] = "yum-metalink"
elif repo.mirrorlist:
source["url"] = repo.mirrorlist
source["type"] = "yum-mirrorlist"
else:
raise RuntimeError("Repo has no baseurl, metalink, or mirrorlist")
# proxy is optional
if repo.proxy:
source["proxy"] = repo.proxy
if not repo.sslverify:
source["check_ssl"] = False
else:
source["check_ssl"] = True
if not repo.gpgcheck:
source["check_gpg"] = False
else:
source["check_gpg"] = True
if repo.gpgkey:
source["gpgkey_urls"] = list(repo.gpgkey)
return source
def source_to_repodict(source):
"""Return a tuple suitable for use with dnf.add_new_repo
:param source: A Weldr source dict
:type source: dict
:returns: A tuple of dnf.Repo attributes
:rtype: (str, list, dict)
Return a tuple with (id, baseurl|(), kwargs) that can be used
with dnf.repos.add_new_repo
"""
kwargs = {}
if "id" in source:
# This is an API v1 source definition
repoid = source["id"]
if "name" in source:
kwargs["name"] = source["name"]
else:
repoid = source["name"]
# This will allow errors to be raised so we can catch them
# without this they are logged, but the repo is silently disabled
kwargs["skip_if_unavailable"] = False
if source["type"] == "yum-baseurl":
baseurl = [source["url"]]
elif source["type"] == "yum-metalink":
kwargs["metalink"] = source["url"]
baseurl = ()
elif source["type"] == "yum-mirrorlist":
kwargs["mirrorlist"] = source["url"]
baseurl = ()
if "proxy" in source:
kwargs["proxy"] = source["proxy"]
if source["check_ssl"]:
kwargs["sslverify"] = True
else:
kwargs["sslverify"] = False
if source["check_gpg"]:
kwargs["gpgcheck"] = True
else:
kwargs["gpgcheck"] = False
if "gpgkey_urls" in source:
kwargs["gpgkey"] = tuple(source["gpgkey_urls"])
return (repoid, baseurl, kwargs)
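# A worked sketch, not part of the original file: an API v1 source becomes a
# tuple ready for dbo.repos.add_new_repo() (values are illustrative).
#
#     source_to_repodict({"id": "custom", "name": "Custom repo",
#                         "type": "yum-baseurl", "url": "https://repo.example.com/",
#                         "check_ssl": True, "check_gpg": False})
#     # -> ("custom", ["https://repo.example.com/"],
#     #     {"name": "Custom repo", "skip_if_unavailable": False,
#     #      "sslverify": True, "gpgcheck": False})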
def source_to_repo(source, dnf_conf):
"""Return a dnf Repo object created from a source dict
:param source: A Weldr source dict
:type source: dict
:param dnf_conf: The dnf Config object
:type dnf_conf: dnf.conf
:returns: A dnf Repo object
:rtype: dnf.Repo
Example::
{
"check_gpg": True,
"check_ssl": True,
"gpgkey_urls": [
"file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-28-x86_64"
],
"id": "fedora",
"name": "Fedora $releasever - $basearch",
"proxy": "http://proxy.brianlane.com:8123",
"system": True
"type": "yum-metalink",
"url": "https://mirrors.fedoraproject.org/metalink?repo=fedora-28&arch=x86_64"
}
If the ``id`` field is included it is used for the repo id, otherwise ``name`` is used.
v0 of the API only used ``name``, v1 added the distinction between ``id`` and ``name``.
"""
repoid, baseurl, kwargs = source_to_repodict(source)
repo = dnf.repo.Repo(repoid, dnf_conf)
if baseurl:
repo.baseurl = baseurl
# Apply the rest of the kwargs to the Repo object
for k, v in kwargs.items():
setattr(repo, k, v)
repo.enable()
return repo
def get_source_ids(source_path):
"""Return a list of the source ids in a file
:param source_path: Full path and filename of the source (yum repo) file
:type source_path: str
:returns: A list of source id strings
:rtype: list of str
"""
if not os.path.exists(source_path):
return []
cfg = ConfigParser()
cfg.read(source_path)
return cfg.sections()
def get_repo_sources(source_glob):
"""Return a list of sources from a directory of yum repositories
:param source_glob: A glob to use to match the source files, including full path
:type source_glob: str
:returns: A list of the source ids in all of the matching files
:rtype: list of str
"""
sources = []
for f in glob(source_glob):
sources.extend(get_source_ids(f))
return sources
def delete_repo_source(source_glob, source_id):
"""Delete a source from a repo file
:param source_glob: A glob of the repo sources to search
:type source_glob: str
:param source_id: The repo id to delete
:type source_id: str
:returns: None
:raises: ProjectsError if there was a problem
A repo file may have multiple sources in it, delete only the selected source.
If it is the last one in the file, delete the file.
WARNING: This will delete ANY source; the caller needs to ensure that a system
source_id isn't passed to it.
"""
found = False
for f in glob(source_glob):
try:
cfg = ConfigParser()
cfg.read(f)
if source_id in cfg.sections():
found = True
cfg.remove_section(source_id)
# If there are other sections, rewrite the file without the deleted one
if len(cfg.sections()) > 0:
with open(f, "w") as cfg_file:
cfg.write(cfg_file)
else:
# No sections left, just delete the file
os.unlink(f)
except Exception as e:
raise ProjectsError("Problem deleting repo source %s: %s" % (source_id, str(e)))
if not found:
raise ProjectsError("source %s not found" % source_id)
def new_repo_source(dbo, repoid, source, repo_dir):
"""Add a new repo source from a Weldr source dict
:param dbo: dnf base object
:type dbo: dnf.Base
:param repoid: The repo id (API v0 uses the name, v1 uses the id)
:type repoid: str
:param source: A Weldr source dict
:type source: dict
:param repo_dir: Directory where the .repo file for the source is written
:type repo_dir: str
:returns: None
:raises: ...
Make sure access to the dbo has been locked before calling this.
The `repoid` parameter will be the 'name' field for API v0, and the 'id' field for API v1
DNF variables will be substituted at load time, and on restart.
"""
try:
# Remove it from the RepoDict (NOTE that this isn't explicitly supported by the DNF API)
# If this repo already exists, delete it and replace it with the new one
repos = list(r.id for r in dbo.repos.iter_enabled())
if repoid in repos:
del dbo.repos[repoid]
# Add the repo and substitute any dnf variables
_, baseurl, kwargs = source_to_repodict(source)
log.debug("repoid=%s, baseurl=%s, kwargs=%s", repoid, baseurl, kwargs)
r = dbo.repos.add_new_repo(repoid, dbo.conf, baseurl, **kwargs)
r.enable()
log.info("Updating repository metadata after adding %s", repoid)
dbo.fill_sack(load_system_repo=False)
dbo.read_comps()
# Remove any previous sources with this id, ignore it if it isn't found
try:
delete_repo_source(joinpaths(repo_dir, "*.repo"), repoid)
except ProjectsError:
pass
# Make sure the source id can't contain a path traversal by taking the basename
source_path = joinpaths(repo_dir, os.path.basename("%s.repo" % repoid))
# Write the un-substituted version of the repo to disk
with open(source_path, "w") as f:
repo = source_to_repo(source, dbo.conf)
f.write(dnf_repo_to_file_repo(repo))
except Exception as e:
log.error("(new_repo_source) adding %s failed: %s", repoid, str(e))
# Cleanup the mess, if loading it failed we don't want to leave it in memory
repos = list(r.id for r in dbo.repos.iter_enabled())
if repoid in repos:
del dbo.repos[repoid]
log.info("Updating repository metadata after adding %s failed", repoid)
dbo.fill_sack(load_system_repo=False)
dbo.read_comps()
raise

View File

@ -1,863 +0,0 @@
# Copyright (C) 2018 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
""" Functions to monitor compose queue and run anaconda"""
import logging
log = logging.getLogger("pylorax")
program_log = logging.getLogger("program")
dnf_log = logging.getLogger("dnf")
import os
import grp
from glob import glob
import multiprocessing as mp
import pwd
import shutil
import subprocess
from subprocess import Popen, PIPE
import time
from pylorax import find_templates
from pylorax.api.compose import move_compose_results
from pylorax.api.recipes import recipe_from_file
from pylorax.api.timestamp import TS_CREATED, TS_STARTED, TS_FINISHED, write_timestamp, timestamp_dict
import pylorax.api.toml as toml
from pylorax.base import DataHolder
from pylorax.creator import run_creator
from pylorax.sysutils import joinpaths, read_tail
from lifted.queue import create_upload, get_uploads, ready_upload, delete_upload
def check_queues(cfg):
"""Check to make sure the new and run queue symlinks are correct
:param cfg: Configuration settings
:type cfg: DataHolder
Also check all of the existing results and make sure any with WAITING
set in STATUS have a symlink in queue/new/
"""
# Remove broken symlinks from the new and run queues
queue_symlinks = glob(joinpaths(cfg.composer_dir, "queue/new/*")) + \
glob(joinpaths(cfg.composer_dir, "queue/run/*"))
for link in queue_symlinks:
if not os.path.isdir(os.path.realpath(link)):
log.info("Removing broken symlink %s", link)
os.unlink(link)
# Write FAILED to the STATUS of any run queue symlinks and remove them
for link in glob(joinpaths(cfg.composer_dir, "queue/run/*")):
log.info("Setting build %s to FAILED, and removing symlink from queue/run/", os.path.basename(link))
open(joinpaths(link, "STATUS"), "w").write("FAILED\n")
os.unlink(link)
# Check results STATUS messages
# - If STATUS is missing, set it to FAILED
# - RUNNING should be changed to FAILED
# - WAITING should have a symlink in the new queue
for link in glob(joinpaths(cfg.composer_dir, "results/*")):
if not os.path.exists(joinpaths(link, "STATUS")):
open(joinpaths(link, "STATUS"), "w").write("FAILED\n")
continue
status = open(joinpaths(link, "STATUS")).read().strip()
if status == "RUNNING":
log.info("Setting build %s to FAILED", os.path.basename(link))
open(joinpaths(link, "STATUS"), "w").write("FAILED\n")
elif status == "WAITING":
if not os.path.islink(joinpaths(cfg.composer_dir, "queue/new/", os.path.basename(link))):
log.info("Creating missing symlink to new build %s", os.path.basename(link))
os.symlink(link, joinpaths(cfg.composer_dir, "queue/new/", os.path.basename(link)))
def start_queue_monitor(cfg, uid, gid):
"""Start the queue monitor as a mp process
:param cfg: Configuration settings
:type cfg: ComposerConfig
:param uid: User ID that owns the queue
:type uid: int
:param gid: Group ID that owns the queue
:type gid: int
:returns: None
"""
lib_dir = cfg.get("composer", "lib_dir")
share_dir = cfg.get("composer", "share_dir")
tmp = cfg.get("composer", "tmp")
monitor_cfg = DataHolder(cfg=cfg, composer_dir=lib_dir, share_dir=share_dir, uid=uid, gid=gid, tmp=tmp)
p = mp.Process(target=monitor, args=(monitor_cfg,))
p.daemon = True
p.start()
def monitor(cfg):
"""Monitor the queue for new compose requests
:param cfg: Configuration settings
:type cfg: DataHolder
:returns: Does not return
The queue has 2 subdirectories, new and run. When a compose is ready to be run
a symlink to the uniquely named results directory should be placed in ./queue/new/
When it is ready to be run (the queue is checked every 5 seconds, or after a previous
compose is finished) the symlink will be moved into ./queue/run/ and a STATUS file
will be created in the results directory.
STATUS can contain one of: WAITING, RUNNING, FINISHED, FAILED
If the system is restarted while a compose is running it will move any old symlinks
from ./queue/run/ to ./queue/new/ and rerun them.
"""
def queue_sort(uuid):
"""Sort the queue entries by their mtime, not their names"""
return os.stat(joinpaths(cfg.composer_dir, "queue/new", uuid)).st_mtime
check_queues(cfg)
while True:
uuids = sorted(os.listdir(joinpaths(cfg.composer_dir, "queue/new")), key=queue_sort)
# Pick the oldest and move it into ./run/
if not uuids:
# No composes left to process, sleep for a bit
time.sleep(5)
else:
src = joinpaths(cfg.composer_dir, "queue/new", uuids[0])
dst = joinpaths(cfg.composer_dir, "queue/run", uuids[0])
try:
os.rename(src, dst)
except OSError:
# The symlink may vanish if uuid_cancel() has been called
continue
# The anaconda logs are also copied into ./anaconda/ in this directory
os.makedirs(joinpaths(dst, "logs"), exist_ok=True)
def open_handler(loggers, file_name):
handler = logging.FileHandler(joinpaths(dst, "logs", file_name))
handler.setLevel(logging.DEBUG)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s: %(message)s"))
for logger in loggers:
logger.addHandler(handler)
return (handler, loggers)
loggers = (((log, program_log, dnf_log), "combined.log"),
((log,), "composer.log"),
((program_log,), "program.log"),
((dnf_log,), "dnf.log"))
handlers = [open_handler(loggers, file_name) for loggers, file_name in loggers]
log.info("Starting new compose: %s", dst)
open(joinpaths(dst, "STATUS"), "w").write("RUNNING\n")
try:
make_compose(cfg, os.path.realpath(dst))
log.info("Finished building %s, results are in %s", dst, os.path.realpath(dst))
open(joinpaths(dst, "STATUS"), "w").write("FINISHED\n")
write_timestamp(dst, TS_FINISHED)
upload_cfg = cfg.cfg["upload"]
for upload in get_uploads(upload_cfg, uuid_get_uploads(cfg.cfg, uuids[0])):
log.info("Readying upload %s", upload.uuid)
uuid_ready_upload(cfg.cfg, uuids[0], upload.uuid)
except Exception:
import traceback
log.error("traceback: %s", traceback.format_exc())
# TODO - Write the error message to an ERROR-LOG file to include with the status
# log.error("Error running compose: %s", e)
open(joinpaths(dst, "STATUS"), "w").write("FAILED\n")
write_timestamp(dst, TS_FINISHED)
finally:
for handler, loggers in handlers:
for logger in loggers:
logger.removeHandler(handler)
handler.close()
os.unlink(dst)
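# A minimal sketch (hypothetical helper, not part of the original module) of the
# producer side of this queue: the API creates a results directory, marks it
# WAITING, and symlinks it into queue/new/ where monitor() will find it.
def _example_enqueue(cfg, build_id):
    """Sketch only: assumes cfg is the same DataHolder that monitor() receives"""
    results = joinpaths(cfg.composer_dir, "results", build_id)
    os.makedirs(results, exist_ok=True)
    open(joinpaths(results, "STATUS"), "w").write("WAITING\n")
    write_timestamp(results, TS_CREATED)
    os.symlink(results, joinpaths(cfg.composer_dir, "queue/new", build_id))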
def make_compose(cfg, results_dir):
"""Run anaconda with the final-kickstart.ks from results_dir
:param cfg: Configuration settings
:type cfg: DataHolder
:param results_dir: The directory containing the metadata and results for the build
:type results_dir: str
:returns: Nothing
:raises: May raise various exceptions
This takes the final-kickstart.ks, and the settings in config.toml and runs Anaconda
in no-virt mode (directly on the host operating system). Exceptions should be caught
    at the higher level.
If there is a failure, the build artifacts will be cleaned up, and any logs will be
moved into logs/anaconda/ and their ownership will be set to the user from the cfg
object.
"""
# Check on the ks's presence
ks_path = joinpaths(results_dir, "final-kickstart.ks")
if not os.path.exists(ks_path):
raise RuntimeError("Missing kickstart file at %s" % ks_path)
# Load the compose configuration
cfg_path = joinpaths(results_dir, "config.toml")
if not os.path.exists(cfg_path):
raise RuntimeError("Missing config.toml for %s" % results_dir)
cfg_dict = toml.loads(open(cfg_path, "r").read())
    # The keys in cfg_dict correspond to the arguments set up in livemedia-creator;
    # keys that define what to build should be set up in compose_args, and keys with
    # defaults should be set up here.
# Make sure that image_name contains no path components
cfg_dict["image_name"] = os.path.basename(cfg_dict["image_name"])
# Only support novirt installation, set some other defaults
cfg_dict["no_virt"] = True
cfg_dict["disk_image"] = None
cfg_dict["fs_image"] = None
cfg_dict["keep_image"] = False
cfg_dict["domacboot"] = False
cfg_dict["anaconda_args"] = ""
cfg_dict["proxy"] = ""
cfg_dict["armplatform"] = ""
cfg_dict["squashfs_args"] = None
cfg_dict["lorax_templates"] = find_templates(cfg.share_dir)
cfg_dict["tmp"] = cfg.tmp
# Use default args for dracut
cfg_dict["dracut_conf"] = None
cfg_dict["dracut_args"] = None
# TODO How to support other arches?
cfg_dict["arch"] = None
# Compose things in a temporary directory inside the results directory
cfg_dict["result_dir"] = joinpaths(results_dir, "compose")
os.makedirs(cfg_dict["result_dir"])
install_cfg = DataHolder(**cfg_dict)
    # Some kludges for the 99-copy-logs %post; a failure in it will crash the build
for f in ["/tmp/NOSAVE_INPUT_KS", "/tmp/NOSAVE_LOGS"]:
open(f, "w")
# Placing a CANCEL file in the results directory will make execWithRedirect send anaconda a SIGTERM
def cancel_build():
return os.path.exists(joinpaths(results_dir, "CANCEL"))
log.debug("cfg = %s", install_cfg)
try:
test_path = joinpaths(results_dir, "TEST")
write_timestamp(results_dir, TS_STARTED)
if os.path.exists(test_path):
# Pretend to run the compose
time.sleep(5)
try:
test_mode = int(open(test_path, "r").read())
except Exception:
test_mode = 1
if test_mode == 1:
raise RuntimeError("TESTING FAILED compose")
else:
open(joinpaths(results_dir, install_cfg.image_name), "w").write("TEST IMAGE")
else:
run_creator(install_cfg, cancel_func=cancel_build)
# Extract the results of the compose into results_dir and cleanup the compose directory
move_compose_results(install_cfg, results_dir)
finally:
# Make sure any remaining temporary directories are removed (eg. if there was an exception)
for d in glob(joinpaths(cfg.tmp, "lmc-*")):
if os.path.isdir(d):
shutil.rmtree(d)
elif os.path.isfile(d):
os.unlink(d)
# Make sure that everything under the results directory is owned by the user
user = pwd.getpwuid(cfg.uid).pw_name
group = grp.getgrgid(cfg.gid).gr_name
log.debug("Install finished, chowning results to %s:%s", user, group)
subprocess.call(["chown", "-R", "%s:%s" % (user, group), results_dir])
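# A small sketch (hypothetical helper) of the TEST hook used above: writing "1"
# to <results_dir>/TEST makes make_compose() fake a failure, while any other
# integer fakes a successful build without running Anaconda.
def _example_mark_test_compose(results_dir, fail=False):
    open(joinpaths(results_dir, "TEST"), "w").write("1\n" if fail else "2\n")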
def get_compose_type(results_dir):
"""Return the type of composition.
:param results_dir: The directory containing the metadata and results for the build
:type results_dir: str
:returns: The type of compose (eg. 'tar')
:rtype: str
:raises: RuntimeError if no kickstart template can be found.
"""
# Should only be 2 kickstarts, the final-kickstart.ks and the template
t = [os.path.basename(ks)[:-3] for ks in glob(joinpaths(results_dir, "*.ks"))
if "final-kickstart" not in ks]
if len(t) != 1:
raise RuntimeError("Cannot find ks template for build %s" % os.path.basename(results_dir))
return t[0]
def compose_detail(cfg, results_dir, api=1):
"""Return details about the build.
:param cfg: Configuration settings (required for api=1)
:type cfg: ComposerConfig
:param results_dir: The directory containing the metadata and results for the build
:type results_dir: str
:param api: Select which api version of the dict to return (default 1)
:type api: int
:returns: A dictionary with details about the compose
:rtype: dict
:raises: IOError if it cannot read the directory, STATUS, or blueprint file.
The following details are included in the dict:
    * id - The uuid of the composition
* queue_status - The final status of the composition (FINISHED or FAILED)
* compose_type - The type of output generated (tar, iso, etc.)
* blueprint - Blueprint name
* version - Blueprint version
* image_size - Size of the image, if finished. 0 otherwise.
* uploads - For API v1 details about uploading the image are included
Various timestamps are also included in the dict. These are all Unix UTC timestamps.
It is possible for these timestamps to not always exist, in which case they will be
None in Python (or null in JSON). The following timestamps are included:
* job_created - When the user submitted the compose
* job_started - Anaconda started running
* job_finished - Job entered FINISHED or FAILED state
"""
build_id = os.path.basename(os.path.abspath(results_dir))
status = open(joinpaths(results_dir, "STATUS")).read().strip()
blueprint = recipe_from_file(joinpaths(results_dir, "blueprint.toml"))
compose_type = get_compose_type(results_dir)
image_path = get_image_name(results_dir)[1]
if status == "FINISHED" and os.path.exists(image_path):
image_size = os.stat(image_path).st_size
else:
image_size = 0
times = timestamp_dict(results_dir)
detail = {"id": build_id,
"queue_status": status,
"job_created": times.get(TS_CREATED),
"job_started": times.get(TS_STARTED),
"job_finished": times.get(TS_FINISHED),
"compose_type": compose_type,
"blueprint": blueprint["name"],
"version": blueprint["version"],
"image_size": image_size,
}
if api == 1:
# Get uploads for this build_id
upload_uuids = uuid_get_uploads(cfg, build_id)
summaries = [upload.summary() for upload in get_uploads(cfg["upload"], upload_uuids)]
detail["uploads"] = summaries
return detail
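# For reference, a finished build's compose_detail() dict looks roughly like
# this (uuid, timestamps, and sizes are made-up values):
#
#     {"id": "deadbeef-0000-4c0f-8000-000000000000", "queue_status": "FINISHED",
#      "job_created": 1600000000.0, "job_started": 1600000010.0,
#      "job_finished": 1600000300.0, "compose_type": "tar",
#      "blueprint": "example-http-server", "version": "0.0.1",
#      "image_size": 1073741824, "uploads": []}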
def queue_status(cfg, api=1):
"""Return details about what is in the queue.
:param cfg: Configuration settings
:type cfg: ComposerConfig
:param api: Select which api version of the dict to return (default 1)
:type api: int
:returns: A list of the new composes, and a list of the running composes
:rtype: dict
This returns a dict with 2 lists. "new" is the list of uuids that are waiting to be built,
and "run" has the uuids that are being built (currently limited to 1 at a time).
"""
queue_dir = joinpaths(cfg.get("composer", "lib_dir"), "queue")
new_queue = [os.path.realpath(p) for p in glob(joinpaths(queue_dir, "new/*"))]
run_queue = [os.path.realpath(p) for p in glob(joinpaths(queue_dir, "run/*"))]
new_details = []
for n in new_queue:
try:
d = compose_detail(cfg, n, api)
except IOError:
continue
new_details.append(d)
run_details = []
for r in run_queue:
try:
d = compose_detail(cfg, r, api)
except IOError:
continue
run_details.append(d)
return {
"new": new_details,
"run": run_details
}
def uuid_status(cfg, uuid, api=1):
"""Return the details of a specific UUID compose
:param cfg: Configuration settings
:type cfg: ComposerConfig
:param uuid: The UUID of the build
:type uuid: str
:param api: Select which api version of the dict to return (default 1)
:type api: int
:returns: Details about the build
:rtype: dict or None
Returns the same dict as `compose_detail()`
"""
uuid_dir = joinpaths(cfg.get("composer", "lib_dir"), "results", uuid)
try:
return compose_detail(cfg, uuid_dir, api)
except IOError:
return None
def build_status(cfg, status_filter=None, api=1):
"""Return the details of finished or failed builds
:param cfg: Configuration settings
:type cfg: ComposerConfig
:param status_filter: What builds to return. None == all, "FINISHED", or "FAILED"
:type status_filter: str
:param api: Select which api version of the dict to return (default 1)
:type api: int
:returns: A list of the build details (from compose_detail)
:rtype: list of dicts
This returns a list of build details for each of the matching builds on the
system. It does not return the status of builds that have not been finished.
Use queue_status() for those.
"""
if status_filter:
status_filter = [status_filter]
else:
status_filter = ["FINISHED", "FAILED"]
results = []
result_dir = joinpaths(cfg.get("composer", "lib_dir"), "results")
for build in glob(result_dir + "/*"):
log.debug("Checking status of build %s", build)
try:
status = open(joinpaths(build, "STATUS"), "r").read().strip()
if status in status_filter:
results.append(compose_detail(cfg, build, api))
except IOError:
pass
return results
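# Hypothetical usage sketch: collect the uuids of every failed build on the system.
def _example_failed_uuids(cfg, api=1):
    return [d["id"] for d in build_status(cfg, status_filter="FAILED", api=api)]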
def _upload_list_path(cfg, uuid):
"""Return the path to the UPLOADS file
:param cfg: Configuration settings
:type cfg: ComposerConfig
:param uuid: The UUID of the build
:type uuid: str
:returns: Path to the UPLOADS file listing the build's associated uploads
:rtype: str
:raises: RuntimeError if the uuid is not found
"""
results_dir = joinpaths(cfg.get("composer", "lib_dir"), "results", uuid)
if not os.path.isdir(results_dir):
raise RuntimeError(f'"{uuid}" is not a valid build uuid!')
return joinpaths(results_dir, "UPLOADS")
def uuid_schedule_upload(cfg, uuid, provider_name, image_name, settings):
"""Schedule an upload of an image
:param cfg: Configuration settings
:type cfg: ComposerConfig
:param uuid: The UUID of the build
:type uuid: str
:param provider_name: The name of the cloud provider, e.g. "azure"
:type provider_name: str
:param image_name: Path of the image to upload
:type image_name: str
:param settings: Settings to use for the selected provider
:type settings: dict
:returns: uuid of the upload
:rtype: str
:raises: RuntimeError if the uuid is not a valid build uuid
"""
status = uuid_status(cfg, uuid)
if status is None:
raise RuntimeError(f'"{uuid}" is not a valid build uuid!')
upload = create_upload(cfg["upload"], provider_name, image_name, settings)
uuid_add_upload(cfg, uuid, upload.uuid)
return upload.uuid
def uuid_get_uploads(cfg, uuid):
"""Return the list of uploads for a build uuid
:param cfg: Configuration settings
:type cfg: ComposerConfig
:param uuid: The UUID of the build
:type uuid: str
:returns: The upload UUIDs associated with the build UUID
:rtype: frozenset
"""
try:
with open(_upload_list_path(cfg, uuid)) as uploads_file:
return frozenset(uploads_file.read().split())
except FileNotFoundError:
return frozenset()
def uuid_add_upload(cfg, uuid, upload_uuid):
"""Add an upload UUID to a build
:param cfg: Configuration settings
:type cfg: ComposerConfig
:param uuid: The UUID of the build
:type uuid: str
:param upload_uuid: The UUID of the upload
:type upload_uuid: str
:returns: None
:rtype: None
"""
if upload_uuid not in uuid_get_uploads(cfg, uuid):
with open(_upload_list_path(cfg, uuid), "a") as uploads_file:
print(upload_uuid, file=uploads_file)
status = uuid_status(cfg, uuid)
if status and status["queue_status"] == "FINISHED":
uuid_ready_upload(cfg, uuid, upload_uuid)
def uuid_remove_upload(cfg, upload_uuid):
"""Remove an upload UUID from the build
:param cfg: Configuration settings
:type cfg: ComposerConfig
:param upload_uuid: The UUID of the upload
:type upload_uuid: str
:returns: None
:rtype: None
:raises: RuntimeError if the upload_uuid is not found
"""
for build_uuid in (os.path.basename(b) for b in glob(joinpaths(cfg.get("composer", "lib_dir"), "results/*"))):
uploads = uuid_get_uploads(cfg, build_uuid)
if upload_uuid not in uploads:
continue
uploads = uploads - frozenset((upload_uuid,))
with open(_upload_list_path(cfg, build_uuid), "w") as uploads_file:
for upload in uploads:
print(upload, file=uploads_file)
return
raise RuntimeError(f"{upload_uuid} is not a valid upload id!")
def uuid_ready_upload(cfg, uuid, upload_uuid):
"""Set an upload to READY if the build is in FINISHED state
:param cfg: Configuration settings
:type cfg: ComposerConfig
:param uuid: The UUID of the build
:type uuid: str
:param upload_uuid: The UUID of the upload
:type upload_uuid: str
:returns: None
:rtype: None
:raises: RuntimeError if the build uuid is invalid or not in FINISHED state.
"""
status = uuid_status(cfg, uuid)
if not status:
raise RuntimeError(f"{uuid} is not a valid build id!")
if status["queue_status"] != "FINISHED":
raise RuntimeError(f"Build {uuid} is not finished!")
_, image_path = uuid_image(cfg, uuid)
ready_upload(cfg["upload"], upload_uuid, image_path)
def uuid_cancel(cfg, uuid):
"""Cancel a build and delete its results
:param cfg: Configuration settings
:type cfg: ComposerConfig
:param uuid: The UUID of the build
:type uuid: str
:returns: True if it was canceled and deleted
:rtype: bool
Only call this if the build status is WAITING or RUNNING
"""
cancel_path = joinpaths(cfg.get("composer", "lib_dir"), "results", uuid, "CANCEL")
if os.path.exists(cancel_path):
log.info("Cancel has already been requested for %s", uuid)
return False
# This status can change (and probably will) while it is in the middle of doing this:
# It can move from WAITING -> RUNNING or it can move from RUNNING -> FINISHED|FAILED
# If it is in WAITING remove the symlink and then check to make sure it didn't show up
# in the run queue
queue_dir = joinpaths(cfg.get("composer", "lib_dir"), "queue")
uuid_new = joinpaths(queue_dir, "new", uuid)
if os.path.exists(uuid_new):
try:
os.unlink(uuid_new)
except OSError:
# The symlink may vanish if the queue monitor started the build
pass
uuid_run = joinpaths(queue_dir, "run", uuid)
if not os.path.exists(uuid_run):
# Make sure the build is still in the waiting state
status = uuid_status(cfg, uuid)
if status is None or status["queue_status"] == "WAITING":
# Successfully removed it before the build started
return uuid_delete(cfg, uuid)
# At this point the build has probably started. Write to the CANCEL file.
open(cancel_path, "w").write("\n")
# Wait for status to move to FAILED or FINISHED
started = time.time()
while True:
status = uuid_status(cfg, uuid)
if status is None or status["queue_status"] == "FAILED":
break
elif status is not None and status["queue_status"] == "FINISHED":
# The build finished successfully, no point in deleting it now
return False
# Is this taking too long? Exit anyway and try to cleanup.
if time.time() > started + (10 * 60):
log.error("Failed to cancel the build of %s", uuid)
break
time.sleep(5)
# Remove the partial results
uuid_delete(cfg, uuid)
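# Sketch (hypothetical helper) matching the note above: only request a cancel
# while the build is still WAITING or RUNNING, otherwise leave it alone.
def _example_cancel_if_active(cfg, uuid):
    status = uuid_status(cfg, uuid)
    if status and status["queue_status"] in ("WAITING", "RUNNING"):
        return uuid_cancel(cfg, uuid)
    return False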
def uuid_delete(cfg, uuid):
"""Delete all of the results from a compose
:param cfg: Configuration settings
:type cfg: ComposerConfig
:param uuid: The UUID of the build
:type uuid: str
:returns: True if it was deleted
:rtype: bool
:raises: This will raise an error if the delete failed
"""
uuid_dir = joinpaths(cfg.get("composer", "lib_dir"), "results", uuid)
if not uuid_dir or len(uuid_dir) < 10:
raise RuntimeError("Directory length is too short: %s" % uuid_dir)
for upload in get_uploads(cfg["upload"], uuid_get_uploads(cfg, uuid)):
delete_upload(cfg["upload"], upload.uuid)
shutil.rmtree(uuid_dir)
return True
def uuid_info(cfg, uuid, api=1):
"""Return information about the composition
:param cfg: Configuration settings
:type cfg: ComposerConfig
:param uuid: The UUID of the build
:type uuid: str
:returns: dictionary of information about the composition or None
:rtype: dict
:raises: RuntimeError if there was a problem
This will return a dict with the following fields populated:
    * id - The uuid of the composition
* config - containing the configuration settings used to run Anaconda
* blueprint - The depsolved blueprint used to generate the kickstart
* commit - The (local) git commit hash for the blueprint used
* deps - The NEVRA of all of the dependencies used in the composition
* compose_type - The type of output generated (tar, iso, etc.)
* queue_status - The final status of the composition (FINISHED or FAILED)
"""
uuid_dir = joinpaths(cfg.get("composer", "lib_dir"), "results", uuid)
if not os.path.exists(uuid_dir):
return None
# Load the compose configuration
cfg_path = joinpaths(uuid_dir, "config.toml")
if not os.path.exists(cfg_path):
raise RuntimeError("Missing config.toml for %s" % uuid)
cfg_dict = toml.loads(open(cfg_path, "r").read())
frozen_path = joinpaths(uuid_dir, "frozen.toml")
if not os.path.exists(frozen_path):
raise RuntimeError("Missing frozen.toml for %s" % uuid)
frozen_dict = toml.loads(open(frozen_path, "r").read())
deps_path = joinpaths(uuid_dir, "deps.toml")
if not os.path.exists(deps_path):
raise RuntimeError("Missing deps.toml for %s" % uuid)
deps_dict = toml.loads(open(deps_path, "r").read())
details = compose_detail(cfg, uuid_dir, api)
commit_path = joinpaths(uuid_dir, "COMMIT")
if not os.path.exists(commit_path):
raise RuntimeError("Missing commit hash for %s" % uuid)
commit_id = open(commit_path, "r").read().strip()
info = {"id": uuid,
"config": cfg_dict,
"blueprint": frozen_dict,
"commit": commit_id,
"deps": deps_dict,
"compose_type": details["compose_type"],
"queue_status": details["queue_status"],
"image_size": details["image_size"],
}
if api == 1:
upload_uuids = uuid_get_uploads(cfg, uuid)
summaries = [upload.summary() for upload in get_uploads(cfg["upload"], upload_uuids)]
info["uploads"] = summaries
return info
def uuid_tar(cfg, uuid, metadata=False, image=False, logs=False):
"""Return a tar of the build data
:param cfg: Configuration settings
:type cfg: ComposerConfig
:param uuid: The UUID of the build
:type uuid: str
:param metadata: Set to true to include all the metadata needed to reproduce the build
:type metadata: bool
:param image: Set to true to include the output image
:type image: bool
:param logs: Set to true to include the logs from the build
:type logs: bool
:returns: A stream of bytes from tar
:rtype: A generator
:raises: RuntimeError if there was a problem (eg. missing config file)
    This yields an uncompressed tar's data to the caller, streaming the selected
    contents by returning the Popen stdout from the tar process.
"""
uuid_dir = joinpaths(cfg.get("composer", "lib_dir"), "results", uuid)
if not os.path.exists(uuid_dir):
raise RuntimeError("%s is not a valid build_id" % uuid)
# Load the compose configuration
cfg_path = joinpaths(uuid_dir, "config.toml")
if not os.path.exists(cfg_path):
raise RuntimeError("Missing config.toml for %s" % uuid)
cfg_dict = toml.loads(open(cfg_path, "r").read())
image_name = cfg_dict["image_name"]
def include_file(f):
if f.endswith("/logs"):
return logs
if f.endswith(image_name):
return image
return metadata
filenames = [os.path.basename(f) for f in glob(joinpaths(uuid_dir, "*")) if include_file(f)]
tar = Popen(["tar", "-C", uuid_dir, "-cf-"] + filenames, stdout=PIPE)
return tar.stdout
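# Sketch (hypothetical helper): stream a metadata+logs tar to disk in chunks,
# since uuid_tar() hands back the tar process's stdout instead of buffering
# the whole archive in memory.
def _example_save_tar(cfg, uuid, path):
    stream = uuid_tar(cfg, uuid, metadata=True, logs=True)
    with open(path, "wb") as f:
        for chunk in iter(lambda: stream.read(1024 * 1024), b""):
            f.write(chunk)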
def uuid_image(cfg, uuid):
"""Return the filename and full path of the build's image file
:param cfg: Configuration settings
:type cfg: ComposerConfig
:param uuid: The UUID of the build
:type uuid: str
:returns: The image filename and full path
:rtype: tuple of strings
:raises: RuntimeError if there was a problem (eg. invalid uuid, missing config file)
"""
uuid_dir = joinpaths(cfg.get("composer", "lib_dir"), "results", uuid)
return get_image_name(uuid_dir)
def get_image_name(uuid_dir):
"""Return the filename and full path of the build's image file
    :param uuid_dir: The directory containing the metadata and results for the build
    :type uuid_dir: str
:returns: The image filename and full path
:rtype: tuple of strings
:raises: RuntimeError if there was a problem (eg. invalid uuid, missing config file)
"""
uuid = os.path.basename(os.path.abspath(uuid_dir))
if not os.path.exists(uuid_dir):
raise RuntimeError("%s is not a valid build_id" % uuid)
# Load the compose configuration
cfg_path = joinpaths(uuid_dir, "config.toml")
if not os.path.exists(cfg_path):
raise RuntimeError("Missing config.toml for %s" % uuid)
cfg_dict = toml.loads(open(cfg_path, "r").read())
image_name = cfg_dict["image_name"]
return (image_name, joinpaths(uuid_dir, image_name))
def uuid_log(cfg, uuid, size=1024):
"""Return `size` KiB from the end of the most currently relevant log for a
given compose
:param cfg: Configuration settings
:type cfg: ComposerConfig
:param uuid: The UUID of the build
:type uuid: str
:param size: Number of KiB to read. Default is 1024
:type size: int
:returns: Up to `size` KiB from the end of the log
:rtype: str
:raises: RuntimeError if there was a problem (eg. no log file available)
This function will return the end of either the anaconda log, the packaging
log, or the combined composer logs, depending on the progress of the
    compose. It tries to return lines from the end of the log: it will attempt
    to start on a line boundary, and it may return less than `size` KiB.
"""
uuid_dir = joinpaths(cfg.get("composer", "lib_dir"), "results", uuid)
if not os.path.exists(uuid_dir):
raise RuntimeError("%s is not a valid build_id" % uuid)
# While a build is running the logs will be in /tmp/anaconda.log and when it
# has finished they will be in the results directory
status = uuid_status(cfg, uuid)
if status is None:
raise RuntimeError("Status is missing for %s" % uuid)
def get_log_path():
# Try to return the most relevant log at any given time during the
# compose. If the compose is not running, return the composer log.
anaconda_log = "/tmp/anaconda.log"
packaging_log = "/tmp/packaging.log"
combined_log = joinpaths(uuid_dir, "logs", "combined.log")
if status["queue_status"] != "RUNNING" or not os.path.isfile(anaconda_log):
return combined_log
if not os.path.isfile(packaging_log):
return anaconda_log
try:
anaconda_mtime = os.stat(anaconda_log).st_mtime
packaging_mtime = os.stat(packaging_log).st_mtime
# If the packaging log exists and its last message is at least 15
# seconds newer than the anaconda log, return the packaging log.
if packaging_mtime > anaconda_mtime + 15:
return packaging_log
return anaconda_log
except OSError:
# Return the combined log if anaconda_log or packaging_log disappear
return combined_log
try:
tail = read_tail(get_log_path(), size)
except OSError as e:
raise RuntimeError("No log available.") from e
return tail
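# Hypothetical usage: fetch and print the last 64 KiB of the most relevant log.
#
#     tail = uuid_log(cfg, build_id, size=64)
#     print(tail)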

File diff suppressed because it is too large

View File

@ -1,24 +0,0 @@
#
# Copyright (C) 2018 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
import re
# These are the characters that we allow to be passed in via the
# API calls.
VALID_API_STRING = re.compile(r'^[a-zA-Z0-9_,.:+*-]+$')
# These are the characters that we allow to be used in blueprint names.
VALID_BLUEPRINT_NAME = re.compile(r'^[a-zA-Z0-9._-]+$')
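# Illustrative matches for these patterns (example strings only):
#
#     VALID_API_STRING.match("http-server-1.0")         # matches
#     VALID_API_STRING.match("bad name!")               # None, space and '!' rejected
#     VALID_BLUEPRINT_NAME.match("example-http-server") # matches
#     VALID_BLUEPRINT_NAME.match("no:colons")           # None, ':' is API-string only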

View File

@ -1,103 +0,0 @@
#
# Copyright (C) 2017-2019 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
import logging
log = logging.getLogger("lorax-composer")
from collections import namedtuple
from flask import Flask, jsonify, redirect, send_from_directory
from glob import glob
import os
import werkzeug
from pylorax import vernum
from pylorax.api.errors import HTTP_ERROR
from pylorax.api.v0 import v0_api
from pylorax.api.v1 import v1_api
from pylorax.sysutils import joinpaths
GitLock = namedtuple("GitLock", ["repo", "lock", "dir"])
server = Flask(__name__)
__all__ = ["server", "GitLock"]
@server.route('/')
def server_root():
redirect("/api/docs/")
@server.route("/api/docs/")
@server.route("/api/docs/<path:path>")
def api_docs(path=None):
# Find the html docs
try:
# This assumes it is running from the source tree
docs_path = os.path.abspath(joinpaths(os.path.dirname(__file__), "../../../docs/html"))
except IndexError:
docs_path = glob("/usr/share/doc/lorax-*/html/")[0]
if not path:
path="index.html"
return send_from_directory(docs_path, path)
@server.route("/api/status")
def api_status():
"""
`/api/status`
^^^^^^^^^^^^^^^^
Return the status of the API Server::
{ "api": "0",
"build": "devel",
"db_supported": true,
"db_version": "0",
"schema_version": "0",
"backend": "lorax-composer",
"msgs": []}
The 'msgs' field can be a list of strings describing startup problems or status that
should be displayed to the user. eg. if the compose templates are not depsolving properly
the errors will be in 'msgs'.
"""
return jsonify(backend="lorax-composer",
build=vernum,
api="1",
db_version="0",
schema_version="0",
db_supported=True,
msgs=server.config["TEMPLATE_ERRORS"])
@server.errorhandler(werkzeug.exceptions.HTTPException)
def bad_request(error):
return jsonify(status=False, errors=[{ "id": HTTP_ERROR, "code": error.code, "msg": error.name }]), error.code
# Register the v0 API on /api/v0/
server.register_blueprint(v0_api, url_prefix="/api/v0/")
# Register the v1 API on /api/v1/
# Use v0 routes by default
skip_rules = [
"/compose",
"/compose/queue",
"/compose/finished",
"/compose/failed",
"/compose/status/<uuids>",
"/compose/info/<uuid>",
"/projects/source/info/<source_names>",
"/projects/source/new",
]
server.register_blueprint(v0_api, url_prefix="/api/v1/", skip_rules=skip_rules)
server.register_blueprint(v1_api, url_prefix="/api/v1/")
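# A minimal client sketch (hypothetical, standard library only) for querying
# /api/status over the Unix socket that lorax-composer.socket sets up
# (/run/weldr/api.socket by default).
import http.client
import socket as _socket

def _example_api_status(sock_path="/run/weldr/api.socket"):
    sock = _socket.socket(_socket.AF_UNIX, _socket.SOCK_STREAM)
    sock.connect(sock_path)
    conn = http.client.HTTPConnection("localhost")
    conn.sock = sock  # reuse the already-connected Unix socket
    conn.request("GET", "/api/status")
    return conn.getresponse().read()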

View File

@ -1,51 +0,0 @@
#
# Copyright (C) 2018 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
import time
from pylorax.sysutils import joinpaths
import pylorax.api.toml as toml
TS_CREATED = "created"
TS_STARTED = "started"
TS_FINISHED = "finished"
def write_timestamp(destdir, ty):
path = joinpaths(destdir, "times.toml")
try:
contents = toml.loads(open(path, "r").read())
except IOError:
contents = toml.loads("")
if ty == TS_CREATED:
contents[TS_CREATED] = time.time()
elif ty == TS_STARTED:
contents[TS_STARTED] = time.time()
elif ty == TS_FINISHED:
contents[TS_FINISHED] = time.time()
with open(path, "w") as f:
f.write(toml.dumps(contents))
def timestamp_dict(destdir):
path = joinpaths(destdir, "times.toml")
try:
return toml.loads(open(path, "r").read())
except IOError:
return toml.loads("")
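# Hypothetical usage sketch: record a timestamp, then read the whole set back.
#
#     write_timestamp(results_dir, TS_CREATED)
#     times = timestamp_dict(results_dir)
#     created = times.get(TS_CREATED)   # Unix timestamp, or None if never written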

View File

@ -1,42 +0,0 @@
#
# Copyright (C) 2019 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
import toml
class TomlError(toml.TomlDecodeError):
pass
def loads(s):
if isinstance(s, bytes):
s = s.decode('utf-8')
try:
return toml.loads(s)
except toml.TomlDecodeError as e:
raise TomlError(e.msg, e.doc, e.pos)
def dumps(o):
# strip the result, because `toml.dumps` adds a lot of newlines
return toml.dumps(o, encoder=toml.TomlEncoder(dict)).strip()
def load(file):
try:
return toml.load(file)
except toml.TomlDecodeError as e:
raise TomlError(e.msg, e.doc, e.pos)
def dump(o, file):
return toml.dump(o, file)
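# Round-trip sketch using the wrappers above (illustrative values only):
#
#     data = loads('name = "example"\nversion = "0.0.1"')
#     data["version"] = "0.0.2"
#     text = dumps(data)   # serialized without the extra trailing newlines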

View File

@ -1,49 +0,0 @@
#
# Copyright (C) 2019 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
""" API utility functions
"""
from pylorax.api.recipes import RecipeError, RecipeFileError, read_recipe_commit
def take_limits(iterable, offset, limit):
""" Apply offset and limit to an iterable object
:param iterable: The object to limit
:type iterable: iter
:param offset: The number of items to skip
:type offset: int
:param limit: The total number of items to return
:type limit: int
:returns: A subset of the iterable
"""
return iterable[offset:][:limit]
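# For example, take_limits(range(10), 2, 3) slices to range(2, 5), which lists
# as [2, 3, 4]; lists, strings, and other sliceable sequences behave the same way.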
def blueprint_exists(api, branch, blueprint_name):
"""Return True if the blueprint exists
:param api: flask object
:type api: Flask
:param branch: Branch name
:type branch: str
    :param blueprint_name: The name of the blueprint to read
    :type blueprint_name: str
"""
try:
with api.config["GITLOCK"].lock:
read_recipe_commit(api.config["GITLOCK"].repo, branch, blueprint_name)
return True
except (RecipeError, RecipeFileError):
return False

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -1,129 +0,0 @@
#
# Copyright (C) 2017 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
import os
from pylorax.api.recipes import recipe_filename, recipe_from_toml, RecipeFileError
from pylorax.sysutils import joinpaths
def workspace_dir(repo, branch):
"""Create the workspace's path from a Repository and branch
:param repo: Open repository
:type repo: Git.Repository
:param branch: Branch name
:type branch: str
:returns: The path to the branch's workspace directory
:rtype: str
"""
repo_path = repo.get_location().get_path()
return joinpaths(repo_path, "workspace", branch)
def workspace_read(repo, branch, recipe_name):
"""Read a Recipe from the branch's workspace
:param repo: Open repository
:type repo: Git.Repository
:param branch: Branch name
:type branch: str
:param recipe_name: The name of the recipe
:type recipe_name: str
:returns: The workspace copy of the recipe, or None if it doesn't exist
:rtype: Recipe or None
:raises: RecipeFileError
"""
ws_dir = workspace_dir(repo, branch)
if not os.path.isdir(ws_dir):
os.makedirs(ws_dir)
filename = joinpaths(ws_dir, recipe_filename(recipe_name))
if not os.path.exists(filename):
return None
try:
f = open(filename, 'rb')
recipe = recipe_from_toml(f.read().decode("UTF-8"))
except IOError:
raise RecipeFileError
return recipe
def workspace_write(repo, branch, recipe):
"""Write a recipe to the workspace
:param repo: Open repository
:type repo: Git.Repository
:param branch: Branch name
:type branch: str
:param recipe: The recipe to write to the workspace
:type recipe: Recipe
:returns: None
:raises: IO related errors
"""
ws_dir = workspace_dir(repo, branch)
if not os.path.isdir(ws_dir):
os.makedirs(ws_dir)
filename = joinpaths(ws_dir, recipe.filename)
open(filename, 'wb').write(recipe.toml().encode("UTF-8"))
def workspace_filename(repo, branch, recipe_name):
"""Return the path and filename of the workspace recipe
:param repo: Open repository
:type repo: Git.Repository
:param branch: Branch name
:type branch: str
:param recipe_name: The name of the recipe
:type recipe_name: str
:returns: workspace recipe path and filename
:rtype: str
"""
ws_dir = workspace_dir(repo, branch)
return joinpaths(ws_dir, recipe_filename(recipe_name))
def workspace_exists(repo, branch, recipe_name):
"""Return true of the workspace recipe exists
:param repo: Open repository
:type repo: Git.Repository
:param branch: Branch name
:type branch: str
:param recipe_name: The name of the recipe
:type recipe_name: str
:returns: True if the file exists
:rtype: bool
"""
return os.path.exists(workspace_filename(repo, branch, recipe_name))
def workspace_delete(repo, branch, recipe_name):
"""Delete the recipe from the workspace
:param repo: Open repository
:type repo: Git.Repository
:param branch: Branch name
:type branch: str
:param recipe_name: The name of the recipe
:type recipe_name: str
:returns: None
:raises: IO related errors
"""
filename = workspace_filename(repo, branch, recipe_name)
if os.path.exists(filename):
os.unlink(filename)
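# A round-trip sketch (hypothetical helper; assumes the recipe carries its name
# under the "name" key, as the Recipe objects used elsewhere in the API do):
def _example_workspace_roundtrip(repo, branch, recipe):
    workspace_write(repo, branch, recipe)
    assert workspace_exists(repo, branch, recipe["name"])
    ws_copy = workspace_read(repo, branch, recipe["name"])
    workspace_delete(repo, branch, recipe["name"])
    return ws_copy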

View File

@ -1,289 +0,0 @@
#!/usr/bin/python3
#
# lorax-composer
#
# Copyright (C) 2017-2018 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
import logging
log = logging.getLogger("lorax-composer")
program_log = logging.getLogger("program")
pylorax_log = logging.getLogger("pylorax")
server_log = logging.getLogger("server")
dnf_log = logging.getLogger("dnf")
lifted_log = logging.getLogger("lifted")
import grp
import os
import pwd
import sys
import subprocess
import tempfile
from threading import Lock
from gevent import socket
from gevent.pywsgi import WSGIServer
from pylorax import vernum, log_selinux_state
from pylorax.api.cmdline import lorax_composer_parser
from pylorax.api.config import configure, make_dnf_dirs, make_queue_dirs, make_owned_dir
from pylorax.api.compose import test_templates
from pylorax.api.dnfbase import DNFLock
from pylorax.api.queue import start_queue_monitor
from pylorax.api.recipes import open_or_create_repo, commit_recipe_directory
from pylorax.api.server import server, GitLock
import lifted.config
from lifted.queue import start_upload_monitor
VERSION = "{0}-{1}".format(os.path.basename(sys.argv[0]), vernum)
def setup_logging(logfile):
# Setup logging to console and to logfile
log.setLevel(logging.DEBUG)
pylorax_log.setLevel(logging.DEBUG)
lifted_log.setLevel(logging.DEBUG)
sh = logging.StreamHandler()
sh.setLevel(logging.INFO)
fmt = logging.Formatter("%(asctime)s: %(message)s")
sh.setFormatter(fmt)
log.addHandler(sh)
pylorax_log.addHandler(sh)
lifted_log.addHandler(sh)
fh = logging.FileHandler(filename=logfile)
fh.setLevel(logging.DEBUG)
fmt = logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
fh.setFormatter(fmt)
log.addHandler(fh)
pylorax_log.addHandler(fh)
lifted_log.addHandler(fh)
# External program output log
program_log.setLevel(logging.DEBUG)
logfile = os.path.abspath(os.path.dirname(logfile))+"/program.log"
fh = logging.FileHandler(filename=logfile)
fh.setLevel(logging.DEBUG)
fmt = logging.Formatter("%(asctime)s %(levelname)s: %(message)s")
fh.setFormatter(fmt)
program_log.addHandler(fh)
# Server request logging
server_log.setLevel(logging.DEBUG)
logfile = os.path.abspath(os.path.dirname(logfile))+"/server.log"
fh = logging.FileHandler(filename=logfile)
fh.setLevel(logging.DEBUG)
server_log.addHandler(fh)
# DNF logging
dnf_log.setLevel(logging.DEBUG)
logfile = os.path.abspath(os.path.dirname(logfile))+"/dnf.log"
fh = logging.FileHandler(filename=logfile)
fh.setLevel(logging.DEBUG)
fmt = logging.Formatter("%(asctime)s %(levelname)s: %(message)s")
fh.setFormatter(fmt)
dnf_log.addHandler(fh)
class LogWrapper(object):
"""Wrapper for the WSGIServer which only calls write()"""
def __init__(self, log_obj):
self.log = log_obj
def write(self, msg):
"""Log everything as INFO"""
self.log.info(msg.strip())
def make_pidfile(pid_path="/run/lorax-composer.pid"):
"""Check for a running instance of lorax-composer
:param pid_path: Path to the pid file
:type pid_path: str
:returns: False if there is already a running lorax-composer, True otherwise
:rtype: bool
This will look for an existing pid file, and if found read the PID and check to
see if it is really lorax-composer running, or if it is a stale pid.
It will create a new pid file if there isn't already one, or if the PID is stale.
"""
if os.path.exists(pid_path):
try:
pid = int(open(pid_path, "r").read())
cmdline = open("/proc/%s/cmdline" % pid, "r").read()
if "lorax-composer" in cmdline:
return False
except (IOError, ValueError):
pass
open(pid_path, "w").write(str(os.getpid()))
return True
if __name__ == '__main__':
# parse the arguments
opts = lorax_composer_parser().parse_args()
if opts.showver:
print(VERSION)
sys.exit(0)
tempfile.tempdir = opts.tmp
logpath = os.path.abspath(os.path.dirname(opts.logfile))
if not os.path.isdir(logpath):
os.makedirs(logpath)
setup_logging(opts.logfile)
log.debug("opts=%s", opts)
log_selinux_state()
if not make_pidfile():
log.error("PID file exists, lorax-composer already running. Quitting.")
sys.exit(1)
errors = []
# Check to make sure the user exists and get its uid
try:
uid = pwd.getpwnam(opts.user).pw_uid
except KeyError:
errors.append("Missing user '%s'" % opts.user)
# Check to make sure the group exists and get its gid
try:
gid = grp.getgrnam(opts.group).gr_gid
except KeyError:
errors.append("Missing group '%s'" % opts.group)
# No point in continuing if there are uid or gid errors
if errors:
for e in errors:
log.error(e)
sys.exit(1)
errors = []
# Check the socket path to make sure it exists, and that ownership and permissions are correct.
socket_dir = os.path.dirname(opts.socket)
if not os.path.exists(socket_dir):
# Create the directory and set permissions and ownership
os.makedirs(socket_dir, 0o750)
os.chown(socket_dir, 0, gid)
sockdir_stat = os.stat(socket_dir)
if sockdir_stat.st_mode & 0o007 != 0:
errors.append("Incorrect permissions on %s, no 'other' permissions are allowed." % socket_dir)
if sockdir_stat.st_gid != gid or sockdir_stat.st_uid != 0:
errors.append("%s should be owned by root:%s" % (socket_dir, opts.group))
# No point in continuing if there are ownership or permission errors
if errors:
for e in errors:
log.error(e)
sys.exit(1)
server.config["COMPOSER_CFG"] = configure(conf_file=opts.config)
server.config["COMPOSER_CFG"].set("composer", "tmp", opts.tmp)
# If the user passed in a releasever set it in the configuration
if opts.releasever:
server.config["COMPOSER_CFG"].set("composer", "releasever", opts.releasever)
# Override the default sharedir
if opts.sharedir:
server.config["COMPOSER_CFG"].set("composer", "share_dir", opts.sharedir)
# Override the config file's DNF proxy setting
if opts.proxy:
server.config["COMPOSER_CFG"].set("dnf", "proxy", opts.proxy)
# Override using system repos
if opts.no_system_repos:
server.config["COMPOSER_CFG"].set("repos", "use_system_repos", "0")
# Setup the lifted configuration settings
lifted.config.configure(server.config["COMPOSER_CFG"])
# Make sure the queue paths are setup correctly, exit on errors
errors = make_queue_dirs(server.config["COMPOSER_CFG"], gid)
if errors:
for e in errors:
log.error(e)
sys.exit(1)
# Make sure dnf directories are created (owned by user:group)
make_dnf_dirs(server.config["COMPOSER_CFG"], uid, gid)
# Make sure the git repo can be accessed by the API uid/gid
if os.path.exists(opts.BLUEPRINTS):
repodir_stat = os.stat(opts.BLUEPRINTS)
if repodir_stat.st_gid != gid or repodir_stat.st_uid != uid:
subprocess.call(["chown", "-R", "%s:%s" % (opts.user, opts.group), opts.BLUEPRINTS])
else:
make_owned_dir(opts.BLUEPRINTS, uid, gid)
# Did systemd pass any extra fds (for socket activation)?
try:
fds = int(os.environ['LISTEN_FDS'])
except (ValueError, KeyError):
fds = 0
if fds == 1:
# Inherit the fd passed by systemd
listener = socket.fromfd(3, socket.AF_UNIX, socket.SOCK_STREAM)
elif fds > 1:
log.error("lorax-composer only supports inheriting 1 fd from systemd.")
sys.exit(1)
else:
# Setup the Unix Domain Socket, remove old one, set ownership and permissions
if os.path.exists(opts.socket):
os.unlink(opts.socket)
listener = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
listener.bind(opts.socket)
os.chmod(opts.socket, 0o660)
os.chown(opts.socket, 0, gid)
listener.listen(socket.SOMAXCONN)
start_queue_monitor(server.config["COMPOSER_CFG"], uid, gid)
start_upload_monitor(server.config["COMPOSER_CFG"]["upload"])
# Change user and group on the main process. Note that this still happens even if
# --user and --group were passed in, but changing to the same user should be fine.
os.setgid(gid)
os.setuid(uid)
log.debug("user is now %s:%s", os.getresuid(), os.getresgid())
# Switch to a home directory we can access (libgit2 uses this to look for .gitconfig)
os.environ["HOME"] = server.config["COMPOSER_CFG"].get("composer", "lib_dir")
# Setup access to the git repo
server.config["REPO_DIR"] = opts.BLUEPRINTS
repo = open_or_create_repo(server.config["REPO_DIR"])
server.config["GITLOCK"] = GitLock(repo=repo, lock=Lock(), dir=opts.BLUEPRINTS)
# Import example blueprints
commit_recipe_directory(server.config["GITLOCK"].repo, "master", opts.BLUEPRINTS)
# Get a dnf.Base to share with the requests
try:
server.config["DNFLOCK"] = DNFLock(server.config["COMPOSER_CFG"])
except RuntimeError:
# Error has already been logged. Just exit cleanly.
sys.exit(1)
# Depsolve the templates and make a note of the failures for /api/status to report
with server.config["DNFLOCK"].lock:
server.config["TEMPLATE_ERRORS"] = test_templates(server.config["DNFLOCK"].dbo, server.config["COMPOSER_CFG"].get("composer", "share_dir"))
log.info("Starting %s on %s with blueprints from %s", VERSION, opts.socket, opts.BLUEPRINTS)
http_server = WSGIServer(listener, server, log=LogWrapper(server_log))
# The server writes directly to a file object, so point to our log directory
http_server.serve_forever()

View File

@ -1 +0,0 @@
d /run/weldr 750 root weldr

View File

@ -1,15 +0,0 @@
[Unit]
Description=Lorax Image Composer API Server
After=network-online.target
Wants=network-online.target
Documentation=man:lorax-composer(1),https://weldr.io/lorax/lorax-composer.html
[Service]
User=root
Type=simple
PIDFile=/run/lorax-composer.pid
ExecStartPre=/usr/bin/systemd-tmpfiles --create /usr/lib/tmpfiles.d/lorax-composer.conf
ExecStart=/usr/sbin/lorax-composer /var/lib/lorax/composer/blueprints/
[Install]
WantedBy=multi-user.target

View File

@ -1,13 +0,0 @@
[Unit]
Description=lorax-composer socket activation
Documentation=man:lorax-composer(1),https://weldr.io/lorax/lorax-composer.html
[Socket]
ListenStream=/run/weldr/api.socket
SocketUser=root
SocketGroup=weldr
SocketMode=0660
DirectoryMode=0750
[Install]
WantedBy=sockets.target

View File

@ -2,16 +2,12 @@ anaconda-tui
beakerlib
e2fsprogs
git
libgit2-glib
libselinux-python3
make
pbzip2
pykickstart
python3-ansible-runner
python3-coverage
python3-coveralls
python3-flask
python3-gevent
python3-librepo
python3-magic
python3-mako
@ -34,4 +30,3 @@ squashfs-tools
sudo
which
xz-lzma-compat
yamllint

View File

@ -16,10 +16,11 @@
#
import unittest
from pylorax.api.errors import INVALID_CHARS
from composer.cli.utilities import argify, toml_filename, frozen_toml_filename, packageNEVRA
from composer.cli.utilities import handle_api_result, get_arg
INVALID_CHARS = "InvalidChars"
class CliUtilitiesTest(unittest.TestCase):
def test_argify(self):
"""Convert an optionally comma-separated cmdline into a list of args"""

View File

@ -1,54 +0,0 @@
#
# Copyright (C) 2019 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# test profile settings for each provider
test_profiles = {
"aws": ["aws-profile", {
"aws_access_key": "theaccesskey",
"aws_secret_key": "thesecretkey",
"aws_region": "us-east-1",
"aws_bucket": "composer-mops"
}],
"azure": ["azure-profile", {
"resource_group": "production",
"storage_account_name": "HomerSimpson",
"storage_container": "plastic",
"subscription_id": "SpringfieldNuclear",
"client_id": "DonutGuy",
"secret": "I Like sprinkles",
"tenant": "Bart",
"location": "Springfield"
}],
"dummy": ["dummy-profile", {}],
"openstack": ["openstack-profile", {
"auth_url": "https://localhost/auth/url",
"username": "ChuckBurns",
"password": "Excellent!",
"project_name": "Springfield Nuclear",
"user_domain_name": "chuck.burns.localhost",
"project_domain_name": "springfield.burns.localhost",
"is_public": True
}],
"vsphere": ["vsphere-profile", {
"datacenter": "Lisa's Closet",
"datastore": "storage-crate-alpha",
"host": "marge",
"folder": "the.green.one",
"username": "LisaSimpson",
"password": "EmbraceNothingnes"
}]
}

View File

@ -1,52 +0,0 @@
#
# Copyright (C) 2019 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
import unittest
import lifted.config
import pylorax.api.config
class ConfigTestCase(unittest.TestCase):
def test_lifted_config(self):
"""Test lifted config setup"""
config = pylorax.api.config.configure(test_config=True)
lifted.config.configure(config)
self.assertTrue(config.get("upload", "providers_dir").startswith(config.get("composer", "share_dir")))
self.assertTrue(config.get("upload", "queue_dir").startswith(config.get("composer", "lib_dir")))
self.assertTrue(config.get("upload", "settings_dir").startswith(config.get("composer", "lib_dir")))
def test_lifted_sharedir_config(self):
"""Test lifted config setup with custom share_dir"""
config = pylorax.api.config.configure(test_config=True)
config.set("composer", "share_dir", "/custom/share/path")
lifted.config.configure(config)
self.assertEqual(config.get("composer", "share_dir"), "/custom/share/path")
self.assertTrue(config.get("upload", "providers_dir").startswith(config.get("composer", "share_dir")))
self.assertTrue(config.get("upload", "queue_dir").startswith(config.get("composer", "lib_dir")))
self.assertTrue(config.get("upload", "settings_dir").startswith(config.get("composer", "lib_dir")))
def test_lifted_libdir_config(self):
"""Test lifted config setup with custom lib_dir"""
config = pylorax.api.config.configure(test_config=True)
config.set("composer", "lib_dir", "/custom/lib/path")
lifted.config.configure(config)
self.assertEqual(config.get("composer", "lib_dir"), "/custom/lib/path")
self.assertTrue(config.get("upload", "providers_dir").startswith(config.get("composer", "share_dir")))
self.assertTrue(config.get("upload", "queue_dir").startswith(config.get("composer", "lib_dir")))
self.assertTrue(config.get("upload", "settings_dir").startswith(config.get("composer", "lib_dir")))

View File

@ -1,157 +0,0 @@
#
# Copyright (C) 2019 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
import os
import shutil
import tempfile
import unittest
import lifted.config
from lifted.providers import list_providers, resolve_provider, resolve_playbook_path, save_settings
from lifted.providers import load_profiles, validate_settings, load_settings, delete_profile
from lifted.providers import _get_profile_path
import pylorax.api.config
from pylorax.sysutils import joinpaths
from tests.lifted.profiles import test_profiles
class ProvidersTestCase(unittest.TestCase):
@classmethod
def setUpClass(self):
self.root_dir = tempfile.mkdtemp(prefix="lifted.test.")
self.config = pylorax.api.config.configure(root_dir=self.root_dir, test_config=True)
self.config.set("composer", "share_dir", os.path.realpath("./share/"))
lifted.config.configure(self.config)
@classmethod
def tearDownClass(self):
shutil.rmtree(self.root_dir)
def test_get_profile_path(self):
"""Make sure that _get_profile_path strips path elements from the input"""
path = _get_profile_path(self.config["upload"], "aws", "staging-settings", exists=False)
self.assertEqual(path, os.path.abspath(joinpaths(self.config["upload"]["settings_dir"], "aws/staging-settings.toml")))
path = _get_profile_path(self.config["upload"], "../../../../foo/bar/aws", "/not/my/path/staging-settings", exists=False)
self.assertEqual(path, os.path.abspath(joinpaths(self.config["upload"]["settings_dir"], "aws/staging-settings.toml")))
def test_list_providers(self):
p = list_providers(self.config["upload"])
self.assertEqual(p, ['aws', 'dummy', 'openstack', 'vsphere'])
def test_resolve_provider(self):
for p in list_providers(self.config["upload"]):
print(p)
info = resolve_provider(self.config["upload"], p)
self.assertTrue("display" in info)
self.assertTrue("supported_types" in info)
self.assertTrue("settings-info" in info)
def test_resolve_playbook_path(self):
for p in list_providers(self.config["upload"]):
print(p)
self.assertTrue(len(resolve_playbook_path(self.config["upload"], p)) > 0)
def test_resolve_playbook_path_error(self):
with self.assertRaises(RuntimeError):
resolve_playbook_path(self.config["upload"], "foobar")
def test_validate_settings(self):
for p in list_providers(self.config["upload"]):
print(p)
validate_settings(self.config["upload"], p, test_profiles[p][1])
def test_validate_settings_errors(self):
with self.assertRaises(ValueError):
validate_settings(self.config["upload"], "dummy", test_profiles["dummy"][1], image_name="")
with self.assertRaises(ValueError):
validate_settings(self.config["upload"], "aws", {"wrong-key": "wrong value"})
with self.assertRaises(ValueError):
validate_settings(self.config["upload"], "aws", {"secret": False})
# TODO - test regex, needs a provider with a regex
def test_save_settings(self):
"""Test saving profiles"""
for p in list_providers(self.config["upload"]):
print(p)
save_settings(self.config["upload"], p, test_profiles[p][0], test_profiles[p][1])
profile_dir = joinpaths(self.config.get("upload", "settings_dir"), p, test_profiles[p][0]+".toml")
self.assertTrue(os.path.exists(profile_dir))
# This *must* run after test_save_settings, _zz_ ensures that happens
def test_zz_load_profiles(self):
"""Test loading profiles"""
for p in list_providers(self.config["upload"]):
print(p)
profile = load_profiles(self.config["upload"], p)
self.assertTrue(test_profiles[p][0] in profile)
# This *must* run after test_save_settings, _zz_ ensures that happens
def test_zz_load_settings_errors(self):
"""Test returning the correct errors for missing profiles and providers"""
with self.assertRaises(ValueError):
load_settings(self.config["upload"], "", "")
with self.assertRaises(ValueError):
load_settings(self.config["upload"], "", "default")
with self.assertRaises(ValueError):
load_settings(self.config["upload"], "aws", "")
with self.assertRaises(RuntimeError):
load_settings(self.config["upload"], "foo", "default")
with self.assertRaises(RuntimeError):
load_settings(self.config["upload"], "aws", "missing-test")
# This *must* run after test_save_settings, _zz_ ensures that happens
def test_zz_load_settings(self):
"""Test loading settings"""
for p in list_providers(self.config["upload"]):
settings = load_settings(self.config["upload"], p, test_profiles[p][0])
self.assertEqual(settings, test_profiles[p][1])
# This *must* run after all the save and load tests, but *before* the actual delete test
# _zz_ ensures this happens
def test_zz_delete_settings_errors(self):
"""Test raising the correct errors when deleting"""
with self.assertRaises(ValueError):
delete_profile(self.config["upload"], "", "")
with self.assertRaises(ValueError):
delete_profile(self.config["upload"], "", "default")
with self.assertRaises(ValueError):
delete_profile(self.config["upload"], "aws", "")
with self.assertRaises(RuntimeError):
delete_profile(self.config["upload"], "aws", "missing-test")
# This *must* run after all the save and load tests, _zzz_ ensures this happens
def test_zzz_delete_settings(self):
"""Test raising the correct errors when deleting"""
# Ensure the profile is really there
settings = load_settings(self.config["upload"], "aws", test_profiles["aws"][0])
self.assertEqual(settings, test_profiles["aws"][1])
delete_profile(self.config["upload"], "aws", test_profiles["aws"][0])
with self.assertRaises(RuntimeError):
load_settings(self.config["upload"], "aws", test_profiles["aws"][0])

View File

@ -1,119 +0,0 @@
#
# Copyright (C) 2019 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
import os
import shutil
import tempfile
import unittest

import lifted.config
from lifted.providers import list_providers
from lifted.queue import _write_callback, create_upload, get_all_uploads, get_upload, get_uploads
from lifted.queue import ready_upload, reset_upload, cancel_upload
import pylorax.api.config

from tests.lifted.profiles import test_profiles


class QueueTestCase(unittest.TestCase):
    @classmethod
    def setUpClass(self):
        self.root_dir = tempfile.mkdtemp(prefix="lifted.test.")
        self.config = pylorax.api.config.configure(root_dir=self.root_dir, test_config=True)
        self.config.set("composer", "share_dir", os.path.realpath("./share/"))
        lifted.config.configure(self.config)

        self.upload_uuids = []

    @classmethod
    def tearDownClass(self):
        shutil.rmtree(self.root_dir)

    # This should run first, it writes uploads to the queue directory
    def test_00_create_upload(self):
        """Test creating an upload for each provider"""
        for p in list_providers(self.config["upload"]):
            print(p)
            upload = create_upload(self.config["upload"], p, "test-image", test_profiles[p][1])
            summary = upload.summary()
            self.assertEqual(summary["provider_name"], p)
            self.assertEqual(summary["image_name"], "test-image")
            self.assertEqual(summary["status"], "WAITING")

            self.upload_uuids.append(summary["uuid"])
        self.assertTrue(len(self.upload_uuids) > 0)
        self.assertEqual(len(self.upload_uuids), len(list_providers(self.config["upload"])))

    def test_01_get_all_uploads(self):
        """Test listing all the uploads"""
        uploads = get_all_uploads(self.config["upload"])

        # Should be one upload per provider
        providers = sorted([u.provider_name for u in uploads])
        self.assertEqual(providers, list_providers(self.config["upload"]))

    def test_02_get_upload(self):
        """Test listing specific uploads by uuid"""
        for uuid in self.upload_uuids:
            upload = get_upload(self.config["upload"], uuid)
            self.assertEqual(upload.uuid, uuid)

    def test_02_get_upload_error(self):
        """Test listing an unknown upload uuid"""
        with self.assertRaises(RuntimeError):
            get_upload(self.config["upload"], "not-a-valid-uuid")

    def test_03_get_uploads(self):
        """Test listing multiple uploads by uuid"""
        uploads = get_uploads(self.config["upload"], self.upload_uuids)
        uuids = sorted([u.uuid for u in uploads])
        self.assertEqual(uuids, sorted(self.upload_uuids))

    def test_04_ready_upload(self):
        """Test ready_upload"""
        ready_upload(self.config["upload"], self.upload_uuids[0], "image-test-path")
        upload = get_upload(self.config["upload"], self.upload_uuids[0])
        self.assertEqual(upload.image_path, "image-test-path")

    def test_05_reset_upload(self):
        """Test reset_upload"""
        # Set the status to FAILED so it can be reset
        upload = get_upload(self.config["upload"], self.upload_uuids[0])
        upload.set_status("FAILED", _write_callback(self.config["upload"]))

        reset_upload(self.config["upload"], self.upload_uuids[0])
        upload = get_upload(self.config["upload"], self.upload_uuids[0])
        self.assertEqual(upload.status, "READY")

    def test_06_reset_upload_error(self):
        """Test reset_upload raising an error"""
        with self.assertRaises(RuntimeError):
            reset_upload(self.config["upload"], self.upload_uuids[0])

    def test_07_cancel_upload(self):
        """Test cancel_upload"""
        cancel_upload(self.config["upload"], self.upload_uuids[0])
        upload = get_upload(self.config["upload"], self.upload_uuids[0])
        self.assertEqual(upload.status, "CANCELLED")

    def test_08_cancel_upload_error(self):
        """Test cancel_upload raises an error"""
        # Set the status to CANCELLED to make sure the cancel will fail
        upload = get_upload(self.config["upload"], self.upload_uuids[0])
        upload.set_status("CANCELLED", _write_callback(self.config["upload"]))
        with self.assertRaises(RuntimeError):
            cancel_upload(self.config["upload"], self.upload_uuids[0])

    # TODO test execute
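    # A sketch of what that test might look like, assuming an execute()
    # entry point in lifted.queue that runs the provider playbook for a
    # WAITING upload (exact name and signature unverified here):
    #
    #   def test_09_execute(self):
    #       upload = create_upload(self.config["upload"], "dummy", "test-image",
    #                              test_profiles["dummy"][1])
    #       execute(self.config["upload"], upload.uuid)
    #       upload = get_upload(self.config["upload"], upload.uuid)
    #       self.assertIn(upload.status, ["RUNNING", "FINISHED", "FAILED"])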

View File

@@ -1,126 +0,0 @@
#
# Copyright (C) 2019 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
import os
import shutil
import tempfile
import unittest

import lifted.config
from lifted.providers import list_providers, resolve_playbook_path, validate_settings
from lifted.upload import Upload
import pylorax.api.config

from tests.lifted.profiles import test_profiles


# Helper function for creating Upload object
def create_upload(ucfg, provider_name, image_name, settings, status=None, callback=None):
    validate_settings(ucfg, provider_name, settings, image_name)
    return Upload(
        provider_name=provider_name,
        playbook_path=resolve_playbook_path(ucfg, provider_name),
        image_name=image_name,
        settings=settings,
        status=status,
        status_callback=callback,
    )


class UploadTestCase(unittest.TestCase):
    @classmethod
    def setUpClass(self):
        self.root_dir = tempfile.mkdtemp(prefix="lifted.test.")
        self.config = pylorax.api.config.configure(root_dir=self.root_dir, test_config=True)
        self.config.set("composer", "share_dir", os.path.realpath("./share/"))
        lifted.config.configure(self.config)

    @classmethod
    def tearDownClass(self):
        shutil.rmtree(self.root_dir)

    def test_new_upload(self):
        for p in list_providers(self.config["upload"]):
            print(p)
            upload = create_upload(self.config["upload"], p, "test-image", test_profiles[p][1], status="READY")
            summary = upload.summary()
            self.assertEqual(summary["provider_name"], p)
            self.assertEqual(summary["image_name"], "test-image")
            self.assertEqual(summary["status"], "READY")

    def test_serializable(self):
        for p in list_providers(self.config["upload"]):
            print(p)
            upload = create_upload(self.config["upload"], p, "test-image", test_profiles[p][1], status="READY")
            self.assertEqual(upload.serializable()["settings"], test_profiles[p][1])
            self.assertEqual(upload.serializable()["status"], "READY")

    def test_summary(self):
        for p in list_providers(self.config["upload"]):
            print(p)
            upload = create_upload(self.config["upload"], p, "test-image", test_profiles[p][1], status="READY")
            self.assertEqual(upload.summary()["settings"], test_profiles[p][1])
            self.assertEqual(upload.summary()["status"], "READY")

    def test_set_status(self):
        for p in list_providers(self.config["upload"]):
            print(p)
            upload = create_upload(self.config["upload"], p, "test-image", test_profiles[p][1], status="READY")
            self.assertEqual(upload.summary()["status"], "READY")
            upload.set_status("WAITING")
            self.assertEqual(upload.summary()["status"], "WAITING")

    def test_ready(self):
        for p in list_providers(self.config["upload"]):
            print(p)
            upload = create_upload(self.config["upload"], p, "test-image", test_profiles[p][1], status="WAITING")
            self.assertEqual(upload.summary()["status"], "WAITING")
            upload.ready("test-image-path", status_callback=None)
            summary = upload.summary()
            self.assertEqual(summary["status"], "READY")
            self.assertEqual(summary["image_path"], "test-image-path")

    def test_reset(self):
        for p in list_providers(self.config["upload"]):
            print(p)
            upload = create_upload(self.config["upload"], p, "test-image", test_profiles[p][1], status="CANCELLED")
            upload.ready("test-image-path", status_callback=None)
            upload.reset(status_callback=None)
            self.assertEqual(upload.status, "READY")

    def test_reset_errors(self):
        for p in list_providers(self.config["upload"]):
            print(p)
            upload = create_upload(self.config["upload"], p, "test-image", test_profiles[p][1], status="WAITING")
            with self.assertRaises(RuntimeError):
                upload.reset(status_callback=None)

            upload = create_upload(self.config["upload"], p, "test-image", test_profiles[p][1], status="CANCELLED")
            with self.assertRaises(RuntimeError):
                upload.reset(status_callback=None)

    def test_cancel(self):
        for p in list_providers(self.config["upload"]):
            print(p)
            upload = create_upload(self.config["upload"], p, "test-image", test_profiles[p][1], status="WAITING")
            upload.cancel()
            self.assertEqual(upload.status, "CANCELLED")

    def test_cancel_error(self):
        for p in list_providers(self.config["upload"]):
            print(p)
            upload = create_upload(self.config["upload"], p, "test-image", test_profiles[p][1], status="CANCELLED")
            with self.assertRaises(RuntimeError):
                upload.cancel()

View File

@@ -1,9 +0,0 @@
#!/usr/bin/sh
# Lint every provider playbook. Track failures so that the script exits
# nonzero if any playbook fails to lint, not just the last one.
rc=0
for f in ./share/lifted/providers/*/playbook.yaml; do
    echo "linting $f"
    yamllint -c ./tests/yamllint.conf "$f" || rc=1
done
exit $rc

View File

@@ -1,18 +0,0 @@
name = "example-append"
description = "An example using kernel append customization"
version = "0.0.1"
[[packages]]
name = "tmux"
version = "*"
[[packages]]
name = "openssh-server"
version = "*"
[[packages]]
name = "rsync"
version = "*"
[customizations.kernel]
append = "nosmt=force"
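# The [customizations.kernel] append value is added to the image's kernel
# boot command line; nosmt=force permanently disables symmetric
# multithreading on the booted system.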

View File

@@ -1,11 +0,0 @@
name = "example-atlas"
description = "Automatically Tuned Linear Algebra Software"
version = "0.0.1"
[[modules]]
name = "atlas"
version = "*"
[[modules]]
name = "python3-numpy"
version = "*"

View File

@@ -1,45 +0,0 @@
name = "example-custom-base"
description = "A base system with customizations"
version = "0.0.1"
[[packages]]
name = "bash"
version = "*"
[customizations]
hostname = "custombase"
[[customizations.sshkey]]
user = "root"
key = "A SSH KEY FOR ROOT"
[[customizations.user]]
name = "widget"
description = "Widget process user account"
home = "/srv/widget/"
shell = "/usr/bin/false"
groups = ["dialout", "users"]
[[customizations.user]]
name = "admin"
description = "Widget admin account"
password = "$6$CHO2$3rN8eviE2t50lmVyBYihTgVRHcaecmeCk31LeOUleVK/R/aeWVHVZDi26zAH.o0ywBKH9Tc0/wm7sW/q39uyd1"
home = "/srv/widget/"
shell = "/usr/bin/bash"
groups = ["widget", "users", "students"]
uid = 1200
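# The "$6$..." value above is a crypt(3) SHA-512 password hash; password
# values that are not already hashed are treated as plain text, as in the
# "plain" user below. One way to generate such a hash is `openssl passwd -6`.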
[[customizations.user]]
name = "plain"
password = "simple plain password"
[[customizations.user]]
name = "bart"
key = "SSH KEY FOR BART"
groups = ["students"]
[[customizations.group]]
name = "widget"
[[customizations.group]]
name = "students"

View File

@@ -1,78 +0,0 @@
name = "example-development"
description = "A general purpose development image"
[[packages]]
name = "cmake"
version = "*"
[[packages]]
name = "curl"
version = "*"
[[packages]]
name = "file"
version = "*"
[[packages]]
name = "gcc"
version = "*"
[[packages]]
name = "gcc-c++"
version = "*"
[[packages]]
name = "gdb"
version = "*"
[[packages]]
name = "git"
version = "*"
[[packages]]
name = "glibc-devel"
version = "*"
[[packages]]
name = "gnupg2"
version = "*"
[[packages]]
name = "libcurl-devel"
version = "*"
[[packages]]
name = "make"
version = "*"
[[packages]]
name = "openssl-devel"
version = "*"
[[packages]]
name = "sqlite"
version = "*"
[[packages]]
name = "sqlite-devel"
version = "*"
[[packages]]
name = "sudo"
version = "*"
[[packages]]
name = "tar"
version = "*"
[[packages]]
name = "xz"
version = "*"
[[packages]]
name = "xz-devel"
version = "*"
[[packages]]
name = "zlib-devel"
version = "*"

View File

@@ -1,14 +0,0 @@
name = "example-glusterfs"
description = "An example GlusterFS server with samba"
[[modules]]
name = "glusterfs"
version = "*"
[[modules]]
name = "glusterfs-cli"
version = "*"
[[packages]]
name = "samba"
version = "*"

View File

@@ -1,35 +0,0 @@
name = "example-http-server"
description = "An example http server with PHP and MySQL support."
version = "0.0.1"
[[modules]]
name = "httpd"
version = "*"
[[modules]]
name = "mod_auth_openid"
version = "*"
[[modules]]
name = "mod_ssl"
version = "*"
[[modules]]
name = "php"
version = "*"
[[modules]]
name = "php-mysqlnd"
version = "*"
[[packages]]
name = "tmux"
version = "*"
[[packages]]
name = "openssh-server"
version = "*"
[[packages]]
name = "rsync"
version = "*"

View File

@@ -1,15 +0,0 @@
name = "example-jboss"
description = "An example jboss server"
version = "0.0.1"
[[modules]]
name = "jboss-servlet-3.1-api"
version = "*"
[[modules]]
name = "jboss-interceptors-1.2-api"
version = "*"
[[modules]]
name = "java-1.8.0-openjdk"
version = "*"

View File

@@ -1,27 +0,0 @@
name = "example-kubernetes"
description = "An example kubernetes master"
version = "0.0.1"
[[modules]]
name = "kubernetes"
version = "*"
[[modules]]
name = "docker"
version = "*"
[[modules]]
name = "docker-lvm-plugin"
version = "*"
[[modules]]
name = "etcd"
version = "*"
[[modules]]
name = "flannel"
version = "*"
[[packages]]
name = "oci-systemd-hook"
version = "*"

View File

@@ -1,6 +0,0 @@
[fake-repo-baseurl]
name = A fake repo with a baseurl
baseurl = https://fake-repo.base.url
sslverify = True
gpgcheck = True
skip_if_unavailable=1

View File

@@ -1,6 +0,0 @@
[fake-repo-gpgkey]
name = A fake repo with a gpgkey
baseurl = https://fake-repo.base.url
sslverify = True
gpgcheck = True
gpgkey = https://fake-repo.gpgkey

View File

@@ -1,6 +0,0 @@
[fake-repo-metalink]
name = A fake repo with a metalink
metalink = https://fake-repo.metalink
sslverify = True
gpgcheck = True
skip_if_unavailable=1

View File

@@ -1,6 +0,0 @@
[fake-repo-mirrorlist]
name = A fake repo with a mirrorlist
mirrorlist = https://fake-repo.mirrorlist
sslverify = True
gpgcheck = True
skip_if_unavailable=1

View File

@@ -1,6 +0,0 @@
[fake-repo-proxy]
name = A fake repo with a proxy
baseurl = https://fake-repo.base.url
proxy = https://fake-repo.proxy
sslverify = True
gpgcheck = True

View File

@@ -1,47 +0,0 @@
[lorax-1]
name=Lorax test repo 1
failovermethod=priority
baseurl=file:///tmp/lorax-empty-repo-1/
enabled=1
metadata_expire=7d
repo_gpgcheck=0
type=rpm
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
skip_if_unavailable=False
[lorax-2]
name=Lorax test repo 2
failovermethod=priority
baseurl=file:///tmp/lorax-empty-repo-2/
enabled=1
metadata_expire=7d
repo_gpgcheck=0
type=rpm
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
skip_if_unavailable=False
[lorax-3]
name=Lorax test repo 3
failovermethod=priority
baseurl=file:///tmp/lorax-empty-repo-3/
enabled=1
metadata_expire=7d
repo_gpgcheck=0
type=rpm
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
skip_if_unavailable=False
[lorax-4]
name=Lorax test repo 4
failovermethod=priority
baseurl=file:///tmp/lorax-empty-repo-4/
enabled=1
metadata_expire=7d
repo_gpgcheck=0
type=rpm
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
skip_if_unavailable=False

View File

@@ -1,11 +0,0 @@
[single-repo]
name=One repo in the file
failovermethod=priority
baseurl=file:///tmp/lorax-empty-repo/
enabled=1
metadata_expire=7d
repo_gpgcheck=0
type=rpm
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
skip_if_unavailable=False

View File

@@ -1,11 +0,0 @@
[other-repo]
name=Other repo
failovermethod=priority
baseurl=file:///tmp/lorax-other-empty-repo/
enabled=1
metadata_expire=7d
repo_gpgcheck=0
type=rpm
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
skip_if_unavailable=False

View File

@@ -1,11 +0,0 @@
[single-repo-duplicate]
name=single-repo-duplicate
failovermethod=priority
baseurl=file:///tmp/lorax-empty-repo/
enabled=1
metadata_expire=7d
repo_gpgcheck=0
type=rpm
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
skip_if_unavailable=False

View File

@@ -1 +0,0 @@
{'name': 'custom-base', 'description': 'A base system with customizations', 'version': '0.0.1', 'modules': [], 'packages': [{'name': 'bash', 'version': '5.0.*'}], 'groups': [], 'customizations': {'hostname': 'custombase', 'sshkey': [{'user': 'root', 'key': 'A SSH KEY FOR ROOT'}], 'kernel': {'append': 'nosmt=force'}, 'user': [{'name': 'admin', 'description': 'Administrator account', 'password': '$6$CHO2$3rN8eviE2t50lmVyBYihTgVRHcaecmeCk31L...', 'key': 'PUBLIC SSH KEY', 'home': '/srv/widget/', 'shell': '/usr/bin/bash', 'groups': ['widget', 'users', 'wheel'], 'uid': 1200, 'gid': 1200}], 'group': [{'name': 'widget', 'gid': 1130}], 'timezone': {'timezone': 'US/Eastern', 'ntpservers': ['0.north-america.pool.ntp.org', '1.north-america.pool.ntp.org']}, 'locale': {'languages': ['en_US.UTF-8'], 'keyboard': 'us'}, 'firewall': {'ports': ['22:tcp', '80:tcp', 'imap:tcp', '53:tcp', '53:udp'], 'services': {'enabled': ['ftp', 'ntp', 'dhcp'], 'disabled': ['telnet']}}, 'services': {'enabled': ['sshd', 'cockpit.socket', 'httpd'], 'disabled': ['postfix', 'telnetd']}}}

View File

@@ -1,51 +0,0 @@
name = "custom-base"
description = "A base system with customizations"
version = "0.0.1"
[[packages]]
name = "bash"
version = "5.0.*"
[customizations]
hostname = "custombase"
[[customizations.sshkey]]
user = "root"
key = "A SSH KEY FOR ROOT"
[customizations.kernel]
append = "nosmt=force"
[[customizations.user]]
name = "admin"
description = "Administrator account"
password = "$6$CHO2$3rN8eviE2t50lmVyBYihTgVRHcaecmeCk31L..."
key = "PUBLIC SSH KEY"
home = "/srv/widget/"
shell = "/usr/bin/bash"
groups = ["widget", "users", "wheel"]
uid = 1200
gid = 1200
[[customizations.group]]
name = "widget"
gid = 1130
[customizations.timezone]
timezone = "US/Eastern"
ntpservers = ["0.north-america.pool.ntp.org", "1.north-america.pool.ntp.org"]
[customizations.locale]
languages = ["en_US.UTF-8"]
keyboard = "us"
[customizations.firewall]
ports = ["22:tcp", "80:tcp", "imap:tcp", "53:tcp", "53:udp"]
[customizations.firewall.services]
enabled = ["ftp", "ntp", "dhcp"]
disabled = ["telnet"]
[customizations.services]
enabled = ["sshd", "cockpit.socket", "httpd"]
disabled = ["postfix", "telnetd"]

View File

@@ -1 +0,0 @@
{'description': u'An example http server with PHP and MySQL support.', 'packages': [{'version': u'6.6.*', 'name': u'openssh-server'}, {'version': u'3.0.*', 'name': u'rsync'}, {'version': u'2.2', 'name': u'tmux'}], 'groups': [], 'modules': [{'version': u'2.4.*', 'name': u'httpd'}, {'version': u'5.4', 'name': u'mod_auth_kerb'}, {'version': u'2.4.*', 'name': u'mod_ssl'}, {'version': u'5.4.*', 'name': u'php'}, {'version': u'5.4.*', 'name': u'php-mysql'}], 'version': u'0.0.1', 'name': u'http-server'}

View File

@@ -1,35 +0,0 @@
name = "http-server"
description = "An example http server with PHP and MySQL support."
version = "0.0.1"
[[modules]]
name = "httpd"
version = "2.4.*"
[[modules]]
name = "mod_auth_kerb"
version = "5.4"
[[modules]]
name = "mod_ssl"
version = "2.4.*"
[[modules]]
name = "php"
version = "5.4.*"
[[modules]]
name = "php-mysql"
version = "5.4.*"
[[packages]]
name = "tmux"
version = "2.2"
[[packages]]
name = "openssh-server"
version = "6.6.*"
[[packages]]
name = "rsync"
version = "3.0.*"

View File

@@ -1 +0,0 @@
{'description': u'An example e-mail server.', 'packages': [], 'groups': [{'name': u'mail-server'}], 'modules': [], 'version': u'0.0.1', 'name': u'mail-server'}

View File

@@ -1,6 +0,0 @@
name = "mail-server"
description = "An example e-mail server."
version = "0.0.1"
[[groups]]
name = "mail-server"

View File

@@ -1 +0,0 @@
{'description': u'An example http server with PHP and MySQL support.', 'packages': [], 'groups': [], 'modules': [], 'version': u'0.0.1', 'name': u'http-server'}

View File

@@ -1,3 +0,0 @@
name = "http-server"
description = "An example http server with PHP and MySQL support."
version = "0.0.1"

View File

@@ -1 +0,0 @@
{'description': u'An example http server with PHP and MySQL support.', 'packages': [], 'groups': [], 'modules': [{'version': u'2.4.*', 'name': u'httpd'}, {'version': u'5.4', 'name': u'mod_auth_kerb'}, {'version': u'2.4.*', 'name': u'mod_ssl'}, {'version': u'5.4.*', 'name': u'php'}, {'version': u'5.4.*', 'name': u'php-mysql'}], 'version': u'0.0.1', 'name': u'http-server'}

View File

@@ -1,23 +0,0 @@
name = "http-server"
description = "An example http server with PHP and MySQL support."
version = "0.0.1"
[[modules]]
name = "httpd"
version = "2.4.*"
[[modules]]
name = "mod_auth_kerb"
version = "5.4"
[[modules]]
name = "mod_ssl"
version = "2.4.*"
[[modules]]
name = "php"
version = "5.4.*"
[[modules]]
name = "php-mysql"
version = "5.4.*"

View File

@@ -1 +0,0 @@
{'description': u'An example http server with PHP and MySQL support.', 'packages': [{'version': u'6.6.*', 'name': u'openssh-server'}, {'version': u'3.0.*', 'name': u'rsync'}, {'version': u'2.2', 'name': u'tmux'}], 'groups': [], 'modules': [], 'version': u'0.0.1', 'name': u'http-server'}

View File

@@ -1,15 +0,0 @@
name = "http-server"
description = "An example http server with PHP and MySQL support."
version = "0.0.1"
[[packages]]
name = "tmux"
version = "2.2"
[[packages]]
name = "openssh-server"
version = "6.6.*"
[[packages]]
name = "rsync"
version = "3.0.*"

View File

@@ -1 +0,0 @@
{'description': u'An example http server with PHP and MySQL support.', 'packages': [], 'groups': [], 'modules': [{'version': u'2.4.*', 'name': u'httpd'}, {'version': u'5.4.*', 'name': u'php'}], 'version': u'0.0.1', 'name': u'http-server', 'repos': {'git': [{"rpmname": "server-config-files", "rpmversion": "1.0", "rpmrelease": "1", "summary": "Setup files for server deployment", "repo": "https://github.com/bcl/server-config-files", "ref": "v3.0", "destination": "/srv/config/"}]}}

View File

@@ -1,20 +0,0 @@
name = "http-server"
description = "An example http server with PHP and MySQL support."
version = "0.0.1"
[[modules]]
name = "httpd"
version = "2.4.*"
[[modules]]
name = "php"
version = "5.4.*"
[[repos.git]]
rpmname="server-config-files"
rpmversion="1.0"
rpmrelease="1"
summary="Setup files for server deployment"
repo="https://github.com/bcl/server-config-files"
ref="v3.0"
destination="/srv/config/"

View File

@@ -1,6 +0,0 @@
name = "bad-repo-1"
url = "file:///tmp/not-a-repo/"
type = "yum-baseurl"
check_ssl = true
check_gpg = true
gpgkey_urls = ["file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch"]
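# This source deliberately points at a nonexistent path so that tests can
# exercise the error handling for unreachable yum-baseurl repositories.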

Some files were not shown because too many files have changed in this diff.