Compare commits

...

301 Commits

Author SHA1 Message Date
8b79bf754d Merge pull request 'Update pungi.spec' (#17) from soksanichenko-patch-1 into master
Reviewed-on: #17
Reviewed-by: eabdullin <eabdullin@noreply.localhost>
2025-09-30 12:11:55 +00:00
875e663809 Update pungi.spec
Exclude i686
2025-09-30 12:08:21 +00:00
eabdullin
65986b8eaf Disable hfs by default 2025-09-30 14:07:08 +03:00
Lubomír Sedlář
bc12ed7d69 Release 4.10.1
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 2eb6fe34118c85a1e86e8482b44e61504e7cbf81)
2025-09-29 18:28:14 +03:00
Lubomír Sedlář
834ee63331 osbuild: Handle wsl2 images
The images are imported to Koji with type_name set to wsl. We need to
know about this so that the image is pulled into the compose, and also
translate the type into the correct value for productmd.

JIRA: RHELCMP-14724
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 3ff06bc8ea0345a3c96afe6c1f44a83615ed02d0)
2025-09-29 18:28:13 +03:00
Lubomír Sedlář
2fbc4e7bcb repoclosure: Clean up cache for dnf5
If `dnf repoclosure` actually calls dnf5, our cache cleanup is
ineffective, as dnf5 uses different locations than dnf4. Since it's not
easy to tell what `dnf` actually is, let's be safe and iterate over
both possibilities.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit a31f4233226d1df85d3f197ecc4b7fd47b827593)
2025-09-29 18:28:13 +03:00
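A minimal sketch of the dual cleanup described above, with illustrative cache paths (the real locations used by pungi differ):

    import shutil

    # Hypothetical cache locations: dnf4 and dnf5 keep their caches in
    # different places, so clean both rather than guessing which binary
    # `dnf` actually is.
    CACHE_DIRS = ["/tmp/pungi-dnf4-cache", "/tmp/pungi-dnf5-cache"]

    def clean_repoclosure_cache():
        for path in CACHE_DIRS:
            # ignore_errors: fine if the directory was never created
            shutil.rmtree(path, ignore_errors=True)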
Dominika Vesela
363a28f561 Ignore errors for rmtree after archive extraction
The folder was still present even though it was empty.
This looks like https://github.com/python/cpython/issues/128076.

Signed-off-by: Dominika Vesela <dhodovsk@redhat.com>
(cherry picked from commit 6569e5726298af8fdb5f6d54cd37b1b3d409de8d)
2025-09-29 18:28:12 +03:00
Simon de Vlieger
05ded4aaa8 imagebuilder: accept manifest_type
The `imagebuilder` phase was missing the `manifest_type` property in its
schema. While pungi (often) guesses the `manifest_type` correctly, it
doesn't do so for ostree installer images; thus the property needs to
be allowed.

This was an oversight as the phase implementation already looked for
this value in the configuration.

Signed-off-by: Simon de Vlieger <supakeen@redhat.com>
(cherry picked from commit 0ade83b0e9adc43b4e9fd38d536d647f42126688)
2025-09-29 18:28:12 +03:00
Lubomír Sedlář
f200e493ec Add a telemetry span over image building threads
Currently there is pretty much no structure under the top-level
run-compose span. It's a mess of random Koji calls. This change should
group spans related to a particular image build.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 661b75c6098182f652a724d3e89b913bac56e579)
2025-09-29 18:28:11 +03:00
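A sketch of what such a grouping span looks like with the OpenTelemetry Python API the surrounding commits use (the function and span names are illustrative, not pungi's):

    from opentelemetry import trace

    tracer = trace.get_tracer(__name__)

    def build_image(image_conf):
        # Hypothetical helper: everything recorded while this block is
        # active (e.g. Koji calls) becomes a child of this span.
        with tracer.start_as_current_span("build-image"):
            ...  # submit and watch the Koji task here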
Lubomír Sedlář
603527c6cc Add specific exception for skopeo copy
If the command fails on timeout, we can raise a more specific exception
instead of the generic RuntimeError.
runtime, but it will be beneficial for tracing as the error output will
be directly surfaced in the telemetry span.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 14e8665283f945a89d642d7da236f8a2ce854c89)
2025-09-29 18:28:11 +03:00
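A minimal sketch of such an exception; the class name and message are assumptions, not pungi's actual identifiers:

    class SkopeoCopyError(RuntimeError):
        """Hypothetical exception: raised instead of a bare
        RuntimeError when `skopeo copy` times out, so the error output
        can be surfaced directly in the telemetry span."""

        def __init__(self, output):
            super().__init__("skopeo copy timed out: %s" % output)
            self.output = output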
Lubomír Sedlář
d3bc078089 Release 4.10.0
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit c276a5a25abffcc990476267adf42125b87dee94)
2025-09-29 18:28:10 +03:00
Lubomír Sedlář
4d432fd385 Add more tracing to kojiwrapper
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 431cd630e93d8655329a0a1c77d291e7714c12c4)
2025-09-29 18:28:10 +03:00
Simon de Vlieger
85d7d19dc5 phases: implement image-builder
Implement a phase for the `imageBuilderBuild` task that is provided by
the `koji-image-builder` plugin, which schedules tasks to run with
`image-builder`.

This change is part of an accepted change proposal [1] for Fedora to use
`koji-image-builder` to build (some of) its variants.

[1]: https://fedoraproject.org/wiki/Changes/KojiLocalImageBuilder

Signed-off-by: Simon de Vlieger <supakeen@redhat.com>
(cherry picked from commit 69d87c27ff29b128aa8ff1e8aebd278a00d9fed8)
2025-09-29 18:28:09 +03:00
Lubomír Sedlář
84f7766dcf Add a tracing span around call to skopeo inspect
This call can fail and with a span we can get better visibility into
that.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 7b803d620f9951ef4ee1a17eeb5e46b8c9728e47)
2025-09-29 18:28:08 +03:00
Lubomír Sedlář
858c0ab252 Add retries to skopeo inspect calls
These can fail for any transient networking reason. Let's make a few
attempts before giving up.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 2c289729b9b24300f18ca5ca2eb1edd83da63a53)
2025-09-29 18:28:08 +03:00
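A sketch of the retry pattern described above; the attempt count and backoff are assumptions, not pungi's actual values:

    import subprocess
    import time

    def skopeo_inspect(ref, attempts=3):
        for i in range(attempts):
            try:
                return subprocess.run(
                    ["skopeo", "inspect", ref],
                    capture_output=True, text=True, check=True,
                ).stdout
            except subprocess.CalledProcessError:
                if i == attempts - 1:
                    raise
                time.sleep(2 ** i)  # simple exponential backoff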
Lubomír Sedlář
d818c781f9 Release 4.9.4
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit b2f82644e5ccf5b01e21412b7ed0f83a6d898249)
2025-09-29 18:28:07 +03:00
Lubomír Sedlář
2dd2b5c82a otel: Explicitly initialize telemetry provider and tracer
Doing this setup on import is simple, but it has issues if the pungi
code is directly imported into a different process.

Specifically, ODCS may have created its own provider and set things up
as needed, and then imports pungi, which tries to set a new provider.
This is prohibited by the SDK docs, and emits a warning. In reality it
causes spans to be attributed to the wrong service.

As a side effect, RequestsInstrumentor doesn't start, and so the parent
process will need to do that on its own instead of relying on the side
effect.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit e0a3343a4be96f9e284b752e5c5f31e02883a0a8)
2025-09-29 18:28:07 +03:00
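A sketch of moving provider setup from import time into an explicit, guarded call (function name is illustrative):

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider

    _initialized = False

    def init_tracing():
        # Hypothetical explicit entry point: pungi's CLI calls this,
        # while a host process such as ODCS that already installed its
        # own provider simply never does.
        global _initialized
        if _initialized:
            return
        trace.set_tracer_provider(TracerProvider())
        _initialized = True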
Lubomír Sedlář
65c507e2c0 Release 4.9.3
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 73408f081de82e865e5df8ea28b513ca311509bb)
2025-09-29 18:28:07 +03:00
Lubomír Sedlář
2d48a341a6 Recognize wsl2 images produced by koji
The image type was added in productmd 1.45, so we should also require
that version.

Merges: https://pagure.io/pungi/pull-request/1841
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 668547ed3f8da2528fbeb1b12040182ca02e7f31)
2025-09-29 18:28:05 +03:00
Lubomír Sedlář
384a7a0bee Specify data_files with relative paths
The paths should be relative to sys.prefix, which happens to be /usr in
the RPM world. This change should make installation with
%pyproject_install macro from a generated wheel work correctly.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 1116e1f278ea547cf3bf9265eba70d7a8aa4795e)
2025-09-29 18:28:05 +03:00
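A setup.py sketch of the difference; the file list is illustrative:

    from setuptools import setup

    setup(
        name="pungi",
        # Relative paths resolve against sys.prefix (/usr in the RPM
        # world); absolute paths break %pyproject_install from a wheel.
        data_files=[
            ("share/doc/pungi", ["README.md"]),
        ],
    )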
Andrew Hills
466e924e6f Crossreference koji_cache from the Koji cache page
Signed-off-by: Andrew Hills <ahills@redhat.com>
(cherry picked from commit 33bb36814903fe2990f1a48e21a7c456d0c16a9e)
2025-09-29 18:28:04 +03:00
Andrew Hills
b0e025cea1 Add documentation for koji_cache configuration
Signed-off-by: Andrew Hills <ahills@redhat.com>
(cherry picked from commit 18ed52d10b35d2a0718588279a4419db8358cbf5)
2025-09-29 18:28:04 +03:00
Lubomír Sedlář
e765db157f linker: Drop ability to link dirs recursively
Nothing in the code base uses this functionality, and the semantics are
not well defined anyway when it comes to symlinks.

Now the tests are failing in the Python 3.14 rebuild when hardlinking
symlinks. Rather than trying to fix the unused code, we can just drop
it.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2367780
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit ab11e0e4a9e0b70e1b78399931d357f92f3a27aa)
2025-09-29 18:28:03 +03:00
Lubomír Sedlář
37479bbc6a Record exceptions for top level OTel span
If there is an exception in the code, the cli_main function captures it,
saves the traceback and exits the process.

With the original tracing span, the instrumentation never saw the actual
exception, only SystemExit. This meant the span was not recorded as
failed. (Technically python-opentelemetry 1.31.0 does record it, but
that change was reverted in 1.32.0.)

It is somewhat tricky to structure the code so that the exception is
recorded implicitly. The status update to DOOMED must happen inside the
span (in order to propagate it to the trace). Thus a new function is
exported from the tracing module to record the exception explicitly
before it gets discarded and replaced with the exit.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit d3630bfa6f8dc5ccf3dcef9cb4a947d82d7f09b8)
2025-09-29 18:28:03 +03:00
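A sketch of such an explicit recording helper using the standard OpenTelemetry API (the helper name is an assumption):

    from opentelemetry import trace
    from opentelemetry.trace import Status, StatusCode

    def record_failure(exc):
        # Hypothetical helper as described above: called from cli_main
        # while the span is still active, before the traceback is
        # discarded and replaced with SystemExit.
        span = trace.get_current_span()
        span.record_exception(exc)
        span.set_status(Status(StatusCode.ERROR, str(exc)))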
Lubomír Sedlář
5cf13491df Make black happy
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 16eda470c9a0bc2aa94c0637619537fae8de6778)
2025-09-29 18:28:03 +03:00
Lubomír Sedlář
7a4bd62978 Fix jenkins tests
Defining a variable at the top level now causes the pipeline to not
execute anything and just report success.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 912baf119879e8d5b113e64b9a98e169ef3aa516)
2025-09-29 18:28:02 +03:00
Lubomír Sedlář
2c80051446 Release 4.9.2
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 0badc6f3c2d63ee26766f6a21ef2722d3d716840)
2025-09-29 18:28:01 +03:00
Lubomír Sedlář
dcd7e5ff2a Drop compatibility with Koji < 1.32
Version 1.32, with the checksum API, was released more than two years
ago.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 65336b406c01a99ccc9dabd011cc969b4395e106)
2025-09-29 18:28:01 +03:00
Lubomír Sedlář
fc2cc0073a kiwibuild: Add support for use_buildroot_repo option
This option can be set for a particular image or globally for all
kiwibuild images (with per-image override).

Fixes: https://pagure.io/pungi/issue/1833
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit d91adfd34d9bd97e2cc73a6023924f3f7b73cef4)
2025-09-29 18:28:01 +03:00
Lubomír Sedlář
8c897dda71 gather: Resolve symlinks before linking packages
If we happen to have a symlink to an RPM that should be linked into the
compose, we should first resolve it to the actual path. This avoids a
problem if the symlink is relative, as otherwise Pungi would copy/link
the actual relative symlink, which would break it in the new location.

If the path is not a symlink, resolving the real path should make no
difference.

JIRA: RHELCMP-14504
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 859b968483b75f630bb9c7e15a90767eb0b68d95)
2025-09-29 18:28:01 +03:00
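In plain Python terms the fix amounts to resolving the path first (a sketch; the function name is illustrative):

    import os

    def resolve_package_path(path):
        # A relative symlink copied verbatim would dangle in the
        # compose tree; realpath returns the actual target instead.
        # For a regular file this is effectively a no-op.
        return os.path.realpath(path)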
Lubomír Sedlář
027eb45b1d Make requests instrumentation optional
Even if basic otel dependencies are available, this instrumentor is a
separate dependency which may be missing.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 004f357acb0c6907e7cd06a9e6cc6a547e67e070)
2025-09-29 18:28:01 +03:00
Lubomír Sedlář
7bcf9df307 Release 4.9.1
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit e13504296716dbd3900e2860d2aa9dbd4d8af875)
2025-09-29 18:28:01 +03:00
Lubomír Sedlář
0e203553fa util: Fix typo in regex for container digests
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit ab4b221e6465ca387cd4c0c5928f133737cc0905)
2025-09-29 18:28:00 +03:00
Lubomír Sedlář
45c3b1d9b1 Resolve container tags to digests
When the compose is configured to include any container image, it just
followed the provided URL. This is not particularly reproducible. If the
image spec contains a tag, it may point to different images at different
time.

This commit adds a step to validating the configuration that will query
the registry and replace the tag with a digest.

This makes it more reproducible, and also fixes a problem where changing
a container image would not block ISO reuse. There's still a chance of a
non-container file changing without blocking the reuse, but that is not
very common.

JIRA: RHELCMP-14381
JIRA: RHELCMP-14465
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 3ed09991c17c05551ea2d86286a72d13c726439f)
2025-09-29 18:27:59 +03:00
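A sketch of one way to do the tag-to-digest rewrite, assuming skopeo's Go-template output exposes .Digest (pungi's actual implementation may differ):

    import subprocess

    def pin_to_digest(image):
        # "registry.example.com/app:latest"
        #   -> "registry.example.com/app@sha256:..."
        digest = subprocess.run(
            ["skopeo", "inspect", "--format", "{{.Digest}}",
             "docker://" + image],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return image.rsplit(":", 1)[0] + "@" + digest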
Lubomír Sedlář
4bfbe8afc2 kojiwrapper: Remove unused code
These methods were used in the live_images phase that doesn't exist
anymore.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 252044a9118d0a2ccffc91dde64226a9901a9d6b)
2025-09-29 18:27:59 +03:00
Lubomír Sedlář
feffd284a4 Add basic telemetry support
This patch adds support for Opentelemetry. If
OTEL_EXPORTER_OTLP_ENDPOINT env variable is defined, it will send traces
there. Otherwise there is no change.

The whole compose is wrapped in a single span. Nested under that are
spans for operations that involve a remote server.

* Talking to CTS
* Sending API requests to Koji
* Any git repo clone

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit c15ddbc946cc6a820dfb2f0bbacb72ca118100ba)
2025-09-29 18:27:59 +03:00
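A sketch of the opt-in behavior described above, using the standard OpenTelemetry SDK and OTLP exporter packages (the exact wiring in pungi may differ):

    import os

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
        OTLPSpanExporter,
    )

    if os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT"):
        provider = TracerProvider()
        # The exporter picks the endpoint up from the environment.
        provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
        trace.set_tracer_provider(provider)
    # Without the variable set, nothing changes.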
Haibo Lin
49a3e6cd12 Reorder ostree and ostree_installer phases
The osbuild phase needs to wait for the ostree phase in some cases. This
patch makes the various image build phases wait for the ostree phase. It
may introduce some slowdown, but it's still faster than the version
before PR#1790.

JIRA: RHELCMP-14349
Fixes: https://pagure.io/pungi/issue/1816
Signed-off-by: Haibo Lin <hlin@redhat.com>
(cherry picked from commit b3e0b6d7b73c48588b9aacd933f3e0e8ae3506ac)
2025-09-29 18:27:16 +03:00
Lubomír Sedlář
545215da19 Fix test data generation script
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 6a9c8551d2449b4a9581b403562338be765861a5)
2025-09-29 18:27:16 +03:00
Lubomír Sedlář
74ceea10ba extra_isos: Mention all extra files in the manifest
When container images are downloaded, they would be missing from the
extra_files.json manifest. This patch fixes that by enumerating all
files rather than relying on the getter to return a list.

JIRA: RHELCMP-14406
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit cb0399238e097c6917ffa847f546ff01fdff7599)
2025-09-29 18:27:16 +03:00
Lubomír Sedlář
64e1c30100 scm: Add retries to container-image download
If all retries fail, let's also log the error output.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit b99bcfb5eebf0d14d284fa0ab1bc2631d9e14ae3)
2025-09-29 18:27:15 +03:00
Lubomír Sedlář
4f53c5257d Release 4.9.0
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit e33ac15d9972f0ad83d4c639e5af6e7fe96aad62)
2025-09-29 18:27:15 +03:00
Haibo Lin
136a02bdbb scm: Fix git clone issue for git+http protocol
`git clone` fails if the URL is specified with the git+http scheme.

    git: 'remote-git+http' is not a git command. See 'git --help'.

JIRA: RHELCMP-14340
Signed-off-by: Haibo Lin <hlin@redhat.com>
(cherry picked from commit 1a594e4148c409fc5383fd0a4b0e7ba04d13ec1c)
2025-09-29 18:27:15 +03:00
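The usual fix, sketched below (pungi's actual handling may differ): strip the pip-style prefix before handing the URL to git.

    def normalize_git_url(url):
        # git itself understands http(s)://, but not the git+http://
        # scheme; strip the prefix before running `git clone`.
        if url.startswith("git+"):
            return url[len("git+"):]
        return url

    # normalize_git_url("git+http://example.com/repo.git")
    #   -> "http://example.com/repo.git"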
Lubomír Sedlář
a6e7828033 Make black happy
The latest version seems to want escape sequences written in lowercase.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit fc0de97c5e8e11527858e6d835525ded28d2501e)
2025-09-29 18:27:15 +03:00
Lubomír Sedlář
6891038eb8 buildinstall: Add support for rootfs-type lorax option
JIRA: ENGCMP-5117
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 3c6298ee28b32a41ee458975730f246fd9284f93)
2025-09-29 18:27:15 +03:00
Lubomír Sedlář
dd8d22f0e3 scm: Stop trying to download src arch
This simplifies configuring extra ISOs by avoiding failures when
downloading non-existent images.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit b3a316776e878d56c683c6558316d0c578c65992)
2025-09-29 18:27:15 +03:00
Lubomír Sedlář
cdc275741b extra_isos: Provide arch to extra files getter
The getter already runs once per architecture; it just doesn't make
the information available to the scm wrapper.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 46d6c48e0a03146f05a996a9529cffbcbcc8447c)
2025-09-29 18:27:13 +03:00
Lubomír Sedlář
a034b8b977 Move temporary buildinstall download to work/
The files should always be cleaned up immediately after the archive is
extracted, but we are seeing them being left behind for some reason.

With this patch, even if the data is not cleaned up, it will not clog up
/tmp and will eventually be deleted together with the compose.

JIRA: RHELCMP-14319
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit e4d1bd4783de28b34ec289d6218205756ee916ad)
2025-09-29 18:27:13 +03:00
Adam Williamson
f3dcb036a5 Protect against decoding errors with subprocess text mode
All these are calling subprocess in 'text mode', where it will
try to decode stdout/stderr using the default encoding (utf-8
for us). If the output doesn't decode, subprocess raises an exception
that kobo doesn't handle; it just passes it along to us, so things
blow up - see https://pagure.io/releng/issue/12474. To
avoid this, let's set `errors="replace"`, which tells the decoder
to replace invalid data with ? characters. This way we should get
as much of the output as can be read, and no crashes.

We also replace `universal_newlines=True` with `text=True`, as the
latter is shorter, clearer, and what Python 3 subprocess wants us to
use; it considers `universal_newlines` to be just a
backwards-compatibility thing - "The universal_newlines argument is
equivalent to text and is provided for backwards compatibility".

Signed-off-by: Adam Williamson <awilliam@redhat.com>
Merges: https://pagure.io/pungi/pull-request/1812
(cherry picked from commit 2d16a3af004f61cf41e4eb2e5e694bb46a5d3cda)
2025-09-29 18:27:13 +03:00
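In plain subprocess terms (the commands here go through kobo in pungi), the change amounts to:

    import subprocess

    # Hypothetical command; the point is the keyword arguments.
    result = subprocess.run(
        ["some-command"],
        capture_output=True,
        text=True,          # replaces universal_newlines=True
        errors="replace",   # undecodable bytes are replaced instead
                            # of raising UnicodeDecodeError
    )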
Adam Williamson
e59566feb2 Revert "Avoid to crash on unicode decoding errors"
This reverts commit 7d8f3b4b9b2cf65967b4d3f8dd249aec2e3cbbf8. It
doesn't really fix the problem. A better fix follows.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 98e3b3f8c410943c6dbeb21ebca9934b60a30f2f)
2025-09-29 18:27:13 +03:00
Lubomír Sedlář
ed0713c572 Download extra files from container registry
This could be useful for handling flatpak applications in the installer.

All of the specified containers are downloaded into a single OCI
layout.

JIRA: RHELCMP-14302
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 3d5348a6728b4d01cf8770494902e64c99e21a14)
2025-09-29 18:27:12 +03:00
Haibo Lin
e550458c9f Avoid to crash on unicode decoding errors
As kobo.shortcuts.run can't handle binary output correctly, it causes
pungi-make-ostree to crash when rpm-ostree outputs unexpected characters.

JIRA: RHELCMP-14253
Fixes: https://pagure.io/releng/issue/12474
Signed-off-by: Haibo Lin <hlin@redhat.com>
(cherry picked from commit 7d8f3b4b9b2cf65967b4d3f8dd249aec2e3cbbf8)
2025-09-29 18:27:12 +03:00
Lubomír Sedlář
c2852f7034 Remove python 2.7 dependencies from setup.py
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit b058f64abe797b78059d084a3af5d9f7e4049d88)
2025-09-29 18:27:12 +03:00
Lubomír Sedlář
6a293639cf util: Drop dead code
These functions were only used in the legacy pungi.gather module that
has since been removed.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 8a36744f02040108cbe7e6d590984b3cf8e53b40)
2025-09-29 18:26:36 +03:00
Adam Williamson
ac7e1e515e Use new container and bootable-container productmd types
In https://github.com/release-engineering/productmd/pull/181 I
added new `bootable-container` and `container` types to
productmd. This makes pungi always use the bootable-container
type for ostree_container images (previously 'ociarchive'), and
default to using the container type for Kiwi-built oci.tar.xz
container images (previously 'docker').

This is a significant change for anything that relies on
productmd/fedfind conventions to 'identify' images, as these
images will now have a different identity. But I think it's a
valuable improvement in their identities. 'ociarchive' never made
any sense as an image 'type' - it's a format - and 'docker'
wasn't a very good type for images that are explicitly OCI
container images, not Docker-native ones. We also can now easily
distinguish between 'regular' container images and ones that are
intended to be bootable.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
(cherry picked from commit 3cb8992d56f2cee8a7cb151253125e30931ccd6d)
2025-09-29 18:26:35 +03:00
Lubomír Sedlář
fddce94704 Directly import mock from unittest
It has not been a separate package since Python 3.3.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit 3987688de6720d951bfeb0b49c364df9738b490b)
2025-09-29 18:26:35 +03:00
Lubomír Sedlář
26959621a6 Release 4.8.0
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 7d512293611eb91d005226974c14a42a1bc44dc1)
2025-09-29 18:23:51 +03:00
Lubomír Sedlář
74db11a836 Remove python 2.7 from tox configuration
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit af10ab312b248052e987ee2c50650e2db85be5e4)
2025-09-29 18:23:45 +03:00
Lubomír Sedlář
e98dd56fce Remove forgotten multilib module for yum
There is no yum anymore. This was also the only user of the
pathmatch module, which is thus also removed.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 989a9c2565ab9466b55d6edc056973c3080dfeae)
2025-09-29 18:23:44 +03:00
Lubomír Sedlář
4ff13b1993 Drop usage of six
We no longer need to support Python 2, so there's no point in this
compatibility layer.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit b34de57813187f1781aef733468c9745a144d9af)
2025-09-29 18:23:44 +03:00
Lubomír Sedlář
b044ebdba1 Ensure ostree phase threads are stopped
The ostree phase now runs in parallel with a lot of other stuff. If
there's any error while the phase is running, the compose would be
aborted but the ostree threads wouldn't be stopped automatically. With
the threads left alive, the process would never finish.

This patch makes sure that whatever happens in the other code, we always
stop the ostree phases.

Fixes: https://pagure.io/pungi/issue/1799
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 8558b74d7810dd92144924542a63bae7b1999bd3)
2025-09-29 18:19:15 +03:00
Lubomír Sedlář
f8932bc1f4 scm: Clone git submodules
If the repo contains a .gitmodules file, run the commands to clone all
submodules.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 6d1428ab89de6ffa5c18466a469606887a0300b8)
2025-09-29 18:19:14 +03:00
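A sketch of the behavior described above (function name is illustrative):

    import os
    import subprocess

    def clone_with_submodules(url, dest):
        subprocess.run(["git", "clone", url, dest], check=True)
        # If the repo declares submodules, fetch them too.
        if os.path.exists(os.path.join(dest, ".gitmodules")):
            subprocess.run(
                ["git", "submodule", "update", "--init", "--recursive"],
                cwd=dest, check=True,
            )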
Lubomír Sedlář
755004af02 Drop unittest2
The library is imported if available, but we never build in any
environment where the package would be installed. It was last used for
RHEL 6 builds.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit d95d1f59e2ae243ea794c5f5613fef3249b4fad6)
2025-09-29 18:19:14 +03:00
Adam Williamson
567baed60f kiwibuild: extend productmd type/format detection for FEX images
Signed-off-by: Adam Williamson <awilliam@redhat.com>
(cherry picked from commit eb4ba5f637153f0037f05981adea8b35fc0f6b25)
2025-09-29 18:16:18 +03:00
Lubomír Sedlář
2e9baeaf51 Remove pungi/gather.py and associated code
This commit completely drops support for Yum as a depsolving/repoclosure
backend.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit f5702e4c9d0d5d9d31421d3d47200581e41f02bf)
2025-09-29 18:16:18 +03:00
Adam Williamson
4454619be6 Reduce legacy pungi script to gather phase only (#1792)
This reduces the legacy 'pungi' script to only its gather phase,
and removes related stuff in gather.py. The gather phase is used
in the yum path through phases/gather/methods/method_deps.py, so
it cannot be entirely removed until all users of that are gone.
But we can at least get rid of the non-Koji support for creating
install trees, ISOs and repos.

Merges: https://pagure.io/pungi/pull-request/1793
Signed-off-by: Adam Williamson <awilliam@redhat.com>
(cherry picked from commit 3bc35a9a271c50ca093b186938eae7cbc1bbd3de)
2025-09-29 18:15:21 +03:00
Adam Williamson
4f69f6c242 pkgset: optimize cache check (saves 20 minutes)
The pkgset phase takes around 35 minutes in current composes.
Around 20 minutes of that is spent creating these per-arch
subsets of the global package set. In a rather roundabout way
(see #1794 ), I figured out that almost all of this time is
spent in this cache check, which is broken for a subtle reason.

Python's `in` keyword works by first attempting to call the
container's magic `__contains__` method. If the container does
not implement `__contains__`, it falls back to iteration - it
tries to iterate over the container until it either hits what
it's looking for, or runs out. (If the container implements
neither, you get an error).

The FileCache instance's `file_cache` is a plain Python dict.
dicts have a very efficient `__contains__` implementation, so
doing `foo in (somedict)` is basically always very fast no matter
how huge the dict is. FileCache itself, though, implements
`__iter__` by returning an iterator over the `file_cache` dict's
keys, but it does *not* implement `__contains__`. So when we do
`foo in self.file_cache`, Python has to iterate over every key
in the dict until it hits foo or runs out. This is massively
slower than `foo in self.file_cache.file_cache`, which uses the
efficient `__contains__` method.

Because these package sets are so huge, and we're looping over
*one* huge set and checking each package from it against the cache
of another, increasingly huge, set, this effect becomes massive.
To make it even worse, I ran a few tests where I added a debug log
if we ever hit the cache, and it looks like we never actually do -
so every check has to iterate through the entire dict.

We could probably remove this entirely, but changing it to check
the dict instead of the FileCache instance makes it just about as
fast as taking it out, so I figured let's go with that in case
there's some unusual scenario in which the cache does work here.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
(cherry picked from commit c8fe99b1aa5a9a9b941b7515cda367d24829dedf)
2025-09-29 18:15:21 +03:00
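The effect in miniature, with a minimal stand-in for kobo's FileCache:

    class FileCache:
        # Iterable, but without a __contains__ of its own.
        def __init__(self):
            self.file_cache = {}  # path -> parsed package object

        def __iter__(self):
            return iter(self.file_cache)

    cache = FileCache()
    "foo.rpm" in cache             # slow: falls back to __iter__, O(n)
    "foo.rpm" in cache.file_cache  # fast: dict __contains__, O(1)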
Lubomír Sedlář
37f9f1fcaf Install dnf4 into test image
The fedora:latest image is now based on 41, and contains dnf5. This is
causing some tests to fail due to failing imports of dnf version 4.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit bc60af794dc54c99bc295c742bbfba03393f7e0f)
2025-09-29 18:15:21 +03:00
Lubomír Sedlář
fdea2c88d9 Update phase diagram
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 83f56da0f17d5b9e400cfcc4678d176ce4b0f5b3)
2025-09-29 18:15:20 +03:00
Adam Williamson
0483f914c4 Don't block main image build phase on ostree_install phase
I did a time map of a Fedora compose today, and noticed that we
spend about an hour waiting for the ostree_install phase to
complete before we start up the compose_images_phase which does
all the other image builds.

This is unnecessary. Nothing else depends on ostree_install; it
should be fine to start up the extra_phase (which contains
compose_images_phase) while the ostree stuff is still running.

This implements that by splitting the ostree phases out of the
essentials_phase which contains the real precursors to the
extra_phase. We start the essentials and ostree phases together,
but only wait for the essentials phase to complete before
kicking off extra_phase, so it can start while the ostree
phase is still running.

One tweak we have to make to accommodate this is to move
image_checksum_phase out of extra_phase, to avoid it potentially
running before all ostree installer images are built. The
checksum phase is quite fast - it takes about five minutes -
and any time benefit of running it in parallel with the osbs and
repoclosure phases seems like it must be smaller than the time
loss of waiting for ostree_install before kicking off extra.

Merges: https://pagure.io/pungi/pull-request/1790
Signed-off-by: Adam Williamson <awilliam@redhat.com>
(cherry picked from commit 18bda22fcb842c00a606e5f357aeb9f3d02aa626)
2025-09-29 18:15:20 +03:00
Adam Williamson
a24c6d52ce ostree_container: make filename configurable, include arch
The default base name is probably fine in most cases, but there
are some where we might want to tweak it. We already allow this
for other phases (e.g. the livemedia phase).

Also, we should include the arch in the image filename. Not doing
this doesn't blow up the compose as, while they have identical
filenames, the images for different arches are in different paths,
but it's confusing for people who actually download and use the
images.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
(cherry picked from commit aea8da5225aeb31b4c5dd413f0a31b6ab395a9ac)
2025-09-29 18:15:19 +03:00
Adam Williamson
a0a155ebcd Correct subvariant handling for ostree_container phase
The image metadata construction code allows for subvariant to be
set in the image config dict, but checks.py doesn't expect it;
fix that. Also, when a subvariant is set, use it in the image
name template rather than the variant; otherwise you can't
build more than one subvariant in any variant (they will have
identical names, which isn't allowed).

Signed-off-by: Adam Williamson <awilliam@redhat.com>
(cherry picked from commit 391a5eaed5198e5ee2941dac4ae43a2fe057eedd)
2025-09-29 18:15:19 +03:00
Adam Williamson
059995a200 image_build: drop .tar.gz as an expected extension for docker
Koji's image-build command has not been capable of producing a
docker image with .tar.gz as its extension since 2015:

https://pagure.io/koji/c/b489f282bee7a008108534404dd2e78efb2256e7?branch=master

as that commit message implies, the files have not actually been
gzip-compressed for even longer:

https://pagure.io/koji/c/82a405c7943192e3bba3340efe7a8d07a0e26b70?branch=master

so there's no point to having this any more. It is causing the
wrong productmd 'type' to be set for GCE cloud images, which *do*
have the .tar.gz extension - because docker appears in this dict
before tar-gz, their type is being set as 'docker' not 'tar-gz'.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
(cherry picked from commit 739062ed3c471e74ba9c5144c4047f67f9fbe8c8)
2025-09-29 18:15:19 +03:00
Adam Williamson
53c273f025 move osbuild/kiwi-specific EXTENSIONS to each phase
The image-build phase's EXTENSIONS dict is meant to exactly
mirror the 'formats' that exist in the context of the command
`koji image-build`, which is driven by this phase. That nice
association was lost, however, by adding a couple of items to it
which exist for the purposes of the osbuild phase (and in the
case of .iso, also the kiwibuild phase), which import this dict
and use it for image identification.

To make the association 1:1 again and more clearly show what's
going on here, let's move those entries out into the osbuild and
kiwi phases. osbuild now has its own dict which starts out as a
copy of the image-build one before being extended. And let's
update the relevant comments.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
(cherry picked from commit 5338d3098ccd614a8fd32f837a393aed78b471bd)
2025-09-29 18:15:18 +03:00
Lubomír Sedlář
9594954287 Drop compatibility helper for dnf.Package.source_name
The bug that caused the attribute to have a wrong value was fixed in
DNF a long time ago.

Fixes: https://pagure.io/pungi/issue/1786
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 225f04cf43326c136e95356681f1461270673ca6)
2025-09-29 18:15:18 +03:00
Lubomír Sedlář
c586c0b03b kiwibuild: Allow setting metadata type explicitly
It is not possible to reliably detect what the type for an image should
be in the metadata. This commit adds an option for the user to explicitly
provide it.

It can only be configured on the specific image, not globally.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit cd2ae81e3c63316997b9617ff2e30e3148af14f2)
2025-09-29 18:15:17 +03:00
Lubomír Sedlář
6576ab9b32 kiwibuild: Fix location and metadata for ISOs
When Kiwi builds an ISO, it is always supposed to be bootable and should
be located in the iso/ subdirectory.

Any other kind of image should still land in images/ and be listed as
not bootable in the metadata.

Relates: https://pagure.io/pungi-fedora/issue/1342
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit d9d21d3cf4eaad5cc7f2959a4abdafed781bb9cf)
2025-09-29 18:15:17 +03:00
Lubomír Sedlář
d93b358959 kiwibuild: Add options for version and repo_releasever
The version follows the same rules as versioning for live media etc.
That means it's always going to be set. The precedence goes like this:

 * image specific option
 * `kiwibuild_version`
 * `global_version`
 * `release_version` or `<release_version>_<label_milestone>`.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit d351773dab7b3aa8e6de82bbe23058b6b3448dd4)
2025-09-29 18:15:17 +03:00
Lubomír Sedlář
d2fc85437b Release 4.10.1
(cherry picked from commit d14925b85c4f0e26eb4b097b6603f3dbc5d00d60)
2025-09-29 18:14:47 +03:00
Lubomír Sedlář
ca0984611b Release 4.10.0
(cherry picked from commit 79c630a8599978b3b073c9fbc17abf7df347bb40)
2025-09-29 18:14:37 +03:00
Fedora Release Engineering
4dd7ecf875 Rebuilt for https://fedoraproject.org/wiki/Fedora_43_Mass_Rebuild
(cherry picked from commit 94ede558b3ac60de921095d23ec8d6ca762087b0)
2025-09-29 18:14:25 +03:00
Miro Hrončok
2f8ce9dbca Remove one generated runtime Requires
It's required by python3-pungi.

(cherry picked from commit 73e8998491ecb630eb31877692fe03b4eceaff3e)
2025-09-29 18:14:09 +03:00
Miro Hrončok
eaaa5a6a0c Remove generated runtime dependencies from BuildRequires
Those are handled by %pyproject_buildrequires

(cherry picked from commit 4af7eaba63f34cd84ee4dbe9e908068a6d89e354)
2025-09-29 18:14:09 +03:00
Lubomír Sedlář
e164c6ed14 Release 4.9.3
Merges: https://src.fedoraproject.org/rpms/pungi/pull-request/11

(cherry picked from commit e0a9959d1f7478a0357f76d8d31c96b9d8cda895)
2025-09-29 18:10:49 +03:00
Python Maint
e33373f74c Rebuilt for Python 3.14
(cherry picked from commit 280b98bf8383ae70aa07938d08c5291f9e872d96)
2025-09-29 18:08:30 +03:00
Lubomír Sedlář
8e5c545c22 Fix tests on Python 3.14
(cherry picked from commit 40a6fe451dab89ed180b89cadd4c32e3a9328700)
2025-09-29 18:08:19 +03:00
Lubomír Sedlář
1fda6afce9 Release 4.9.2
(cherry picked from commit 2d39cf9856893d0c9e718bb9b0ebf5b7134ac01c)
2025-09-29 18:08:09 +03:00
Lubomír Sedlář
4f5ca6ad18 Release 4.9.1
(cherry picked from commit 5639a4d5deb28dd98a13b8ce6ddf2264a73d82c4)
2025-09-29 18:07:59 +03:00
Lubomír Sedlář
afa2617a73 New release 4.9.0
(cherry picked from commit 5fbdefc5fc784a2ee632dc093c587e490c33593c)
2025-09-29 18:07:47 +03:00
Adam Williamson
e9b29c87d5 Backport PR #1812 to fix crash on subprocess unicode decode error
(cherry picked from commit 47f155d57038db0013eeff61b17380bc945b263a)
2025-09-29 18:07:35 +03:00
Adam Williamson
4137092e7f Backport PR #1810 to use new container types
(cherry picked from commit ddef475081f9efea390c60c2666d9e22536d70c1)
2025-09-29 18:07:16 +03:00
Lubomír Sedlář
80e22467e7 New upstream release 4.8.0
(cherry picked from commit 85b8b74f54cc6e1afd13c9608ad791d6ad103b61)
2025-09-29 18:06:51 +03:00
Adam Williamson
1fb0c8aa16 Backport #1798 to infer types/formats for new FEX backing images
(cherry picked from commit 92d7921cf1bbf816217e2c4945dc8f6ae7881a39)
2025-09-29 18:06:23 +03:00
Adam Williamson
cc5b039197 Backport #1796 to speed up compose some more
(cherry picked from commit 130e003364be879f91c3716f9e059f319376b89c)
2025-09-29 18:06:06 +03:00
Adam Williamson
3ec9bd0413 Rebuild with no changes to bump past release used in infra tag
(cherry picked from commit 2b05735da4ecb2a1a230c47c22f0fa18200f2972)
2025-09-29 18:05:50 +03:00
Adam Williamson
560916cd83 Backport patches for ostree_container, ostree compose speedup
PR #1789 improved various aspects of the ostree_container phase
regarding subvariant handling and filenames, this is mainly to
help us with how we want to handle bootc images (currently in
the IoT compose, but the generic base bootc image may move to
the Fedora compose).

PR #1790 rejigs the compose phase handling so the main image
build phase is not unnecessarily blocked on the ostree_install
phase. This should cut 60-90 minutes out of the main Fedora
compose time.

(cherry picked from commit 13884fef2c199b613442af03c24b737a7e3cb057)
2025-09-29 18:05:38 +03:00
Adam Williamson
2495771f59 Backport patches to fix GCE image format not to be 'docker'
(cherry picked from commit 80ddc0cf015d582ca21376c3ba97b38ba813f5e2)
2025-09-29 18:05:16 +03:00
Lubomír Sedlář
b3b4b894c7 Commit forgotten patch
(cherry picked from commit d672d5b724891363bf4f793e21e1993c6be07153)
2025-09-29 18:04:46 +03:00
Lubomír Sedlář
dac4df2438 Backport patch for setting kiwibuild image type in metadata
(cherry picked from commit 729586d0ed2cb9103ee89fca589208c8d3381861)
2025-09-29 18:04:46 +03:00
Lubomír Sedlář
8334b2f027 Backport upstream PR 1780
(cherry picked from commit 94c3195e7398d8f75720216676a87509e2177fc2)
2025-09-29 18:03:25 +03:00
e9ed4402e6 Merge pull request 'Add riscv64 to supported architectures' (#16) from add_riscv64 into master
Reviewed-on: #16
Reviewed-by: Stepan Oksanichenko <soksanichenko@noreply.localhost>
2025-09-02 15:03:06 +00:00
2ac29cf0d6 Release is bumped 2025-09-01 16:22:31 +00:00
9c1dfb3cbc changelog 2025-09-01 12:46:54 +00:00
d49e8278ea use noarch with riscv64 2025-09-01 12:04:36 +00:00
1856763163 Add riscv64 to supported architectures 2025-09-01 11:43:48 +00:00
e17a6d7f42
- Changelog date order
- Typo
2024-10-09 12:48:48 +03:00
5152dfa764
- Add x86_64_v2 to a list of exclusive arches if there is any arch with base x86_64
- Changelog
- Bumped version
2024-09-27 15:43:27 +03:00
b61614969d - Add x86_64_v2 to arch list if x86_64 in list 2024-09-16 14:59:03 +03:00
38cc2f79a0
- Unittests are fixed 2024-09-08 12:01:32 +03:00
d8b7f9210e
- Typo 2024-09-08 11:47:45 +03:00
69ec4df8f0
- Release is fixed 2024-09-06 22:30:35 +03:00
20841cfd4c
- Changelog
- Release is bumped
2024-09-06 22:29:55 +03:00
cb53de3c46
- Truncate a volume ID to 32 bytes
- Add new architecture `x86_64_v2`
2024-09-06 22:28:38 +03:00
72635cf5c1
- Release is bumped 2024-09-06 15:06:55 +03:00
9ce519426d
- Typo 2024-09-06 15:06:35 +03:00
208c71c194
- Typo 2024-09-05 17:36:42 +03:00
71c4e3c178
- Use xorriso as recommended package and genisoimage as required for RHEL8/9 and vice versa for RHEL10 2024-09-05 17:28:11 +03:00
1308986569
- New release of AL version of Pungi 2024-08-30 13:42:27 +03:00
Lubomír Sedlář
e05a11f99a
Release 4.7.0
JIRA: RHELCMP-13991
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit a8dbd77f7f)
2024-08-30 13:40:54 +03:00
Lubomír Sedlář
cb9dede604
kiwibuild: Add support for type, type attr and bundle format
This is very basic support. Whatever users specify in the new option
will be passed to the koji task.

Related: https://bugzilla.redhat.com/show_bug.cgi?id=2270197
Related: https://pagure.io/koji/pull-request/4157
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit e43cf68f08)
2024-08-30 13:40:50 +03:00
Lubomír Sedlář
ce2c222dc2
createiso: Block reuse if unsigned packages are allowed
We can have a compose with unsigned packages.

By the time the next compose is generated, the packages could have been
signed. However, the new compose would still reuse the ISO with unsigned
copies.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit d546a49299)
2024-08-30 13:40:49 +03:00
Lubomír Sedlář
be4fd75a7a
Allow live_images phase to still be skipped
Without this fix existing configurations break even though they don't
use the phase.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit c59f2371a3)
2024-08-30 13:40:48 +03:00
Lubomír Sedlář
33bb0ceceb
createiso: Recompute .treeinfo checksums for images
Running xorriso to modify an ISO image can update the content of included
images such as images/eltorito.img, unless we explicitly update the
image, which is undesirable (https://pagure.io/pungi/issue/1647).

However, when the file is changed, the checksum changes and .treeinfo no
longer matches.

This patch implements a workaround: once the DVD is written, it looks
for incorrect checksums, recalculates them and updates the .treeinfo on
the DVD. Since only the checksum is changing and the size of the file
remains the same, this seems to help fix the issue.

An additional step for implanting MD5 is needed again, as that gets
erased by the workaround.

JIRA: RHELCMP-13664
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit 3b2c6ae72a)
2024-08-30 13:40:47 +03:00
Lubomír Sedlář
aef48c0ab4
Drop support for signing rpm-wrapped artifacts
This was only usable in the live_images phase, which doesn't exist anymore,
and wasn't used much in the first place.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 0726a4dca7)
2024-08-30 13:40:15 +03:00
Adam Williamson
bd91ef1d10
Remove live_images.py (LiveImagesPhase)
This phase was used to create live images with livecd-creator
and 32-bit ARM images with appliance-creator. We also remove
get_create_image_cmd from the Koji wrapper as it was only used
for this phase, remove associated tests, and remove related
configuration settings and documentation.

Fixes: https://pagure.io/pungi/issue/1753
Merges: https://pagure.io/pungi/pull-request/1774
Signed-off-by: Adam Williamson <awilliam@redhat.com>

(cherry picked from commit 531f0ef389)
2024-08-30 13:40:14 +03:00
Lubomír Sedlář
32d5d32a6e
Clean up requirements
* dict.sorted and funcsigs are not used anywhere anymore
* urlgrabber is used only in the yum based gather.py module, and thus
  only needed on Python 2
* py3 doesn't need to reinstall mock as that is part of stdlib now

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit c96b5358ba)
2024-08-30 13:40:02 +03:00
Haibo Lin
5bcb3f5ac1
Release 4.6.3
JIRA: RHELCMP-13724

Signed-off-by: Haibo Lin <hlin@redhat.com>

(cherry picked from commit 0cb18bfa24)
2024-08-30 13:39:59 +03:00
Lubomír Sedlář
78bfbef206
Fix formatting of long line
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit f72adc03b1)
2024-08-30 13:39:51 +03:00
Lubomír Sedlář
88b6d8ebf5
unified-isos: Resolve symlinks
If the compose is configured to use symlinks for packages, the unified
ISO would include the symlinks, which is useless.

Instead, let's check and replace any symlinks pointing outside of the
compose with the actual file.

JIRA: RHELCMP-13802
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 8ced384540)
2024-08-30 13:39:50 +03:00
Lubomír Sedlář
6223baa2ba
gather: Skip lookaside packages from local lookaside repo
When variant X depends on variant A, Pungi creates a temporary local
lookaside with packages from A. If there's an external lookaside
configured, the list of packages for variant A can contain URLs to the
external repo.

Newer versions of createrepo fail when pkglist specifies an unreachable
package, since createrepo does not download anything itself.

JIRA: RHELCMP-13648
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 4a5106375e)
2024-08-30 13:39:49 +03:00
Haibo Lin
9d6226b436
pkgset: Avoid adding modules to unavailable arches
If a module is not built for specific arches, pungi will skip adding it
to these arches in pkgset phase.

JIRA: RHELCMP-13625
Signed-off-by: Haibo Lin <hlin@redhat.com>
(cherry picked from commit 627b72597e)
2024-08-30 13:39:48 +03:00
Lubomír Sedlář
927a0d35ab
iso: Extract volume id with xorriso if available
Pungi can use either genisoimage or xorriso to create ISOs.

It also needed the isoinfo utility for querying the volume ID from the
ISO image. However, that utility is part of the genisoimage suite of
tools.

On systems that no longer provide genisoimage, the image would be
successfully generated with xorriso, but then pungi would fail to
extract the volume ID, leading to metadata with missing values.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit bc0334cc09)
2024-08-30 13:39:47 +03:00
Adam Williamson
d81ee0f553
De-duplicate log messages for ostree and ostree_container phases
The ostree and ostree_container phases both log messages in the
exact same form, which is rather confusing. This will make it
much clearer which message comes from which phase.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
(cherry picked from commit 5c9e79f535)
2024-08-30 13:39:46 +03:00
Lubomír Sedlář
e601345a38
Handle tracebacks as str or bytes
Kobo 0.36.0 changed how tracebacks are handled. Instead of `bytes`, it
returns a `str`. That makes pungi fail to write it into a file opened as
binary.

Relates: https://github.com/release-engineering/kobo/pull/246
Fixes: https://pagure.io/pungi/issue/1756
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 29c166ab99)
2024-08-30 13:39:45 +03:00
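A sketch of the normalization this implies (function name is illustrative):

    def dump_traceback(path, data):
        # kobo >= 0.36.0 hands back str, older versions bytes; the log
        # file is opened in binary mode, so normalize first.
        if isinstance(data, str):
            data = data.encode("utf-8", "replace")
        with open(path, "wb") as f:
            f.write(data)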
Adam Williamson
1fe075e7e4
ostree/container: add missing --version arg
https://pagure.io/pungi/pull-request/1726 tries to use
`self.args.version`, but the `pungi-make-ostree container`
subcommand does not actually have a `--version` arg, so that is
not going to work. This adds the required arg.

We *could* make it optional by still setting an empty update
dict if it's not specified, I guess, but not sure if that's worth
the effort.

Fixes: https://pagure.io/pungi/issue/1751

Signed-off-by: Adam Williamson <awilliam@redhat.com>
(cherry picked from commit 51d58322f2)
2024-08-30 13:39:44 +03:00
Lubomír Sedlář
a8fc1b183b
Block pkgset reuse on module defaults change
JIRA: RHELCMP-13463
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 0ef1c102b8)
2024-08-30 13:39:43 +03:00
Adam Williamson
8f171b81a1
Include task ID in DONE message for OSBS phase
Again, composetracker expects the message in this format.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
(cherry picked from commit b6cfd8c5d4)
2024-08-30 13:39:41 +03:00
Adam Williamson
ee8a56e64d
Various phases: consistent format of failure message
composetracker expects the failure message to be in a specific
form, but some phases weren't using it. They were phrasing it
slightly differently, which throws off composetracker's parsing.
We could extend composetracker to handle both forms, but it seems
simpler to just make all the phases use a consistent form.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
(cherry picked from commit 9f8377abab)
2024-08-30 13:39:40 +03:00
Lubomír Sedlář
2bf6c216bc
Update tests to exercise kiwi specific metadata
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 949add0dac)
2024-08-30 13:39:39 +03:00
Adam Williamson
99a6dfe8ad
Kiwi: translate virtualbox and azure productmd formats
As discussed in
https://pagure.io/releng/failed-composes/issue/6047#comment-899622
the list of 'acceptable' types and formats (in productmd terms)
is locked down in productmd, we cannot just 'declare' new formats
in pungi as we kinda wound up doing by adding these Kiwi
extensions to the EXTENSIONS dict in image_build phase. So
instead, let's return the image_build phase to the way it was,
and add an additional layer of handling in kiwibuild phase for
these awkward cases, which 'translates' the file suffix to a
format productmd knows about already. This is actually how we
would rather behave anyway, because a Kiwi-produced
`vagrant.libvirt.box` file really is the same kind of thing as an
ImageFactory-produced `vagrant-libvirt.box` file; we want them to
have compatible metadata, we don't want them to look like
different things.

Merges: https://pagure.io/pungi/pull-request/1740
Signed-off-by: Adam Williamson <awilliam@redhat.com>
(cherry picked from commit 8fb694f000)
2024-08-30 13:39:37 +03:00
Lubomír Sedlář
c63f9f41b6
kiwibuild: Add tests for the basic functionality
Merges: https://pagure.io/pungi/pull-request/1739
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 8a3b64e5b8)
2024-08-30 13:39:36 +03:00
Lubomír Sedlář
ab1960de6d
kiwibuild: Remove repos as dicts
The task needs just URLs; the dicts don't bring anything here.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit c80ebb029b)
2024-08-30 13:39:35 +03:00
Lubomír Sedlář
c17b820490
Fix additional image metadata
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit e2ceb48450)
2024-08-30 13:39:33 +03:00
Lubomír Sedlář
36133b71da
Drop kiwibuild_version option
The version in kiwibuild is embedded in the definition file, so the
option makes no sense.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 242d7d951f)
2024-08-30 13:39:32 +03:00
Lubomír Sedlář
50b217145c
Update docs with kiwibuild options
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 04d4e1d585)
2024-08-30 13:39:31 +03:00
Adam Williamson
57f2b428d5
kiwibuild: allow setting description scm and path at phase level
Neal wanted this to work - he tried using global_description_scm
and global_description_path in the initial PR - but it wasn't
wired up to work. This should make it possible to set
`kiwibuild_description_scm` and `kiwibuild_description_path`.
It also technically lets you set `global_` for both, since the
`get_config` implementation is very generic, but it doesn't add
it to the checks, so you'd still get an "unrecognized config
option" warning, I think. It seems appropriate to encourage
setting this as a phase-level option rather than a global one
since it seems quite specific to the kiwibuild phase.

Merges: https://pagure.io/pungi/pull-request/1737
Signed-off-by: Adam Williamson <awilliam@redhat.com>
(cherry picked from commit e90ffdfd93)
2024-08-30 13:39:29 +03:00
Lubomír Sedlář
3cdc8d0ba7
Use latest Fedora for python 3 test environment
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 0d310fb3b3)
2024-08-30 13:39:28 +03:00
Lubomír Sedlář
07829f2229
Install unittest2 only on python 2
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 5172d7e5eb)
2024-08-30 13:39:26 +03:00
Adam Williamson
bdf06ea038
Fix 'failable' handling for kiwibuild phase
The mechanisms here are a bit subtle and the kiwibuild phase
didn't quite get them right. The arg passed to `util.failable`
is supposed to be a boolean, but kiwibuild was passing it the
list of failable arches (which will always evaluate True).

How this is meant to work is that we only make *the Koji task
as a whole* failable (by passing `True` to `util.failable`) if
*all* the arches in it are failable. If *any* arch in the task
is not failable, the task should not be failable.

We allow a subset of arches to fail by passing the Koji task a
list of `optional_arches`, later. If an arch is 'optional', that
arch failing won't cause the Koji task itself to be considered
failed.

This commit fixes the logic (I hope), renames all the variables
and adds a couple of comments to make it clearer what's going on,
and does a bit of making the code simpler.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
(cherry picked from commit 0d306d4964)
2024-08-30 13:39:25 +03:00
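The corrected logic in miniature; a hypothetical helper, not the actual code:

    def failable_args(arches, failable_arches):
        # The Koji task as a whole may fail only if *every* arch in it
        # may fail; individually failable arches are passed to Koji
        # separately as optional_arches.
        can_fail = set(arches) <= set(failable_arches)
        optional_arches = [a for a in arches if a in failable_arches]
        return can_fail, optional_arches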
Jeremy Cline
bcab3431e1
image_build: Accept Kiwi extension for Azure VHD images
Kiwi builds for Azure fixed VHD images are suffixed with "vhdfixed"
instead of plain "vhd". Add that to the list of suffixes.

Signed-off-by: Jeremy Cline <jeremycline@microsoft.com>
(cherry picked from commit 1494f203ce)
2024-08-30 13:39:24 +03:00
Adam Williamson
b181b08033
image_build: accept Kiwi vagrant image name format
According to Neal, Vagrant images produced by Kiwi end in e.g.
`vagrant.libvirt.box` and `vagrant.virtualbox.box` - with a
period between `vagrant` and the image type, not a dash as with
oz. We should accept this slightly different format so we can
correctly derive the productmd `type` and `format` for these.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
(cherry picked from commit 93b4b4ae0f)
2024-08-30 13:39:23 +03:00
Lubomír Sedlář
e05b1bcd78
Release 4.6.2
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit b8e26bfb64)
2024-08-30 13:39:22 +03:00
Tomáš Hozza
a97488721d
Phases/osbuild: support passing 'customizations' for image builds
The osbuild Koji plugin supports passing customizations for an image
build. This is also supported in the Koji CLI plugin. Some teams want to
pass image customizations for images built as part of Pungi composes.
Extend the osbuild phase to support passing customizations in the Pungi
configuration.

Merges: https://pagure.io/pungi/pull-request/1733
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
(cherry picked from commit e738f65458)
2024-08-30 13:39:16 +03:00
Lubomír Sedlář
4d858ef958
dnf: Load filelists for actual solver too
Not just in tests.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 209d308e1c)
2024-08-30 13:39:14 +03:00
Lubomír Sedlář
744b00499d
kiwibuild: Tell Koji which arches are allowed to fail
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit be410d9fd5)
2024-08-30 13:39:13 +03:00
Lubomír Sedlář
583547c6ee
kiwibuild: Update documentation with more details
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 1f819ee08a)
2024-08-30 13:39:11 +03:00
Lubomír Sedlář
f28053eecc
kiwibuild: Add kiwibuild global options
This is already supported by code, just missing in the schema.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit b9d94970b5)
2024-08-30 13:39:10 +03:00
Lubomír Sedlář
a196e9c895
kiwibuild: Process images same as image-build
Getting the images from the task is less hacky than matching on filenames.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit b032425f30)
2024-08-30 13:39:08 +03:00
Lubomír Sedlář
a6f6199910
kiwibuild: Add subvariant configuration
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit bcd937d16d)
2024-08-30 13:39:07 +03:00
Lubomír Sedlář
a3dcec5059
kiwibuild: Work around missing arch in build data
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit f0137fd9b9)
2024-08-30 13:39:05 +03:00
Haibo Lin
6aa674fbb3
Support KiwiBuild
Adding kiwibuild phase which is similar to osbuild.

Fixes: https://pagure.io/pungi/issue/1710
Merges: https://pagure.io/pungi/pull-request/1720
JIRA: RHELCMP-13348
Signed-off-by: Haibo Lin <hlin@redhat.com>
(cherry picked from commit 3d630d3e8e)
2024-08-30 13:39:04 +03:00
Timothée Ravier
05d9651eba
ostree/container: Set version in treefile 'automatic-version-prefix'
In the non-container path, we're setting the version for the build using
the `--add-metadata-string=version=XYZ` argument passed to `rpm-ostree
compose tree ...`.

The `rpm-ostree compose image` path does not expose this option yet, so
modify the treefile directly, as we already do to set the repos used
for the compose.

See: https://github.com/coreos/rpm-ostree/issues/4829
See: https://pagure.io/workstation-ostree-config/pull-request/472
Merges: https://pagure.io/pungi/pull-request/1726
Signed-off-by: Timothée Ravier <tim@siosm.fr>
(cherry picked from commit 8412890640)
2024-08-30 13:39:02 +03:00
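A sketch of that treefile edit, assuming a JSON treefile (the helper name is illustrative):

    import json

    def set_treefile_version(path, version):
        # `rpm-ostree compose image` has no --add-metadata-string
        # equivalent yet, so write the version into the treefile's
        # automatic-version-prefix key instead.
        with open(path) as f:
            treefile = json.load(f)
        treefile["automatic-version-prefix"] = version
        with open(path, "w") as f:
            json.dump(treefile, f)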
Lubomír Sedlář
75ab6a14b2
dnf: Explicitly load filelists
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2264414
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 42befba0b1)
2024-08-30 13:39:01 +03:00
Lubomír Sedlář
533ea641d8
Fix buildinstall reuse with pungi_buildinstall plugin
The keys may not exist anymore. If there's nothing to delete, it's fine.

JIRA: RHELCMP-13464
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 52c2cea0ef)
2024-08-30 13:38:59 +03:00
Lubomír Sedlář
185a53d56b
Fix filters for DNF query
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit d2e9ccefde)
2024-08-30 13:38:58 +03:00
Lubomír Sedlář
305deab9ed
gather-dnf: Support dotarch in filter_packages
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 2c61416423)
2024-08-30 13:38:57 +03:00
Lubomír Sedlář
6af11d5747
gather: Support dotarch notation for debuginfo packages
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 986099f8b5)
2024-08-30 13:38:56 +03:00
Lubomír Sedlář
58f96531c7
Correctly set input and fulltree_exclude flags for debuginfo
This only matters for composes that use the functionality for trimming
addon packages from parent variants.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 947ddf0a1a)
2024-08-30 13:38:55 +03:00
Lubomír Sedlář
e570aa7726
4.6.1 release
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit e46393263e)
2024-08-30 13:38:53 +03:00
Lubomír Sedlář
d8a553163f
Make python3-mock dependency optional
https://fedoraproject.org/wiki/Changes/RemovePythonMockUsage

Prefer using unittest.mock to a standalone package. The separate
packages should only really be needed on Python 2.7 these days.

The test requirements file is updated to only require mock on old
Python, and the dependency is removed from setup.py to avoid issues
there.

Relates: https://src.fedoraproject.org/rpms/pungi/pull-request/9

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit ff5a7e6377)
2024-08-30 13:38:43 +03:00
Lubomír Sedlář
a9839d8078
Make latest black happy
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit dd7ecbd5fd)
2024-08-30 13:31:29 +03:00
Lubomír Sedlář
dc05d1fbba
Update tox configuration
The whitelist_externals option has been renamed to allowlist_externals.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit ba613563f6)
2024-08-30 13:31:28 +03:00
Lubomír Sedlář
dc4e8b2fb7
Fix scm tests to not use user configuration
If you configure the default branch name for new repos to anything other than
master, the tests will fail. The test expects the branch to
be called master, but does not ensure it in any way.
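One way to make the test independent of user configuration is to name the branch explicitly (a sketch; `-b` needs git >= 2.28):

```
import subprocess
import tempfile

repo_dir = tempfile.mkdtemp()
# Create the fixture repository with an explicit branch name instead of
# relying on the user's init.defaultBranch setting.
subprocess.check_call(["git", "init", "-b", "master", repo_dir])
```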

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit c8d16e6978)
2024-08-30 13:31:26 +03:00
Lubomír Sedlář
27d055992e
Add workaround for old requests in kojiwrapper
When running with requests<2.18 (i.e. on RHEL 7), a streaming response
is not a context manager and needs to be wrapped in contextlib.closing.
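A minimal sketch of the compatible pattern:

```
import contextlib
import requests

def download(url, dest):
    # On requests < 2.18 a streaming Response is not a context manager,
    # so contextlib.closing ensures the connection is released either way.
    with contextlib.closing(requests.get(url, stream=True)) as response:
        response.raise_for_status()
        with open(dest, "wb") as f:
            for chunk in response.iter_content(chunk_size=8192):
                f.write(chunk)
```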

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 860360629d)
2024-08-30 13:31:25 +03:00
Lubomír Sedlář
34fcd550b6
Use pungi_buildinstall without NFS
The plugin supports two modes of operation:
1. Mount a shared storage volume into the runroot and have the output
   written there.
2. Have the plugin create a tar.gz with the outputs and upload them to
   the hub, from where they can be downloaded.

This patch switches from option 1 to option 2.

This requires all input repositories to be passed in as URLs and not
paths. Once the task finishes, Pungi will download the output archives
and unpack them into the expected locations.

JIRA: RHELCMP-13284
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit f25489d060)
2024-08-30 13:31:24 +03:00
Adam Williamson
4c0059e91b
checks: don't require "repo" in the "ostree" schema
Per @siosm in https://pagure.io/pungi-fedora/pull-request/1227
this option "is deprecated and not needed anymore", so Pungi
should not be requiring it.

Merges: https://pagure.io/pungi/pull-request/1714
Signed-off-by: Adam Williamson <awilliam@redhat.com>
(cherry picked from commit 432b0bce04)
2024-08-30 13:31:23 +03:00
Lubomír Sedlář
bb2e32132e
ostree_container: Use unique temporary directory
The config repository is cloned into a path that conflicts with the
regular ostree phase. Let's use a unique name to avoid that problem.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 7e779aa90f)
2024-08-30 13:31:22 +03:00
Lubomír Sedlář
dca3be5861
4.6.0 release
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit f4bf0739aa)
2024-08-30 13:31:20 +03:00
Lubomír Sedlář
38ec4ca159
Add ostree container to image metadata
This requires https://github.com/release-engineering/productmd/pull/172

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 119b212241)
2024-08-30 13:30:44 +03:00
Lubomír Sedlář
c589ccb56f
Updates for ostree-container phase
This patch connects the phase into the main script, and adds other
modifications:

* The archive is now stored in the images/ subdirectory in the compose.
* Documentation is updated to correctly mention that variant repos are
  not available.
* Configuration for path and name of the final archive is dropped. There
  are reasonable defaults for this and there's no point in having users
  configure it.
* The extra message for the archive is no longer sent.
* The pungi-make-ostree utility is no longer required in the buildroot.

The pungi-make-ostree utility doesn't do any significant work. It
modifies configuration files (which can happen on the compose host), and
it starts other processes.

This patch changes the ostree-container phase to no longer need the
script in the buildroot. Instead, the utility is called on the compose
host to do the config manipulation and output the needed commands. Those
are then passed into the runroot task.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 081c31238b)
2024-08-30 13:30:42 +03:00
Timothée Ravier
e413955849
Add ostree native container support
Add a new `ostree_container` stage to create ostree native container
images as OCI archives, using rpm-ostree compose image.

See: https://fedoraproject.org/wiki/Changes/OstreeNativeContainerStable
See: https://gitlab.com/CentOS/cloud/issue-tracker/-/issues/1

Fixes: https://pagure.io/pungi/issue/1698
Merges: https://pagure.io/pungi/pull-request/1699

Signed-off-by: Timothée Ravier <tim@siosm.fr>
(cherry picked from commit 95497d2676)
2024-08-30 13:30:41 +03:00
Adam Williamson
e70e1841c7
Improve autodetection of productmd image type for osbuild images
I don't love inferring the type from the filename like this -
it's kinda backwards - but it's an improvement on the current
logic (I don't think 'dvd' is ever currently the correct value
here, I don't think osbuild *can* currently build the type of
image that 'dvd' is meant to indicate). I can't immediately see
any better source of data here (we could use the 'name' or
'package_name' from 'build_info', but those are pretty much
just inputs to the filenames anyway).

Types that are possible in productmd but not covered here are
'cd' (never likely to be used again in Fedora at least, not sure
about RHEL), 'dvd-debuginfo' (again not used in Fedora, may be
used in RHEL), 'ec2', 'kvm' (not sure about those), 'netinst'
(this is a synonym for 'boot', we use 'boot' in practice in
Fedora metadata), 'p2v' and 'rescue' (not sure).

Signed-off-by: Adam Williamson <awilliam@redhat.com>
(cherry picked from commit aa7fcc1c20)
2024-08-30 13:30:40 +03:00
Lubomír Sedlář
fc86e03e44
pkgset: ignore events for modular content tags
Generally we want all packages to come from particular event.

There are two exceptions: packages configured via `pkgset_koji_builds`
are pulled in by exact NVR and skip event; and modules in
`pkgset_koji_modules` are pulled in by NSVC and also ignore events.

However, the modular content tag did honor event, and could lead to a
crashed compose if the content tag did not exist at the configured
event.

This patch is a slightly too big hammer. It ignores events for all
modules, not just ones configured by explicit NSVC. It's not a huge deal
as the content tags are created before the corresponding module build is
created, and once all rpm builds are tagged into the content tag, MBS
will never change it again.

JIRA: RHELCMP-12765
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit b32c8f3e5e)
2024-08-30 13:30:38 +03:00
Lubomír Sedlář
548441644b
pkgset: Ignore duplicated module builds
If the module tag contains the same module build multiple times (because
it's in multiple tags in the inheritance), Pungi will not process that
correctly and try to include the same NSVC in the compose multiple
times. That leads to a crash.

This patch adds another step to the inheritance filter to ensure the
result contains each module only once.
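A minimal sketch of such a deduplication step, assuming each build is identified by its name/stream/version/context (NSVC):

```
def deduplicate_module_builds(builds):
    # Keep only the first occurrence of each NSVC, preserving order.
    seen = set()
    unique = []
    for build in builds:
        nsvc = (build["name"], build["stream"], build["version"], build["context"])
        if nsvc not in seen:
            seen.add(nsvc)
            unique.append(build)
    return unique
```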

JIRA: RHELCMP-12768
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 935da7c246)
2024-08-30 13:30:36 +03:00
Aditya Bisoi
ca369df0df
Drop buildinstall method
JIRA: RHELCMP-12388

Signed-off-by: Aditya Bisoi <abisoi@redhat.com>
(cherry picked from commit b513c8cd00)
2024-08-30 13:30:35 +03:00
Lingyan Zhuang
67ae4202c4
Add step to send UMB message
If reuse old ISO finished, send out UMB message.

Signed-off-by: Lingyan Zhuang <lzhuang@redhat.com>
(cherry picked from commit 8cf1d98312)
2024-08-30 13:30:33 +03:00
Timothée Ravier
aba5a7a093
Fix minor Ruff/flake8 warnings
```
pungi/checks.py:575:17: F601 [*] Dictionary key literal `"type"` repeated
pungi/phases/pkgset/pkgsets.py:617:12: E721 Do not compare types, use `isinstance()`
tests/test_pkgset_source_koji.py:241:16: E721 Do not compare types, use `isinstance()`
tests/test_pkgset_source_koji.py:244:16: E721 Do not compare types, use `isinstance()`
tests/test_pkgset_source_koji.py:370:16: E721 Do not compare types, use `isinstance()`
tests/test_pkgset_source_koji.py:374:20: E721 Do not compare types, use `isinstance()`
```

Signed-off-by: Timothée Ravier <tim@siosm.fr>
(cherry picked from commit 2534ddee99)
2024-08-30 13:30:32 +03:00
Simon de Vlieger
323d1c1eb6
osbuild: manifest type in config
Allow the manifest type used to be specified in the pungi configuration
instead of always selecting the manifest type based on the koji output.

Signed-off-by: Simon de Vlieger <cmdr@supakeen.com>
(cherry picked from commit f30a8b4d15)
2024-08-30 13:30:31 +03:00
Lubomír Sedlář
b0964ff555
4.5.1 release
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit 3ffb991bac)
2024-08-30 13:30:30 +03:00
Ozan Unsal
79bc4e0c3a
gather_dnf.py: Do not raise an error when the downloaded package already exists.
If packages are pulled from different repos and a package already
exists in the target directory, pungi raises a File exists error and breaks. This
behavior can be skipped if the package is already available.
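A sketch of the tolerant behavior, assuming packages are hardlinked into the target directory:

```
import errno
import os

def link_package(src, dst):
    # If another repo already provided the same package, os.link raises
    # EEXIST; treat that as success instead of breaking the compose.
    try:
        os.link(src, dst)
    except OSError as exc:
        if exc.errno != errno.EEXIST:
            raise
```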

Merges: https://pagure.io/pungi/pull-request/1696
Signed-off-by: Ozan Unsal <ounsal@redhat.com>
(cherry picked from commit dbc0e531b2)
2024-08-30 13:30:05 +03:00
Lubomír Sedlář
8772ccca23
New upstream release 4.7.0
(cherry picked from commit e0600a2abac9e0e9b8a3b15b51eb44e3cd467bd3)
2024-08-30 13:29:32 +03:00
Fedora Release Engineering
3bb34225a9
Rebuilt for https://fedoraproject.org/wiki/Fedora_41_Mass_Rebuild
(cherry picked from commit 192a8ef731fbc134bf5337dfb3d60ba6c5ad7bd5)
2024-08-30 13:29:32 +03:00
Haibo Lin
daea6cabdf
Upstream release 4.6.3
(cherry picked from commit 9a24cfff1bfccbafde32a4a34805d9d0aeff5650)
2024-08-30 13:29:30 +03:00
Python Maint
35b720e87a
Rebuilt for Python 3.13
(cherry picked from commit 639bb6214433a96a6275817baf893ab4850a3309)
2024-08-30 13:29:29 +03:00
Lubomír Sedlář
5a6ee9f8eb
Bump release over f40-infra build
(cherry picked from commit 1ad8b6fa2edeb91316dd1d1e33a9c234800e28d9)
2024-08-30 13:29:28 +03:00
Lubomír Sedlář
9a64db0485
Require xorriso for bug#2278677
(cherry picked from commit 22214e03b888c9b5f85919815f2825ad176c5370)
2024-08-30 13:29:27 +03:00
Lubomír Sedlář
de7210f69a
Upstream release 4.6.2
(cherry picked from commit f24f577c89647dc80a84bfa76f3055d24ced55a5)
2024-08-30 13:29:05 +03:00
Lubomír Sedlář
24418ef74d
New upstream release 4.6.1
(cherry picked from commit 98b4f26e0972a2bea2d46f2c74c1db94ed087477)
2024-08-30 13:29:03 +03:00
f4765fbe3a
Remove python3-mock dependency
Merges: https://src.fedoraproject.org/rpms/pungi/pull-request/9

(cherry picked from commit 67a11d878b04bd46a0d9fb98036467bca6ffed92)
2024-08-30 13:28:01 +03:00
Fedora Release Engineering
80b9add9f7
Rebuilt for https://fedoraproject.org/wiki/Fedora_40_Mass_Rebuild
(cherry picked from commit 40fd963a495689a2a3a0279760f5a4024e7e5857)
2024-08-30 13:27:24 +03:00
Fedora Release Engineering
b241545ca6
Rebuilt for https://fedoraproject.org/wiki/Fedora_40_Mass_Rebuild
(cherry picked from commit 5cfb290545fdd5b18bb1691218e5e8e732e351e4)
2024-08-30 13:27:00 +03:00
Lubomír Sedlář
2e536228ae
Backport: Stop requiring repo option in ostree phase
(cherry picked from commit 6778cae05afb2b5784a46ed72ee2703785756dde)
2024-08-30 13:26:39 +03:00
Lubomír Sedlář
ff7950b9d1
ostree_container: Use unique temporary directory
(cherry picked from commit 58ca2a86231e53cc329e3e20294853230fabf587)
2024-08-30 13:26:38 +03:00
Lubomír Sedlář
6971624f83
New upstream release 4.6.0
(cherry picked from commit 2b47d8ea021a7b6e694c52fd8d74880f9a6b79a5)
2024-08-30 13:26:11 +03:00
Lubomír Sedlář
b7d371d1c3
Backport patch for explicit setting of osbuild image type
(cherry picked from commit c0bf9a2a78)
2024-08-30 13:25:21 +03:00
bc8c776872
- Method get_remote_file_content is now an instance method 2024-05-04 10:43:19 +03:00
91d282708e
- Method get_remote_file_content is now an instance method 2023-11-21 09:19:01 +02:00
ccaf31bc87
- Method get_remote_file_content is now an instance method 2023-11-21 08:51:05 +02:00
5fe0504265
- Spec's changelog chronology is fixed 2023-11-15 15:14:22 +02:00
d79f163685
- Bump version 2023-11-15 14:49:51 +02:00
793fb23958
- Bump version 2023-11-15 14:02:10 +02:00
65d0c09e97
- Return empty list if a repo doesn't contain any module 2023-11-15 13:17:57 +02:00
0a9e5df66c
- Properly removing tmp files 2023-11-10 21:38:01 +02:00
ae527a2e01
- The unittests are fixed 2023-11-10 18:08:03 +02:00
Aditya Bisoi
4991144a01
4.5.0 release
Signed-off-by: Aditya Bisoi <abisoi@redhat.com>

(cherry picked from commit 4c7611291d (centos_master))
2023-11-10 16:58:03 +02:00
Lubomír Sedlář
68d94ff488
kojiwrapper: Stop being smart about local access
Rather than trying to use local access whenever it's available, let the user
make the decision:

 * if koji_cache is configured use it and download stuff
 * if not, fall back to local access

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 0d3cd150bd)
2023-11-10 16:57:53 +02:00
Ozan Unsal
ce45fdc39a
Fix unittest errors
Signed-off-by: Ozan Unsal <ounsal@redhat.com>

(cherry picked from commit aa0aae3d3e (centos_master))
2023-11-10 16:57:51 +02:00
Lubomír Sedlář
b625ccea06
Add integrity checking for builds
When a real build is downloaded, Koji can provide a checksum via API.
This commit adds verification of that checksum.

A mismatch will abort the compose. If Koji doesn't provide a checksum
for the particular sigkey, no checking will happen.

Nothing is still checked for scratch builds and images.

This patch requires Koji 1.32. When talking to an older version, there
is no checking done.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 77f8fa25ad)
2023-11-10 16:55:44 +02:00
Lubomír Sedlář
8eccfc5a03
Add script for cleaning up the cache
Pungi would by default only ever add files to the cache. That would
eventually result in essentially a mirror of the Koji volume.

This patch adds a helper cleanup script. When called, it goes through
files in the cache and deletes anything that is not hardlinked from
elsewhere and whose mtime has not been updated recently.

Cleaning up files that are hardlinked from some compose would not save any
space anyway. The mtime check should account for cases like a subpackage
being downloaded but not included in any compose. This prevents it
from being downloaded over and over again.

When a compose fails or is aborted, there can be a stale lock file left
behind in the cache. This script cleans that up too.
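A simplified sketch of the cleanup logic described above (the age threshold is illustrative):

```
import os
import time

def cleanup_cache(cache_dir, max_age_days=30):
    threshold = time.time() - max_age_days * 24 * 3600
    for root, _dirs, files in os.walk(cache_dir):
        for name in files:
            path = os.path.join(root, name)
            st = os.stat(path)
            # st_nlink > 1 means some compose still hardlinks the file,
            # so deleting it would not free any space anyway.
            if st.st_nlink == 1 and st.st_mtime < threshold:
                os.remove(path)
```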

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit e6d9f31ef4 (centos_master))
2023-11-10 16:55:43 +02:00
Lubomír Sedlář
f5a0e06af5
Add ability to download images
This patch extends the ability to download files from Koji to image
building phases too.

There is no integrity checking for the downloaded images.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit bf3e9bc53a)
2023-11-10 16:55:20 +02:00
Lubomír Sedlář
f6f54b56ca
Add support for not having koji volume mounted locally
With this patch, Pungi can be configured with a local directory to be
used as a cache for RPMs, and it will download packages from Koji over
HTTP instead of reading them from filesystem directly.

The files from the cache can then be hardlinked as usual.

There is locking in place to prevent composes running at the same
time from stepping on each other.

This is now supported for RPMs only, be it real builds or scratch
builds.
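The locking can be pictured as an exclusive lock on a sidecar file next to each cached RPM (a sketch; the lock-file naming is made up):

```
import fcntl

def with_cache_lock(rpm_path, action):
    # Take an exclusive flock on "<rpm>.lock" so that two composes never
    # download the same file into the cache at the same time.
    with open(rpm_path + ".lock", "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)
        try:
            action()
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)
```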

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 631bb01d8f)
2023-11-10 16:55:19 +02:00
Aditya Bisoi
fcee346c7c
Remove repository cloning multiple times
JIRA: RHELCMP-8913
Signed-off-by: Aditya Bisoi <abisoi@redhat.com>
(cherry picked from commit b6296bdfcd)
2023-11-10 16:55:18 +02:00
Lubomír Sedlář
82ec38ad60
Support require_all_comps_packages on DNF backend
It's not a great name anymore though, because it will fail the compose
if any input package is missing, no matter whether it's from comps,
prepopulate or additional_packages.

JIRA: RHELCMP-12484
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 1c4275bbfa)
2023-11-10 16:55:17 +02:00
Lubomír Sedlář
c9cbd80569
Fix new warnings from flake8
Use isinstance rather than directly comparing types.
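For illustration:

```
value = {}
# E721: comparing types directly ignores subclasses and reads poorly:
if type(value) == dict:
    pass
# isinstance is the idiomatic check:
if isinstance(value, dict):
    pass
```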

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit fe2dad3b3c)
2023-11-10 16:55:16 +02:00
Aditya Bisoi
035fca1e6d
4.4.1 release
Signed-off-by: Aditya Bisoi <abisoi@redhat.com>

(cherry picked from commit 7128021654 (centos_master))
2023-11-10 16:55:15 +02:00
Lubomír Sedlář
0f8cae69b7
ostree: Add configuration for custom runroot packages
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit bd64894a03)
2023-11-10 16:55:01 +02:00
Lubomír Sedlář
f17628dd5f
pkgset: Emit better error for missing modulemd file
The exceptions from libmodulemd are not particularly helpful as they do
not contain information about what file caused them.

   modulemd-yaml-error-quark: Failed to open file: Permission denied (0)

This patch should add the path to the problematic file into the message.
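A sketch of the wrapping, with the loader call left abstract:

```
def load_modulemd(path, read_file):
    # read_file stands in for the libmodulemd loader; any parser error
    # is re-raised with the offending file path in the message.
    try:
        return read_file(path)
    except Exception as exc:
        raise RuntimeError("Failed to load modulemd file %s: %s" % (path, exc))
```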

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 14e025a5a1)
2023-11-10 16:55:00 +02:00
Lubomír Sedlář
f3485410ad
Add support for git-credential-helper
This patch adds an additional field `options` to scm_dict, which can be
used to provide additional information to the backends.

It implements a single new option for GitWrapper. This option allows
setting a custom git credentials wrapper. This can be useful if Pungi
needs to get files from a git repository that requires authentication.

The helper can be as simple as this (assuming the username is already
provided in the url):

    #!/bin/sh
    echo password=i-am-secret

The helper would need to be referenced by an absolute path from the
pungi configuration, or prefixed with ! to have git interpret it as a
shell script and look it up in PATH.

See https://git-scm.com/docs/gitcredentials for more details.
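A hypothetical scm_dict using the new field (the helper path is made up; the key name follows the description above):

```
scm_dict = {
    "scm": "git",
    "repo": "https://git.example.com/secret-files.git",
    "file": "extra/README",
    "options": {"credential_helper": "/usr/local/bin/pungi-git-credential"},
}
```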

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
JIRA: RHELCMP-11808
(cherry picked from commit ada8f4e346)
2023-11-10 16:54:59 +02:00
Haibo Lin
cccfaea14e
Support OIDC Client Credentials authentication to CTS
JIRA: RHELCMP-11324
Signed-off-by: Haibo Lin <hlin@redhat.com>
(cherry picked from commit e4c525ecbf)
2023-11-10 16:54:58 +02:00
Lubomír Sedlář
e2057b75c5
4.4.0 release
JIRA: RHELCMP-11764
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit 091d228219 (centos_stream))
2023-11-10 16:54:57 +02:00
Lubomír Sedlář
44ea4d4419
gather-dnf: Run latest() later
The initial version of the code filtered the latest builds at the start. That
doesn't matter in many cases:

* When there are no lookaside repos, there is generally a single version
  of each package.
* When lookaside repos do not overlap with compose repos, or contain
  only older versions.

It is however a problem when the lookaside repos contain a higher version
of a package than what is in a compose repo, and some package explicitly
requires the older version.

Consider this scenario:

* lookaside contains bar-1.1
* compose repo contains bar-1.0 and foo-1.0
* foo-1.0 `Requires: bar < 1.1`

The original code would filter out the bar-1.0 package, and then fail on
unresolved dependencies.

This patch moves the computation of latest packages much later, to part
of code where all options to satisfy a dependency are selected and the
best match is chosen. At that point if there are multiple versions
available, we do want the latest one.

JIRA: SPMM-13483
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit bcc440491e)
2023-11-10 16:54:43 +02:00
Lubomír Sedlář
d4425f7935
iso: Support joliet long names
Without this option the names reported by joliet tree are truncated.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit fa50eedfad)
2023-11-10 16:54:42 +02:00
Lubomír Sedlář
c8118527ea
Drop pungi-orchestrator code
This was never actually used.

JIRA: RHELCMP-10218
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit b7adbf8a91 (centos_master))
2023-11-10 16:54:40 +02:00
Lubomír Sedlář
a8ea322907
isos: Ensure proper file ownership and permissions
The genisoimage backend uses the -rational-rock option, which sets uid
and gid to 0, and makes files readable by everyone.

With xorriso this must be done explicitly. Setting ownership is a single
command, but the permissions require a per-file command to not make
files executable where not needed.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2203888
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit 82ae9e86d5 (centos_master))
2023-11-10 16:54:22 +02:00
Lubomír Sedlář
c4995c8f4b
gather: Always get latest packages
If lookaside contains an older version of a package, but with a
different arch, the depsolver doesn't notice that and prefers the
lookaside version.

This is not correct. The latest package should be used no matter if
there are different arches available.

The filtering in DNF doesn't ensure this, so we have to build it
ourselves. To limit the performance impact, only run this filtering when
there actually are some lookaside repos configured.

JIRA: RHELCMP-11728

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 2ad341a01c)
2023-11-10 16:54:01 +02:00
Lubomír Sedlář
997e372f25
Add back compatibility with jsonschema <3.0.0
Resolves: https://pagure.io/pungi/issue/1667
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit e888e76992 (centos_master))
2023-11-10 16:54:00 +02:00
Lubomír Sedlář
42f1c62528
Remove useless debug message
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 6e72de7efe)
2023-11-10 16:52:27 +02:00
Lubomír Sedlář
3fd29d0ee0
Remove fedmsg from requirements
The code for sending messages in Fedora actually relies on
fedora-messaging library now. However, we do not have any tests for
that, so there's little reason to pull the library in via
requirements.txt

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit c8263fcd39 (centos_master))
2023-11-10 16:52:04 +02:00
Lubomír Sedlář
c1f2fa5035
gather: Support dotarch in DNF backend
The documentation claims that dotarch syntax is supported for additional
packages. For the yum backend this seems to be handled automatically, but
the dnf backend could not interpret it.

This patch checks whether a package is specified in dotarch syntax and
ends with a valid architecture. If so, the query will honor the arch.
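A rough sketch of the check, assuming a fixed set of known arches:

```
KNOWN_ARCHES = {"x86_64", "i686", "aarch64", "ppc64le", "s390x", "noarch", "src"}

def split_dotarch(spec):
    # Split "foo.x86_64" into ("foo", "x86_64"); return (spec, None)
    # when the suffix is not a known architecture.
    name, dot, arch = spec.rpartition(".")
    if dot and arch in KNOWN_ARCHES:
        return name, arch
    return spec, None
```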

JIRA: RHELCMP-11728
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 82ca4f4e65)
2023-11-10 16:51:55 +02:00
Aurélien Bompard
85c9e9e776
Set the priority in the fedora-messaging notifier
According to [infra ticket #10899](https://pagure.io/fedora-infrastructure/issue/10899),
ostree messages should have priority 3.

Signed-off-by: Aurélien Bompard <aurelien@bompard.org>
(cherry picked from commit b8b6b46ce7)
2023-11-10 16:51:54 +02:00
Lubomír Sedlář
33012ab31e
Fix compatibility with createrepo_c 0.21.1
The length of the file entry tuple has changed, so it cannot be unpacked
reliably.
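A hedged sketch of length-tolerant unpacking, assuming only that the leading fields keep their old positions:

```
def iter_file_entries(pkg):
    for entry in pkg.files:
        # createrepo_c 0.21.1 appended new fields to the tuple, so take
        # the leading fields by slice instead of destructuring it whole.
        ftype, basedir, filename = entry[:3]
        yield ftype, basedir, filename
```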

Relates: https://github.com/rpm-software-management/createrepo_c/issues/360
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit e9d836c115)
2023-11-10 16:51:53 +02:00
Lubomír Sedlář
72ddf65e62
comps: Apply arch filtering to environment/optionlist
Let's filter this list too, not just the grouplist tag.

JIRA: RHELCMP-7926
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit d3f0701e01)
2023-11-10 16:51:52 +02:00
Haibo Lin
c402ff3d60
Add config file for cleaning up cache files
systemd-tmpfiles is required to enable the automatic cleanup.

JIRA: RHELCMP-6327
Signed-off-by: Haibo Lin <hlin@redhat.com>
(cherry picked from commit 8f6f0f463f)
2023-11-10 16:51:51 +02:00
Haibo Lin
8dd344f9ee
4.3.8 release
JIRA: RHELCMP-11448
Signed-off-by: Haibo Lin <hlin@redhat.com>

(cherry picked from commit 467c7a7f6a (centos_master))
2023-11-10 16:51:49 +02:00
Lubomír Sedlář
d07f517a90
createiso: Update possibly changed file on DVD
There's no good way of detecting whether the buildinstall phase tweaked the boot
configuration (and efiboot.img). We should update those files on the DVD
just to be sure.

The .discinfo file is always different and needs to be updated.

Relates: https://pagure.io/pungi/issue/1647
JIRA: RHELCMP-10811
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit e1d7544c2b)
2023-11-10 16:51:39 +02:00
Lubomír Sedlář
48366177cc
pkgset: Stop reuse if configuration changed
When options controlling excluding arches change, it should break reuse.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit a71c8e23be)
2023-11-10 16:51:38 +02:00
Lubomír Sedlář
4cb8671fe4
Allow disabling inheriting ExcludeArch to noarch packages
Copying ExcludeArch/ExclusiveArch from source rpm to noarch is an easy
option to block shipping that particular noarch package from a certain
architecture. However, there is no way to bypass it, and it is rather
confusing and not discoverable.

An alternative way to remove an unwanted package is to use the good old
`filter_packages`, which has enough granularity to remove pretty much
anything from anywhere. The only downside is that it requires a change
in configuration, so it can't be done by a packager directly from a spec
file.
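A hypothetical filter_packages entry removing such a package from one variant (the variant and package names are made up):

```
filter_packages = [
    ("^Everything$", {"*": ["badly-behaved-noarch-package"]}),
]
```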

When we decide to break backwards compatibility, this option should be
removed and the entire ExcludeArch/ExclusiveArch inheritance removed
completely.

JIRA: ENGCMP-2606
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit ab508c1511)
2023-11-10 16:51:37 +02:00
Lubomír Sedlář
135bbbfe7e
pkgset: Support extra builds with no tags
This is a rather fringe use case. If the configuration contains
pkgset_koji_builds or pkgset_koji_scratch_tasks but no pkgset_koji_tag,
the compose will be empty.

The expectation though is that the packages should be pulled.

The extra RPMs are added to all non-modular tags because they are
supposed to mask builds from the same packages (e.g. a user may want to
explicitly pull in an older version than the tagged one).

This patch adds support for composes containing only explicitly listed
builds by creating a dummy package set that is not actually using any
tag.

JIRA: RHELCMP-11385
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit f960b4d155)
2023-11-10 16:51:36 +02:00
Lubomír Sedlář
5624829564
buildinstall: Avoid pointlessly tweaking the boot images
Only modify boot images if there actually is some change.

The tweak function updates config files with volume id and kickstart
file. Even if we don't have a kickstart and there is no change in the
config files, the image will be regenerated. This leads to a change in
checksum for no good reason.

This patch keeps track of modified config files. If there are none, it
avoids touching anything else.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 602b698080)
2023-11-10 16:51:35 +02:00
Haibo Lin
5fb4f86312
Prevent reuse if unsigned packages are allowed
JIRA: RHELCMP-8415
Signed-off-by: Haibo Lin <hlin@redhat.com>
(cherry picked from commit b30f7e0d83)
2023-11-10 16:51:34 +02:00
Lubomír Sedlář
e891fe7b09
Pass parent id/respin id to CTS
When the --target-dir option is used, the compose can be created in CTS,
but the parent and respin information is not passed through. That leads
to data missing later on.

JIRA: RHELCMP-11411
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit 0c3b6e22f9 (centos_master))
2023-11-10 16:51:33 +02:00
Haibo Lin
4cd7d39914
Exclude existing files in boot.iso
JIRA: RHELCMP-10811
Fixes: https://pagure.io/pungi/issue/1647
Signed-off-by: Haibo Lin <hlin@redhat.com>
(cherry picked from commit 3175ede38a)
2023-11-10 16:50:46 +02:00
Lubomír Sedlář
5de829d05b
image-build/osbuild: Pull ISOs into the compose
OSBuild tasks can produce ISO files. If they do, we should include them
in the compose, and we should pull them into the iso/ subdirectory
together with other ISOs.

Fixes: https://pagure.io/pungi/issue/1657
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 8920eef339)
2023-11-10 16:50:45 +02:00
Lubomír Sedlář
2930a1cc54
Retry 401 error from CTS
This could be a transient error caused by kerberos server instability.

JIRA: RHELCMP-11251
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 58036eab84)
2023-11-10 16:50:43 +02:00
Lubomír Sedlář
9c4d3d496d
gather: Better detection of debuginfo in lookaside
If the depsolver wants to include a package that is present in both the
source repo and a lookaside repo, it reliably detects binary packages
present in lookaside, but for debuginfo it's not so reliable.

There is a separate package object for each package in each repo.
Depending on which one is used, debuginfo could be included in the
result or not. This patch fixes that by actually looking if the same
package is present in any lookaside repo.
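A simplified sketch of the lookaside check, comparing by NEVRA instead of by package object identity:

```
def in_lookaside(pkg, lookaside_repos):
    nevra = (pkg.name, pkg.epoch, pkg.version, pkg.release, pkg.arch)
    return any(
        nevra == (p.name, p.epoch, p.version, p.release, p.arch)
        for repo in lookaside_repos
        for p in repo
    )
```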

JIRA: RHELCMP-9373
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit a4476f2570)
2023-11-10 16:50:42 +02:00
Haibo Lin
4637fd6697
Log versions of all installed packages
JIRA: RHELCMP-9493
Signed-off-by: Haibo Lin <hlin@redhat.com>
(cherry picked from commit 8c06b7a3f1)
2023-11-10 16:50:41 +02:00
Lubomír Sedlář
2ff8132eaf
Use authentication for all CTS calls
The update of the compose URL relied on the environment being set up by the
initial import. This broke when a unique credentials cache started
to be used and was cleaned up after the import.

JIRA: RHELCMP-11072
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 64ae81b416)
2023-11-10 16:50:40 +02:00
Lubomír Sedlář
f9190d1fd1
Fix black complaints
These are newly detected by black 23.1.0.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 826169af7c)
2023-11-10 16:50:38 +02:00
Lubomír Sedlář
80ad0448ec
Add vhd.gz extension to compressed VHD images
JIRA: RHELCMP-11027
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit d97b8bdd33)
2023-11-10 16:50:37 +02:00
Lubomír Sedlář
027380f969
Add vhd-compressed image type
JIRA: RHELCMP-11027
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 8768b23cbe)
2023-11-10 16:50:36 +02:00
Lubomír Sedlář
41048f60b7
Update to work with latest mock
The `called_once` attribute now raises an exception. Switch to
`assert_called_once` method. Also replace `assertTrue(x.called)` with
`x.assert_called()`.
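For illustration:

```
from unittest import mock

m = mock.Mock()
m()
m.assert_called_once()  # replaces checking the removed `called_once`
m.assert_called()       # replaces assertTrue(m.called)
```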

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 51628a974d)
2023-11-10 16:50:34 +02:00
Ondrej Nosek
9f8f6a7956
Default bztar format for sdist command
Usage of the 'bztar' format is unchanged; only the way it is
configured changes. The previous method was deprecated.

Signed-off-by: Ondrej Nosek <onosek@redhat.com>
(cherry picked from commit 88327d5784)
2023-11-10 16:50:33 +02:00
Lubomír Sedlář
3d3e4bafdf
- New upstream release 4.5.0
(cherry picked from commit 4dfabb647b (fedora_master))
2023-11-10 16:47:04 +02:00
Lubomír Sedlář
8fe0257e93
Release 4.4.1
(cherry picked from commit 4c604f434a (fedora_master))
2023-11-10 16:46:02 +02:00
Fedora Release Engineering
d7b5fd2278
Rebuilt for https://fedoraproject.org/wiki/Fedora_39_Mass_Rebuild
Signed-off-by: Fedora Release Engineering <releng@fedoraproject.org>

(cherry picked from commit bf4f5b6e53 (fedora_master))
2023-11-10 16:44:52 +02:00
Lubomír Sedlář
8b49d4ad61
Backport patch from upstream PR 1690
(cherry picked from commit 2362ef59c5 (fedora_master))
2023-11-10 16:44:19 +02:00
Lubomír Sedlář
57443cd0aa
Backport patch from upstream PR 1690
(cherry picked from commit 9ee6caf117 (fedora_master))
2023-11-10 16:43:47 +02:00
Python Maint
1d146bb8d5
Rebuilt for Python 3.12
(cherry picked from commit 8b8b558fbc (fedora_master))
2023-11-10 16:42:36 +02:00
Lubomír Sedlář
790091b7d7
Release 4.4.0
(cherry picked from commit a6196da315 (fedora_master))
2023-11-10 16:42:10 +02:00
Lubomír Sedlář
28aad3ea40
Rebuild without fedmsg dependencs
(cherry picked from commit d142464ef1 (fedora_master))
2023-11-10 16:41:29 +02:00
Pierre-Yves Chibon
7373b4dbbf
Replace the requirement on fedmsg to one on fedora-messaging
Signed-off-by: Pierre-Yves Chibon <pingou@pingoured.fr>
(cherry picked from commit 802f5fe854)
2023-11-10 16:40:34 +02:00
Lubomír Sedlář
218b11f1b7
Backport patches
(cherry picked from commit 20a5d00961 (fedora_master))
2023-11-10 16:40:33 +02:00
Haibo Lin
bfbe9095d2
Release 4.3.8
Signed-off-by: Haibo Lin <hlin@redhat.com>

(cherry picked from commit 3548f55821 (fedora_master))
2023-11-10 16:38:58 +02:00
Lubomír Sedlář
eb17182c04
Update license tag to SPDX
(cherry picked from commit f9143f6ea1 (fedora_master))
2023-11-10 16:33:41 +02:00
f91f90cf64 - Test empty sub-package 2023-10-26 00:01:45 +03:00
49931082b2 - Test empty sub-package 2023-10-25 23:11:26 +03:00
8ba8609bda - Test empty sub-package 2023-10-25 22:58:28 +03:00
6f495a8133 - Test empty sub-package 2023-10-25 22:55:18 +03:00
2b4bddbfe0 - Test empty sub-package 2023-10-25 22:17:42 +03:00
032cf725de - Bump version
- Changelog
2023-07-25 11:12:03 +03:00
8b11bb81af AL-5220: Investigate why CL9 can't be built on the new nebula
- Exclude the packages for using in a build
2023-07-24 18:26:51 +03:00
soksanichenko
114a73f100 - gather-module can find modules through symlinks
- Bump version
- Update changelog
2023-04-15 20:03:27 +03:00
soksanichenko
1c3e5dce5e - CLI option --label can be passed through a Pungi config file
- Bump version
- Update changelog
2023-04-13 00:57:39 +03:00
soksanichenko
e55abb17f1 - Bump version 2023-04-04 10:12:22 +03:00
soksanichenko
e81d78a1d1 - The log message contains a variant's name if Pungi didn't find one or more modules for that variant 2023-04-04 10:11:59 +02:00
soksanichenko
68915d04f8 - Excluded/included modules/packages will be processed correctly 2023-04-02 22:27:24 +03:00
soksanichenko
a25bf72fb8 - Changelog is updated
- Version is bumped
2023-03-31 12:07:22 +03:00
Stepan Oksanichenko
68aee1fa2d Merge pull request 'ALBS-987: Generate i686 and dev repositories with pungi on building new distr. version automatically' (#15) from ALBS-987 into al_master
Reviewed-on: #15
2023-03-31 09:03:39 +00:00
soksanichenko
6592735aec ALBS-987: Generate i686 and dev repositories with pungi on building new distr. version automatically
- Unittests are fixed
2023-03-30 14:05:47 +03:00
soksanichenko
943fd8e77d ALBS-987: Generate i686 and dev repositories with pungi on building new distr. version automatically
- Script `create extra repo` is fixed
- Unittests are fixed
2023-03-30 12:52:51 +03:00
soksanichenko
004fc4382f ALBS-987: Generate i686 and dev repositories with pungi on building new distr. version automatically
- Review comments
2023-03-29 11:40:00 +03:00
soksanichenko
596c5c0b7f ALBS-987: Generate i686 and dev repositories with pungi on building new distr. version automatically
- Refactoring
- Some absent packages are in packages.json now
2023-03-28 12:58:08 +03:00
soksanichenko
141d00e941 ALBS-987: Generate i686 and dev repositories with pungi on building new distr. version automatically
- More info about unsigned packages
2023-03-24 16:39:10 +02:00
soksanichenko
4b64d20826 ALBS-987: Generate i686 and dev repositories with pungi on building new distr. version automatically
- Path.rglob/glob doesn't work with symlinks (it's a bug and has been reported)
- Refactoring
2023-03-24 12:45:28 +02:00
soksanichenko
0747e967b0 ALBS-987: Generate i686 and dev repositories with pungi on building new distr. version automatically
- Some refactoring
2023-03-23 09:36:52 +02:00
soksanichenko
6d58bc2ed8 ALBS-987: Generate i686 and dev repositories with pungi on building new distr. version automatically
- [Generator of packages.json] Replace using CLI by config.yaml
- [Gather RPMs] os.path is replaced by Path
2023-03-22 15:56:58 +02:00
Stepan Oksanichenko
60a347a4a2 Merge pull request 'ALBS-1030: Generate Devel section in packages.json' (#14) from ALBS-1030 into al_master
Reviewed-on: #14
2023-03-22 10:06:58 +00:00
soksanichenko
53ed7386f3 ALBS-1030: Generate Devel section in packages.json
- Redundant empty lines are removed
2023-03-20 13:56:44 +02:00
soksanichenko
ed43f0038e ALBS-1030: Generate Devel section in packages.json
- Style fix
2023-03-20 13:55:06 +02:00
soksanichenko
fcc9b4f1ca ALBS-1030: Generate Devel section in packages.json
- Skip verifying an RPM signature if sigkeys are empty
2023-03-20 13:25:45 +02:00
soksanichenko
d32c293bca ALBS-1030: Generate Devel section in packages.json
- Some upstream changes to KojiMock parts
2023-03-19 21:11:12 +02:00
soksanichenko
f0bd1af999 ALBS-1030: Generate Devel section in packages.json
- Also the tool can combine (remove and add) packages in a variant from different
  sources, according to each source URL's type
2023-03-19 18:21:33 +02:00
161 changed files with 7464 additions and 10738 deletions

1860.patch (new file, +25 lines)

@@ -0,0 +1,25 @@
From 3bd28f97b2991cf4e3b4ce9ce34c80cba2bf21ab Mon Sep 17 00:00:00 2001
From: Lubomír Sedlář <lsedlar@redhat.com>
Date: Aug 08 2025 11:54:39 +0000
Subject: repoclosure: Don't fail if cache doesn't exist
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
---
diff --git a/pungi/phases/repoclosure.py b/pungi/phases/repoclosure.py
index 1d3fad0..398802f 100644
--- a/pungi/phases/repoclosure.py
+++ b/pungi/phases/repoclosure.py
@@ -136,6 +136,9 @@ def _delete_repoclosure_cache_dirs(compose):
pass
for top_cache_dir in cache_dirs:
+ if not os.path.isdir(top_cache_dir):
+ # Skip if the cache doesn't exist.
+ continue
for name in os.listdir(top_cache_dir):
if name.startswith(compose.compose_id):
cache_path = os.path.join(top_cache_dir, name)

MANIFEST.in (+1 line)

@@ -2,6 +2,7 @@ include AUTHORS
 include COPYING
 include GPL
 include pungi.spec
+include setup.cfg
 include tox.ini
 include share/*
 include share/multilib/*

TODO (-1 line)

@@ -47,7 +47,6 @@ Split Pungi into smaller well-defined tools
 * create install images
 * lorax
-* buildinstall
 * create isos
 * isos

(new tmpfiles.d file, +2 lines)

@@ -0,0 +1,2 @@
+# Clean up pungi cache
+d /var/cache/pungi/createrepo_c/ - - - 30d

doc/_static/phases.svg (vendored, 268 lines changed)

[The phase diagram is regenerated with Inkscape 1.4: the canvas grows from 610×301 px to 698×367 px, the LiveImages box and a duplicate Repoclosure group are removed, OSTreeContainer and KiwiBuild boxes are added (the OSBuild box is relabeled KiwiBuild and recolored), and the remaining phase boxes are repositioned to fit the new layout.]
id="tspan301-5-5"
style="font-size:12px;line-height:0">OSBuild</tspan></text>
</g>
</g> </g>
</svg> </svg>


View File

@ -31,29 +31,29 @@ import os
extensions = [] extensions = []
# Add any paths that contain templates here, relative to this directory. # Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates'] templates_path = ["_templates"]
# The suffix of source filenames. # The suffix of source filenames.
source_suffix = '.rst' source_suffix = ".rst"
# The encoding of source files. # The encoding of source files.
# source_encoding = 'utf-8-sig' # source_encoding = 'utf-8-sig'
# The master toctree document. # The master toctree document.
master_doc = 'index' master_doc = "index"
# General information about the project. # General information about the project.
project = u'Pungi' project = "Pungi"
copyright = u'2016, Red Hat, Inc.' copyright = "2016, Red Hat, Inc."
# The version info for the project you're documenting, acts as replacement for # The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the # |version| and |release|, also used in various other places throughout the
# built documents. # built documents.
# #
# The short X.Y version. # The short X.Y version.
version = '4.3' version = "4.10"
# The full version, including alpha/beta/rc tags. # The full version, including alpha/beta/rc tags.
release = '4.3.7' release = "4.10.1"
# The language for content autogenerated by Sphinx. Refer to documentation # The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages. # for a list of supported languages.
@ -67,7 +67,7 @@ release = '4.3.7'
# List of patterns, relative to source directory, that match files and # List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files. # directories to ignore when looking for source files.
exclude_patterns = ['_build'] exclude_patterns = ["_build"]
# The reST default role (used for this markup: `text`) to use for all # The reST default role (used for this markup: `text`) to use for all
# documents. # documents.
@ -85,7 +85,7 @@ exclude_patterns = ['_build']
# show_authors = False # show_authors = False
# The name of the Pygments (syntax highlighting) style to use. # The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx' pygments_style = "sphinx"
# A list of ignored prefixes for module index sorting. # A list of ignored prefixes for module index sorting.
# modindex_common_prefix = [] # modindex_common_prefix = []
@ -98,7 +98,7 @@ pygments_style = 'sphinx'
# The theme to use for HTML and HTML Help pages. See the documentation for # The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes. # a list of builtin themes.
html_theme = 'default' html_theme = "default"
# Theme options are theme-specific and customize the look and feel of a theme # Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the # further. For a list of options available for each theme, see the
@ -127,7 +127,7 @@ html_theme = 'default'
# Add any paths that contain custom static files (such as style sheets) here, # Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files, # relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css". # so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static'] html_static_path = ["_static"]
# Add any extra paths that contain custom files (such as robots.txt or # Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied # .htaccess) here, relative to this directory. These files are copied
@ -176,7 +176,7 @@ html_static_path = ['_static']
# html_file_suffix = None # html_file_suffix = None
# Output file base name for HTML help builder. # Output file base name for HTML help builder.
htmlhelp_basename = 'Pungidoc' htmlhelp_basename = "Pungidoc"
# -- Options for LaTeX output --------------------------------------------- # -- Options for LaTeX output ---------------------------------------------
@ -184,10 +184,8 @@ htmlhelp_basename = 'Pungidoc'
latex_elements = { latex_elements = {
# The paper size ('letterpaper' or 'a4paper'). # The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper', #'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt'). # The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt', #'pointsize': '10pt',
# Additional stuff for the LaTeX preamble. # Additional stuff for the LaTeX preamble.
#'preamble': '', #'preamble': '',
} }
@ -196,8 +194,7 @@ latex_elements = {
# (source start file, target name, title, # (source start file, target name, title,
# author, documentclass [howto, manual, or own class]). # author, documentclass [howto, manual, or own class]).
latex_documents = [ latex_documents = [
('index', 'Pungi.tex', u'Pungi Documentation', ("index", "Pungi.tex", "Pungi Documentation", "Daniel Mach", "manual"),
u'Daniel Mach', 'manual'),
] ]
# The name of an image file (relative to this directory) to place at the top of # The name of an image file (relative to this directory) to place at the top of
@ -225,10 +222,7 @@ latex_documents = [
# One entry per manual page. List of tuples # One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section). # (source start file, name, description, authors, manual section).
man_pages = [ man_pages = [("index", "pungi", "Pungi Documentation", ["Daniel Mach"], 1)]
('index', 'pungi', u'Pungi Documentation',
[u'Daniel Mach'], 1)
]
# If true, show URL addresses after external links. # If true, show URL addresses after external links.
# man_show_urls = False # man_show_urls = False
@ -240,9 +234,15 @@ man_pages = [
# (source start file, target name, title, author, # (source start file, target name, title, author,
# dir menu entry, description, category) # dir menu entry, description, category)
texinfo_documents = [ texinfo_documents = [
('index', 'Pungi', u'Pungi Documentation', (
u'Daniel Mach', 'Pungi', 'One line description of project.', "index",
'Miscellaneous'), "Pungi",
"Pungi Documentation",
"Daniel Mach",
"Pungi",
"One line description of project.",
"Miscellaneous",
),
] ]
# Documents to append as an appendix to all manuals. # Documents to append as an appendix to all manuals.

View File

@ -194,6 +194,17 @@ Options
Tracking Service Kerberos authentication. If not defined, the default Tracking Service Kerberos authentication. If not defined, the default
Kerberos principal is used. Kerberos principal is used.
**cts_oidc_token_url**
(*str*) -- URL to the OIDC token endpoint.
For example ``https://oidc.example.com/openid-connect/token``.
This option can be overridden by the environment variable ``CTS_OIDC_TOKEN_URL``.
**cts_oidc_client_id**
(*str*) -- OIDC client ID.
This option can be overridden by the environment variable ``CTS_OIDC_CLIENT_ID``.
Note that environment variable ``CTS_OIDC_CLIENT_SECRET`` must be configured with
corresponding client secret to authenticate to CTS via OIDC.
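A short illustration (the URL and client ID are made-up values)::

    cts_oidc_token_url = "https://oidc.example.com/openid-connect/token"
    cts_oidc_client_id = "pungi-compose"
    # The secret is supplied via the environment, never in the config:
    #   export CTS_OIDC_CLIENT_SECRET=...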
**compose_type** **compose_type**
(*str*) -- Allows to set default compose type. Type set via a command-line (*str*) -- Allows to set default compose type. Type set via a command-line
option overwrites this. option overwrites this.
@ -281,8 +292,8 @@ There a couple common format specifiers available for both the options:
format string. The pattern should not overlap, otherwise it is undefined format string. The pattern should not overlap, otherwise it is undefined
which one will be used. which one will be used.
This format will be used for all phases generating images. Currently that This format will be used for some phases generating images. Currently that
means ``createiso``, ``live_images`` and ``buildinstall``. means ``createiso``, ``buildinstall`` and ``ostree_installer``.
Available extra keys are: Available extra keys are:
* ``disc_num`` * ``disc_num``
@ -312,7 +323,6 @@ There a couple common format specifiers available for both the options:
Available keys are: Available keys are:
* ``boot`` -- for ``boot.iso`` images created in *buildinstall* phase * ``boot`` -- for ``boot.iso`` images created in *buildinstall* phase
* ``live`` -- for images created by *live_images* phase
* ``dvd`` -- for images created by *createiso* phase * ``dvd`` -- for images created by *createiso* phase
* ``ostree`` -- for ostree installer images * ``ostree`` -- for ostree installer images
@ -340,48 +350,10 @@ Example
disc_types = { disc_types = {
'boot': 'netinst', 'boot': 'netinst',
'live': 'Live',
'dvd': 'DVD', 'dvd': 'DVD',
} }
Signing
=======
If you want to sign deliverables generated during pungi run like RPM wrapped
images. You must provide few configuration options:
**signing_command** [optional]
(*str*) -- Command that will be run with a koji build as a single
argument. This command must not require any user interaction.
If you need to pass a password for a signing key to the command,
do this via command line option of the command and use string
formatting syntax ``%(signing_key_password)s``.
(See **signing_key_password_file**).
**signing_key_id** [optional]
(*str*) -- ID of the key that will be used for the signing.
This ID will be used when crafting koji paths to signed files
(``kojipkgs.fedoraproject.org/packages/NAME/VER/REL/data/signed/KEYID/..``).
**signing_key_password_file** [optional]
(*str*) -- Path to a file with password that will be formatted
into **signing_command** string via ``%(signing_key_password)s``
string format syntax (if used).
Because pungi config is usually stored in git and is part of compose
logs we don't want password to be included directly in the config.
Note: If ``-`` string is used instead of a filename, then you will be asked
for the password interactivelly right after pungi starts.
Example
-------
::
signing_command = '~/git/releng/scripts/sigulsign_unsigned.py -vv --password=%(signing_key_password)s fedora-24'
signing_key_id = '81b46521'
signing_key_password_file = '~/password_for_fedora-24_key'
.. _git-urls: .. _git-urls:
Git URLs Git URLs
@ -581,6 +553,16 @@ Options
with everything. Set this option to ``False`` to ignore ``noarch`` in with everything. Set this option to ``False`` to ignore ``noarch`` in
``ExclusiveArch`` and always consider only binary architectures. ``ExclusiveArch`` and always consider only binary architectures.
**pkgset_inherit_exclusive_arch_to_noarch** = True
(*bool*) -- When set to ``True``, the value of ``ExclusiveArch`` or
``ExcludeArch`` will be copied from source rpm to all its noarch packages.
That will then limit which architectures the noarch packages can be
included in.
By setting this option to ``False`` this step is skipped, and noarch
packages will by default land in all architectures. They can still be
excluded by listing them in a relevant section of ``filter_packages``.
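A hypothetical sketch of opting out of the inheritance while still keeping one
stray noarch package out of a variant (the package and variant names are made
up)::

    pkgset_inherit_exclusive_arch_to_noarch = False
    # Without inheritance, unwanted noarch packages must be filtered manually.
    filter_packages = [
        ("^Server$", {"*": ["foo-noarch-docs"]}),
    ]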
**pkgset_allow_reuse** = True **pkgset_allow_reuse** = True
(*bool*) -- When set to ``True``, *Pungi* will try to reuse pkgset data (*bool*) -- When set to ``True``, *Pungi* will try to reuse pkgset data
from the old composes specified by ``--old-composes``. When enabled, this from the old composes specified by ``--old-composes``. When enabled, this
@ -621,7 +603,7 @@ Options
------- -------
**buildinstall_method** **buildinstall_method**
(*str*) -- "lorax" (f16+, rhel7+) or "buildinstall" (older releases) (*str*) -- "lorax" (f16+, rhel7+)
**lorax_options** **lorax_options**
(*list*) -- special options passed on to *lorax*. (*list*) -- special options passed on to *lorax*.
@ -647,6 +629,10 @@ Options
* ``squashfs_only`` -- *bool* (default ``False``) pass the --squashfs_only to Lorax. * ``squashfs_only`` -- *bool* (default ``False``) pass the --squashfs_only to Lorax.
* ``configuration_file`` -- (:ref:`scm_dict <scm_support>`) (default empty) pass the * ``configuration_file`` -- (:ref:`scm_dict <scm_support>`) (default empty) pass the
specified configuration file to Lorax using the -c option. specified configuration file to Lorax using the -c option.
* ``rootfs_type`` -- *string* (default empty) pass the ``--rootfs-type``
option to Lorax with the provided value. If not set, no type is passed to
Lorax, which will use whatever default it is configured with.
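For instance, a sketch of requesting a squashfs root filesystem for all
variants (the value is passed through to Lorax as-is)::

    lorax_options = [
        ("^.*$", {
            "*": {"rootfs_type": "squashfs"},
        })
    ]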
**lorax_extra_sources** **lorax_extra_sources**
(*list*) -- a variant/arch mapping with urls for extra source repositories (*list*) -- a variant/arch mapping with urls for extra source repositories
added to Lorax command line. Either one repo or a list can be specified. added to Lorax command line. Either one repo or a list can be specified.
@ -920,6 +906,10 @@ Options
comps file can not be found in the package set. When disabled (the comps file can not be found in the package set. When disabled (the
default), such cases are still reported as warnings in the log. default), such cases are still reported as warnings in the log.
With the ``dnf`` gather backend, this option will abort the compose on any
missing package regardless of whether it is listed in comps,
``additional_packages`` or the prepopulate file.
**gather_source_mapping** **gather_source_mapping**
(*str*) -- JSON mapping with initial packages for the compose. The value (*str*) -- JSON mapping with initial packages for the compose. The value
should be a path to JSON file with following mapping: ``{variant: {arch: should be a path to JSON file with following mapping: ``{variant: {arch:
@ -1017,6 +1007,8 @@ Example
to track decisions. to track decisions.
.. _koji-settings:
Koji Settings Koji Settings
============= =============
@ -1031,6 +1023,11 @@ Options
to set up your Koji client profile. In the examples, the profile name is to set up your Koji client profile. In the examples, the profile name is
"koji", which points to Fedora's koji.fedoraproject.org. "koji", which points to Fedora's koji.fedoraproject.org.
**koji_cache**
(*str*) -- koji cache directory. Setting this causes Pungi to download
packages over HTTP into a cache, which is used in lieu of the Koji profile's
``topdir`` setting. See :doc:`koji` for details on this behavior.
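For example, with the layout used in :doc:`koji` (paths are illustrative)::

    koji_profile = "koji"
    koji_cache = "/mnt/compose/cache"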
**global_runroot_method** **global_runroot_method**
(*str*) -- global runroot method to use. If ``runroot_method`` is set (*str*) -- global runroot method to use. If ``runroot_method`` is set
per Pungi phase using a dictionary, this option defines the default per Pungi phase using a dictionary, this option defines the default
@ -1294,7 +1291,7 @@ Options
(*int|str*) -- how much free space should be left on each disk. The format (*int|str*) -- how much free space should be left on each disk. The format
is the same as for ``iso_size`` option. is the same as for ``iso_size`` option.
**iso_hfs_ppc64le_compatible** = True **iso_hfs_ppc64le_compatible** = False
(*bool*) -- when set to False, the Apple/HFS compatibility is turned off (*bool*) -- when set to False, the Apple/HFS compatibility is turned off
for ppc64le ISOs. This option only makes sense for bootable products, and for ppc64le ISOs. This option only makes sense for bootable products, and
affects images produced in *createiso* and *extra_isos* phases. affects images produced in *createiso* and *extra_isos* phases.
@ -1343,8 +1340,8 @@ All non-``RC`` milestones from label get appended to the version. For release
either label is used or date, type and respin. either label is used or date, type and respin.
Common options for Live Images, Live Media and Image Build Common options for Live Media and Image Build
========================================================== =============================================
All images can have ``ksurl``, ``version``, ``release`` and ``target`` All images can have ``ksurl``, ``version``, ``release`` and ``target``
specified. Since this can create a lot of duplication, there are global options specified. Since this can create a lot of duplication, there are global options
@ -1360,14 +1357,12 @@ The kickstart URL is configured by these options.
* ``global_ksurl`` -- global fallback setting * ``global_ksurl`` -- global fallback setting
* ``live_media_ksurl`` * ``live_media_ksurl``
* ``image_build_ksurl`` * ``image_build_ksurl``
* ``live_images_ksurl``
Target is specified by these settings. Target is specified by these settings.
* ``global_target`` -- global fallback setting * ``global_target`` -- global fallback setting
* ``live_media_target`` * ``live_media_target``
* ``image_build_target`` * ``image_build_target``
* ``live_images_target``
* ``osbuild_target`` * ``osbuild_target``
Version is specified by these options. If no version is set, a default value Version is specified by these options. If no version is set, a default value
@ -1376,7 +1371,6 @@ will be provided according to :ref:`automatic versioning <auto-version>`.
* ``global_version`` -- global fallback setting * ``global_version`` -- global fallback setting
* ``live_media_version`` * ``live_media_version``
* ``image_build_version`` * ``image_build_version``
* ``live_images_version``
* ``osbuild_version`` * ``osbuild_version``
Release is specified by these options. If set to a magic value to Release is specified by these options. If set to a magic value to
@ -1386,44 +1380,14 @@ to :ref:`automatic versioning <auto-version>`.
* ``global_release`` -- global fallback setting * ``global_release`` -- global fallback setting
* ``live_media_release`` * ``live_media_release``
* ``image_build_release`` * ``image_build_release``
* ``live_images_release``
* ``osbuild_release`` * ``osbuild_release``
Each configuration block can also optionally specify a ``failable`` key. For Each configuration block can also optionally specify a ``failable`` key. It
live images it should have a boolean value. For live media and image build it
should be a list of strings containing architectures that are optional. If any should be a list of strings containing architectures that are optional. If any
deliverable fails on an optional architecture, it will not abort the whole deliverable fails on an optional architecture, it will not abort the whole
compose. If the list contains only ``"*"``, all arches will be substituted. compose. If the list contains only ``"*"``, all arches will be substituted.
Live Images Settings
====================
**live_images**
(*list*) -- Configuration for the particular image. The elements of the
list should be tuples ``(variant_uid_regex, {arch|*: config})``. The config
should be a dict with these keys:
* ``kickstart`` (*str*)
* ``ksurl`` (*str*) [optional] -- where to get the kickstart from
* ``name`` (*str*)
* ``version`` (*str*)
* ``target`` (*str*)
* ``repo`` (*str|[str]*) -- repos specified by URL or variant UID
* ``specfile`` (*str*) -- for images wrapped in RPM
* ``scratch`` (*bool*) -- only RPM-wrapped images can use scratch builds,
but by default this is turned off
* ``type`` (*str*) -- what kind of task to start in Koji. Defaults to
``live`` meaning ``koji spin-livecd`` will be used. Alternative option
is ``appliance`` corresponding to ``koji spin-appliance``.
* ``sign`` (*bool*) -- only RPM-wrapped images can be signed
**live_images_no_rename**
(*bool*) -- When set to ``True``, filenames generated by Koji will be used.
When ``False``, filenames will be generated based on ``image_name_format``
configuration option.
Live Media Settings Live Media Settings
=================== ===================
@ -1579,6 +1543,83 @@ Example
} }
KiwiBuild Settings
==================
**kiwibuild**
(*dict*) -- configuration for building images using kiwi via a Koji plugin.
Pungi will trigger a Koji task delegating to kiwi, which will build the
image and import it to Koji via content generators.
Format: ``{variant_uid_regex: [{...}]}``.
Required keys in the configuration dict:
* ``kiwi_profile`` -- (*str*) select profile from description file.
Description scm, description path and target have to be provided too, but
instead of specifying them for each image separately, you can use the
``kiwibuild_*`` options or ``global_target``.
Optional keys:
* ``description_scm`` -- (*str*) scm URL of the kiwi description.
* ``description_path`` -- (*str*) path to kiwi description inside the scm
repo.
* ``repos`` -- additional repos used to install RPMs in the image. The
compose repository for the enclosing variant is added automatically.
Either variant name or a URL is supported.
* ``target`` -- (*str*) which build target to use for the task. If not
provided, then either ``kiwibuild_target`` or ``global_target`` is
needed.
* ``release`` -- (*str*) release of the output image.
* ``arches`` -- (*[str]*) List of architectures to build for. If not
provided, all variant architectures will be built.
* ``failable`` -- (*[str]*) List of architectures for which this
deliverable is not release blocking.
* ``type`` -- (*str*) override default type from the bundle with this value.
* ``type_attr`` -- (*[str]*) override default attributes for the build type
from description.
* ``bundle_name_format`` -- (*str*) override default bundle format name.
* ``version`` -- (*str*) override version. Follows the same rules as
described in :ref:`automatic versioning <auto-version>`.
* ``repo_releasever`` -- (*str*) Override default releasever of the output
image.
* ``manifest_type`` -- the image type that is put into the manifest by
pungi. If not supplied, an autodetected value will be provided. It may or
may not make sense.
* ``use_buildroot_repo = False`` -- (*bool*) whether the task should
automatically enable buildroot repository corresponding to the used
target.
The options can be set either for the specific image, or at the phase level
(see below). Version also falls back to ``global_version``.
**kiwibuild_description_scm**
(*str*) -- URL for scm containing the description files
**kiwibuild_description_path**
(*str*) -- path to a description file within the description scm
**kiwibuild_type**
(*str*) -- override default type from the bundle with this value.
**kiwibuild_type_attr**
(*[str]*) -- override default attributes for the build type from description.
**kiwibuild_bundle_name_format**
(*str*) -- override default bundle format name.
**kiwibuild_version**
(*str*) -- override version for all kiwibuild tasks.
**kiwibuild_repo_releasever**
(*str*) -- override releasever for all kiwibuild tasks.
**kiwibuild_use_buildroot_repo**
(*bool*) -- set enablement of a buildroot repo for all kiwibuild tasks.
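Example
-------

A minimal sketch only; the target, description repository and profile below
are illustrative values, not taken from any real compose config::

    kiwibuild_target = "f43-candidate"
    kiwibuild_description_scm = "https://pagure.io/fedora-kiwi-descriptions.git"
    kiwibuild_description_path = "teams/cloud/cloud.xml"

    kiwibuild = {
        "^Cloud$": [
            {
                "kiwi_profile": "Cloud-Base-Generic",
                "arches": ["x86_64", "aarch64"],
                "failable": ["aarch64"],
            },
        ],
    }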
OSBuild Composer for building images OSBuild Composer for building images
==================================== ====================================
@ -1627,11 +1668,17 @@ OSBuild Composer for building images
* ``arches`` -- list of architectures for which to build the image. By * ``arches`` -- list of architectures for which to build the image. By
default, the variant arches are used. This option can only restrict it, default, the variant arches are used. This option can only restrict it,
not add a new one. not add a new one.
* ``manifest_type`` -- the image type that is put into the manifest by
pungi. If not supplied then it is autodetected from the Koji output.
* ``ostree_url`` -- URL of the repository that's used to fetch the parent * ``ostree_url`` -- URL of the repository that's used to fetch the parent
commit from. commit from.
* ``ostree_ref`` -- name of the ostree branch * ``ostree_ref`` -- name of the ostree branch
* ``ostree_parent`` -- commit hash or a branch-like reference to the * ``ostree_parent`` -- commit hash or a branch-like reference to the
parent commit. parent commit.
* ``customizations`` -- a dictionary with customizations to use for the
image build. For the list of supported customizations, see the **hosted**
variants in the `Image Builder documentation
<https://osbuild.org/docs/user-guide/blueprint-reference#installation-device>`__.
* ``upload_options`` -- a dictionary with upload options specific to the * ``upload_options`` -- a dictionary with upload options specific to the
target cloud environment. If provided, the image will be uploaded to the target cloud environment. If provided, the image will be uploaded to the
cloud environment, in addition to the Koji server. One can't combine cloud environment, in addition to the Koji server. One can't combine
@ -1679,6 +1726,102 @@ OSBuild Composer for building images
arch. arch.
Image Builder Settings
======================
**imagebuilder**
(*dict*) -- configuration for building images with the ``koji-image-builder``
Koji plugin. Pungi will trigger a Koji task which will build the image with
the given configuration using the ``image-builder`` executable in the build
root.
Format: ``{variant_uid_regex: [{...}]}``.
Required keys in the configuration dict:
* ``name`` -- name of the Koji package
* ``types`` -- a list with a single image type string representing
the image type to build (e.g. ``qcow2``). Only a single image type
can be provided as an argument.
Optional keys:
* ``target`` -- which build target to use for the task. Either this option,
the global ``imagebuilder_target``, or ``global_target`` is required.
* ``version`` -- version for the final build (as a string). This option is
required if the global ``imagebuilder_version`` or its ``global_version``
equivalent are not specified.
* ``release`` -- release part of the final NVR. If neither this option nor
the global ``imagebuilder_release`` nor its ``global_release`` equivalent
are set, Koji will automatically generate a value.
* ``repos`` -- a list of repositories from which to consume packages for
building the image. By default only the variant repository is used.
The list items use the following formats:
* String with just the repository URL.
* Variant ID in the current compose.
* ``arches`` -- list of architectures for which to build the image. By
default, the variant arches are used. This option can only restrict it,
not add a new one.
* ``seed`` -- An integer that can be used to make builds more reproducible.
When ``image-builder`` builds images, various bits and bobs are generated
with a PRNG (partition UUIDs, etc.). Pinning the seed with this argument
(or globally with ``imagebuilder_seed``) will make builds use the same
random values each time. Note that using ``seed`` requires the Koji side
to have at least ``koji-image-builder >= 7`` deployed.
* ``scratch`` -- A boolean to instruct ``koji-image-builder`` to perform scratch
builds. This might have implications on garbage collection within the ``koji``
instance you're targeting. Can also be set globally through
``imagebuilder_scratch``.
* ``ostree`` -- A dictionary describing where to get ``ostree`` content when
applicable. The dictionary contains the following keys:
* ``url`` -- URL of the repository that's used to fetch the parent
commit from.
* ``ref`` -- Name of an ostree branch or tag
* ``blueprint`` -- A dictionary with a blueprint to use for the
image build. Blueprints can customize images beyond their initial definition.
For the list of supported customizations, see external
`Documentation <https://osbuild.org/docs/user-guide/blueprint-reference/>`__
.. note::
There is initial support for having this task as failable without aborting
the whole compose. This can be enabled by setting ``"failable": ["*"]`` in
the config for the image. It is an on/off switch without granularity per
arch.
Example Config
--------------
::
imagebuilder_target = 'f43-image-builder'
imagebuilder_seed = 43
imagebuilder_scratch = True
imagebuilder = {
"^IoT$": [
{
"name": "%s-raw" % release_name,
"types": ["iot-raw-xz"],
"arches": ["x86_64"], #, "aarch64"],
"repos": ["https://kojipkgs.fedoraproject.org/compose/rawhide/latest-Fedora-Rawhide/compose/Everything/$arch/os/"],
"ostree": {
"url": "https://kojipkgs.fedoraproject.org/compose/iot/repo/",
"ref": "fedora/rawhide/$arch/iot",
},
"subvariant": "IoT",
"failable": ["*"],
},
]
}
Image container Image container
=============== ===============
@ -1739,16 +1882,16 @@ another directory. Any new packages in the compose will be added to the
repository with a new commit. repository with a new commit.
**ostree** **ostree**
(*dict*) -- a mapping of configuration for each. The format should be (*dict*) -- a mapping of configuration for each variant. The format should
``{variant_uid_regex: config_dict}``. It is possible to use a list of be ``{variant_uid_regex: config_dict}``. It is possible to use a list of
configuration dicts as well. configuration dicts as well.
The configuration dict for each variant arch pair must have these keys: The configuration dict for each variant arch pair must have these keys:
* ``treefile`` -- (*str*) Filename of configuration for ``rpm-ostree``. * ``treefile`` -- (*str*) Filename of configuration for ``rpm-ostree``.
* ``config_url`` -- (*str*) URL for Git repository with the ``treefile``. * ``config_url`` -- (*str*) URL for Git repository with the ``treefile``.
* ``repo`` -- (*str|dict|[str|dict]*) repos specified by URL or variant UID * ``repo`` -- (*str|dict|[str|dict]*) repos specified by URL or a dict of
or a dict of repo options, ``baseurl`` is required in the dict. repo options, ``baseurl`` is required in the dict.
* ``ostree_repo`` -- (*str*) Where to put the ostree repository * ``ostree_repo`` -- (*str*) Where to put the ostree repository
These keys are optional: These keys are optional:
@ -1779,6 +1922,8 @@ repository with a new commit.
* ``tag_ref`` -- (*bool*, default ``True``) If set to ``False``, a git * ``tag_ref`` -- (*bool*, default ``True``) If set to ``False``, a git
reference will not be created. reference will not be created.
* ``ostree_ref`` -- (*str*) To override value ``ref`` from ``treefile``. * ``ostree_ref`` -- (*str*) To override value ``ref`` from ``treefile``.
* ``runroot_packages`` -- (*list*) A list of additional package names to be
installed in the runroot environment in Koji.
Example config Example config
-------------- --------------
@ -1788,13 +1933,11 @@ Example config
"^Atomic$": { "^Atomic$": {
"treefile": "fedora-atomic-docker-host.json", "treefile": "fedora-atomic-docker-host.json",
"config_url": "https://git.fedorahosted.org/git/fedora-atomic.git", "config_url": "https://git.fedorahosted.org/git/fedora-atomic.git",
"keep_original_sources": True,
"repo": [ "repo": [
"Server",
"http://example.com/repo/x86_64/os", "http://example.com/repo/x86_64/os",
{"baseurl": "Everything"},
{"baseurl": "http://example.com/linux/repo", "exclude": "systemd-container"}, {"baseurl": "http://example.com/linux/repo", "exclude": "systemd-container"},
], ],
"keep_original_sources": True,
"ostree_repo": "/mnt/koji/compose/atomic/Rawhide/", "ostree_repo": "/mnt/koji/compose/atomic/Rawhide/",
"update_summary": True, "update_summary": True,
# Automatically generate a reasonable version # Automatically generate a reasonable version
@ -1810,6 +1953,88 @@ Example config
has the pungi_ostree plugin installed. has the pungi_ostree plugin installed.
OSTree Native Container Settings
================================
The ``ostree_container`` phase of *Pungi* can create an ostree native container
image as an OCI archive. This is done by running ``rpm-ostree compose image``
in a Koji runroot environment.
While rpm-ostree can use information from previously built images to improve
the split in container layers, we cannot use that functionality until
https://github.com/containers/skopeo/pull/2114 is resolved. Each invocation
will thus create a new OCI archive image *from scratch*.
**ostree_container**
(*dict*) -- a mapping of configuration for each variant. The format should
be ``{variant_uid_regex: config_dict}``. It is possible to use a list of
configuration dicts as well.
The configuration dict for each variant arch pair must have these keys:
* ``treefile`` -- (*str*) Filename of configuration for ``rpm-ostree``.
* ``config_url`` -- (*str*) URL for Git repository with the ``treefile``.
These keys are optional:
* ``repo`` -- (*str|dict|[str|dict]*) repos specified by URL or a dict of
repo options, ``baseurl`` is required in the dict.
* ``keep_original_sources`` -- (*bool*) Keep the existing source repos in
the tree config file. If not enabled, all the original source repos will
be removed from the tree config file.
* ``config_branch`` -- (*str*) Git branch of the repo to use. Defaults to
``main``.
* ``arches`` -- (*[str]*) List of architectures for which to generate
ostree native container images. There will be one task per architecture.
By default all architectures in the variant are used.
* ``failable`` -- (*[str]*) List of architectures for which this
deliverable is not release blocking.
* ``version`` -- (*str*) Version string to be added to the OCI archive name.
If this option is set to ``!OSTREE_VERSION_FROM_LABEL_DATE_TYPE_RESPIN``,
a value will be generated automatically as ``$VERSION.$RELEASE``.
If this option is set to ``!VERSION_FROM_VERSION_DATE_RESPIN``,
a value will be generated automatically as ``$VERSION.$DATE.$RESPIN``.
:ref:`See how those values are created <auto-version>`.
* ``tag_ref`` -- (*bool*, default ``True``) If set to ``False``, a git
reference will not be created.
* ``runroot_packages`` -- (*list*) A list of additional package names to be
installed in the runroot environment in Koji.
* ``subvariant`` -- (*str*) The subvariant value to be used in the metadata
for the image. Also used in the image's filename, unless overridden by
``name``. Defaults to being the same as the variant. If building more
than one ostree container in a variant, each must have a unique
subvariant.
* ``name`` -- (*str*) The base for the image's filename. To produce the
complete filename, the image's architecture, the version string, and the
format suffix are appended to this. Defaults to the value of
``release_short`` and the subvariant, joined by a dash.
Example config
--------------
::
ostree_container = {
"^Sagano$": {
"treefile": "fedora-tier-0-38.yaml",
"config_url": "https://gitlab.com/CentOS/cloud/sagano.git",
"config_branch": "main",
"repo": [
"http://example.com/repo/x86_64/os",
{"baseurl": "http://example.com/linux/repo", "exclude": "systemd-container"},
],
# Automatically generate a reasonable version
"version": "!OSTREE_VERSION_FROM_LABEL_DATE_TYPE_RESPIN",
# Only run this for x86_64 even if Sagano has more arches
"arches": ["x86_64"],
}
}
**ostree_container_use_koji_plugin** = False
(*bool*) -- When set to ``True``, the Koji pungi_ostree task will be
used to execute rpm-ostree instead of runroot. Use only if the Koji instance
has the pungi_ostree plugin installed.
Ostree Installer Settings Ostree Installer Settings
========================= =========================
@ -2160,9 +2385,9 @@ Miscellaneous Settings
format string accepting ``%(variant_name)s`` and ``%(arch)s`` placeholders. format string accepting ``%(variant_name)s`` and ``%(arch)s`` placeholders.
**symlink_isos_to** **symlink_isos_to**
(*str*) -- If set, the ISO files from ``buildinstall``, ``createiso`` and (*str*) -- If set, the ISO files from ``buildinstall`` and ``createiso``
``live_images`` phases will be put into this destination, and a symlink phases will be put into this destination, and a symlink pointing to this
pointing to this location will be created in actual compose directory. location will be created in actual compose directory.
**dogpile_cache_backend** **dogpile_cache_backend**
(*str*) -- If set, Pungi will use the configured Dogpile cache backend to (*str*) -- If set, Pungi will use the configured Dogpile cache backend to

View File

@ -294,30 +294,6 @@ This is a shortened configuration for Fedora Rawhide compose as of 2019-10-14.
}) })
] ]
live_target = 'f32'
live_images_no_rename = True
live_images = [
('^Workstation$', {
'armhfp': {
'kickstart': 'fedora-arm-workstation.ks',
'name': 'Fedora-Workstation-armhfp',
# Again workstation takes packages from Everything.
'repo': 'Everything',
'type': 'appliance',
'failable': True,
}
}),
('^Server$', {
# But Server has its own repo.
'armhfp': {
'kickstart': 'fedora-arm-server.ks',
'name': 'Fedora-Server-armhfp',
'type': 'appliance',
'failable': True,
}
}),
]
ostree = { ostree = {
"^Silverblue$": { "^Silverblue$": {
"version": "!OSTREE_VERSION_FROM_LABEL_DATE_TYPE_RESPIN", "version": "!OSTREE_VERSION_FROM_LABEL_DATE_TYPE_RESPIN",
@ -343,6 +319,20 @@ This is a shortened configuration for Fedora Radhide compose as of 2019-10-14.
} }
} }
ostree_container = {
"^Sagano$": {
"treefile": "fedora-tier-0-38.yaml",
"config_url": "https://gitlab.com/CentOS/cloud/sagano.git",
"config_branch": "main",
# Consume packages from Everything
"repo": "Everything",
# Automatically generate a reasonable version
"version": "!OSTREE_VERSION_FROM_LABEL_DATE_TYPE_RESPIN",
# Only run this for x86_64 even if Sagano has more arches
"arches": ["x86_64"],
}
}
ostree_installer = [ ostree_installer = [
("^Silverblue$", { ("^Silverblue$", {
"x86_64": { "x86_64": {

View File

@ -19,7 +19,7 @@ Contents:
scm_support scm_support
messaging messaging
gathering gathering
koji
comps comps
contributing contributing
testing testing
multi_compose

107
doc/koji.rst Normal file
View File

@ -0,0 +1,107 @@
======================
Getting data from koji
======================
When Pungi is configured to get packages from a Koji tag, it somehow needs to
access the actual RPM files.
Historically, this required the storage used by Koji to be directly available
on the host where Pungi was running. This was usually achieved by using NFS for
the Koji volume, and mounting it on the compose host.
The compose could be created directly on the same volume. In such case the
packages would be hardlinked, significantly reducing space consumption.
The compose could also be created on a different storage, in which case the
packages would either need to be copied over or symlinked. Using symlinks
requires that anything that accesses the compose (e.g. a download server) also
mounts the Koji volume in the same location.
There is also a risk with symlinks that the package in Koji can change (due to
being resigned for example), which would invalidate composes linking to it.
Using Koji without direct mount
===============================
It is possible now to run a compose from a Koji tag without direct access to
Koji storage.
Pungi can download the packages over HTTP protocol, store them in a local
cache, and consume them from there. To enable this behavior, set the
:ref:`koji_cache <koji-settings>` option in the compose configuration.
The local cache has similar structure to what is on the Koji volume.
When Pungi needs a package, it knows its path on the Koji volume. It will
replace the ``topdir`` with the cache location. If such a file exists, it
will be used.
If it doesn't exist, it will be downloaded from Koji (by replacing the
``topdir`` with ``topurl``).
::
Koji path /mnt/koji/packages/foo/1/1.fc38/data/signed/abcdef/noarch/foo-1-1.fc38.noarch.rpm
Koji URL https://kojipkgs.fedoraproject.org/packages/foo/1/1.fc38/data/signed/abcdef/noarch/foo-1-1.fc38.noarch.rpm
Local path /mnt/compose/cache/packages/foo/1/1.fc38/data/signed/abcdef/noarch/foo-1-1.fc38.noarch.rpm
The packages can be hard- or softlinked from this cache directory
(``/mnt/compose/cache`` in the example).
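A minimal sketch of that lookup, assuming the ``topdir``, ``topurl`` and cache
values from the example above (the helper name is hypothetical)::

    import os
    import shutil
    import urllib.request

    TOPDIR = "/mnt/koji"
    TOPURL = "https://kojipkgs.fedoraproject.org"
    CACHE = "/mnt/compose/cache"

    def get_package(koji_path):
        """Return a local path for the RPM, downloading it if not cached."""
        local = koji_path.replace(TOPDIR, CACHE, 1)
        if not os.path.exists(local):
            os.makedirs(os.path.dirname(local), exist_ok=True)
            url = koji_path.replace(TOPDIR, TOPURL, 1)
            with urllib.request.urlopen(url) as resp, open(local, "wb") as f:
                shutil.copyfileobj(resp, f)
        return local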
Cleanup
-------
While the approach above allows each RPM to be downloaded only once, it will
eventually result in the Koji volume being mirrored locally. Most of the
packages will however no longer be needed.
There is a script ``pungi-cache-cleanup`` that can help with that. It can find
and remove files from the cache that are no longer needed.
A file is no longer needed if it has a single link (meaning it is only in the
cache, not in any compose), and its mtime is older than a given threshold.
It doesn't make sense to delete files that are hardlinked in an existing
compose as it would not save any space anyway.
The mtime check is meant to preserve files that are downloaded but not actually
used in a compose, like a subpackage that is not included in any variant. Every
time its existence in the local cache is checked, the mtime is updated.
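The check itself is simple; a sketch with an arbitrary threshold::

    import os
    import time

    def is_stale(path, max_age_days=30):
        """True if the file is only in the cache and has not been touched."""
        st = os.stat(path)
        only_in_cache = st.st_nlink == 1   # no hardlink from any compose
        too_old = st.st_mtime < time.time() - max_age_days * 86400
        return only_in_cache and too_old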
Race conditions?
----------------
It should be safe to have multiple compose hosts share the same storage volume
for generated composes and local cache.
If a cache file is accessed and it exists, there's no risk of race condition.
If two composes need the same file at the same time and it is not present yet,
one of them will take a lock on it and start downloading. The other will wait
until the download is finished.
The lock is only valid for a set amount of time (5 minutes) to avoid issues
where the downloading process is killed in a way that blocks it from releasing
the lock.
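The spec file now pulls in ``python3-flufl-lock``, which fits this pattern; a
sketch of how such a time-limited lock could be taken (not the actual Pungi
code)::

    from datetime import timedelta
    from flufl.lock import Lock

    def download_locked(local_path, download):
        # A dead process can block others for at most the lock lifetime.
        lock = Lock(local_path + ".lock", lifetime=timedelta(minutes=5))
        with lock:
            download()  # callable that fetches the file into local_path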
If the file is large and the network is slow, the limit may not be enough to finish
downloading. In that case the second process will steal the lock while the
first process is still downloading. This will result in the same file being
downloaded twice.
When the first process finishes the download, it will put the file into the
local cache location. When the second process finishes, it will atomically
replace it, but since the content is identical, nothing effectively changes.
If the first compose already managed to hardlink the file before it gets
replaced, there will be two copies of the file present locally.
Integrity checking
------------------
There is minimal integrity checking. RPM packages belonging to real builds will
be checked against the checksum provided by the Koji hub.
There is no checking for scratch builds or any images.
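Conceptually the check is just hashing the downloaded file and comparing it to
the digest reported by the hub (sha256 is an assumption here)::

    import hashlib

    def rpm_matches(path, expected_digest):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                h.update(chunk)
        return h.hexdigest() == expected_digest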

View File

@ -1,107 +0,0 @@
.. _multi_compose:
Managing compose from multiple parts
====================================
There may be cases where it makes sense to split a big compose into separate
parts, but create a compose output that links all output into one familiar
structure.
The `pungi-orchestrate` tools allows that.
It works with an INI-style configuration file. The ``[general]`` section
contains information about identity of the main compose. Other sections define
individual parts.
The parts are scheduled to run in parallel, with the minimal amount of
serialization. The final compose directory will contain hard-links to the
files.
General settings
----------------
**target**
Path to directory where the final compose should be created.
**compose_type**
Type of compose to make.
**release_name**
Name of the product for the final compose.
**release_short**
Short name of the product for the final compose.
**release_version**
Version of the product for the final compose.
**release_type**
Type of the product for the final compose.
**extra_args**
Additional arguments that will be passed to the child Pungi processes.
**koji_profile**
If specified, a current event will be retrieved from the Koji instance and
used for all parts.
**kerberos**
If set to yes, a kerberos ticket will be automatically created at the start.
Set keytab and principal as well.
**kerberos_keytab**
Path to keytab file used to create the kerberos ticket.
**kerberos_principal**
Kerberos principal for the ticket
**pre_compose_script**
Commands to execute before first part is started. Can contain multiple
commands on separate lines.
**post_compose_script**
Commands to execute after the last part finishes and final status is
updated. Can contain multiple commands on separate lines. ::
post_compose_script =
compose-latest-symlink $COMPOSE_PATH
custom-post-compose-script.sh
Multiple environment variables are defined for the scripts:
* ``COMPOSE_PATH``
* ``COMPOSE_ID``
* ``COMPOSE_DATE``
* ``COMPOSE_TYPE``
* ``COMPOSE_RESPIN``
* ``COMPOSE_LABEL``
* ``RELEASE_ID``
* ``RELEASE_NAME``
* ``RELEASE_SHORT``
* ``RELEASE_VERSION``
* ``RELEASE_TYPE``
* ``RELEASE_IS_LAYERED`` ``YES`` for layered products, empty otherwise
* ``BASE_PRODUCT_NAME`` only set for layered products
* ``BASE_PRODUCT_SHORT`` only set for layered products
* ``BASE_PRODUCT_VERSION`` only set for layered products
* ``BASE_PRODUCT_TYPE`` only set for layered products
**notification_script**
Executable name (or path to a script) that will be used to send a message
once the compose is finished. In order for a valid URL to be included in the
message, at least one part must configure path translation that would apply
to location of main compose.
Only two messages will be sent, one for start and one for finish (either
successful or not).
Partial compose settings
------------------------
Each part should have a separate section in the config file.
It can specify these options:
**config**
Path to configuration file that describes this part. If relative, it is
resolved relative to the file with parts configuration.
**just_phase**, **skip_phase**
Customize which phases should run for this part.
**depends_on**
A comma separated list of other parts that must be finished before this part
starts.
**failable**
A boolean toggle to mark a part as failable. A failure in such part will
mark the final compose as incomplete, but still successful.

View File

@ -30,17 +30,14 @@ packages to architectures.
Buildinstall Buildinstall
------------ ------------
Spawns a bunch of threads, each of which runs either ``lorax`` or Spawns a bunch of threads, each of which runs the ``lorax`` command. The
``buildinstall`` command (the latter coming from ``anaconda`` package). The
commands create ``boot.iso`` and other boot configuration files. The image is commands create ``boot.iso`` and other boot configuration files. The image is
finally linked into the ``compose/`` directory as netinstall media. finally linked into the ``compose/`` directory as netinstall media.
The created images are also needed for creating live media or other images in The created images are also needed for creating live media or other images in
later phases. later phases.
With ``lorax`` this phase runs one task per variant.arch combination. For With ``lorax`` this phase runs one task per variant.arch combination.
``buildinstall`` command there is only one task per architecture and
``product.img`` should be used to customize the results.
Gather Gather
------ ------
@ -115,12 +112,24 @@ ImageBuild
This phase wraps up ``koji image-build``. It also updates the metadata This phase wraps up ``koji image-build``. It also updates the metadata
ultimately responsible for ``images.json`` manifest. ultimately responsible for ``images.json`` manifest.
KiwiBuild
---------
Similarly to image build, this phase creates a koji `kiwiBuild` task. In the
background it uses Kiwi to create images.
OSBuild OSBuild
------- -------
Similarly to image build, this phase creates a koji `osbuild` task. In the Similarly to image build, this phase creates a koji `osbuild` task. In the
background it uses OSBuild Composer to create images. background it uses OSBuild Composer to create images.
ImageBuilder
------------
Similarly to image build, this phase creates a koji `imageBuilderBuild`
task. In the background it uses `image-builder` to create images.
OSBS OSBS
---- ----

View File

@ -18,6 +18,7 @@ which can contain following keys.
* ``cvs`` -- copies files from a CVS repository * ``cvs`` -- copies files from a CVS repository
* ``rpm`` -- copies files from a package in the compose * ``rpm`` -- copies files from a package in the compose
* ``koji`` -- downloads archives from a given build in Koji build system * ``koji`` -- downloads archives from a given build in Koji build system
* ``container-image`` -- downloads an artifact from a container registry
* ``repo`` * ``repo``
@ -41,6 +42,14 @@ which can contain following keys.
* ``command`` -- defines a shell command to run after Git clone to generate the * ``command`` -- defines a shell command to run after Git clone to generate the
needed file (for example to run ``make``). Only supported in Git backend. needed file (for example to run ``make``). Only supported in Git backend.
* ``options`` -- a dictionary of additional configuration options. These are
specific to different backends.
Currently supported values for Git:
* ``credential_helper`` -- path to a credential helper used to supply
username/password for remotes that require authentication.
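For instance, a Git ``scm_dict`` using the new key (the helper path is
hypothetical)::

    {
        "scm": "git",
        "repo": "https://git.example.com/private/compose-config.git",
        "file": "extra.conf",
        "options": {
            "credential_helper": "/usr/local/bin/compose-git-credentials",
        },
    }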
Koji examples Koji examples
------------- -------------
@ -77,6 +86,24 @@ For ``extra_files`` phase either key is valid and should be chosen depending on
what the actual use case. what the actual use case.
``container-image`` example
---------------------------
Example of pulling a container image into the compose. ::
{
# Pull a container into an oci-archive tar file
"scm": "container-image",
# This is the pull spec including tag. It is passed directly to skopeo
# copy with no modification.
"repo": "docker://registry.access.redhat.com/ubi9/ubi-minimal:latest",
# Key `file` is required, but the value is ignored.
"file": "",
# Optional subdirectory under Server/<arch>/os
"target": "containers",
}
Caveats Caveats
------- -------

View File

@ -1,64 +1,60 @@
%{?python_enable_dependency_generator}
Name: pungi Name: pungi
Version: 4.3.7 Version: 4.10.1
Release: 3%{?dist}.alma Release: 1%{?dist}.alma.2
Summary: Distribution compose tool Summary: Distribution compose tool
License: GPL-2.0-only License: GPL-2.0-only
URL: https://pagure.io/pungi URL: https://pagure.io/pungi
Source0: %{name}-%{version}.tar.bz2 Source0: %{name}-%{version}.tar.bz2
Patch: https://pagure.io/pungi/pull-request/1860.patch
ExcludeArch: i686
BuildRequires: make BuildRequires: make
BuildRequires: python3-pytest BuildRequires: python3-pytest
BuildRequires: python3-pyfakefs # replaced by unittest.mock
BuildRequires: python3-ddt # BuildRequires: python3-mock
BuildRequires: python3-devel BuildRequires: python3-devel
BuildRequires: python3-setuptools
BuildRequires: python3-productmd >= 1.33
BuildRequires: python3-kobo-rpmlib >= 0.18.0 BuildRequires: python3-kobo-rpmlib >= 0.18.0
BuildRequires: createrepo_c >= 0.20.1 BuildRequires: createrepo_c >= 0.20.1
BuildRequires: python3-lxml
BuildRequires: python3-ddt
BuildRequires: python3-kickstart BuildRequires: python3-kickstart
BuildRequires: python3-rpm BuildRequires: python3-rpm
BuildRequires: python3-dnf BuildRequires: python3-dnf
BuildRequires: python3-multilib BuildRequires: python3-multilib
BuildRequires: python3-six BuildRequires: python3-six
BuildRequires: git-core BuildRequires: git-core
BuildRequires: python3-jsonschema
BuildRequires: python3-libcomps BuildRequires: python3-libcomps
BuildRequires: python3-kobo
BuildRequires: python3-koji BuildRequires: python3-koji
BuildRequires: lorax BuildRequires: lorax
BuildRequires: python3-PyYAML BuildRequires: python3-PyYAML
BuildRequires: python3-libmodulemd >= 2.8.0 BuildRequires: python3-libmodulemd >= 2.8.0
BuildRequires: python3-gobject BuildRequires: python3-gobject
BuildRequires: python3-createrepo_c >= 0.20.1 BuildRequires: python3-createrepo_c >= 0.20.1
BuildRequires: python3-dogpile-cache
BuildRequires: python3-parameterized BuildRequires: python3-parameterized
BuildRequires: python3-gobject-base BuildRequires: python3-flufl-lock
BuildRequires: python3-ddt
BuildRequires: python3-distro BuildRequires: python3-distro
BuildRequires: python3-gobject-base
BuildRequires: python3-pgpy
BuildRequires: python3-pyfakefs
%if %{rhel} == 8 %if %{rhel} == 8
BuildRequires: python3-dataclasses BuildRequires: python3-dataclasses
%endif %endif
BuildRequires: python3-pgpy
#deps for doc building #deps for doc building
BuildRequires: python3-sphinx BuildRequires: python3-sphinx
Requires: python3-kobo-rpmlib >= 0.18.0 Requires: python3-kobo-rpmlib >= 0.18.0
Requires: python3-productmd >= 1.33
Requires: python3-kickstart Requires: python3-kickstart
Requires: python3-requests
%if %{rhel} == 8
Requires: python3-dataclasses
%endif
Requires: createrepo_c >= 0.20.1 Requires: createrepo_c >= 0.20.1
Requires: koji >= 1.10.1-13 Requires: koji >= 1.10.1-13
Requires: python3-koji-cli-plugins Requires: python3-koji-cli-plugins
Requires: isomd5sum Requires: isomd5sum
%if %{rhel} == 8 || %{rhel} == 9
Requires: genisoimage Requires: genisoimage
%else
Recommends: genisoimage
%endif
Requires: git Requires: git
Requires: python3-dnf Requires: python3-dnf
Requires: python3-multilib Requires: python3-multilib
@ -68,11 +64,21 @@ Requires: python3-libmodulemd >= 2.8.0
Requires: python3-gobject Requires: python3-gobject
Requires: python3-createrepo_c >= 0.20.1 Requires: python3-createrepo_c >= 0.20.1
Requires: python3-PyYAML Requires: python3-PyYAML
Requires: python3-productmd >= 1.28 Requires: python3-flufl-lock
Requires: python3-gobject-base %if %{rhel} == 10
Requires: xorriso
%else
Recommends: xorriso
%endif
Requires: python3-productmd >= 1.33
Requires: lorax Requires: lorax
Requires: python3-pgpy
Requires: python3-distro Requires: python3-distro
Requires: python3-gobject-base
Requires: python3-pgpy
Requires: python3-requests
%if %{rhel} == 8
Requires: python3-dataclasses
%endif
# This package is not available on i686, hence we cannot require it # This package is not available on i686, hence we cannot require it
# See https://bugzilla.redhat.com/show_bug.cgi?id=1743421 # See https://bugzilla.redhat.com/show_bug.cgi?id=1743421
@@ -88,6 +94,7 @@ A tool to create anaconda based installation trees/isos of a set of rpms.
%package utils
Summary: Utilities for working with finished composes
Requires: pungi = %{version}-%{release}
Requires: python3-fedora-messaging
%description utils
These utilities work with finished composes produced by Pungi. They can be used
@@ -96,8 +103,8 @@ notification to Fedora Message Bus.
%package -n python3-%{name}
Summary: Python 3 libraries for pungi
Requires: python3-attrs
Requires: fus
Requires: python3-attrs
%description -n python3-%{name}
Python library with code for Pungi. This is not a public library and there are
@@ -107,8 +114,11 @@ no guarantees about API stability.
%prep
%autosetup -p1
%generate_buildrequires
%pyproject_buildrequires
%build
%py3_build
%pyproject_wheel
cd doc
make epub SPHINXBUILD=/usr/bin/sphinx-build-3
make text SPHINXBUILD=/usr/bin/sphinx-build-3
@@ -116,13 +126,11 @@ make man SPHINXBUILD=/usr/bin/sphinx-build-3
gzip _build/man/pungi.1
%install
%py3_install
%pyproject_install
%{__install} -d %{buildroot}/var/cache/pungi/createrepo_c
%{__install} -d %{buildroot}%{_mandir}/man1
%{__install} -m 0644 doc/_build/man/pungi.1.gz %{buildroot}%{_mandir}/man1
rm %{buildroot}%{_bindir}/pungi
%check
%pytest
@@ -140,25 +148,347 @@ rm %{buildroot}%{_bindir}/pungi
%{_bindir}/%{name}-make-ostree
%{_mandir}/man1/pungi.1.gz
%{_datadir}/pungi
/var/cache/pungi
%dir %{_localstatedir}/cache/pungi
%dir %attr(1777, root, root) %{_localstatedir}/cache/pungi/createrepo_c
%{_tmpfilesdir}/pungi-clean-cache.conf
%files -n python3-%{name}
%{python3_sitelib}/%{name}
%{python3_sitelib}/%{name}-%{version}-py%{python3_version}.egg-info
%{python3_sitelib}/%{name}-%{version}.dist-info
%files utils
%{python3_sitelib}/%{name}_utils
%{_bindir}/%{name}-create-unified-isos
%{_bindir}/%{name}-config-dump
%{_bindir}/%{name}-config-validate
%{_bindir}/%{name}-fedmsg-notification
%{_bindir}/%{name}-notification-report-progress
%{_bindir}/%{name}-orchestrate
%{_bindir}/%{name}-patch-iso
%{_bindir}/%{name}-compare-depsolving
%{_bindir}/%{name}-wait-for-signed-ostree-handler
%{_bindir}/%{name}-cache-cleanup
%changelog
* Tue Sep 30 2025 Eduard Abdullin <eabdullin@almalinux.org> - 4.10.1-1.alma.2
- Set iso_hfs_ppc64le_compatible to False by default
* Fri Aug 08 2025 Lubomír Sedlář <lsedlar@redhat.com> - 4.10.1-1
- osbuild: Handle wsl2 images (lsedlar)
- repoclosure: Clean up cache for dnf5 (lsedlar)
- Ignore errors for rmtree after archive extraction (dhodovsk)
- imagebuilder: accept `manifest_type` (supakeen)
- Add a telemetry span over image building threads (lsedlar)
- Add specific exception for skopeo copy (lsedlar)
* Wed Jul 30 2025 Lubomír Sedlář <lsedlar@redhat.com> - 4.10.0-1
- Add more tracing to kojiwrapper (lsedlar)
- phases: implement image-builder (supakeen)
- Add a tracing span around call to skopeo inspect (lsedlar)
- Add retries to skopeo inspect calls (lsedlar)
- otel: Explicitly initialize telemetry provider and tracer (lsedlar)
* Fri Jul 25 2025 Fedora Release Engineering <releng@fedoraproject.org> - 4.9.3-2
- Rebuilt for https://fedoraproject.org/wiki/Fedora_43_Mass_Rebuild
* Thu Jun 12 2025 Lubomír Sedlář <lsedlar@redhat.com> - 4.9.3-1
- Recognize wsl2 images produced by koji (lsedlar)
- Specify data_files with relative paths (lsedlar)
- Crossreference `koji_cache` from the Koji cache page (ahills)
- Add documentation for `koji_cache` configuration (ahills)
- Record exceptions for top level OTel span (lsedlar)
- Update spec to match current python packaging guidelines
* Wed Jun 04 2025 Python Maint <python-maint@redhat.com> - 4.9.2-3
- Rebuilt for Python 3.14
* Mon May 26 2025 Lubomír Sedlář <lsedlar@redhat.com> - 4.9.2-2
- Fix tests on Python 3.14
* Tue May 06 2025 Lubomír Sedlář <lsedlar@redhat.com> - 4.9.2-1
- Drop compatibility with Koji < 1.32 (lsedlar)
- kiwibuild: Add support for use_buildroot_repo option (lsedlar)
- gather: Resolve symlinks before linking packages (lsedlar)
- Make requests instrumentation optional (lsedlar)
- Fix incorrect log line (lsedlar)
* Thu Apr 03 2025 Lubomír Sedlář <lsedlar@redhat.com> - 4.9.1-1
- util: Fix typo in regex for container digests (lsedlar)
- Resolve container tags to digests (lsedlar)
- kojiwrapper: Remove unused code (lsedlar)
- Add basic telemetry support (lsedlar)
- Reorder ostree and ostree_installer phases (hlin)
- Fix test data generation script (lsedlar)
- extra_isos: Mention all extra files in the manifest (lsedlar)
- scm: Add retries to container-image download (lsedlar)
* Fri Feb 14 2025 Lubomír Sedlář <lsedlar@redhat.com> - 4.9.0-1
- buildinstall: Add support for rootfs-type lorax option (lsedlar)
- scm: Stop trying to download src arch (lsedlar)
- extra_isos: Provide arch to extra files getter (lsedlar)
- Move temporary buildinstall download to work/ (lsedlar)
- Download extra files from container registry (lsedlar)
- Remove python 2.7 dependencies from setup.py (lsedlar)
- util: Drop dead code (lsedlar)
- Directly import mock from unittest (lsedlar)
* Thu Jan 16 2025 Adam Williamson <awilliam@redhat.com> - 4.8.0-3
- Backport PR #1812 to fix crash on subprocess unicode decode error
* Mon Jan 06 2025 Adam Williamson <awilliam@redhat.com> - 4.8.0-2
- Backport PR #1810 to use new container types
* Fri Nov 29 2024 Lubomír Sedlář <lsedlar@redhat.com> - 4.8.0-1
- Drop spec file (lsedlar)
- Remove python 2.7 from tox configuration (lsedlar)
- Remove forgotten multilib module for yum (lsedlar)
- Drop usage of six (lsedlar)
- Ensure ostree phase threads are stopped (lsedlar)
- scm: Clone git submodules (lsedlar)
- Drop unittest2 (lsedlar)
- Remove pungi/gather.py and associated code (lsedlar)
- Reduce legacy pungi script to gather phase only (#1792) (awilliam)
- Install dnf4 into test image (lsedlar)
- ostree_container: make filename configurable, include arch (awilliam)
- Correct subvariant handling for ostree_container phase (awilliam)
- Drop compatibility helper for dnf.Package.source_name (lsedlar)
* Tue Nov 19 2024 Adam Williamson <awilliam@redhat.com> - 4.7.0-8
- Backport #1798 to infer types/formats for new FEX backing images
* Tue Nov 19 2024 Adam Williamson <awilliam@redhat.com> - 4.7.0-7
- Backport #1796 to speed up compose some more
* Thu Nov 14 2024 Adam Williamson <awilliam@redhat.com> - 4.7.0-6
- Rebuild with no changes to bump past release used in infra tag
* Wed Oct 16 2024 Adam Williamson <awilliam@redhat.com> - 4.7.0-5
- Backport patches for subvariant and filename for ostree_container
- Backport patch to split ostree phases out and improve compose speed
* Mon Oct 07 2024 Adam Williamson <awilliam@redhat.com> - 4.7.0-4
- Backport patches to fix GCE image format not to be 'docker'
* Mon Sep 1 2025 Aleksandra Kachanova <akachanova@almalinux.org> - 4.7.0-7
- Add riscv64 to the list of supported architectures
* Fri Sep 27 2024 Stepan Oksanichenko <soksanichenko@almalinux.org> - 4.7.0-6
- Add x86_64_v2 to a list of exclusive arches if there is any arch with base `x86_64`
* Mon Sep 16 2024 Eduard Abdullin <eabdullin@almalinux.org> - 4.7.0-5
- Add x86_64_v2 to arch list if x86_64 in list
* Fri Sep 06 2024 Stepan Oksanichenko <soksanichenko@almalinux.org> - 4.7.0-4
- Truncate a volume ID to 32 bytes
- Add new architecture `x86_64_v2`
* Thu Sep 05 2024 Stepan Oksanichenko <soksanichenko@almalinux.org> - 4.7.0-2
- Use xorriso as recommended package and genisoimage as required for RHEL8/9 and vice versa for RHEL10
* Thu Aug 29 2024 Lubomír Sedlář <lsedlar@redhat.com> - 4.7.0-3
- Backport patch for setting kiwibuild image type in metadata
* Wed Aug 28 2024 Lubomír Sedlář <lsedlar@redhat.com> - 4.7.0-2
- Backport patch with kiwibuild options version and repo_releasever
* Thu Aug 22 2024 Lubomír Sedlář <lsedlar@redhat.com> - 4.7.0-1
- kiwibuild: Add support for type, type attr and bundle format (lsedlar)
- createiso: Block reuse if unsigned packages are allowed (lsedlar)
- Allow live_images phase to still be skipped (lsedlar)
- createiso: Recompute .treeinfo checksums for images (lsedlar)
- Drop support for signing rpm-wrapped artifacts (lsedlar)
- Remove live_images.py (LiveImagesPhase) (awilliam)
- Clean up requirements (lsedlar)
- Update pungi.spec for py3 (hlin)
* Fri Jul 19 2024 Fedora Release Engineering <releng@fedoraproject.org> - 4.6.3-2
- Rebuilt for https://fedoraproject.org/wiki/Fedora_41_Mass_Rebuild
* Fri Jul 12 2024 Haibo Lin <hlin@redhat.com> - 4.6.3-1
- Fix formatting of long line (lsedlar)
- unified-isos: Resolve symlinks (lsedlar)
- gather: Skip lookaside packages from local lookaside repo (lsedlar)
- pkgset: Avoid adding modules to unavailable arches (hlin)
- iso: Extract volume id with xorriso if available (lsedlar)
- De-duplicate log messages for ostree and ostree_container phases (awilliam)
- Handle tracebacks as str or bytes (lsedlar)
- ostree/container: add missing --version arg (awilliam)
- Block pkgset reuse on module defaults change (lsedlar)
- Include task ID in DONE message for OSBS phase (awilliam)
- Various phases: consistent format of failure message (awilliam)
- Update tests to exercise kiwi specific metadata (lsedlar)
- Kiwi: translate virtualbox and azure productmd formats (awilliam)
- kiwibuild: Add tests for the basic functionality (lsedlar)
- kiwibuild: Remove repos as dicts (lsedlar)
- Fix additional image metadata (lsedlar)
- Drop kiwibuild_version option (lsedlar)
- Update docs with kiwibuild options (lsedlar)
- kiwibuild: allow setting description scm and path at phase level (awilliam)
- Use latest Fedora for python 3 test environment (lsedlar)
- Install unittest2 only on python 2 (lsedlar)
- Fix 'failable' handling for kiwibuild phase (awilliam)
- image_build: Accept Kiwi extension for Azure VHD images (jeremycline)
- image_build: accept Kiwi vagrant image name format (awilliam)
* Sun Jun 09 2024 Python Maint <python-maint@redhat.com> - 4.6.2-7
- Rebuilt for Python 3.13
* Fri May 31 2024 Lubomír Sedlář <lsedlar@redhat.com> - 4.6.2-6
- Rebuild to bump release over f40-infra build
* Fri May 31 2024 Lubomír Sedlář <lsedlar@redhat.com> - 4.6.2-2
- Add dependency on xorriso, fixes rhbz#2278677
* Tue Apr 30 2024 Lubomír Sedlář <lsedlar@redhat.com> - 4.6.2-1
- Phases/osbuild: support passing 'customizations' for image builds (thozza)
- dnf: Load filelists for actual solver too (lsedlar)
- kiwibuild: Tell Koji which arches are allowed to fail (lsedlar)
- kiwibuild: Update documentation with more details (lsedlar)
- kiwibuild: Add kiwibuild global options (lsedlar)
- kiwibuild: Process images same as image-build (lsedlar)
- kiwibuild: Add subvariant configuration (lsedlar)
- kiwibuild: Work around missing arch in build data (lsedlar)
- Support KiwiBuild (hlin)
- ostree/container: Set version in treefile 'automatic-version-prefix' (tim)
- dnf: Explicitly load filelists (lsedlar)
- Fix buildinstall reuse with pungi_buildinstall plugin (lsedlar)
- Fix filters for DNF query (lsedlar)
- gather-dnf: Support dotarch in filter_packages (lsedlar)
- gather: Support dotarch notation for debuginfo packages (lsedlar)
- Correctly set input and fultree_exclude flags for debuginfo (lsedlar)
* Fri Feb 09 2024 Lubomír Sedlář <lsedlar@redhat.com> - 4.6.1-1
- Make python3-mock dependency optional (lsedlar)
- Make latest black happy (lsedlar)
- Update tox configuration (lsedlar)
- Fix scm tests to not use user configuration (lsedlar)
- Add workaround for old requests in kojiwrapper (lsedlar)
- Use pungi_buildinstall without NFS (lsedlar)
- checks: don't require "repo" in the "ostree" schema (awilliam)
- ostree_container: Use unique temporary directory (lsedlar)
* Fri Jan 26 2024 Maxwell G <maxwell@gtmx.me> - 4.6.0-5
- Remove python3-mock dependency
* Fri Jan 26 2024 Fedora Release Engineering <releng@fedoraproject.org> - 4.6.0-4
- Rebuilt for https://fedoraproject.org/wiki/Fedora_40_Mass_Rebuild
* Sun Jan 21 2024 Fedora Release Engineering <releng@fedoraproject.org> - 4.6.0-3
- Rebuilt for https://fedoraproject.org/wiki/Fedora_40_Mass_Rebuild
* Fri Jan 19 2024 Lubomír Sedlář <lsedlar@redhat.com> - 4.6.0-3
- Stop requiring repo option in ostree phase
* Thu Jan 18 2024 Lubomír Sedlář <lsedlar@redhat.com> - 4.6.0-2
- ostree_container: Use unique temporary directory
* Wed Dec 13 2023 Lubomír Sedlář <lsedlar@redhat.com> - 4.6.0-1
- Add ostree container to image metadata (lsedlar)
- Updates for ostree-container phase (lsedlar)
- Add ostree native container support (tim)
- Improve autodetection of productmd image type for osbuild images (awilliam)
- pkgset: ignore events for modular content tags (lsedlar)
- pkgset: Ignore duplicated module builds (lsedlar)
- Drop buildinstall method (abisoi)
- Add step to send UMB message (lzhuang)
- Fix minor Ruff/flake8 warnings (tim)
- osbuild: manifest type in config (cmdr)
* Mon Nov 21 2023 Stepan Oksanichenko <soksanichenko@almalinux.org> - 4.5.0-3
- Method `get_remote_file_content` is object's method now
* Wed Nov 15 2023 Stepan Oksanichenko <soksanichenko@almalinux.org> - 4.5.0-2
- Return empty list if a repo doesn't contain any module
* Mon Sep 25 2023 Lubomír Sedlář <lsedlar@redhat.com> - 4.5.0-7
- Backport patch for explicit setting of osbuild image type in metadata
* Thu Aug 31 2023 Lubomír Sedlář <lsedlar@redhat.com> - 4.5.0-1
- kojiwrapper: Stop being smart about local access (lsedlar)
- Fix unittest errors (ounsal)
- Add integrity checking for builds (lsedlar)
- Add script for cleaning up the cache (lsedlar)
- Add ability to download images (lsedlar)
- Add support for not having koji volume mounted locally (lsedlar)
- Remove repository cloning multiple times (abisoi)
- Support require_all_comps_packages on DNF backend (lsedlar)
- Fix new warnings from flake8 (lsedlar)
* Tue Jul 25 2023 Stepan Oksanichenko <soksanichenko@cloudlinux.com> - 4.3.7-8
- Option `excluded-packages` for script `pungi-gather-rpms`
* Tue Jul 25 2023 Lubomír Sedlář <lsedlar@redhat.com> - 4.4.1-1
- ostree: Add configuration for custom runroot packages (lsedlar)
- pkgset: Emit better error for missing modulemd file (lsedlar)
- Add support for git-credential-helper (lsedlar)
- Support OIDC Client Credentials authentication to CTS (hlin)
* Fri Jul 21 2023 Fedora Release Engineering <releng@fedoraproject.org> - 4.4.0-4
- Rebuilt for https://fedoraproject.org/wiki/Fedora_39_Mass_Rebuild
* Wed Jul 19 2023 Lubomír Sedlář <lsedlar@redhat.com> - 4.4.0-3
- Backport ostree runroot package additions
* Wed Jul 19 2023 Lubomír Sedlář <lsedlar@redhat.com> - 4.4.0-2
- Backport ostree runroot package additions
* Mon Jun 19 2023 Python Maint <python-maint@redhat.com> - 4.4.0-2
- Rebuilt for Python 3.12
* Wed Jun 07 2023 Lubomír Sedlář <lsedlar@redhat.com> - 4.4.0-1
- gather-dnf: Run latest() later (lsedlar)
- iso: Support joliet long names (lsedlar)
- Drop pungi-orchestrator code (lsedlar)
- isos: Ensure proper file ownership and permissions (lsedlar)
- gather: Always get latest packages (lsedlar)
- Add back compatibility with jsonschema <3.0.0 (lsedlar)
- Remove useless debug message (lsedlar)
- Remove fedmsg from requirements (lsedlar)
- gather: Support dotarch in DNF backend (lsedlar)
- Fix compatibility with createrepo_c 0.21.1 (lsedlar)
- comps: Apply arch filtering to environment/optionlist (lsedlar)
- Add config file for cleaning up cache files (hlin)
* Wed May 17 2023 Lubomír Sedlář <lsedlar@redhat.com> - 4.3.8-3
- Rebuild without fedmsg dependency
* Wed May 03 2023 Lubomír Sedlář <lsedlar@redhat.com> - 4.3.8-1
- Set priority for Fedora messages
* Thu Apr 13 2023 Stepan Oksanichenko <soksanichenko@cloudlinux.com> - 4.3.7-7
- gather-module can find modules through symlinks
* Thu Apr 13 2023 Stepan Oksanichenko <soksanichenko@cloudlinux.com> - 4.3.7-6
- CLI option `--label` can be passed through a Pungi config file
* Fri Mar 31 2023 Stepan Oksanichenko <soksanichenko@cloudlinux.com> - 4.3.7-4
- ALBS-1030: Generate Devel section in packages.json
- Also the tool can combine (remove and add) packages in a variant from different sources, according to the type of the source URL
- Some upstream changes to KojiMock part
- Skip verifying an RPM signature if sigkeys are empty
- ALBS-987: Generate i686 and dev repositories with pungi on building new distr. version automatically
- [Generator of packages.json] Replace using CLI by config.yaml
- [Gather RPMs] os.path is replaced by Pat
* Thu Mar 30 2023 Haibo Lin <hlin@redhat.com> - 4.3.8-1
- createiso: Update possibly changed file on DVD (lsedlar)
- pkgset: Stop reuse if configuration changed (lsedlar)
- Allow disabling inheriting ExcludeArch to noarch packages (lsedlar)
- pkgset: Support extra builds with no tags (lsedlar)
- buildinstall: Avoid pointlessly tweaking the boot images (lsedlar)
- Prevent to reuse if unsigned packages are allowed (hlin)
- Pass parent id/respin id to CTS (lsedlar)
- Exclude existing files in boot.iso (hlin)
- image-build/osbuild: Pull ISOs into the compose (lsedlar)
- Retry 401 error from CTS (lsedlar)
- gather: Better detection of debuginfo in lookaside (lsedlar)
- Log versions of all installed packages (hlin)
- Use authentication for all CTS calls (lsedlar)
- Fix black complaints (lsedlar)
- Add vhd.gz extension to compressed VHD images (lsedlar)
- Add vhd-compressed image type (lsedlar)
- Update to work with latest mock (lsedlar)
- Default bztar format for sdist command (onosek)
* Fri Mar 17 2023 Stepan Oksanichenko <soksanichenko@cloudlinux.com> - 4.3.7-3
- ALBS-987: Generate i686 repositories with pungi on building new distr. version automatically
- KojiMock extracts all modules which are suitable for the variant's arches

View File

@@ -16,7 +16,8 @@ def get_full_version():
proc = subprocess.Popen(
["git", "--git-dir=%s/.git" % location, "describe", "--tags"],
stdout=subprocess.PIPE,
universal_newlines=True,
text=True,
errors="replace",
)
output, _ = proc.communicate()
return re.sub(r"-1.fc\d\d?", "", output.strip().replace("pungi-", ""))
@@ -24,7 +25,7 @@ def get_full_version():
import subprocess
proc = subprocess.Popen(
["rpm", "-q", "pungi"], stdout=subprocess.PIPE, universal_newlines=True
["rpm", "-q", "pungi"], stdout=subprocess.PIPE, text=True, errors="replace"
)
(output, err) = proc.communicate()
if not err:

View File

@@ -93,6 +93,11 @@ def split_name_arch(name_arch):
def is_excluded(package, arches, logger=None):
"""Check if package is excluded from given architectures."""
if any(
getBaseArch(exc_arch) == 'x86_64' for exc_arch in package.exclusivearch
) and 'x86_64_v2' not in package.exclusivearch:
package.exclusivearch.append('x86_64_v2')
if package.excludearch and set(package.excludearch) & set(arches):
if logger:
logger.debug(
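The net effect of the inserted check: any package whose ExclusiveArch resolves to the x86_64 base architecture implicitly allows x86_64_v2 as well. A minimal sketch of the behaviour, assuming a stand-in package object and a simplified getBaseArch (the real ones come from the gather backend and pungi.arch_utils):

from types import SimpleNamespace

def getBaseArch(arch):
    # Simplified stand-in for pungi.arch_utils.getBaseArch.
    return "x86_64" if arch in ("x86_64", "amd64", "ia32e") else arch

pkg = SimpleNamespace(exclusivearch=["x86_64"], excludearch=[])
if any(
    getBaseArch(a) == "x86_64" for a in pkg.exclusivearch
) and "x86_64_v2" not in pkg.exclusivearch:
    pkg.exclusivearch.append("x86_64_v2")
print(pkg.exclusivearch)  # ['x86_64', 'x86_64_v2']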

View File

@@ -34,6 +34,8 @@ arches = {
"x86_64": "athlon",
"amd64": "x86_64",
"ia32e": "x86_64",
# x86-64-v2
"x86_64_v2": "noarch",
# ppc64le
"ppc64le": "noarch",
# ppc
@@ -82,6 +84,8 @@ arches = {
"sh3": "noarch",
# itanium
"ia64": "noarch",
# riscv64
"riscv64": "noarch",
}
# Will contain information parsed from /proc/self/auxv via _parse_auxv().

View File

@@ -39,11 +39,9 @@ from __future__ import print_function
import multiprocessing
import os.path
import platform
import distro
import re
import jsonschema
import six
from kobo.shortcuts import force_list
from pungi.phases import PHASES_NAMES
from pungi.runroot import RUNROOT_TYPES
@@ -228,8 +226,18 @@ def validate(config, offline=False, schema=None):
DefaultValidator = _extend_with_default_and_alias(
jsonschema.Draft4Validator, offline=offline
)
if hasattr(jsonschema.Draft4Validator, "TYPE_CHECKER"):
# jsonschema >= 3.0 has new interface for checking types
validator = DefaultValidator(schema)
else:
validator = DefaultValidator(
schema,
{
"array": (tuple, list),
"regex": str,
"url": str,
},
)
errors = []
warnings = []
@@ -257,6 +265,28 @@ def validate(config, offline=False, schema=None):
if error.validator in ("anyOf", "oneOf"):
for suberror in error.context:
errors.append("    Possible reason: %s" % suberror.message)
# Resolve container tags in extra_files
tag_resolver = util.ContainerTagResolver(offline=offline)
if config.get("extra_files"):
for _, arch_dict in config["extra_files"]:
for value in arch_dict.values():
if isinstance(value, dict):
_resolve_container_tag(value, tag_resolver)
elif isinstance(value, list):
for subinstance in value:
_resolve_container_tag(subinstance, tag_resolver)
if config.get("extra_isos"):
for cfgs in config["extra_isos"].values():
if not isinstance(cfgs, list):
cfgs = [cfgs]
for cfg in cfgs:
if isinstance(cfg.get("extra_files"), dict):
_resolve_container_tag(cfg["extra_files"], tag_resolver)
elif isinstance(cfg.get("extra_files"), list):
for c in cfg["extra_files"]:
_resolve_container_tag(c, tag_resolver)
return (errors + _validate_requires(schema, config, CONFIG_DEPS), warnings)
@@ -378,6 +408,7 @@ def _extend_with_default_and_alias(validator_class, offline=False):
instance[property]["branch"] = resolver(
instance[property]["repo"],
instance[property].get("branch") or "HEAD",
instance[property].get("options"),
)
for error in _hook_errors(properties, instance, schema):
@@ -445,37 +476,20 @@ def _extend_with_default_and_alias(validator_class, offline=False):
context=all_errors,
)
kwargs = {}
if hasattr(validator_class, "TYPE_CHECKER"):
# jsonschema >= 3
def is_array(checker, instance):
return isinstance(instance, (tuple, list))
def is_string_type(checker, instance):
return isinstance(instance, six.string_types)
return isinstance(instance, str)
# RHEL9 has newer version of package jsonschema
kwargs["type_checker"] = validator_class.TYPE_CHECKER.redefine_many(
# which has another way of working with validators
if float(distro.linux_distribution()[1]) < 9:
validator = jsonschema.validators.extend(
validator_class,
{
"properties": properties_validator,
"deprecated": error_on_deprecated,
"type": validate_regex_type,
"required": _validate_required,
"additionalProperties": _validate_additional_properties,
"anyOf": _validate_any_of,
},
)
validator.DEFAULT_TYPES.update({
"array": (list, tuple),
"regex": six.string_types,
"url": six.string_types,
})
else:
type_checker = validator_class.TYPE_CHECKER.redefine_many(
{"array": is_array, "regex": is_string_type, "url": is_string_type}
)
validator = jsonschema.validators.extend(
return jsonschema.validators.extend(
validator_class,
{
"properties": properties_validator,
@@ -485,9 +499,8 @@ def _extend_with_default_and_alias(validator_class, offline=False):
"additionalProperties": _validate_additional_properties,
"anyOf": _validate_any_of,
},
type_checker=type_checker,
**kwargs
)
return validator
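For reference, the jsonschema >= 3.0 interface relied on here: a validator's TYPE_CHECKER is immutable, so redefine_many() returns a new checker, which jsonschema.validators.extend() then accepts. A self-contained sketch of the same pattern (everything except the jsonschema API itself is illustrative):

import jsonschema

def is_array(checker, instance):
    # Accept tuples as well as lists for the "array" type.
    return isinstance(instance, (tuple, list))

type_checker = jsonschema.Draft4Validator.TYPE_CHECKER.redefine_many(
    {"array": is_array}
)
Validator = jsonschema.validators.extend(
    jsonschema.Draft4Validator, type_checker=type_checker
)
# A tuple now validates against {"type": "array"}:
Validator({"type": "array"}).validate(("a", "b"))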
class ConfigDeprecation(jsonschema.exceptions.ValidationError):
@@ -529,12 +542,31 @@ def make_schema():
"file": {"type": "string"},
"dir": {"type": "string"},
"command": {"type": "string"},
"options": {
"type": "object",
"properties": {
"credential_helper": {"type": "string"},
},
"additionalProperties": False,
},
},
"additionalProperties": False,
},
"str_or_scm_dict": {
"anyOf": [{"type": "string"}, {"$ref": "#/definitions/scm_dict"}]
},
"extra_file": {
"type": "object",
"properties": {
"scm": {"type": "string"},
"repo": {"type": "string"},
"branch": {"$ref": "#/definitions/optional_string"},
"file": {"$ref": "#/definitions/strings"},
"dir": {"$ref": "#/definitions/strings"},
"target": {"type": "string"},
},
"additionalProperties": False,
},
"repo_dict": { "repo_dict": {
"type": "object", "type": "object",
"properties": { "properties": {
@@ -554,27 +586,6 @@ def make_schema():
"list_of_strings": {"type": "array", "items": {"type": "string"}},
"strings": _one_or_list({"type": "string"}),
"optional_string": {"anyOf": [{"type": "string"}, {"type": "null"}]},
"live_image_config": {
"type": "object",
"properties": {
"kickstart": {"type": "string"},
"ksurl": {"type": "url"},
"name": {"type": "string"},
"subvariant": {"type": "string"},
"target": {"type": "string"},
"version": {"type": "string"},
"repo": {"$ref": "#/definitions/repos"},
"specfile": {"type": "string"},
"scratch": {"type": "boolean"},
"type": {"type": "string"},
"sign": {"type": "boolean"},
"failable": {"type": "boolean"},
"release": {"$ref": "#/definitions/optional_string"},
},
"required": ["kickstart"],
"additionalProperties": False,
"type": "object",
},
"osbs_config": { "osbs_config": {
"type": "object", "type": "object",
"properties": { "properties": {
@@ -610,6 +621,7 @@ def make_schema():
"release_discinfo_description": {"type": "string"},
"treeinfo_version": {"type": "string"},
"compose_type": {"type": "string", "enum": COMPOSE_TYPES},
"label": {"type": "string"},
"base_product_name": {"type": "string"},
"base_product_short": {"type": "string"},
"base_product_version": {"type": "string"},
@@ -724,7 +736,6 @@ def make_schema():
),
"repoclosure_backend": {
"type": "string",
# Gather and repoclosure both have the same backends: yum + dnf
"default": _get_default_gather_backend(),
"enum": _get_gather_backends(),
},
@@ -791,7 +802,7 @@ def make_schema():
_variant_arch_mapping({"type": "number", "enum": [1, 2, 3, 4]}),
],
},
"iso_hfs_ppc64le_compatible": {"type": "boolean", "default": True},
"iso_hfs_ppc64le_compatible": {"type": "boolean", "default": False},
"multilib": _variant_arch_mapping(
{"$ref": "#/definitions/list_of_strings"}
),
@@ -818,7 +829,7 @@ def make_schema():
"buildinstall_allow_reuse": {"type": "boolean", "default": False},
"buildinstall_method": {
"type": "string",
"enum": ["lorax", "buildinstall"],
"enum": ["lorax"],
},
# In phase `buildinstall` we should add to compose only the
# images that will be used only as netinstall
@@ -845,8 +856,11 @@ def make_schema():
"pdc_insecure": {"deprecated": "Koji is queried instead"},
"cts_url": {"type": "string"},
"cts_keytab": {"type": "string"},
"cts_oidc_token_url": {"type": "url"},
"cts_oidc_client_id": {"type": "string"},
"koji_profile": {"type": "string"},
"koji_event": {"type": "number"},
"koji_cache": {"type": "string"},
"pkgset_koji_tag": {"$ref": "#/definitions/strings"},
"pkgset_koji_builds": {"$ref": "#/definitions/strings"},
"pkgset_koji_scratch_tasks": {"$ref": "#/definitions/strings"},
@@ -864,6 +878,10 @@ def make_schema():
"type": "boolean",
"default": True,
},
"pkgset_inherit_exclusive_arch_to_noarch": {
"type": "boolean",
"default": True,
},
"pkgset_scratch_modules": {
"type": "object",
"patternProperties": {
@@ -876,7 +894,10 @@ def make_schema():
"paths_module": {"type": "string"},
"skip_phases": {
"type": "array",
"items": {"type": "string", "enum": PHASES_NAMES + ["productimg"]},
"items": {
"type": "string",
"enum": PHASES_NAMES + ["productimg", "live_images"],
},
"default": [],
},
"image_name_format": {
@@ -910,11 +931,6 @@ def make_schema():
},
"restricted_volid": {"type": "boolean", "default": False},
"volume_id_substitutions": {"type": "object", "default": {}},
"live_images_no_rename": {"type": "boolean", "default": False},
"live_images_ksurl": {"type": "url"},
"live_images_target": {"type": "string"},
"live_images_release": {"$ref": "#/definitions/optional_string"},
"live_images_version": {"type": "string"},
"image_build_ksurl": {"type": "url"}, "image_build_ksurl": {"type": "url"},
"image_build_target": {"type": "string"}, "image_build_target": {"type": "string"},
"image_build_release": {"$ref": "#/definitions/optional_string"}, "image_build_release": {"$ref": "#/definitions/optional_string"},
@@ -947,8 +963,6 @@ def make_schema():
"product_id": {"$ref": "#/definitions/str_or_scm_dict"},
"product_id_allow_missing": {"type": "boolean", "default": False},
"product_id_allow_name_prefix": {"type": "boolean", "default": True},
# Deprecated in favour of regular local/phase/global setting.
"live_target": {"type": "string"},
"tree_arches": {"$ref": "#/definitions/list_of_strings", "default": []}, "tree_arches": {"$ref": "#/definitions/list_of_strings", "default": []},
"tree_variants": {"$ref": "#/definitions/list_of_strings", "default": []}, "tree_variants": {"$ref": "#/definitions/list_of_strings", "default": []},
"translate_paths": {"$ref": "#/definitions/string_pairs", "default": []}, "translate_paths": {"$ref": "#/definitions/string_pairs", "default": []},
@@ -968,20 +982,7 @@ def make_schema():
"properties": {
"include_variants": {"$ref": "#/definitions/strings"},
"extra_files": _one_or_list(
{
{"$ref": "#/definitions/extra_file"}
"type": "object",
"properties": {
"scm": {"type": "string"},
"repo": {"type": "string"},
"branch": {
"$ref": "#/definitions/optional_string"
},
"file": {"$ref": "#/definitions/strings"},
"dir": {"$ref": "#/definitions/strings"},
"target": {"type": "string"},
},
"additionalProperties": False,
}
),
"filename": {"type": "string"},
"volid": {"$ref": "#/definitions/strings"},
@@ -1066,11 +1067,13 @@ def make_schema():
"config_branch": {"type": "string"},
"tag_ref": {"type": "boolean"},
"ostree_ref": {"type": "string"},
"runroot_packages": {
"$ref": "#/definitions/list_of_strings",
},
},
"required": [
"treefile",
"config_url",
"repo",
"ostree_repo", "ostree_repo",
], ],
"additionalProperties": False, "additionalProperties": False,
@@ -1108,6 +1111,41 @@ def make_schema():
),
]
},
"ostree_container": {
"type": "object",
"patternProperties": {
# Warning: this pattern is a variant uid regex, but the
# format does not let us validate it as there is no regular
# expression to describe all regular expressions.
".+": _one_or_list(
{
"type": "object",
"properties": {
"treefile": {"type": "string"},
"config_url": {"type": "string"},
"repo": {"$ref": "#/definitions/repos"},
"keep_original_sources": {"type": "boolean"},
"config_branch": {"type": "string"},
"arches": {"$ref": "#/definitions/list_of_strings"},
"failable": {"$ref": "#/definitions/list_of_strings"},
"version": {"type": "string"},
"tag_ref": {"type": "boolean"},
"runroot_packages": {
"$ref": "#/definitions/list_of_strings",
},
"subvariant": {"type": "string"},
"name": {"type": "string"},
},
"required": [
"treefile",
"config_url",
],
"additionalProperties": False,
}
),
},
"additionalProperties": False,
},
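An illustrative phase configuration matching the ostree_container schema above (variant pattern, URL and file names are hypothetical, not from the source):

ostree_container = {
    "^Sagano$": {
        "treefile": "fedora-tier-0.yaml",
        "config_url": "https://example.com/ostree-config.git",
        "config_branch": "main",
        "repo": ["Everything"],
        "subvariant": "Sagano",
    }
}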
"ostree_installer": _variant_arch_mapping( "ostree_installer": _variant_arch_mapping(
{ {
"type": "object", "type": "object",
@@ -1132,11 +1170,9 @@ def make_schema():
}
),
"ostree_use_koji_plugin": {"type": "boolean", "default": False},
"ostree_container_use_koji_plugin": {"type": "boolean", "default": False},
"ostree_installer_use_koji_plugin": {"type": "boolean", "default": False}, "ostree_installer_use_koji_plugin": {"type": "boolean", "default": False},
"ostree_installer_overwrite": {"type": "boolean", "default": False}, "ostree_installer_overwrite": {"type": "boolean", "default": False},
"live_images": _variant_arch_mapping(
_one_or_list({"$ref": "#/definitions/live_image_config"})
),
"image_build_allow_reuse": {"type": "boolean", "default": False}, "image_build_allow_reuse": {"type": "boolean", "default": False},
"image_build": { "image_build": {
"type": "object", "type": "object",
@@ -1187,6 +1223,57 @@ def make_schema():
},
"additionalProperties": False,
},
"kiwibuild": {
"type": "object",
"patternProperties": {
# Warning: this pattern is a variant uid regex, but the
# format does not let us validate it as there is no regular
# expression to describe all regular expressions.
".+": {
"type": "array",
"items": {
"type": "object",
"properties": {
"target": {"type": "string"},
"description_scm": {"type": "url"},
"description_path": {"type": "string"},
"kiwi_profile": {"type": "string"},
"release": {"type": "string"},
"arches": {"$ref": "#/definitions/list_of_strings"},
"repos": {"$ref": "#/definitions/list_of_strings"},
"failable": {"$ref": "#/definitions/list_of_strings"},
"subvariant": {"type": "string"},
"type": {"type": "string"},
"type_attr": {"$ref": "#/definitions/list_of_strings"},
"bundle_name_format": {"type": "string"},
"version": {"type": "string"},
"repo_releasever": {"type": "string"},
"manifest_type": {"type": "string"},
"use_buildroot_repo": {"type": "boolean"},
},
"required": [
# description_scm and description_path
# are really required, but as they can
# be set at the phase level we cannot
# enforce that here
"kiwi_profile",
],
"additionalProperties": False,
},
}
},
"additionalProperties": False,
},
"kiwibuild_description_scm": {"type": "url"},
"kiwibuild_description_path": {"type": "string"},
"kiwibuild_target": {"type": "string"},
"kiwibuild_release": {"$ref": "#/definitions/optional_string"},
"kiwibuild_type": {"type": "string"},
"kiwibuild_type_attr": {"$ref": "#/definitions/list_of_strings"},
"kiwibuild_bundle_name_format": {"type": "string"},
"kiwibuild_version": {"type": "string"},
"kiwibuild_repo_releasever": {"type": "string"},
"kiwibuild_use_buildroot_repo": {"type": "boolean", "default": False},
"osbuild_target": {"type": "string"}, "osbuild_target": {"type": "string"},
"osbuild_release": {"$ref": "#/definitions/optional_string"}, "osbuild_release": {"$ref": "#/definitions/optional_string"},
"osbuild_version": {"type": "string"}, "osbuild_version": {"type": "string"},
@@ -1247,6 +1334,11 @@ def make_schema():
"ostree_url": {"type": "string"},
"ostree_ref": {"type": "string"},
"ostree_parent": {"type": "string"},
"manifest_type": {"type": "string"},
"customizations": {
"type": "object",
"additionalProperties": True,
},
"upload_options": { "upload_options": {
# this should be really 'oneOf', but the minimal # this should be really 'oneOf', but the minimal
# required properties in AWSEC2 and GCP options # required properties in AWSEC2 and GCP options
@@ -1336,6 +1428,58 @@ def make_schema():
},
},
},
"imagebuilder": {
"type": "object",
"patternProperties": {
# Warning: this pattern is a variant uid regex, but the
# format does not let us validate it as there is no regular
# expression to describe all regular expressions.
".+": {
"type": "array",
"items": {
"type": "object",
"properties": {
"name": {"type": "string"},
"target": {"type": "string"},
"arches": {"$ref": "#/definitions/list_of_strings"},
"types": {"$ref": "#/definitions/list_of_strings"},
"version": {"type": "string"},
"repos": {"$ref": "#/definitions/list_of_strings"},
"release": {"type": "string"},
"distro": {"type": "string"},
"scratch": {"type": "boolean"},
"ostree": {
"type": "object",
"properties": {
"parent": {"type": "string"},
"ref": {"type": "string"},
"url": {"type": "string"},
},
},
"failable": {"$ref": "#/definitions/list_of_strings"},
"subvariant": {"type": "string"},
"blueprint": {
"type": "object",
"additionalProperties": True,
},
"seed": {"type": "integer"},
"manifest_type": {"type": "string"},
},
"required": [
"name",
"types",
],
"additionalProperties": False,
},
}
},
"additionalProperties": False,
},
"imagebuilder_target": {"type": "string"},
"imagebuilder_release": {"$ref": "#/definitions/optional_string"},
"imagebuilder_version": {"type": "string"},
"imagebuilder_seed": {"type": "integer"},
"imagebuilder_scratch": {"type": "boolean"},
"lorax_options": _variant_arch_mapping( "lorax_options": _variant_arch_mapping(
{ {
"type": "object", "type": "object",
@@ -1355,6 +1499,7 @@ def make_schema():
"skip_branding": {"type": "boolean"},
"squashfs_only": {"type": "boolean"},
"configuration_file": {"$ref": "#/definitions/str_or_scm_dict"},
"rootfs_type": {"type": "string"},
},
"additionalProperties": False,
}
@@ -1363,9 +1508,6 @@ def make_schema():
{"$ref": "#/definitions/strings"}
),
"lorax_use_koji_plugin": {"type": "boolean", "default": False},
"signing_key_id": {"type": "string"},
"signing_key_password_file": {"type": "string"},
"signing_command": {"type": "string"},
"productimg": { "productimg": {
"deprecated": "remove it. Productimg phase has been removed" "deprecated": "remove it. Productimg phase has been removed"
}, },
@@ -1417,21 +1559,7 @@ def make_schema():
"additionalProperties": False,
},
"extra_files": _variant_arch_mapping(
{
{"type": "array", "items": {"$ref": "#/definitions/extra_file"}}
"type": "array",
"items": {
"type": "object",
"properties": {
"scm": {"type": "string"},
"repo": {"type": "string"},
"branch": {"$ref": "#/definitions/optional_string"},
"file": {"$ref": "#/definitions/strings"},
"dir": {"type": "string"},
"target": {"type": "string"},
},
"additionalProperties": False,
},
}
),
"gather_lookaside_repos": _variant_arch_mapping(
{"$ref": "#/definitions/strings"}
@@ -1500,7 +1628,6 @@ def get_num_cpus():
CONFIG_DEPS = {
"buildinstall_method": {
"conflicts": (
(lambda val: val == "buildinstall", ["lorax_options"]),
(lambda val: not val, ["lorax_options", "buildinstall_kickstart"]),
),
},
@@ -1553,10 +1680,13 @@ def update_schema(schema, update_dict):
def _get_gather_backends():
if six.PY2:
return ["yum", "dnf"]
return ["dnf"]
def _get_default_gather_backend():
return "yum" if six.PY2 else "dnf"
return "dnf"
def _resolve_container_tag(instance, tag_resolver):
if instance.get("scm") == "container-image":
instance["repo"] = tag_resolver(instance["repo"])

View File

@@ -17,6 +17,7 @@
__all__ = ("Compose",)
import contextlib
import errno
import logging
import os
@@ -38,6 +39,7 @@ from dogpile.cache import make_region
from pungi.graph import SimpleAcyclicOrientedGraph
from pungi.wrappers.variants import VariantsXmlParser
from pungi.paths import Paths
from pungi.wrappers.kojiwrapper import KojiDownloadProxy
from pungi.wrappers.scm import get_file_from_scm
from pungi.util import (
makedirs,
@@ -48,6 +50,7 @@ from pungi.util import (
translate_path_raw,
)
from pungi.metadata import compose_to_composeinfo
from pungi.otel import tracing
try:
# This is available since productmd >= 1.18
@@ -57,20 +60,102 @@ except ImportError:
SUPPORTED_MILESTONES = ["RC", "Update", "SecurityFix"]
def is_status_fatal(status_code):
"""Check if status code returned from CTS reports an error that is unlikely
to be fixed by retrying. Generally client errors (4XX) are fatal, with the
exception of 401 Unauthorized which could be caused by transient network
issue between compose host and KDC.
"""
if status_code == 401:
return False
return status_code >= 400 and status_code < 500
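Illustrative outcomes of that retry policy:

assert is_status_fatal(404) is True   # client error: retrying will not help
assert is_status_fatal(401) is False  # may be a transient Kerberos/KDC hiccup
assert is_status_fatal(503) is False  # server error: worth retrying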
@retry(wait_on=RequestException)
def retry_request(method, url, data=None, auth=None):
def retry_request(method, url, data=None, json_data=None, auth=None):
"""
:param str method: Request method.
:param str url: Target URL.
:param dict data: form-urlencoded data to send in the body of the request.
:param dict json_data: json data to send in the body of the request.
"""
request_method = getattr(requests, method)
rv = request_method(url, json=data, auth=auth)
rv = request_method(url, data=data, json=json_data, auth=auth)
if rv.status_code >= 400 and rv.status_code < 500:
if is_status_fatal(rv.status_code):
try:
error = rv.json()["message"]
error = rv.json()
except ValueError:
error = rv.text
raise RuntimeError("CTS responded with %d: %s" % (rv.status_code, error))
raise RuntimeError("%s responded with %d: %s" % (url, rv.status_code, error))
rv.raise_for_status()
return rv
class BearerAuth(requests.auth.AuthBase):
def __init__(self, token):
self.token = token
def __call__(self, r):
r.headers["authorization"] = "Bearer " + self.token
return r
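Usage mirrors any other requests auth object; for example (URL and token are placeholders):

import requests
resp = requests.get(
    "https://cts.example.com/api/1/composes/", auth=BearerAuth("<token>")
)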
@contextlib.contextmanager
def cts_auth(pungi_conf):
"""
:param dict pungi_conf: dict obj of pungi.json config.
"""
auth = None
token = None
cts_keytab = pungi_conf.get("cts_keytab")
cts_oidc_token_url = os.environ.get("CTS_OIDC_TOKEN_URL", "") or pungi_conf.get(
"cts_oidc_token_url"
)
try:
if cts_keytab:
# requests-kerberos cannot accept custom keytab, we need to use
# environment variable for this. But we need to change environment
# only temporarily just for this single requests.post.
# So at first backup the current environment and revert to it
# after the requests call.
from requests_kerberos import HTTPKerberosAuth
auth = HTTPKerberosAuth()
environ_copy = dict(os.environ)
if "$HOSTNAME" in cts_keytab:
cts_keytab = cts_keytab.replace("$HOSTNAME", socket.gethostname())
os.environ["KRB5_CLIENT_KTNAME"] = cts_keytab
os.environ["KRB5CCNAME"] = "DIR:%s" % tempfile.mkdtemp()
elif cts_oidc_token_url:
cts_oidc_client_id = os.environ.get(
"CTS_OIDC_CLIENT_ID", ""
) or pungi_conf.get("cts_oidc_client_id", "")
with tracing.span("obtain-oidc-token"):
token = retry_request(
"post",
cts_oidc_token_url,
data={
"grant_type": "client_credentials",
"client_id": cts_oidc_client_id,
"client_secret": os.environ.get("CTS_OIDC_CLIENT_SECRET", ""),
},
).json()["access_token"]
auth = BearerAuth(token)
del token
yield auth
except Exception as e:
# Avoid leaking client secret in traceback
e.show_locals = False
raise e
finally:
if cts_keytab:
shutil.rmtree(os.environ["KRB5CCNAME"].split(":", 1)[1])
os.environ.clear()
os.environ.update(environ_copy)
def get_compose_info(
conf,
compose_type="production",
@@ -100,38 +185,20 @@ def get_compose_info(
ci.compose.type = compose_type
ci.compose.date = compose_date or time.strftime("%Y%m%d", time.localtime())
ci.compose.respin = compose_respin or 0
cts_url = conf.get("cts_url", None)
if cts_url:
# Requests-kerberos cannot accept custom keytab, we need to use
# environment variable for this. But we need to change environment
# only temporarily just for this single requests.post.
# So at first backup the current environment and revert to it
# after the requests.post call.
cts_keytab = conf.get("cts_keytab", None)
authentication = get_authentication(conf)
if cts_keytab:
environ_copy = dict(os.environ)
if "$HOSTNAME" in cts_keytab:
cts_keytab = cts_keytab.replace("$HOSTNAME", socket.gethostname())
os.environ["KRB5_CLIENT_KTNAME"] = cts_keytab
os.environ["KRB5CCNAME"] = "DIR:%s" % tempfile.mkdtemp()
try:
# Create compose in CTS and get the reserved compose ID.
ci.compose.id = ci.create_compose_id()
cts_url = conf.get("cts_url")
if cts_url:
# Create compose in CTS and get the reserved compose ID.
url = os.path.join(cts_url, "api/1/composes/")
data = {
"compose_info": json.loads(ci.dumps()),
"parent_compose_ids": parent_compose_ids,
"respin_of": respin_of,
}
rv = retry_request("post", url, data=data, auth=authentication)
with tracing.span("create-compose-in-cts"):
finally:
with cts_auth(conf) as authentication:
if cts_keytab:
rv = retry_request("post", url, json_data=data, auth=authentication)
shutil.rmtree(os.environ["KRB5CCNAME"].split(":", 1)[1])
os.environ.clear()
os.environ.update(environ_copy)
# Update local ComposeInfo with received ComposeInfo.
cts_ci = ComposeInfo()
@@ -139,22 +206,9 @@
ci.compose.respin = cts_ci.compose.respin
ci.compose.id = cts_ci.compose.id
else:
ci.compose.id = ci.create_compose_id()
return ci
def get_authentication(conf):
authentication = None
cts_keytab = conf.get("cts_keytab", None)
if cts_keytab:
from requests_kerberos import HTTPKerberosAuth
authentication = HTTPKerberosAuth()
return authentication
def write_compose_info(compose_dir, ci):
"""
Write ComposeInfo `ci` to `compose_dir` subdirectories.
@@ -168,7 +222,6 @@ def write_compose_info(compose_dir, ci):
def update_compose_url(compose_id, compose_dir, conf):
authentication = get_authentication(conf)
cts_url = conf.get("cts_url", None)
if cts_url:
url = os.path.join(cts_url, "api/1/composes", compose_id)
@@ -181,7 +234,9 @@ def update_compose_url(compose_id, compose_dir, conf):
"action": "set_url",
"compose_url": compose_url,
}
return retry_request("patch", url, data=data, auth=authentication)
with tracing.span("update-compose-url"):
with cts_auth(conf) as authentication:
return retry_request("patch", url, json_data=data, auth=authentication)
def get_compose_dir(
@@ -192,11 +247,19 @@
compose_respin=None,
compose_label=None,
already_exists_callbacks=None,
parent_compose_ids=None,
respin_of=None,
):
already_exists_callbacks = already_exists_callbacks or []
ci = get_compose_info(
conf, compose_type, compose_date, compose_respin, compose_label
conf,
compose_type,
compose_date,
compose_respin,
compose_label,
parent_compose_ids,
respin_of,
)
cts_url = conf.get("cts_url", None)
@@ -314,6 +377,7 @@ class Compose(kobo.log.LoggingBase):
self.ci_base.load(
os.path.join(self.paths.work.topdir(arch="global"), "composeinfo-base.json")
)
tracing.set_attribute("compose_id", self.compose_id)
self.supported = supported
if (
@@ -351,6 +415,8 @@
else:
self.cache_region = make_region().configure("dogpile.cache.null")
self.koji_downloader = KojiDownloadProxy.from_config(self.conf, self._logger)
get_compose_info = staticmethod(get_compose_info)
write_compose_info = staticmethod(write_compose_info)
get_compose_dir = staticmethod(get_compose_dir)
@@ -405,13 +471,10 @@
@property
def should_create_yum_database(self):
"""Explicit configuration trumps all. Otherwise check gather backend
"""Explicit configuration trumps all. Yum is no longer supported, so
and only create it for Yum.
default to False.
"""
config = self.conf.get("createrepo_database")
return self.conf.get("createrepo_database", False)
if config is not None:
return config
return self.conf["gather_backend"] == "yum"
def read_variants(self):
# TODO: move to phases/init ?
@@ -499,6 +562,7 @@
old_status = self.get_status()
if stat_msg == old_status:
return
tracing.set_attribute("compose_status", stat_msg)
if old_status == "FINISHED":
msg = "Could not modify a FINISHED compose: %s" % self.topdir
self.log_error(msg)
@@ -646,7 +710,7 @@
separators=(",", ": "),
)
def traceback(self, detail=None):
def traceback(self, detail=None, show_locals=True):
"""Store an extended traceback. This method should only be called when
handling an exception.
@@ -657,8 +721,10 @@
basename += "-" + detail
tb_path = self.paths.log.log_file("global", basename)
self.log_error("Extended traceback in: %s", tb_path)
with open(tb_path, "wb") as f:
tback = kobo.tback.Traceback(show_locals=show_locals).get_traceback()
f.write(kobo.tback.Traceback().get_traceback())
# Kobo 0.36.0 returns traceback as str, older versions return bytes
with open(tb_path, "wb" if isinstance(tback, bytes) else "w") as f:
f.write(tback)
def load_old_compose_config(self):
"""

View File

@@ -1,79 +0,0 @@
# -*- coding: utf-8 -*-
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; version 2 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Library General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, see <https://gnu.org/licenses/>.
import os
import sys
import time
from ConfigParser import SafeConfigParser
from .arch_utils import getBaseArch
# In development, `here` will point to the bin/ directory with scripts.
here = sys.path[0]
MULTILIBCONF = (
os.path.join(os.path.dirname(__file__), "..", "share", "multilib")
if here != "/usr/bin"
else "/usr/share/pungi/multilib"
)
class Config(SafeConfigParser):
def __init__(self, pungirc=None):
SafeConfigParser.__init__(self)
self.add_section("pungi")
self.add_section("lorax")
self.set("pungi", "osdir", "os")
self.set("pungi", "sourcedir", "source")
self.set("pungi", "debugdir", "debug")
self.set("pungi", "isodir", "iso")
self.set("pungi", "multilibconf", MULTILIBCONF)
self.set(
"pungi", "relnotefilere", "LICENSE README-BURNING-ISOS-en_US.txt ^RPM-GPG"
)
self.set("pungi", "relnotedirre", "")
self.set(
"pungi", "relnotepkgs", "fedora-repos fedora-release fedora-release-notes"
)
self.set("pungi", "product_path", "Packages")
self.set("pungi", "cachedir", "/var/cache/pungi")
self.set("pungi", "compress_type", "xz")
self.set("pungi", "arch", getBaseArch())
self.set("pungi", "family", "Fedora")
self.set("pungi", "iso_basename", "Fedora")
self.set("pungi", "version", time.strftime("%Y%m%d", time.localtime()))
self.set("pungi", "variant", "")
self.set("pungi", "destdir", os.getcwd())
self.set("pungi", "workdirbase", "/work")
self.set("pungi", "bugurl", "https://bugzilla.redhat.com")
self.set("pungi", "cdsize", "695.0")
self.set("pungi", "debuginfo", "True")
self.set("pungi", "alldeps", "True")
self.set("pungi", "isfinal", "False")
self.set("pungi", "nohash", "False")
self.set("pungi", "full_archlist", "False")
self.set("pungi", "multilib", "")
self.set("pungi", "lookaside_repos", "")
self.set("pungi", "resolve_deps", "True")
self.set("pungi", "no_dvd", "False")
self.set("pungi", "nomacboot", "False")
self.set("pungi", "rootfs_size", "False")
# if missing, self.read() is a noop, else change 'defaults'
if pungirc:
self.read(os.path.expanduser(pungirc))

View File

@@ -3,13 +3,15 @@
 from __future__ import print_function

 import os
-import six
+import shlex

 from collections import namedtuple
-from six.moves import shlex_quote
+from kobo.shortcuts import run

 from .wrappers import iso
 from .wrappers.jigdo import JigdoWrapper
+from .phases.buildinstall import BOOT_CONFIGS, BOOT_IMAGES

 CreateIsoOpts = namedtuple(
     "CreateIsoOpts",
@@ -38,13 +40,13 @@ def quote(str):
     expanded.
     """
     if str.startswith("$TEMPLATE"):
-        return "$TEMPLATE%s" % shlex_quote(str.replace("$TEMPLATE", "", 1))
-    return shlex_quote(str)
+        return "$TEMPLATE%s" % shlex.quote(str.replace("$TEMPLATE", "", 1))
+    return shlex.quote(str)


 def emit(f, cmd):
     """Print line of shell code into the stream."""
-    if isinstance(cmd, six.string_types):
+    if isinstance(cmd, str):
         print(cmd, file=f)
     else:
         print(" ".join([quote(x) for x in cmd]), file=f)
@@ -64,10 +66,6 @@ def make_image(f, opts):
             os.path.join("$TEMPLATE", "config_files/ppc"),
             hfs_compat=opts.hfs_compat,
         )
-    elif opts.buildinstall_method == "buildinstall":
-        mkisofs_kwargs["boot_args"] = iso.get_boot_options(
-            opts.arch, "/usr/lib/anaconda-runtime/boot"
-        )

     # ppc(64) doesn't seem to support utf-8
     if opts.arch in ("ppc", "ppc64", "ppc64le"):
@@ -118,25 +116,65 @@ def make_jigdo(f, opts):
     emit(f, cmd)


+def _get_perms(fs_path):
+    """Compute proper permissions for a file.
+
+    This mimics what the -rational-rock option of genisoimage does. All read
+    bits are set, so that files and directories are globally readable. If any
+    execute bit is set for a file, set them all. No writes are allowed and
+    special bits are erased too.
+    """
+    statinfo = os.stat(fs_path)
+    perms = 0o444
+    if statinfo.st_mode & 0o111:
+        perms |= 0o111
+    return perms
+
+
 def write_xorriso_commands(opts):
+    # Create manifest for the boot.iso listing all contents
+    boot_iso_manifest = "%s.manifest" % os.path.join(
+        opts.script_dir, os.path.basename(opts.boot_iso)
+    )
+    run(
+        iso.get_manifest_cmd(
+            opts.boot_iso, opts.use_xorrisofs, output_file=boot_iso_manifest
+        )
+    )
+    # Find which files may have been updated by pungi. This only includes a few
+    # files from tweaking buildinstall and .discinfo metadata. There's no good
+    # way to detect whether the boot config files actually changed, so we may
+    # be updating files in the ISO with the same data.
+    UPDATEABLE_FILES = set(BOOT_IMAGES + BOOT_CONFIGS + [".discinfo"])
+    updated_files = set()
+    excluded_files = set()
+    with open(boot_iso_manifest) as f:
+        for line in f:
+            path = line.lstrip("/").rstrip("\n")
+            if path in UPDATEABLE_FILES:
+                updated_files.add(path)
+            else:
+                excluded_files.add(path)
+
     script = os.path.join(opts.script_dir, "xorriso-%s.txt" % id(opts))
     with open(script, "w") as f:
-        emit(f, "-indev %s" % opts.boot_iso)
-        emit(f, "-outdev %s" % os.path.join(opts.output_dir, opts.iso_name))
-        emit(f, "-boot_image any replay")
+        for cmd in iso.xorriso_commands(
+            opts.arch, opts.boot_iso, os.path.join(opts.output_dir, opts.iso_name)
+        ):
+            emit(f, " ".join(cmd))
         emit(f, "-volid %s" % opts.volid)
-        # isoinfo -J uses the Joliet tree, and it's used by virt-install
-        emit(f, "-joliet on")

         with open(opts.graft_points) as gp:
             for line in gp:
                 iso_path, fs_path = line.strip().split("=", 1)
-                emit(f, "-map %s %s" % (fs_path, iso_path))
+                if iso_path in excluded_files:
+                    continue
+                cmd = "-update" if iso_path in updated_files else "-map"
+                emit(f, "%s %s %s" % (cmd, fs_path, iso_path))
+                emit(f, "-chmod 0%o %s" % (_get_perms(fs_path), iso_path))

-        if opts.arch == "ppc64le":
-            # This is needed for the image to be bootable.
-            emit(f, "-as mkisofs -U --")
+        emit(f, "-chown_r 0 /")
+        emit(f, "-chgrp_r 0 /")
         emit(f, "-end")

     return script
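Note: the -chmod lines emitted above rely on _get_perms to normalize permissions the way genisoimage's -rational-rock option does. A quick standalone illustration using only the function defined in this diff:

    import os
    import tempfile

    tmp = tempfile.NamedTemporaryFile(delete=False)
    os.chmod(tmp.name, 0o750)
    print(oct(_get_perms(tmp.name)))  # 0o555: all read bits set, execute kept
    os.chmod(tmp.name, 0o640)
    print(oct(_get_perms(tmp.name)))  # 0o444: plain files become read-only
    os.unlink(tmp.name)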

File diff suppressed because it is too large.

View File

@@ -15,25 +15,38 @@
 from enum import Enum
-from itertools import count
+from functools import cmp_to_key
+from itertools import count, groupby

+import errno
 import logging
 import os
 import re

 from kobo.rpmlib import parse_nvra
+import rpm

 import pungi.common
 import pungi.dnf_wrapper
 import pungi.multilib_dnf
 import pungi.util
+from pungi import arch_utils
 from pungi.linker import Linker
 from pungi.profiler import Profiler
 from pungi.util import DEBUG_PATTERNS


-def get_source_name(pkg):
-    # Workaround for rhbz#1418298
-    return pkg.sourcerpm.rsplit("-", 2)[0]
+def filter_dotarch(queue, pattern, **kwargs):
+    """Filter queue for packages matching the pattern. If pattern matches the
+    dotarch format of <name>.<arch>, it is processed as such. Otherwise it is
+    treated as just a name.
+    """
+    kwargs["name__glob"] = pattern
+    if "." in pattern:
+        name, arch = pattern.split(".", 1)
+        if arch in arch_utils.arches or arch == "noarch":
+            kwargs["name__glob"] = name
+            kwargs["arch"] = arch
+    return queue.filter(**kwargs).apply()


 class GatherOptions(pungi.common.OptionsBase):
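Note: filter_dotarch splits only on the first dot and falls back to a plain name glob when the suffix is not a known architecture. A standalone sketch of just that parsing step (KNOWN_ARCHES is an illustrative stand-in for arch_utils.arches):

    KNOWN_ARCHES = ["x86_64", "i686", "aarch64", "ppc64le", "s390x", "noarch"]

    def parse_dotarch(pattern):
        # Returns (name_glob, arch); arch is None for a plain name pattern.
        if "." in pattern:
            name, arch = pattern.split(".", 1)
            if arch in KNOWN_ARCHES:
                return name, arch
        return pattern, None

    print(parse_dotarch("glibc.i686"))  # ('glibc', 'i686')
    print(parse_dotarch("python3.11"))  # ('python3.11', None): '11' is no arch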
@@ -245,13 +258,37 @@ class Gather(GatherBase):
         # from lookaside. This can be achieved by removing any package that is
         # also in lookaside from the list.
         lookaside_pkgs = set()
+        if self.opts.lookaside_repos:
+            # We will call `latest()` to get the highest version packages only.
+            # However, that is per name and architecture. If a package switches
+            # from arched to noarch or the other way, it is possible that the
+            # package_list contains different versions in main repos and in
+            # lookaside repos.
+            # We need to manually filter the latest version.
+            def vercmp(x, y):
+                return rpm.labelCompare(x[1], y[1])
+
+            # Annotate the packages with their version.
+            versioned_packages = [
+                (pkg, (str(pkg.epoch) or "0", pkg.version, pkg.release))
+                for pkg in package_list
+            ]
+            # Sort the packages newest first.
+            sorted_packages = sorted(
+                versioned_packages, key=cmp_to_key(vercmp), reverse=True
+            )
+            # Group packages by version, take the first group and discard the
+            # version info from the tuple.
+            package_list = list(
+                x[0] for x in next(groupby(sorted_packages, key=lambda x: x[1]))[1]
+            )
+
+        # Now we can decide what is used from lookaside.
         for pkg in package_list:
             if pkg.repoid in self.opts.lookaside_repos:
                 lookaside_pkgs.add("{0.name}-{0.evr}".format(pkg))

-        if self.opts.greedy_method == "all":
-            return list(package_list)
-
         all_pkgs = []
         for pkg in package_list:
             # Remove packages that are also in lookaside
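Note: the sort above uses rpm.labelCompare, which compares (epoch, version, release) tuples of strings, and groupby then yields the newest-version group first. A minimal sketch, assuming the rpm Python bindings are available:

    from functools import cmp_to_key
    from itertools import groupby

    import rpm  # same bindings the gather code imports

    evrs = [("0", "1.0", "2"), ("0", "1.2", "1"), ("0", "1.2", "1")]
    ordered = sorted(evrs, key=cmp_to_key(rpm.labelCompare), reverse=True)
    key, group = next(groupby(ordered))
    print(key, list(group))  # ('0', '1.2', '1') and both entries with that EVR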
@@ -263,16 +300,21 @@ class Gather(GatherBase):
         if not debuginfo:
             native_pkgs = set(
-                self.q_native_binary_packages.filter(pkg=all_pkgs).apply()
+                self.q_native_binary_packages.filter(pkg=all_pkgs).latest().apply()
             )
             multilib_pkgs = set(
-                self.q_multilib_binary_packages.filter(pkg=all_pkgs).apply()
+                self.q_multilib_binary_packages.filter(pkg=all_pkgs).latest().apply()
             )
         else:
-            native_pkgs = set(self.q_native_debug_packages.filter(pkg=all_pkgs).apply())
-            multilib_pkgs = set(
-                self.q_multilib_debug_packages.filter(pkg=all_pkgs).apply()
+            native_pkgs = set(
+                self.q_native_debug_packages.filter(pkg=all_pkgs).latest().apply()
             )
+            multilib_pkgs = set(
+                self.q_multilib_debug_packages.filter(pkg=all_pkgs).latest().apply()
+            )
+
+        if self.opts.greedy_method == "all":
+            return list(native_pkgs | multilib_pkgs)

         result = set()
@@ -342,7 +384,7 @@ class Gather(GatherBase):
             # lookaside
             if self.is_from_lookaside(i):
                 self._set_flag(i, PkgFlag.lookaside)
-            if i.sourcerpm.rsplit("-", 2)[0] in self.opts.fulltree_excludes:
+            if i.source_name in self.opts.fulltree_excludes:
                 self._set_flag(i, PkgFlag.fulltree_exclude)

     def _get_package_deps(self, pkg, debuginfo=False):
@@ -392,9 +434,7 @@ class Gather(GatherBase):
         """Given a name of a queue (stored as attribute in `self`), exclude
         all given packages and keep only the latest per package name and arch.
         """
-        setattr(
-            self, queue, getattr(self, queue).filter(pkg__neq=exclude).latest().apply()
-        )
+        setattr(self, queue, getattr(self, queue).filter(pkg__neq=exclude).apply())

     @Profiler("Gather._apply_excludes()")
     def _apply_excludes(self, excludes):
@@ -420,12 +460,16 @@ class Gather(GatherBase):
                     name__glob=pattern[:-4], reponame__neq=self.opts.lookaside_repos
                 )
             elif pungi.util.pkg_is_debug(pattern):
-                pkgs = self.q_debug_packages.filter(
-                    name__glob=pattern, reponame__neq=self.opts.lookaside_repos
+                pkgs = filter_dotarch(
+                    self.q_debug_packages,
+                    pattern,
+                    reponame__neq=self.opts.lookaside_repos,
                 )
             else:
-                pkgs = self.q_binary_packages.filter(
-                    name__glob=pattern, reponame__neq=self.opts.lookaside_repos
+                pkgs = filter_dotarch(
+                    self.q_binary_packages,
+                    pattern,
+                    reponame__neq=self.opts.lookaside_repos,
                 )

             exclude.update(pkgs)
@@ -491,21 +535,19 @@ class Gather(GatherBase):
                         name__glob=pattern[:-2]
                     ).apply()
                 else:
-                    pkgs = self.q_debug_packages.filter(
-                        name__glob=pattern
-                    ).apply()
+                    pkgs = filter_dotarch(self.q_debug_packages, pattern)
             else:
                 if pattern.endswith(".+"):
                     pkgs = self.q_multilib_binary_packages.filter(
                         name__glob=pattern[:-2]
                     ).apply()
                 else:
-                    pkgs = self.q_binary_packages.filter(
-                        name__glob=pattern
-                    ).apply()
+                    pkgs = filter_dotarch(self.q_binary_packages, pattern)

             if not pkgs:
-                self.logger.error("No package matches pattern %s" % pattern)
+                self.logger.error(
+                    "Could not find a match for %s in any configured repo", pattern
+                )

             # The pattern could have been a glob. In that case we want to
             # group the packages by name and get best match in those
@@ -616,7 +658,6 @@ class Gather(GatherBase):
             return added

         for pkg in self.result_debug_packages.copy():
             if pkg not in self.finished_add_debug_package_deps:
-
                 deps = self._get_package_deps(pkg, debuginfo=True)
                 for i, req in deps:
@@ -784,7 +825,6 @@ class Gather(GatherBase):
                 continue

             debug_pkgs = []
-            pkg_in_lookaside = pkg.repoid in self.opts.lookaside_repos
             for i in candidates:
                 if pkg.arch != i.arch:
                     continue
@@ -792,8 +832,14 @@ class Gather(GatherBase):
                     # If it's not debugsource package or does not match name of
                     # the package, we don't want it in.
                     continue
-                if i.repoid in self.opts.lookaside_repos or pkg_in_lookaside:
+                if self.is_from_lookaside(i):
                     self._set_flag(i, PkgFlag.lookaside)
+                srpm_name = i.source_name
+                if srpm_name in self.opts.fulltree_excludes:
+                    self._set_flag(i, PkgFlag.fulltree_exclude)
+                if PkgFlag.input in self.result_package_flags.get(srpm_name, set()):
+                    # If src rpm is marked as input, mark debuginfo as input too
+                    self._set_flag(i, PkgFlag.input)
                 if i not in self.result_debug_packages:
                     added.add(i)
                     debug_pkgs.append(i)
@@ -820,7 +866,7 @@ class Gather(GatherBase):
         for pkg in sorted(self.result_binary_packages):
             assert pkg is not None

-            if get_source_name(pkg) in self.opts.fulltree_excludes:
+            if pkg.source_name in self.opts.fulltree_excludes:
                 self.logger.debug("No fulltree for %s due to exclude list", pkg)
                 continue
@@ -1030,8 +1076,11 @@ class Gather(GatherBase):
             # Link downloaded package in (or link package from file repo)
             try:
                 linker.link(pkg.localPkg(), target)
-            except Exception:
-                self.logger.error("Unable to link %s from the yum cache." % pkg.name)
+            except Exception as ex:
+                if ex.errno == errno.EEXIST:
+                    self.logger.warning("Downloaded package exists in %s", target)
+                else:
+                    self.logger.error("Unable to link %s from the dnf cache.", pkg.name)
                 raise

     def log_count(self, msg, method, *args):

View File

@@ -228,20 +228,7 @@ class Linker(kobo.log.LoggingBase):
             raise ValueError("Unknown link_type: %s" % link_type)

     def link(self, src, dst, link_type="hardlink-or-copy"):
-        """Link directories recursively."""
-        if os.path.isfile(src) or os.path.islink(src):
-            self._link_file(src, dst, link_type)
-            return
-
-        if os.path.isfile(dst):
-            raise OSError(errno.EEXIST, "File exists")
-
-        if not self.test:
-            if not os.path.exists(dst):
-                makedirs(dst)
-            shutil.copystat(src, dst)
-
-        for i in os.listdir(src):
-            src_path = os.path.join(src, i)
-            dst_path = os.path.join(dst, i)
-            self.link(src_path, dst_path, link_type)
+        if os.path.isdir(src):
+            raise RuntimeError("Linking directories recursively is not supported")
+        self._link_file(src, dst, link_type)

View File

@@ -306,11 +306,6 @@ def write_tree_info(compose, arch, variant, timestamp=None, bi=None):
     if variant.type in ("addon",) or variant.is_empty:
         return

-    compose.log_debug(
-        "on arch '%s' looking at variant '%s' of type '%s'"
-        % (arch, variant, variant.type)
-    )
-
     if not timestamp:
         timestamp = int(time.time())
     else:

View File

@@ -1,295 +0,0 @@
# -*- coding: utf-8 -*-
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; version 2 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Library General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, see <https://gnu.org/licenses/>.
import re
import fnmatch
import pungi.pathmatch
import pungi.gather
import pungi.util
LINE_PATTERN_RE = re.compile(r"^\s*(?P<line>[^#]+)(:?\s+(?P<comment>#.*))?$")
RUNTIME_PATTERN_SPLIT_RE = re.compile(
r"^\s*(?P<path>[^\s]+)\s+(?P<pattern>[^\s]+)(:?\s+(?P<comment>#.*))?$"
)
SONAME_PATTERN_RE = re.compile(r"^(.+\.so\.[a-zA-Z0-9_\.]+).*$")
def read_lines(lines):
result = []
for i in lines:
i = i.strip()
if not i:
continue
# skip comments
if i.startswith("#"):
continue
match = LINE_PATTERN_RE.match(i)
if match is None:
raise ValueError("Couldn't parse line: %s" % i)
gd = match.groupdict()
result.append(gd["line"])
return result
def read_lines_from_file(path):
lines = open(path, "r").readlines()
lines = read_lines(lines)
return lines
def read_runtime_patterns(lines):
result = []
for i in read_lines(lines):
match = RUNTIME_PATTERN_SPLIT_RE.match(i)
if match is None:
raise ValueError("Couldn't parse pattern: %s" % i)
gd = match.groupdict()
result.append((gd["path"], gd["pattern"]))
return result
def read_runtime_patterns_from_file(path):
lines = open(path, "r").readlines()
return read_runtime_patterns(lines)
def expand_runtime_patterns(patterns):
pm = pungi.pathmatch.PathMatch()
for path, pattern in patterns:
for root in ("", "/opt/*/*/root"):
# include Software Collections: /opt/<vendor>/<scl_name>/root/...
if "$LIBDIR" in path:
for lib_dir in ("/lib", "/lib64", "/usr/lib", "/usr/lib64"):
path_pattern = path.replace("$LIBDIR", lib_dir)
path_pattern = "%s/%s" % (root, path_pattern.lstrip("/"))
pm[path_pattern] = (path_pattern, pattern)
else:
path_pattern = "%s/%s" % (root, path.lstrip("/"))
pm[path_pattern] = (path_pattern, pattern)
return pm
class MultilibMethodBase(object):
"""a base class for multilib methods"""
name = "base"
def __init__(self, config_path):
self.config_path = config_path
def select(self, po):
raise NotImplementedError
def skip(self, po):
if (
pungi.gather.is_noarch(po)
or pungi.gather.is_source(po)
or pungi.util.pkg_is_debug(po)
):
return True
return False
def is_kernel(self, po):
for p_name, p_flag, (p_e, p_v, p_r) in po.provides:
if p_name == "kernel":
return True
return False
def is_kernel_devel(self, po):
for p_name, p_flag, (p_e, p_v, p_r) in po.provides:
if p_name == "kernel-devel":
return True
return False
def is_kernel_or_kernel_devel(self, po):
for p_name, p_flag, (p_e, p_v, p_r) in po.provides:
if p_name in ("kernel", "kernel-devel"):
return True
return False
class NoneMultilibMethod(MultilibMethodBase):
"""multilib disabled"""
name = "none"
def select(self, po):
return False
class AllMultilibMethod(MultilibMethodBase):
"""all packages are multilib"""
name = "all"
def select(self, po):
if self.skip(po):
return False
return True
class RuntimeMultilibMethod(MultilibMethodBase):
"""pre-defined paths to libs"""
name = "runtime"
def __init__(self, *args, **kwargs):
super(RuntimeMultilibMethod, self).__init__(*args, **kwargs)
self.blacklist = read_lines_from_file(
self.config_path + "runtime-blacklist.conf"
)
self.whitelist = read_lines_from_file(
self.config_path + "runtime-whitelist.conf"
)
self.patterns = expand_runtime_patterns(
read_runtime_patterns_from_file(self.config_path + "runtime-patterns.conf")
)
def select(self, po):
if self.skip(po):
return False
if po.name in self.blacklist:
return False
if po.name in self.whitelist:
return True
if self.is_kernel(po):
return False
# gather all *.so.* provides from the RPM header
provides = set()
for i in po.provides:
match = SONAME_PATTERN_RE.match(i[0])
if match is not None:
provides.add(match.group(1))
for path in po.returnFileEntries() + po.returnFileEntries("ghost"):
dirname, filename = path.rsplit("/", 1)
dirname = dirname.rstrip("/")
patterns = self.patterns[dirname]
if not patterns:
continue
for dir_pattern, file_pattern in patterns:
if file_pattern == "-":
return True
if fnmatch.fnmatch(filename, file_pattern):
if ".so.*" in file_pattern:
if filename in provides:
# return only if the lib is provided in RPM header
# (some libs may be private, hence not exposed in Provides)
return True
else:
return True
return False
class KernelMultilibMethod(MultilibMethodBase):
"""kernel and kernel-devel"""
name = "kernel"
def __init__(self, *args, **kwargs):
super(KernelMultilibMethod, self).__init__(*args, **kwargs)
def select(self, po):
if self.is_kernel_or_kernel_devel(po):
return True
return False
class YabootMultilibMethod(MultilibMethodBase):
"""yaboot on ppc"""
name = "yaboot"
def __init__(self, *args, **kwargs):
super(YabootMultilibMethod, self).__init__(*args, **kwargs)
def select(self, po):
if po.arch in ["ppc"]:
if po.name.startswith("yaboot"):
return True
return False
class DevelMultilibMethod(MultilibMethodBase):
"""all -devel and -static packages"""
name = "devel"
def __init__(self, *args, **kwargs):
super(DevelMultilibMethod, self).__init__(*args, **kwargs)
self.blacklist = read_lines_from_file(self.config_path + "devel-blacklist.conf")
self.whitelist = read_lines_from_file(self.config_path + "devel-whitelist.conf")
def select(self, po):
if self.skip(po):
return False
if po.name in self.blacklist:
return False
if po.name in self.whitelist:
return True
if self.is_kernel_devel(po):
return False
# HACK: exclude ghc*
if po.name.startswith("ghc-"):
return False
if po.name.endswith("-devel"):
return True
if po.name.endswith("-static"):
return True
for p_name, p_flag, (p_e, p_v, p_r) in po.provides:
if p_name.endswith("-devel"):
return True
if p_name.endswith("-static"):
return True
return False
DEFAULT_METHODS = ["devel", "runtime"]
METHOD_MAP = {}
def init(config_path="/usr/share/pungi/multilib/"):
global METHOD_MAP
if not config_path.endswith("/"):
config_path += "/"
for cls in (
AllMultilibMethod,
DevelMultilibMethod,
KernelMultilibMethod,
NoneMultilibMethod,
RuntimeMultilibMethod,
YabootMultilibMethod,
):
method = cls(config_path)
METHOD_MAP[method.name] = method
def po_is_multilib(po, methods):
for method_name in methods:
if not method_name:
continue
method = METHOD_MAP[method_name]
if method.select(po):
return method_name
return None

View File

@@ -104,7 +104,8 @@ class PungiNotifier(object):
             workdir=workdir,
             return_stdout=False,
             show_cmd=True,
-            universal_newlines=True,
+            text=True,
+            errors="replace",
             logfile=logfile,
         )
         if ret != 0:

View File

@@ -19,6 +19,7 @@ import logging

 from .tree import Tree
 from .installer import Installer
+from .container import Container


 def main(args=None):
@@ -71,6 +72,43 @@ def main(args=None):
         help="use unified core mode in rpm-ostree",
     )

+    container = subparser.add_parser(
+        "container", help="Compose OSTree native container"
+    )
+    container.set_defaults(_class=Container, func="run")
+    container.add_argument(
+        "--name",
+        required=True,
+        help="the name of the OCI archive (required)",
+    )
+    container.add_argument(
+        "--path",
+        required=True,
+        help="where to output the OCI archive (required)",
+    )
+    container.add_argument(
+        "--treefile",
+        metavar="FILE",
+        required=True,
+        help="treefile for rpm-ostree (required)",
+    )
+    container.add_argument(
+        "--log-dir",
+        metavar="DIR",
+        required=True,
+        help="where to log output (required).",
+    )
+    container.add_argument(
+        "--extra-config", metavar="FILE", help="JSON file containing extra configurations"
+    )
+    container.add_argument(
+        "-v",
+        "--version",
+        metavar="VERSION",
+        required=True,
+        help="version identifier (required)",
+    )
+
     installerp = subparser.add_parser(
         "installer", help="Create an OSTree installer image"
     )

pungi/ostree/container.py (new file, 85 lines)
View File

@@ -0,0 +1,85 @@
# -*- coding: utf-8 -*-
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; version 2 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Library General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, see <https://gnu.org/licenses/>.
import os
import json
import shlex
from .base import OSTree
from .utils import tweak_treeconf
def emit(cmd):
"""Print line of shell code into the stream."""
if isinstance(cmd, str):
print(cmd)
else:
print(" ".join([shlex.quote(x) for x in cmd]))
class Container(OSTree):
def _make_container(self):
"""Compose OSTree Container Native image"""
stamp_file = os.path.join(self.logdir, "%s.stamp" % self.name)
cmd = [
"rpm-ostree",
"compose",
"image",
# Always initialize for now
"--initialize",
# Touch the file if a new commit was created. This can help us tell
# if the commitid file is missing because no commit was created or
# because something went wrong.
"--touch-if-changed=%s" % stamp_file,
self.treefile,
]
fullpath = os.path.join(self.path, "%s.ociarchive" % self.name)
cmd.append(fullpath)
# Set the umask to be more permissive so directories get group write
# permissions. See https://pagure.io/releng/issue/8811#comment-629051
emit("umask 0002")
emit(cmd)
def run(self):
self.name = self.args.name
self.path = self.args.path
self.treefile = self.args.treefile
self.logdir = self.args.log_dir
self.extra_config = self.args.extra_config
if self.extra_config:
self.extra_config = json.load(open(self.extra_config, "r"))
repos = self.extra_config.get("repo", [])
keep_original_sources = self.extra_config.get(
"keep_original_sources", False
)
else:
# missing extra_config mustn't affect tweak_treeconf call
repos = []
keep_original_sources = True
update_dict = {"automatic-version-prefix": self.args.version}
self.treefile = tweak_treeconf(
self.treefile,
source_repos=repos,
keep_original_sources=keep_original_sources,
update_dict=update_dict,
)
self._make_container()
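Note: run() reads exactly two keys from the optional extra-config JSON file, "repo" and "keep_original_sources". An illustrative payload (the shape of each repo entry is an assumption; this diff passes the list straight to tweak_treeconf as source_repos without defining a schema):

    import json

    extra_config = {
        "repo": [{"name": "fedora", "baseurl": "https://example.com/repo"}],
        "keep_original_sources": False,
    }
    with open("extra-config.json", "w") as f:
        json.dump(extra_config, f)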

View File

@@ -64,7 +64,8 @@ class Tree(OSTree):
                 show_cmd=True,
                 stdout=True,
                 logfile=log_file,
-                universal_newlines=True,
+                text=True,
+                errors="replace",
             )
         finally:
             os.umask(oldumask)
@@ -77,7 +78,8 @@ class Tree(OSTree):
             show_cmd=True,
             stdout=True,
             logfile=log_file,
-            universal_newlines=True,
+            text=True,
+            errors="replace",
         )

     def _update_ref(self):

pungi/otel.py (new file, 229 lines)
View File

@@ -0,0 +1,229 @@
import itertools
import os
from contextlib import contextmanager
"""
This module contains two classes with the same interface. An instance of one of
them is available as `tracing`. Which class is instantiated is selected
depending on whether the environment variables configuring OTel are set.
"""
class DummyTracing:
"""A dummy tracing module that doesn't actually do anything."""
def setup(self):
pass
@contextmanager
def span(self, *args, **kwargs):
yield
def set_attribute(self, name, value):
pass
def force_flush(self):
pass
def instrument_xmlrpc_proxy(self, proxy):
return proxy
def get_traceparent(self):
return None
def set_context(self, traceparent):
pass
def record_exception(self, exc, set_error_status=True):
pass
class OtelTracing:
"""This class implements the actual integration with opentelemetry."""
def setup(self):
"""Configure opentelemetry tracing based on environment variables. This
setup is optional as it may not be desirable when pungi is used as a
library.
"""
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import (
BatchSpanProcessor,
ConsoleSpanExporter,
)
from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
OTLPSpanExporter,
)
otel_endpoint = os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"]
provider = TracerProvider(
resource=Resource(attributes={"service.name": "pungi"})
)
if "console" == otel_endpoint:
# This is for debugging the tracing locally.
self.processor = BatchSpanProcessor(ConsoleSpanExporter())
else:
self.processor = BatchSpanProcessor(OTLPSpanExporter())
provider.add_span_processor(self.processor)
trace.set_tracer_provider(provider)
traceparent = os.environ.get("TRACEPARENT")
if traceparent:
self.set_context(traceparent)
try:
from opentelemetry.instrumentation.requests import RequestsInstrumentor
RequestsInstrumentor().instrument()
except ImportError:
pass
@property
def tracer(self):
from opentelemetry import trace
return trace.get_tracer(__name__)
@contextmanager
def span(self, name, **attributes):
"""Create a new span as a child of the current one. Attributes can be
passed via kwargs."""
with self.tracer.start_as_current_span(name, attributes=attributes) as span:
yield span
def get_traceparent(self):
from opentelemetry.trace.propagation.tracecontext import (
TraceContextTextMapPropagator,
)
carrier = {}
TraceContextTextMapPropagator().inject(carrier)
return carrier["traceparent"]
def set_attribute(self, name, value):
"""Set an attribute on the current span."""
from opentelemetry import trace
span = trace.get_current_span()
span.set_attribute(name, value)
def force_flush(self):
"""Ensure all spans and traces are sent out. Call this before the
process exits."""
self.processor.force_flush()
def instrument_xmlrpc_proxy(self, proxy):
return InstrumentedClientSession(proxy)
def set_context(self, traceparent):
"""Configure current context to match the given traceparent."""
from opentelemetry import context
from opentelemetry.trace.propagation.tracecontext import (
TraceContextTextMapPropagator,
)
ctx = TraceContextTextMapPropagator().extract(
carrier={"traceparent": traceparent}
)
context.attach(ctx)
def record_exception(self, exc, set_error_status=True):
"""Records an exception for the current span and optionally marks the
span as failed."""
from opentelemetry import trace
span = trace.get_current_span()
span.record_exception(exc)
if set_error_status:
span.set_status(trace.status.StatusCode.ERROR)
class InstrumentedClientSession:
"""Wrapper around koji.ClientSession that creates spans for each API call.
RequestsInstrumentor can create spans at the HTTP requests level, but since
those all go the same XML-RPC endpoint, they are not very informative.
Multicall is not handled very well here. The spans will only have a
`multicall` boolean attribute, but they don't carry any additional data
that could group them.
Koji ClientSession supports three ways of making multicalls, but Pungi only
uses one, and that one is supported here.
Supported:
c.multicall = True
c.getBuild(1)
c.getBuild(2)
results = c.multiCall()
Not supported:
with c.multicall() as m:
r1 = m.getBuild(1)
r2 = m.getBuild(2)
Also not supported:
m = c.multicall()
r1 = m.getBuild(1)
r2 = m.getBuild(2)
m.call_all()
"""
def __init__(self, session):
self.session = session
def _name(self, name):
"""Helper for generating span names."""
return "%s.%s" % (self.session.__class__.__name__, name)
@property
def system(self):
"""This is only ever used to get list of available API calls. It is
rather awkward though. Ideally we wouldn't really trace this at all,
but there's the underlying POST request to the hub, which is quite
confusing in the trace if there is no additional context."""
return self.session.system
@property
def multicall(self):
return self.session.multicall
@multicall.setter
def multicall(self, value):
self.session.multicall = value
def __getattr__(self, name):
return self._instrument_method(name, getattr(self.session, name))
def _instrument_method(self, name, callable):
def wrapper(*args, **kwargs):
with tracing.span(self._name(name)) as span:
span.set_attribute("arguments", _format_args(args, kwargs))
if self.session.multicall:
tracing.set_attribute("multicall", True)
return callable(*args, **kwargs)
return wrapper
def _format_args(args, kwargs):
"""Turn args+kwargs into a single string. OTel could choke on more
complicated data."""
return ", ".join(
itertools.chain(
(repr(arg) for arg in args),
(f"{key}={value!r}" for key, value in kwargs.items()),
)
)
if "OTEL_EXPORTER_OTLP_ENDPOINT" in os.environ:
tracing = OtelTracing()
else:
tracing = DummyTracing()
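Note: both classes expose the same surface, so callers never check which one they got. A minimal usage sketch (the span name and attributes are made up for illustration):

    from pungi.otel import tracing

    tracing.setup()  # no-op for DummyTracing, configures OTLP export otherwise
    with tracing.span("run-compose", compose_id="example-compose"):
        tracing.set_attribute("compose_status", "STARTED")
        try:
            pass  # ... actual work ...
        except Exception as exc:
            tracing.record_exception(exc)
            raise
    tracing.force_flush()  # flush pending spans before the process exits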

View File

@@ -1,73 +0,0 @@
# -*- coding: utf-8 -*-
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; version 2 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Library General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, see <https://gnu.org/licenses/>.
import fnmatch
def head_tail_split(name):
name_split = name.strip("/").split("/", 1)
if len(name_split) == 2:
head = name_split[0]
tail = name_split[1].strip("/")
else:
head, tail = name_split[0], None
return head, tail
class PathMatch(object):
def __init__(self, parent=None, desc=None):
self._patterns = {}
self._final_patterns = {}
self._values = []
def __setitem__(self, name, value):
head, tail = head_tail_split(name)
if tail is not None:
# recursion
if head not in self._patterns:
self._patterns[head] = PathMatch(parent=self, desc=head)
self._patterns[head][tail] = value
else:
if head not in self._final_patterns:
self._final_patterns[head] = PathMatch(parent=self, desc=head)
if value not in self._final_patterns[head]._values:
self._final_patterns[head]._values.append(value)
def __getitem__(self, name):
result = []
head, tail = head_tail_split(name)
for pattern in self._patterns:
if fnmatch.fnmatch(head, pattern):
if tail is None:
values = self._patterns[pattern]._values
else:
values = self._patterns[pattern][tail]
for value in values:
if value not in result:
result.append(value)
for pattern in self._final_patterns:
if tail is None:
x = head
else:
x = "%s/%s" % (head, tail)
if fnmatch.fnmatch(x, pattern):
values = self._final_patterns[pattern]._values
for value in values:
if value not in result:
result.append(value)
return result

View File

@@ -25,16 +25,18 @@ from .buildinstall import BuildinstallPhase  # noqa
 from .extra_files import ExtraFilesPhase  # noqa
 from .createiso import CreateisoPhase  # noqa
 from .extra_isos import ExtraIsosPhase  # noqa
-from .live_images import LiveImagesPhase  # noqa
 from .image_build import ImageBuildPhase  # noqa
 from .image_container import ImageContainerPhase  # noqa
+from .kiwibuild import KiwiBuildPhase  # noqa
 from .osbuild import OSBuildPhase  # noqa
+from .imagebuilder import ImageBuilderPhase  # noqa
 from .repoclosure import RepoclosurePhase  # noqa
 from .test import TestPhase  # noqa
 from .image_checksum import ImageChecksumPhase  # noqa
 from .livemedia_phase import LiveMediaPhase  # noqa
 from .ostree import OSTreePhase  # noqa
 from .ostree_installer import OstreeInstallerPhase  # noqa
+from .ostree_container import OSTreeContainerPhase  # noqa
 from .osbs import OSBSPhase  # noqa
 from .phases_metadata import gather_phases_metadata  # noqa

View File

@@ -16,29 +16,30 @@
 import errno
 import os
+import pickle
 import time
+import shlex
 import shutil
 import re

-from six.moves import cPickle as pickle
 from copy import copy
-from kobo.threads import ThreadPool, WorkerThread
+from kobo.threads import ThreadPool
 from kobo.shortcuts import run, force_list
 import kobo.rpmlib

 from productmd.images import Image
-from six.moves import shlex_quote

 from pungi.arch import get_valid_arches
 from pungi.util import get_volid, get_arch_variant_data
 from pungi.util import get_file_size, get_mtime, failable, makedirs
-from pungi.util import copy_all, translate_path, move_all
+from pungi.util import copy_all, translate_path
 from pungi.wrappers.lorax import LoraxWrapper
 from pungi.wrappers import iso
 from pungi.wrappers.scm import get_file
 from pungi.wrappers.scm import get_file_from_scm
 from pungi.wrappers import kojiwrapper
 from pungi.phases.base import PhaseBase
-from pungi.runroot import Runroot
+from pungi.runroot import Runroot, download_and_extract_archive
+from pungi.threading import TelemetryWorkerThread as WorkerThread


 class BuildinstallPhase(PhaseBase):
@@ -94,6 +95,7 @@ class BuildinstallPhase(PhaseBase):
         squashfs_only = False
         configuration_file = None
         configuration_file_source = None
+        rootfs_type = None
         version = self.compose.conf.get(
             "treeinfo_version", self.compose.conf["release_version"]
         )
@@ -116,6 +118,7 @@ class BuildinstallPhase(PhaseBase):
             skip_branding = data.get("skip_branding", False)
             configuration_file_source = data.get("configuration_file")
             squashfs_only = data.get("squashfs_only", False)
+            rootfs_type = data.get("rootfs_type", None)
             if "version" in data:
                 version = data["version"]
         output_dir = os.path.join(output_dir, variant.uid)
@@ -144,7 +147,7 @@ class BuildinstallPhase(PhaseBase):
         )
         if self.compose.has_comps:
             comps_repo = self.compose.paths.work.comps_repo(arch, variant)
-            if final_output_dir != output_dir:
+            if final_output_dir != output_dir or self.lorax_use_koji_plugin:
                 comps_repo = translate_path(self.compose, comps_repo)
             repos.append(comps_repo)
@@ -169,9 +172,9 @@ class BuildinstallPhase(PhaseBase):
                 "rootfs-size": rootfs_size,
                 "dracut-args": dracut_args,
                 "skip_branding": skip_branding,
-                "outputdir": output_dir,
                 "squashfs_only": squashfs_only,
                 "configuration_file": configuration_file,
+                "rootfs-type": rootfs_type,
             }
         else:
             # If the buildinstall_topdir is set, it means Koji is used for
@@ -206,10 +209,11 @@ class BuildinstallPhase(PhaseBase):
                 skip_branding=skip_branding,
                 squashfs_only=squashfs_only,
                 configuration_file=configuration_file,
+                rootfs_type=rootfs_type,
             )
             return "rm -rf %s && %s" % (
-                shlex_quote(output_topdir),
-                " ".join([shlex_quote(x) for x in lorax_cmd]),
+                shlex.quote(output_topdir),
+                " ".join([shlex.quote(x) for x in lorax_cmd]),
             )

     def get_repos(self, arch):
@@ -219,10 +223,6 @@ class BuildinstallPhase(PhaseBase):
         return repos

     def run(self):
-        lorax = LoraxWrapper()
-        product = self.compose.conf["release_name"]
-        version = self.compose.conf["release_version"]
-        release = self.compose.conf["release_version"]
         disc_type = self.compose.conf["disc_types"].get("dvd", "dvd")

         # Prepare kickstart file for final images.
@@ -239,7 +239,7 @@ class BuildinstallPhase(PhaseBase):
             )
             makedirs(final_output_dir)
             repo_baseurls = self.get_repos(arch)
-            if final_output_dir != output_dir:
+            if final_output_dir != output_dir or self.lorax_use_koji_plugin:
                 repo_baseurls = [translate_path(self.compose, r) for r in repo_baseurls]

             if self.buildinstall_method == "lorax":
@@ -275,29 +275,12 @@ class BuildinstallPhase(PhaseBase):
                         ),
                     )
                 )
-            elif self.buildinstall_method == "buildinstall":
-                volid = get_volid(self.compose, arch, disc_type=disc_type)
-                commands.append(
-                    (
-                        None,
-                        lorax.get_buildinstall_cmd(
-                            product,
-                            version,
-                            release,
-                            repo_baseurls,
-                            output_dir,
-                            is_final=self.compose.supported,
-                            buildarch=arch,
-                            volid=volid,
-                        ),
-                    )
-                )
             else:
                 raise ValueError(
                     "Unsupported buildinstall method: %s" % self.buildinstall_method
                 )

-            for (variant, cmd) in commands:
+            for variant, cmd in commands:
                 self.pool.add(BuildinstallThread(self.pool))
                 self.pool.queue_put(
                     (self.compose, arch, variant, cmd, self.pkgset_phase)
@@ -364,9 +347,17 @@ BOOT_CONFIGS = [
     "EFI/BOOT/BOOTX64.conf",
     "EFI/BOOT/grub.cfg",
 ]
+BOOT_IMAGES = [
+    "images/efiboot.img",
+]


 def tweak_configs(path, volid, ks_file, configs=BOOT_CONFIGS, logger=None):
+    """
+    Put escaped volume ID and possibly kickstart file into the boot
+    configuration files.
+
+    :returns: list of paths to modified config files
+    """
     volid_escaped = volid.replace(" ", r"\x20").replace("\\", "\\\\")
     volid_escaped_2 = volid_escaped.replace("\\", "\\\\")
     found_configs = []
@@ -374,7 +365,6 @@ def tweak_configs(path, volid, ks_file, configs=BOOT_CONFIGS, logger=None):
         config_path = os.path.join(path, config)
         if not os.path.exists(config_path):
             continue
-        found_configs.append(config)

         with open(config_path, "r") as f:
             data = original_data = f.read()
@@ -394,7 +384,12 @@ def tweak_configs(path, volid, ks_file, configs=BOOT_CONFIGS, logger=None):
         with open(config_path, "w") as f:
             f.write(data)

-        if logger and data != original_data:
+        if data != original_data:
+            found_configs.append(config)
+            if logger:
+                # Generally lorax should create file with correct volume id
+                # already. If we don't have a kickstart, this function should
+                # be a no-op.
                 logger.info("Boot config %s changed" % config_path)

     return found_configs
@@ -423,8 +418,8 @@ def tweak_buildinstall(
     # copy src to temp
     # TODO: place temp on the same device as buildinstall dir so we can hardlink
     cmd = "cp -dRv --preserve=mode,links,timestamps --remove-destination %s/* %s/" % (
-        shlex_quote(src),
-        shlex_quote(tmp_dir),
+        shlex.quote(src),
+        shlex.quote(tmp_dir),
     )
     run(cmd)
@@ -434,9 +429,8 @@ def tweak_buildinstall(
     if kickstart_file and found_configs:
         shutil.copy2(kickstart_file, os.path.join(dst, "ks.cfg"))

-    images = [
-        os.path.join(tmp_dir, "images", "efiboot.img"),
-    ]
+    images = [os.path.join(tmp_dir, img) for img in BOOT_IMAGES]
+    if found_configs:
         for image in images:
             if not os.path.isfile(image):
                 continue
@@ -446,7 +440,9 @@ def tweak_buildinstall(
                 logger=compose._logger,
                 use_guestmount=compose.conf.get("buildinstall_use_guestmount"),
             ) as mount_tmp_dir:
-                for config in BOOT_CONFIGS:
+                for config in found_configs:
+                    # Put each modified config file into the image (overwriting
+                    # the original).
                     config_path = os.path.join(tmp_dir, config)
                     config_in_image = os.path.join(mount_tmp_dir, config)
@@ -461,12 +457,12 @@ def tweak_buildinstall(
             run(cmd)

     # HACK: make buildinstall files world readable
-    run("chmod -R a+rX %s" % shlex_quote(tmp_dir))
+    run("chmod -R a+rX %s" % shlex.quote(tmp_dir))

     # copy temp to dst
     cmd = "cp -dRv --preserve=mode,links,timestamps --remove-destination %s/* %s/" % (
-        shlex_quote(tmp_dir),
-        shlex_quote(dst),
+        shlex.quote(tmp_dir),
+        shlex.quote(dst),
     )
     run(cmd)
@@ -530,7 +526,10 @@ def link_boot_iso(compose, arch, variant, can_fail):
     setattr(img, "can_fail", can_fail)
     setattr(img, "deliverable", "buildinstall")
     try:
-        img.volume_id = iso.get_volume_id(new_boot_iso_path)
+        img.volume_id = iso.get_volume_id(
+            new_boot_iso_path,
+            compose.conf.get("createiso_use_xorrisofs"),
+        )
     except RuntimeError:
         pass
     # In this phase we should add to compose only the images that
@@ -725,8 +724,8 @@ class BuildinstallThread(WorkerThread):
         # input on RPM level.
         cmd_copy = copy(cmd)
         for key in ["outputdir", "sources"]:
-            del cmd_copy[key]
-            del old_metadata["cmd"][key]
+            cmd_copy.pop(key, None)
+            old_metadata["cmd"].pop(key, None)

         # Do not reuse if command line arguments are not the same.
         if old_metadata["cmd"] != cmd_copy:
@@ -821,8 +820,6 @@ class BuildinstallThread(WorkerThread):
         if buildinstall_method == "lorax":
             packages += ["lorax"]
             chown_paths.append(_get_log_dir(compose, variant, arch))
-        elif buildinstall_method == "buildinstall":
-            packages += ["anaconda"]
         packages += get_arch_variant_data(
             compose.conf, "buildinstall_packages", arch, variant
         )
@@ -843,13 +840,13 @@ class BuildinstallThread(WorkerThread):
         # Start the runroot task.
         runroot = Runroot(compose, phase="buildinstall")
+        task_id = None
         if buildinstall_method == "lorax" and lorax_use_koji_plugin:
-            runroot.run_pungi_buildinstall(
+            task_id = runroot.run_pungi_buildinstall(
                 cmd,
                 log_file=log_file,
                 arch=arch,
                 packages=packages,
-                mounts=[compose.topdir],
                 weight=compose.conf["runroot_weights"].get("buildinstall"),
             )
         else:
@@ -882,19 +879,17 @@ class BuildinstallThread(WorkerThread):
             log_dir = os.path.join(output_dir, "logs")
             copy_all(log_dir, final_log_dir)
         elif lorax_use_koji_plugin:
-            # If Koji pungi-buildinstall is used, then the buildinstall results are
-            # not stored directly in `output_dir` dir, but in "results" and "logs"
-            # subdirectories. We need to move them to final_output_dir.
-            results_dir = os.path.join(output_dir, "results")
-            move_all(results_dir, final_output_dir, rm_src_dir=True)
+            # If Koji pungi-buildinstall is used, then the buildinstall results
+            # are attached as outputs to the Koji task. Download and unpack
+            # them to the correct location.
+            download_and_extract_archive(
+                compose, task_id, "results.tar.gz", final_output_dir
+            )

-            # Get the log_dir into which we should copy the resulting log files.
+            # Download the logs into proper location too.
             log_fname = "buildinstall-%s-logs/dummy" % variant.uid
             final_log_dir = os.path.dirname(compose.paths.log.log_file(arch, log_fname))
-            if not os.path.exists(final_log_dir):
-                makedirs(final_log_dir)
-            log_dir = os.path.join(output_dir, "logs")
-            move_all(log_dir, final_log_dir, rm_src_dir=True)
+            download_and_extract_archive(compose, task_id, "logs.tar.gz", final_log_dir)

         rpms = runroot.get_buildroot_rpms()
         self._write_buildinstall_metadata(
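Note: the switch from del to dict.pop(key, None) in the reuse check above matters when old metadata predates one of the keys; del would raise KeyError and break reuse detection. A two-line illustration:

    old_cmd = {"sources": ["src.rpm"]}  # older metadata, no "outputdir" key
    for key in ["outputdir", "sources"]:
        old_cmd.pop(key, None)  # tolerates the missing key where del would raise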

View File

@@ -14,17 +14,18 @@
 # along with this program; if not, see <https://gnu.org/licenses/>.

+import itertools
 import os
 import random
+import shlex
 import shutil
 import stat
 import json

 import productmd.treeinfo
 from productmd.images import Image
-from kobo.threads import ThreadPool, WorkerThread
-from kobo.shortcuts import run, relative_path
-from six.moves import shlex_quote
+from kobo.threads import ThreadPool
+from kobo.shortcuts import run, relative_path, compute_file_checksums

 from pungi.wrappers import iso
 from pungi.wrappers.createrepo import CreaterepoWrapper
@@ -42,6 +43,7 @@ from pungi.util import (
 from pungi.media_split import MediaSplitter, convert_media_size
 from pungi.compose_metadata.discinfo import read_discinfo, write_discinfo
 from pungi.runroot import Runroot
+from pungi.threading import TelemetryWorkerThread as WorkerThread

 from .. import createiso
@@ -154,6 +156,13 @@ class CreateisoPhase(PhaseLoggerMixin, PhaseBase):
             disc_num=cmd["disc_num"],
             disc_count=cmd["disc_count"],
         )
+        if self.compose.notifier:
+            self.compose.notifier.send(
+                "createiso-imagedone",
+                file=cmd["iso_path"],
+                arch=arch,
+                variant=str(variant),
+            )

     def try_reuse(self, cmd, variant, arch, opts):
         """Try to reuse image from previous compose.
@@ -181,6 +190,14 @@ class CreateisoPhase(PhaseLoggerMixin, PhaseBase):
         if not old_config:
             self.logger.info("%s - no config for old compose", log_msg)
             return False

+        # Disable reuse if unsigned packages are allowed. The older compose
+        # could have unsigned packages, and those may have been signed since
+        # then. We want to regenerate the ISO to have signatures.
+        if None in self.compose.conf["sigkeys"]:
+            self.logger.info("%s - unsigned packages are allowed", log_msg)
+            return False
+
         # Convert current configuration to JSON and back to encode it similarly
         # to the old one
         config = json.loads(json.dumps(self.compose.conf))
@@ -369,7 +386,7 @@ class CreateisoPhase(PhaseLoggerMixin, PhaseBase):
         if self.compose.notifier:
             self.compose.notifier.send("createiso-targets", deliverables=deliverables)

-        for (cmd, variant, arch) in commands:
+        for cmd, variant, arch in commands:
             self.pool.add(CreateIsoThread(self.pool))
             self.pool.queue_put((self.compose, cmd, variant, arch))
@@ -450,7 +467,14 @@ class CreateIsoThread(WorkerThread):
         try:
             run_createiso_command(
-                num, compose, bootable, arch, cmd["cmd"], mounts, log_file
+                num,
+                compose,
+                bootable,
+                arch,
+                cmd["cmd"],
+                mounts,
+                log_file,
+                cmd["iso_path"],
             )
         except Exception:
             self.fail(compose, cmd, variant, arch)
@@ -517,7 +541,10 @@ def add_iso_to_metadata(
     setattr(img, "can_fail", compose.can_fail(variant, arch, "iso"))
     setattr(img, "deliverable", "iso")
     try:
-        img.volume_id = iso.get_volume_id(iso_path)
+        img.volume_id = iso.get_volume_id(
+            iso_path,
+            compose.conf.get("createiso_use_xorrisofs"),
+        )
     except RuntimeError:
         pass
     if arch == "src":
@@ -528,7 +555,9 @@ def add_iso_to_metadata(
     return img


-def run_createiso_command(num, compose, bootable, arch, cmd, mounts, log_file):
+def run_createiso_command(
+    num, compose, bootable, arch, cmd, mounts, log_file, iso_path
+):
     packages = [
         "coreutils",
         "xorriso" if compose.conf.get("createiso_use_xorrisofs") else "genisoimage",
@@ -539,7 +568,6 @@ def run_createiso_command(
     if bootable:
         extra_packages = {
             "lorax": ["lorax", "which"],
-            "buildinstall": ["anaconda"],
         }
         packages.extend(extra_packages[compose.conf["buildinstall_method"]])
@ -571,6 +599,76 @@ def run_createiso_command(num, compose, bootable, arch, cmd, mounts, log_file):
weight=compose.conf["runroot_weights"].get("createiso"), weight=compose.conf["runroot_weights"].get("createiso"),
) )
if bootable and compose.conf.get("createiso_use_xorrisofs"):
fix_treeinfo_checksums(compose, iso_path, arch)
def fix_treeinfo_checksums(compose, iso_path, arch):
"""It is possible for the ISO to contain a .treefile with incorrect
checksums. By modifying the ISO (adding files) some of the images may
change.
This function fixes that after the fact by looking for incorrect checksums,
recalculating them and updating the .treeinfo file. Since the size of the
file doesn't change, this seems to not change any images.
"""
modified = False
with iso.mount(iso_path, compose._logger) as mountpoint:
ti = productmd.TreeInfo()
ti.load(os.path.join(mountpoint, ".treeinfo"))
for image, (type_, expected) in ti.checksums.checksums.items():
checksums = compute_file_checksums(os.path.join(mountpoint, image), [type_])
actual = checksums[type_]
if actual == expected:
# Everything fine here, skip to next image.
continue
compose.log_debug("%s: %s: checksum mismatch", iso_path, image)
# Update treeinfo with correct checksum
ti.checksums.checksums[image] = (type_, actual)
modified = True
if not modified:
compose.log_debug("%s: All checksums match, nothing to do.", iso_path)
return
try:
tmpdir = compose.mkdtemp(arch, prefix="fix-checksum-")
# Write modified .treeinfo
ti_path = os.path.join(tmpdir, ".treeinfo")
compose.log_debug("Storing modified .treeinfo in %s", ti_path)
ti.dump(ti_path)
# Write a modified DVD into a temporary path, that is atomically moved
# over the original file.
fixed_path = os.path.join(tmpdir, "fixed-checksum-dvd.iso")
cmd = ["xorriso"]
cmd.extend(
itertools.chain.from_iterable(
iso.xorriso_commands(arch, iso_path, fixed_path)
)
)
cmd.extend(["-map", ti_path, ".treeinfo"])
run(
cmd,
logfile=compose.paths.log.log_file(
arch, "checksum-fix_generate_%s" % os.path.basename(iso_path)
),
)
# The modified ISO no longer has implanted MD5, so that needs to be
# fixed again.
compose.log_debug("Implanting new MD5 to %s", fixed_path)
run(
iso.get_implantisomd5_cmd(fixed_path, compose.supported),
logfile=compose.paths.log.log_file(
arch, "checksum-fix_implantisomd5_%s" % os.path.basename(iso_path)
),
)
# All done, move the updated image to the final location.
compose.log_debug("Updating %s", iso_path)
os.rename(fixed_path, iso_path)
finally:
shutil.rmtree(tmpdir)
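
The per-image check above boils down to recomputing a digest and comparing it with the value recorded in .treeinfo. A minimal standalone sketch (standard library only; the helper name and sample payload are invented for the demo, the real code goes through compute_file_checksums):

import hashlib
import tempfile

def file_checksum(path, algorithm="sha256", chunk_size=1024 * 1024):
    # Stream the file so large images do not have to fit in memory.
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Self-contained demo: write a small file, record its checksum, then verify
# a fresh computation against the recorded one, as the loop above does.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"boot image payload")
expected = file_checksum(tmp.name)
actual = file_checksum(tmp.name)
if actual != expected:
    print("checksum mismatch, .treeinfo would need updating")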

def split_iso(compose, arch, variant, no_split=False, logger=None):
    """
@@ -685,7 +783,7 @@ def prepare_iso(
    if file_list_content:
        # write modified repodata only if there are packages available
-        run("cp -a %s/repodata %s/" % (shlex_quote(tree_dir), shlex_quote(iso_dir)))
+        run("cp -a %s/repodata %s/" % (shlex.quote(tree_dir), shlex.quote(iso_dir)))
        with open(file_list, "w") as f:
            f.write("\n".join(file_list_content))
    cmd = repo.get_createrepo_cmd(


@@ -27,7 +27,7 @@ import xml.dom.minidom
import productmd.modules
import productmd.rpms
from kobo.shortcuts import relative_path, run
-from kobo.threads import ThreadPool, WorkerThread
+from kobo.threads import ThreadPool

from ..module_util import Modulemd, collect_module_defaults, collect_module_obsoletes
from ..util import (
@@ -38,6 +38,7 @@ from ..util import (
from ..wrappers.createrepo import CreaterepoWrapper
from ..wrappers.scm import get_dir_from_scm
from .base import PhaseBase
+from ..threading import TelemetryWorkerThread as WorkerThread

CACHE_TOPDIR = "/var/cache/pungi/createrepo_c/"
createrepo_lock = threading.Lock()


@@ -112,7 +112,7 @@ def copy_extra_files(
        target_path = os.path.join(
            extra_files_dir, scm_dict.get("target", "").lstrip("/")
        )
-        getter(scm_dict, target_path, compose=compose)
+        getter(scm_dict, target_path, compose=compose, arch=arch)

    if os.listdir(extra_files_dir):
        metadata.populate_extra_files_metadata(


@@ -18,7 +18,8 @@ import hashlib
import json

from kobo.shortcuts import force_list
-from kobo.threads import ThreadPool, WorkerThread
+from kobo.threads import ThreadPool
+from pungi.threading import TelemetryWorkerThread as WorkerThread

import productmd.treeinfo
from productmd.extra_files import ExtraFiles
@@ -76,7 +77,7 @@ class ExtraIsosPhase(PhaseLoggerMixin, ConfigGuardedPhase, PhaseBase):
            for arch in sorted(arches):
                commands.append((config, variant, arch))

-        for (config, variant, arch) in commands:
+        for config, variant, arch in commands:
            self.pool.add(ExtraIsosThread(self.pool, self.bi))
            self.pool.queue_put((self.compose, config, variant, arch))
@@ -166,6 +167,7 @@ class ExtraIsosThread(WorkerThread):
            log_file=compose.paths.log.log_file(
                arch, "extraiso-%s" % os.path.basename(iso_path)
            ),
+            iso_path=iso_path,
        )

        img = add_iso_to_metadata(
@@ -204,6 +206,14 @@ class ExtraIsosThread(WorkerThread):
        if not old_config:
            self.pool.log_info("%s - no config for old compose", log_msg)
            return False
+
+        # Disable reuse if unsigned packages are allowed. The older compose
+        # could have unsigned packages, and those may have been signed since
+        # then. We want to regenerate the ISO to have signatures.
+        if None in compose.conf["sigkeys"]:
+            self.pool.log_info("%s - unsigned packages are allowed", log_msg)
+            return False
+
        # Convert current configuration to JSON and back to encode it similarly
        # to the old one
        config = json.loads(json.dumps(compose.conf))
@@ -333,23 +343,24 @@ def get_extra_files(compose, variant, arch, extra_files):
    included in the ISO.
    """
    extra_files_dir = compose.paths.work.extra_iso_extra_files_dir(arch, variant)
-    filelist = []
    for scm_dict in extra_files:
        getter = get_file_from_scm if "file" in scm_dict else get_dir_from_scm
        target = scm_dict.get("target", "").lstrip("/")
        target_path = os.path.join(extra_files_dir, target).rstrip("/")
-        filelist.extend(
-            os.path.join(target, f)
-            for f in getter(scm_dict, target_path, compose=compose)
-        )
+        getter(scm_dict, target_path, compose=compose, arch=arch)
+
+    filelist = [
+        os.path.relpath(os.path.join(root, f), extra_files_dir)
+        for root, _, files in os.walk(extra_files_dir)
+        for f in files
+    ]

    if filelist:
        metadata.populate_extra_files_metadata(
            ExtraFiles(),
            variant,
            arch,
            extra_files_dir,
-            filelist,
+            sorted(filelist),
            compose.conf["media_checksums"],
        )
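
The new filelist construction (walking extra_files_dir instead of trusting the getters' return values) can be exercised on its own; a small self-contained sketch with a made-up directory layout:

import os
import tempfile

# Build a tiny directory tree to stand in for extra_files_dir.
extra_files_dir = tempfile.mkdtemp()
os.makedirs(os.path.join(extra_files_dir, "docs"))
for rel in ("GPL", os.path.join("docs", "README")):
    with open(os.path.join(extra_files_dir, rel), "w") as f:
        f.write("example\n")

# Collect every file as a path relative to the directory, mirroring the
# list comprehension in get_extra_files().
filelist = [
    os.path.relpath(os.path.join(root, f), extra_files_dir)
    for root, _, files in os.walk(extra_files_dir)
    for f in files
]
print(sorted(filelist))  # ['GPL', 'docs/README']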


@@ -17,6 +17,7 @@
import glob
import json
import os
+import pickle
import shutil
import threading
@@ -24,7 +25,6 @@ from kobo.rpmlib import parse_nvra
from kobo.shortcuts import run
from productmd.rpms import Rpms
from pungi.phases.pkgset.common import get_all_arches
-from six.moves import cPickle as pickle

try:
    from queue import Queue
@@ -91,7 +91,7 @@ class GatherPhase(PhaseBase):
        # check whether variants from configuration value
        # 'variant_as_lookaside' are correct
-        for (requiring, required) in variant_as_lookaside:
+        for requiring, required in variant_as_lookaside:
            if requiring in all_variants and required not in all_variants:
                errors.append(
                    "variant_as_lookaside: variant %r doesn't exist but is "
@@ -100,7 +100,7 @@ class GatherPhase(PhaseBase):
        # check whether variants from configuration value
        # 'variant_as_lookaside' have same architectures
-        for (requiring, required) in variant_as_lookaside:
+        for requiring, required in variant_as_lookaside:
            if (
                requiring in all_variants
                and required in all_variants
@@ -236,7 +236,7 @@ def reuse_old_gather_packages(compose, arch, variant, package_sets, methods):
    if not hasattr(compose, "_gather_reused_variant_arch"):
        setattr(compose, "_gather_reused_variant_arch", [])
    variant_as_lookaside = compose.conf.get("variant_as_lookaside", [])
-    for (requiring, required) in variant_as_lookaside:
+    for requiring, required in variant_as_lookaside:
        if (
            requiring == variant.uid
            and (required, arch) not in compose._gather_reused_variant_arch
@@ -469,9 +469,7 @@ def gather_packages(compose, arch, variant, package_sets, fulltree_excludes=None
        )
    else:
        for source_name in ("module", "comps", "json"):
            packages, groups, filter_packages = get_variant_packages(
                compose, arch, variant, source_name, package_sets
            )
@@ -576,7 +574,6 @@ def trim_packages(compose, arch, variant, pkg_map, parent_pkgs=None, remove_pkgs
    move_to_parent_pkgs = _mk_pkg_map()
    removed_pkgs = _mk_pkg_map()
    for pkg_type, pkgs in pkg_map.items():
        new_pkgs = []
        for pkg in pkgs:
            pkg_path = pkg["path"]
@@ -648,9 +645,10 @@ def _make_lookaside_repo(compose, variant, arch, pkg_map, package_sets=None):
            compose.paths.work.topdir(arch="global"), "download"
        )
        + "/",
-        "koji": lambda: pungi.wrappers.kojiwrapper.KojiWrapper(
-            compose
-        ).koji_module.config.topdir.rstrip("/")
+        "koji": lambda: compose.conf.get(
+            "koji_cache",
+            pungi.wrappers.kojiwrapper.KojiWrapper(compose).koji_module.config.topdir,
+        ).rstrip("/")
        + "/",
        "kojimock": lambda: pungi.wrappers.kojiwrapper.KojiMockWrapper(
            compose,
@@ -668,6 +666,11 @@ def _make_lookaside_repo(compose, variant, arch, pkg_map, package_sets=None):
            # we need a union of all SRPMs.
            if pkg_type == "srpm" or pkg_arch == arch:
                for pkg in packages:
+                    if "lookaside" in pkg.get("flags", []):
+                        # We want to ignore lookaside packages, those will
+                        # be visible to the depending variants from the
+                        # lookaside repo directly.
+                        continue
                    pkg = pkg["path"]
                    if path_prefix and pkg.startswith(path_prefix):
                        pkg = pkg[len(path_prefix) :]


@@ -87,7 +87,7 @@ def link_files(compose, arch, variant, pkg_map, pkg_sets, manifest, srpm_map={})
        dst_relpath = os.path.join(packages_dir_relpath, package_path)

        # link file
-        pool.queue_put((pkg["path"], dst))
+        pool.queue_put((os.path.realpath(pkg["path"]), dst))

        # update rpm manifest
        pkg_obj = pkg_by_path[pkg["path"]]
@@ -116,7 +116,7 @@ def link_files(compose, arch, variant, pkg_map, pkg_sets, manifest, srpm_map={})
        dst_relpath = os.path.join(packages_dir_relpath, package_path)

        # link file
-        pool.queue_put((pkg["path"], dst))
+        pool.queue_put((os.path.realpath(pkg["path"]), dst))

        # update rpm manifest
        pkg_obj = pkg_by_path[pkg["path"]]
@@ -146,7 +146,7 @@ def link_files(compose, arch, variant, pkg_map, pkg_sets, manifest, srpm_map={})
        dst_relpath = os.path.join(packages_dir_relpath, package_path)

        # link file
-        pool.queue_put((pkg["path"], dst))
+        pool.queue_put((os.path.realpath(pkg["path"]), dst))

        # update rpm manifest
        pkg_obj = pkg_by_path[pkg["path"]]


@@ -15,7 +15,6 @@
import os
-import shutil

from kobo.shortcuts import run
from kobo.pkgset import SimpleRpmWrapper, RpmWrapper
@@ -220,9 +219,7 @@ def resolve_deps(compose, arch, variant, source_name=None):
    yum_arch = tree_arch_to_yum_arch(arch)
    tmp_dir = compose.paths.work.tmp_dir(arch, variant)
    cache_dir = compose.paths.work.pungi_cache_dir(arch, variant)
-    # TODO: remove YUM code, fully migrate to DNF
    backends = {
-        "yum": pungi_wrapper.get_pungi_cmd,
        "dnf": pungi_wrapper.get_pungi_cmd_dnf,
    }
    get_cmd = backends[compose.conf["gather_backend"]]
@@ -245,17 +242,6 @@ def resolve_deps(compose, arch, variant, source_name=None):
    with temp_dir(prefix="pungi_") as work_dir:
        run(cmd, logfile=pungi_log, show_cmd=True, workdir=work_dir, env=os.environ)

-    # Clean up tmp dir
-    # Workaround for rpm not honoring sgid bit which only appears when yum is used.
-    yumroot_dir = os.path.join(tmp_dir, "work", arch, "yumroot")
-    if os.path.isdir(yumroot_dir):
-        try:
-            shutil.rmtree(yumroot_dir)
-        except Exception as e:
-            compose.log_warning(
-                "Failed to clean up tmp dir: %s %s" % (yumroot_dir, str(e))
-            )
-
    with open(pungi_log, "r") as f:
        packages, broken_deps, missing_comps_pkgs = pungi_wrapper.parse_log(f)


@@ -47,9 +47,15 @@ class FakePackage(object):
    @property
    def files(self):
-        return [
-            os.path.join(dirname, basename) for (_, dirname, basename) in self.pkg.files
-        ]
+        paths = []
+        # createrepo_c.Package.files is a tuple, but its length differs across
+        # versions. The constants define the index at which the related value
+        # is located.
+        for entry in self.pkg.files:
+            paths.append(
+                os.path.join(entry[cr.FILE_ENTRY_PATH], entry[cr.FILE_ENTRY_NAME])
+            )
+        return paths

    @property
    def provides(self):
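
For illustration, the tuple-indexing pattern above with stand-in constants; the concrete index values here are assumptions for the demo, the real ones come from createrepo_c (which is exactly why the code indexes via the library's constants instead of unpacking):

import os

FILE_ENTRY_PATH = 1  # assumed index of the directory component
FILE_ENTRY_NAME = 2  # assumed index of the file name component

# Sample entries shaped like (type, path, name); only the relative order
# encoded in the constants matters.
files = [
    ("file", "/etc", "passwd"),
    ("dir", "/usr/share/doc", "README"),
]
paths = [os.path.join(e[FILE_ENTRY_PATH], e[FILE_ENTRY_NAME]) for e in files]
print(paths)  # ['/etc/passwd', '/usr/share/doc/README']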


@@ -16,7 +16,6 @@
import os
from pprint import pformat
import re
-import six

import pungi.arch
from pungi.util import pkg_is_rpm, pkg_is_srpm, pkg_is_debug
@@ -74,7 +73,7 @@ class GatherMethodNodeps(pungi.phases.gather.method.GatherMethodBase):
            if not pkg_is_rpm(pkg):
                continue
            for gathered_pkg, pkg_arch in packages:
-                if isinstance(gathered_pkg, six.string_types) and not re.match(
+                if isinstance(gathered_pkg, str) and not re.match(
                    gathered_pkg.replace(".", "\\.")
                    .replace("+", "\\+")
                    .replace("*", ".*")


@@ -13,7 +13,8 @@ from pungi.util import as_local_file, translate_path, get_repo_urls, version_gen
from pungi.phases import base
from pungi.linker import Linker
from pungi.wrappers.kojiwrapper import KojiWrapper
-from kobo.threads import ThreadPool, WorkerThread
+from kobo.threads import ThreadPool
+from pungi.threading import TelemetryWorkerThread as WorkerThread
from kobo.shortcuts import force_list
from productmd.images import Image
from productmd.rpms import Rpms
@@ -22,9 +23,13 @@ from productmd.rpms import Rpms

# This is a mapping from formats to file extensions. The format is what koji
# image-build command expects as argument, and the extension is what the file
# name will be ending with. The extensions are used to filter out which task
-# results will be pulled into the compose.
+# results will be pulled into the compose. This dict is also used later in
+# the process to set the image 'type' in productmd metadata terms - the type
+# is set as the first key in this dict which has the file's extension in its
+# values. This dict is imported and extended for similar purposes by other
+# phases (at least osbuild and kiwibuild).
EXTENSIONS = {
-    "docker": ["tar.gz", "tar.xz"],
+    "docker": ["tar.xz"],
    "liveimg-squashfs": ["liveimg.squashfs"],
    "qcow": ["qcow"],
    "qcow2": ["qcow2"],
@@ -344,7 +349,9 @@ class CreateImageBuildThread(WorkerThread):
        # let's not change filename of koji outputs
        image_dest = os.path.join(image_dir, os.path.basename(image_info["path"]))
-        src_file = os.path.realpath(image_info["path"])
+        src_file = compose.koji_downloader.get_file(
+            os.path.realpath(image_info["path"])
+        )
        linker.link(src_file, image_dest, link_type=cmd["link_type"])

        # Update image manifest


@@ -2,12 +2,13 @@

import os
import re

-from kobo.threads import ThreadPool, WorkerThread
+from kobo.threads import ThreadPool

from .base import ConfigGuardedPhase, PhaseLoggerMixin
from .. import util
from ..wrappers import kojiwrapper
from ..phases.osbs import add_metadata
+from ..threading import TelemetryWorkerThread as WorkerThread


class ImageContainerPhase(PhaseLoggerMixin, ConfigGuardedPhase):
@@ -76,7 +77,7 @@ class ImageContainerThread(WorkerThread):
            )
        if koji.watch_task(task_id, log_file) != 0:
            raise RuntimeError(
-                "ImageContainer: task %s failed: see %s for details"
+                "ImageContainer task failed: %s. See %s for details"
                % (task_id, log_file)
            )


@@ -0,0 +1,263 @@
# -*- coding: utf-8 -*-

import os

from kobo.threads import ThreadPool
from kobo import shortcuts
from productmd.images import Image

from . import base
from .. import util
from ..linker import Linker
from ..wrappers import kojiwrapper
from .image_build import EXTENSIONS
from ..threading import TelemetryWorkerThread as WorkerThread

IMAGEBUILDEREXTENSIONS = [
    ("vagrant-libvirt", ["vagrant.libvirt.box"], "vagrant-libvirt.box"),
    (
        "vagrant-virtualbox",
        ["vagrant.virtualbox.box"],
        "vagrant-virtualbox.box",
    ),
    ("container", ["oci.tar.xz"], "tar.xz"),
    ("wsl2", ["wsl"], "wsl"),
    # .iso images can be of many types - boot, cd, dvd, live... -
    # so 'boot' is just a default guess. 'iso' is not a valid
    # productmd image type
    ("boot", [".iso"], "iso"),
]


class ImageBuilderPhase(
    base.PhaseLoggerMixin, base.ImageConfigMixin, base.ConfigGuardedPhase
):
    name = "imagebuilder"

    def __init__(self, compose):
        super(ImageBuilderPhase, self).__init__(compose)
        self.pool = ThreadPool(logger=self.logger)

    def _get_arches(self, image_conf, arches):
        """Get an intersection of arches in the config dict and the given ones."""
        if "arches" in image_conf:
            arches = set(image_conf["arches"]) & arches
        return sorted(arches)

    @staticmethod
    def _get_repo_urls(compose, repos, arch="$basearch"):
        """
        Get list of repos with resolved repo URLs. Preserve repos defined
        as dicts.
        """
        resolved_repos = []
        for repo in repos:
            repo = util.get_repo_url(compose, repo, arch=arch)
            if repo is None:
                raise RuntimeError("Failed to resolve repo URL for %s" % repo)
            resolved_repos.append(repo)
        return resolved_repos

    def _get_repo(self, image_conf, variant):
        """
        Get a list of repos. First included are those explicitly listed in
        config, followed by the repo for the current variant if it's not
        included in the list already.
        """
        repos = shortcuts.force_list(image_conf.get("repos", []))

        if not variant.is_empty and variant.uid not in repos:
            repos.append(variant.uid)

        return ImageBuilderPhase._get_repo_urls(self.compose, repos, arch="$arch")

    def run(self):
        for variant in self.compose.get_variants():
            arches = set([x for x in variant.arches if x != "src"])

            for image_conf in self.get_config_block(variant):
                build_arches = self._get_arches(image_conf, arches)
                if not build_arches:
                    self.log_debug("skip: no arches")
                    continue

                # these properties can be set per-image *or* as e.g.
                # imagebuilder_release or global_release in the config
                generics = {
                    "release": self.get_release(image_conf),
                    "target": self.get_config(image_conf, "target"),
                    "types": self.get_config(image_conf, "types"),
                    "seed": self.get_config(image_conf, "seed"),
                    "scratch": self.get_config(image_conf, "scratch"),
                    "version": self.get_version(image_conf),
                }

                repo = self._get_repo(image_conf, variant)

                failable_arches = image_conf.pop("failable", [])
                if failable_arches == ["*"]:
                    failable_arches = image_conf["arches"]

                self.pool.add(RunImageBuilderThread(self.pool))
                self.pool.queue_put(
                    (
                        self.compose,
                        variant,
                        image_conf,
                        build_arches,
                        generics,
                        repo,
                        failable_arches,
                    )
                )

        self.pool.start()


class RunImageBuilderThread(WorkerThread):
    def process(self, item, num):
        (compose, variant, config, arches, generics, repo, failable_arches) = item
        self.failable_arches = []
        # the Koji task as a whole can only fail if *all* arches are failable
        can_task_fail = set(self.failable_arches).issuperset(set(arches))
        self.num = num
        with util.failable(
            compose,
            can_task_fail,
            variant,
            "*",
            "imageBuilderBuild",
            logger=self.pool._logger,
        ):
            self.worker(compose, variant, config, arches, generics, repo)

    def worker(self, compose, variant, config, arches, generics, repo):
        msg = "imageBuilderBuild task for variant %s" % variant.uid
        self.pool.log_info("[BEGIN] %s" % msg)

        koji = kojiwrapper.KojiWrapper(compose)
        koji.login()

        opts = {}
        opts["repos"] = repo

        if generics.get("release"):
            opts["release"] = generics["release"]

        if generics.get("seed"):
            opts["seed"] = generics["seed"]

        if generics.get("scratch"):
            opts["scratch"] = generics["scratch"]

        if config.get("ostree"):
            opts["ostree"] = config["ostree"]

        if config.get("blueprint"):
            opts["blueprint"] = config["blueprint"]

        task_id = koji.koji_proxy.imageBuilderBuild(
            generics["target"],
            arches,
            types=generics["types"],
            name=config["name"],
            version=generics["version"],
            opts=opts,
        )

        koji.save_task_id(task_id)

        # Wait for it to finish and capture the output into log file.
        log_dir = os.path.join(compose.paths.log.topdir(), "imageBuilderBuild")
        util.makedirs(log_dir)
        log_file = os.path.join(
            log_dir, "%s-%s-watch-task.log" % (variant.uid, self.num)
        )
        if koji.watch_task(task_id, log_file) != 0:
            raise RuntimeError(
                "imageBuilderBuild task failed: %s. See %s for details"
                % (task_id, log_file)
            )

        # Refresh koji session which may have timed out while the task was
        # running. Watching is done via a subprocess, so the session is
        # inactive.
        koji = kojiwrapper.KojiWrapper(compose)

        linker = Linker(logger=self.pool._logger)

        # Process all images in the build. There should be one for each
        # architecture, but we don't verify that.
        paths = koji.get_image_paths(task_id)

        for arch, paths in paths.items():
            for path in paths:
                type_, format_ = _find_type_and_format(path)
                if not format_:
                    # Path doesn't match any known type.
                    continue

                # image_dir is absolute path to which the image should be copied.
                # We also need the same path as relative to compose directory for
                # including in the metadata.
                if format_ == "iso":
                    # If the produced image is actually an ISO, it should go to
                    # iso/ subdirectory.
                    image_dir = compose.paths.compose.iso_dir(arch, variant)
                    rel_image_dir = compose.paths.compose.iso_dir(
                        arch, variant, relative=True
                    )
                else:
                    image_dir = compose.paths.compose.image_dir(variant) % {
                        "arch": arch
                    }
                    rel_image_dir = compose.paths.compose.image_dir(
                        variant, relative=True
                    ) % {"arch": arch}
                util.makedirs(image_dir)

                filename = os.path.basename(path)

                image_dest = os.path.join(image_dir, filename)

                src_file = compose.koji_downloader.get_file(path)

                linker.link(src_file, image_dest, link_type=compose.conf["link_type"])

                # Update image manifest
                img = Image(compose.im)

                # If user configured exact type, use it, otherwise try to
                # figure it out based on the koji output.
                img.type = config.get("manifest_type", type_)
                img.format = format_
                img.path = os.path.join(rel_image_dir, filename)
                img.mtime = util.get_mtime(image_dest)
                img.size = util.get_file_size(image_dest)
                img.arch = arch
                img.disc_number = 1  # We don't expect multiple disks
                img.disc_count = 1
                img.bootable = format_ == "iso"
                img.subvariant = config.get("subvariant", variant.uid)
                setattr(img, "can_fail", arch in self.failable_arches)
                setattr(img, "deliverable", "imageBuilderBuild")
                compose.im.add(variant=variant.uid, arch=arch, image=img)

        self.pool.log_info("[DONE ] %s (task id: %s)" % (msg, task_id))


def _find_type_and_format(path):
    # these are our image-builder-exclusive mappings for images whose extensions
    # aren't quite the same as imagefactory. they come first as we
    # want our oci.tar.xz mapping to win over the tar.xz one in
    # EXTENSIONS
    for type_, suffixes, format_ in IMAGEBUILDEREXTENSIONS:
        if any(path.endswith(suffix) for suffix in suffixes):
            return type_, format_
    for type_, suffixes in EXTENSIONS.items():
        for suffix in suffixes:
            if path.endswith(suffix):
                return type_, suffix
    return None, None
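
A quick illustration of the lookup order, with hypothetical file names; the image-builder-specific suffixes are checked before the shared EXTENSIONS table, so the oci.tar.xz mapping wins over docker's plain tar.xz:

# Expected results, assuming the tables shown above.
print(_find_type_and_format("Fedora-42.oci.tar.xz"))   # ('container', 'tar.xz')
print(_find_type_and_format("Fedora-42-netinst.iso"))  # ('boot', 'iso')
print(_find_type_and_format("Fedora-42.unknown"))      # (None, None)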

pungi/phases/kiwibuild.py

@@ -0,0 +1,263 @@
# -*- coding: utf-8 -*-

import os

from kobo.threads import ThreadPool
from kobo import shortcuts
from productmd.images import Image

from . import base
from .. import util
from ..linker import Linker
from ..wrappers import kojiwrapper
from .image_build import EXTENSIONS
from ..threading import TelemetryWorkerThread as WorkerThread

KIWIEXTENSIONS = [
    ("vhd-compressed", ["vhdfixed.xz"], "vhd.xz"),
    ("vagrant-libvirt", ["vagrant.libvirt.box"], "vagrant-libvirt.box"),
    ("vagrant-virtualbox", ["vagrant.virtualbox.box"], "vagrant-virtualbox.box"),
    # .iso images can be of many types - boot, cd, dvd, live... -
    # so 'boot' is just a default guess. 'iso' is not a valid
    # productmd image type
    ("boot", [".iso"], "iso"),
    ("fex", ["erofs.xz"], "erofs.xz"),
    ("fex", ["erofs.gz"], "erofs.gz"),
    ("fex", ["erofs"], "erofs"),
    ("fex", ["squashfs.xz"], "squashfs.xz"),
    ("fex", ["squashfs.gz"], "squashfs.gz"),
    ("fex", ["squashfs"], "squashfs"),
    ("container", ["oci.tar.xz"], "tar.xz"),
    ("wsl2", ["wsl"], "wsl"),
]


class KiwiBuildPhase(
    base.PhaseLoggerMixin, base.ImageConfigMixin, base.ConfigGuardedPhase
):
    name = "kiwibuild"

    def __init__(self, compose):
        super(KiwiBuildPhase, self).__init__(compose)
        self.pool = ThreadPool(logger=self.logger)

    def _get_arches(self, image_conf, arches):
        """Get an intersection of arches in the config dict and the given ones."""
        if "arches" in image_conf:
            arches = set(image_conf["arches"]) & arches
        return sorted(arches)

    @staticmethod
    def _get_repo_urls(compose, repos, arch="$basearch"):
        """
        Get list of repos with resolved repo URLs. Preserve repos defined
        as dicts.
        """
        resolved_repos = []
        for repo in repos:
            repo = util.get_repo_url(compose, repo, arch=arch)
            if repo is None:
                raise RuntimeError("Failed to resolve repo URL for %s" % repo)
            resolved_repos.append(repo)
        return resolved_repos

    def _get_repo(self, image_conf, variant):
        """
        Get a list of repos. First included are those explicitly listed in
        config, followed by the repo for the current variant if it's not
        included in the list already.
        """
        repos = shortcuts.force_list(image_conf.get("repos", []))

        if not variant.is_empty and variant.uid not in repos:
            repos.append(variant.uid)

        return KiwiBuildPhase._get_repo_urls(self.compose, repos, arch="$arch")

    def run(self):
        for variant in self.compose.get_variants():
            arches = set([x for x in variant.arches if x != "src"])

            for image_conf in self.get_config_block(variant):
                build_arches = self._get_arches(image_conf, arches)
                if not build_arches:
                    self.log_debug("skip: no arches")
                    continue

                # these properties can be set per-image *or* as e.g.
                # kiwibuild_description_scm or global_release in the config
                generics = {
                    "release": self.get_release(image_conf),
                    "target": self.get_config(image_conf, "target"),
                    "descscm": self.get_config(image_conf, "description_scm"),
                    "descpath": self.get_config(image_conf, "description_path"),
                    "type": self.get_config(image_conf, "type"),
                    "type_attr": self.get_config(image_conf, "type_attr"),
                    "bundle_name_format": self.get_config(
                        image_conf, "bundle_name_format"
                    ),
                    "version": self.get_version(image_conf),
                    "repo_releasever": self.get_config(image_conf, "repo_releasever"),
                    "use_buildroot_repo": self.get_config(
                        image_conf, "use_buildroot_repo"
                    ),
                }

                repo = self._get_repo(image_conf, variant)

                failable_arches = image_conf.pop("failable", [])
                if failable_arches == ["*"]:
                    failable_arches = image_conf["arches"]

                self.pool.add(RunKiwiBuildThread(self.pool))
                self.pool.queue_put(
                    (
                        self.compose,
                        variant,
                        image_conf,
                        build_arches,
                        generics,
                        repo,
                        failable_arches,
                    )
                )

        self.pool.start()


class RunKiwiBuildThread(WorkerThread):
    def process(self, item, num):
        (compose, variant, config, arches, generics, repo, failable_arches) = item
        self.failable_arches = failable_arches
        # the Koji task as a whole can only fail if *all* arches are failable
        can_task_fail = set(failable_arches).issuperset(set(arches))
        self.num = num
        with util.failable(
            compose,
            can_task_fail,
            variant,
            "*",
            "kiwibuild",
            logger=self.pool._logger,
        ):
            self.worker(compose, variant, config, arches, generics, repo)

    def worker(self, compose, variant, config, arches, generics, repo):
        msg = "kiwibuild task for variant %s" % variant.uid
        self.pool.log_info("[BEGIN] %s" % msg)

        koji = kojiwrapper.KojiWrapper(compose)
        koji.login()

        task_id = koji.koji_proxy.kiwiBuild(
            generics["target"],
            arches,
            generics["descscm"],
            generics["descpath"],
            profile=config["kiwi_profile"],
            release=generics["release"],
            repos=repo,
            type=generics["type"],
            type_attr=generics["type_attr"],
            result_bundle_name_format=generics["bundle_name_format"],
            # this ensures the task won't fail if only failable arches fail
            optional_arches=self.failable_arches,
            version=generics["version"],
            repo_releasever=generics["repo_releasever"],
            use_buildroot_repo=generics["use_buildroot_repo"],
        )

        koji.save_task_id(task_id)

        # Wait for it to finish and capture the output into log file.
        log_dir = os.path.join(compose.paths.log.topdir(), "kiwibuild")
        util.makedirs(log_dir)
        log_file = os.path.join(
            log_dir, "%s-%s-watch-task.log" % (variant.uid, self.num)
        )
        if koji.watch_task(task_id, log_file) != 0:
            raise RuntimeError(
                "kiwiBuild task failed: %s. See %s for details" % (task_id, log_file)
            )

        # Refresh koji session which may have timed out while the task was
        # running. Watching is done via a subprocess, so the session is
        # inactive.
        koji = kojiwrapper.KojiWrapper(compose)

        linker = Linker(logger=self.pool._logger)

        # Process all images in the build. There should be one for each
        # architecture, but we don't verify that.
        paths = koji.get_image_paths(task_id)

        for arch, paths in paths.items():
            for path in paths:
                type_, format_ = _find_type_and_format(path)
                if not format_:
                    # Path doesn't match any known type.
                    continue

                # image_dir is absolute path to which the image should be copied.
                # We also need the same path as relative to compose directory for
                # including in the metadata.
                if format_ == "iso":
                    # If the produced image is actually an ISO, it should go to
                    # iso/ subdirectory.
                    image_dir = compose.paths.compose.iso_dir(arch, variant)
                    rel_image_dir = compose.paths.compose.iso_dir(
                        arch, variant, relative=True
                    )
                else:
                    image_dir = compose.paths.compose.image_dir(variant) % {
                        "arch": arch
                    }
                    rel_image_dir = compose.paths.compose.image_dir(
                        variant, relative=True
                    ) % {"arch": arch}
                util.makedirs(image_dir)

                filename = os.path.basename(path)

                image_dest = os.path.join(image_dir, filename)

                src_file = compose.koji_downloader.get_file(path)

                linker.link(src_file, image_dest, link_type=compose.conf["link_type"])

                # Update image manifest
                img = Image(compose.im)

                # If user configured exact type, use it, otherwise try to
                # figure it out based on the koji output.
                img.type = config.get("manifest_type", type_)
                img.format = format_
                img.path = os.path.join(rel_image_dir, filename)
                img.mtime = util.get_mtime(image_dest)
                img.size = util.get_file_size(image_dest)
                img.arch = arch
                img.disc_number = 1  # We don't expect multiple disks
                img.disc_count = 1
                # Kiwi produces only bootable ISOs. Other kinds of images are
                # not bootable.
                img.bootable = format_ == "iso"
                img.subvariant = config.get("subvariant", variant.uid)
                setattr(img, "can_fail", arch in self.failable_arches)
                setattr(img, "deliverable", "kiwibuild")
                compose.im.add(variant=variant.uid, arch=arch, image=img)

        self.pool.log_info("[DONE ] %s (task id: %s)" % (msg, task_id))


def _find_type_and_format(path):
    # these are our kiwi-exclusive mappings for images whose extensions
    # aren't quite the same as imagefactory. they come first as we
    # want our oci.tar.xz mapping to win over the tar.xz one in
    # EXTENSIONS
    for type_, suffixes, format_ in KIWIEXTENSIONS:
        if any(path.endswith(suffix) for suffix in suffixes):
            return type_, format_
    for type_, suffixes in EXTENSIONS.items():
        for suffix in suffixes:
            if path.endswith(suffix):
                return type_, suffix
    return None, None
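
The failable-arch handling in RunKiwiBuildThread.process reduces to a single set operation; a minimal sketch with example arch lists:

# The task as a whole may only fail when every requested arch is failable.
def can_task_fail(failable_arches, arches):
    return set(failable_arches).issuperset(set(arches))

print(can_task_fail(["x86_64", "aarch64"], ["x86_64", "aarch64"]))  # True
print(can_task_fail(["aarch64"], ["x86_64", "aarch64"]))            # False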


@@ -1,406 +0,0 @@
# -*- coding: utf-8 -*-

# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; version 2 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU Library General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, see <https://gnu.org/licenses/>.

import os
import sys
import time
import shutil

from kobo.threads import ThreadPool, WorkerThread
from kobo.shortcuts import run, save_to_file, force_list
from productmd.images import Image
from six.moves import shlex_quote

from pungi.wrappers.kojiwrapper import KojiWrapper
from pungi.wrappers import iso
from pungi.phases import base
from pungi.util import makedirs, get_mtime, get_file_size, failable
from pungi.util import get_repo_urls

# HACK: define cmp in python3
if sys.version_info[0] == 3:

    def cmp(a, b):
        return (a > b) - (a < b)


class LiveImagesPhase(
    base.PhaseLoggerMixin, base.ImageConfigMixin, base.ConfigGuardedPhase
):
    name = "live_images"

    def __init__(self, compose):
        super(LiveImagesPhase, self).__init__(compose)
        self.pool = ThreadPool(logger=self.logger)

    def _get_repos(self, arch, variant, data):
        repos = []
        if not variant.is_empty:
            repos.append(variant.uid)
        repos.extend(force_list(data.get("repo", [])))
        return get_repo_urls(self.compose, repos, arch=arch)

    def run(self):
        symlink_isos_to = self.compose.conf.get("symlink_isos_to")
        commands = []

        for variant in self.compose.all_variants.values():
            for arch in variant.arches + ["src"]:
                for data in self.get_config_block(variant, arch):
                    subvariant = data.get("subvariant", variant.uid)
                    type = data.get("type", "live")

                    if type == "live":
                        dest_dir = self.compose.paths.compose.iso_dir(
                            arch, variant, symlink_to=symlink_isos_to
                        )
                    elif type == "appliance":
                        dest_dir = self.compose.paths.compose.image_dir(
                            variant, symlink_to=symlink_isos_to
                        )
                        dest_dir = dest_dir % {"arch": arch}
                        makedirs(dest_dir)
                    else:
                        raise RuntimeError("Unknown live image type %s" % type)
                    if not dest_dir:
                        continue

                    cmd = {
                        "name": data.get("name"),
                        "version": self.get_version(data),
                        "release": self.get_release(data),
                        "dest_dir": dest_dir,
                        "build_arch": arch,
                        "ks_file": data["kickstart"],
                        "ksurl": self.get_ksurl(data),
                        # Used for images wrapped in RPM
                        "specfile": data.get("specfile", None),
                        # Scratch (only taken in consideration if specfile
                        # specified) For images wrapped in rpm is scratch
                        # disabled by default For other images is scratch
                        # always on
                        "scratch": data.get("scratch", False),
                        "sign": False,
                        "type": type,
                        "label": "",  # currently not used
                        "subvariant": subvariant,
                        "failable_arches": data.get("failable", []),
                        # First see if live_target is specified, then fall back
                        # to regular setup of local, phase and global setting.
                        "target": self.compose.conf.get("live_target")
                        or self.get_config(data, "target"),
                    }

                    cmd["repos"] = self._get_repos(arch, variant, data)

                    # Signing of the rpm wrapped image
                    if not cmd["scratch"] and data.get("sign"):
                        cmd["sign"] = True

                    cmd["filename"] = self._get_file_name(
                        arch, variant, cmd["name"], cmd["version"]
                    )

                    commands.append((cmd, variant, arch))

        for (cmd, variant, arch) in commands:
            self.pool.add(CreateLiveImageThread(self.pool))
            self.pool.queue_put((self.compose, cmd, variant, arch))

        self.pool.start()

    def _get_file_name(self, arch, variant, name=None, version=None):
        if self.compose.conf["live_images_no_rename"]:
            return None

        disc_type = self.compose.conf["disc_types"].get("live", "live")

        format = (
            "%(compose_id)s-%(variant)s-%(arch)s-%(disc_type)s%(disc_num)s%(suffix)s"
        )
        # Custom name (prefix)
        if name:
            custom_iso_name = name
            if version:
                custom_iso_name += "-%s" % version
            format = (
                custom_iso_name
                + "-%(variant)s-%(arch)s-%(disc_type)s%(disc_num)s%(suffix)s"
            )

        # XXX: hardcoded disc_num
        return self.compose.get_image_name(
            arch, variant, disc_type=disc_type, disc_num=None, format=format
        )


class CreateLiveImageThread(WorkerThread):
    EXTS = (".iso", ".raw.xz")

    def process(self, item, num):
        compose, cmd, variant, arch = item
        self.failable_arches = cmd.get("failable_arches", [])
        self.can_fail = bool(self.failable_arches)
        with failable(
            compose,
            self.can_fail,
            variant,
            arch,
            "live",
            cmd.get("subvariant"),
            logger=self.pool._logger,
        ):
            self.worker(compose, cmd, variant, arch, num)

    def worker(self, compose, cmd, variant, arch, num):
        self.basename = "%(name)s-%(version)s-%(release)s" % cmd
        log_file = compose.paths.log.log_file(arch, "liveimage-%s" % self.basename)

        subvariant = cmd.pop("subvariant")

        imgname = "%s-%s-%s-%s" % (
            compose.ci_base.release.short,
            subvariant,
            "Live" if cmd["type"] == "live" else "Disk",
            arch,
        )

        msg = "Creating ISO (arch: %s, variant: %s): %s" % (
            arch,
            variant,
            self.basename,
        )
        self.pool.log_info("[BEGIN] %s" % msg)

        koji_wrapper = KojiWrapper(compose)
        _, version = compose.compose_id.rsplit("-", 1)
        name = cmd["name"] or imgname
        version = cmd["version"] or version
        archive = False
        if cmd["specfile"] and not cmd["scratch"]:
            # Non scratch build are allowed only for rpm wrapped images
            archive = True
        koji_cmd = koji_wrapper.get_create_image_cmd(
            name,
            version,
            cmd["target"],
            cmd["build_arch"],
            cmd["ks_file"],
            cmd["repos"],
            image_type=cmd["type"],
            wait=True,
            archive=archive,
            specfile=cmd["specfile"],
            release=cmd["release"],
            ksurl=cmd["ksurl"],
        )

        # avoid race conditions?
        # Kerberos authentication failed:
        # Permission denied in replay cache code (-1765328215)
        time.sleep(num * 3)

        output = koji_wrapper.run_blocking_cmd(koji_cmd, log_file=log_file)
        if output["retcode"] != 0:
            raise RuntimeError(
                "LiveImage task failed: %s. See %s for more details."
                % (output["task_id"], log_file)
            )

        # copy finished image to isos/
        image_path = [
            path
            for path in koji_wrapper.get_image_path(output["task_id"])
            if self._is_image(path)
        ]
        if len(image_path) != 1:
            raise RuntimeError(
                "Got %d images from task %d, expected 1."
                % (len(image_path), output["task_id"])
            )
        image_path = image_path[0]
        filename = cmd.get("filename") or os.path.basename(image_path)
        destination = os.path.join(cmd["dest_dir"], filename)
        shutil.copy2(image_path, destination)

        # copy finished rpm to isos/ (if rpm wrapped ISO was built)
        if cmd["specfile"]:
            rpm_paths = koji_wrapper.get_wrapped_rpm_path(output["task_id"])

            if cmd["sign"]:
                # Sign the rpm wrapped images and get their paths
                self.pool.log_info(
                    "Signing rpm wrapped images in task_id: %s (expected key ID: %s)"
                    % (output["task_id"], compose.conf.get("signing_key_id"))
                )
                signed_rpm_paths = self._sign_image(
                    koji_wrapper, compose, cmd, output["task_id"]
                )
                if signed_rpm_paths:
                    rpm_paths = signed_rpm_paths

            for rpm_path in rpm_paths:
                shutil.copy2(rpm_path, cmd["dest_dir"])

        if cmd["type"] == "live":
            # ISO manifest only makes sense for live images
            self._write_manifest(destination)

        self._add_to_images(
            compose,
            variant,
            subvariant,
            arch,
            cmd["type"],
            self._get_format(image_path),
            destination,
        )

        self.pool.log_info("[DONE ] %s (task id: %s)" % (msg, output["task_id"]))

    def _add_to_images(self, compose, variant, subvariant, arch, type, format, path):
        """Adds the image to images.json"""
        img = Image(compose.im)
        img.type = "raw-xz" if type == "appliance" else type
        img.format = format
        img.path = os.path.relpath(path, compose.paths.compose.topdir())
        img.mtime = get_mtime(path)
        img.size = get_file_size(path)
        img.arch = arch
        img.disc_number = 1  # We don't expect multiple disks
        img.disc_count = 1
        img.bootable = True
        img.subvariant = subvariant
        setattr(img, "can_fail", self.can_fail)
        setattr(img, "deliverable", "live")
        compose.im.add(variant=variant.uid, arch=arch, image=img)

    def _is_image(self, path):
        for ext in self.EXTS:
            if path.endswith(ext):
                return True
        return False

    def _get_format(self, path):
        """Get format based on extension."""
        for ext in self.EXTS:
            if path.endswith(ext):
                return ext[1:]
        raise RuntimeError("Getting format for unknown image %s" % path)

    def _write_manifest(self, iso_path):
        """Generate manifest for ISO at given path.

        :param iso_path: (str) absolute path to the ISO
        """
        dir, filename = os.path.split(iso_path)
        run("cd %s && %s" % (shlex_quote(dir), iso.get_manifest_cmd(filename)))

    def _sign_image(self, koji_wrapper, compose, cmd, koji_task_id):
        signing_key_id = compose.conf.get("signing_key_id")
        signing_command = compose.conf.get("signing_command")

        if not signing_key_id:
            self.pool.log_warning(
                "Signing is enabled but signing_key_id is not specified"
            )
            self.pool.log_warning("Signing skipped")
            return None
        if not signing_command:
            self.pool.log_warning(
                "Signing is enabled but signing_command is not specified"
            )
            self.pool.log_warning("Signing skipped")
            return None

        # Prepare signing log file
        signing_log_file = compose.paths.log.log_file(
            cmd["build_arch"], "live_images-signing-%s" % self.basename
        )

        # Sign the rpm wrapped images
        try:
            sign_builds_in_task(
                koji_wrapper,
                koji_task_id,
                signing_command,
                log_file=signing_log_file,
                signing_key_password=compose.conf.get("signing_key_password"),
            )
        except RuntimeError:
            self.pool.log_error(
                "Error while signing rpm wrapped images. See log: %s" % signing_log_file
            )
            raise

        # Get paths to the signed rpms
        signing_key_id = signing_key_id.lower()  # Koji uses lowercase in paths
        rpm_paths = koji_wrapper.get_signed_wrapped_rpms_paths(
            koji_task_id, signing_key_id
        )

        # Wait until files are available
        if wait_paths(rpm_paths, 60 * 15):
            # Files are ready
            return rpm_paths

        # Signed RPMs are not available
        self.pool.log_warning("Signed files are not available: %s" % rpm_paths)
        self.pool.log_warning("Unsigned files will be used")
        return None


def wait_paths(paths, timeout=60):
    started = time.time()
    remaining = paths[:]
    while True:
        for path in remaining[:]:
            if os.path.exists(path):
                remaining.remove(path)
        if not remaining:
            break
        time.sleep(1)
        if timeout >= 0 and (time.time() - started) > timeout:
            return False
    return True


def sign_builds_in_task(
    koji_wrapper, task_id, signing_command, log_file=None, signing_key_password=None
):
    # Get list of nvrs that should be signed
    nvrs = koji_wrapper.get_build_nvrs(task_id)
    if not nvrs:
        # No builds are available (scratch build, etc.?)
        return

    # Append builds to sign_cmd
    for nvr in nvrs:
        signing_command += " '%s'" % nvr

    # Log signing command before password is filled in it
    if log_file:
        save_to_file(log_file, signing_command, append=True)

    # Fill password into the signing command
    if signing_key_password:
        signing_command = signing_command % {
            "signing_key_password": signing_key_password
        }

    # Sign the builds
    run(signing_command, can_fail=False, show_cmd=False, logfile=log_file)


@@ -9,8 +9,9 @@ from pungi.util import translate_path, get_repo_urls
from pungi.phases.base import ConfigGuardedPhase, ImageConfigMixin, PhaseLoggerMixin
from pungi.linker import Linker
from pungi.wrappers.kojiwrapper import KojiWrapper
-from kobo.threads import ThreadPool, WorkerThread
+from kobo.threads import ThreadPool
from productmd.images import Image
+from pungi.threading import TelemetryWorkerThread as WorkerThread


class LiveMediaPhase(PhaseLoggerMixin, ImageConfigMixin, ConfigGuardedPhase):
@@ -182,7 +183,9 @@ class LiveMediaThread(WorkerThread):
        # let's not change filename of koji outputs
        image_dest = os.path.join(image_dir, os.path.basename(image_info["path"]))
-        src_file = os.path.realpath(image_info["path"])
+        src_file = compose.koji_downloader.get_file(
+            os.path.realpath(image_info["path"])
+        )
        linker.link(src_file, image_dest, link_type=link_type)

        # Update image manifest


@@ -1,18 +1,19 @@
# -*- coding: utf-8 -*-

+import configparser
import copy
import fnmatch
import json
import os

-from kobo.threads import ThreadPool, WorkerThread
+from kobo.threads import ThreadPool
from kobo import shortcuts
from productmd.rpms import Rpms
-from six.moves import configparser

from .base import ConfigGuardedPhase, PhaseLoggerMixin
from .. import util
from ..wrappers import kojiwrapper
from ..wrappers.scm import get_file_from_scm
+from ..threading import TelemetryWorkerThread as WorkerThread


class OSBSPhase(PhaseLoggerMixin, ConfigGuardedPhase):
@@ -134,7 +135,7 @@ class OSBSThread(WorkerThread):
        # though there is not much there).
        if koji.watch_task(task_id, log_file) != 0:
            raise RuntimeError(
-                "OSBS: task %s failed: see %s for details" % (task_id, log_file)
+                "OSBS task failed: %s. See %s for details" % (task_id, log_file)
            )

        scratch = config.get("scratch", False)
@@ -154,7 +155,7 @@ class OSBSThread(WorkerThread):
            reuse_file,
        )

-        self.pool.log_info("[DONE ] %s" % msg)
+        self.pool.log_info("[DONE ] %s (task id: %s)" % (msg, task_id))

    def _get_image_conf(self, compose, config):
        """Get image-build.conf from git repo.


@@ -1,7 +1,7 @@
# -*- coding: utf-8 -*-

import os

-from kobo.threads import ThreadPool, WorkerThread
+from kobo.threads import ThreadPool
from kobo import shortcuts
from productmd.images import Image
@@ -10,6 +10,22 @@ from .. import util
from ..linker import Linker
from ..wrappers import kojiwrapper
from .image_build import EXTENSIONS
+from ..threading import TelemetryWorkerThread as WorkerThread
+
+# copy and modify EXTENSIONS with some that osbuild produces but which
+# do not exist as `koji image-build` formats
+OSBUILDEXTENSIONS = EXTENSIONS.copy()
+OSBUILDEXTENSIONS.update(
+    # The key is the type_name as used in Koji archive, the second is a list of
+    # expected file extensions.
+    {
+        "iso": ["iso"],
+        "vhd-compressed": ["vhd.gz", "vhd.xz"],
+        # The image is technically wsl2, but the type_name in Koji is set to
+        # wsl.
+        "wsl": ["wsl"],
+    }
+)


class OSBuildPhase(
@@ -159,6 +175,10 @@ class RunOSBuildThread(WorkerThread):
        if upload_options:
            opts["upload_options"] = upload_options

+        customizations = config.get("customizations")
+        if customizations:
+            opts["customizations"] = customizations
+
        if release:
            opts["release"] = release
        task_id = koji.koji_proxy.osbuildImage(
@@ -181,7 +201,7 @@ class RunOSBuildThread(WorkerThread):
        )
        if koji.watch_task(task_id, log_file) != 0:
            raise RuntimeError(
-                "OSBuild: task %s failed: see %s for details" % (task_id, log_file)
+                "OSBuild task failed: %s. See %s for details" % (task_id, log_file)
            )

        # Refresh koji session which may have timed out while the task was
@@ -199,7 +219,7 @@ class RunOSBuildThread(WorkerThread):
        # architecture, but we don't verify that.
        build_info = koji.koji_proxy.getBuild(build_id)
        for archive in koji.koji_proxy.listArchives(buildID=build_id):
-            if archive["type_name"] not in EXTENSIONS:
+            if archive["type_name"] not in OSBUILDEXTENSIONS:
                # Ignore values that are not of required types.
                continue
@@ -212,21 +232,32 @@ class RunOSBuildThread(WorkerThread):
                # image_dir is absolute path to which the image should be copied.
                # We also need the same path as relative to compose directory for
                # including in the metadata.
+                if archive["type_name"] == "iso":
+                    # If the produced image is actually an ISO, it should go to
+                    # iso/ subdirectory.
+                    image_dir = compose.paths.compose.iso_dir(arch, variant)
+                    rel_image_dir = compose.paths.compose.iso_dir(
+                        arch, variant, relative=True
+                    )
+                else:
-                image_dir = compose.paths.compose.image_dir(variant) % {"arch": arch}
-                rel_image_dir = compose.paths.compose.image_dir(variant, relative=True) % {
-                    "arch": arch
-                }
+                    image_dir = compose.paths.compose.image_dir(variant) % {"arch": arch}
+                    rel_image_dir = compose.paths.compose.image_dir(
+                        variant, relative=True
+                    ) % {"arch": arch}
                util.makedirs(image_dir)

                image_dest = os.path.join(image_dir, archive["filename"])
-                src_file = os.path.join(
-                    koji.koji_module.pathinfo.imagebuild(build_info), archive["filename"]
-                )
+                src_file = compose.koji_downloader.get_file(
+                    os.path.join(
+                        koji.koji_module.pathinfo.imagebuild(build_info),
+                        archive["filename"],
+                    ),
+                )
                linker.link(src_file, image_dest, link_type=compose.conf["link_type"])

-                for suffix in EXTENSIONS[archive["type_name"]]:
+                for suffix in OSBUILDEXTENSIONS[archive["type_name"]]:
                    if archive["filename"].endswith(suffix):
                        break
                else:
@@ -238,7 +269,30 @@ class RunOSBuildThread(WorkerThread):
                # Update image manifest
                img = Image(compose.im)
+                # Get the manifest type from the config if supplied, otherwise we
+                # determine the manifest type based on the koji output
+                img.type = config.get("manifest_type")
+                if not img.type:
+                    if archive["type_name"] == "wsl":
+                        # productmd only knows wsl2 as type, so let's translate
+                        # from the koji type so that users don't need to set the
+                        # type explicitly. There really is no other possible type
+                        # here anyway.
+                        img.type = "wsl2"
+                    elif archive["type_name"] != "iso":
-                img.type = archive["type_name"]
+                        img.type = archive["type_name"]
+                    else:
+                        fn = archive["filename"].lower()
+                        if "ostree" in fn:
+                            img.type = "dvd-ostree-osbuild"
+                        elif "live" in fn:
+                            img.type = "live-osbuild"
+                        elif "netinst" in fn or "boot" in fn:
+                            img.type = "boot"
+                        else:
+                            img.type = "dvd"
                img.format = suffix
                img.path = os.path.join(rel_image_dir, archive["filename"])
                img.mtime = util.get_mtime(image_dest)
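
A condensed sketch of the type-resolution order implemented above; the helper and sample values are invented for the demo:

def resolve_type(manifest_type, type_name, filename):
    # Explicit configuration always wins.
    if manifest_type:
        return manifest_type
    if type_name == "wsl":
        return "wsl2"  # translate Koji's type_name to productmd's type
    if type_name != "iso":
        return type_name
    fn = filename.lower()
    if "ostree" in fn:
        return "dvd-ostree-osbuild"
    if "live" in fn:
        return "live-osbuild"
    if "netinst" in fn or "boot" in fn:
        return "boot"
    return "dvd"

print(resolve_type(None, "wsl", "image.wsl"))           # wsl2
print(resolve_type(None, "iso", "Fedora-netinst.iso"))  # boot
print(resolve_type("dvd", "iso", "anything.iso"))       # dvd (configured)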


@@ -4,7 +4,7 @@ import copy
import json
import os

from kobo import shortcuts
-from kobo.threads import ThreadPool, WorkerThread
+from kobo.threads import ThreadPool
from collections import OrderedDict

from pungi.arch_utils import getBaseArch
@@ -14,6 +14,7 @@ from .. import util
from ..ostree.utils import get_ref_from_treefile, get_commitid_from_commitid_file
from ..util import get_repo_dicts, translate_path
from ..wrappers import scm
+from ..threading import TelemetryWorkerThread as WorkerThread


class OSTreePhase(ConfigGuardedPhase):
@@ -85,7 +86,7 @@ class OSTreeThread(WorkerThread):
        comps_repo = compose.paths.work.comps_repo(
            "$basearch", variant=variant, create_dir=False
        )
-        repos = shortcuts.force_list(config["repo"]) + self.repos
+        repos = shortcuts.force_list(config.get("repo", [])) + self.repos
        if compose.has_comps:
            repos.append(translate_path(compose, comps_repo))
        repos = get_repo_dicts(repos, logger=self.pool)
@@ -168,7 +169,9 @@ class OSTreeThread(WorkerThread):
                ("unified-core", config.get("unified_core", False)),
            ]
        )
-        packages = ["pungi", "ostree", "rpm-ostree"]
+        default_packages = ["pungi", "ostree", "rpm-ostree"]
+        additional_packages = config.get("runroot_packages", [])
+        packages = default_packages + additional_packages
        log_file = os.path.join(self.logdir, "runroot.log")
        mounts = [compose.topdir, config["ostree_repo"]]
        runroot = Runroot(compose, phase="ostree")


@@ -0,0 +1,188 @@
+# -*- coding: utf-8 -*-
+import copy
+import json
+import os
+
+from kobo import shortcuts
+from kobo.threads import ThreadPool, WorkerThread
+from productmd.images import Image
+
+from pungi.runroot import Runroot
+from .base import ConfigGuardedPhase
+from .. import util
+from ..util import get_repo_dicts, translate_path
+from ..wrappers import scm
+
+
+class OSTreeContainerPhase(ConfigGuardedPhase):
+    name = "ostree_container"
+
+    def __init__(self, compose, pkgset_phase=None):
+        super(OSTreeContainerPhase, self).__init__(compose)
+        self.pool = ThreadPool(logger=self.compose._logger)
+        self.pkgset_phase = pkgset_phase
+
+    def get_repos(self):
+        return [
+            translate_path(
+                self.compose,
+                self.compose.paths.work.pkgset_repo(
+                    pkgset.name, "$basearch", create_dir=False
+                ),
+            )
+            for pkgset in self.pkgset_phase.package_sets
+        ]
+
+    def _enqueue(self, variant, arch, conf):
+        self.pool.add(OSTreeContainerThread(self.pool, self.get_repos()))
+        self.pool.queue_put((self.compose, variant, arch, conf))
+
+    def run(self):
+        if isinstance(self.compose.conf.get(self.name), dict):
+            for variant in self.compose.get_variants():
+                for conf in self.get_config_block(variant):
+                    for arch in conf.get("arches", []) or variant.arches:
+                        self._enqueue(variant, arch, conf)
+        else:
+            # Legacy code path to support original configuration.
+            for variant in self.compose.get_variants():
+                for arch in variant.arches:
+                    for conf in self.get_config_block(variant, arch):
+                        self._enqueue(variant, arch, conf)
+
+        self.pool.start()
+
+
+class OSTreeContainerThread(WorkerThread):
+    def __init__(self, pool, repos):
+        super(OSTreeContainerThread, self).__init__(pool)
+        self.repos = repos
+
+    def process(self, item, num):
+        compose, variant, arch, config = item
+        self.num = num
+        failable_arches = config.get("failable", [])
+        self.can_fail = util.can_arch_fail(failable_arches, arch)
+        with util.failable(compose, self.can_fail, variant, arch, "ostree-container"):
+            self.worker(compose, variant, arch, config)
+
+    def worker(self, compose, variant, arch, config):
+        msg = "OSTree container phase for variant %s, arch %s" % (variant.uid, arch)
+        self.pool.log_info("[BEGIN] %s" % msg)
+        workdir = compose.paths.work.topdir("ostree-container-%d" % self.num)
+        self.logdir = compose.paths.log.topdir(
+            "%s/%s/ostree-container-%d" % (arch, variant.uid, self.num)
+        )
+        repodir = os.path.join(workdir, "config_repo")
+        self._clone_repo(
+            compose,
+            repodir,
+            config["config_url"],
+            config.get("config_branch", "main"),
+        )
+
+        repos = shortcuts.force_list(config.get("repo", [])) + self.repos
+        repos = get_repo_dicts(repos, logger=self.pool)
+
+        # copy the original config and update before save to a json file
+        new_config = copy.copy(config)
+
+        # repos in configuration can have repo url set to variant UID,
+        # update it to have the actual url that we just translated.
+        new_config.update({"repo": repos})
+
+        # remove elements unnecessary for the 'pungi-make-ostree container'
+        # script from the config; it doesn't hurt to keep them, but removing
+        # them reduces confusion
+        for k in [
+            "treefile",
+            "config_url",
+            "config_branch",
+            "failable",
+            "version",
+        ]:
+            new_config.pop(k, None)
+
+        # write a json file to save the configuration, so the
+        # 'pungi-make-ostree' script can make use of it
+        extra_config_file = os.path.join(workdir, "extra_config.json")
+        with open(extra_config_file, "w") as f:
+            json.dump(new_config, f, indent=4)
+
+        self._run_ostree_container_cmd(
+            compose, variant, arch, config, repodir, extra_config_file=extra_config_file
+        )
+
+        self.pool.log_info("[DONE ] %s" % (msg))
+
+    def _run_ostree_container_cmd(
+        self, compose, variant, arch, config, config_repo, extra_config_file=None
+    ):
+        subvariant = config.get("subvariant", variant.uid)
+        target_dir = compose.paths.compose.image_dir(variant) % {"arch": arch}
+        util.makedirs(target_dir)
+        version = util.version_generator(compose, config.get("version"))
+        archive_name_base = config.get(
+            "name", "%s-%s" % (compose.conf["release_short"], subvariant)
+        )
+        archive_name = "%s-%s-%s" % (archive_name_base, arch, version)
+
+        # Run the pungi-make-ostree command locally to create a script to
+        # execute in runroot environment.
+        cmd = [
+            "pungi-make-ostree",
+            "container",
+            "--log-dir=%s" % self.logdir,
+            "--name=%s" % archive_name,
+            "--path=%s" % target_dir,
+            "--treefile=%s" % os.path.join(config_repo, config["treefile"]),
+            "--extra-config=%s" % extra_config_file,
+            "--version=%s" % version,
+        ]
+        _, runroot_script = shortcuts.run(cmd, text=True, errors="replace")
+
+        default_packages = ["ostree", "rpm-ostree", "selinux-policy-targeted"]
+        additional_packages = config.get("runroot_packages", [])
+        packages = default_packages + additional_packages
+        log_file = os.path.join(self.logdir, "runroot.log")
+        # TODO: Use to get previous build
+        mounts = [compose.topdir]
+
+        runroot = Runroot(compose, phase="ostree_container")
+        runroot.run(
+            " && ".join(runroot_script.splitlines()),
+            log_file=log_file,
+            arch=arch,
+            packages=packages,
+            mounts=mounts,
+            new_chroot=True,
+            weight=compose.conf["runroot_weights"].get("ostree"),
+        )
+
+        fullpath = os.path.join(target_dir, "%s.ociarchive" % archive_name)
+
+        # Update image manifest
+        img = Image(compose.im)
+
+        # these are hardcoded as they should always be correct, we
+        # could potentially allow overriding them via config though
+        img.type = "bootable-container"
+        img.format = "ociarchive"
+
+        img.path = os.path.relpath(fullpath, compose.paths.compose.topdir())
+        img.mtime = util.get_mtime(fullpath)
+        img.size = util.get_file_size(fullpath)
+        img.arch = arch
+        img.disc_number = 1
+        img.disc_count = 1
+        img.bootable = False
+        img.subvariant = subvariant
+        setattr(img, "can_fail", self.can_fail)
+        setattr(img, "deliverable", "ostree-container")
+        compose.im.add(variant=variant.uid, arch=arch, image=img)
+
+    def _clone_repo(self, compose, repodir, url, branch):
+        scm.get_dir_from_scm(
+            {"scm": "git", "repo": url, "branch": branch, "dir": "."},
+            repodir,
+            compose=compose,
+        )
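
Reading the worker above, the new phase consumes `config_url`, `config_branch`, `treefile`, `repo`, `version`, `subvariant`, `failable` and `runroot_packages`. A hedged configuration sketch assembled from those keys (all values are placeholders, not taken from the source):

    ostree_container = [
        ("^Sagano$", {
            "x86_64": {
                "treefile": "example.yaml",
                "config_url": "https://git.example.com/ostree-config.git",
                "config_branch": "main",
                # Variant UIDs listed here are translated to real repo URLs.
                "repo": ["Everything"],
                "version": "41",
                "failable": ["*"],
            },
        }),
    ]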


@@ -1,10 +1,10 @@
 # -*- coding: utf-8 -*-

 import os
-from kobo.threads import ThreadPool, WorkerThread
+from kobo.threads import ThreadPool
+import shlex
 import shutil
 from productmd import images
-from six.moves import shlex_quote
 from kobo import shortcuts

 from .base import ConfigGuardedPhase, PhaseLoggerMixin
@@ -20,6 +20,7 @@ from ..util import (
 )
 from ..wrappers import iso, lorax, scm
 from ..runroot import Runroot
+from ..threading import TelemetryWorkerThread as WorkerThread


 class OstreeInstallerPhase(PhaseLoggerMixin, ConfigGuardedPhase):
@@ -275,8 +276,8 @@ class OstreeInstallerThread(WorkerThread):
             skip_branding=config.get("skip_branding"),
         )
         cmd = "rm -rf %s && %s" % (
-            shlex_quote(output_dir),
-            " ".join([shlex_quote(x) for x in lorax_cmd]),
+            shlex.quote(output_dir),
+            " ".join([shlex.quote(x) for x in lorax_cmd]),
         )

         runroot.run(


@@ -38,12 +38,17 @@ from pungi.phases.createrepo import add_modular_metadata

 def populate_arch_pkgsets(compose, path_prefix, global_pkgset):
     result = {}
-    exclusive_noarch = compose.conf["pkgset_exclusive_arch_considers_noarch"]
     for arch in compose.get_arches():
         compose.log_info("Populating package set for arch: %s", arch)
         is_multilib = is_arch_multilib(compose.conf, arch)
         arches = get_valid_arches(arch, is_multilib, add_src=True)
-        pkgset = global_pkgset.subset(arch, arches, exclusive_noarch=exclusive_noarch)
+        pkgset = global_pkgset.subset(
+            arch,
+            arches,
+            exclusive_noarch=compose.conf["pkgset_exclusive_arch_considers_noarch"],
+            inherit_to_noarch=compose.conf["pkgset_inherit_exclusive_arch_to_noarch"],
+        )
         pkgset.save_file_list(
             compose.paths.work.package_list(arch=arch, pkgset=global_pkgset),
             remove_path_prefix=path_prefix,
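
Both knobs passed to `subset()` are plain booleans read from the compose configuration. A sketch of how they would appear in a config file, assuming the usual Pungi Python-style configuration (both default to True per the `subset`/`merge` signatures shown later in this diff):

    # Compose configuration sketch; names match the conf keys read above.
    pkgset_exclusive_arch_considers_noarch = False
    pkgset_inherit_exclusive_arch_to_noarch = False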


@@ -22,20 +22,23 @@ It automatically finds a signed copies according to *sigkey_ordering*.

 import itertools
 import json
 import os
+import pickle
 import time
 import pgpy
 import rpm

-from six.moves import cPickle as pickle
+from functools import partial

 import kobo.log
 import kobo.pkgset
 import kobo.rpmlib
+from kobo.shortcuts import compute_file_checksums

-from kobo.threads import WorkerThread, ThreadPool
+from kobo.threads import ThreadPool

 from pungi.util import pkg_is_srpm, copy_all
 from pungi.arch import get_valid_arches, is_excluded
 from pungi.errors import UnsignedPackagesError
+from pungi.threading import TelemetryWorkerThread as WorkerThread


 class ExtendedRpmWrapper(kobo.pkgset.SimpleRpmWrapper):
@@ -152,9 +155,15 @@ class PackageSetBase(kobo.log.LoggingBase):
         """

         def nvr_formatter(package_info):
-            # joins NVR parts of the package with '-' character.
-            return "-".join(
-                (package_info["name"], package_info["version"], package_info["release"])
-            )
+            epoch_suffix = ''
+            if package_info['epoch'] is not None:
+                epoch_suffix = ':' + package_info['epoch']
+            return (
+                f"{package_info['name']}"
+                f"{epoch_suffix}-"
+                f"{package_info['version']}-"
+                f"{package_info['release']}."
+                f"{package_info['arch']}"
+            )

         def get_error(sigkeys, infos):
@@ -205,16 +214,31 @@ class PackageSetBase(kobo.log.LoggingBase):
         return self.rpms_by_arch

-    def subset(self, primary_arch, arch_list, exclusive_noarch=True):
+    def subset(
+        self, primary_arch, arch_list, exclusive_noarch=True, inherit_to_noarch=True
+    ):
         """Create a subset of this package set that only includes
         packages compatible with"""
         pkgset = PackageSetBase(
             self.name, self.sigkey_ordering, logger=self._logger, arches=arch_list
         )
-        pkgset.merge(self, primary_arch, arch_list, exclusive_noarch=exclusive_noarch)
+        pkgset.merge(
+            self,
+            primary_arch,
+            arch_list,
+            exclusive_noarch=exclusive_noarch,
+            inherit_to_noarch=inherit_to_noarch,
+        )
         return pkgset

-    def merge(self, other, primary_arch, arch_list, exclusive_noarch=True):
+    def merge(
+        self,
+        other,
+        primary_arch,
+        arch_list,
+        exclusive_noarch=True,
+        inherit_to_noarch=True,
+    ):
         """
         Merge ``other`` package set into this instance.
         """
@@ -250,10 +274,10 @@ class PackageSetBase(kobo.log.LoggingBase):
         for arch in arch_list:
             self.rpms_by_arch.setdefault(arch, [])
             for i in other.rpms_by_arch.get(arch, []):
-                if i.file_path in self.file_cache:
+                if i.file_path in self.file_cache.file_cache:
                     # TODO: test if it really works
                     continue
-                if exclusivearch_list and arch == "noarch":
+                if inherit_to_noarch and exclusivearch_list and arch == "noarch":
                     if is_excluded(i, exclusivearch_list, logger=self._logger):
                         continue

@@ -320,6 +344,11 @@ class FilelistPackageSet(PackageSetBase):
         return result


+# This is a marker to indicate a package set with only extra builds/tasks and
+# no Koji tag.
+MISSING_KOJI_TAG = object()
+
+
 class KojiPackageSet(PackageSetBase):
     def __init__(
         self,
@@ -336,6 +365,7 @@ class KojiPackageSet(PackageSetBase):
         extra_tasks=None,
         signed_packages_retries=0,
         signed_packages_wait=30,
+        downloader=None,
     ):
         """
         Creates new KojiPackageSet.
@@ -373,7 +403,7 @@ class KojiPackageSet(PackageSetBase):
         :param int signed_packages_wait: How long to wait between search attempts.
         """
         super(KojiPackageSet, self).__init__(
-            name,
+            name if name != MISSING_KOJI_TAG else "no-tag",
             sigkey_ordering=sigkey_ordering,
             arches=arches,
             logger=logger,
@@ -390,6 +420,8 @@ class KojiPackageSet(PackageSetBase):
         self.signed_packages_retries = signed_packages_retries
         self.signed_packages_wait = signed_packages_wait

+        self.downloader = downloader
+
     def __getstate__(self):
         result = self.__dict__.copy()
         del result["koji_wrapper"]
@@ -511,11 +543,20 @@ class KojiPackageSet(PackageSetBase):
         # Check if this RPM is coming from scratch task. In this case, we already
         # know the path.
         if "path_from_task" in rpm_info:
-            return rpm_info["path_from_task"]
+            return self.downloader.get_file(rpm_info["path_from_task"])

         pathinfo = self.koji_wrapper.koji_module.pathinfo
         paths = []

+        def checksum_validator(keyname, pkg_path):
+            checksums = self.koji_proxy.getRPMChecksums(
+                rpm_info["id"], checksum_types=("sha256",)
+            )
+            if "sha256" in checksums.get(keyname, {}):
+                computed = compute_file_checksums(pkg_path, ("sha256",))
+                if computed["sha256"] != checksums[keyname]["sha256"]:
+                    raise RuntimeError("Checksum mismatch for %s" % pkg_path)
+
         attempts_left = self.signed_packages_retries + 1
         while attempts_left > 0:
             for sigkey in self.sigkey_ordering:
@@ -528,8 +569,11 @@ class KojiPackageSet(PackageSetBase):
                 )
                 if rpm_path not in paths:
                     paths.append(rpm_path)
-                if os.path.isfile(rpm_path):
-                    return rpm_path
+                path = self.downloader.get_file(
+                    rpm_path, partial(checksum_validator, sigkey)
+                )
+                if path:
+                    return path

             # No signed copy was found, wait a little and try again.
             attempts_left -= 1
@@ -542,16 +586,18 @@ class KojiPackageSet(PackageSetBase):
             # use an unsigned copy (if allowed)
             rpm_path = os.path.join(pathinfo.build(build_info), pathinfo.rpm(rpm_info))
             paths.append(rpm_path)
-            if os.path.isfile(rpm_path):
-                return rpm_path
+            path = self.downloader.get_file(rpm_path, partial(checksum_validator, ""))
+            if path:
+                return path

         if self._allow_invalid_sigkeys and rpm_info["name"] not in self.packages:
             # use an unsigned copy (if allowed)
             rpm_path = os.path.join(pathinfo.build(build_info), pathinfo.rpm(rpm_info))
             paths.append(rpm_path)
-            if os.path.isfile(rpm_path):
+            path = self.downloader.get_file(rpm_path)
+            if path:
                 self._invalid_sigkey_rpms.append(rpm_info)
-                return rpm_path
+                return path

         self._invalid_sigkey_rpms.append(rpm_info)
         self.log_error(
@@ -572,7 +618,7 @@ class KojiPackageSet(PackageSetBase):
         result_srpms = []
         include_packages = set(include_packages or [])

-        if type(event) is dict:
+        if isinstance(event, dict):
             event = event["id"]

         msg = "Getting latest RPMs (tag: %s, event: %s, inherit: %s)" % (
@@ -581,6 +627,8 @@ class KojiPackageSet(PackageSetBase):
             inherit,
         )
         self.log_info("[BEGIN] %s" % msg)
-        rpms, builds = self.get_latest_rpms(tag, event, inherit=inherit)
+        rpms, builds = [], []
+        if tag != MISSING_KOJI_TAG:
+            rpms, builds = self.get_latest_rpms(tag, event, inherit=inherit)
         extra_rpms, extra_builds = self.get_extra_rpms()
         rpms += extra_rpms
@@ -686,6 +734,15 @@ class KojiPackageSet(PackageSetBase):
         :param include_packages: an iterable of tuples (package name, arch) that should
             be included.
         """
+        if len(self.sigkey_ordering) > 1 and (
+            None in self.sigkey_ordering or "" in self.sigkey_ordering
+        ):
+            self.log_warning(
+                "Stop writing reuse file as unsigned packages are allowed "
+                "in the compose."
+            )
+            return
+
         reuse_file = compose.paths.work.pkgset_reuse_file(self.name)
         self.log_info("Writing pkgset reuse file: %s" % reuse_file)
         try:
@@ -702,6 +759,13 @@ class KojiPackageSet(PackageSetBase):
                     "srpms_by_name": self.srpms_by_name,
                     "extra_builds": self.extra_builds,
                     "include_packages": include_packages,
+                    "inherit_to_noarch": compose.conf[
+                        "pkgset_inherit_exclusive_arch_to_noarch"
+                    ],
+                    "exclusive_noarch": compose.conf[
+                        "pkgset_exclusive_arch_considers_noarch"
+                    ],
+                    "module_defaults_dir": compose.conf.get("module_defaults_dir"),
                 },
                 f,
                 protocol=pickle.HIGHEST_PROTOCOL,
@@ -796,6 +860,9 @@ class KojiPackageSet(PackageSetBase):
             self.log_debug("Failed to load reuse file: %s" % str(e))
             return False

+        inherit_to_noarch = compose.conf["pkgset_inherit_exclusive_arch_to_noarch"]
+        exclusive_noarch = compose.conf["pkgset_exclusive_arch_considers_noarch"]
+        module_defaults_dir = compose.conf.get("module_defaults_dir")
         if (
             reuse_data["allow_invalid_sigkeys"] == self._allow_invalid_sigkeys
             and reuse_data["packages"] == self.packages
@@ -803,6 +870,11 @@ class KojiPackageSet(PackageSetBase):
             and reuse_data["extra_builds"] == self.extra_builds
             and reuse_data["sigkeys"] == self.sigkey_ordering
             and reuse_data["include_packages"] == include_packages
+            # If the value is not present in reuse data, the compose was
+            # generated with older version of Pungi. Best to not reuse.
+            and reuse_data.get("inherit_to_noarch") == inherit_to_noarch
+            and reuse_data.get("exclusive_noarch") == exclusive_noarch
+            and reuse_data.get("module_defaults_dir") == module_defaults_dir
         ):
             self.log_info("Copying repo data for reuse: %s" % old_repo_dir)
             copy_all(old_repo_dir, repo_dir)
@@ -818,69 +890,6 @@ class KojiPackageSet(PackageSetBase):


 class KojiMockPackageSet(KojiPackageSet):
-    def __init__(
-        self,
-        name,
-        koji_wrapper,
-        sigkey_ordering,
-        arches=None,
-        logger=None,
-        packages=None,
-        allow_invalid_sigkeys=False,
-        populate_only_packages=False,
-        cache_region=None,
-        extra_builds=None,
-        extra_tasks=None,
-        signed_packages_retries=0,
-        signed_packages_wait=30,
-    ):
-        """
-        Creates new KojiPackageSet.
-
-        :param list sigkey_ordering: Ordered list of sigkey strings. When
-            getting package from Koji, KojiPackageSet tries to get the package
-            signed by sigkey from this list. If None or "" appears in this
-            list, unsigned package is used.
-        :param list arches: List of arches to get the packages for.
-        :param logging.Logger logger: Logger instance to use for logging.
-        :param list packages: List of package names to be used when
-            `allow_invalid_sigkeys` or `populate_only_packages` is set.
-        :param bool allow_invalid_sigkeys: When True, packages *not* listed in
-            the `packages` list are added to KojiPackageSet even if they have
-            invalid sigkey. This is useful in case Koji tag contains some
-            unsigned packages, but we know they won't appear in a compose.
-            When False, all packages in Koji tag must have valid sigkey as
-            defined in `sigkey_ordering`.
-        :param bool populate_only_packages. When True, only packages in
-            `packages` list are added to KojiPackageSet. This can save time
-            when generating compose from predefined list of packages from big
-            Koji tag.
-            When False, all packages from Koji tag are added to KojiPackageSet.
-        :param dogpile.cache.CacheRegion cache_region: If set, the CacheRegion
-            will be used to cache the list of RPMs per Koji tag, so next calls
-            of the KojiPackageSet.populate(...) method won't try fetching it
-            again.
-        :param list extra_builds: Extra builds NVRs to get from Koji and include
-            in the package set.
-        :param list extra_tasks: Extra RPMs defined as Koji task IDs to get from Koji
-            and include in the package set. Useful when building testing compose
-            with RPM scratch builds.
-        """
-        super(KojiMockPackageSet, self).__init__(
-            name,
-            koji_wrapper=koji_wrapper,
-            sigkey_ordering=sigkey_ordering,
-            arches=arches,
-            logger=logger,
-            packages=packages,
-            allow_invalid_sigkeys=allow_invalid_sigkeys,
-            populate_only_packages=populate_only_packages,
-            cache_region=cache_region,
-            extra_builds=extra_builds,
-            extra_tasks=extra_tasks,
-            signed_packages_retries=signed_packages_retries,
-            signed_packages_wait=signed_packages_wait,
-        )

     def _is_rpm_signed(self, rpm_path) -> bool:
         ts = rpm.TransactionSet()
@@ -889,6 +898,8 @@ class KojiMockPackageSet(KojiPackageSet):
             sigkey.lower() for sigkey in self.sigkey_ordering
             if sigkey is not None
         ]
+        if not sigkeys:
+            return True
         with open(rpm_path, 'rb') as fd:
             header = ts.hdrFromFdno(fd)
         signature = header[rpm.RPMTAG_SIGGPG] or header[rpm.RPMTAG_SIGPGP]
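
The rewritten nvr_formatter reports a full NEVRA instead of the old bare NVR, which disambiguates packages that differ only in epoch or arch. A worked example using the dict keys read above:

    package_info = {
        "name": "bash", "epoch": "0", "version": "5.2.26",
        "release": "1.fc40", "arch": "x86_64",
    }
    # old format: "bash-5.2.26-1.fc40"
    # new format: "bash:0-5.2.26-1.fc40.x86_64"  (the ":0" is omitted when
    # epoch is None)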


@@ -193,17 +193,13 @@ class PkgsetSourceKoji(pungi.phases.pkgset.source.PkgsetSourceBase):
     def __call__(self):
         compose = self.compose
         self.koji_wrapper = pungi.wrappers.kojiwrapper.KojiWrapper(compose)
-        # path prefix must contain trailing '/'
-        path_prefix = self.koji_wrapper.koji_module.config.topdir.rstrip("/") + "/"
-        package_sets = get_pkgset_from_koji(
-            self.compose, self.koji_wrapper, path_prefix
-        )
-        return (package_sets, path_prefix)
+        package_sets = get_pkgset_from_koji(self.compose, self.koji_wrapper)
+        return (package_sets, self.compose.koji_downloader.path_prefix)


-def get_pkgset_from_koji(compose, koji_wrapper, path_prefix):
+def get_pkgset_from_koji(compose, koji_wrapper):
     event_info = get_koji_event_info(compose, koji_wrapper)
-    return populate_global_pkgset(compose, koji_wrapper, path_prefix, event_info)
+    return populate_global_pkgset(compose, koji_wrapper, event_info)


 def _add_module_to_variant(
@@ -226,20 +222,23 @@ def _add_module_to_variant(
     """
     mmds = {}
     archives = koji_wrapper.koji_proxy.listArchives(build["id"])
+    available_arches = set()
     for archive in archives:
         if archive["btype"] != "module":
             # Skip non module archives
             continue
         typedir = koji_wrapper.koji_module.pathinfo.typedir(build, archive["btype"])
         filename = archive["filename"]
-        file_path = os.path.join(typedir, filename)
+        file_path = compose.koji_downloader.get_file(os.path.join(typedir, filename))
         try:
             # If there are two dots, the arch is in the middle. MBS uploads
             # files with actual architecture in the filename, but Pungi deals
             # in basearch. This assumes that each arch in the build maps to a
             # unique basearch.
             _, arch, _ = filename.split(".")
-            filename = "modulemd.%s.txt" % getBaseArch(arch)
+            basearch = getBaseArch(arch)
+            filename = "modulemd.%s.txt" % basearch
+            available_arches.add(basearch)
         except ValueError:
             pass
         mmds[filename] = file_path
@@ -264,15 +263,26 @@ def _add_module_to_variant(
             compose.log_debug("Module %s is filtered from %s.%s", nsvc, variant, arch)
             continue

+        if arch not in available_arches:
+            compose.log_debug(
+                "Module %s is not available for arch %s.%s", nsvc, variant, arch
+            )
+            continue
+
         filename = "modulemd.%s.txt" % arch
         if filename not in mmds:
             raise RuntimeError(
                 "Module %s does not have metadata for arch %s and is not filtered "
                 "out via filter_modules option." % (nsvc, arch)
             )
-        mod_stream = read_single_module_stream_from_file(
-            mmds[filename], compose, arch, build
-        )
+        try:
+            mod_stream = read_single_module_stream_from_file(
+                mmds[filename], compose, arch, build
+            )
+        except Exception as exc:
+            # libmodulemd raises various GLib exceptions with not very helpful
+            # messages. Let's replace it with something more useful.
+            raise RuntimeError("Failed to read %s: %s" % (mmds[filename], str(exc)))
         if mod_stream:
             added = True
             variant.arch_mmds.setdefault(arch, {})[nsvc] = mod_stream
@@ -395,7 +405,13 @@ def _is_filtered_out(compose, variant, arch, module_name, module_stream):


 def _get_modules_from_koji(
-    compose, koji_wrapper, event, variant, variant_tags, tag_to_mmd, exclude_module_ns
+    compose,
+    koji_wrapper,
+    event,
+    variant,
+    variant_tags,
+    tag_to_mmd,
+    exclude_module_ns,
 ):
     """
     Loads modules for given `variant` from koji `session`, adds them to
@@ -480,7 +496,16 @@ def filter_inherited(koji_proxy, event, module_builds, top_tag):
         # And keep only builds from that topmost tag
         result.extend(build for build in builds if build["tag_name"] == tag)

-    return result
+    # If the same module was inherited multiple times, it will be in result
+    # multiple times. We need to deduplicate.
+    deduplicated_result = []
+    included_nvrs = set()
+    for build in result:
+        if build["nvr"] not in included_nvrs:
+            deduplicated_result.append(build)
+            included_nvrs.add(build["nvr"])
+
+    return deduplicated_result


 def filter_by_whitelist(compose, module_builds, input_modules, expected_modules):
@@ -670,7 +695,7 @@ def _get_modules_from_koji_tags(
     )


-def populate_global_pkgset(compose, koji_wrapper, path_prefix, event):
+def populate_global_pkgset(compose, koji_wrapper, event):
     all_arches = get_all_arches(compose)

     # List of compose tags from which we create this compose
@@ -764,7 +789,12 @@ def populate_global_pkgset(compose, koji_wrapper, path_prefix, event):
         if extra_modules:
             _add_extra_modules_to_variant(
-                compose, koji_wrapper, variant, extra_modules, variant_tags, tag_to_mmd
+                compose,
+                koji_wrapper,
+                variant,
+                extra_modules,
+                variant_tags,
+                tag_to_mmd,
             )

         variant_scratch_modules = get_variant_data(
@@ -791,17 +821,23 @@ def populate_global_pkgset(compose, koji_wrapper, path_prefix, event):

     pkgsets = []

+    extra_builds = force_list(compose.conf.get("pkgset_koji_builds", []))
+    extra_tasks = force_list(compose.conf.get("pkgset_koji_scratch_tasks", []))
+
+    if not pkgset_koji_tags and (extra_builds or extra_tasks):
+        # We have extra packages to pull in, but no tag to merge them with.
+        compose_tags.append(pungi.phases.pkgset.pkgsets.MISSING_KOJI_TAG)
+        pkgset_koji_tags.append(pungi.phases.pkgset.pkgsets.MISSING_KOJI_TAG)
+
     # Get package set for each compose tag and merge it to global package
     # list. Also prepare per-variant pkgset, because we do not have list
     # of binary RPMs in module definition - there is just list of SRPMs.
     for compose_tag in compose_tags:
         compose.log_info("Loading package set for tag %s", compose_tag)
+        kwargs = {}
         if compose_tag in pkgset_koji_tags:
-            extra_builds = force_list(compose.conf.get("pkgset_koji_builds", []))
-            extra_tasks = force_list(compose.conf.get("pkgset_koji_scratch_tasks", []))
-        else:
-            extra_builds = []
-            extra_tasks = []
+            kwargs["extra_builds"] = extra_builds
+            kwargs["extra_tasks"] = extra_tasks

         pkgset = pungi.phases.pkgset.pkgsets.KojiPackageSet(
             compose_tag,
@@ -813,10 +849,10 @@ def populate_global_pkgset(compose, koji_wrapper, path_prefix, event):
             allow_invalid_sigkeys=allow_invalid_sigkeys,
             populate_only_packages=populate_only_packages_to_gather,
             cache_region=compose.cache_region,
-            extra_builds=extra_builds,
-            extra_tasks=extra_tasks,
             signed_packages_retries=compose.conf["signed_packages_retries"],
             signed_packages_wait=compose.conf["signed_packages_wait"],
+            downloader=compose.koji_downloader,
+            **kwargs
         )

         # Check if we have cache for this tag from previous compose. If so, use
@@ -874,13 +910,18 @@ def populate_global_pkgset(compose, koji_wrapper, path_prefix, event):
         if pkgset.reuse is None:
             pkgset.populate(
                 compose_tag,
-                event,
+                # We care about packages as they existed on the specified
+                # event. However, modular content tags are not expected to
+                # change, so the event doesn't matter there. If an exact NSVC
+                # of a module is specified, the code above would happily find
+                # its content tag, but fail here if the content tag doesn't
+                # exist at the given event.
+                event=event if is_traditional else None,
                 inherit=should_inherit,
                 include_packages=modular_packages,
             )
         for variant in compose.all_variants.values():
             if compose_tag in variant_tags[variant]:
                 # If it's a modular tag, store the package set for the module.
                 for nsvc, koji_tag in variant.module_uid_to_koji_tag.items():
                     if compose_tag == koji_tag:
@@ -903,7 +944,7 @@ def populate_global_pkgset(compose, koji_wrapper, path_prefix, event):
                 MaterializedPackageSet.create,
                 compose,
                 pkgset,
-                path_prefix,
+                compose.koji_downloader.path_prefix,
                 mmd=tag_to_mmd.get(pkgset.name),
             )
         )
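
With MISSING_KOJI_TAG wired in above, a compose can be built purely from explicitly listed content: when no Koji tag is configured but extra builds or scratch tasks are, a synthetic "no-tag" package set is created for them. A hedged configuration sketch (the NVR and the task ID are placeholders):

    # No pkgset_koji_tag configured; the extra content alone forms the pkgset.
    pkgset_koji_builds = ["bash-5.2.26-1.fc40"]
    pkgset_koji_scratch_tasks = ["123456"]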


@@ -35,10 +35,12 @@ import pungi.wrappers.kojiwrapper
 from pungi.wrappers.comps import CompsWrapper
 from pungi.wrappers.mbs import MBSWrapper
 import pungi.phases.pkgset.pkgsets

 from pungi.util import (
     retry,
     get_arch_variant_data,
     get_variant_data,
     read_single_module_stream_from_string,
     read_single_module_stream_from_file,
 )
@@ -160,14 +162,16 @@ def get_koji_modules(compose, koji_wrapper, event, module_info_str):
             # Store module versioning information into the dict, but make sure
             # not to overwrite any existing keys.
             md["module_stream"] = md["extra"]["typeinfo"]["module"]["stream"]
-            md["module_version"] = int(md["extra"]["typeinfo"]["module"]["version"])
+            md["module_version"] = int(
+                md["extra"]["typeinfo"]["module"]["version"])
             md["module_context"] = md["extra"]["typeinfo"]["module"]["context"]
         except KeyError:
             continue

         if md["state"] == pungi.wrappers.kojiwrapper.KOJI_BUILD_DELETED:
             compose.log_debug(
-                "Module build %s has been deleted, ignoring it." % build["name"]
+                "Module build %s has been deleted, ignoring it." % build[
+                    "name"]
             )
             continue
@@ -189,7 +193,8 @@ def get_koji_modules(compose, koji_wrapper, event, module_info_str):
         )
         latest_version = sorted_modules[0]["module_version"]
         modules = [
-            module for module in modules if latest_version == module["module_version"]
+            module for module in modules
+            if latest_version == module["module_version"]
         ]

     return modules
@@ -205,7 +210,8 @@ class PkgsetSourceKojiMock(pungi.phases.pkgset.source.PkgsetSourceBase):
             get_all_arches(compose),
         )
         # path prefix must contain trailing '/'
-        path_prefix = self.koji_wrapper.koji_module.config.topdir.rstrip("/") + "/"
+        path_prefix = self.koji_wrapper.koji_module.config.topdir.rstrip(
+            "/") + "/"
         package_sets = get_pkgset_from_koji(
             self.compose, self.koji_wrapper, path_prefix
         )
@@ -214,7 +220,8 @@ class PkgsetSourceKojiMock(pungi.phases.pkgset.source.PkgsetSourceBase):

 def get_pkgset_from_koji(compose, koji_wrapper, path_prefix):
     event_info = get_koji_event_info(compose, koji_wrapper)
-    return populate_global_pkgset(compose, koji_wrapper, path_prefix, event_info)
+    return populate_global_pkgset(compose, koji_wrapper, path_prefix,
+                                  event_info)


 def _add_module_to_variant(
@@ -241,13 +248,16 @@ def _add_module_to_variant(
         if archive["btype"] != "module":
             # Skip non module archives
             continue

         filename = archive["filename"]
         file_path = os.path.join(
             koji_wrapper.koji_module.pathinfo.topdir,
             'modules',
             build['arch'],
             build['extra']['typeinfo']['module']['content_koji_tag']
         )

         mmds[filename] = file_path

     if len(mmds) <= 1:
@@ -266,17 +276,22 @@ def _add_module_to_variant(
     added = False

     for arch in variant.arches:
-        if _is_filtered_out(compose, variant, arch, info["name"], info["stream"]):
-            compose.log_debug("Module %s is filtered from %s.%s", nsvc, variant, arch)
+        if _is_filtered_out(compose, variant, arch, info["name"],
+                            info["stream"]):
+            compose.log_debug("Module %s is filtered from %s.%s", nsvc,
+                              variant, arch)
             continue

         filename = "modulemd.%s.txt" % arch
         try:
             mod_stream = read_single_module_stream_from_file(
                 mmds[filename], compose, arch, build
             )
             if mod_stream:
                 added = True
                 variant.arch_mmds.setdefault(arch, {})[nsvc] = mod_stream
             added = True
         except KeyError:
             # There is no modulemd for this arch. This could mean an arch was
@@ -298,7 +313,8 @@ def _add_extra_modules_to_variant(
     compose, koji_wrapper, variant, extra_modules, variant_tags, tag_to_mmd
 ):
     for nsvc in extra_modules:
-        msg = "Adding extra module build '%s' to variant '%s'" % (nsvc, variant)
+        msg = "Adding extra module build '%s' to variant '%s'" % (
+            nsvc, variant)
         compose.log_info(msg)

         nsvc_info = nsvc.split(":")
@@ -344,7 +360,8 @@ def _add_scratch_modules_to_variant(
     compose, variant, scratch_modules, variant_tags, tag_to_mmd
 ):
     if compose.compose_type != "test" and scratch_modules:
-        compose.log_warning("Only test composes could include scratch module builds")
+        compose.log_warning(
+            "Only test composes could include scratch module builds")
         return

     mbs = MBSWrapper(compose.conf["mbs_api_url"])
@@ -355,7 +372,8 @@ def _add_scratch_modules_to_variant(
         try:
             final_modulemd = mbs.final_modulemd(module_build["id"])
         except Exception:
-            compose.log_error("Unable to get modulemd for build %s" % module_build)
+            compose.log_error(
+                "Unable to get modulemd for build %s" % module_build)
             raise
         tag = module_build["koji_tag"]
         variant_tags[variant].append(tag)
@@ -363,8 +381,7 @@ def _add_scratch_modules_to_variant(
         for arch in variant.arches:
             try:
                 mmd = read_single_module_stream_from_string(
-                    final_modulemd[arch]
-                )
+                    final_modulemd[arch])
                 variant.arch_mmds.setdefault(arch, {})[nsvc] = mmd
             except KeyError:
                 continue
@@ -390,21 +407,24 @@ def _is_filtered_out(compose, variant, arch, module_name, module_stream):
     if not compose:
         return False

-    for filter in get_arch_variant_data(compose.conf, "filter_modules", arch, variant):
+    for filter in get_arch_variant_data(compose.conf, "filter_modules", arch,
+                                        variant):
         if ":" not in filter:
             name_filter = filter
             stream_filter = "*"
         else:
             name_filter, stream_filter = filter.split(":", 1)

-        if fnmatch(module_name, name_filter) and fnmatch(module_stream, stream_filter):
+        if fnmatch(module_name, name_filter) and fnmatch(module_stream,
+                                                         stream_filter):
             return True

     return False


 def _get_modules_from_koji(
-    compose, koji_wrapper, event, variant, variant_tags, tag_to_mmd
+    compose, koji_wrapper, event, variant, variant_tags, tag_to_mmd,
+    exclude_module_ns
 ):
     """
     Loads modules for given `variant` from koji `session`, adds them to
@@ -415,15 +435,21 @@ def _get_modules_from_koji(
     :param Variant variant: Variant with modules to find.
     :param dict variant_tags: Dict populated by this method. Key is `variant`
         and value is list of Koji tags to get the RPMs from.
+    :param list exclude_module_ns: Module name:stream which will be excluded.
     """

     # Find out all modules in every variant and add their Koji tags
     # to variant and variant_tags list.
     for module in variant.get_modules():
-        koji_modules = get_koji_modules(compose, koji_wrapper, event, module["name"])
+        koji_modules = get_koji_modules(compose, koji_wrapper, event,
+                                        module["name"])
         for koji_module in koji_modules:
             nsvc = _add_module_to_variant(
-                koji_wrapper, variant, koji_module, compose=compose
+                koji_wrapper,
+                variant,
+                koji_module,
+                compose=compose,
+                exclude_module_ns=exclude_module_ns,
             )
             if not nsvc:
                 continue
@@ -462,7 +488,8 @@ def filter_inherited(koji_proxy, event, module_builds, top_tag):
     does not understand streams, so we have to reimplement it here.
     """
     inheritance = [
-        tag["name"] for tag in koji_proxy.getFullInheritance(top_tag, event=event["id"])
+        tag["name"] for tag in
+        koji_proxy.getFullInheritance(top_tag, event=event["id"])
     ]

     def keyfunc(mb):
@@ -487,7 +514,8 @@ def filter_inherited(koji_proxy, event, module_builds, top_tag):
     return result


-def filter_by_whitelist(compose, module_builds, input_modules, expected_modules):
+def filter_by_whitelist(compose, module_builds, input_modules,
+                        expected_modules):
     """
     Exclude modules from the list that do not match any pattern specified in
     input_modules. Order may not be preserved. The last argument is a set of
@@ -511,6 +539,7 @@ def filter_by_whitelist(compose, module_builds, input_modules, expected_modules):
             info.get("context"),
         )
         nvr_patterns.add((pattern, spec["name"]))
+
     modules_to_keep = []

     for mb in sorted(module_builds, key=lambda i: i['name']):
@@ -575,7 +604,13 @@ def _filter_expected_modules(


 def _get_modules_from_koji_tags(
-    compose, koji_wrapper, event_id, variant, variant_tags, tag_to_mmd
+    compose,
+    koji_wrapper,
+    event_id,
+    variant,
+    variant_tags,
+    tag_to_mmd,
+    exclude_module_ns,
 ):
     """
     Loads modules for given `variant` from Koji, adds them to
@@ -587,10 +622,12 @@ def _get_modules_from_koji_tags(
     :param Variant variant: Variant with modules to find.
     :param dict variant_tags: Dict populated by this method. Key is `variant`
         and value is list of Koji tags to get the RPMs from.
+    :param list exclude_module_ns: Module name:stream which will be excluded.
     """
     # Compose tags from configuration
     compose_tags = [
-        {"name": tag} for tag in force_list(compose.conf["pkgset_koji_module_tag"])
+        {"name": tag} for tag in
+        force_list(compose.conf["pkgset_koji_module_tag"])
     ]
     # Get set of configured module names for this variant. If nothing is
     # configured, the set is empty.
@@ -617,7 +654,8 @@ def _get_modules_from_koji_tags(
         )

         # Filter out builds inherited from non-top tag
-        module_builds = filter_inherited(koji_proxy, event_id, module_builds, tag)
+        module_builds = filter_inherited(koji_proxy, event_id, module_builds,
+                                         tag)

         # Apply whitelist of modules if specified.
         variant_modules = variant.get_modules()
@@ -625,6 +663,7 @@ def _get_modules_from_koji_tags(
             module_builds = filter_by_whitelist(
                 compose, module_builds, variant_modules, expected_modules
             )
+
         # Find the latest builds of all modules. This does following:
         # - Sorts the module_builds descending by Koji NVR (which maps to NSV
         #   for modules). Split release into modular version and context, and
@@ -662,6 +701,18 @@ def _get_modules_from_koji_tags(
         for build in latest_builds:
             # Get the Build from Koji to get modulemd and module_tag.
             build = koji_proxy.getBuild(build["build_id"])
+
+            nsvc = _add_module_to_variant(
+                koji_wrapper,
+                variant,
+                build,
+                True,
+                compose=compose,
+                exclude_module_ns=exclude_module_ns,
+            )
+            if not nsvc:
+                continue
+
             module_tag = (
                 build.get("extra", {})
                 .get("typeinfo", {})
@@ -671,12 +722,6 @@ def _get_modules_from_koji_tags(

             variant_tags[variant].append(module_tag)

-            nsvc = _add_module_to_variant(
-                koji_wrapper, variant, build, True, compose=compose
-            )
-            if not nsvc:
-                continue
-
             tag_to_mmd.setdefault(module_tag, {})
             for arch in variant.arch_mmds:
                 try:
@@ -708,8 +753,9 @@ def _get_modules_from_koji_tags(
         # There are some module names that were listed in configuration and not
         # found in any tag...
         raise RuntimeError(
-            "Configuration specified patterns (%s) that don't match "
-            "any modules in the configured tags." % ", ".join(expected_modules)
+            f"Configuration specified patterns ({', '.join(expected_modules)})"
+            " that don't match any modules in "
+            f"the configured tags for variant '{variant.name}'"
         )
@@ -767,26 +813,48 @@ def populate_global_pkgset(compose, koji_wrapper, path_prefix, event):
             "modules."
         )

+        extra_modules = get_variant_data(
+            compose.conf, "pkgset_koji_module_builds", variant
+        )
+
+        # When adding extra modules, other modules of the same name:stream
+        # available in brew tag should be excluded.
+        exclude_module_ns = []
+        if extra_modules:
+            exclude_module_ns = [
+                ":".join(nsvc.split(":")[:2]) for nsvc in extra_modules
+            ]
+
         if modular_koji_tags or (
             compose.conf["pkgset_koji_module_tag"] and variant.modules
         ):
             # List modules tagged in particular tags.
             _get_modules_from_koji_tags(
-                compose, koji_wrapper, event, variant, variant_tags, tag_to_mmd
+                compose,
+                koji_wrapper,
+                event,
+                variant,
+                variant_tags,
+                tag_to_mmd,
+                exclude_module_ns,
             )
         elif variant.modules:
             # Search each module in Koji separately. Tagging does not come into
             # play here.
             _get_modules_from_koji(
-                compose, koji_wrapper, event, variant, variant_tags, tag_to_mmd
+                compose,
+                koji_wrapper,
+                event,
+                variant,
+                variant_tags,
+                tag_to_mmd,
+                exclude_module_ns,
             )

-        extra_modules = get_variant_data(
-            compose.conf, "pkgset_koji_module_builds", variant
-        )
         if extra_modules:
             _add_extra_modules_to_variant(
-                compose, koji_wrapper, variant, extra_modules, variant_tags, tag_to_mmd
+                compose, koji_wrapper, variant, extra_modules, variant_tags,
+                tag_to_mmd
             )

         variant_scratch_modules = get_variant_data(
@@ -794,7 +862,8 @@ def populate_global_pkgset(compose, koji_wrapper, path_prefix, event):
         )
         if variant_scratch_modules:
             _add_scratch_modules_to_variant(
-                compose, variant, variant_scratch_modules, variant_tags, tag_to_mmd
+                compose, variant, variant_scratch_modules, variant_tags,
+                tag_to_mmd
             )

         # Ensure that every tag added to `variant_tags` is added also to
@@ -819,8 +888,10 @@ def populate_global_pkgset(compose, koji_wrapper, path_prefix, event):
     for compose_tag in compose_tags:
         compose.log_info("Loading package set for tag %s", compose_tag)
         if compose_tag in pkgset_koji_tags:
-            extra_builds = force_list(compose.conf.get("pkgset_koji_builds", []))
-            extra_tasks = force_list(compose.conf.get("pkgset_koji_scratch_tasks", []))
+            extra_builds = force_list(
+                compose.conf.get("pkgset_koji_builds", []))
+            extra_tasks = force_list(
+                compose.conf.get("pkgset_koji_scratch_tasks", []))
         else:
             extra_builds = []
             extra_tasks = []
@@ -926,7 +997,8 @@ def populate_global_pkgset(compose, koji_wrapper, path_prefix, event):

 def get_koji_event_info(compose, koji_wrapper):
-    event_file = os.path.join(compose.paths.work.topdir(arch="global"), "koji-event")
+    event_file = os.path.join(compose.paths.work.topdir(arch="global"),
+                              "koji-event")

     compose.log_info("Getting koji event")
     result = get_koji_event_raw(koji_wrapper, compose.koji_event, event_file)
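
Beyond line rewrapping, the substantive change in this mock-source file mirrors the Koji source above: when extra module builds are configured, modules with the same name:stream coming from the tags are excluded. A worked example of the list comprehension used above (the NSVC value is a placeholder):

    extra_modules = ["perl:5.38:20240101:abcdef12"]
    exclude_module_ns = [":".join(nsvc.split(":")[:2]) for nsvc in extra_modules]
    # exclude_module_ns == ["perl:5.38"]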


@@ -15,7 +15,6 @@

 import os
-import shutil

 from kobo.shortcuts import run
@@ -76,7 +75,6 @@ def get_pkgset_from_repos(compose):
         pungi_dir = compose.paths.work.pungi_download_dir(arch)

         backends = {
-            "yum": pungi.get_pungi_cmd,
             "dnf": pungi.get_pungi_cmd_dnf,
         }
         get_cmd = backends[compose.conf["gather_backend"]]
@@ -93,8 +91,6 @@ def get_pkgset_from_repos(compose):
             cache_dir=compose.paths.work.pungi_cache_dir(arch=arch),
             profiler=profiler,
         )
-        if compose.conf["gather_backend"] == "yum":
-            cmd.append("--force")

         # TODO: runroot
         run(cmd, logfile=pungi_log, show_cmd=True, stdout=False)
@@ -111,17 +107,6 @@ def get_pkgset_from_repos(compose):
             flist.append(dst)
             pool.queue_put((src, dst))

-        # Clean up tmp dir
-        # Workaround for rpm not honoring sgid bit which only appears when yum is used.
-        yumroot_dir = os.path.join(pungi_dir, "work", arch, "yumroot")
-        if os.path.isdir(yumroot_dir):
-            try:
-                shutil.rmtree(yumroot_dir)
-            except Exception as e:
-                compose.log_warning(
-                    "Failed to clean up tmp dir: %s %s" % (yumroot_dir, str(e))
-                )
-
     msg = "Linking downloaded pkgset packages"
     compose.log_info("[BEGIN] %s" % msg)
     pool.start()


@@ -101,20 +101,41 @@ def run_repoclosure(compose):

 def _delete_repoclosure_cache_dirs(compose):
-    if "dnf" == compose.conf["repoclosure_backend"]:
-        from dnf.const import SYSTEM_CACHEDIR
-        from dnf.util import am_i_root
-        from dnf.yum.misc import getCacheDir
-
-        if am_i_root():
-            top_cache_dir = SYSTEM_CACHEDIR
-        else:
-            top_cache_dir = getCacheDir()
-    else:
-        from yum.misc import getCacheDir
-
-        top_cache_dir = getCacheDir()
+    """Find any cached repodata and delete it. The cache is not going to be
+    reused ever again, and would otherwise consume storage space.
+
+    DNF will use a different directory depending on whether it is running as
+    root or not. It is not easy to tell though if DNF 4 or 5 is being used, so
+    let's be sure and check both locations. All our cached entries are prefixed
+    by compose ID, so there's very limited amount of risk that we would delete
+    something incorrect.
+    """
+    cache_dirs = []
+
+    try:
+        # DNF 4
+        from dnf.const import SYSTEM_CACHEDIR
+        from dnf.util import am_i_root
+        from dnf.yum.misc import getCacheDir
+
+        if am_i_root():
+            cache_dirs.append(SYSTEM_CACHEDIR)
+        else:
+            cache_dirs.append(getCacheDir())
+    except ImportError:
+        pass
+
+    try:
+        # DNF 5 config works directly for root, no need for special case.
+        import libdnf5
+
+        base = libdnf5.base.Base()
+        config = base.get_config()
+        cache_dirs.append(config.cachedir)
+    except ImportError:
+        pass
+
+    for top_cache_dir in cache_dirs:
         for name in os.listdir(top_cache_dir):
             if name.startswith(compose.compose_id):
                 cache_path = os.path.join(top_cache_dir, name)


@@ -95,7 +95,7 @@ def is_iso(f):

 def has_mbr(f):
-    return _check_magic(f, 0x1FE, b"\x55\xAA")
+    return _check_magic(f, 0x1FE, b"\x55\xaa")


 def has_gpt(f):
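
This change only normalizes the case of the hex escape: b"\x55\xAA" and b"\x55\xaa" are the same two bytes, the MBR boot signature stored at offset 0x1FE. `_check_magic` itself lies outside this hunk; a minimal sketch of what a helper with this signature typically does, under the assumption that it compares raw bytes at a fixed offset (the real implementation may differ):

    def _check_magic(f, offset, bytes_):
        """Return True if file object ``f`` contains ``bytes_`` at ``offset``."""
        f.seek(offset)
        return f.read(len(bytes_)) == bytes_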


@@ -1,7 +1,9 @@
 # -*- coding: utf-8 -*-

 from kobo import shortcuts
-from kobo.threads import ThreadPool, WorkerThread
+from kobo.threads import ThreadPool
+
+from pungi.threading import TelemetryWorkerThread as WorkerThread


 class WeaverPhase(object):


@ -13,13 +13,18 @@
# You should have received a copy of the GNU General Public License # You should have received a copy of the GNU General Public License
# along with this program; if not, see <https://gnu.org/licenses/>. # along with this program; if not, see <https://gnu.org/licenses/>.
import contextlib
import os import os
import re import re
import six import shlex
from six.moves import shlex_quote import shutil
import tarfile
import requests
import kobo.log import kobo.log
from kobo.shortcuts import run from kobo.shortcuts import run
from pungi import util
from pungi.wrappers import kojiwrapper from pungi.wrappers import kojiwrapper
@ -151,7 +156,7 @@ class Runroot(kobo.log.LoggingBase):
formatted_cmd = command.format(**fmt_dict) if fmt_dict else command formatted_cmd = command.format(**fmt_dict) if fmt_dict else command
ssh_cmd = ["ssh", "-oBatchMode=yes", "-n", "-l", user, hostname, formatted_cmd] ssh_cmd = ["ssh", "-oBatchMode=yes", "-n", "-l", user, hostname, formatted_cmd]
output = run(ssh_cmd, show_cmd=True, logfile=log_file)[1] output = run(ssh_cmd, show_cmd=True, logfile=log_file)[1]
if six.PY3 and isinstance(output, bytes): if isinstance(output, bytes):
return output.decode() return output.decode()
else: else:
return output return output
@ -178,7 +183,7 @@ class Runroot(kobo.log.LoggingBase):
# If the output dir is defined, change the permissions of files generated # If the output dir is defined, change the permissions of files generated
# by the runroot task, so the Pungi user can access them. # by the runroot task, so the Pungi user can access them.
if chown_paths: if chown_paths:
paths = " ".join(shlex_quote(pth) for pth in chown_paths) paths = " ".join(shlex.quote(pth) for pth in chown_paths)
command += " ; EXIT_CODE=$?" command += " ; EXIT_CODE=$?"
# Make the files world readable # Make the files world readable
command += " ; chmod -R a+r %s" % paths command += " ; chmod -R a+r %s" % paths
@ -230,9 +235,9 @@ class Runroot(kobo.log.LoggingBase):
fmt_dict["runroot_key"] = runroot_key fmt_dict["runroot_key"] = runroot_key
self._ssh_run(hostname, user, run_template, fmt_dict, log_file=log_file) self._ssh_run(hostname, user, run_template, fmt_dict, log_file=log_file)
fmt_dict[ fmt_dict["command"] = (
"command" "rpm -qa --qf='%{name}-%{version}-%{release}.%{arch}\n'"
] = "rpm -qa --qf='%{name}-%{version}-%{release}.%{arch}\n'" )
buildroot_rpms = self._ssh_run( buildroot_rpms = self._ssh_run(
hostname, hostname,
user, user,
@ -314,7 +319,8 @@ class Runroot(kobo.log.LoggingBase):
arch, arch,
args, args,
channel=runroot_channel, channel=runroot_channel,
chown_uid=os.getuid(), # We want to change owner only if shared NFS directory is used.
chown_uid=os.getuid() if kwargs.get("mounts") else None,
**kwargs **kwargs
) )
@ -325,6 +331,7 @@ class Runroot(kobo.log.LoggingBase):
% (output["task_id"], log_file) % (output["task_id"], log_file)
) )
self._result = output self._result = output
return output["task_id"]
def run_pungi_ostree(self, args, log_file=None, arch=None, **kwargs): def run_pungi_ostree(self, args, log_file=None, arch=None, **kwargs):
""" """
@ -381,3 +388,75 @@ class Runroot(kobo.log.LoggingBase):
return self._result return self._result
else: else:
raise ValueError("Unknown runroot_method %r." % self.runroot_method) raise ValueError("Unknown runroot_method %r." % self.runroot_method)
@util.retry(wait_on=requests.exceptions.RequestException)
def _download_file(url, dest):
# contextlib.closing is only needed in requests<2.18
with contextlib.closing(requests.get(url, stream=True, timeout=5)) as r:
if r.status_code == 404:
raise RuntimeError("Archive %s not found" % url)
r.raise_for_status()
with open(dest, "wb") as f:
shutil.copyfileobj(r.raw, f)
def _download_archive(task_id, fname, archive_url, dest_dir):
"""Download file from URL to a destination, with retries."""
temp_file = os.path.join(dest_dir, fname)
_download_file(archive_url, temp_file)
return temp_file
def _extract_archive(task_id, fname, archive_file, dest_path):
"""Extract the archive into given destination.
All items of the archive must match the name of the archive, i.e. all
paths in foo.tar.gz must start with foo/.
"""
basename = os.path.basename(fname).split(".")[0]
strip_prefix = basename + "/"
with tarfile.open(archive_file, "r") as archive:
for member in archive.getmembers():
# Check if each item is either the root directory or is within it.
if member.name != basename and not member.name.startswith(strip_prefix):
raise RuntimeError(
"Archive %s from task %s contains file without expected prefix: %s"
% (fname, task_id, member)
)
dest = os.path.join(dest_path, member.name[len(strip_prefix) :])
if member.isdir():
# Create directories where needed...
util.makedirs(dest)
elif member.isfile():
# ... and extract files into them.
with open(dest, "wb") as dest_obj:
shutil.copyfileobj(archive.extractfile(member), dest_obj)
elif member.islnk():
# We have a hardlink. Let's also link it.
linked_file = os.path.join(
dest_path, member.linkname[len(strip_prefix) :]
)
os.link(linked_file, dest)
else:
# Any other file type is an error.
raise RuntimeError(
"Unexpected file type in %s from task %s: %s"
% (fname, task_id, member)
)
def download_and_extract_archive(compose, task_id, fname, destination):
"""Download a tar archive from task outputs and extract it to the destination."""
koji = kojiwrapper.KojiWrapper(compose).koji_module
# Koji API provides downloadTaskOutput method, but it's not usable as it
# will attempt to load the entire file into memory.
# So instead let's generate a path and attempt to convert it to a URL.
server_path = os.path.join(koji.pathinfo.task(task_id), fname)
archive_url = server_path.replace(koji.config.topdir, koji.config.topurl)
tmp_dir = compose.mkdtemp(prefix="buildinstall-download")
try:
local_path = _download_archive(task_id, fname, archive_url, tmp_dir)
_extract_archive(task_id, fname, local_path, destination)
finally:
shutil.rmtree(tmp_dir, ignore_errors=True)
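
Editor's note: the prefix check in `_extract_archive` above doubles as a path-traversal guard, since nothing outside the archive's own top-level directory can reach the destination. A minimal standalone sketch of the same idea (function and file names here are illustrative, not part of the patch):

```python
import tarfile


def iter_safe_members(archive_path, prefix):
    # Reject any member that does not live under the expected top-level
    # directory, so "../evil" or absolute names never reach the extractor.
    with tarfile.open(archive_path, "r") as archive:
        for member in archive.getmembers():
            if member.name != prefix and not member.name.startswith(prefix + "/"):
                raise RuntimeError("unexpected member: %s" % member.name)
            yield member
```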

View File

@@ -0,0 +1,63 @@
import argparse
import os
import re
import time
from pungi.util import format_size
LOCK_RE = re.compile(r".*\.lock(\|[A-Za-z0-9]+)*$")
def should_be_cleaned_up(path, st, threshold):
if st.st_nlink == 1 and st.st_mtime < threshold:
# No other instances, older than limit
return True
if LOCK_RE.match(path) and st.st_mtime < threshold:
# Suspiciously old lock
return True
return False
def main():
parser = argparse.ArgumentParser()
parser.add_argument("CACHE_DIR")
parser.add_argument("-n", "--dry-run", action="store_true")
parser.add_argument("--verbose", action="store_true")
parser.add_argument(
"--max-age",
help="how old files should be considered for deletion",
default=7,
type=int,
)
args = parser.parse_args()
topdir = os.path.abspath(args.CACHE_DIR)
max_age = args.max_age * 24 * 3600
cleaned_up = 0
threshold = time.time() - max_age
for dirpath, dirnames, filenames in os.walk(topdir):
for f in filenames:
filepath = os.path.join(dirpath, f)
st = os.stat(filepath)
if should_be_cleaned_up(filepath, st, threshold):
if args.verbose:
print("RM %s" % filepath)
cleaned_up += st.st_size
if not args.dry_run:
os.remove(filepath)
if not dirnames and not filenames:
if args.verbose:
print("RMDIR %s" % dirpath)
if not args.dry_run:
os.rmdir(dirpath)
if args.dry_run:
print("Would reclaim %s bytes." % format_size(cleaned_up))
else:
print("Reclaimed %s bytes." % format_size(cleaned_up))

View File

@@ -4,13 +4,12 @@ from __future__ import absolute_import
from __future__ import print_function

import argparse
+import configparser
import json
import os
import shutil
import sys

-from six.moves import configparser
import kobo.conf

import pungi.checks
import pungi.util
@@ -171,32 +170,11 @@ def main():
    group.add_argument(
        "--offline", action="store_true", help="Do not resolve git references."
    )
-   parser.add_argument(
-       "--multi",
-       metavar="DIR",
-       help=(
-           "Treat source as config for pungi-orchestrate and store dump into "
-           "given directory."
-       ),
-   )
    args = parser.parse_args()

    defines = config_utils.extract_defines(args.define)

-   if args.multi:
-       if len(args.sources) > 1:
-           parser.error("Only one multi config can be specified.")
-       return dump_multi_config(
-           args.sources[0],
-           dest=args.multi,
-           defines=defines,
-           just_dump=args.just_dump,
-           event=args.freeze_event,
-           offline=args.offline,
-       )
    return process_file(
        args.sources,
        defines=defines,
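
Editor's note: dropping `six.moves` is mechanical on a Python-3-only codebase, because the module comes straight from the standard library. A minimal sketch:

```python
import configparser

parser = configparser.ConfigParser()
parser.read_string("[pungi]\nfamily = Fedora\n")
print(parser.get("pungi", "family"))  # -> Fedora
```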

View File

@@ -8,8 +8,6 @@ import json
import os
import sys

-import six
-
import pungi.checks
import pungi.compose
import pungi.paths
@@ -56,7 +54,7 @@ class ValidationCompose(pungi.compose.Compose):
def read_variants(compose, config):
    with pungi.util.temp_dir() as tmp_dir:
        scm_dict = compose.conf["variants_file"]
-       if isinstance(scm_dict, six.string_types) and scm_dict[0] != "/":
+       if isinstance(scm_dict, str) and scm_dict[0] != "/":
            config_dir = os.path.dirname(config)
            scm_dict = os.path.join(config_dir, scm_dict)
        files = pungi.wrappers.scm.get_file_from_scm(scm_dict, tmp_dir)
@@ -128,7 +126,6 @@ def run(config, topdir, has_old, offline, defined_variables, schema_overrides):
        pungi.phases.OSTreePhase(compose),
        pungi.phases.CreateisoPhase(compose, buildinstall_phase),
        pungi.phases.ExtraIsosPhase(compose, buildinstall_phase),
-       pungi.phases.LiveImagesPhase(compose),
        pungi.phases.LiveMediaPhase(compose),
        pungi.phases.ImageBuildPhase(compose),
        pungi.phases.ImageChecksumPhase(compose),
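
Editor's note: this is the same shape as the `six.PY3` removal in the runroot change above; on Python 3 the compatibility aliases collapse to builtins. A trivial sketch:

```python
# six.string_types == (str,) on Python 3, so the isinstance check
# simplifies to the builtin; bytes still needs an explicit decode.
scm_dict = "variants.xml"
if isinstance(scm_dict, str) and not scm_dict.startswith("/"):
    print("relative path, resolve against the config directory")
```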

View File

@@ -5,35 +5,43 @@ import os
import subprocess
import tempfile
from shutil import rmtree
-from typing import AnyStr, List, Dict, Optional
+from typing import (
+    AnyStr,
+    List,
+    Dict,
+    Optional,
+)

import createrepo_c as cr
import requests
import yaml
from dataclasses import dataclass, field

-from .create_packages_json import PackagesGenerator, RepoInfo
+from .create_packages_json import (
+    PackagesGenerator,
+    RepoInfo,
+    VariantInfo,
+)


@dataclass
-class ExtraRepoInfo(RepoInfo):
+class ExtraVariantInfo(VariantInfo):

    modules: List[AnyStr] = field(default_factory=list)
    packages: List[AnyStr] = field(default_factory=list)
-   is_remote: bool = True


class CreateExtraRepo(PackagesGenerator):

    def __init__(
        self,
-       repos: List[ExtraRepoInfo],
+       variants: List[ExtraVariantInfo],
        bs_auth_token: AnyStr,
        local_repository_path: AnyStr,
        clear_target_repo: bool = True,
    ):
-       self.repos = []  # type: List[ExtraRepoInfo]
-       super().__init__(repos, [], [])
+       self.variants = []  # type: List[ExtraVariantInfo]
+       super().__init__(variants, [], [])
        self.auth_headers = {
            'Authorization': f'Bearer {bs_auth_token}',
        }
@@ -92,7 +100,7 @@ class CreateExtraRepo(PackagesGenerator):
            arch: AnyStr,
            packages: Optional[List[AnyStr]] = None,
            modules: Optional[List[AnyStr]] = None,
-   ) -> List[ExtraRepoInfo]:
+   ) -> List[ExtraVariantInfo]:
        """
        Get info about a BS repo and save it to
        an object of class ExtraRepoInfo
@@ -110,7 +118,7 @@ class CreateExtraRepo(PackagesGenerator):
        api_uri = 'api/v1'
        bs_repo_suffix = 'build_repos'

-       repos_info = []
+       variants_info = []

        # get the full info about a BS repo
        repo_request = requests.get(
@@ -132,7 +140,13 @@ class CreateExtraRepo(PackagesGenerator):
            # skip repo with unsuitable architecture
            if architecture != arch:
                continue
-           repo_info = ExtraRepoInfo(
+           variant_info = ExtraVariantInfo(
+               name=f'{build_id}-{platform_name}-{architecture}',
+               arch=architecture,
+               packages=packages,
+               modules=modules,
+               repos=[
+                   RepoInfo(
                path=os.path.join(
                    bs_url,
                    bs_repo_suffix,
@@ -140,14 +154,12 @@ class CreateExtraRepo(PackagesGenerator):
                    platform_name,
                ),
                folder=architecture,
-               name=f'{build_id}-{platform_name}-{architecture}',
-               arch=architecture,
                is_remote=True,
-               packages=packages,
-               modules=modules,
            )
-           repos_info.append(repo_info)
-       return repos_info
+               ]
+           )
+           variants_info.append(variant_info)
+       return variants_info

    def _create_local_extra_repo(self):
        """
@@ -184,7 +196,7 @@ class CreateExtraRepo(PackagesGenerator):
    def _download_rpm_to_local_repo(
            self,
            package_location: AnyStr,
-           repo_info: ExtraRepoInfo,
+           repo_info: RepoInfo,
    ) -> None:
        """
        Download a rpm package from a remote repo and save it to a local repo
@@ -212,21 +224,22 @@ class CreateExtraRepo(PackagesGenerator):
    def _download_packages(
            self,
            packages: Dict[AnyStr, cr.Package],
-           repo_info: ExtraRepoInfo
+           variant_info: ExtraVariantInfo
    ):
        """
        Download all defined packages from a remote repo
        :param packages: information about all packages (including
                         modularity) in a remote repo
-       :param repo_info: information about a remote repo
+       :param variant_info: information about a remote variant
        """
        for package in packages.values():
            package_name = package.name
            # Skip a current package from a remote repo if we defined
            # the list packages and a current package doesn't belong to it
-           if repo_info.packages and \
-                   package_name not in repo_info.packages:
+           if variant_info.packages and \
+                   package_name not in variant_info.packages:
                continue
+           for repo_info in variant_info.repos:
                self._download_rpm_to_local_repo(
                    package_location=package.location_href,
                    repo_info=repo_info,
@@ -235,14 +248,14 @@ class CreateExtraRepo(PackagesGenerator):
    def _download_modules(
            self,
            modules_data: List[Dict],
-           repo_info: ExtraRepoInfo,
+           variant_info: ExtraVariantInfo,
            packages: Dict[AnyStr, cr.Package]
    ):
        """
        Download all defined modularity packages and their data from
        a remote repo
        :param modules_data: information about all modules in a remote repo
-       :param repo_info: information about a remote repo
+       :param variant_info: information about a remote variant
        :param packages: information about all packages (including
                         modularity) in a remote repo
        """
@@ -250,8 +263,8 @@ class CreateExtraRepo(PackagesGenerator):
            module_data = module['data']
            # Skip a current module from a remote repo if we defined
            # the list modules and a current module doesn't belong to it
-           if repo_info.modules and \
-                   module_data['name'] not in repo_info.modules:
+           if variant_info.modules and \
+                   module_data['name'] not in variant_info.modules:
                continue
            # we should add info about a module if the local repodata
            # doesn't have it
@@ -266,11 +279,12 @@ class CreateExtraRepo(PackagesGenerator):
                    # Empty repo_info.packages means that we will download
                    # all packages from repo including
                    # the modularity packages
-                   if not repo_info.packages:
+                   if not variant_info.packages:
                        break
                    # skip a rpm if it doesn't belong to a processed repo
                    if rpm not in packages:
                        continue
+                   for repo_info in variant_info.repos:
                        self._download_rpm_to_local_repo(
                            package_location=packages[rpm].location_href,
                            repo_info=repo_info,
@@ -284,23 +298,12 @@ class CreateExtraRepo(PackagesGenerator):
        3. Call `createrepo_c` which creates a local repo
           with the right repodata
        """
-       for repo_info in self.repos:
-           packages = {}  # type: Dict[AnyStr, cr.Package]
-           repomd_records = self._get_repomd_records(
-               repo_info=repo_info,
-           )
-           repomd_records_dict = {}  # type: Dict[str, str]
-           self._download_repomd_records(
-               repo_info=repo_info,
-               repomd_records=repomd_records,
-               repomd_records_dict=repomd_records_dict,
-           )
-           packages_iterator = cr.PackageIterator(
-               primary_path=repomd_records_dict['primary'],
-               filelists_path=repomd_records_dict['filelists'],
-               other_path=repomd_records_dict['other'],
-               warningcb=self._warning_callback,
-           )
+       for variant_info in self.variants:
+           for repo_info in variant_info.repos:
+               repomd_records = self._get_repomd_records(
+                   repo_info=repo_info,
+               )
+               packages_iterator = self.get_packages_iterator(repo_info)
            # parse the repodata (including modules.yaml.gz)
            modules_data = self._parse_module_repomd_record(
                repo_info=repo_info,
@@ -316,12 +319,12 @@ class CreateExtraRepo(PackagesGenerator):
            }
            self._download_modules(
                modules_data=modules_data,
-               repo_info=repo_info,
+               variant_info=variant_info,
                packages=packages,
            )
            self._download_packages(
                packages=packages,
-               repo_info=repo_info,
+               variant_info=variant_info,
            )

        self._dump_local_modules_yaml()
@@ -333,7 +336,6 @@ def create_parser():
    parser.add_argument(
        '--bs-auth-token',
        help='Auth token for Build System',
-       required=True,
    )
    parser.add_argument(
        '--local-repo-path',
@@ -402,11 +404,16 @@ def cli_main():
            packages = packages.split()
        if repo.startswith('http://'):
            repos_info.append(
-               ExtraRepoInfo(
-                   path=repo,
-                   folder=repo_folder,
+               ExtraVariantInfo(
                    name=repo_folder,
                    arch=repo_arch,
+                   repos=[
+                       RepoInfo(
+                           path=repo,
+                           folder=repo_folder,
+                           is_remote=True,
+                       )
+                   ],
                    modules=modules,
                    packages=packages,
                )
@@ -422,7 +429,7 @@ def cli_main():
        )
    )
    cer = CreateExtraRepo(
-       repos=repos_info,
+       variants=repos_info,
        bs_auth_token=args.bs_auth_token,
        local_repository_path=args.local_repo_path,
        clear_target_repo=args.clear_local_repo,
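
Editor's note: the shape of this refactor is easiest to see with a literal value. Variant metadata (name, arch, package/module filters) now lives on `ExtraVariantInfo`, while each attached `RepoInfo` carries only location details. A hedged sketch, assuming the two dataclasses above are in scope; all values are illustrative:

```python
variant = ExtraVariantInfo(
    name='42-el8-x86_64',  # illustrative build/platform/arch combination
    arch='x86_64',
    packages=['kernel'],
    modules=[],
    repos=[
        RepoInfo(
            path='http://bs.example.com/build_repos/42/el8',
            folder='x86_64',
            is_remote=True,
        ),
    ],
)
```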

View File

@@ -9,22 +9,41 @@ https://github.com/rpm-software-management/createrepo_c/blob/master/examples/pyt
import argparse
import gzip
import json
+import logging
import lzma
import os
import re
import tempfile
from collections import defaultdict
-from typing import AnyStr, Dict, List, Optional, Any, Iterator
+from itertools import tee
+from pathlib import Path
+from typing import (
+    AnyStr,
+    Dict,
+    List,
+    Any,
+    Iterator,
+    Optional,
+    Tuple,
+    Union,
+)

import binascii
-import createrepo_c as cr
-import dnf.subject
-import hawkey
+from urllib.parse import urljoin
import requests
import rpm
import yaml
-from createrepo_c import Package, PackageIterator
-from dataclasses import dataclass
+from createrepo_c import (
+    Package,
+    PackageIterator,
+    Repomd,
+    RepomdRecord,
+)
+from dataclasses import dataclass, field
+from kobo.rpmlib import parse_nvra

+logging.basicConfig(level=logging.INFO)


def _is_compressed_file(first_two_bytes: bytes, initial_bytes: bytes):
@@ -51,21 +70,31 @@ class RepoInfo:
    # 'appstream', 'baseos', etc.
    # Or 'http://koji.cloudlinux.com/mirrors/rhel_mirror' if you are
    # using remote repo
-   path: AnyStr
+   path: str
    # name of folder with a repodata folder. E.g. 'baseos', 'appstream', etc
-   folder: AnyStr
-   # name of repo. E.g. 'BaseOS', 'AppStream', etc
-   name: AnyStr
-   # architecture of repo. E.g. 'x86_64', 'i686', etc
-   arch: AnyStr
+   folder: str
    # Is a repo remote or local
    is_remote: bool
    # Is a reference repository (usually it's a RHEL repo)
    # Layout of packages from such repository will be taken as example
-   # Only layout of specific package (which don't exist
+   # Only layout of specific package (which doesn't exist
    # in a reference repository) will be taken as example
    is_reference: bool = False
-   strict_arch: bool = False
+   # The packages from 'present' repo will be added to a variant.
+   # The packages from 'absent' repo will be removed from a variant.
+   repo_type: str = 'present'
+
+
+@dataclass
+class VariantInfo:
+   # name of variant. E.g. 'BaseOS', 'AppStream', etc
+   name: AnyStr
+   # architecture of variant. E.g. 'x86_64', 'i686', etc
+   arch: AnyStr
+   # The packages which will be not added to a variant
+   excluded_packages: List[str] = field(default_factory=list)
+   # Repos of a variant
+   repos: List[RepoInfo] = field(default_factory=list)


class PackagesGenerator:
@@ -81,22 +110,36 @@
    def __init__(
            self,
-           repos: List[RepoInfo],
+           variants: List[VariantInfo],
            excluded_packages: List[AnyStr],
            included_packages: List[AnyStr],
    ):
-       self.repos = repos
+       self.variants = variants
+       self.pkgs = dict()
        self.excluded_packages = excluded_packages
        self.included_packages = included_packages
-       self.tmp_files = []
+       self.tmp_files = []  # type: list[Path]
        for arch, arch_list in self.addon_repos.items():
            self.repo_arches[arch].extend(arch_list)
            self.repo_arches[arch].append(arch)

    def __del__(self):
        for tmp_file in self.tmp_files:
-           if os.path.exists(tmp_file):
-               os.remove(tmp_file)
+           if tmp_file.exists():
+               tmp_file.unlink()
+
+   @staticmethod
+   def _get_full_repo_path(repo_info: RepoInfo):
+       result = os.path.join(
+           repo_info.path,
+           repo_info.folder
+       )
+       if repo_info.is_remote:
+           result = urljoin(
+               repo_info.path + '/',
+               repo_info.folder,
+           )
+       return result

    @staticmethod
    def _warning_callback(warning_type, message):
@@ -106,8 +149,7 @@
        print(f'Warning message: "{message}"; warning type: "{warning_type}"')
        return True

-   @staticmethod
-   def get_remote_file_content(file_url: AnyStr) -> AnyStr:
+   def get_remote_file_content(self, file_url: AnyStr) -> AnyStr:
        """
        Get content from a remote file and write it to a temp file
        :param file_url: url of a remote file
@@ -120,15 +162,16 @@
        file_request.raise_for_status()
        with tempfile.NamedTemporaryFile(delete=False) as file_stream:
            file_stream.write(file_request.content)
+           self.tmp_files.append(Path(file_stream.name))
            return file_stream.name

    @staticmethod
-   def _parse_repomd(repomd_file_path: AnyStr) -> cr.Repomd:
+   def _parse_repomd(repomd_file_path: AnyStr) -> Repomd:
        """
        Parse file repomd.xml and create object Repomd
        :param repomd_file_path: path to local repomd.xml
        """
-       return cr.Repomd(repomd_file_path)
+       return Repomd(repomd_file_path)

    @classmethod
    def _parse_modules_file(
@@ -139,7 +182,7 @@
        """
        Parse modules.yaml.gz and returns parsed data
        :param modules_file_path: path to local modules.yaml.gz
-       :return: List of dict for each modules in a repo
+       :return: List of dict for each module in a repo
        """

        with open(modules_file_path, 'rb') as modules_file:
@@ -156,7 +199,7 @@
    def _get_repomd_records(
            self,
            repo_info: RepoInfo,
-   ) -> List[cr.RepomdRecord]:
+   ) -> List[RepomdRecord]:
        """
        Get, parse file repomd.xml and extract from it repomd records
        :param repo_info: structure which contains info about a current repo
@@ -169,9 +212,15 @@
            'repomd.xml',
        )
        if repo_info.is_remote:
+           repomd_file_path = urljoin(
+               urljoin(
+                   repo_info.path + '/',
+                   repo_info.folder
+               ) + '/',
+               'repodata/repomd.xml'
+           )
            repomd_file_path = self.get_remote_file_content(repomd_file_path)
-       else:
-           repomd_file_path = repomd_file_path

        repomd_object = self._parse_repomd(repomd_file_path)
        if repo_info.is_remote:
            os.remove(repomd_file_path)
@@ -180,7 +229,7 @@
    def _download_repomd_records(
            self,
            repo_info: RepoInfo,
-           repomd_records: List[cr.RepomdRecord],
+           repomd_records: List[RepomdRecord],
            repomd_records_dict: Dict[str, str],
    ):
        """
@@ -204,19 +253,17 @@
            if repo_info.is_remote:
                repomd_record_file_path = self.get_remote_file_content(
                    repomd_record_file_path)
-               self.tmp_files.append(repomd_record_file_path)
            repomd_records_dict[repomd_record.type] = repomd_record_file_path

    def _parse_module_repomd_record(
            self,
            repo_info: RepoInfo,
-           repomd_records: List[cr.RepomdRecord],
+           repomd_records: List[RepomdRecord],
    ) -> List[Dict]:
        """
        Download repomd records
        :param repo_info: structure which contains info about a current repo
        :param repomd_records: list with repomd records
-       :param repomd_records_dict: dict with paths to repodata files
        """
        for repomd_record in repomd_records:
            if repomd_record.type != 'modules':
@@ -229,10 +276,10 @@
            if repo_info.is_remote:
                repomd_record_file_path = self.get_remote_file_content(
                    repomd_record_file_path)
-               self.tmp_files.append(repomd_record_file_path)
            return list(self._parse_modules_file(
                repomd_record_file_path,
            ))
+       return []

    @staticmethod
    def compare_pkgs_version(package_1: Package, package_2: Package) -> int:
@@ -248,21 +295,13 @@
        )
        return rpm.labelCompare(version_tuple_1, version_tuple_2)
-   def generate_packages_json(
-           self
-   ) -> Dict[AnyStr, Dict[AnyStr, Dict[AnyStr, List[AnyStr]]]]:
-       """
-       Generate packages.json
-       """
-       packages_json = defaultdict(
-           lambda: defaultdict(
-               lambda: defaultdict(
-                   list,
-               )
-           )
-       )
-       all_packages = defaultdict(lambda: {'variants': list()})
-       for repo_info in self.repos:
-           repomd_records = self._get_repomd_records(
-               repo_info=repo_info,
-           )
@@ -272,157 +311,146 @@
-           repomd_records_dict = {}  # type: Dict[str, str]
-           self._download_repomd_records(
-               repo_info=repo_info,
-               repomd_records=repomd_records,
-               repomd_records_dict=repomd_records_dict,
-           )
-           packages_iterator = PackageIterator(
-               primary_path=repomd_records_dict['primary'],
-               filelists_path=repomd_records_dict['filelists'],
-               other_path=repomd_records_dict['other'],
-               warningcb=self._warning_callback,
-           )
-           for package in packages_iterator:
-               if package.arch not in self.repo_arches[repo_info.arch]:
-                   package_arch = repo_info.arch
-               else:
-                   package_arch = package.arch
-               package_key = f'{package.name}.{package_arch}'
-               if 'module' in package.release and not any(
-                       re.search(included_package, package.name)
-                       for included_package in self.included_packages
-               ):
-                   # Even a module package will be added to packages.json if
-                   # it presents in the list of included packages
-                   continue
-               if package_key not in all_packages:
-                   all_packages[package_key]['variants'].append(
-                       (repo_info.name, repo_info.arch)
-                   )
-                   all_packages[package_key]['arch'] = package_arch
-                   all_packages[package_key]['package'] = package
-                   all_packages[package_key]['type'] = repo_info.is_reference
-               # replace an older package if it's not reference or
-               # a newer package is from reference repo
-               elif (not all_packages[package_key]['type'] or
-                       all_packages[package_key]['type'] ==
-                       repo_info.is_reference) and \
-                       self.compare_pkgs_version(
-                           package,
-                           all_packages[package_key]['package']
-                       ) > 0:
-                   all_packages[package_key]['variants'] = [
-                       (repo_info.name, repo_info.arch)
-                   ]
-                   all_packages[package_key]['arch'] = package_arch
-                   all_packages[package_key]['package'] = package
-               elif self.compare_pkgs_version(
-                       package,
-                       all_packages[package_key]['package']
-               ) == 0:
-                   all_packages[package_key]['variants'].append(
-                       (repo_info.name, repo_info.arch)
-                   )
-       for package_dict in all_packages.values():
-           for variant_name, variant_arch in package_dict['variants']:
-               package_arch = package_dict['arch']
-               package = package_dict['package']
-               package_name = package.name
-               if any(re.search(excluded_package, package_name)
-                      for excluded_package in self.excluded_packages):
-                   continue
-               src_package_name = dnf.subject.Subject(
-                   package.rpm_sourcerpm,
-               ).get_nevra_possibilities(
-                   forms=hawkey.FORM_NEVRA,
-               )
-               if len(src_package_name) > 1:
-                   # We should stop utility if we can't get exact name of srpm
-                   raise ValueError(
-                       'We can\'t get exact name of srpm '
-                       f'by its NEVRA "{package.rpm_sourcerpm}"'
-                   )
-               else:
-                   src_package_name = src_package_name[0].name
-               # TODO: for x86_64 + i686 in one packages.json
-               #  don't remove!
-               # if package.arch in self.addon_repos[variant_arch]:
-               #     arches = self.addon_repos[variant_arch] + [variant_arch]
-               # else:
-               #     arches = [variant_arch]
-               # for arch in arches:
-               #     pkgs_list = packages_json[variant_name][
-               #         arch][src_package_name]
-               #     added_pkg = f'{package_name}.{package_arch}'
-               #     if added_pkg not in pkgs_list:
-               #         pkgs_list.append(added_pkg)
-               pkgs_list = packages_json[variant_name][
-                   variant_arch][src_package_name]
-               added_pkg = f'{package_name}.{package_arch}'
-               if added_pkg not in pkgs_list:
-                   pkgs_list.append(added_pkg)
-       return packages_json
+   def get_packages_iterator(
+           self,
+           repo_info: RepoInfo,
+   ) -> Union[PackageIterator, Iterator]:
+       full_repo_path = self._get_full_repo_path(repo_info)
+       pkgs_iterator = self.pkgs.get(full_repo_path)
+       if pkgs_iterator is None:
+           repomd_records = self._get_repomd_records(
+               repo_info=repo_info,
+           )
+           repomd_records_dict = {}  # type: Dict[str, str]
+           self._download_repomd_records(
+               repo_info=repo_info,
+               repomd_records=repomd_records,
+               repomd_records_dict=repomd_records_dict,
+           )
+           pkgs_iterator = PackageIterator(
+               primary_path=repomd_records_dict['primary'],
+               filelists_path=repomd_records_dict['filelists'],
+               other_path=repomd_records_dict['other'],
+               warningcb=self._warning_callback,
+           )
+       pkgs_iterator, self.pkgs[full_repo_path] = tee(pkgs_iterator)
+       return pkgs_iterator
+
+   def get_package_arch(
+           self,
+           package: Package,
+           variant_arch: str,
+   ) -> str:
+       result = variant_arch
+       if package.arch in self.repo_arches[variant_arch]:
+           result = package.arch
+       return result
+
+   def is_skipped_module_package(
+           self,
+           package: Package,
+           variant_arch: str,
+   ) -> bool:
+       package_key = self.get_package_key(package, variant_arch)
+       # Even a module package will be added to packages.json if
+       # it presents in the list of included packages
+       return 'module' in package.release and not any(
+           re.search(
+               f'^{included_pkg}$',
+               package_key,
+           ) or included_pkg in (package.name, package_key)
+           for included_pkg in self.included_packages
+       )
+
+   def is_excluded_package(
+           self,
+           package: Package,
+           variant_arch: str,
+           excluded_packages: List[str],
+   ) -> bool:
+       package_key = self.get_package_key(package, variant_arch)
+       return any(
+           re.search(
+               f'^{excluded_pkg}$',
+               package_key,
+           ) or excluded_pkg in (package.name, package_key)
+           for excluded_pkg in excluded_packages
+       )
+
+   @staticmethod
+   def get_source_rpm_name(package: Package) -> str:
+       source_rpm_nvra = parse_nvra(package.rpm_sourcerpm)
+       return source_rpm_nvra['name']
+
+   def get_package_key(self, package: Package, variant_arch: str) -> str:
+       return (
+           f'{package.name}.'
+           f'{self.get_package_arch(package, variant_arch)}'
+       )
+
+   def generate_packages_json(
+           self
+   ) -> Dict[AnyStr, Dict[AnyStr, Dict[AnyStr, List[AnyStr]]]]:
+       """
+       Generate packages.json
+       """
+       packages = defaultdict(lambda: defaultdict(lambda: {
+           'variants': list(),
+       }))
+       for variant_info in self.variants:
+           for repo_info in variant_info.repos:
+               is_reference = repo_info.is_reference
+               for package in self.get_packages_iterator(repo_info=repo_info):
+                   if self.is_skipped_module_package(
+                       package=package,
+                       variant_arch=variant_info.arch,
+                   ):
+                       continue
+                   if self.is_excluded_package(
+                       package=package,
+                       variant_arch=variant_info.arch,
+                       excluded_packages=self.excluded_packages,
+                   ):
+                       continue
+                   if self.is_excluded_package(
+                       package=package,
+                       variant_arch=variant_info.arch,
+                       excluded_packages=variant_info.excluded_packages,
+                   ):
+                       continue
+                   package_key = self.get_package_key(
+                       package,
+                       variant_info.arch,
+                   )
+                   source_rpm_name = self.get_source_rpm_name(package)
+                   package_info = packages[source_rpm_name][package_key]
+                   if 'is_reference' not in package_info:
+                       package_info['variants'].append(variant_info.name)
+                       package_info['is_reference'] = is_reference
+                       package_info['package'] = package
+                   elif not package_info['is_reference'] or \
+                           package_info['is_reference'] == is_reference and \
+                           self.compare_pkgs_version(
+                               package_1=package,
+                               package_2=package_info['package'],
+                           ) > 0:
+                       package_info['variants'] = [variant_info.name]
+                       package_info['is_reference'] = is_reference
+                       package_info['package'] = package
+                   elif self.compare_pkgs_version(
+                       package_1=package,
+                       package_2=package_info['package'],
+                   ) == 0 and repo_info.repo_type != 'absent':
+                       package_info['variants'].append(variant_info.name)
+       result = defaultdict(lambda: defaultdict(
+           lambda: defaultdict(list),
+       ))
+       for variant_info in self.variants:
+           for source_rpm_name, packages_info in packages.items():
+               for package_key, package_info in packages_info.items():
+                   variant_pkgs = result[variant_info.name][variant_info.arch]
+                   if variant_info.name not in package_info['variants']:
+                       continue
+                   variant_pkgs[source_rpm_name].append(package_key)
+       return result
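
Editor's note: the `self.pkgs` cache plus `itertools.tee` is what lets several variants share one repo without re-parsing its repodata. Each call re-splits the stored branch, hands one copy to the caller, and keeps the other for next time. A reduced sketch of the pattern:

```python
from itertools import tee

_cache = {}


def cached_packages(key, make_iterator):
    it = _cache.get(key)
    if it is None:
        it = make_iterator()        # expensive parse happens only once
    branch, _cache[key] = tee(it)   # split: one copy out, one copy kept
    return branch
```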
def create_parser():
    parser = argparse.ArgumentParser()
    parser.add_argument(
-       '--repo-path',
-       action='append',
-       help='Path to a folder with repofolders. E.g. "/var/repos" or '
-            '"http://koji.cloudlinux.com/mirrors/rhel_mirror"',
-       required=True,
-   )
-   parser.add_argument(
-       '--repo-folder',
-       action='append',
-       help='A folder which contains folder repodata . E.g. "baseos-stream"',
-       required=True,
-   )
-   parser.add_argument(
-       '--repo-arch',
-       action='append',
-       help='What architecture packages a repository contains. E.g. "x86_64"',
-       required=True,
-   )
-   parser.add_argument(
-       '--repo-name',
-       action='append',
-       help='Name of a repository. E.g. "AppStream"',
-       required=True,
-   )
-   parser.add_argument(
-       '--is-remote',
-       action='append',
-       type=str,
-       help='A repository is remote or local',
-       choices=['yes', 'no'],
-       required=True,
-   )
-   parser.add_argument(
-       '--is-reference',
-       action='append',
-       type=str,
-       help='A repository is used as reference for packages layout',
-       choices=['yes', 'no'],
-       required=True,
-   )
-   parser.add_argument(
-       '--excluded-packages',
-       nargs='+',
-       type=str,
-       default=[],
-       help='A list of globally excluded packages from generated json.'
-            'All of list elements should be separated by space',
-       required=False,
-   )
-   parser.add_argument(
-       '--included-packages',
-       nargs='+',
-       type=str,
-       default=[],
-       help='A list of globally included packages from generated json.'
-            'All of list elements should be separated by space',
+       '-c',
+       '--config',
+       type=Path,
+       default=Path('config.yaml'),
        required=False,
+       help='Path to a config',
    )
    parser.add_argument(
+       '-o',
        '--json-output-path',
        type=str,
        help='Full path to output json file',
@@ -432,30 +460,45 @@ def create_parser():
    return parser
def read_config(config_path: Path) -> Optional[Dict]:
if not config_path.exists():
logging.error('A config by path "%s" does not exist', config_path)
exit(1)
with config_path.open('r') as config_fd:
return yaml.safe_load(config_fd)
def process_config(config_data: Dict) -> Tuple[
List[VariantInfo],
List[str],
List[str],
]:
excluded_packages = config_data.get('excluded_packages', [])
included_packages = config_data.get('included_packages', [])
variants = [VariantInfo(
name=variant_name,
arch=variant_info['arch'],
excluded_packages=variant_info.get('excluded_packages', []),
repos=[RepoInfo(
path=variant_repo['path'],
folder=variant_repo['folder'],
is_remote=variant_repo['remote'],
is_reference=variant_repo['reference'],
repo_type=variant_repo.get('repo_type', 'present'),
) for variant_repo in variant_info['repos']]
) for variant_name, variant_info in config_data['variants'].items()]
return variants, excluded_packages, included_packages
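
Editor's note: `process_config` implies a config layout like the following. This is a hedged reconstruction from the keys it reads, not a documented schema, and it assumes `process_config` above is in scope; all values are illustrative:

```python
config_data = {
    'excluded_packages': [],
    'included_packages': [],
    'variants': {
        'BaseOS': {
            'arch': 'x86_64',
            'excluded_packages': [],
            'repos': [{
                'path': 'http://example.com/repos',  # illustrative
                'folder': 'baseos',
                'remote': True,
                'reference': False,
                'repo_type': 'present',
            }],
        },
    },
}
variants, excluded, included = process_config(config_data=config_data)
```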
def cli_main():
    args = create_parser().parse_args()
-   repos = []
-   for repo_path, repo_folder, repo_name, \
-           repo_arch, is_remote, is_reference in zip(
-               args.repo_path,
-               args.repo_folder,
-               args.repo_name,
-               args.repo_arch,
-               args.is_remote,
-               args.is_reference,
-           ):
-       repos.append(RepoInfo(
-           path=repo_path,
-           folder=repo_folder,
-           name=repo_name,
-           arch=repo_arch,
-           is_remote=True if is_remote == 'yes' else False,
-           is_reference=True if is_reference == 'yes' else False
-       ))
+   variants, excluded_packages, included_packages = process_config(
+       config_data=read_config(args.config)
+   )
    pg = PackagesGenerator(
-       repos=repos,
-       excluded_packages=args.excluded_packages,
-       included_packages=args.included_packages,
+       variants=variants,
+       excluded_packages=excluded_packages,
+       included_packages=included_packages,
    )
    result = pg.generate_packages_json()
    with open(args.json_output_path, 'w') as packages_file:

View File

@@ -14,6 +14,9 @@ def send(cmd, data):
    topic = "compose.%s" % cmd.replace("-", ".").lower()
    try:
        msg = fedora_messaging.api.Message(topic="pungi.{}".format(topic), body=data)
+       if cmd == "ostree":
+           # https://pagure.io/fedora-infrastructure/issue/10899
+           msg.priority = 3
        fedora_messaging.api.publish(msg)
    except fedora_messaging.exceptions.PublishReturned as e:
        print("Fedora Messaging broker rejected message %s: %s" % (msg.id, e))

View File

@@ -2,6 +2,7 @@ import gzip
import lzma
import os
from argparse import ArgumentParser, FileType
+from glob import iglob
from io import BytesIO
from pathlib import Path
from typing import List, AnyStr, Iterable, Union, Optional
@@ -30,8 +31,11 @@ def grep_list_of_modules_yaml(repos_path: AnyStr) -> Iterable[BytesIO]:
    """
    return (
-       read_modules_yaml_from_specific_repo(repo_path=path.parent)
-       for path in Path(repos_path).rglob('repodata')
+       read_modules_yaml_from_specific_repo(repo_path=Path(path).parent)
+       for path in iglob(
+           str(Path(repos_path).joinpath('**/repodata')),
+           recursive=True
+       )
    )
@@ -55,7 +59,12 @@ def read_modules_yaml_from_specific_repo(
            repo_path + '/',
            'repodata/repomd.xml',
        )
-       repomd_file_path = PackagesGenerator.get_remote_file_content(
+       packages_generator = PackagesGenerator(
+           variants=[],
+           excluded_packages=[],
+           included_packages=[],
+       )
+       repomd_file_path = packages_generator.get_remote_file_content(
            file_url=repomd_url
        )
    else:
@@ -73,7 +82,12 @@ def read_modules_yaml_from_specific_repo(
            repo_path + '/',
            record.location_href,
        )
-       modules_yaml_path = PackagesGenerator.get_remote_file_content(
+       packages_generator = PackagesGenerator(
+           variants=[],
+           excluded_packages=[],
+           included_packages=[],
+       )
+       modules_yaml_path = packages_generator.get_remote_file_content(
            file_url=modules_yaml_url
        )
    else:
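
Editor's note: `iglob` yields string paths lazily where `Path.rglob` yielded `Path` objects, so each hit is wrapped back into `Path` to keep the rest of the code unchanged. A small sketch of the traversal (the top directory is illustrative):

```python
from glob import iglob
from pathlib import Path

# Each hit is <repo>/repodata; the repo root is its parent directory.
for hit in iglob(str(Path('/repos').joinpath('**/repodata')), recursive=True):
    print(Path(hit).parent)
```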

View File

@@ -1,39 +1,53 @@
+import re
from argparse import ArgumentParser
import os
+from glob import iglob
from typing import List
+from pathlib import Path

-from attr import dataclass
+from dataclasses import dataclass
from productmd.common import parse_nvra


@dataclass
class Package:
-   nvra: str
-   path: str
+   nvra: dict
+   path: Path


-def search_rpms(top_dir) -> List[Package]:
+def search_rpms(top_dir: Path) -> List[Package]:
    """
    Search for all *.rpm files recursively
    in given top directory
    Returns:
        list: list of paths
    """
-   rpms = []
-   for root, dirs, files in os.walk(top_dir):
-       path = root.split(os.sep)
-       for file in files:
-           if not file.endswith('.rpm'):
-               continue
-           nvra, _ = os.path.splitext(file)
-           rpms.append(
-               Package(nvra=nvra, path=os.path.join('/', *path, file))
-           )
-   return rpms
+   return [Package(
+       nvra=parse_nvra(Path(path).stem),
+       path=Path(path),
+   ) for path in iglob(str(top_dir.joinpath('**/*.rpm')), recursive=True)]


+def is_excluded_package(
+       package: Package,
+       excluded_packages: List[str],
+) -> bool:
+   package_key = f'{package.nvra["name"]}.{package.nvra["arch"]}'
+   return any(
+       re.search(
+           f'^{excluded_pkg}$',
+           package_key,
+       ) or excluded_pkg in (package.nvra['name'], package_key)
+       for excluded_pkg in excluded_packages
+   )


-def copy_rpms(packages: List[Package], target_top_dir: str):
+def copy_rpms(
+       packages: List[Package],
+       target_top_dir: Path,
+       excluded_packages: List[str],
+):
    """
    Search synced repos for rpms and prepare
    koji-like structure for pungi
@@ -45,30 +59,37 @@
        Nothing:
    """
    for package in packages:
-       info = parse_nvra(package.nvra)
-       target_arch_dir = os.path.join(target_top_dir, info['arch'])
+       if is_excluded_package(package, excluded_packages):
+           continue
+       target_arch_dir = target_top_dir.joinpath(package.nvra['arch'])
+       target_file = target_arch_dir.joinpath(package.path.name)
        os.makedirs(target_arch_dir, exist_ok=True)
-       target_file = os.path.join(target_arch_dir, os.path.basename(package.path))
-       if not os.path.exists(target_file):
+       if not target_file.exists():
            try:
                os.link(package.path, target_file)
            except OSError:
                # hardlink failed, try symlinking
-               os.symlink(package.path, target_file)
+               package.path.symlink_to(target_file)


def cli_main():
    parser = ArgumentParser()
-   parser.add_argument('-p', '--path', required=True)
-   parser.add_argument('-t', '--target', required=True)
+   parser.add_argument('-p', '--path', required=True, type=Path)
+   parser.add_argument('-t', '--target', required=True, type=Path)
+   parser.add_argument(
+       '-e',
+       '--excluded-packages',
+       required=False,
+       nargs='+',
+       type=str,
+       default=[],
+   )
    namespace = parser.parse_args()

    rpms = search_rpms(namespace.path)
-   copy_rpms(rpms, namespace.target)
+   copy_rpms(rpms, namespace.target, namespace.excluded_packages)


if __name__ == '__main__':
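
Editor's note: one subtlety worth flagging in the fallback branch above: `Path.symlink_to` creates the link at `self` pointing at its argument, so a faithful replacement for `os.symlink(src, dst)` is `dst.symlink_to(src)`. A hedged sketch of the intended pattern (function name is illustrative):

```python
import os
from pathlib import Path


def place_rpm(source: Path, target: Path) -> None:
    target.parent.mkdir(parents=True, exist_ok=True)
    try:
        os.link(source, target)    # cheap when both live on one filesystem
    except OSError:
        target.symlink_to(source)  # cross-device fallback: link at target
```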

View File

@@ -1,515 +0,0 @@
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; version 2 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Library General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, see <https://gnu.org/licenses/>.
from __future__ import absolute_import
from __future__ import print_function
import os
import selinux
import sys
from argparse import ArgumentParser, Action
from pungi import get_full_version
import pungi.gather
import pungi.config
import pungi.ks
def get_arguments(config):
parser = ArgumentParser()
class SetConfig(Action):
def __call__(self, parser, namespace, value, option_string=None):
config.set("pungi", self.dest, value)
parser.add_argument("--version", action="version", version=get_full_version())
# Pulled in from config file to be cli options as part of pykickstart conversion
parser.add_argument(
"--name",
dest="family",
type=str,
action=SetConfig,
help='the name for your distribution (defaults to "Fedora"), DEPRECATED',
)
parser.add_argument(
"--family",
dest="family",
action=SetConfig,
help='the family name for your distribution (defaults to "Fedora")',
)
parser.add_argument(
"--ver",
dest="version",
action=SetConfig,
help="the version of your distribution (defaults to datestamp)",
)
parser.add_argument(
"--flavor",
dest="variant",
action=SetConfig,
help="the flavor of your distribution spin (optional), DEPRECATED",
)
parser.add_argument(
"--variant",
dest="variant",
action=SetConfig,
help="the variant of your distribution spin (optional)",
)
parser.add_argument(
"--destdir",
dest="destdir",
action=SetConfig,
help="destination directory (defaults to current directory)",
)
parser.add_argument(
"--cachedir",
dest="cachedir",
action=SetConfig,
help="package cache directory (defaults to /var/cache/pungi)",
)
parser.add_argument(
"--bugurl",
dest="bugurl",
action=SetConfig,
help="the url for your bug system (defaults to http://bugzilla.redhat.com)",
)
parser.add_argument(
"--selfhosting",
action="store_true",
dest="selfhosting",
help="build a self-hosting tree by following build dependencies (optional)",
)
parser.add_argument(
"--fulltree",
action="store_true",
dest="fulltree",
help="build a tree that includes all packages built from corresponding source rpms (optional)", # noqa: E501
)
parser.add_argument(
"--nosource",
action="store_true",
dest="nosource",
help="disable gathering of source packages (optional)",
)
parser.add_argument(
"--nodebuginfo",
action="store_true",
dest="nodebuginfo",
help="disable gathering of debuginfo packages (optional)",
)
parser.add_argument(
"--nodownload",
action="store_true",
dest="nodownload",
help="disable downloading of packages. instead, print the package URLs (optional)", # noqa: E501
)
parser.add_argument(
"--norelnotes",
action="store_true",
dest="norelnotes",
help="disable gathering of release notes (optional); DEPRECATED",
)
parser.add_argument(
"--nogreedy",
action="store_true",
dest="nogreedy",
help="disable pulling of all providers of package dependencies (optional)",
)
parser.add_argument(
"--nodeps",
action="store_false",
dest="resolve_deps",
default=True,
help="disable resolving dependencies",
)
parser.add_argument(
"--sourceisos",
default=False,
action="store_true",
dest="sourceisos",
help="Create the source isos (other arch runs must be done)",
)
parser.add_argument(
"--force",
default=False,
action="store_true",
help="Force reuse of an existing destination directory (will overwrite files)",
)
parser.add_argument(
"--isfinal",
default=False,
action="store_true",
help="Specify this is a GA tree, which causes betanag to be turned off during install", # noqa: E501
)
parser.add_argument(
"--nohash",
default=False,
action="store_true",
help="disable hashing the Packages trees",
)
parser.add_argument(
"--full-archlist",
action="store_true",
help="Use the full arch list for x86_64 (include i686, i386, etc.)",
)
parser.add_argument("--arch", help="Override default (uname based) arch")
parser.add_argument(
"--greedy", metavar="METHOD", help="Greedy method; none, all, build"
)
parser.add_argument(
"--multilib",
action="append",
metavar="METHOD",
help="Multilib method; can be specified multiple times; recommended: devel, runtime", # noqa: E501
)
parser.add_argument(
"--lookaside-repo",
action="append",
dest="lookaside_repos",
metavar="NAME",
help="Specify lookaside repo name(s) (packages will used for depsolving but not be included in the output)", # noqa: E501
)
parser.add_argument(
"--workdirbase",
dest="workdirbase",
action=SetConfig,
help="base working directory (defaults to destdir + /work)",
)
parser.add_argument(
"--no-dvd",
default=False,
action="store_true",
dest="no_dvd",
help="Do not make a install DVD/CD only the netinstall image and the tree",
)
parser.add_argument("--lorax-conf", help="Path to lorax.conf file (optional)")
parser.add_argument(
"-i",
"--installpkgs",
default=[],
action="append",
metavar="STRING",
help="Package glob for lorax to install before runtime-install.tmpl runs. (may be listed multiple times)", # noqa: E501
)
parser.add_argument(
"--multilibconf",
default=None,
action=SetConfig,
help="Path to multilib conf files. Default is /usr/share/pungi/multilib/",
)
parser.add_argument(
"-c",
"--config",
dest="config",
required=True,
help="Path to kickstart config file",
)
parser.add_argument(
"--all-stages",
action="store_true",
default=True,
dest="do_all",
help="Enable ALL stages",
)
parser.add_argument(
"-G",
action="store_true",
default=False,
dest="do_gather",
help="Flag to enable processing the Gather stage",
)
parser.add_argument(
"-C",
action="store_true",
default=False,
dest="do_createrepo",
help="Flag to enable processing the Createrepo stage",
)
parser.add_argument(
"-B",
action="store_true",
default=False,
dest="do_buildinstall",
help="Flag to enable processing the BuildInstall stage",
)
parser.add_argument(
"-I",
action="store_true",
default=False,
dest="do_createiso",
help="Flag to enable processing the CreateISO stage",
)
parser.add_argument(
"--relnotepkgs",
dest="relnotepkgs",
action=SetConfig,
help="Rpms which contain the release notes",
)
parser.add_argument(
"--relnotefilere",
dest="relnotefilere",
action=SetConfig,
help="Which files are the release notes -- GPL EULA",
)
parser.add_argument(
"--nomacboot",
action="store_true",
dest="nomacboot",
help="disable setting up macboot as no hfs support ",
)
parser.add_argument(
"--rootfs-size",
dest="rootfs_size",
action=SetConfig,
default=False,
help="Size of root filesystem in GiB. If not specified, use lorax default value", # noqa: E501
)
parser.add_argument(
"--pungirc",
dest="pungirc",
default="~/.pungirc",
action=SetConfig,
help="Read pungi options from config file ",
)
opts = parser.parse_args()
if (
not config.get("pungi", "variant").isalnum()
and not config.get("pungi", "variant") == ""
):
parser.error("Variant must be alphanumeric")
if (
opts.do_gather
or opts.do_createrepo
or opts.do_buildinstall
or opts.do_createiso
):
opts.do_all = False
if opts.arch and (opts.do_all or opts.do_buildinstall):
parser.error("Cannot override arch while the BuildInstall stage is enabled")
# set the iso_basename.
if not config.get("pungi", "variant") == "":
config.set(
"pungi",
"iso_basename",
"%s-%s" % (config.get("pungi", "family"), config.get("pungi", "variant")),
)
else:
config.set("pungi", "iso_basename", config.get("pungi", "family"))
return opts
def main():
config = pungi.config.Config()
opts = get_arguments(config)
# Read the config to create "new" defaults
# reparse command line options so they take precedence
config = pungi.config.Config(pungirc=opts.pungirc)
opts = get_arguments(config)
# You must be this high to ride if you're going to do root tasks
if os.geteuid() != 0 and (opts.do_all or opts.do_buildinstall):
print("You must run pungi as root", file=sys.stderr)
return 1
if opts.do_all or opts.do_buildinstall:
try:
enforcing = selinux.security_getenforce()
except Exception:
print("INFO: selinux disabled")
enforcing = False
if enforcing:
print(
"WARNING: SELinux is enforcing. This may lead to a compose with selinux disabled." # noqa: E501
)
print("Consider running with setenforce 0.")
# Set up the kickstart parser and pass in the kickstart file we were handed
ksparser = pungi.ks.get_ksparser(ks_path=opts.config)
if opts.sourceisos:
config.set("pungi", "arch", "source")
for part in ksparser.handler.partition.partitions:
if part.mountpoint == "iso":
config.set("pungi", "cdsize", str(part.size))
config.set("pungi", "force", str(opts.force))
if config.get("pungi", "workdirbase") == "/work":
config.set("pungi", "workdirbase", "%s/work" % config.get("pungi", "destdir"))
# Set up our directories
if not os.path.exists(config.get("pungi", "destdir")):
try:
os.makedirs(config.get("pungi", "destdir"))
except OSError:
print(
"Error: Cannot create destination dir %s"
% config.get("pungi", "destdir"),
file=sys.stderr,
)
sys.exit(1)
else:
print("Warning: Reusing existing destination directory.")
if not os.path.exists(config.get("pungi", "workdirbase")):
try:
os.makedirs(config.get("pungi", "workdirbase"))
except OSError:
print(
"Error: Cannot create working base dir %s"
% config.get("pungi", "workdirbase"),
file=sys.stderr,
)
sys.exit(1)
else:
print("Warning: Reusing existing working base directory.")
cachedir = config.get("pungi", "cachedir")
if not os.path.exists(cachedir):
try:
os.makedirs(cachedir)
except OSError:
print("Error: Cannot create cache dir %s" % cachedir, file=sys.stderr)
sys.exit(1)
# Set debuginfo flag
if opts.nodebuginfo:
config.set("pungi", "debuginfo", "False")
if opts.greedy:
config.set("pungi", "greedy", opts.greedy)
else:
# XXX: compatibility
if opts.nogreedy:
config.set("pungi", "greedy", "none")
else:
config.set("pungi", "greedy", "all")
config.set("pungi", "resolve_deps", str(bool(opts.resolve_deps)))
if opts.isfinal:
config.set("pungi", "isfinal", "True")
if opts.nohash:
config.set("pungi", "nohash", "True")
if opts.full_archlist:
config.set("pungi", "full_archlist", "True")
if opts.arch:
config.set("pungi", "arch", opts.arch)
if opts.multilib:
config.set("pungi", "multilib", " ".join(opts.multilib))
if opts.lookaside_repos:
config.set("pungi", "lookaside_repos", " ".join(opts.lookaside_repos))
if opts.no_dvd:
config.set("pungi", "no_dvd", "True")
if opts.nomacboot:
config.set("pungi", "nomacboot", "True")
config.set("pungi", "fulltree", str(bool(opts.fulltree)))
config.set("pungi", "selfhosting", str(bool(opts.selfhosting)))
config.set("pungi", "nosource", str(bool(opts.nosource)))
config.set("pungi", "nodebuginfo", str(bool(opts.nodebuginfo)))
if opts.lorax_conf:
config.set("lorax", "conf_file", opts.lorax_conf)
if opts.installpkgs:
config.set("lorax", "installpkgs", " ".join(opts.installpkgs))
# Actually do work.
mypungi = pungi.gather.Pungi(config, ksparser)
with mypungi.yumlock:
if not opts.sourceisos:
if opts.do_all or opts.do_gather or opts.do_buildinstall:
mypungi._inityum() # initialize the yum object for things that need it
if opts.do_all or opts.do_gather:
mypungi.gather()
if opts.nodownload:
for line in mypungi.list_packages():
flags_str = ",".join(line["flags"])
if flags_str:
flags_str = "(%s)" % flags_str
sys.stdout.write("RPM%s: %s\n" % (flags_str, line["path"]))
sys.stdout.flush()
else:
mypungi.downloadPackages()
mypungi.makeCompsFile()
if not opts.nodebuginfo:
mypungi.getDebuginfoList()
if opts.nodownload:
for line in mypungi.list_debuginfo():
flags_str = ",".join(line["flags"])
if flags_str:
flags_str = "(%s)" % flags_str
sys.stdout.write(
"DEBUGINFO%s: %s\n" % (flags_str, line["path"])
)
sys.stdout.flush()
else:
mypungi.downloadDebuginfo()
if not opts.nosource:
if opts.nodownload:
for line in mypungi.list_srpms():
flags_str = ",".join(line["flags"])
if flags_str:
flags_str = "(%s)" % flags_str
sys.stdout.write("SRPM%s: %s\n" % (flags_str, line["path"]))
sys.stdout.flush()
else:
mypungi.downloadSRPMs()
print("RPM size: %s MiB" % (mypungi.size_packages() / 1024**2))
if not opts.nodebuginfo:
print(
"DEBUGINFO size: %s MiB"
% (mypungi.size_debuginfo() / 1024**2)
)
if not opts.nosource:
print("SRPM size: %s MiB" % (mypungi.size_srpms() / 1024**2))
# Furthermore (but without the yumlock...)
if not opts.sourceisos:
if opts.do_all or opts.do_createrepo:
mypungi.doCreaterepo()
if opts.do_all or opts.do_buildinstall:
if not opts.norelnotes:
mypungi.doGetRelnotes()
mypungi.doBuildinstall()
if opts.do_all or opts.do_createiso:
mypungi.doCreateIsos()
# Do things slightly different for src.
if opts.sourceisos:
# we already have all the content gathered
mypungi.topdir = os.path.join(
config.get("pungi", "destdir"),
config.get("pungi", "version"),
config.get("pungi", "variant"),
"source",
"SRPMS",
)
mypungi.doCreaterepo(comps=False)
if opts.do_all or opts.do_createiso:
mypungi.doCreateIsos()
print("All done!")

View File

@@ -97,6 +97,7 @@ def main(ns, persistdir, cachedir):
    dnf_conf = Conf(ns.arch)
    dnf_conf.persistdir = persistdir
    dnf_conf.cachedir = cachedir
+   dnf_conf.optional_metadata_types = ["filelists"]
    dnf_obj = DnfWrapper(dnf_conf)

    gather_opts = GatherOptions()
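
Editor's note: newer dnf stops fetching filelists metadata by default, but depsolving requirements expressed as file paths (e.g. `/usr/bin/bash`) needs it back. The config option only exists on recent dnf releases, so treat this as a hedged sketch:

```python
import dnf

base = dnf.Base()
# Ask libdnf to download filelists.xml again; without it, file-path
# dependencies cannot be resolved.
base.conf.optional_metadata_types = ["filelists"]
```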

View File

@@ -11,18 +11,19 @@ import locale
import logging
import os
import socket
+import shlex
import signal
import sys
import traceback
import shutil
import subprocess

-from six.moves import shlex_quote

from pungi.phases import PHASES_NAMES
from pungi import get_full_version, util
from pungi.errors import UnsignedPackagesError
from pungi.wrappers import kojiwrapper
+from pungi.util import rmtree
+from pungi.otel import tracing


# force C locales
@ -251,9 +252,15 @@ def main():
kobo.log.add_stderr_logger(logger) kobo.log.add_stderr_logger(logger)
conf = util.load_config(opts.config) conf = util.load_config(opts.config)
compose_type = opts.compose_type or conf.get("compose_type", "production") compose_type = opts.compose_type or conf.get("compose_type", "production")
if compose_type == "production" and not opts.label and not opts.no_label: label = opts.label or conf.get("label")
if label:
try:
productmd.composeinfo.verify_label(label)
except ValueError as ex:
abort(str(ex))
if compose_type == "production" and not label and not opts.no_label:
abort("must specify label for a production compose") abort("must specify label for a production compose")
if ( if (
@ -300,7 +307,12 @@ def main():
if opts.target_dir: if opts.target_dir:
compose_dir = Compose.get_compose_dir( compose_dir = Compose.get_compose_dir(
opts.target_dir, conf, compose_type=compose_type, compose_label=opts.label opts.target_dir,
conf,
compose_type=compose_type,
compose_label=label,
parent_compose_ids=opts.parent_compose_id,
respin_of=opts.respin_of,
) )
else: else:
compose_dir = opts.compose_dir compose_dir = opts.compose_dir
@ -309,7 +321,7 @@ def main():
ci = Compose.get_compose_info( ci = Compose.get_compose_info(
conf, conf,
compose_type=compose_type, compose_type=compose_type,
compose_label=opts.label, compose_label=label,
parent_compose_ids=opts.parent_compose_id, parent_compose_ids=opts.parent_compose_id,
respin_of=opts.respin_of, respin_of=opts.respin_of,
) )
@ -374,12 +386,20 @@ def run_compose(
compose.log_info("User name: %s" % getpass.getuser()) compose.log_info("User name: %s" % getpass.getuser())
compose.log_info("Working directory: %s" % os.getcwd()) compose.log_info("Working directory: %s" % os.getcwd())
compose.log_info( compose.log_info(
"Command line: %s" % " ".join([shlex_quote(arg) for arg in sys.argv]) "Command line: %s" % " ".join([shlex.quote(arg) for arg in sys.argv])
) )
compose.log_info("Compose top directory: %s" % compose.topdir) compose.log_info("Compose top directory: %s" % compose.topdir)
compose.log_info("Current timezone offset: %s" % pungi.util.get_tz_offset()) compose.log_info("Current timezone offset: %s" % pungi.util.get_tz_offset())
compose.log_info("COMPOSE_ID=%s" % compose.compose_id) compose.log_info("COMPOSE_ID=%s" % compose.compose_id)
installed_pkgs_log = compose.paths.log.log_file("global", "installed-pkgs")
compose.log_info("Logging installed packages to %s" % installed_pkgs_log)
try:
with open(installed_pkgs_log, "w") as f:
subprocess.Popen(["rpm", "-qa"], stdout=f)
except Exception as e:
compose.log_warning("Failed to log installed packages: %s" % str(e))
compose.read_variants() compose.read_variants()
# dump the config file # dump the config file
@ -403,12 +423,14 @@ def run_compose(
compose, buildinstall_phase, pkgset_phase compose, buildinstall_phase, pkgset_phase
) )
ostree_phase = pungi.phases.OSTreePhase(compose, pkgset_phase) ostree_phase = pungi.phases.OSTreePhase(compose, pkgset_phase)
ostree_container_phase = pungi.phases.OSTreeContainerPhase(compose, pkgset_phase)
createiso_phase = pungi.phases.CreateisoPhase(compose, buildinstall_phase) createiso_phase = pungi.phases.CreateisoPhase(compose, buildinstall_phase)
extra_isos_phase = pungi.phases.ExtraIsosPhase(compose, buildinstall_phase) extra_isos_phase = pungi.phases.ExtraIsosPhase(compose, buildinstall_phase)
liveimages_phase = pungi.phases.LiveImagesPhase(compose)
livemedia_phase = pungi.phases.LiveMediaPhase(compose) livemedia_phase = pungi.phases.LiveMediaPhase(compose)
image_build_phase = pungi.phases.ImageBuildPhase(compose, buildinstall_phase) image_build_phase = pungi.phases.ImageBuildPhase(compose, buildinstall_phase)
kiwibuild_phase = pungi.phases.KiwiBuildPhase(compose)
osbuild_phase = pungi.phases.OSBuildPhase(compose) osbuild_phase = pungi.phases.OSBuildPhase(compose)
imagebuilder_phase = pungi.phases.ImageBuilderPhase(compose)
osbs_phase = pungi.phases.OSBSPhase(compose, pkgset_phase, buildinstall_phase) osbs_phase = pungi.phases.OSBSPhase(compose, pkgset_phase, buildinstall_phase)
image_container_phase = pungi.phases.ImageContainerPhase(compose) image_container_phase = pungi.phases.ImageContainerPhase(compose)
image_checksum_phase = pungi.phases.ImageChecksumPhase(compose) image_checksum_phase = pungi.phases.ImageChecksumPhase(compose)
@ -424,17 +446,19 @@ def run_compose(
gather_phase, gather_phase,
extrafiles_phase, extrafiles_phase,
createiso_phase, createiso_phase,
liveimages_phase,
livemedia_phase, livemedia_phase,
image_build_phase, image_build_phase,
image_checksum_phase, image_checksum_phase,
test_phase, test_phase,
ostree_phase, ostree_phase,
ostree_installer_phase, ostree_installer_phase,
ostree_container_phase,
extra_isos_phase, extra_isos_phase,
osbs_phase, osbs_phase,
osbuild_phase, osbuild_phase,
image_container_phase, image_container_phase,
kiwibuild_phase,
imagebuilder_phase,
): ):
if phase.skip(): if phase.skip():
continue continue
@ -449,50 +473,6 @@ def run_compose(
print(i) print(i)
raise RuntimeError("Configuration is not valid") raise RuntimeError("Configuration is not valid")
# PREP
# Note: This may be put into a new method of phase classes (e.g. .prep())
# in same way as .validate() or .run()
# Prep for liveimages - Obtain a password for signing rpm wrapped images
if (
"signing_key_password_file" in compose.conf
and "signing_command" in compose.conf
and "%(signing_key_password)s" in compose.conf["signing_command"]
and not liveimages_phase.skip()
):
# TODO: Don't require key if signing is turned off
# Obtain signing key password
signing_key_password = None
# Use appropriate method
if compose.conf["signing_key_password_file"] == "-":
# Use stdin (by getpass module)
try:
signing_key_password = getpass.getpass("Signing key password: ")
except EOFError:
compose.log_debug("Ignoring signing key password")
pass
else:
# Use text file with password
try:
signing_key_password = (
open(compose.conf["signing_key_password_file"], "r")
.readline()
.rstrip("\n")
)
except IOError:
# Filename is not print intentionally in case someone puts
# password directly into the option
err_msg = "Cannot load password from file specified by 'signing_key_password_file' option" # noqa: E501
compose.log_error(err_msg)
print(err_msg)
raise RuntimeError(err_msg)
if signing_key_password:
# Store the password
compose.conf["signing_key_password"] = signing_key_password
init_phase.start() init_phase.start()
init_phase.stop() init_phase.stop()
@ -504,10 +484,12 @@ def run_compose(
buildinstall_phase, buildinstall_phase,
(gather_phase, createrepo_phase), (gather_phase, createrepo_phase),
extrafiles_phase, extrafiles_phase,
(ostree_phase, ostree_installer_phase), ostree_phase,
) )
essentials_phase = pungi.phases.WeaverPhase(compose, essentials_schema) essentials_phase = pungi.phases.WeaverPhase(compose, essentials_schema)
essentials_phase.start() essentials_phase.start()
ostree_container_phase.start()
try:
essentials_phase.stop() essentials_phase.stop()
# write treeinfo before ISOs are created # write treeinfo before ISOs are created
@ -529,17 +511,16 @@ def run_compose(
compose_images_schema = ( compose_images_schema = (
createiso_phase, createiso_phase,
extra_isos_phase, extra_isos_phase,
liveimages_phase,
image_build_phase, image_build_phase,
livemedia_phase, livemedia_phase,
osbuild_phase, osbuild_phase,
) kiwibuild_phase,
post_image_phase = pungi.phases.WeaverPhase( imagebuilder_phase,
compose, (image_checksum_phase, image_container_phase)
) )
compose_images_phase = pungi.phases.WeaverPhase(compose, compose_images_schema) compose_images_phase = pungi.phases.WeaverPhase(compose, compose_images_schema)
extra_phase_schema = ( extra_phase_schema = (
(compose_images_phase, post_image_phase), (compose_images_phase, image_container_phase),
ostree_installer_phase,
osbs_phase, osbs_phase,
repoclosure_phase, repoclosure_phase,
) )
@ -547,6 +528,14 @@ def run_compose(
extra_phase.start() extra_phase.start()
extra_phase.stop() extra_phase.stop()
finally:
# wait for ostree container phase here too - it can happily run in parallel with
# all of the other stuff, but we must ensure it always gets stopped
ostree_container_phase.stop()
# now we do checksums as all images are done
image_checksum_phase.start()
image_checksum_phase.stop()
pungi.metadata.write_compose_info(compose) pungi.metadata.write_compose_info(compose)
if not ( if not (
@ -554,10 +543,12 @@ def run_compose(
and ostree_installer_phase.skip() and ostree_installer_phase.skip()
and createiso_phase.skip() and createiso_phase.skip()
and extra_isos_phase.skip() and extra_isos_phase.skip()
and liveimages_phase.skip()
and livemedia_phase.skip() and livemedia_phase.skip()
and image_build_phase.skip() and image_build_phase.skip()
and kiwibuild_phase.skip()
and imagebuilder_phase.skip()
and osbuild_phase.skip() and osbuild_phase.skip()
and ostree_container_phase.skip()
): ):
compose.im.dump(compose.paths.compose.metadata("images.json")) compose.im.dump(compose.paths.compose.metadata("images.json"))
compose.dump_containers_metadata() compose.dump_containers_metadata()
@ -666,12 +657,16 @@ def cli_main():
signal.signal(signal.SIGINT, sigterm_handler) signal.signal(signal.SIGINT, sigterm_handler)
signal.signal(signal.SIGTERM, sigterm_handler) signal.signal(signal.SIGTERM, sigterm_handler)
tracing.setup()
with tracing.span("run-compose"):
try: try:
main() main()
except (Exception, KeyboardInterrupt) as ex: except (Exception, KeyboardInterrupt) as ex:
tracing.record_exception(ex)
if COMPOSE: if COMPOSE:
COMPOSE.log_error("Compose run failed: %s" % ex) COMPOSE.log_error("Compose run failed: %s" % ex)
COMPOSE.traceback() COMPOSE.traceback(show_locals=getattr(ex, "show_locals", True))
COMPOSE.log_critical("Compose failed: %s" % COMPOSE.topdir) COMPOSE.log_critical("Compose failed: %s" % COMPOSE.topdir)
COMPOSE.write_status("DOOMED") COMPOSE.write_status("DOOMED")
else: else:
@ -680,3 +675,10 @@ def cli_main():
sys.stdout.flush() sys.stdout.flush()
sys.stderr.flush() sys.stderr.flush()
sys.exit(1) sys.exit(1)
finally:
# Remove repositories cloned during ExtraFiles phase
process_id = os.getpid()
directoy_to_remove = "/tmp/pungi-temp-git-repos-" + str(process_id) + "/"
rmtree(directoy_to_remove)
# Wait for all traces to be sent...
tracing.force_flush()
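Note: an illustrative sketch only, not pungi's actual WeaverPhase. It models the scheduling the schemas above appear to encode: top-level entries run in parallel, while a nested tuple is a chain whose members run one after another.

    import threading

    def weave(schema):
        threads = []
        for entry in schema:
            chain = entry if isinstance(entry, tuple) else (entry,)

            def run_chain(chain=chain):
                for phase in chain:  # sequential within one chain
                    phase.start()
                    phase.stop()     # stop() blocks until the phase finishes

            t = threading.Thread(target=run_chain)
            t.start()                # parallel across chains
            threads.append(t)
        for t in threads:
            t.join()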

21  pungi/threading.py  Normal file
View File

@@ -0,0 +1,21 @@
+from kobo.threads import WorkerThread
+
+from .otel import tracing
+
+
+class TelemetryWorkerThread(WorkerThread):
+    """
+    Subclass of WorkerThread that captures current context when the thread is
+    created, and restores the context in the new thread.
+
+    A regular WorkerThread would start from an empty context, leading to any
+    spans created in the thread disconnected from the overall trace.
+    """
+
+    def __init__(self, *args, **kwargs):
+        self.traceparent = tracing.get_traceparent()
+        super(TelemetryWorkerThread, self).__init__(*args, **kwargs)
+
+    def run(self, *args, **kwargs):
+        tracing.set_context(self.traceparent)
+        super(TelemetryWorkerThread, self).run(*args, **kwargs)
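Note: a hypothetical usage sketch (the DemoThread class and its work are made up; kobo worker threads implement process()). The point is that the traceparent is captured in __init__, which runs in the parent thread, so spans opened inside the worker attach to the compose trace instead of starting detached.

    import logging

    from kobo.threads import ThreadPool
    from pungi.threading import TelemetryWorkerThread

    class DemoThread(TelemetryWorkerThread):
        def process(self, item, num):
            # Spans opened here join the trace that was current when the
            # thread object was constructed.
            print("processing", item)

    logger = logging.getLogger("demo")
    pool = ThreadPool(logger)
    pool.add(DemoThread(pool))  # traceparent captured here, in the main thread
    pool.queue_put("item-1")
    pool.start()
    pool.stop()                 # wait for the queue to drain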

View File

@@ -19,22 +19,24 @@ import subprocess
 import os
 import shutil
 import string
-import sys
-import hashlib
 import errno
 import re
 import contextlib
+import shlex
 import traceback
 import tempfile
 import time
+import urllib.parse
+import urllib.request
 import functools

-from six.moves import urllib, range, shlex_quote

 import kobo.conf
 from kobo.shortcuts import run, force_list
-from kobo.threads import WorkerThread, ThreadPool
+from kobo.threads import ThreadPool
 from productmd.common import get_major_version
 from pungi.module_util import Modulemd
+from pungi.otel import tracing
+from pungi.threading import TelemetryWorkerThread as WorkerThread

 # Patterns that match all names of debuginfo packages
 DEBUG_PATTERNS = ["*-debuginfo", "*-debuginfo-*", "*-debugsource"]
@@ -43,132 +45,6 @@ DEBUG_PATTERN_RE = re.compile(
 )

-def _doRunCommand(
-    command,
-    logger,
-    rundir="/tmp",
-    output=subprocess.PIPE,
-    error=subprocess.PIPE,
-    env=None,
-):
-    """Run a command and log the output. Error out if we get something on stderr"""
-    logger.info("Running %s" % subprocess.list2cmdline(command))
-
-    p1 = subprocess.Popen(
-        command,
-        cwd=rundir,
-        stdout=output,
-        stderr=error,
-        universal_newlines=True,
-        env=env,
-        close_fds=True,
-    )
-    (out, err) = p1.communicate()
-
-    if out:
-        logger.debug(out)
-
-    if p1.returncode != 0:
-        logger.error("Got an error from %s" % command[0])
-        logger.error(err)
-        raise OSError(
-            "Got an error (%d) from %s: %s" % (p1.returncode, command[0], err)
-        )
-
-def _link(local, target, logger, force=False):
-    """Simple function to link or copy a package, removing target optionally."""
-
-    if os.path.exists(target) and force:
-        os.remove(target)
-
-    # check for broken links
-    if force and os.path.islink(target):
-        if not os.path.exists(os.readlink(target)):
-            os.remove(target)
-
-    try:
-        os.link(local, target)
-    except OSError as e:
-        if e.errno != 18:  # EXDEV
-            logger.error("Got an error linking from cache: %s" % e)
-            raise OSError(e)
-        # Can't hardlink cross file systems
-        shutil.copy2(local, target)
-
-def _ensuredir(target, logger, force=False, clean=False):
-    """Ensure that a directory exists, if it already exists, only continue
-    if force is set."""
-
-    # We have to check existence of a logger, as setting the logger could
-    # itself cause an issue.
-    def whoops(func, path, exc_info):
-        message = "Could not remove %s" % path
-        if logger:
-            logger.error(message)
-        else:
-            sys.stderr(message)
-        sys.exit(1)
-
-    if os.path.exists(target) and not os.path.isdir(target):
-        message = "%s exists but is not a directory." % target
-        if logger:
-            logger.error(message)
-        else:
-            sys.stderr(message)
-        sys.exit(1)
-
-    if not os.path.isdir(target):
-        os.makedirs(target)
-    elif force and clean:
-        shutil.rmtree(target, onerror=whoops)
-        os.makedirs(target)
-    elif force:
-        return
-    else:
-        message = "Directory %s already exists. Use --force to overwrite." % target
-        if logger:
-            logger.error(message)
-        else:
-            sys.stderr(message)
-        sys.exit(1)
-
-def _doCheckSum(path, hash, logger):
-    """Generate a checksum hash from a provided path.
-    Return a string of type:hash"""
-
-    # Try to figure out what hash we want to do
-    try:
-        sum = hashlib.new(hash)
-    except ValueError:
-        logger.error("Invalid hash type: %s" % hash)
-        return False
-
-    # Try to open the file, using binary flag.
-    try:
-        myfile = open(path, "rb")
-    except IOError as e:
-        logger.error("Could not open file %s: %s" % (path, e))
-        return False
-
-    # Loop through the file reading chunks at a time as to not
-    # put the entire file in memory. That would suck for DVDs
-    while True:
-        chunk = myfile.read(
-            8192
-        )  # magic number! Taking suggestions for better blocksize
-        if not chunk:
-            break  # we're done with the file
-        sum.update(chunk)
-    myfile.close()
-
-    return "%s:%s" % (hash, sum.hexdigest())
-
 def makedirs(path, mode=0o775):
     try:
         os.makedirs(path, mode=mode)
@@ -193,14 +69,14 @@ def explode_rpm_package(pkg_path, target_dir):
     try:
         # rpm2archive writes to stdout only if reading from stdin, thus the redirect
         run(
-            "rpm2archive - <%s | tar xfz - && chmod -R a+rX ." % shlex_quote(pkg_path),
+            "rpm2archive - <%s | tar xfz - && chmod -R a+rX ." % shlex.quote(pkg_path),
             workdir=target_dir,
         )
     except RuntimeError:
         # Fall back to rpm2cpio in case rpm2archive failed (most likely due to
         # not being present on the system).
         run(
-            "rpm2cpio %s | cpio -iuvmd && chmod -R a+rX ." % shlex_quote(pkg_path),
+            "rpm2cpio %s | cpio -iuvmd && chmod -R a+rX ." % shlex.quote(pkg_path),
             workdir=target_dir,
         )
@@ -279,7 +155,7 @@ class GitUrlResolveError(RuntimeError):
     pass

-def resolve_git_ref(repourl, ref):
+def resolve_git_ref(repourl, ref, credential_helper=None):
     """Resolve a reference in a Git repo to a commit.

     Raises RuntimeError if there was an error. Most likely cause is failure to
@@ -289,7 +165,7 @@ def resolve_git_ref(repourl, ref):
         # This looks like a commit ID already.
         return ref
     try:
-        _, output = git_ls_remote(repourl, ref)
+        _, output = git_ls_remote(repourl, ref, credential_helper)
     except RuntimeError as e:
         raise GitUrlResolveError(
             "ref does not exist in remote repo %s with the error %s %s"
@@ -316,7 +192,7 @@ def resolve_git_ref(repourl, ref):
     return lines[0].split()[0]

-def resolve_git_url(url):
+def resolve_git_url(url, credential_helper=None):
     """Given a url to a Git repo specifying HEAD or origin/<branch> as a ref,
     replace that specifier with actual SHA1 of the commit.
@@ -335,7 +211,7 @@ def resolve_git_url(url):
     scheme = r.scheme.replace("git+", "")
     baseurl = urllib.parse.urlunsplit((scheme, r.netloc, r.path, "", ""))
-    fragment = resolve_git_ref(baseurl, ref)
+    fragment = resolve_git_ref(baseurl, ref, credential_helper)
     result = urllib.parse.urlunsplit((r.scheme, r.netloc, r.path, r.query, fragment))
     if "?#" in url:
@@ -354,13 +230,18 @@ class GitUrlResolver(object):
         self.offline = offline
         self.cache = {}

-    def __call__(self, url, branch=None):
+    def __call__(self, url, branch=None, options=None):
+        credential_helper = options.get("credential_helper") if options else None
         if self.offline:
             return branch or url
         key = (url, branch)
         if key not in self.cache:
             try:
-                res = resolve_git_ref(url, branch) if branch else resolve_git_url(url)
+                res = (
+                    resolve_git_ref(url, branch, credential_helper)
+                    if branch
+                    else resolve_git_url(url, credential_helper)
+                )
                 self.cache[key] = res
             except GitUrlResolveError as exc:
                 self.cache[key] = exc
@@ -369,6 +250,38 @@ class GitUrlResolver(object):
         return self.cache[key]

+class ContainerTagResolver(object):
+    """
+    A caching resolver for container image urls that replaces tags with digests.
+    """
+
+    def __init__(self, offline=False):
+        self.offline = offline
+        self.cache = {}
+
+    def __call__(self, url):
+        if self.offline:
+            # We're offline, nothing to do
+            return url
+        if re.match(".*@sha256:[a-z0-9]+", url):
+            # We already have a digest
+            return url
+        if url not in self.cache:
+            self.cache[url] = self._resolve(url)
+        return self.cache[url]
+
+    def _resolve(self, url):
+        m = re.match("^.+(:.+)$", url)
+        if not m:
+            raise RuntimeError("Failed to find tag name")
+        tag = m.group(1)
+        with tracing.span("skopeo-inspect", url=url):
+            data = _skopeo_inspect(url)
+        digest = data["Digest"]
+        return url.replace(tag, f"@{digest}")

 # format: {arch|*: [data]}
 def get_arch_data(conf, var_name, arch):
     result = []
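Note: a hypothetical example of the resolver's effect (the registry URL is made up). The first call for a URL shells out to `skopeo inspect` once; later calls for the same URL hit the cache.

    resolver = ContainerTagResolver()
    resolver("registry.example.com/os:latest")
    # -> "registry.example.com/os@sha256:..." (moving tag pinned to a digest)
    # URLs that already carry @sha256:..., and every URL in offline mode,
    # are returned unchanged.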
@@ -456,6 +369,9 @@ def get_volid(compose, arch, variant=None, disc_type=False, formats=None, **kwargs):
         if not variant_uid and "%(variant)s" in i:
             continue
         try:
+            # fmt: off
+            # Black wants to add a comma after kwargs, but that's not valid in
+            # Python 2.7
             args = get_format_substs(
                 compose,
                 variant=variant_uid,
@@ -467,6 +383,7 @@ def get_volid(compose, arch, variant=None, disc_type=False, formats=None, **kwargs):
                 base_product_version=base_product_version,
                 **kwargs
             )
+            # fmt: on
             volid = (i % args).format(**args)
         except KeyError as err:
             raise RuntimeError(
@@ -478,10 +395,7 @@ def get_volid(compose, arch, variant=None, disc_type=False, formats=None, **kwargs):
         tried.add(volid)

     if volid and len(volid) > 32:
-        raise ValueError(
-            "Could not create volume ID longer than 32 bytes, options are %r",
-            sorted(tried, key=len),
-        )
+        volid = volid[:32]

     if compose.conf["restricted_volid"]:
         # Replace all non-alphanumeric characters and non-underscores) with
@@ -584,6 +498,12 @@ def failable(
     else:
         compose.require_deliverable(variant, arch, deliverable, subvariant)
     try:
+        with tracing.span(
+            f"generate-{deliverable}",
+            variant=variant.uid,
+            arch=arch,
+            subvariant=subvariant or "",
+        ):
         yield
     except Exception as exc:
         if not can_fail:
@@ -769,7 +689,11 @@ def run_unmount_cmd(cmd, max_retries=10, path=None, logger=None):
     """
     for i in range(max_retries):
         proc = subprocess.Popen(
-            cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True
+            cmd,
+            stdout=subprocess.PIPE,
+            stderr=subprocess.PIPE,
+            text=True,
+            errors="replace",
         )
         out, err = proc.communicate()
         if proc.returncode == 0:
@@ -791,7 +715,8 @@ def run_unmount_cmd(cmd, max_retries=10, path=None, logger=None):
                     c,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT,
-                    universal_newlines=True,
+                    text=True,
+                    errors="replace",
                 )
                 out, _ = proc.communicate()
                 logger.debug(
@@ -991,8 +916,13 @@ def retry(timeout=120, interval=30, wait_on=Exception):

 @retry(wait_on=RuntimeError)
-def git_ls_remote(baseurl, ref):
-    return run(["git", "ls-remote", baseurl, ref], universal_newlines=True)
+def git_ls_remote(baseurl, ref, credential_helper=None):
+    with tracing.span("git-ls-remote", baseurl=baseurl, ref=ref):
+        cmd = ["git"]
+        if credential_helper:
+            cmd.extend(["-c", "credential.useHttpPath=true"])
+            cmd.extend(["-c", "credential.helper=%s" % credential_helper])
+        return run(cmd + ["ls-remote", baseurl, ref], text=True, errors="replace")

 def get_tz_offset():
@@ -1137,3 +1067,27 @@ def read_json_file(file_path):
     """A helper function to read a JSON file."""
     with open(file_path) as f:
         return json.load(f)
+
+UNITS = ["", "Ki", "Mi", "Gi", "Ti"]
+
+def format_size(sz):
+    sz = float(sz)
+    unit = 0
+    while sz > 1024:
+        sz /= 1024
+        unit += 1
+    return "%.3g %sB" % (sz, UNITS[unit])
+
+@retry(interval=5, timeout=60, wait_on=RuntimeError)
+def _skopeo_inspect(url):
+    """Wrapper for running `skopeo inspect {url}` and parsing the output.
+
+    Retries on failure.
+    """
+    cp = subprocess.run(
+        ["skopeo", "inspect", url], stdout=subprocess.PIPE, check=True, encoding="utf-8"
+    )
+    return json.loads(cp.stdout)
View File

@@ -183,11 +183,12 @@ class CompsFilter(object):
         """
         all_groups = self.tree.xpath("/comps/group/id/text()") + lookaside_groups
         for environment in self.tree.xpath("/comps/environment"):
-            for group in environment.xpath("grouplist/groupid"):
+            for parent_tag in ("grouplist", "optionlist"):
+                for group in environment.xpath("%s/groupid" % parent_tag):
                 if group.text not in all_groups:
                     group.getparent().remove(group)

-            for group in environment.xpath("grouplist/groupid[@arch]"):
+                for group in environment.xpath("%s/groupid[@arch]" % parent_tag):
                 value = group.attrib.get("arch")
                 values = [v for v in re.split(r"[, ]+", value) if v]
                 if arch not in values:
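Note: a minimal sketch of the xpath now covering both lists (the comps snippet is made up). Before the fix only <grouplist> was filtered; <optionlist> entries pointing at dropped groups survived.

    from lxml import etree

    comps = etree.fromstring(
        b"<comps><environment>"
        b"<grouplist><groupid>core</groupid></grouplist>"
        b"<optionlist><groupid>gnome-apps</groupid></optionlist>"
        b"</environment></comps>"
    )
    for parent_tag in ("grouplist", "optionlist"):
        for group in comps.xpath("/comps/environment/%s/groupid" % parent_tag):
            print(parent_tag, group.text)  # both lists are visited now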

View File

@@ -15,9 +15,9 @@
 import os
+import shlex
 from fnmatch import fnmatch
 import contextlib

-from six.moves import shlex_quote
 from kobo.shortcuts import force_list, relative_path, run

 from pungi import util
@@ -227,7 +227,7 @@ def get_checkisomd5_cmd(iso_path, just_print=False):
 def get_checkisomd5_data(iso_path, logger=None):
     cmd = get_checkisomd5_cmd(iso_path, just_print=True)
-    retcode, output = run(cmd, universal_newlines=True)
+    retcode, output = run(cmd, text=True, errors="replace")
     items = [line.strip().rsplit(":", 1) for line in output.splitlines()]
     items = dict([(k, v.strip()) for k, v in items])
     md5 = items.get(iso_path, "")
@@ -260,26 +260,36 @@ def get_isohybrid_cmd(iso_path, arch):
     return cmd

-def get_manifest_cmd(iso_name, xorriso=False):
+def get_manifest_cmd(iso_name, xorriso=False, output_file=None):
+    if not output_file:
+        output_file = "%s.manifest" % iso_name
+
     if xorriso:
         return """xorriso -dev %s --find |
         tail -n+2 |
         tr -d "'" |
         cut -c2- |
-        sort >> %s.manifest""" % (
-            shlex_quote(iso_name),
-            shlex_quote(iso_name),
+        sort >> %s""" % (
+            shlex.quote(iso_name),
+            shlex.quote(output_file),
         )
     else:
-        return "isoinfo -R -f -i %s | grep -v '/TRANS.TBL$' | sort >> %s.manifest" % (
-            shlex_quote(iso_name),
-            shlex_quote(iso_name),
+        return "isoinfo -R -f -i %s | grep -v '/TRANS.TBL$' | sort >> %s" % (
+            shlex.quote(iso_name),
+            shlex.quote(output_file),
         )

-def get_volume_id(path):
-    cmd = ["isoinfo", "-d", "-i", path]
-    retcode, output = run(cmd, universal_newlines=True)
+def get_volume_id(path, xorriso=False):
+    if xorriso:
+        cmd = ["xorriso", "-indev", path]
+        retcode, output = run(cmd, text=True, errors="replace")
+        for line in output.splitlines():
+            if line.startswith("Volume id"):
+                return line.split("'")[1]
+    else:
+        cmd = ["isoinfo", "-d", "-i", path]
+        retcode, output = run(cmd, text=True, errors="replace")

     for line in output.splitlines():
         line = line.strip()
@@ -490,7 +500,7 @@ def mount(image, logger=None, use_guestmount=True):
     else:
         env = {}
         cmd = ["mount", "-o", "loop", image, mount_dir]
-    ret, out = run(cmd, env=env, can_fail=True, universal_newlines=True)
+    ret, out = run(cmd, env=env, can_fail=True, text=True, errors="replace")
     if ret != 0:
         # The mount command failed, something is wrong.
         # Log the output and raise an exception.
@@ -506,3 +516,21 @@ def mount(image, logger=None, use_guestmount=True):
             util.run_unmount_cmd(["fusermount", "-u", mount_dir], path=mount_dir)
         else:
             util.run_unmount_cmd(["umount", mount_dir], path=mount_dir)

+def xorriso_commands(arch, input, output):
+    """List of xorriso commands to modify a bootable image."""
+    commands = [
+        ("-indev", input),
+        ("-outdev", output),
+        # isoinfo -J uses the Joliet tree, and it's used by virt-install
+        ("-joliet", "on"),
+        # Support long filenames in the Joliet trees. Repodata is particularly
+        # likely to run into this limit.
+        ("-compliance", "joliet_long_names"),
+        ("-boot_image", "any", "replay"),
+    ]
+    if arch == "ppc64le":
+        # This is needed for the image to be bootable.
+        commands.append(("-as", "mkisofs", "-U", "--"))
+    return commands
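Note: a hypothetical caller sketch (file names made up) that flattens the command tuples into a single xorriso invocation; the real callers in pungi may drive xorriso differently.

    import itertools
    import subprocess

    cmds = xorriso_commands("ppc64le", "input.iso", "output.iso")
    argv = ["xorriso"] + list(itertools.chain.from_iterable(cmds))
    subprocess.run(argv, check=True)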

View File

@@ -203,31 +203,12 @@ class KojiMock:
         packages = []

         # get all rpms in folder
-        rpms = search_rpms(self._packages_dir)
-        all_rpms = [package.path for package in rpms]
+        rpms = search_rpms(Path(self._packages_dir))

-        # get nvras for modular packages
-        nvras = set()
-        for module in self._modules.values():
-            path = os.path.join(
-                self._modules_dir,
-                module.arch,
-                module.nvr,
-            )
-            info = Modulemd.ModuleStream.read_string(open(path).read(), strict=True)
-            for package in info.get_rpm_artifacts():
-                data = parse_nvra(package)
-                nvras.add((data['name'], data['version'], data['release'], data['arch']))
-
-        # and remove modular packages from global list
-        for rpm in all_rpms[:]:
-            data = parse_nvra(os.path.basename(rpm[:-4]))
-            if (data['name'], data['version'], data['release'], data['arch']) in nvras:
-                all_rpms.remove(rpm)
-
-        for rpm in all_rpms:
-            info = parse_nvra(os.path.basename(rpm))
+        for rpm in rpms:
+            info = parse_nvra(rpm.path.stem)
+            if 'module' in info['release']:
+                continue
             packages.append({
                 "build_id": RELEASE_BUILD_ID,
                 "name": info['name'],
View File

@@ -14,20 +14,27 @@
 # along with this program; if not, see <https://gnu.org/licenses/>.

+import configparser
+import contextlib
 import os
 import re
+import socket
+import shlex
+import shutil
 import time
 import threading
-import contextlib
+import xmlrpc.client

+import requests
 import koji
 from kobo.shortcuts import run, force_list
-import six
-from six.moves import configparser, shlex_quote
-import six.moves.xmlrpc_client as xmlrpclib
+from flufl.lock import Lock
+from datetime import timedelta

 from .kojimock import KojiMock
 from .. import util
+from ..otel import tracing
 from ..arch_utils import getBaseArch
@@ -62,13 +69,13 @@ class KojiWrapper(object):
             value = getattr(self.koji_module.config, key, None)
             if value is not None:
                 session_opts[key] = value

-        self.koji_proxy = koji.ClientSession(
-            self.koji_module.config.server, session_opts
+        self.koji_proxy = tracing.instrument_xmlrpc_proxy(
+            koji.ClientSession(self.koji_module.config.server, session_opts)
         )

     # This retry should be removed once https://pagure.io/koji/issue/3170 is
     # fixed and released.
-    @util.retry(wait_on=(xmlrpclib.ProtocolError, koji.GenericError))
+    @util.retry(wait_on=(xmlrpc.client.ProtocolError, koji.GenericError))
     def login(self):
         """Authenticate to the hub."""
         auth_type = self.koji_module.config.authtype
@@ -139,7 +146,7 @@ class KojiWrapper(object):
             cmd.append(arch)

         if isinstance(command, list):
-            command = " ".join([shlex_quote(i) for i in command])
+            command = " ".join([shlex.quote(i) for i in command])

         # HACK: remove rpmdb and yum cache
         command = (
@@ -147,7 +154,7 @@ class KojiWrapper(object):
         )

         if chown_paths:
-            paths = " ".join(shlex_quote(pth) for pth in chown_paths)
+            paths = " ".join(shlex.quote(pth) for pth in chown_paths)
             command += " ; EXIT_CODE=$?"
             # Make the files world readable
             command += " ; chmod -R a+r %s" % paths
@@ -281,6 +288,7 @@ class KojiWrapper(object):
         :return dict: {"retcode": 0, "output": "", "task_id": 1}
         """
         task_id = None
+        with tracing.span("run-runroot-cmd", command=command):
         with self.get_koji_cmd_env() as env:
             retcode, output = run(
                 command,
@@ -289,7 +297,8 @@ class KojiWrapper(object):
                 show_cmd=True,
                 env=env,
                 buffer_size=-1,
-                universal_newlines=True,
+                text=True,
+                errors="replace",
             )

         # Look for first line that contains only a number. This is the ID of
@@ -308,6 +317,7 @@ class KojiWrapper(object):
             )

         self.save_task_id(task_id)
+        tracing.set_attribute("task_id", task_id)

         retcode, output = self._wait_for_task(task_id, logfile=log_file)
@@ -353,7 +363,7 @@ class KojiWrapper(object):
         for option, value in opts.items():
             if isinstance(value, list):
                 value = ",".join(value)
-            if not isinstance(value, six.string_types):
+            if not isinstance(value, str):
                 # Python 3 configparser will reject non-string values.
                 value = str(value)
             cfg_parser.set(section, option, value)
@@ -408,92 +418,6 @@ class KojiWrapper(object):
         return cmd

-    def get_create_image_cmd(
-        self,
-        name,
-        version,
-        target,
-        arch,
-        ks_file,
-        repos,
-        image_type="live",
-        image_format=None,
-        release=None,
-        wait=True,
-        archive=False,
-        specfile=None,
-        ksurl=None,
-    ):
-        # Usage: koji spin-livecd [options] <name> <version> <target> <arch> <kickstart-file>  # noqa: E501
-        # Usage: koji spin-appliance [options] <name> <version> <target> <arch> <kickstart-file>  # noqa: E501
-        # Examples:
-        #  * name: RHEL-7.0
-        #  * name: Satellite-6.0.1-RHEL-6
-        #  ** -<type>.<arch>
-        #  * version: YYYYMMDD[.n|.t].X
-        #  * release: 1
-        cmd = self._get_cmd()
-
-        if image_type == "live":
-            cmd.append("spin-livecd")
-        elif image_type == "appliance":
-            cmd.append("spin-appliance")
-        else:
-            raise ValueError("Invalid image type: %s" % image_type)
-
-        if not archive:
-            cmd.append("--scratch")
-
-        cmd.append("--noprogress")
-
-        if wait:
-            cmd.append("--wait")
-        else:
-            cmd.append("--nowait")
-
-        if specfile:
-            cmd.append("--specfile=%s" % specfile)
-
-        if ksurl:
-            cmd.append("--ksurl=%s" % ksurl)
-
-        if isinstance(repos, list):
-            for repo in repos:
-                cmd.append("--repo=%s" % repo)
-        else:
-            cmd.append("--repo=%s" % repos)
-
-        if image_format:
-            if image_type != "appliance":
-                raise ValueError("Format can be specified only for appliance images'")
-            supported_formats = ["raw", "qcow", "qcow2", "vmx"]
-            if image_format not in supported_formats:
-                raise ValueError(
-                    "Format is not supported: %s. Supported formats: %s"
-                    % (image_format, " ".join(sorted(supported_formats)))
-                )
-            cmd.append("--format=%s" % image_format)
-
-        if release is not None:
-            cmd.append("--release=%s" % release)
-
-        # IMPORTANT: all --opts have to be provided *before* args
-        # Usage:
-        # koji spin-livecd [options] <name> <version> <target> <arch> <kickstart-file>
-        cmd.append(name)
-        cmd.append(version)
-        cmd.append(target)
-
-        # i686 -> i386 etc.
-        arch = getBaseArch(arch)
-        cmd.append(arch)
-
-        cmd.append(ks_file)
-
-        return cmd
-
     def _has_connection_error(self, output):
         """Checks if output indicates connection error."""
         return re.search("error: failed to connect\n$", output)
@@ -510,8 +434,9 @@ class KojiWrapper(object):
         attempt = 0
         while True:
+            with tracing.span("watch-task", task_id=task_id):
             retcode, output = run(
-                cmd, can_fail=True, logfile=logfile, universal_newlines=True
+                cmd, can_fail=True, logfile=logfile, text=True, errors="replace"
             )

             if retcode == 0 or not (
@@ -536,6 +461,7 @@ class KojiWrapper(object):
         its exit code and parsed task id. This method will block until the
         command finishes.
         """
+        with tracing.span("run-blocking-cmd", command=command):
         with self.get_koji_cmd_env() as env:
             retcode, output = run(
                 command,
@@ -544,7 +470,8 @@ class KojiWrapper(object):
                 logfile=log_file,
                 env=env,
                 buffer_size=-1,
-                universal_newlines=True,
+                text=True,
+                errors="replace",
             )

         match = re.search(r"Created task: (\d+)", output)
@@ -554,6 +481,7 @@ class KojiWrapper(object):
                 % (" ".join(command), output)
             )
         task_id = int(match.groups()[0])
+        tracing.set_attribute("task_id", task_id)

         self.save_task_id(task_id)
@@ -607,6 +535,8 @@ class KojiWrapper(object):
                 "createImage",
                 "createLiveMedia",
                 "createAppliance",
+                "createKiwiImage",
+                "imageBuilderBuildArch",
             ]:
                 continue
@@ -642,126 +572,6 @@ class KojiWrapper(object):
         return result

-    def get_image_path(self, task_id):
-        result = []
-        task_info_list = []
-        task_info_list.append(self.koji_proxy.getTaskInfo(task_id, request=True))
-        task_info_list.extend(self.koji_proxy.getTaskChildren(task_id, request=True))
-
-        # scan parent and child tasks for certain methods
-        task_info = None
-        for i in task_info_list:
-            if i["method"] in ("createAppliance", "createLiveCD", "createImage"):
-                task_info = i
-                break
-
-        scratch = task_info["request"][-1].get("scratch", False)
-        task_result = self.koji_proxy.getTaskResult(task_info["id"])
-        task_result.pop("rpmlist", None)
-
-        if scratch:
-            topdir = os.path.join(
-                self.koji_module.pathinfo.work(),
-                self.koji_module.pathinfo.taskrelpath(task_info["id"]),
-            )
-        else:
-            build = self.koji_proxy.getImageBuild(
-                "%(name)s-%(version)s-%(release)s" % task_result
-            )
-            build["name"] = task_result["name"]
-            build["version"] = task_result["version"]
-            build["release"] = task_result["release"]
-            build["arch"] = task_result["arch"]
-            topdir = self.koji_module.pathinfo.imagebuild(build)
-
-        for i in task_result["files"]:
-            result.append(os.path.join(topdir, i))
-
-        return result
-
-    def get_wrapped_rpm_path(self, task_id, srpm=False):
-        result = []
-        task_info_list = []
-        task_info_list.extend(self.koji_proxy.getTaskChildren(task_id, request=True))
-
-        # scan parent and child tasks for certain methods
-        task_info = None
-        for i in task_info_list:
-            if i["method"] in ("wrapperRPM"):
-                task_info = i
-                break
-
-        # Get results of wrapperRPM task
-        # {'buildroot_id': 2479520,
-        #  'logs': ['checkout.log', 'root.log', 'state.log', 'build.log'],
-        #  'rpms': ['foreman-discovery-image-2.1.0-2.el7sat.noarch.rpm'],
-        #  'srpm': 'foreman-discovery-image-2.1.0-2.el7sat.src.rpm'}
-        task_result = self.koji_proxy.getTaskResult(task_info["id"])
-
-        # Get koji dir with results (rpms, srpms, logs, ...)
-        topdir = os.path.join(
-            self.koji_module.pathinfo.work(),
-            self.koji_module.pathinfo.taskrelpath(task_info["id"]),
-        )
-
-        # TODO: Maybe use different approach for non-scratch
-        # builds - see get_image_path()
-
-        # Get list of filenames that should be returned
-        result_files = task_result["rpms"]
-        if srpm:
-            result_files += [task_result["srpm"]]
-
-        # Prepare list with paths to the required files
-        for i in result_files:
-            result.append(os.path.join(topdir, i))
-
-        return result
-
-    def get_signed_wrapped_rpms_paths(self, task_id, sigkey, srpm=False):
-        result = []
-        parent_task = self.koji_proxy.getTaskInfo(task_id, request=True)
-        task_info_list = []
-        task_info_list.extend(self.koji_proxy.getTaskChildren(task_id, request=True))
-
-        # scan parent and child tasks for certain methods
-        task_info = None
-        for i in task_info_list:
-            if i["method"] in ("wrapperRPM"):
-                task_info = i
-                break
-
-        # Check parent_task if it's scratch build
-        scratch = parent_task["request"][-1].get("scratch", False)
-        if scratch:
-            raise RuntimeError("Scratch builds cannot be signed!")
-
-        # Get results of wrapperRPM task
-        # {'buildroot_id': 2479520,
-        #  'logs': ['checkout.log', 'root.log', 'state.log', 'build.log'],
-        #  'rpms': ['foreman-discovery-image-2.1.0-2.el7sat.noarch.rpm'],
-        #  'srpm': 'foreman-discovery-image-2.1.0-2.el7sat.src.rpm'}
-        task_result = self.koji_proxy.getTaskResult(task_info["id"])
-
-        # Get list of filenames that should be returned
-        result_files = task_result["rpms"]
-        if srpm:
-            result_files += [task_result["srpm"]]
-
-        # Prepare list with paths to the required files
-        for i in result_files:
-            rpminfo = self.koji_proxy.getRPM(i)
-            build = self.koji_proxy.getBuild(rpminfo["build_id"])
-            path = os.path.join(
-                self.koji_module.pathinfo.build(build),
-                self.koji_module.pathinfo.signed(rpminfo, sigkey),
-            )
-            result.append(path)
-
-        return result
-
-    def get_build_nvrs(self, task_id):
-        builds = self.koji_proxy.listBuilds(taskID=task_id)
-        return [build.get("nvr") for build in builds if build.get("nvr")]
-
     def multicall_map(
         self, koji_session, koji_session_fnc, list_of_args=None, list_of_kwargs=None
     ):
@@ -786,11 +596,10 @@ class KojiWrapper(object):
         if list_of_args is None and list_of_kwargs is None:
             raise ValueError("One of list_of_args or list_of_kwargs must be set.")

-        if type(list_of_args) not in [type(None), list] or type(list_of_kwargs) not in [
-            type(None),
-            list,
-        ]:
-            raise ValueError("list_of_args and list_of_kwargs must be list or None.")
+        if list_of_args is not None and not isinstance(list_of_args, list):
+            raise ValueError("list_of_args must be list or None.")
+        if list_of_kwargs is not None and not isinstance(list_of_kwargs, list):
+            raise ValueError("list_of_kwargs must be list or None.")

         if list_of_kwargs is None:
             list_of_kwargs = [{}] * len(list_of_args)
@@ -804,9 +613,9 @@ class KojiWrapper(object):
         koji_session.multicall = True
         for args, kwargs in zip(list_of_args, list_of_kwargs):
-            if type(args) != list:
+            if not isinstance(args, list):
                 args = [args]
-            if type(kwargs) != dict:
+            if not isinstance(kwargs, dict):
                 raise ValueError("Every item in list_of_kwargs must be a dict")
             koji_session_fnc(*args, **kwargs)

@@ -814,7 +623,7 @@ class KojiWrapper(object):
         if not responses:
             return None

-        if type(responses) != list:
+        if not isinstance(responses, list):
             raise ValueError(
                 "Fault element was returned for multicall of method %r: %r"
                 % (koji_session_fnc, responses)
@@ -830,7 +639,7 @@ class KojiWrapper(object):
         # a one-item array containing the result value,
         # or a struct of the form found inside the standard <fault> element.
         for response, args, kwargs in zip(responses, list_of_args, list_of_kwargs):
-            if type(response) == list:
+            if isinstance(response, list):
                 if not response:
                     raise ValueError(
                         "Empty list returned for multicall of method %r with args %r, %r"  # noqa: E501
@@ -845,11 +654,11 @@ class KojiWrapper(object):
         return results

-    @util.retry(wait_on=(xmlrpclib.ProtocolError, koji.GenericError))
+    @util.retry(wait_on=(xmlrpc.client.ProtocolError, koji.GenericError))
     def retrying_multicall_map(self, *args, **kwargs):
         """
         Retrying version of multicall_map. This tries to retry the Koji call
-        in case of koji.GenericError or xmlrpclib.ProtocolError.
+        in case of koji.GenericError or xmlrpc.client.ProtocolError.

         Please refer to koji_multicall_map for further specification of arguments.
         """
@@ -928,10 +737,186 @@ def get_buildroot_rpms(compose, task_id):
         # local
         retcode, output = run(
             "rpm -qa --qf='%{name}-%{version}-%{release}.%{arch}\n'",
-            universal_newlines=True,
+            text=True,
+            errors="replace",
         )
         for i in output.splitlines():
             if not i:
                 continue
             result.append(i)
     return sorted(result)

+class KojiDownloadProxy:
+    def __init__(self, topdir, topurl, cache_dir, logger):
+        if not topdir:
+            # This will only happen if there is either no koji_profile
+            # configured, or the profile doesn't have a topdir. In the first
+            # case there will be no koji interaction, and the second indicates
+            # broken koji configuration.
+            # We can pretend to have local access in both cases to avoid any
+            # external requests.
+            self.has_local_access = True
+            return
+
+        self.cache_dir = cache_dir
+        self.logger = logger
+
+        self.topdir = topdir
+        self.topurl = topurl
+
+        # If cache directory is configured, we want to use it (even if we
+        # actually have local access to the storage).
+        self.has_local_access = not bool(cache_dir)
+        # This is used for temporary downloaded files. The suffix is unique
+        # per-process. To prevent threads in the same process from colliding, a
+        # thread id is added later.
+        self.unique_suffix = "%s.%s" % (socket.gethostname(), os.getpid())
+        self.session = None
+        if not self.has_local_access:
+            self.session = requests.Session()
+
+    @property
+    def path_prefix(self):
+        dir = self.topdir if self.has_local_access else self.cache_dir
+        return dir.rstrip("/") + "/"
+
+    @classmethod
+    def from_config(klass, conf, logger):
+        topdir = None
+        topurl = None
+        cache_dir = None
+        if "koji_profile" in conf:
+            koji_module = koji.get_profile_module(conf["koji_profile"])
+            topdir = koji_module.config.topdir
+            topurl = koji_module.config.topurl
+
+            cache_dir = conf.get("koji_cache")
+            if cache_dir:
+                cache_dir = cache_dir.rstrip("/") + "/"
+        return klass(topdir, topurl, cache_dir, logger)
+
+    @util.retry(wait_on=requests.exceptions.RequestException)
+    def _download(self, url, dest):
+        """Download file into given location
+
+        :param str url: URL of the file to download
+        :param str dest: file path to store the result in
+        :returns: path to the downloaded file (same as dest) or None if the URL
+            return 404.
+        """
+        # contextlib.closing is only needed in requests<2.18
+        with contextlib.closing(self.session.get(url, stream=True)) as r:
+            if r.status_code == 404:
+                self.logger.warning("GET %s NOT FOUND", url)
+                return None
+            if r.status_code != 200:
+                self.logger.error("GET %s %s", url, r.status_code)
+                r.raise_for_status()
+            # The exception from here will be retried by the decorator.
+            file_size = int(r.headers.get("Content-Length", 0))
+            self.logger.info("GET %s OK %s", url, util.format_size(file_size))
+            with open(dest, "wb") as f:
+                shutil.copyfileobj(r.raw, f)
+            return dest
+
+    def _delete(self, path):
+        """Try to delete file at given path and ignore errors."""
+        try:
+            os.remove(path)
+        except Exception:
+            self.logger.warning("Failed to delete %s", path)
+
+    def _atomic_download(self, url, dest, validator):
+        """Atomically download a file
+
+        :param str url: URL of the file to download
+        :param str dest: file path to store the result in
+        :returns: path to the downloaded file (same as dest) or None if the URL
+            return 404.
+        """
+        temp_file = "%s.%s.%s" % (dest, self.unique_suffix, threading.get_ident())
+
+        # First download to the temporary location.
+        try:
+            if self._download(url, temp_file) is None:
+                # The file was not found.
+                return None
+        except Exception:
+            # Download failed, let's make sure to clean up potentially partial
+            # temporary file.
+            self._delete(temp_file)
+            raise
+
+        # Check if the temporary file is correct (assuming we were provided a
+        # validator function).
+        try:
+            if validator:
+                validator(temp_file)
+        except Exception:
+            # Validation failed. Let's delete the problematic file and re-raise
+            # the exception.
+            self._delete(temp_file)
+            raise
+
+        # Atomically move the temporary file into final location
+        os.rename(temp_file, dest)
+        return dest
+
+    def _download_file(self, path, validator):
+        """Ensure file on Koji volume in ``path`` is present in the local
+        cache.
+
+        :returns: path to the local file or None if file is not found
+        """
+        url = path.replace(self.topdir, self.topurl)
+        destination_file = path.replace(self.topdir, self.cache_dir)
+        util.makedirs(os.path.dirname(destination_file))
+
+        lock = Lock(destination_file + ".lock")
+        # Hold the lock for this file for 5 minutes. If another compose needs
+        # the same file but it's not downloaded yet, the process will wait.
+        #
+        # If the download finishes in time, the downloaded file will be used
+        # here.
+        #
+        # If the download takes longer, this process will steal the lock and
+        # start its own download.
+        #
+        # That should not be a problem: the same file will be downloaded and
+        # then replaced atomically on the filesystem. If the original process
+        # managed to hardlink the first file already, that hardlink will be
+        # broken, but that will only result in the same file stored twice.
+        lock.lifetime = timedelta(minutes=5)
+
+        with lock:
+            # Check if the file already exists. If yes, return the path.
+            if os.path.exists(destination_file):
+                # Update mtime of the file. This covers the case of packages in the
+                # tag that are not included in the compose. Updating mtime will
+                # exempt them from cleanup for extra time.
+                os.utime(destination_file)
+                return destination_file
+
+            with tracing.span("download-rpm", url=url):
+                return self._atomic_download(url, destination_file, validator)
+
+    def get_file(self, path, validator=None):
+        """
+        If path refers to an existing file in Koji, return a valid local path
+        to it. If no such file exists, return None.
+
+        :param validator: A callable that will be called with the path to the
+            downloaded file if and only if the file was actually downloaded.
+            Any exception raised from there will be abort the download and be
+            propagated.
+        """
+        if self.has_local_access:
+            # We have koji volume mounted locally. No transformation needed for
+            # the path, just check it exists.
+            if os.path.exists(path):
+                return path
+            return None
+        else:
+            # We need to download the file.
+            return self._download_file(path, validator)
View File

@@ -46,6 +46,7 @@ class LoraxWrapper(object):
         skip_branding=False,
         squashfs_only=False,
         configuration_file=None,
+        rootfs_type=None,
     ):
         cmd = ["lorax"]
         cmd.append("--product=%s" % product)
@@ -106,58 +107,9 @@ class LoraxWrapper(object):
         output_dir = os.path.abspath(output_dir)
         cmd.append(output_dir)

+        if rootfs_type:
+            cmd.append("--rootfs-type=%s" % rootfs_type)
+
         # TODO: workdir

         return cmd
-
-    def get_buildinstall_cmd(
-        self,
-        product,
-        version,
-        release,
-        repo_baseurl,
-        output_dir,
-        variant=None,
-        bugurl=None,
-        nomacboot=False,
-        noupgrade=False,
-        is_final=False,
-        buildarch=None,
-        volid=None,
-        brand=None,
-    ):
-        # RHEL 6 compatibility
-        # Usage: buildinstall [--debug] --version <version> --brand <brand> --product <product> --release <comment> --final [--output outputdir] [--discs <discstring>] <root>  # noqa: E501
-
-        brand = brand or "redhat"
-        # HACK: ignore provided release
-        release = "%s %s" % (brand, version)
-        bugurl = bugurl or "https://bugzilla.redhat.com"
-
-        cmd = ["/usr/lib/anaconda-runtime/buildinstall"]
-
-        cmd.append("--debug")
-        cmd.extend(["--version", version])
-        cmd.extend(["--brand", brand])
-        cmd.extend(["--product", product])
-        cmd.extend(["--release", release])
-
-        if is_final:
-            cmd.append("--final")
-
-        if buildarch:
-            cmd.extend(["--buildarch", buildarch])
-
-        if bugurl:
-            cmd.extend(["--bugurl", bugurl])
-
-        output_dir = os.path.abspath(output_dir)
-        cmd.extend(["--output", output_dir])
-
-        for i in force_list(repo_baseurl):
-            if "://" not in i:
-                i = "file://%s" % os.path.abspath(i)
-            cmd.append(i)
-
-        return cmd

View File

@@ -105,85 +105,6 @@ class PungiWrapper(object):
         kickstart.close()

-    def get_pungi_cmd(
-        self,
-        config,
-        destdir,
-        name,
-        version=None,
-        flavor=None,
-        selfhosting=False,
-        fulltree=False,
-        greedy=None,
-        nodeps=False,
-        nodownload=True,
-        full_archlist=False,
-        arch=None,
-        cache_dir=None,
-        lookaside_repos=None,
-        multilib_methods=None,
-        profiler=False,
-    ):
-        cmd = ["pungi"]
-
-        # Gather stage
-        cmd.append("-G")
-
-        # path to a kickstart file
-        cmd.append("--config=%s" % config)
-
-        # destdir is optional in Pungi (defaults to current dir), but
-        # want it mandatory here
-        cmd.append("--destdir=%s" % destdir)
-
-        # name
-        cmd.append("--name=%s" % name)
-
-        # version; optional, defaults to datestamp
-        if version:
-            cmd.append("--ver=%s" % version)
-
-        # rhel variant; optional
-        if flavor:
-            cmd.append("--flavor=%s" % flavor)
-
-        # turn selfhosting on
-        if selfhosting:
-            cmd.append("--selfhosting")
-
-        # NPLB
-        if fulltree:
-            cmd.append("--fulltree")
-
-        greedy = greedy or "none"
-        cmd.append("--greedy=%s" % greedy)
-
-        if nodeps:
-            cmd.append("--nodeps")
-
-        # don't download packages, just print paths
-        if nodownload:
-            cmd.append("--nodownload")
-
-        if full_archlist:
-            cmd.append("--full-archlist")
-
-        if arch:
-            cmd.append("--arch=%s" % arch)
-
-        if multilib_methods:
-            for i in multilib_methods:
-                cmd.append("--multilib=%s" % i)
-
-        if cache_dir:
-            cmd.append("--cachedir=%s" % cache_dir)
-
-        if lookaside_repos:
-            for i in lookaside_repos:
-                cmd.append("--lookaside-repo=%s" % i)
-
-        return cmd
-
     def get_pungi_cmd_dnf(
         self,
         config,
@@ -269,70 +190,3 @@ class PungiWrapper(object):
                 broken_deps.setdefault(match.group(2), set()).add(match.group(1))

         return packages, broken_deps, missing_comps
-
-    def run_pungi(
-        self,
-        ks_file,
-        destdir,
-        name,
-        selfhosting=False,
-        fulltree=False,
-        greedy="",
-        cache_dir=None,
-        arch="",
-        multilib_methods=[],
-        nodeps=False,
-        lookaside_repos=[],
-    ):
-        """
-        This is a replacement for get_pungi_cmd that runs it in-process. Not
-        all arguments are supported.
-        """
-        from .. import ks, gather, config
-
-        ksparser = ks.get_ksparser(ks_path=ks_file)
-        cfg = config.Config()
-        cfg.set("pungi", "destdir", destdir)
-        cfg.set("pungi", "family", name)
-        cfg.set("pungi", "iso_basename", name)
-        cfg.set("pungi", "fulltree", str(fulltree))
-        cfg.set("pungi", "selfhosting", str(selfhosting))
-        cfg.set("pungi", "cachedir", cache_dir)
-        cfg.set("pungi", "full_archlist", "True")
-        cfg.set("pungi", "workdirbase", "%s/work" % destdir)
-        cfg.set("pungi", "greedy", greedy)
-        cfg.set("pungi", "nosource", "False")
-        cfg.set("pungi", "nodebuginfo", "False")
-        cfg.set("pungi", "force", "False")
-        cfg.set("pungi", "resolve_deps", str(not nodeps))
-        if arch:
-            cfg.set("pungi", "arch", arch)
-        if multilib_methods:
-            cfg.set("pungi", "multilib", " ".join(multilib_methods))
-        if lookaside_repos:
-            cfg.set("pungi", "lookaside_repos", " ".join(lookaside_repos))
-
-        mypungi = gather.Pungi(cfg, ksparser)
-
-        with open(os.path.join(destdir, "out"), "w") as f:
-            with mypungi.yumlock:
-                mypungi._inityum()
-                mypungi.gather()
-                for line in mypungi.list_packages():
-                    flags_str = ",".join(line["flags"])
-                    if flags_str:
-                        flags_str = "(%s)" % flags_str
-                    f.write("RPM%s: %s\n" % (flags_str, line["path"]))
-                mypungi.makeCompsFile()
-                mypungi.getDebuginfoList()
-                for line in mypungi.list_debuginfo():
-                    flags_str = ",".join(line["flags"])
-                    if flags_str:
-                        flags_str = "(%s)" % flags_str
-                    f.write("DEBUGINFO%s: %s\n" % (flags_str, line["path"]))
-                for line in mypungi.list_srpms():
-                    flags_str = ",".join(line["flags"])
-                    if flags_str:
-                        flags_str = "(%s)" % flags_str
-                    f.write("SRPM%s: %s\n" % (flags_str, line["path"]))

View File

@@ -19,13 +19,8 @@ import os
from kobo.shortcuts import force_list

-def get_repoclosure_cmd(backend="yum", arch=None, repos=None, lookaside=None):
+def get_repoclosure_cmd(backend="dnf", arch=None, repos=None, lookaside=None):
    cmds = {
-        "yum": {
-            "cmd": ["/usr/bin/repoclosure", "--tempcache"],
-            "repoarg": "--repoid=%s",
-            "lookaside": "--lookaside=%s",
-        },
        "dnf": {
            "cmd": ["dnf", "repoclosure"],
            "repoarg": "--repo=%s",
@@ -44,14 +39,13 @@ def get_repoclosure_cmd(backend="yum", arch=None, repos=None, lookaside=None):
    for i in arches:
        cmd.append("--arch=%s" % i)
-    if backend == "dnf" and arches:
+    if arches:
        cmd.append("--forcearch=%s" % arches[0])

    repos = repos or {}
    for repo_id, repo_path in repos.items():
        cmd.append("--repofrompath=%s,%s" % (repo_id, _to_url(repo_path)))
        cmd.append(cmds[backend]["repoarg"] % repo_id)
-    if backend == "dnf":
    # For dnf we want to add all repos with the --repo option (which
    # enables only those and not any system repo), and the repos to
    # check are also listed with the --check option.
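
For orientation, this is roughly the command the now DNF-only helper assembles for a single local repository. A minimal sketch; the repo id and path are hypothetical, and the exact flag order is an assumption based on the hunks above.

# A minimal sketch; repo id and path are made up for illustration.
from pungi.wrappers.repoclosure import get_repoclosure_cmd

cmd = get_repoclosure_cmd(
    arch="x86_64",
    repos={"server": "/mnt/compose/Server/x86_64/os"},
)
print(" ".join(cmd))
# Expected shape (assumption):
#   dnf repoclosure --arch=x86_64 --forcearch=x86_64 \
#     --repofrompath=server,file:///mnt/compose/Server/x86_64/os \
#     --repo=server --check=server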
@@ -19,22 +19,26 @@ from __future__ import absolute_import
import os
import shutil
import glob
-import six
+import shlex
-from six.moves import shlex_quote
+import threading
-from six.moves.urllib.request import urlretrieve
+from urllib.request import urlretrieve
from fnmatch import fnmatch

import kobo.log
from kobo.shortcuts import run, force_list
from pungi.util import explode_rpm_package, makedirs, copy_all, temp_dir, retry
from .kojiwrapper import KojiWrapper
+from ..otel import tracing

+lock = threading.Lock()

class ScmBase(kobo.log.LoggingBase):
-    def __init__(self, logger=None, command=None, compose=None):
+    def __init__(self, logger=None, command=None, compose=None, options=None):
        kobo.log.LoggingBase.__init__(self, logger=logger)
        self.command = command
        self.compose = compose
+        self.options = options or {}

    @retry(interval=60, timeout=300, wait_on=RuntimeError)
    def retry_run(self, cmd, **kwargs):
@@ -53,7 +57,8 @@ class ScmBase(kobo.log.LoggingBase):
            workdir=cwd,
            can_fail=True,
            stdin_data="",
-            universal_newlines=True,
+            text=True,
+            errors="replace",
        )
        if retcode != 0:
            self.log_error("Output was: %r" % output)
@@ -75,7 +80,7 @@ class FileWrapper(ScmBase):
            for i in dirs:
                copy_all(i, target_dir)

-    def export_file(self, scm_root, scm_file, target_dir, scm_branch=None):
+    def export_file(self, scm_root, scm_file, target_dir, scm_branch=None, arch=None):
        if scm_root:
            raise ValueError("FileWrapper: 'scm_root' should be empty.")
        self.log_debug(
@@ -114,7 +119,7 @@ class CvsWrapper(ScmBase):
            )
            copy_all(os.path.join(tmp_dir, scm_dir), target_dir)

-    def export_file(self, scm_root, scm_file, target_dir, scm_branch=None):
+    def export_file(self, scm_root, scm_file, target_dir, scm_branch=None, arch=None):
        scm_file = scm_file.lstrip("/")
        scm_branch = scm_branch or "HEAD"
        with temp_dir() as tmp_dir:
@@ -156,22 +161,34 @@ class GitWrapper(ScmBase):
        if "://" not in repo:
            repo = "file://%s" % repo

+        if repo.startswith("git+http"):
+            repo = repo[4:]

+        git_cmd = ["git"]
+        if "credential_helper" in self.options:
+            git_cmd.extend(["-c", "credential.useHttpPath=true"])
+            git_cmd.extend(
+                ["-c", "credential.helper=%s" % self.options["credential_helper"]]
+            )

        run(["git", "init"], workdir=destdir)
        try:
-            run(["git", "fetch", "--depth=1", repo, branch], workdir=destdir)
+            run(git_cmd + ["fetch", "--depth=1", repo, branch], workdir=destdir)
            run(["git", "checkout", "FETCH_HEAD"], workdir=destdir)
        except RuntimeError as e:
            # Fetch failed, to do a full clone we add a remote to our empty
            # repo, get its content and check out the reference we want.
            self.log_debug(
                "Trying to do a full clone because shallow clone failed: %s %s"
-                % (e, e.output)
+                % (e, getattr(e, "output", ""))
            )
            try:
                # Re-run git init in case of previous failure breaking .git dir
                run(["git", "init"], workdir=destdir)
                run(["git", "remote", "add", "origin", repo], workdir=destdir)
-                self.retry_run(["git", "remote", "update", "origin"], workdir=destdir)
+                self.retry_run(
+                    git_cmd + ["remote", "update", "origin"], workdir=destdir
+                )
                run(["git", "checkout", branch], workdir=destdir)
            except RuntimeError:
                if self.compose:
@@ -185,27 +202,57 @@ class GitWrapper(ScmBase):
                    copy_all(destdir, debugdir)
                raise

-        self.run_process_command(destdir)
+        if os.path.exists(os.path.join(destdir, ".gitmodules")):
+            try:
+                self.log_debug("Cloning submodules")
+                run(["git", "submodule", "init"], workdir=destdir)
+                run(["git", "submodule", "update"], workdir=destdir)
+            except RuntimeError as e:
+                self.log_error(
+                    "Failed to clone submodules: %s %s", e, getattr(e, "output", "")
+                )
+                # Ignore the error here, there may just be no submodules.

+    def get_temp_repo_path(self, scm_root, scm_branch):
+        scm_repo = scm_root.split("/")[-1]
+        process_id = os.getpid()
+        tmp_dir = (
+            "/tmp/pungi-temp-git-repos-"
+            + str(process_id)
+            + "/"
+            + scm_repo
+            + "-"
+            + scm_branch
+        )
+        return tmp_dir

+    def setup_repo(self, scm_root, scm_branch):
+        tmp_dir = self.get_temp_repo_path(scm_root, scm_branch)
+        if not os.path.isdir(tmp_dir):
+            makedirs(tmp_dir)
+            with tracing.span("git-clone", repo=scm_root, ref=scm_branch):
+                self._clone(scm_root, scm_branch, tmp_dir)
+            self.run_process_command(tmp_dir)
+        return tmp_dir
    def export_dir(self, scm_root, scm_dir, target_dir, scm_branch=None):
        scm_dir = scm_dir.lstrip("/")
        scm_branch = scm_branch or "master"
-        with temp_dir() as tmp_dir:
        self.log_debug(
            "Exporting directory %s from git %s (branch %s)..."
            % (scm_dir, scm_root, scm_branch)
        )
-            self._clone(scm_root, scm_branch, tmp_dir)
+        with lock:
+            tmp_dir = self.setup_repo(scm_root, scm_branch)
        copy_all(os.path.join(tmp_dir, scm_dir), target_dir)

-    def export_file(self, scm_root, scm_file, target_dir, scm_branch=None):
+    def export_file(self, scm_root, scm_file, target_dir, scm_branch=None, arch=None):
        scm_file = scm_file.lstrip("/")
        scm_branch = scm_branch or "master"
-        with temp_dir() as tmp_dir:
        target_path = os.path.join(target_dir, os.path.basename(scm_file))
        self.log_debug(
@@ -213,7 +260,8 @@ class GitWrapper(ScmBase):
            % (scm_file, scm_root, scm_branch)
        )
-            self._clone(scm_root, scm_branch, tmp_dir)
+        with lock:
+            tmp_dir = self.setup_repo(scm_root, scm_branch)
        makedirs(target_dir)
        shutil.copy2(os.path.join(tmp_dir, scm_file), target_path)
@@ -242,12 +290,12 @@ class RpmScmWrapper(ScmBase):
                run(
                    "cp -a %s %s/"
                    % (
-                        shlex_quote(os.path.join(tmp_dir, scm_dir)),
+                        shlex.quote(os.path.join(tmp_dir, scm_dir)),
-                        shlex_quote(target_dir),
+                        shlex.quote(target_dir),
                    )
                )

-    def export_file(self, scm_root, scm_file, target_dir, scm_branch=None):
+    def export_file(self, scm_root, scm_file, target_dir, scm_branch=None, arch=None):
        for rpm in self._list_rpms(scm_root):
            scm_file = scm_file.lstrip("/")
            with temp_dir() as tmp_dir:
@@ -272,7 +320,7 @@ class KojiScmWrapper(ScmBase):
    def export_dir(self, *args, **kwargs):
        raise RuntimeError("Only files can be exported from Koji")

-    def export_file(self, scm_root, scm_file, target_dir, scm_branch=None):
+    def export_file(self, scm_root, scm_file, target_dir, scm_branch=None, arch=None):
        if scm_branch:
            self._get_latest_from_tag(scm_branch, scm_root, scm_file, target_dir)
        else:
@@ -309,6 +357,44 @@ class KojiScmWrapper(ScmBase):
        urlretrieve(url, target_file)

+class SkopeoCopyTimeoutError(RuntimeError):
+    pass

+class ContainerImageScmWrapper(ScmBase):
+    def export_dir(self, *args, **kwargs):
+        raise RuntimeError("Containers can only be exported as files")

+    def export_file(self, scm_root, scm_file, target_dir, scm_branch=None, arch=None):
+        if arch == "src":
+            return
+        ARCHES = {"aarch64": "arm64", "x86_64": "amd64"}
+        arch = ARCHES.get(arch, arch)
+        cmd = [
+            "skopeo",
+            "--override-arch=" + arch,
+            "copy",
+            scm_root,
+            "oci:" + target_dir,
+            "--remove-signatures",
+        ]
+        try:
+            self.log_debug(
+                "Exporting container %s to %s: %s", scm_root, target_dir, cmd
+            )
+            with tracing.span("skopeo-copy", arch=arch, image=scm_root):
+                self.retry_run(cmd, can_fail=False)
+        except RuntimeError as e:
+            output = getattr(e, "output", "")
+            self.log_error("Failed to copy container image: %s %s", e, output)
+            if "connection timed out" in output:
+                raise SkopeoCopyTimeoutError(output) from e
+            raise
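
For a concrete sense of what ContainerImageScmWrapper executes, an x86_64 request would run roughly the command below. The image reference and target directory are hypothetical; note the productmd arch name is first translated to its OCI counterpart.

# Hypothetical command list, mirroring the wrapper above for arch="x86_64".
cmd = [
    "skopeo",
    "--override-arch=amd64",  # "x86_64" mapped through ARCHES to "amd64"
    "copy",
    "docker://registry.example.com/os/base:latest",  # scm_root (example)
    "oci:/work/Server/x86_64/containers",  # target_dir as an OCI layout (example)
    "--remove-signatures",
]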
def _get_wrapper(scm_type, *args, **kwargs):
    SCM_WRAPPERS = {
        "file": FileWrapper,
@@ -316,6 +402,7 @@ def _get_wrapper(scm_type, *args, **kwargs):
        "git": GitWrapper,
        "rpm": RpmScmWrapper,
        "koji": KojiScmWrapper,
+        "container-image": ContainerImageScmWrapper,
    }
    try:
        cls = SCM_WRAPPERS[scm_type]
@@ -324,7 +411,7 @@ def _get_wrapper(scm_type, *args, **kwargs):
    return cls(*args, **kwargs)

-def get_file_from_scm(scm_dict, target_path, compose=None):
+def get_file_from_scm(scm_dict, target_path, compose=None, arch=None):
    """
    Copy one or more files from source control to a target path. A list of files
    created in ``target_path`` is returned.
@@ -355,26 +442,40 @@ def get_file_from_scm(scm_dict, target_path, compose=None):
    >>> get_file_from_scm(scm_dict, target_path)
    ['/tmp/path/share/variants.dtd']
    """
-    if isinstance(scm_dict, six.string_types):
+    if isinstance(scm_dict, str):
        scm_type = "file"
        scm_repo = None
        scm_file = os.path.abspath(scm_dict)
        scm_branch = None
        command = None
+        options = {}
    else:
        scm_type = scm_dict["scm"]
        scm_repo = scm_dict["repo"]
        scm_file = scm_dict["file"]
        scm_branch = scm_dict.get("branch", None)
        command = scm_dict.get("command")
+        options = scm_dict.get("options", {})

    logger = compose._logger if compose else None
-    scm = _get_wrapper(scm_type, logger=logger, command=command, compose=compose)
+    scm = _get_wrapper(
+        scm_type, logger=logger, command=command, compose=compose, options=options
+    )

    files_copied = []
    for i in force_list(scm_file):
        with temp_dir(prefix="scm_checkout_") as tmp_dir:
-            scm.export_file(scm_repo, i, scm_branch=scm_branch, target_dir=tmp_dir)
+            # Most SCM wrappers need a temporary directory: the git repo is
+            # cloned there, and only relevant files are copied out. But this
+            # doesn't work for the container image fetching. That pulls in only
+            # required files, and the final output needs to be done by skopeo
+            # to correctly handle multiple containers landing in the same OCI
+            # archive.
+            dest = target_path if scm_type == "container-image" else tmp_dir
+            scm.export_file(
+                scm_repo, i, scm_branch=scm_branch, target_dir=dest, arch=arch
+            )
+            if dest == tmp_dir:
                files_copied += copy_all(tmp_dir, target_path)
    return files_copied
@@ -414,7 +515,7 @@ def get_file(source, destination, compose, overwrite=False):
    return destination

-def get_dir_from_scm(scm_dict, target_path, compose=None):
+def get_dir_from_scm(scm_dict, target_path, compose=None, arch=None):
    """
    Copy a directory from source control to a target path. A list of files
    created in ``target_path`` is returned.
@@ -444,21 +545,25 @@ def get_dir_from_scm(scm_dict, target_path, compose=None):
    >>> get_dir_from_scm(scm_dict, target_path)
    ['/tmp/path/share/variants.dtd', '/tmp/path/share/rawhide-fedora.ks', ...]
    """
-    if isinstance(scm_dict, six.string_types):
+    if isinstance(scm_dict, str):
        scm_type = "file"
        scm_repo = None
        scm_dir = os.path.abspath(scm_dict)
        scm_branch = None
        command = None
+        options = {}
    else:
        scm_type = scm_dict["scm"]
        scm_repo = scm_dict.get("repo", None)
        scm_dir = scm_dict["dir"]
        scm_branch = scm_dict.get("branch", None)
        command = scm_dict.get("command")
+        options = scm_dict.get("options", {})

    logger = compose._logger if compose else None
-    scm = _get_wrapper(scm_type, logger=logger, command=command, compose=compose)
+    scm = _get_wrapper(
+        scm_type, logger=logger, command=command, compose=compose, options=options
+    )

    with temp_dir(prefix="scm_checkout_") as tmp_dir:
        scm.export_dir(scm_repo, scm_dir, scm_branch=scm_branch, target_dir=tmp_dir)
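
Putting the new plumbing together, here are hypothetical scm_dict values exercising the two additions (the git credential helper option and the container-image source type). Names, URLs, and the helper path are illustrative only.

from pungi.wrappers.scm import get_file_from_scm

git_scm = {
    "scm": "git",
    "repo": "https://git.example.com/comps.git",
    "file": "comps-server.xml",
    "branch": "main",
    # Forwarded to GitWrapper via "options"; produces
    # git -c credential.useHttpPath=true -c credential.helper=... fetch ...
    "options": {"credential_helper": "!/usr/libexec/example-helper"},
}
container_scm = {
    "scm": "container-image",
    "repo": "docker://registry.example.com/os/base:latest",
    # get_file_from_scm reads "file", but the container wrapper ignores it.
    "file": "base",
}
get_file_from_scm(git_scm, "/tmp/comps")
# Containers are written straight into the target OCI layout by skopeo.
get_file_from_scm(container_scm, "/tmp/containers", arch="x86_64")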
@@ -276,7 +276,6 @@ class Variant(object):
        modules=None,
        modular_koji_tags=None,
    ):
        environments = environments or []
        buildinstallpackages = buildinstallpackages or []
@@ -1,705 +0,0 @@
# -*- coding: utf-8 -*-
from __future__ import print_function
import argparse
import atexit
import errno
import json
import logging
import os
import re
import shutil
import subprocess
import sys
import tempfile
import time
import threading
from collections import namedtuple
import kobo.conf
import kobo.log
import productmd
from kobo import shortcuts
from six.moves import configparser, shlex_quote
import pungi.util
from pungi.compose import get_compose_dir
from pungi.linker import linker_pool
from pungi.phases.pkgset.sources.source_koji import get_koji_event_raw
from pungi.util import find_old_compose, parse_koji_event, temp_dir
from pungi.wrappers.kojiwrapper import KojiWrapper
Config = namedtuple(
"Config",
[
# Path to directory with the compose
"target",
"compose_type",
"label",
# Path to the selected old compose that will be reused
"old_compose",
# Path to directory with config file copies
"config_dir",
# Which koji event to use (if any)
"event",
# Additional arguments to pungi-koji executable
"extra_args",
],
)
log = logging.getLogger(__name__)
class Status(object):
# Ready to start
READY = "READY"
# Waiting for dependencies to finish.
WAITING = "WAITING"
# Part is currently running
STARTED = "STARTED"
# A dependency failed, this one will never start.
BLOCKED = "BLOCKED"
class ComposePart(object):
def __init__(self, name, config, just_phase=[], skip_phase=[], dependencies=[]):
self.name = name
self.config = config
self.status = Status.WAITING if dependencies else Status.READY
self.just_phase = just_phase
self.skip_phase = skip_phase
self.blocked_on = set(dependencies)
self.depends_on = set(dependencies)
self.path = None
self.log_file = None
self.failable = False
def __str__(self):
return self.name
def __repr__(self):
return (
"ComposePart({0.name!r},"
" {0.config!r},"
" {0.status!r},"
" just_phase={0.just_phase!r},"
" skip_phase={0.skip_phase!r},"
" dependencies={0.depends_on!r})"
).format(self)
def refresh_status(self):
"""Refresh status of this part with the result of the compose. This
should only be called once the compose finished.
"""
try:
with open(os.path.join(self.path, "STATUS")) as fh:
self.status = fh.read().strip()
except IOError as exc:
log.error("Failed to update status of %s: %s", self.name, exc)
log.error("Assuming %s is DOOMED", self.name)
self.status = "DOOMED"
def is_finished(self):
return "FINISHED" in self.status
def unblock_on(self, finished_part):
"""Update set of blockers for this part. If it's empty, mark us as ready."""
self.blocked_on.discard(finished_part)
if self.status == Status.WAITING and not self.blocked_on:
log.debug("%s is ready to start", self)
self.status = Status.READY
def setup_start(self, global_config, parts):
substitutions = dict(
("part-%s" % name, p.path) for name, p in parts.items() if p.is_finished()
)
substitutions["configdir"] = global_config.config_dir
config = pungi.util.load_config(self.config)
for f in config.opened_files:
# apply substitutions
fill_in_config_file(f, substitutions)
self.status = Status.STARTED
self.path = get_compose_dir(
os.path.join(global_config.target, "parts"),
config,
compose_type=global_config.compose_type,
compose_label=global_config.label,
)
self.log_file = os.path.join(global_config.target, "logs", "%s.log" % self.name)
log.info("Starting %s in %s", self.name, self.path)
def get_cmd(self, global_config):
cmd = ["pungi-koji", "--config", self.config, "--compose-dir", self.path]
cmd.append("--%s" % global_config.compose_type)
if global_config.label:
cmd.extend(["--label", global_config.label])
for phase in self.just_phase:
cmd.extend(["--just-phase", phase])
for phase in self.skip_phase:
cmd.extend(["--skip-phase", phase])
if global_config.old_compose:
cmd.extend(
["--old-compose", os.path.join(global_config.old_compose, "parts")]
)
if global_config.event:
cmd.extend(["--koji-event", str(global_config.event)])
if global_config.extra_args:
cmd.extend(global_config.extra_args)
cmd.extend(["--no-latest-link"])
return cmd
@classmethod
def from_config(cls, config, section, config_dir):
part = cls(
name=section,
config=os.path.join(config_dir, config.get(section, "config")),
just_phase=_safe_get_list(config, section, "just_phase", []),
skip_phase=_safe_get_list(config, section, "skip_phase", []),
dependencies=_safe_get_list(config, section, "depends_on", []),
)
if config.has_option(section, "failable"):
part.failable = config.getboolean(section, "failable")
return part
def _safe_get_list(config, section, option, default=None):
"""Get a value from config parser. The result is split into a list on
commas or spaces, and `default` is returned if the key does not exist.
"""
if config.has_option(section, option):
value = config.get(section, option)
return [x.strip() for x in re.split(r"[, ]+", value) if x]
return default
def fill_in_config_file(fp, substs):
"""Templating function. It works with Jinja2 style placeholders such as
{{foo}}. Whitespace around the key name is fine. The file is modified in place.
:param fp string: path to the file to process
:param substs dict: a mapping for values to put into the file
"""
def repl(match):
try:
return substs[match.group(1)]
except KeyError as exc:
raise RuntimeError(
"Unknown placeholder %s in %s" % (exc, os.path.basename(fp))
)
with open(fp, "r") as f:
contents = re.sub(r"{{ *([a-zA-Z-_]+) *}}", repl, f.read())
with open(fp, "w") as f:
f.write(contents)
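A quick illustration of the placeholder syntax this (now removed) helper accepted; the file content and substitution value are hypothetical.

import tempfile

with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
    f.write("target = {{ configdir }}/parts\n")
fill_in_config_file(f.name, {"configdir": "/srv/compose/config"})
print(open(f.name).read())  # -> target = /srv/compose/config/parts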
def start_part(global_config, parts, part):
part.setup_start(global_config, parts)
fh = open(part.log_file, "w")
cmd = part.get_cmd(global_config)
log.debug("Running command %r", " ".join(shlex_quote(x) for x in cmd))
return subprocess.Popen(cmd, stdout=fh, stderr=subprocess.STDOUT)
def handle_finished(global_config, linker, parts, proc, finished_part):
finished_part.refresh_status()
log.info("%s finished with status %s", finished_part, finished_part.status)
if proc.returncode == 0:
# Success, unblock other parts...
for part in parts.values():
part.unblock_on(finished_part.name)
# ...and link the results into final destination.
copy_part(global_config, linker, finished_part)
update_metadata(global_config, finished_part)
else:
# Failure, other stuff may be blocked.
log.info("See details in %s", finished_part.log_file)
block_on(parts, finished_part.name)
def copy_part(global_config, linker, part):
c = productmd.Compose(part.path)
for variant in c.info.variants:
data_path = os.path.join(part.path, "compose", variant)
link = os.path.join(global_config.target, "compose", variant)
log.info("Hardlinking content %s -> %s", data_path, link)
hardlink_dir(linker, data_path, link)
def hardlink_dir(linker, srcdir, dstdir):
for root, dirs, files in os.walk(srcdir):
root = os.path.relpath(root, srcdir)
for f in files:
src = os.path.normpath(os.path.join(srcdir, root, f))
dst = os.path.normpath(os.path.join(dstdir, root, f))
linker.queue_put((src, dst))
def update_metadata(global_config, part):
part_metadata_dir = os.path.join(part.path, "compose", "metadata")
final_metadata_dir = os.path.join(global_config.target, "compose", "metadata")
for f in os.listdir(part_metadata_dir):
# Load the metadata
with open(os.path.join(part_metadata_dir, f)) as fh:
part_metadata = json.load(fh)
final_metadata = os.path.join(final_metadata_dir, f)
if os.path.exists(final_metadata):
# We already have this file, will need to merge.
merge_metadata(final_metadata, part_metadata)
else:
# A new file, just copy it.
copy_metadata(global_config, final_metadata, part_metadata)
def copy_metadata(global_config, final_metadata, source):
"""Copy file to final location, but update compose information."""
with open(
os.path.join(global_config.target, "compose/metadata/composeinfo.json")
) as f:
composeinfo = json.load(f)
try:
source["payload"]["compose"].update(composeinfo["payload"]["compose"])
except KeyError:
# No [payload][compose], probably OSBS metadata
pass
with open(final_metadata, "w") as f:
json.dump(source, f, indent=2, sort_keys=True)
def merge_metadata(final_metadata, source):
with open(final_metadata) as f:
metadata = json.load(f)
try:
key = {
"productmd.composeinfo": "variants",
"productmd.modules": "modules",
"productmd.images": "images",
"productmd.rpms": "rpms",
}[source["header"]["type"]]
# TODO what if multiple parts create images for the same variant
metadata["payload"][key].update(source["payload"][key])
except KeyError:
# OSBS metadata, merge whole file
metadata.update(source)
with open(final_metadata, "w") as f:
json.dump(metadata, f, indent=2, sort_keys=True)
def block_on(parts, name):
"""Part ``name`` failed, mark everything depending on it as blocked."""
for part in parts.values():
if name in part.blocked_on:
log.warning("%s is blocked now and will not run", part)
part.status = Status.BLOCKED
block_on(parts, part.name)
def check_finished_processes(processes):
"""Walk through all active processes and check if something finished."""
for proc in processes.keys():
proc.poll()
if proc.returncode is not None:
yield proc, processes[proc]
def run_all(global_config, parts):
# Mapping subprocess.Popen -> ComposePart
processes = dict()
remaining = set(p.name for p in parts.values() if not p.is_finished())
with linker_pool("hardlink") as linker:
while remaining or processes:
update_status(global_config, parts)
for proc, part in check_finished_processes(processes):
del processes[proc]
handle_finished(global_config, linker, parts, proc, part)
# Start new available processes.
for name in list(remaining):
part = parts[name]
# Start all ready parts
if part.status == Status.READY:
remaining.remove(name)
processes[start_part(global_config, parts, part)] = part
# Remove blocked parts from todo list
elif part.status == Status.BLOCKED:
remaining.remove(part.name)
# Wait for any child process to finish if there is any.
if processes:
pid, reason = os.wait()
for proc in processes.keys():
# Set the return code for process that we caught by os.wait().
# Calling poll() on it would not set the return code properly
# since the value was already consumed by os.wait().
if proc.pid == pid:
proc.returncode = (reason >> 8) & 0xFF
log.info("Waiting for linking to finish...")
return update_status(global_config, parts)
def get_target_dir(config, compose_info, label, reldir=""):
"""Find directory where this compose will be.
@param reldir: if target path in config is relative, it will be resolved
against this directory
"""
dir = os.path.realpath(os.path.join(reldir, config.get("general", "target")))
target_dir = get_compose_dir(
dir,
compose_info,
compose_type=config.get("general", "compose_type"),
compose_label=label,
)
return target_dir
def setup_logging(debug=False):
FORMAT = "%(asctime)s: %(levelname)s: %(message)s"
level = logging.DEBUG if debug else logging.INFO
kobo.log.add_stderr_logger(log, log_level=level, format=FORMAT)
log.setLevel(level)
def compute_status(statuses):
if any(map(lambda x: x[0] in ("STARTED", "WAITING"), statuses)):
# If there is anything still running or waiting to start, the whole is
# still running.
return "STARTED"
elif any(map(lambda x: x[0] in ("DOOMED", "BLOCKED") and not x[1], statuses)):
# If any required part is doomed or blocked, the whole is doomed
return "DOOMED"
elif all(map(lambda x: x[0] == "FINISHED", statuses)):
# If all parts are complete, the whole is complete
return "FINISHED"
else:
return "FINISHED_INCOMPLETE"
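Worked examples of the aggregation rule above; each entry pairs a part's status with its failable flag.

print(compute_status({("STARTED", False), ("FINISHED", False)}))  # STARTED
print(compute_status({("DOOMED", False), ("FINISHED", False)}))   # DOOMED
print(compute_status({("DOOMED", True), ("FINISHED", False)}))    # FINISHED_INCOMPLETE
print(compute_status({("FINISHED", False), ("FINISHED", True)}))  # FINISHED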
def update_status(global_config, parts):
log.debug("Updating status metadata")
metadata = {}
statuses = set()
for part in parts.values():
metadata[part.name] = {"status": part.status, "path": part.path}
statuses.add((part.status, part.failable))
metadata_path = os.path.join(
global_config.target, "compose", "metadata", "parts.json"
)
with open(metadata_path, "w") as fh:
json.dump(metadata, fh, indent=2, sort_keys=True, separators=(",", ": "))
status = compute_status(statuses)
log.info("Overall status is %s", status)
with open(os.path.join(global_config.target, "STATUS"), "w") as fh:
fh.write(status)
return status != "DOOMED"
def prepare_compose_dir(config, args, main_config_file, compose_info):
if not hasattr(args, "compose_path"):
# Creating a brand new compose
target_dir = get_target_dir(
config, compose_info, args.label, reldir=os.path.dirname(main_config_file)
)
for dir in ("logs", "parts", "compose/metadata", "work/global"):
try:
os.makedirs(os.path.join(target_dir, dir))
except OSError as exc:
if exc.errno != errno.EEXIST:
raise
with open(os.path.join(target_dir, "STATUS"), "w") as fh:
fh.write("STARTED")
# Copy initial composeinfo for new compose
shutil.copy(
os.path.join(target_dir, "work/global/composeinfo-base.json"),
os.path.join(target_dir, "compose/metadata/composeinfo.json"),
)
else:
# Restarting a particular compose
target_dir = args.compose_path
return target_dir
def load_parts_metadata(global_config):
parts_metadata = os.path.join(global_config.target, "compose/metadata/parts.json")
with open(parts_metadata) as f:
return json.load(f)
def setup_for_restart(global_config, parts, to_restart):
has_stuff_to_do = False
metadata = load_parts_metadata(global_config)
for key in metadata:
# Update state to match what is on disk
log.debug(
"Reusing %s (%s) from %s",
key,
metadata[key]["status"],
metadata[key]["path"],
)
parts[key].status = metadata[key]["status"]
parts[key].path = metadata[key]["path"]
for key in to_restart:
# Set restarted parts to run again
parts[key].status = Status.WAITING
parts[key].path = None
for key in to_restart:
# Remove blockers that are already finished
for blocker in list(parts[key].blocked_on):
if parts[blocker].is_finished():
parts[key].blocked_on.discard(blocker)
if not parts[key].blocked_on:
log.debug("Part %s is not blocked", key)
# Nothing blocks it; let's go
parts[key].status = Status.READY
has_stuff_to_do = True
if not has_stuff_to_do:
raise RuntimeError("All restarted parts are blocked. Nothing to do.")
def run_kinit(config):
if not config.getboolean("general", "kerberos"):
return
keytab = config.get("general", "kerberos_keytab")
principal = config.get("general", "kerberos_principal")
fd, fname = tempfile.mkstemp(prefix="krb5cc_pungi-orchestrate_")
os.close(fd)
os.environ["KRB5CCNAME"] = fname
shortcuts.run(["kinit", "-k", "-t", keytab, principal])
log.debug("Created a kerberos ticket for %s", principal)
atexit.register(os.remove, fname)
def get_compose_data(compose_path):
try:
compose = productmd.compose.Compose(compose_path)
data = {
"compose_id": compose.info.compose.id,
"compose_date": compose.info.compose.date,
"compose_type": compose.info.compose.type,
"compose_respin": str(compose.info.compose.respin),
"compose_label": compose.info.compose.label,
"release_id": compose.info.release_id,
"release_name": compose.info.release.name,
"release_short": compose.info.release.short,
"release_version": compose.info.release.version,
"release_type": compose.info.release.type,
"release_is_layered": compose.info.release.is_layered,
}
if compose.info.release.is_layered:
data.update(
{
"base_product_name": compose.info.base_product.name,
"base_product_short": compose.info.base_product.short,
"base_product_version": compose.info.base_product.version,
"base_product_type": compose.info.base_product.type,
}
)
return data
except Exception:
return {}
def get_script_env(compose_path):
env = os.environ.copy()
env["COMPOSE_PATH"] = compose_path
for key, value in get_compose_data(compose_path).items():
if isinstance(value, bool):
env[key.upper()] = "YES" if value else ""
else:
env[key.upper()] = str(value) if value else ""
return env
def run_scripts(prefix, compose_dir, scripts):
env = get_script_env(compose_dir)
for idx, script in enumerate(scripts.strip().splitlines()):
command = script.strip()
logfile = os.path.join(compose_dir, "logs", "%s%s.log" % (prefix, idx))
log.debug("Running command: %r", command)
log.debug("See output in %s", logfile)
shortcuts.run(command, env=env, logfile=logfile)
def try_translate_path(parts, path):
translation = []
for part in parts.values():
conf = pungi.util.load_config(part.config)
translation.extend(conf.get("translate_paths", []))
return pungi.util.translate_path_raw(translation, path)
def send_notification(compose_dir, command, parts):
if not command:
return
from pungi.notifier import PungiNotifier
data = get_compose_data(compose_dir)
data["location"] = try_translate_path(parts, compose_dir)
notifier = PungiNotifier([command])
with open(os.path.join(compose_dir, "STATUS")) as f:
status = f.read().strip()
notifier.send("status-change", workdir=compose_dir, status=status, **data)
def setup_progress_monitor(global_config, parts):
"""Update configuration so that each part send notifications about its
progress to the orchestrator.
There is a file to which the notification is written. The orchestrator is
reading it and mapping the entries to particular parts. The path to this
file is stored in an environment variable.
"""
tmp_file = tempfile.NamedTemporaryFile(prefix="pungi-progress-monitor_")
os.environ["_PUNGI_ORCHESTRATOR_PROGRESS_MONITOR"] = tmp_file.name
atexit.register(os.remove, tmp_file.name)
global_config.extra_args.append(
"--notification-script=pungi-notification-report-progress"
)
def reader():
while True:
line = tmp_file.readline()
if not line:
time.sleep(0.1)
continue
path, msg = line.split(":", 1)
for part in parts:
if parts[part].path == os.path.dirname(path):
log.debug("%s: %s", part, msg.strip())
break
monitor = threading.Thread(target=reader)
monitor.daemon = True
monitor.start()
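The lines the reader thread consumes have the shape "<compose path>:<message>"; a hypothetical sample and the split it performs:

line = "/srv/compose/parts/server/compose:Phase GATHER started\n"
path, msg = line.split(":", 1)
# `path` maps the message back to the part whose compose lives there;
# `msg` is logged verbatim after stripping.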
def run(work_dir, main_config_file, args):
config_dir = os.path.join(work_dir, "config")
shutil.copytree(os.path.dirname(main_config_file), config_dir)
# Read main config
parser = configparser.RawConfigParser(
defaults={
"kerberos": "false",
"pre_compose_script": "",
"post_compose_script": "",
"notification_script": "",
}
)
parser.read(main_config_file)
# Create kerberos ticket
run_kinit(parser)
compose_info = dict(parser.items("general"))
compose_type = parser.get("general", "compose_type")
target_dir = prepare_compose_dir(parser, args, main_config_file, compose_info)
kobo.log.add_file_logger(log, os.path.join(target_dir, "logs", "orchestrator.log"))
log.info("Composing %s", target_dir)
run_scripts("pre_compose_", target_dir, parser.get("general", "pre_compose_script"))
old_compose = find_old_compose(
os.path.dirname(target_dir),
compose_info["release_short"],
compose_info["release_version"],
"",
)
if old_compose:
log.info("Reusing old compose %s", old_compose)
global_config = Config(
target=target_dir,
compose_type=compose_type,
label=args.label,
old_compose=old_compose,
config_dir=os.path.dirname(main_config_file),
event=args.koji_event,
extra_args=_safe_get_list(parser, "general", "extra_args"),
)
if not global_config.event and parser.has_option("general", "koji_profile"):
koji_wrapper = KojiWrapper(parser.get("general", "koji_profile"))
event_file = os.path.join(global_config.target, "work/global/koji-event")
result = get_koji_event_raw(koji_wrapper, None, event_file)
global_config = global_config._replace(event=result["id"])
parts = {}
for section in parser.sections():
if section == "general":
continue
parts[section] = ComposePart.from_config(parser, section, config_dir)
if hasattr(args, "part"):
setup_for_restart(global_config, parts, args.part)
setup_progress_monitor(global_config, parts)
send_notification(target_dir, parser.get("general", "notification_script"), parts)
retcode = run_all(global_config, parts)
if retcode:
# Only run the script if we are not doomed.
run_scripts(
"post_compose_", target_dir, parser.get("general", "post_compose_script")
)
send_notification(target_dir, parser.get("general", "notification_script"), parts)
return retcode
def parse_args(argv):
parser = argparse.ArgumentParser()
parser.add_argument("--debug", action="store_true")
parser.add_argument("--koji-event", metavar="ID", type=parse_koji_event)
subparsers = parser.add_subparsers()
start = subparsers.add_parser("start")
start.add_argument("config", metavar="CONFIG")
start.add_argument("--label")
restart = subparsers.add_parser("restart")
restart.add_argument("config", metavar="CONFIG")
restart.add_argument("compose_path", metavar="COMPOSE_PATH")
restart.add_argument(
"part", metavar="PART", nargs="*", help="which parts to restart"
)
restart.add_argument("--label")
return parser.parse_args(argv)
def main(argv=None):
args = parse_args(argv)
setup_logging(args.debug)
main_config_file = os.path.abspath(args.config)
with temp_dir() as work_dir:
try:
if not run(work_dir, main_config_file, args):
sys.exit(1)
except Exception:
log.exception("Unhandled exception!")
sys.exit(1)
@@ -15,8 +15,8 @@
from kobo import shortcuts
import os
import productmd
+import shlex
import tempfile
-from six.moves import shlex_quote

from pungi import util
from pungi.phases.buildinstall import tweak_configs
@@ -24,8 +24,8 @@ from pungi.wrappers import iso

def sh(log, cmd, *args, **kwargs):
-    log.info("Running: %s", " ".join(shlex_quote(x) for x in cmd))
+    log.info("Running: %s", " ".join(shlex.quote(x) for x in cmd))
-    ret, out = shortcuts.run(cmd, *args, universal_newlines=True, **kwargs)
+    ret, out = shortcuts.run(cmd, *args, text=True, errors="replace", **kwargs)
    if out:
        log.debug("%s", out)
    return ret, out
@@ -35,7 +35,8 @@ def get_lorax_dir(default="/usr/share/lorax"):
    try:
        _, out = shortcuts.run(
            ["python3", "-c" "import pylorax; print(pylorax.find_templates())"],
-            universal_newlines=True,
+            text=True,
+            errors="replace",
        )
        return out.strip()
    except Exception:
@@ -148,6 +148,15 @@ class UnifiedISO(object):
                new_path = os.path.join(self.temp_dir, "trees", arch, old_relpath)
                makedirs(os.path.dirname(new_path))

+                # Resolve symlinks to external files. Symlinks within the
+                # provided `dir` are kept.
+                if os.path.islink(old_path):
+                    real_path = os.readlink(old_path)
+                    abspath = os.path.normpath(
+                        os.path.join(os.path.dirname(old_path), real_path)
+                    )
+                    if not abspath.startswith(dir):
+                        old_path = real_path

                try:
                    self.linker.link(old_path, new_path)
                except OSError as exc:
@@ -385,7 +394,8 @@ class UnifiedISO(object):
            iso.get_mkisofs_cmd(
                iso_path, [source_dir], volid=volid, exclude=["./lost+found"]
            ),
-            universal_newlines=True,
+            text=True,
+            errors="replace",
        )
        # implant MD5
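
Restated as a standalone helper, the symlink rule introduced above might read as follows. A sketch only: `tree_root` plays the role of `dir` in the loop, and the external target is normalized to an absolute path here.

import os

def resolve_external_symlink(old_path, tree_root):
    # Links pointing outside tree_root are replaced by their target;
    # links within the tree are kept as symlinks.
    if os.path.islink(old_path):
        target = os.readlink(old_path)
        abspath = os.path.normpath(os.path.join(os.path.dirname(old_path), target))
        if not abspath.startswith(tree_root):
            return abspath
    return old_path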
@@ -1,7 +1,6 @@
# Some packages must be installed via dnf/yum first, see doc/contributing.rst
-dict.sorted
dogpile.cache
-funcsigs
+flufl.lock
jsonschema
kobo
koji
@@ -12,4 +11,3 @@ ordered_set
productmd
pykickstart
python-multilib
-urlgrabber
setup.cfg (new file)
@@ -0,0 +1,2 @@
[sdist]
formats=bztar
@@ -5,14 +5,9 @@
import os
import glob

-import distutils.command.sdist
from setuptools import setup

-# override default tarball format with bzip2
-distutils.command.sdist.sdist.default_format = {"posix": "bztar"}

# recursively scan for python modules to be included
package_root_dirs = ["pungi", "pungi_utils"]
packages = set()
@@ -25,7 +20,7 @@ packages = sorted(packages)

setup(
    name="pungi",
-    version="4.3.7",
+    version="4.10.1",
    description="Distribution compose tool",
    url="https://pagure.io/pungi",
    author="Dennis Gilmore",
@@ -35,17 +30,17 @@ setup(
    entry_points={
        "console_scripts": [
            "comps_filter = pungi.scripts.comps_filter:main",
-            "pungi = pungi.scripts.pungi:main",
            "pungi-create-unified-isos = pungi.scripts.create_unified_isos:main",
-            "pungi-fedmsg-notification = pungi.scripts.fedmsg_notification:main",
            "pungi-patch-iso = pungi.scripts.patch_iso:cli_main",
            "pungi-make-ostree = pungi.ostree:main",
            "pungi-notification-report-progress = pungi.scripts.report_progress:main",
-            "pungi-orchestrate = pungi_utils.orchestrator:main",
            "pungi-wait-for-signed-ostree-handler = pungi.scripts.wait_for_signed_ostree_handler:main",  # noqa: E501
            "pungi-koji = pungi.scripts.pungi_koji:cli_main",
            "pungi-gather = pungi.scripts.pungi_gather:cli_main",
            "pungi-config-dump = pungi.scripts.config_dump:cli_main",
            "pungi-config-validate = pungi.scripts.config_validate:cli_main",
+            "pungi-cache-cleanup = pungi.scripts.cache_cleanup:main",
            "pungi-gather-modules = pungi.scripts.gather_modules:cli_main",
            "pungi-gather-rpms = pungi.scripts.gather_rpms:cli_main",
            "pungi-generate-packages-json = pungi.scripts.create_packages_json:cli_main",  # noqa: E501
@@ -54,20 +49,19 @@ setup(
    },
    scripts=["contrib/yum-dnf-compare/pungi-compare-depsolving"],
    data_files=[
-        ("/usr/share/pungi", glob.glob("share/*.xsl")),
+        ("lib/tmpfiles.d", glob.glob("contrib/tmpfiles.d/*.conf")),
-        ("/usr/share/pungi", glob.glob("share/*.ks")),
+        ("share/pungi", glob.glob("share/*.xsl")),
-        ("/usr/share/pungi", glob.glob("share/*.dtd")),
+        ("share/pungi", glob.glob("share/*.ks")),
-        ("/usr/share/pungi/multilib", glob.glob("share/multilib/*")),
+        ("share/pungi", glob.glob("share/*.dtd")),
+        ("share/pungi/multilib", glob.glob("share/multilib/*")),
    ],
    test_suite="tests",
    install_requires=[
        "jsonschema",
        "kobo",
        "lxml",
-        "productmd>=1.23",
+        "productmd>=1.45",
-        "six",
        "dogpile.cache",
    ],
-    extras_require={':python_version=="2.7"': ["enum34", "lockfile"]},
+    tests_require=["pytest", "pytest-cov", "pyfakefs"],
-    tests_require=["mock", "pytest", "pytest-cov", "pyfakefs"],
)
sources (new file)
@@ -0,0 +1 @@
SHA512 (pungi-4.10.1.tar.bz2) = 4ff1005ece77ac9b41ac31c3b0bcdd558afaaea4d99bf178d42b24a4318ccc9a5576ad4740446f1589a07f88f59f5cb4954d182f3f4e15b1a798e19d9a54fb22
@@ -1,5 +1,3 @@
-mock
parameterized
pytest
pytest-cov
-unittest2
@@ -1,4 +1,4 @@
-FROM fedora:33
+FROM registry.fedoraproject.org/fedora:latest
LABEL \
    name="Pungi test" \
    description="Run tests using tox with Python 3" \
@@ -6,6 +6,7 @@ LABEL \
    license="MIT"

RUN dnf -y update && dnf -y install \
+    --setopt=install_weak_deps=false \
    findutils \
    libmodulemd \
    git \
@@ -15,6 +16,7 @@ RUN dnf -y update && dnf -y install \
    python3-gobject-base \
    python3-tox \
    python3-urlgrabber \
+    python3-dnf \
    && dnf clean all

WORKDIR /src
@ -1,27 +0,0 @@
FROM centos:7
LABEL \
name="Pungi test" \
description="Run tests using tox with Python 2" \
vendor="Pungi developers" \
license="MIT"
RUN yum -y update && yum -y install epel-release && yum -y install \
git \
libmodulemd2 \
make \
python3 \
python-createrepo_c \
python-gobject-base \
python-gssapi \
python-libcomps \
pykickstart \
&& yum clean all
# python-tox in yum repo is too old, let's install latest version
RUN pip3 install tox
WORKDIR /src
COPY . .
CMD ["tox", "-e", "py27"]
tests/Jenkinsfile
@@ -1,5 +1,3 @@
-def DUFFY_SESSION_ID
pipeline {
    agent {
        label 'cico-workspace'
@@ -17,6 +15,7 @@ pipeline {
                if (params.REPO == "" || params.BRANCH == "") {
                    error "Please supply both params (REPO and BRANCH)"
                }
+                def DUFFY_SESSION_ID
                try {
                    echo "Requesting duffy node ..."
                    def session_str = sh returnStdout: true, script: "set +x; duffy client --url https://duffy.ci.centos.org/api/v1 --auth-name fedora-infra --auth-key $CICO_API_KEY request-session pool=virt-ec2-t2-centos-9s-x86_64,quantity=1"
@@ -40,7 +39,6 @@ git fetch proposed
git checkout origin/master
git merge --no-ff "proposed/$params.BRANCH" -m "Merge PR"
podman run --rm -v .:/src:Z quay.io/exd-guild-compose/pungi-test tox -r -e flake8,black,py3,bandit
-podman run --rm -v .:/src:Z quay.io/exd-guild-compose/pungi-test-py2 tox -r -e py27
"""
sh "cat job.sh"
sh "ssh -o StrictHostKeyChecking=no root@$hostname mkdir $remote_dir"
@@ -108,6 +108,7 @@
        <groupid>core</groupid>
      </grouplist>
      <optionlist>
+       <groupid arch="x86_64">standard</groupid>
      </optionlist>
    </environment>
@@ -35,6 +35,11 @@ for spec in $DIR/*.spec; do
    if [ "$(basename $spec)" == "dummy-skype.spec" ]; then
        continue
    fi
+   if [ "$(basename $spec)" == "dummy-fcoe-target-utils.spec" ]; then
+       if [ "$target" == "ppc" -o "$target" == "s390" -o "$target" == "s390x" ]; then
+           continue
+       fi
+   fi
    echo "Building ${spec/.spec/} for $target"
    rpmbuild --quiet --target=$target -ba --nodeps --define "_srcrpmdir $DIR/../repo/src" --define "_rpmdir $DIR/../repo" $spec
done
@@ -2,18 +2,14 @@
import difflib
import errno
+import hashlib
import os
import shutil
import tempfile
from collections import defaultdict
from unittest import mock

-import six
from kobo.rpmlib import parse_nvr

-try:
-    import unittest2 as unittest
-except ImportError:
import unittest

from pungi.util import get_arch_variant_data
@@ -21,6 +17,15 @@ from pungi import paths, checks
from pungi.module_util import Modulemd

+GIT_WITH_CREDS = [
+    "git",
+    "-c",
+    "credential.useHttpPath=true",
+    "-c",
+    "credential.helper=!ch",
+]

class BaseTestCase(unittest.TestCase):
    def assertFilesEqual(self, fn1, fn2):
        with open(fn1, "rb") as f1:
@@ -158,6 +163,20 @@ class IterableMock(mock.Mock):
        return iter([])

+class FSKojiDownloader(object):
+    """Mock for KojiDownloadProxy that checks provided path."""

+    def get_file(self, path, validator=None):
+        return path if os.path.isfile(path) else None

+class DummyKojiDownloader(object):
+    """Mock for KojiDownloadProxy that always finds the file in original location."""

+    def get_file(self, path, validator=None):
+        return path

class DummyCompose(object):
    def __init__(self, topdir, config):
        self.supported = True
@@ -232,6 +251,8 @@ class DummyCompose(object):
        self.cache_region = None
        self.containers_metadata = {}
        self.load_old_compose_config = mock.Mock(return_value=None)
+        self.koji_downloader = DummyKojiDownloader()
+        self.koji_downloader.path_prefix = "/prefix"

    def setup_optional(self):
        self.all_variants["Server-optional"] = MockVariant(
@@ -272,7 +293,7 @@ class DummyCompose(object):
        return tempfile.mkdtemp(suffix=suffix, prefix=prefix, dir=self.topdir)

-def touch(path, content=None):
+def touch(path, content=None, mode=None):
    """Helper utility that creates an dummy file in given location. Directories
    will be created."""
    content = content or (path + "\n")
@@ -280,10 +301,12 @@ def touch(path, content=None):
        os.makedirs(os.path.dirname(path))
    except OSError:
        pass
-    if not isinstance(content, six.binary_type):
+    if not isinstance(content, bytes):
        content = content.encode()
    with open(path, "wb") as f:
        f.write(content)
+    if mode:
+        os.chmod(path, mode)
    return path
@@ -334,3 +357,9 @@ def fake_run_in_threads(func, params, threads=None):
    """Like run_in_threads from Kobo, but actually runs tasks serially."""
    for num, param in enumerate(params):
        func(None, param, num)

+def hash_string(alg, s):
+    m = hashlib.new(alg)
+    m.update(s.encode("utf-8"))
+    return m.hexdigest()
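
Usage sketch for the refreshed test helpers; paths and values are hypothetical.

import hashlib

p = touch("/tmp/demo/hook.sh", "#!/bin/sh\n", mode=0o755)  # mode is new
assert hash_string("sha256", "hello") == hashlib.sha256(b"hello").hexdigest()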
@@ -1,25 +1,17 @@
from unittest import mock
+import io

-try:
-    import unittest2 as unittest
-except ImportError:
import unittest

-import six
from pungi.scripts.pungi_koji import cli_main

class PungiKojiTestCase(unittest.TestCase):
    @mock.patch("sys.argv", new=["prog", "--version"])
-    @mock.patch("sys.stderr", new_callable=six.StringIO)
+    @mock.patch("sys.stderr", new_callable=io.StringIO)
-    @mock.patch("sys.stdout", new_callable=six.StringIO)
+    @mock.patch("sys.stdout", new_callable=io.StringIO)
    @mock.patch("pungi.scripts.pungi_koji.get_full_version", return_value="a-b-c.111")
    def test_version(self, get_full_version, stdout, stderr):
        with self.assertRaises(SystemExit) as cm:
            cli_main()
        self.assertEqual(cm.exception.code, 0)
-        # Python 2.7 prints the version to stderr, 3.4+ to stdout.
-        if six.PY3:
        self.assertMultiLineEqual(stdout.getvalue(), "a-b-c.111\n")
-        else:
-            self.assertMultiLineEqual(stderr.getvalue(), "a-b-c.111\n")

Some files were not shown because too many files have changed in this diff.