Compare commits

...

298 Commits

Author SHA1 Message Date
e17a6d7f42
- Changelog date order
- Typo
2024-10-09 12:48:48 +03:00
5152dfa764
- Add x86_64_v2 to the list of exclusive arches if there is any arch with base x86_64
- Changelog
- Bumped version
2024-09-27 15:43:27 +03:00
b61614969d - Add x86_64_v2 to arch list if x86_64 in list 2024-09-16 14:59:03 +03:00
38cc2f79a0
- Unittests are fixed 2024-09-08 12:01:32 +03:00
d8b7f9210e
- Typo 2024-09-08 11:47:45 +03:00
69ec4df8f0
- Release is fixed 2024-09-06 22:30:35 +03:00
20841cfd4c
- Changelog
- Release is bumped
2024-09-06 22:29:55 +03:00
cb53de3c46
- Truncate a volume ID to 32 bytes
- Add new architecture `x86_64_v2`
2024-09-06 22:28:38 +03:00
72635cf5c1
- Release is bumped 2024-09-06 15:06:55 +03:00
9ce519426d
- Typo 2024-09-06 15:06:35 +03:00
208c71c194
- Typo 2024-09-05 17:36:42 +03:00
71c4e3c178
- Use xorriso as a recommended package and genisoimage as required for RHEL8/9, and vice versa for RHEL10 2024-09-05 17:28:11 +03:00
1308986569
- New release of AL version of Pungi 2024-08-30 13:42:27 +03:00
Lubomír Sedlář
e05a11f99a
Release 4.7.0
JIRA: RHELCMP-13991
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit a8dbd77f7f)
2024-08-30 13:40:54 +03:00
Lubomír Sedlář
cb9dede604
kiwibuild: Add support for type, type attr and bundle format
This is very basic support. Whatever users specify in the new option
will be passed to the koji task.

Related: https://bugzilla.redhat.com/show_bug.cgi?id=2270197
Related: https://pagure.io/koji/pull-request/4157
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit e43cf68f08)
2024-08-30 13:40:50 +03:00
Lubomír Sedlář
ce2c222dc2
createiso: Block reuse if unsigned packages are allowed
We can have a compose with unsigned packages.

By the time the next compose is generated, the packages could have been
signed. However, the new compose would still reuse the ISO with unsigned
copies.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit d546a49299)
2024-08-30 13:40:49 +03:00
Lubomír Sedlář
be4fd75a7a
Allow live_images phase to still be skipped
Without this fix existing configurations break even though they don't
use the phase.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit c59f2371a3)
2024-08-30 13:40:48 +03:00
Lubomír Sedlář
33bb0ceceb
createiso: Recompute .treeinfo checksums for images
Running xorriso to modify an ISO image can update the content of included
images such as images/eltorito.img, unless we explicitly update the
image, which is undesirable (https://pagure.io/pungi/issue/1647).

However, when the file is changed, the checksum changes and .treeinfo no
longer matches.

This patch implements a workaround: once the DVD is written, it looks
for incorrect checksums, recalculates them and updates the .treeinfo on
the DVD. Since only the checksum is changing and the size of the file
remains the same, this seems to help fix the issue.

An additional step for implanting MD5 is needed again, as that gets
erased by the workaround.

JIRA: RHELCMP-13664
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit 3b2c6ae72a)
2024-08-30 13:40:47 +03:00
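A minimal Python sketch of the workaround described in the commit above, assuming an INI-style `.treeinfo` with a `[checksums]` section of `path = algorithm:hexdigest` entries; the function name and structure are hypothetical, not Pungi's actual code:

```
import configparser
import hashlib
import os

def fix_treeinfo_checksums(tree_dir):
    # Recompute any checksum in .treeinfo that no longer matches the file
    # content, e.g. after xorriso touched images/eltorito.img.
    treeinfo_path = os.path.join(tree_dir, ".treeinfo")
    parser = configparser.ConfigParser()
    parser.optionxform = str  # keep file paths case-sensitive
    parser.read(treeinfo_path)
    for path, value in parser.items("checksums"):
        algorithm, _, expected = value.partition(":")
        digest = hashlib.new(algorithm)
        with open(os.path.join(tree_dir, path), "rb") as f:
            digest.update(f.read())
        if digest.hexdigest() != expected:
            parser.set("checksums", path, "%s:%s" % (algorithm, digest.hexdigest()))
    with open(treeinfo_path, "w") as f:
        parser.write(f)
```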
Lubomír Sedlář
aef48c0ab4
Drop support for signing rpm-wrapped artifacts
This was only usable in the live_images phase, which doesn't exist anymore,
and wasn't used much in the first place.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 0726a4dca7)
2024-08-30 13:40:15 +03:00
Adam Williamson
bd91ef1d10
Remove live_images.py (LiveImagesPhase)
This phase was used to create live images with livecd-creator
and 32-bit ARM images with appliance-creator. We also remove
get_create_image_cmd from the Koji wrapper as it was only used
for this phase, remove associated tests, and remove related
configuration settings and documentation.

Fixes: https://pagure.io/pungi/issue/1753
Merges: https://pagure.io/pungi/pull-request/1774
Signed-off-by: Adam Williamson <awilliam@redhat.com>

(cherry picked from commit 531f0ef389)
2024-08-30 13:40:14 +03:00
Lubomír Sedlář
32d5d32a6e
Clean up requirements
* dict.sorted and funcsigs are not used anywhere anymore
* urlgrabber is used only in the yum based gather.py module, and thus
  only needed on Python 2
* py3 doesn't need to reinstall mock as that is part of stdlib now

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit c96b5358ba)
2024-08-30 13:40:02 +03:00
Haibo Lin
5bcb3f5ac1
Release 4.6.3
JIRA: RHELCMP-13724

Signed-off-by: Haibo Lin <hlin@redhat.com>

(cherry picked from commit 0cb18bfa24)
2024-08-30 13:39:59 +03:00
Lubomír Sedlář
78bfbef206
Fix formatting of long line
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit f72adc03b1)
2024-08-30 13:39:51 +03:00
Lubomír Sedlář
88b6d8ebf5
unified-isos: Resolve symlinks
If the compose is configured to use symlinks for packages, the unified
ISO would include the symlinks, which is useless.

Instead, let's check and replace any symlinks pointing outside of the
compose with the actual file.

JIRA: RHELCMP-13802
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 8ced384540)
2024-08-30 13:39:50 +03:00
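A hypothetical sketch of the symlink replacement described above, assuming a plain filesystem walk; all names are made up:

```
import os
import shutil

def replace_external_symlinks(compose_dir):
    # Replace any symlink whose target lies outside the compose with a
    # real copy of the target file.
    compose_root = os.path.realpath(compose_dir) + os.sep
    for root, _dirs, files in os.walk(compose_dir):
        for name in files:
            path = os.path.join(root, name)
            if not os.path.islink(path):
                continue
            target = os.path.realpath(path)
            if not target.startswith(compose_root):
                os.unlink(path)
                shutil.copy2(target, path)
```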
Lubomír Sedlář
6223baa2ba
gather: Skip lookaside packages from local lookaside repo
When variant X depends on variant A, Pungi creates a temporary local
lookaside with packages from A. If there's an external lookaside
configured, the list of packages for variant A can contain URLs to the
external repo.

Newer versions of createrepo fail when pkglist specifies an unreachable
package, and it does not download anything.

JIRA: RHELCMP-13648
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 4a5106375e)
2024-08-30 13:39:49 +03:00
Haibo Lin
9d6226b436
pkgset: Avoid adding modules to unavailable arches
If a module is not built for specific arches, pungi will skip adding it
to these arches in the pkgset phase.

JIRA: RHELCMP-13625
Signed-off-by: Haibo Lin <hlin@redhat.com>
(cherry picked from commit 627b72597e)
2024-08-30 13:39:48 +03:00
Lubomír Sedlář
927a0d35ab
iso: Extract volume id with xorriso if available
Pungi can use either genisoimage or xorriso to create ISOs.

It also needed the isoinfo utility for querying the volume ID from the ISO
image. However, that utility is part of the genisoimage suite of tools.

On systems that no longer provide genisoimage, the image would be
successfully generated with xorriso, but then pungi would fail to extract
the volume ID, leading to metadata with missing values.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit bc0334cc09)
2024-08-30 13:39:47 +03:00
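A sketch of the fallback described above; the exact output labels printed by isoinfo and xorriso's `-pvd_info` are assumptions here, and the helper name is hypothetical:

```
import shutil
import subprocess

def get_volume_id(iso_path):
    # Prefer isoinfo when available, fall back to xorriso otherwise.
    if shutil.which("isoinfo"):
        cmd = ["isoinfo", "-d", "-i", iso_path]
        label = "Volume id:"
    else:
        cmd = ["xorriso", "-indev", iso_path, "-pvd_info"]
        label = "Volume Id"
    out = subprocess.check_output(cmd, universal_newlines=True)
    for line in out.splitlines():
        if line.strip().startswith(label):
            return line.split(":", 1)[1].strip()
    raise RuntimeError("Could not read volume ID from %s" % iso_path)
```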
Adam Williamson
d81ee0f553
De-duplicate log messages for ostree and ostree_container phases
The ostree and ostree_container phases both log messages in the
exact same form, which is rather confusing. This will make it
much clearer which message comes from which phase.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
(cherry picked from commit 5c9e79f535)
2024-08-30 13:39:46 +03:00
Lubomír Sedlář
e601345a38
Handle tracebacks as str or bytes
Kobo 0.36.0 changed how tracebacks are handled. Instead of `bytes`, it
returns a `str`. That makes pungi fail to write it into a file opened as
binary.

Relates: https://github.com/release-engineering/kobo/pull/246
Fixes: https://pagure.io/pungi/issue/1756
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 29c166ab99)
2024-08-30 13:39:45 +03:00
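The fix amounts to normalizing the value before writing; a minimal sketch (hypothetical helper, not the actual patch):

```
def write_traceback(path, tb):
    # kobo < 0.36.0 returns bytes, newer versions return str; the log
    # file is opened in binary mode either way.
    if isinstance(tb, str):
        tb = tb.encode("utf-8")
    with open(path, "wb") as f:
        f.write(tb)
```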
Adam Williamson
1fe075e7e4
ostree/container: add missing --version arg
https://pagure.io/pungi/pull-request/1726 tries to use
`self.args.version`, but the `pungi-make-ostree container`
subcommand does not actually have a `--version` arg, so that is
not going to work. This adds the required arg.

We *could* make it optional by still setting an empty update
dict if it's not specified, I guess, but not sure if that's worth
the effort.

Fixes: https://pagure.io/pungi/issue/1751

Signed-off-by: Adam Williamson <awilliam@redhat.com>
(cherry picked from commit 51d58322f2)
2024-08-30 13:39:44 +03:00
Lubomír Sedlář
a8fc1b183b
Block pkgset reuse on module defaults change
JIRA: RHELCMP-13463
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 0ef1c102b8)
2024-08-30 13:39:43 +03:00
Adam Williamson
8f171b81a1
Include task ID in DONE message for OSBS phase
Again, composetracker expects the message in this format.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
(cherry picked from commit b6cfd8c5d4)
2024-08-30 13:39:41 +03:00
Adam Williamson
ee8a56e64d
Various phases: consistent format of failure message
composetracker expects the failure message to be in a specific
form, but some phases weren't using it. They were phrasing it
slightly differently, which throws off composetracker's parsing.
We could extend composetracker to handle both forms, but it seems
simpler to just make all the phases use a consistent form.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
(cherry picked from commit 9f8377abab)
2024-08-30 13:39:40 +03:00
Lubomír Sedlář
2bf6c216bc
Update tests to exercise kiwi specific metadata
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 949add0dac)
2024-08-30 13:39:39 +03:00
Adam Williamson
99a6dfe8ad
Kiwi: translate virtualbox and azure productmd formats
As discussed in
https://pagure.io/releng/failed-composes/issue/6047#comment-899622
the list of 'acceptable' types and formats (in productmd terms)
is locked down in productmd, we cannot just 'declare' new formats
in pungi as we kinda wound up doing by adding these Kiwi
extensions to the EXTENSIONS dict in image_build phase. So
instead, let's return the image_build phase to the way it was,
and add an additional layer of handling in kiwibuild phase for
these awkward cases, which 'translates' the file suffix to a
format productmd knows about already. This is actually how we
would rather behave anyway, because a Kiwi-produced
`vagrant.libvirt.box` file really is the same kind of thing as an
ImageFactory-produced `vagrant-libvirt.box` file; we want them to
have compatible metadata, we don't want them to look like
different things.

Merges: https://pagure.io/pungi/pull-request/1740
Signed-off-by: Adam Williamson <awilliam@redhat.com>
(cherry picked from commit 8fb694f000)
2024-08-30 13:39:37 +03:00
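A sketch of such a translation layer; only the `vagrant.libvirt.box` case comes from the commit message, and the table entries and names are illustrative guesses:

```
# Kiwi file suffix -> (productmd type, productmd format)
KIWI_SUFFIX_TRANSLATIONS = {
    "vagrant.libvirt.box": ("vagrant-libvirt", "vagrant-libvirt.box"),
    "vagrant.virtualbox.box": ("vagrant-virtualbox", "vagrant-virtualbox.box"),
}

def translate_kiwi_filename(filename):
    # Map a Kiwi-produced file name onto a type/format productmd
    # already knows about.
    for suffix, type_format in KIWI_SUFFIX_TRANSLATIONS.items():
        if filename.endswith(suffix):
            return type_format
    return None
```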
Lubomír Sedlář
c63f9f41b6
kiwibuild: Add tests for the basic functionality
Merges: https://pagure.io/pungi/pull-request/1739
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 8a3b64e5b8)
2024-08-30 13:39:36 +03:00
Lubomír Sedlář
ab1960de6d
kiwibuild: Remove repos as dicts
The task needs just URLs; the dicts don't bring anything here.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit c80ebb029b)
2024-08-30 13:39:35 +03:00
Lubomír Sedlář
c17b820490
Fix additional image metadata
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit e2ceb48450)
2024-08-30 13:39:33 +03:00
Lubomír Sedlář
36133b71da
Drop kiwibuild_version option
Version in kiwibuild is embedded in the definition file. The option
makes no sense.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 242d7d951f)
2024-08-30 13:39:32 +03:00
Lubomír Sedlář
50b217145c
Update docs with kiwibuild options
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 04d4e1d585)
2024-08-30 13:39:31 +03:00
Adam Williamson
57f2b428d5
kiwibuild: allow setting description scm and path at phase level
Neal wanted this to work - he tried using global_description_scm
and global_description_path in the initial PR - but it wasn't
wired up to work. This should make it possible to set
`kiwibuild_description_scm` and `kiwibuild_description_path`.
It also technically lets you set `global_` for both, since the
`get_config` implementation is very generic, but it doesn't add
it to the checks, so you'd still get an "unrecognized config
option" warning, I think. It seems appropriate to encourage
setting this as a phase-level option rather than a global one
since it seems quite specific to the kiwibuild phase.

Merges: https://pagure.io/pungi/pull-request/1737
Signed-off-by: Adam Williamson <awilliam@redhat.com>
(cherry picked from commit e90ffdfd93)
2024-08-30 13:39:29 +03:00
Lubomír Sedlář
3cdc8d0ba7
Use latest Fedora for python 3 test environment
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 0d310fb3b3)
2024-08-30 13:39:28 +03:00
Lubomír Sedlář
07829f2229
Install unittest2 only on python 2
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 5172d7e5eb)
2024-08-30 13:39:26 +03:00
Adam Williamson
bdf06ea038
Fix 'failable' handling for kiwibuild phase
The mechanisms here are a bit subtle and the kiwibuild phase
didn't quite get them right. The arg passed to `util.failable`
is supposed to be a boolean, but kiwibuild was passing it the
list of failable arches (which will always evaluate True).

How this is meant to work is that we only make *the Koji task
as a whole* failable (by passing `True` to `util.failable`) if
*all* the arches in it are failable. If *any* arch in the task
is not failable, the task should not be failable.

We allow a subset of arches to fail by passing the Koji task a
list of `optional_arches`, later. If an arch is 'optional', that
arch failing won't cause the Koji task itself to be considered
failed.

This commit fixes the logic (I hope), renames all the variables
and adds a couple of comments to make it clearer what's going on,
and does a bit of making the code simpler.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
(cherry picked from commit 0d306d4964)
2024-08-30 13:39:25 +03:00
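The corrected logic boils down to something like this sketch (names are hypothetical):

```
def task_failability(arches, failable_arches):
    # The Koji task as a whole may fail only if *every* arch is failable;
    # individually failable arches are passed as optional_arches instead.
    can_fail = set(arches) <= set(failable_arches)
    optional_arches = [a for a in arches if a in failable_arches]
    return can_fail, optional_arches
```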
Jeremy Cline
bcab3431e1
image_build: Accept Kiwi extension for Azure VHD images
Kiwi builds for Azure fixed VHD images are suffixed with "vhdfixed"
instead of plain "vhd". Add that to the list of suffixes.

Signed-off-by: Jeremy Cline <jeremycline@microsoft.com>
(cherry picked from commit 1494f203ce)
2024-08-30 13:39:24 +03:00
Adam Williamson
b181b08033
image_build: accept Kiwi vagrant image name format
According to Neal, Vagrant images produced by Kiwi end in e.g.
`vagrant.libvirt.box` and `vagrant.virtualbox.box` - with a
period between `vagrant` and the image type, not a dash as with
oz. We should accept this slightly different format so we can
correctly derive the productmd `type` and `format` for these.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
(cherry picked from commit 93b4b4ae0f)
2024-08-30 13:39:23 +03:00
Lubomír Sedlář
e05b1bcd78
Release 4.6.2
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit b8e26bfb64)
2024-08-30 13:39:22 +03:00
Tomáš Hozza
a97488721d
Phases/osbuild: support passing 'customizations' for image builds
The osbuild Koji plugin supports passing customizations for an image
build. This is also supported in the Koji CLI plugin. Some teams want to
pass image customizations for images built as part of Pungi composes.
Extend the osbuild phase to support passing customizations in the Pungi
configuration.

Merges: https://pagure.io/pungi/pull-request/1733
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
(cherry picked from commit e738f65458)
2024-08-30 13:39:16 +03:00
Lubomír Sedlář
4d858ef958
dnf: Load filelists for actual solver too
Not just in tests.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 209d308e1c)
2024-08-30 13:39:14 +03:00
Lubomír Sedlář
744b00499d
kiwibuild: Tell Koji which arches are allowed to fail
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit be410d9fd5)
2024-08-30 13:39:13 +03:00
Lubomír Sedlář
583547c6ee
kiwibuild: Update documentation with more details
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 1f819ee08a)
2024-08-30 13:39:11 +03:00
Lubomír Sedlář
f28053eecc
kiwibuild: Add kiwibuild global options
This is already supported by code, just missing in the schema.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit b9d94970b5)
2024-08-30 13:39:10 +03:00
Lubomír Sedlář
a196e9c895
kiwibuild: Process images same as image-build
Getting the images from the task is less hacky than matching on filenames.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit b032425f30)
2024-08-30 13:39:08 +03:00
Lubomír Sedlář
a6f6199910
kiwibuild: Add subvariant configuration
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit bcd937d16d)
2024-08-30 13:39:07 +03:00
Lubomír Sedlář
a3dcec5059
kiwibuild: Work around missing arch in build data
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit f0137fd9b9)
2024-08-30 13:39:05 +03:00
Haibo Lin
6aa674fbb3
Support KiwiBuild
Adding kiwibuild phase which is similar to osbuild.

Fixes: https://pagure.io/pungi/issue/1710
Merges: https://pagure.io/pungi/pull-request/1720
JIRA: RHELCMP-13348
Signed-off-by: Haibo Lin <hlin@redhat.com>
(cherry picked from commit 3d630d3e8e)
2024-08-30 13:39:04 +03:00
Timothée Ravier
05d9651eba
ostree/container: Set version in treefile 'automatic-version-prefix'
In the non container path, we're setting the version for the build using
the `--add-metadata-string=version=XYZ` argument passed to `rpm-ostree
compose tree ...`.

The `rpm-ostree compose image` path does not expose this option yet, so we
modify the treefile directly, as we already do to set the repos used for
the compose.

See: https://github.com/coreos/rpm-ostree/issues/4829
See: https://pagure.io/workstation-ostree-config/pull-request/472
Merges: https://pagure.io/pungi/pull-request/1726
Signed-off-by: Timothée Ravier <tim@siosm.fr>
(cherry picked from commit 8412890640)
2024-08-30 13:39:02 +03:00
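A minimal sketch of that treefile edit, assuming a JSON treefile (rpm-ostree also accepts YAML); the helper name is hypothetical:

```
import json

def set_treefile_version(treefile_path, version):
    # `rpm-ostree compose image` has no --add-metadata-string equivalent,
    # so embed the version via automatic-version-prefix instead.
    with open(treefile_path) as f:
        treefile = json.load(f)
    treefile["automatic-version-prefix"] = version
    with open(treefile_path, "w") as f:
        json.dump(treefile, f, indent=2)
```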
Lubomír Sedlář
75ab6a14b2
dnf: Explicitly load filelists
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2264414
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 42befba0b1)
2024-08-30 13:39:01 +03:00
Lubomír Sedlář
533ea641d8
Fix buildinstall reuse with pungi_buildinstall plugin
The keys may not exist anymore. If there's nothing to delete, it's fine.

JIRA: RHELCMP-13464
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 52c2cea0ef)
2024-08-30 13:38:59 +03:00
Lubomír Sedlář
185a53d56b
Fix filters for DNF query
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit d2e9ccefde)
2024-08-30 13:38:58 +03:00
Lubomír Sedlář
305deab9ed
gather-dnf: Support dotarch in filter_packages
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 2c61416423)
2024-08-30 13:38:57 +03:00
Lubomír Sedlář
6af11d5747
gather: Support dotarch notation for debuginfo packages
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 986099f8b5)
2024-08-30 13:38:56 +03:00
Lubomír Sedlář
58f96531c7
Correctly set input and fulltree_exclude flags for debuginfo
This only matters for composes that use the functionality for trimming
addon packages from parent variants.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 947ddf0a1a)
2024-08-30 13:38:55 +03:00
Lubomír Sedlář
e570aa7726
4.6.1 release
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit e46393263e)
2024-08-30 13:38:53 +03:00
Lubomír Sedlář
d8a553163f
Make python3-mock dependency optional
https://fedoraproject.org/wiki/Changes/RemovePythonMockUsage

Prefer using unittest.mock to a standalone package. The separate
packages should only really be needed on Python 2.7 these days.

The test requirements file is updated to only require mock on old
Python, and the dependency is removed from setup.py to avoid issues
there.

Relates: https://src.fedoraproject.org/rpms/pungi/pull-request/9

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit ff5a7e6377)
2024-08-30 13:38:43 +03:00
Lubomír Sedlář
a9839d8078
Make latest black happy
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit dd7ecbd5fd)
2024-08-30 13:31:29 +03:00
Lubomír Sedlář
dc05d1fbba
Update tox configuration
The whitelist_externals option has been renamed to allowlist_externals.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit ba613563f6)
2024-08-30 13:31:28 +03:00
Lubomír Sedlář
dc4e8b2fb7
Fix scm tests to not use user configuration
If you configure the default branch name for new repos to anything other
than master, there will be failures in tests. The test expects the branch
to be called master, but does not ensure it in any way.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit c8d16e6978)
2024-08-30 13:31:26 +03:00
Lubomír Sedlář
27d055992e
Add workaround for old requests in kojiwrapper
When running with requests<2.18 (i.e. on RHEL 7), streaming responses
are not a context manager and need to be wrapped in contextlib.closing.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 860360629d)
2024-08-30 13:31:25 +03:00
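The workaround amounts to wrapping the response conditionally; a sketch under the same assumption:

```
import contextlib
import requests

def stream_response(url):
    # On requests < 2.18 (RHEL 7) a streaming Response is not a context
    # manager, so wrap it in contextlib.closing.
    response = requests.get(url, stream=True)
    if hasattr(response, "__enter__"):
        return response
    return contextlib.closing(response)

# Usage:
# with stream_response(url) as r:
#     for chunk in r.iter_content(chunk_size=8192):
#         ...
```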
Lubomír Sedlář
34fcd550b6
Use pungi_buildinstall without NFS
The plugin supports two modes of operation:
1. Mount a shared storage volume into the runroot and have the output
   written there.
2. Have the plugin create a tar.gz with the outputs and upload them to
   the hub, from where they can be downloaded.

This patch switches from option 1 to option 2.

This requires all input repositories to be passed in as URLs and not
paths. Once the task finishes, Pungi will download the output archives
and unpack them into the expected locations.

JIRA: RHELCMP-13284
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit f25489d060)
2024-08-30 13:31:24 +03:00
Adam Williamson
4c0059e91b
checks: don't require "repo" in the "ostree" schema
Per @siosm in https://pagure.io/pungi-fedora/pull-request/1227
this option "is deprecated and not needed anymore", so Pungi
should not be requiring it.

Merges: https://pagure.io/pungi/pull-request/1714
Signed-off-by: Adam Williamson <awilliam@redhat.com>
(cherry picked from commit 432b0bce04)
2024-08-30 13:31:23 +03:00
Lubomír Sedlář
bb2e32132e
ostree_container: Use unique temporary directory
The config repository is cloned into a path that conflicts with the
regular ostree phase. Let's use a unique name to avoid that problem.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 7e779aa90f)
2024-08-30 13:31:22 +03:00
Lubomír Sedlář
dca3be5861
4.6.0 release
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit f4bf0739aa)
2024-08-30 13:31:20 +03:00
Lubomír Sedlář
38ec4ca159
Add ostree container to image metadata
This requires https://github.com/release-engineering/productmd/pull/172

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 119b212241)
2024-08-30 13:30:44 +03:00
Lubomír Sedlář
c589ccb56f
Updates for ostree-container phase
This patch connects the phase into the main script, and adds other
modifications:

* The archive is now stored in the images/ subdirectory in the compose.
* Documentation is updated to correctly mention that variant repos are
  not available.
* Configuration for path and name of the final archive is dropped. There
  are reasonable defaults for this and there's no point in having users
  configure it.
* The extra message for the archive is no longer sent.
* The pungi-make-ostree utility is no longer required in the buildroot.

The pungi-make-ostree utility doesn't do any significant work. It
modifies configuration files (which can happen on the compose host), and
it starts other processes.

This patch changes the ostree-container phase to no longer need the
script in the buildroot. Instead, the utility is called on the compose
host to do the config manipulation and output the needed commands. Those
are then passed into the runroot task.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 081c31238b)
2024-08-30 13:30:42 +03:00
Timothée Ravier
e413955849
Add ostree native container support
Add a new `ostree_container` stage to create ostree native container
images as OCI archives, using rpm-ostree compose image.

See: https://fedoraproject.org/wiki/Changes/OstreeNativeContainerStable
See: https://gitlab.com/CentOS/cloud/issue-tracker/-/issues/1

Fixes: https://pagure.io/pungi/issue/1698
Merges: https://pagure.io/pungi/pull-request/1699

Signed-off-by: Timothée Ravier <tim@siosm.fr>
(cherry picked from commit 95497d2676)
2024-08-30 13:30:41 +03:00
Adam Williamson
e70e1841c7
Improve autodetection of productmd image type for osbuild images
I don't love inferring the type from the filename like this -
it's kinda backwards - but it's an improvement on the current
logic (I don't think 'dvd' is ever currently the correct value
here, I don't think osbuild *can* currently build the type of
image that 'dvd' is meant to indicate). I can't immediately see
any better source of data here (we could use the 'name' or
'package_name' from 'build_info', but those are pretty much
just inputs to the filenames anyway).

Types that are possible in productmd but not covered here are
'cd' (never likely to be used again in Fedora at least, not sure
about RHEL), 'dvd-debuginfo' (again not used in Fedora, may be
used in RHEL), 'ec2', 'kvm' (not sure about those), 'netinst'
(this is a synonym for 'boot', we use 'boot' in practice in
Fedora metadata), 'p2v' and 'rescue' (not sure).

Signed-off-by: Adam Williamson <awilliam@redhat.com>
(cherry picked from commit aa7fcc1c20)
2024-08-30 13:30:40 +03:00
Lubomír Sedlář
fc86e03e44
pkgset: ignore events for modular content tags
Generally we want all packages to come from a particular event.

There are two exceptions: packages configured via `pkgset_koji_builds`
are pulled in by exact NVR and skip event; and modules in
`pkgset_koji_modules` are pulled in by NSVC and also ignore events.

However, the modular content tag did honor the event, and could lead to a
crashed compose if the content tag did not exist at the configured
event.

This patch is a slightly too big hammer. It ignores events for all
modules, not just ones configured by explicit NSVC. It's not a huge deal
as the content tags are created before the corresponding module build is
created, and once all rpm builds are tagged into the content tag, MBS
will never change it again.

JIRA: RHELCMP-12765
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit b32c8f3e5e)
2024-08-30 13:30:38 +03:00
Lubomír Sedlář
548441644b
pkgset: Ignore duplicated module builds
If the module tag contains the same module build multiple times (because
it's in multiple tags in the inheritance), Pungi will not process that
correctly and try to include the same NSVC in the compose multiple
times. That leads to a crash.

This patch adds another step to the inheritance filter to ensure the
result contains each module only once.

JIRA: RHELCMP-12768
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 935da7c246)
2024-08-30 13:30:36 +03:00
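A sketch of the deduplication step, assuming Koji module build dicts with the usual `extra.typeinfo.module` keys; the helper is hypothetical:

```
def deduplicate_module_builds(builds):
    # Keep each module build only once, keyed by NSVC, even if the tag
    # inheritance yields it multiple times.
    seen = set()
    result = []
    for build in builds:
        module = build["extra"]["typeinfo"]["module"]
        nsvc = (build["name"], module["stream"],
                module["version"], module["context"])
        if nsvc not in seen:
            seen.add(nsvc)
            result.append(build)
    return result
```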
Aditya Bisoi
ca369df0df
Drop buildinstall method
JIRA: RHELCMP-12388

Signed-off-by: Aditya Bisoi <abisoi@redhat.com>
(cherry picked from commit b513c8cd00)
2024-08-30 13:30:35 +03:00
Lingyan Zhuang
67ae4202c4
Add step to send UMB message
If reusing an old ISO finished, send out a UMB message.

Signed-off-by: Lingyan Zhuang <lzhuang@redhat.com>
(cherry picked from commit 8cf1d98312)
2024-08-30 13:30:33 +03:00
Timothée Ravier
aba5a7a093
Fix minor Ruff/flake8 warnings
```
pungi/checks.py:575:17: F601 [*] Dictionary key literal `"type"` repeated
pungi/phases/pkgset/pkgsets.py:617:12: E721 Do not compare types, use `isinstance()`
tests/test_pkgset_source_koji.py:241:16: E721 Do not compare types, use `isinstance()`
tests/test_pkgset_source_koji.py:244:16: E721 Do not compare types, use `isinstance()`
tests/test_pkgset_source_koji.py:370:16: E721 Do not compare types, use `isinstance()`
tests/test_pkgset_source_koji.py:374:20: E721 Do not compare types, use `isinstance()`
```

Signed-off-by: Timothée Ravier <tim@siosm.fr>
(cherry picked from commit 2534ddee99)
2024-08-30 13:30:32 +03:00
Simon de Vlieger
323d1c1eb6
osbuild: manifest type in config
Allow the manifest type used to be specified in the pungi configuration
instead of always selecting the manifest type based on the koji output.

Signed-off-by: Simon de Vlieger <cmdr@supakeen.com>
(cherry picked from commit f30a8b4d15)
2024-08-30 13:30:31 +03:00
Lubomír Sedlář
b0964ff555
4.5.1 release
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit 3ffb991bac)
2024-08-30 13:30:30 +03:00
Ozan Unsal
79bc4e0c3a
gather_dnf.py: Do not raise an error when the downloaded package already exists.
If packages are pulled from different repos and a package already exists
in the target directory, pungi raises a "File exists" error and breaks. This
error can be suppressed and the package skipped if it is already available.

Merges: https://pagure.io/pungi/pull-request/1696
Signed-off-by: Ozan Unsal <ounsal@redhat.com>
(cherry picked from commit dbc0e531b2)
2024-08-30 13:30:05 +03:00
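The behaviour described above roughly corresponds to this sketch (hypothetical helper):

```
import errno
import os

def link_package(src, dst):
    # Hardlink the downloaded package into the target directory; if it
    # was already pulled in from another repo, silently skip it.
    try:
        os.link(src, dst)
    except OSError as exc:
        if exc.errno != errno.EEXIST:
            raise
```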
Lubomír Sedlář
8772ccca23
New upstream release 4.7.0
(cherry picked from commit e0600a2abac9e0e9b8a3b15b51eb44e3cd467bd3)
2024-08-30 13:29:32 +03:00
Fedora Release Engineering
3bb34225a9
Rebuilt for https://fedoraproject.org/wiki/Fedora_41_Mass_Rebuild
(cherry picked from commit 192a8ef731fbc134bf5337dfb3d60ba6c5ad7bd5)
2024-08-30 13:29:32 +03:00
Haibo Lin
daea6cabdf
Upstream release 4.6.3
(cherry picked from commit 9a24cfff1bfccbafde32a4a34805d9d0aeff5650)
2024-08-30 13:29:30 +03:00
Python Maint
35b720e87a
Rebuilt for Python 3.13
(cherry picked from commit 639bb6214433a96a6275817baf893ab4850a3309)
2024-08-30 13:29:29 +03:00
Lubomír Sedlář
5a6ee9f8eb
Bump release over f40-infra build
(cherry picked from commit 1ad8b6fa2edeb91316dd1d1e33a9c234800e28d9)
2024-08-30 13:29:28 +03:00
Lubomír Sedlář
9a64db0485
Require xorriso for bug#2278677
(cherry picked from commit 22214e03b888c9b5f85919815f2825ad176c5370)
2024-08-30 13:29:27 +03:00
Lubomír Sedlář
de7210f69a
Upstream release 4.6.2
(cherry picked from commit f24f577c89647dc80a84bfa76f3055d24ced55a5)
2024-08-30 13:29:05 +03:00
Lubomír Sedlář
24418ef74d
New upstream release 4.6.1
(cherry picked from commit 98b4f26e0972a2bea2d46f2c74c1db94ed087477)
2024-08-30 13:29:03 +03:00
f4765fbe3a
Remove python3-mock dependency
Merges: https://src.fedoraproject.org/rpms/pungi/pull-request/9

(cherry picked from commit 67a11d878b04bd46a0d9fb98036467bca6ffed92)
2024-08-30 13:28:01 +03:00
Fedora Release Engineering
80b9add9f7
Rebuilt for https://fedoraproject.org/wiki/Fedora_40_Mass_Rebuild
(cherry picked from commit 40fd963a495689a2a3a0279760f5a4024e7e5857)
2024-08-30 13:27:24 +03:00
Fedora Release Engineering
b241545ca6
Rebuilt for https://fedoraproject.org/wiki/Fedora_40_Mass_Rebuild
(cherry picked from commit 5cfb290545fdd5b18bb1691218e5e8e732e351e4)
2024-08-30 13:27:00 +03:00
Lubomír Sedlář
2e536228ae
Backport: Stop requiring repo option in ostree phase
(cherry picked from commit 6778cae05afb2b5784a46ed72ee2703785756dde)
2024-08-30 13:26:39 +03:00
Lubomír Sedlář
ff7950b9d1
ostree_container: Use unique temporary directory
(cherry picked from commit 58ca2a86231e53cc329e3e20294853230fabf587)
2024-08-30 13:26:38 +03:00
Lubomír Sedlář
6971624f83
New upstream release 4.6.0
(cherry picked from commit 2b47d8ea021a7b6e694c52fd8d74880f9a6b79a5)
2024-08-30 13:26:11 +03:00
Lubomír Sedlář
b7d371d1c3
Backport patch for explicit setting of osbuild image type
(cherry picked from commit c0bf9a2a78)
2024-08-30 13:25:21 +03:00
bc8c776872
- Method get_remote_file_content is an object method now 2024-05-04 10:43:19 +03:00
91d282708e
- Method get_remote_file_content is an object method now 2023-11-21 09:19:01 +02:00
ccaf31bc87
- Method get_remote_file_content is an object method now 2023-11-21 08:51:05 +02:00
5fe0504265
- Spec's changelog chronology is fixed 2023-11-15 15:14:22 +02:00
d79f163685
- Bump version 2023-11-15 14:49:51 +02:00
793fb23958
- Bump version 2023-11-15 14:02:10 +02:00
65d0c09e97
- Return empty list if a repo doesn't contain any module 2023-11-15 13:17:57 +02:00
0a9e5df66c
- Properly removing tmp files 2023-11-10 21:38:01 +02:00
ae527a2e01
- The unittests are fixed 2023-11-10 18:08:03 +02:00
Aditya Bisoi
4991144a01
4.5.0 release
Signed-off-by: Aditya Bisoi <abisoi@redhat.com>

(cherry picked from commit 4c7611291d (centos_master))
2023-11-10 16:58:03 +02:00
Lubomír Sedlář
68d94ff488
kojiwrapper: Stop being smart about local access
Rather than trying to use local access when it's accessible, let the user
make the decision:

 * if koji_cache is configured use it and download stuff
 * if not, fall back to local access

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 0d3cd150bd)
2023-11-10 16:57:53 +02:00
Ozan Unsal
ce45fdc39a
Fix unittest errors
Signed-off-by: Ozan Unsal <ounsal@redhat.com>

(cherry picked from commit aa0aae3d3e (centos_master))
2023-11-10 16:57:51 +02:00
Lubomír Sedlář
b625ccea06
Add integrity checking for builds
When a real build is downloaded, Koji can provide a checksum via API.
This commit adds verification of that checksum.

A mismatch will abort the compose. If Koji doesn't provide a checksum
for the particular sigkey, no checking will happen.

Scratch builds and images are still not checked at all.

This patch requires Koji 1.32. When talking to an older version, there
is no checking done.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 77f8fa25ad)
2023-11-10 16:55:44 +02:00
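Conceptually the verification looks like this sketch (the function name and error handling are hypothetical):

```
import hashlib

def verify_checksum(path, algorithm, expected):
    # `expected` is the hexdigest Koji reports for the build; older Koji
    # (< 1.32) or an unknown sigkey yields None, in which case we skip.
    if not expected:
        return
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    if digest.hexdigest() != expected:
        raise RuntimeError("Checksum mismatch for %s, aborting compose" % path)
```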
Lubomír Sedlář
8eccfc5a03
Add script for cleaning up the cache
Pungi would by default only ever add files to the cache. That would
eventually result in essentially a mirror of the Koji volume.

This patch adds a helper cleanup script. When called, it goes through
files in the cache and deletes anything that is not hardlinked from
elsewhere and whose mtime has not been updated recently.

Cleaning up files that are hardlinked from some compose would not save any
space anyway. The mtime check should account for cases like a subpackage
being downloaded but not included in any compose; it avoids such packages
being downloaded over and over again.

When a compose fails or is aborted, there can be a stale lock file left
behind in the cache. This script cleans that up too.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit e6d9f31ef4 (centos_master))
2023-11-10 16:55:43 +02:00
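The core of such a cleanup could look like this sketch; the `.lock` suffix and the age threshold are assumptions:

```
import os
import time

def cleanup_cache(cache_dir, max_age_days=30):
    cutoff = time.time() - max_age_days * 24 * 3600
    for root, _dirs, files in os.walk(cache_dir):
        for name in files:
            path = os.path.join(root, name)
            st = os.stat(path)
            if st.st_mtime >= cutoff:
                continue  # used or touched recently, keep it
            # st_nlink == 1 means no compose hardlinks this file anymore;
            # old .lock files are left over from failed/aborted composes.
            if name.endswith(".lock") or st.st_nlink == 1:
                os.unlink(path)
```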
Lubomír Sedlář
f5a0e06af5
Add ability to download images
This patch extends the ability to download files from Koji to image
building phases too.

There is no integrity checking for the downloaded images.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit bf3e9bc53a)
2023-11-10 16:55:20 +02:00
Lubomír Sedlář
f6f54b56ca
Add support for not having koji volume mounted locally
With this patch, Pungi can be configured with a local directory to be
used as a cache for RPMs, and it will download packages from Koji over
HTTP instead of reading them from filesystem directly.

The files from the cache can then be hardlinked as usual.

There is locking in place to prevent different composes running at the
same time from stepping on each other.

This is now supported for RPMs only, be it real builds or scratch
builds.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 631bb01d8f)
2023-11-10 16:55:19 +02:00
Aditya Bisoi
fcee346c7c
Avoid cloning the repository multiple times
JIRA: RHELCMP-8913
Signed-off-by: Aditya Bisoi <abisoi@redhat.com>
(cherry picked from commit b6296bdfcd)
2023-11-10 16:55:18 +02:00
Lubomír Sedlář
82ec38ad60
Support require_all_comps_packages on DNF backend
It's not a great name anymore though, because it will fail the compose
if any input package is missing, no matter whether it's from comps,
prepopulate or additional_packages.

JIRA: RHELCMP-12484
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 1c4275bbfa)
2023-11-10 16:55:17 +02:00
Lubomír Sedlář
c9cbd80569
Fix new warnings from flake8
Use isinstance rather than directly comparing types.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit fe2dad3b3c)
2023-11-10 16:55:16 +02:00
Aditya Bisoi
035fca1e6d
4.4.1 release
Signed-off-by: Aditya Bisoi <abisoi@redhat.com>

(cherry picked from commit 7128021654 (centos_master))
2023-11-10 16:55:15 +02:00
Lubomír Sedlář
0f8cae69b7
ostree: Add configuration for custom runroot packages
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit bd64894a03)
2023-11-10 16:55:01 +02:00
Lubomír Sedlář
f17628dd5f
pkgset: Emit better error for missing modulemd file
The exceptions from libmodulemd are not particularly helpful as they do
not contain information about what file caused it.

   modulemd-yaml-error-quark: Failed to open file: Permission denied (0)

This patch should add the path to the problematic file into the message.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 14e025a5a1)
2023-11-10 16:55:00 +02:00
Lubomír Sedlář
f3485410ad
Add support for git-credential-helper
This patch adds an additional field `options` to scm_dict, which can be
used to provide additional information to the backends.

It implements a single new option for GitWrapper. This option allows
setting a custom git credentials wrapper. This can be useful if Pungi
needs to get files from a git repository that requires authentication.

The helper can be as simple as this (assuming the username is already
provided in the url):

    #!/bin/sh
    echo password=i-am-secret

The helper would need to be referenced by an absolute path from the
pungi configuration, or prefixed with ! to have git interpret it as a
shell script and look it up in PATH.

See https://git-scm.com/docs/gitcredentials for more details.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
JIRA: RHELCMP-11808
(cherry picked from commit ada8f4e346)
2023-11-10 16:54:59 +02:00
Haibo Lin
cccfaea14e
Support OIDC Client Credentials authentication to CTS
JIRA: RHELCMP-11324
Signed-off-by: Haibo Lin <hlin@redhat.com>
(cherry picked from commit e4c525ecbf)
2023-11-10 16:54:58 +02:00
Lubomír Sedlář
e2057b75c5
4.4.0 release
JIRA: RHELCMP-11764
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit 091d228219 (centos_stream))
2023-11-10 16:54:57 +02:00
Lubomír Sedlář
44ea4d4419
gather-dnf: Run latest() later
The initial version of the code filtered the latest builds at the start. That
doesn't matter in many cases:

* When there are no lookaside repos, there is generally a single version
  of each package.
* When lookaside repos do not overlap with compose repos, or contain
  only older versions.

It is however a problem when the lookaside repos contain higher version
of a package than what is in a compose repo, and some package explicitly
requires the older version.

Consider this scenario:

* lookaside contains bar-1.1
* compose repo contains bar-1.0 and foo-1.0
* foo-1.0 `Requires: bar < 1.1`

The original code would filter out the bar-1.0 package, and then fail on
unresolved dependencies.

This patch moves the computation of latest packages much later, to part
of code where all options to satisfy a dependency are selected and the
best match is chosen. At that point if there are multiple versions
available, we do want the latest one.

JIRA: SPMM-13483
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit bcc440491e)
2023-11-10 16:54:43 +02:00
Lubomír Sedlář
d4425f7935
iso: Support joliet long names
Without this option the names reported by the Joliet tree are truncated.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit fa50eedfad)
2023-11-10 16:54:42 +02:00
Lubomír Sedlář
c8118527ea
Drop pungi-orchestrator code
This was never actually used.

JIRA: RHELCMP-10218
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit b7adbf8a91 (centos_master))
2023-11-10 16:54:40 +02:00
Lubomír Sedlář
a8ea322907
isos: Ensure proper file ownership and permissions
The genisoimage backend uses the -rational-rock option, which sets uid
and gid to 0, and makes files readable by everyone.

With xorriso this must be done explicitly. Setting ownership is a single
command, but the permissions require a per-file command to not make
files executable where not needed.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2203888
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit 82ae9e86d5 (centos_master))
2023-11-10 16:54:22 +02:00
Lubomír Sedlář
c4995c8f4b
gather: Always get latest packages
If lookaside contains an older version of a package, but with a
different arch, the depsolver doesn't notice that and prefers the
lookaside version.

This is not correct. The latest package should be used no matter if
there are different arches available.

The filtering in DNF doesn't ensure this, so we have to build it
ourselves. To limit the performance impact, only run this filtering when
there actually are some lookaside repos configured.

JIRA: RHELCMP-11728

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 2ad341a01c)
2023-11-10 16:54:01 +02:00
Lubomír Sedlář
997e372f25
Add back compatibility with jsonschema <3.0.0
Resolves: https://pagure.io/pungi/issue/1667
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit e888e76992 (centos_master))
2023-11-10 16:54:00 +02:00
Lubomír Sedlář
42f1c62528
Remove useless debug message
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 6e72de7efe)
2023-11-10 16:52:27 +02:00
Lubomír Sedlář
3fd29d0ee0
Remove fedmsg from requirements
The code for sending messages in Fedora actually relies on
fedora-messaging library now. However, we do not have any tests for
that, so there's little reason to pull the library in via
requirements.txt

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit c8263fcd39 (centos_master))
2023-11-10 16:52:04 +02:00
Lubomír Sedlář
c1f2fa5035
gather: Support dotarch in DNF backend
The documentation claims that dotarch syntax is supported for additional
packages. For yum backend this seems to be handled automatically, but
the dnf backend could not interpret this.

This patch checks if a package is specified in the syntax and contains a
valid architecture. If so, the query will honor the arch.

JIRA: RHELCMP-11728
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 82ca4f4e65)
2023-11-10 16:51:55 +02:00
Aurélien Bompard
85c9e9e776
Set the priority in the fedora-messaging notifier
According to [infra ticket #10899](https://pagure.io/fedora-infrastructure/issue/10899),
ostree messages should have priority 3.

Signed-off-by: Aurélien Bompard <aurelien@bompard.org>
(cherry picked from commit b8b6b46ce7)
2023-11-10 16:51:54 +02:00
Lubomír Sedlář
33012ab31e
Fix compatibility with createrepo_c 0.21.1
The length of the file entry tuple has changed, so it cannot be unpacked
reliably.

Relates: https://github.com/rpm-software-management/createrepo_c/issues/360
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit e9d836c115)
2023-11-10 16:51:53 +02:00
Lubomír Sedlář
72ddf65e62
comps: Apply arch filtering to environment/optionlist
Let's filter this list too, not just the grouplist tag.

JIRA: RHELCMP-7926
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit d3f0701e01)
2023-11-10 16:51:52 +02:00
Haibo Lin
c402ff3d60
Add config file for cleaning up cache files
systemd-tmpfiles is required to enable the auto clean up.

JIRA: RHELCMP-6327
Signed-off-by: Haibo Lin <hlin@redhat.com>
(cherry picked from commit 8f6f0f463f)
2023-11-10 16:51:51 +02:00
Haibo Lin
8dd344f9ee
4.3.8 release
JIRA: RHELCMP-11448
Signed-off-by: Haibo Lin <hlin@redhat.com>

(cherry picked from commit 467c7a7f6a (centos_master))
2023-11-10 16:51:49 +02:00
Lubomír Sedlář
d07f517a90
createiso: Update possibly changed file on DVD
There's no good way of detecting if buildinstall phase tweaked boot
configuration (and efiboot.img). We should update those files in the DVD
just to be sure.

The .discinfo file is always different and needs to be updated.

Relates: https://pagure.io/pungi/issue/1647
JIRA: RHELCMP-10811
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit e1d7544c2b)
2023-11-10 16:51:39 +02:00
Lubomír Sedlář
48366177cc
pkgset: Stop reuse if configuration changed
When options controlling excluding arches change, it should break reuse.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit a71c8e23be)
2023-11-10 16:51:38 +02:00
Lubomír Sedlář
4cb8671fe4
Allow disabling inheriting ExcludeArch to noarch packages
Copying ExcludeArch/ExclusiveArch from source rpm to noarch is an easy
option to block shipping that particular noarch package from a certain
architecture. However, there is no way to bypass it, and it is rather
confusing and not discoverable.

An alternative way to remove an unwanted package is to use the good old
`filter_packages`, which has enough granularity to remove pretty much
anything from anywhere. The only downside is that it requires a change
in configuration, so it can't be done by a packager directly from a spec
file.

When we decide to break backwards compatibility, this option should be
removed and the entire ExcludeArch/ExclusiveArch inheritance removed
completely.

JIRA: ENGCMP-2606
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit ab508c1511)
2023-11-10 16:51:37 +02:00
Lubomír Sedlář
135bbbfe7e
pkgset: Support extra builds with no tags
This is a rather fringe use case. If the configuration contains
pkgset_koji_builds or pkgset_koji_scratch_tasks but no pkgset_koji_tag,
the compose will be empty.

The expectation though is that the packages should be pulled.

The extra RPMs are added to all non-modular tags because they are
supposed to mask builds from the same packages (e.g. user may want to
explicitly pull in older version than tagged).

This patch adds support for composes containing only explicitly listed
builds by creating a dummy package set that is not actually using any
tag.

JIRA: RHELCMP-11385
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit f960b4d155)
2023-11-10 16:51:36 +02:00
Lubomír Sedlář
5624829564
buildinstall: Avoid pointlessly tweaking the boot images
Only modify boot images if there actually is some change.

The tweak function updates config files with volume id and kickstart
file. Even if we don't have a kickstart and there is no change in the
config files, the image will be regenerated. This leads to a change in
checksum for no good reason.

This patch keeps track of modified config files. If there are none, it
avoids touching anything else.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 602b698080)
2023-11-10 16:51:35 +02:00
Haibo Lin
5fb4f86312
Prevent reuse if unsigned packages are allowed
JIRA: RHELCMP-8415
Signed-off-by: Haibo Lin <hlin@redhat.com>
(cherry picked from commit b30f7e0d83)
2023-11-10 16:51:34 +02:00
Lubomír Sedlář
e891fe7b09
Pass parent id/respin id to CTS
When the --target-dir option is used, the compose can be created in CTS,
but the parent and respin information is not passed through. That leads
to data missing later on.

JIRA: RHELCMP-11411
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>

(cherry picked from commit 0c3b6e22f9 (centos_master))
2023-11-10 16:51:33 +02:00
Haibo Lin
4cd7d39914
Exclude existing files in boot.iso
JIRA: RHELCMP-10811
Fixes: https://pagure.io/pungi/issue/1647
Signed-off-by: Haibo Lin <hlin@redhat.com>
(cherry picked from commit 3175ede38a)
2023-11-10 16:50:46 +02:00
Lubomír Sedlář
5de829d05b
image-build/osbuild: Pull ISOs into the compose
OSBuild tasks can produce ISO files. If they do, we should include them
in the compose, and we should pull them into the iso/ subdirectory
together with other ISOs.

Fixes: https://pagure.io/pungi/issue/1657
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 8920eef339)
2023-11-10 16:50:45 +02:00
Lubomír Sedlář
2930a1cc54
Retry 401 error from CTS
This could be a transient error caused by kerberos server instability.

JIRA: RHELCMP-11251
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 58036eab84)
2023-11-10 16:50:43 +02:00
Lubomír Sedlář
9c4d3d496d
gather: Better detection of debuginfo in lookaside
If the depsolver wants to include a package that is present in both the
source repo and a lookaside repo, it reliably detects binary packages
present in lookaside, but for debuginfo it's not so reliable.

There is a separate package object for each package in each repo.
Depending on which one is used, debuginfo could be included in the
result or not. This patch fixes that by actually looking if the same
package is present in any lookaside repo.

JIRA: RHELCMP-9373
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit a4476f2570)
2023-11-10 16:50:42 +02:00
Haibo Lin
4637fd6697
Log versions of all installed packages
JIRA: RHELCMP-9493
Signed-off-by: Haibo Lin <hlin@redhat.com>
(cherry picked from commit 8c06b7a3f1)
2023-11-10 16:50:41 +02:00
Lubomír Sedlář
2ff8132eaf
Use authentication for all CTS calls
The update of compose URL relied on environment being set from the
initial import. This got broken when a unique credentials cache started
to be used, and was cleaned up after the import.

JIRA: RHELCMP-11072
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 64ae81b416)
2023-11-10 16:50:40 +02:00
Lubomír Sedlář
f9190d1fd1
Fix black complaints
These are newly detected by black 23.1.0.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 826169af7c)
2023-11-10 16:50:38 +02:00
Lubomír Sedlář
80ad0448ec
Add vhd.gz extension to compressed VHD images
JIRA: RHELCMP-11027
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit d97b8bdd33)
2023-11-10 16:50:37 +02:00
Lubomír Sedlář
027380f969
Add vhd-compressed image type
JIRA: RHELCMP-11027
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 8768b23cbe)
2023-11-10 16:50:36 +02:00
Lubomír Sedlář
41048f60b7
Update to work with latest mock
The `called_once` attribute now raises an exception. Switch to
`assert_called_once` method. Also replace `assertTrue(x.called)` with
`x.assert_called()`.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 51628a974d)
2023-11-10 16:50:34 +02:00
Ondrej Nosek
9f8f6a7956
Default bztar format for sdist command
Usage of the 'bztar' format is unchanged; only the way it is configured
changes. The previous method was deprecated.

Signed-off-by: Ondrej Nosek <onosek@redhat.com>
(cherry picked from commit 88327d5784)
2023-11-10 16:50:33 +02:00
Lubomír Sedlář
3d3e4bafdf
- New upstream release 4.5.0
(cherry picked from commit 4dfabb647b (fedora_master))
2023-11-10 16:47:04 +02:00
Lubomír Sedlář
8fe0257e93
Release 4.4.1
(cherry picked from commit 4c604f434a (fedora_master))
2023-11-10 16:46:02 +02:00
Fedora Release Engineering
d7b5fd2278
Rebuilt for https://fedoraproject.org/wiki/Fedora_39_Mass_Rebuild
Signed-off-by: Fedora Release Engineering <releng@fedoraproject.org>

(cherry picked from commit bf4f5b6e53 (fedora_master))
2023-11-10 16:44:52 +02:00
Lubomír Sedlář
8b49d4ad61
Backport patch from upstream PR 1690
(cherry picked from commit 2362ef59c5 (fedora_master))
2023-11-10 16:44:19 +02:00
Lubomír Sedlář
57443cd0aa
Backport patch from upstream PR 1690
(cherry picked from commit 9ee6caf117 (fedora_master))
2023-11-10 16:43:47 +02:00
Python Maint
1d146bb8d5
Rebuilt for Python 3.12
(cherry picked from commit 8b8b558fbc (fedora_master))
2023-11-10 16:42:36 +02:00
Lubomír Sedlář
790091b7d7
Release 4.4.0
(cherry picked from commit a6196da315 (fedora_master))
2023-11-10 16:42:10 +02:00
Lubomír Sedlář
28aad3ea40
Rebuild without fedmsg dependencies
(cherry picked from commit d142464ef1 (fedora_master))
2023-11-10 16:41:29 +02:00
Pierre-Yves Chibon
7373b4dbbf
Replace the requirement on fedmsg to one on fedora-messaging
Signed-off-by: Pierre-Yves Chibon <pingou@pingoured.fr>
(cherry picked from commit 802f5fe854)
2023-11-10 16:40:34 +02:00
Lubomír Sedlář
218b11f1b7
Backport patches
(cherry picked from commit 20a5d00961 (fedora_master))
2023-11-10 16:40:33 +02:00
Haibo Lin
bfbe9095d2
Release 4.3.8
Signed-off-by: Haibo Lin <hlin@redhat.com>

(cherry picked from commit 3548f55821 (fedora_master))
2023-11-10 16:38:58 +02:00
Lubomír Sedlář
eb17182c04
Update license tag to SPDX
(cherry picked from commit f9143f6ea1 (fedora_master))
2023-11-10 16:33:41 +02:00
f91f90cf64 - Test empty sub-package 2023-10-26 00:01:45 +03:00
49931082b2 - Test empty sub-package 2023-10-25 23:11:26 +03:00
8ba8609bda - Test empty sub-package 2023-10-25 22:58:28 +03:00
6f495a8133 - Test empty sub-package 2023-10-25 22:55:18 +03:00
2b4bddbfe0 - Test empty sub-package 2023-10-25 22:17:42 +03:00
032cf725de - Bump version
- Changelog
2023-07-25 11:12:03 +03:00
8b11bb81af AL-5220: Investigate why CL9 can't be built on the new nebula
- Exclude the packages for using in a build
2023-07-24 18:26:51 +03:00
soksanichenko
114a73f100 - gather-module can find modules through symlinks
- Bump version
- Update changelog
2023-04-15 20:03:27 +03:00
soksanichenko
1c3e5dce5e - CLI option --label can be passed through a Pungi config file
- Bump version
- Update changelog
2023-04-13 00:57:39 +03:00
soksanichenko
e55abb17f1 - Bump version 2023-04-04 10:12:22 +03:00
soksanichenko
e81d78a1d1 - The log message contains a variant's name if Pungi didn't find one or more modules for that variant 2023-04-04 10:11:59 +03:00
soksanichenko
68915d04f8 - Excluded/included modules/packages will be processed correctly 2023-04-02 22:27:24 +03:00
soksanichenko
a25bf72fb8 - Changelog is updated
- Version is bumped
2023-03-31 12:07:22 +03:00
Stepan Oksanichenko
68aee1fa2d Merge pull request 'ALBS-987: Generate i686 and dev repositories with pungi on building new distr. version automatically' (#15) from ALBS-987 into al_master
Reviewed-on: #15
2023-03-31 09:03:39 +00:00
soksanichenko
6592735aec ALBS-987: Generate i686 and dev repositories with pungi on building new distr. version automatically
- Unittests are fixed
2023-03-30 14:05:47 +03:00
soksanichenko
943fd8e77d ALBS-987: Generate i686 and dev repositories with pungi on building new distr. version automatically
- Script `create extra repo` is fixed
- Unittests are fixed
2023-03-30 12:52:51 +03:00
soksanichenko
004fc4382f ALBS-987: Generate i686 and dev repositories with pungi on building new distr. version automatically
- Review comments
2023-03-29 11:40:00 +03:00
soksanichenko
596c5c0b7f ALBS-987: Generate i686 and dev repositories with pungi on building new distr. version automatically
- Refactoring
- Some absent packages are in packages.json now
2023-03-28 12:58:08 +03:00
soksanichenko
141d00e941 ALBS-987: Generate i686 and dev repositories with pungi on building new distr. version automatically
- More info about unsigned packages
2023-03-24 16:39:10 +02:00
soksanichenko
4b64d20826 ALBS-987: Generate i686 and dev repositories with pungi on building new distr. version automatically
- Path.rglob/glob doesn't work with symlinks (it's a known bug and has been reported)
- Refactoring
2023-03-24 12:45:28 +02:00
soksanichenko
0747e967b0 ALBS-987: Generate i686 and dev repositories with pungi on building new distr. version automatically
- Some refactoring
2023-03-23 09:36:52 +02:00
soksanichenko
6d58bc2ed8 ALBS-987: Generate i686 and dev repositories with pungi on building new distr. version automatically
- [Generator of packages.json] Replace using the CLI with config.yaml
- [Gather RPMs] os.path is replaced by Path
2023-03-22 15:56:58 +02:00
Stepan Oksanichenko
60a347a4a2 Merge pull request 'ALBS-1030: Generate Devel section in packages.json' (#14) from ALBS-1030 into al_master
Reviewed-on: #14
2023-03-22 10:06:58 +00:00
soksanichenko
53ed7386f3 ALBS-1030: Generate Devel section in packages.json
- Redundant empty lines are removed
2023-03-20 13:56:44 +02:00
soksanichenko
ed43f0038e ALBS-1030: Generate Devel section in packages.json
- Style fix
2023-03-20 13:55:06 +02:00
soksanichenko
fcc9b4f1ca ALBS-1030: Generate Devel section in packages.json
- Skip verifying an RPM signature if sigkeys are empty
2023-03-20 13:25:45 +02:00
soksanichenko
d32c293bca ALBS-1030: Generate Devel section in packages.json
- Some upstream changes to KojiMock parts
2023-03-19 21:11:12 +02:00
soksanichenko
f0bd1af999 ALBS-1030: Generate Devel section in packages.json
- Also the tool can combine (remove and add) packages in a variant from different
  sources according to the URL type of each source
2023-03-19 18:21:33 +02:00
soksanichenko
1b4747b915 - Changelog is updated
- Version is bumped
- New release 4.3.7-3.alma
2023-03-17 12:02:48 +02:00
Lubomír Sedlář
6aabfc9285 osbuild: test passing of rich repos from configuration
Test that "rich" repositories defined as dicts in the configuration
stay as dicts in the arguments passed to the osbuild phase.

Signed-off-by: Tomáš Hozza <thozza@redhat.com>
(cherry picked from commit 8be0d84f8a)
2023-03-17 11:58:11 +02:00
Tomáš Hozza
9e014fed6a osbuild: support specifying package_sets for repos
The `koji-osbuild` plugin supports additional formats for the `repo`
property since v4 [1]. Specifically, a repo can be specified as a
dictionary with a `baseurl` key and a `package_sets` list containing
specific package set names that the repository should be used for.

Extend the configuration schema to reflect the plugin change.
Extend the documentation to cover the new repository format.
Extend an existing unit test to specify an additional repository using the
added format.

[1] https://github.com/osbuild/koji-osbuild/pull/82

Signed-off-by: Tomáš Hozza <thozza@redhat.com>
(cherry picked from commit 8f0906be53)
2023-03-17 11:58:11 +02:00
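For illustration, a minimal sketch of what such a configuration could look like (the variant name, image fields and URLs below are placeholders, not taken from a real config):

    # osbuild config sketch: a plain string repo next to a "rich" dict repo
    # that is only used for the "build" package set (koji-osbuild >= v4)
    osbuild = {
        "^Server$": [
            {
                "name": "example-image",
                "distro": "rhel-90",
                "image_types": ["qcow2"],
                "repo": [
                    "Everything",  # resolved to a repo URL as before
                    {
                        "baseurl": "http://example.com/repo/$arch/",
                        "package_sets": ["build"],
                    },
                ],
            }
        ],
    }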
Tomáš Hozza
7ccb1d4849 osbuild: don't use util.get_repo_urls()
Don't use `util.get_repo_urls()` to resolve provided repositories, but
implement an osbuild-specific variant of the function named
`_get_repo_urls()`. The reason is that the function from `utils`
transforms repositories defined as dicts to strings, which is
undesired for osbuild. The requirement for osbuild is to preserve the
dict as is, just to resolve the string in `baseurl` to the actual
repository URL.

Add a unit test covering the newly added function. It is inspired by a
similar test from `test_util.py`.

Signed-off-by: Tomáš Hozza <thozza@redhat.com>
(cherry picked from commit e3072c3d5f)
2023-03-17 11:58:11 +02:00
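A rough sketch of the described behavior (not the actual implementation; it assumes a `util.get_repo_url` helper that resolves a single repo string):

    def _get_repo_urls(compose, repos, arch="$basearch"):
        """Resolve baseurl strings to URLs, but keep dict repos as dicts."""
        resolved = []
        for repo in repos:
            if isinstance(repo, dict):
                repo = dict(repo)  # do not mutate the config
                repo["baseurl"] = util.get_repo_url(compose, repo["baseurl"], arch=arch)
                resolved.append(repo)
            else:
                resolved.append(util.get_repo_url(compose, repo, arch=arch))
        return resolved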
Tomáš Hozza
abec28256d osbuild: update schema and config documentation
The `koji-osbuild` Hub schema has been relaxed a bit in the latest
release (v11). Adjust the schema in Pungi to reflect changes in
`koji-osbuild`.

For more information on the changes in `koji-osbuild`, see:
https://github.com/osbuild/koji-osbuild/pull/108

Signed-off-by: Tomáš Hozza <thozza@redhat.com>
(cherry picked from commit ef6d40dce4)
2023-03-17 11:58:11 +02:00
Lubomír Sedlář
46216b4f17 Speed up tests by 30 seconds
The retry test for CTS doesn't actually need to wait. Let's mock the
sleep function.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit df6664098d)
2023-03-17 11:58:11 +02:00
Lubomír Sedlář
02b3adbaeb Stop sending compose paths to CTS
The tracking service will reject it as it's not an HTTP URL. Let's not
even try.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 147df93f75)
2023-03-17 11:58:11 +02:00
Lubomír Sedlář
d17e578645 Report errors from CTS
If the service returns a status code indicating a user error, report
that and do not retry.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit dd8c1002d4)
2023-03-17 11:58:11 +02:00
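The gist of that behavior, as an illustrative sketch (not Pungi's actual code; it assumes a `requests`-style response object):

    def handle_cts_response(response):
        # 4xx means the request itself is wrong: report it and do not retry
        if 400 <= response.status_code < 500:
            raise RuntimeError(
                "CTS request failed: %s %s" % (response.status_code, response.text)
            )
        # other errors (5xx) go through the normal retry machinery
        response.raise_for_status()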
Lubomír Sedlář
6c1c9d9efd createiso: Create Joliet tree with xorriso
This structure is important for isoinfo -J, which is in turn called by
virt-install.

This can be tested by using a bootable ISO by modifying it with a dummy
additional file and preserving boot records:

    $ xorriso -indev netinst.iso -outdev test.iso -boot_image any replay -map setup.py setup.py -end
    ...
    $ isoinfo -J -i test.iso
    isoinfo: Unable to find Joliet SVD
    $ rm test.iso
    $ xorriso -indev netinst.iso -outdev test.iso -joliet on -boot_image any replay -map setup.py setup.py -end
    ...
    $ isoinfo -J -i test.iso
    $

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2144105
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 12e3a46390)
2023-03-17 11:58:04 +02:00
Stepan Oksanichenko
8dd7d8326f Merge pull request 'ALBS-1040: Investigate why Pungi doesn't put modules packages into the final repos' (#13) from ALBS-1040 into al_master
Reviewed-on: #13
2023-03-16 11:52:02 +00:00
soksanichenko
d7b173cae5 ALBS-1040: Investigate why Pungi doesn't put modules packages into the final repos
- The unittest is fixed
2023-03-14 18:43:14 +02:00
soksanichenko
fa4640f03e ALBS-1040: Investigate why Pungi doesn't put modules packages into the final repos
- Refactoring
- KojiMock extracts all modules which are suitable for the variant's arches
2023-03-14 18:25:21 +02:00
Stepan Oksanichenko
d66eb0dea8 Merge pull request 'ALBS-1032: Generate i686 section for all variants in packages.json' (#12) from ALBS-1032 into al_master
Reviewed-on: #12
2023-03-14 16:21:41 +00:00
soksanichenko
d56227ab4a ALBS-1032: Generate i686 section for all variants in packages.json
- Remove old non-necessary methods
- Some fixes to arch code
2023-03-09 12:32:11 +02:00
soksanichenko
12433157dd - changelog 2022-11-12 00:04:44 +02:00
soksanichenko
623955cb1f - python3-distro as dependency 2022-11-11 19:21:37 +02:00
soksanichenko
4e0d2d14c9 - Unify branch for both RHEL versions 2022-11-11 16:31:43 +02:00
soksanichenko
b61e59d676 - Use unittest.mock instead external mock 2022-11-11 15:32:00 +02:00
soksanichenko
eb35d7baac - Unify branch for both RHEL versions 2022-11-11 01:38:14 +02:00
soksanichenko
54209f3643 ALBS-732 2022-11-09 21:42:13 +02:00
soksanichenko
80c4536eaa ALBS-732 2022-11-09 21:27:51 +02:00
soksanichenko
9bb5550d36 ALBS-732 2022-11-09 21:01:30 +02:00
soksanichenko
364ed6c3af - kojimock is added to pungi.phases.gather._make_lookaside_repo#prefixes
- unittests are fixed
2022-11-09 20:56:56 +02:00
soksanichenko
0b965096ee - PkgsetSourceKojiMock is added to ALL_SOURCES 2022-11-09 18:18:12 +02:00
soksanichenko
d914626d92 - "kojimock" is valid value for option "pkgset_source" 2022-11-09 17:59:50 +02:00
soksanichenko
32215d955a - fedmsg is removed as not needed 2022-11-09 12:38:34 +02:00
soksanichenko
d711f8a2d6 - fedmsg is removed as not needed 2022-11-09 09:06:09 +02:00
soksanichenko
bd9d800b52 - Fix spec 2022-11-08 17:11:21 +02:00
soksanichenko
e03648589d - Fix spec 2022-11-08 17:09:03 +02:00
soksanichenko
b5fe2e8129 - Fix spec 2022-11-08 17:06:36 +02:00
soksanichenko
b14e85324c - Fix unittests 2022-11-08 14:57:52 +02:00
soksanichenko
5a19ad2258 - Fix unittests 2022-11-08 12:47:14 +02:00
soksanichenko
9ae49dae5b - Fix unittests 2022-11-08 01:43:53 +02:00
soksanichenko
c82cbfdc32 - Fix unittests 2022-11-08 00:59:10 +02:00
soksanichenko
ee9c9a74e6 - Fix unittests 2022-11-07 23:55:26 +02:00
soksanichenko
ea0f933315 - Updates from upstream (https://pagure.io/pungi.git#master) 2022-11-07 23:40:26 +02:00
soksanichenko
323d31df2b Merge branch 'master' into a8_updated
# Conflicts:
#	pungi.spec
#	pungi/wrappers/kojiwrapper.py
#	setup.py
#	tests/test_extra_isos_phase.py
#	tests/test_pkgset_pkgsets.py
2022-11-07 23:38:38 +02:00
soksanichenko
9acd7f5fa4 Merge remote-tracking branch 'centos-origin/master' 2022-11-07 23:33:20 +02:00
soksanichenko
a2b16eb44f - spec is updated (merged with the last changes from the Fedora repo
https://src.fedoraproject.org/rpms/pungi/blob/main/f/pungi.spec)
2022-11-07 23:33:03 +02:00
soksanichenko
ff946d3f7b - Unittests are fixed 2022-11-07 20:15:37 +02:00
soksanichenko
ede91bcd03 - Right name of the class in constructor 2022-11-07 20:03:59 +02:00
soksanichenko
0fa459eb9e - Right name of the class in constructor 2022-11-07 19:56:02 +02:00
soksanichenko
b49ffee06d - Mock of Koji is moved to separate modules and classes
- Unittests for the mock of Koji are moved to separate modules
2022-11-07 19:24:39 +02:00
soksanichenko
fce5493f09 Merge remote-tracking branch 'centos-origin/master'
# Conflicts:
#	pungi/phases/init.py
#	pungi/wrappers/comps.py
2022-11-03 22:49:11 +02:00
soksanichenko
750499eda1 - The unittests are fixed 2022-10-19 14:10:48 +03:00
soksanichenko
d999960235 - bump the dependency version 2022-10-19 13:00:32 +03:00
soksanichenko
6edece449d - changelog
- bump version
2022-10-19 04:40:39 +03:00
Stepan Oksanichenko
dd22d94a9e Merge pull request 'Replace list of cr.packages by cr.PackageIterator' (#6) from package_iterator into aln8
Reviewed-on: #6
2022-10-19 01:38:44 +00:00
soksanichenko
b157a1825a Do not lose a module from koji if we have more than one arch (e.g. x86_64 + i686) 2022-10-19 04:33:34 +03:00
soksanichenko
fd298d4f17 Replace list of cr.packages by cr.PackageIterator 2022-10-18 22:53:50 +03:00
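The idea, roughly (paths are illustrative; `PackageIterator` is available in newer createrepo_c releases):

    import createrepo_c as cr

    # stream packages one at a time instead of materializing the whole list
    for pkg in cr.PackageIterator(
        primary_path="repodata/primary.xml.gz",
        filelists_path="repodata/filelists.xml.gz",
        other_path="repodata/other.xml.gz",
    ):
        print(pkg.name, pkg.arch)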
soksanichenko
f21ed6f607 ALBS-334: Make the ability of Pungi to give module_defaults from remote sources 2022-05-04 20:16:23 +03:00
soksanichenko
cfe6ec3f4e Merge pull request 'ALBS-334: Make the ability of Pungi to give module_defaults from remote sources' (#4) from ALBS-334 into aln8
Reviewed-on: #4
2022-05-04 17:05:45 +00:00
soksanichenko
e6c6f74176 ALBS-334: Make the ability of Pungi to give module_defaults from remote sources 2022-05-03 18:18:17 +03:00
soksanichenko
8676941655 ALBS-334: Make the ability of Pungi to give module_defaults from remote sources 2022-05-02 02:25:32 +03:00
soksanichenko
5f74175c33 ALBS-334: Make the ability of Pungi to give module_defaults from remote sources 2022-05-01 03:41:40 +03:00
soksanichenko
1e18e8995d ALBS-334: Make the ability of Pungi to give module_defaults from remote sources 2022-05-01 03:32:01 +03:00
soksanichenko
38ea822260 ALBS-334: Make the ability of Pungi to give module_defaults from remote sources 2022-04-30 00:27:31 +03:00
soksanichenko
34eb45c7ec ALBS-334: Make the ability of Pungi to give module_defaults from remote sources 2022-04-29 21:39:51 +03:00
soksanichenko
7422d1e045 ALBS-334: Make the ability of Pungi to give module_defaults from remote sources 2022-04-29 21:33:28 +03:00
soksanichenko
97801e772e ALBS-334: Make the ability of Pungi to give module_defaults from remote sources 2022-04-29 21:25:59 +03:00
soksanichenko
dff346eedb - Unit tests are fixed 2022-04-28 16:44:47 +03:00
soksanichenko
de53dd0bbd - Unit tests are fixed 2022-04-28 16:30:03 +03:00
soksanichenko
88121619bc ALBS-226: Patch pungi/lorax for building AL9
- Default modules can be empty, but pungi detects an
  empty folder while copying and raises an exception in this case
2022-03-18 22:37:57 +02:00
soksanichenko
0484426e0c ALBS-97: Build AlmaLinux PPC64le repos and ISOs with pungi
- Changelog
- Version is bumped

@BS-NOBUILD
@BS-TARGET-CL8

Change-Id: I933925b7a27a5e1b642020e060f59212fdc6ebf4
2021-12-30 12:42:34 +02:00
soksanichenko
b9d86b90e1 ALBS-97: Build AlmaLinux PPC64le repos and ISOs with pungi
- Scripts `create_packages_json` & `gather_modules` can process lzma compressed yaml files
- Script `create_packages_json` can use repodata where packages have a different
  arch compared with the one passed to the script

@BS-NOBUILD
@BS-TARGET-CL8

Change-Id: Ia9a3bacfa4344f0cf33b9f416649fd4a5f8d3c37
2021-12-28 16:08:04 +02:00
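The lzma handling could look roughly like this (hypothetical helper, not the scripts' actual code):

    import lzma

    import yaml

    def load_yaml(path):
        """Load a yaml file that may be lzma/xz compressed."""
        opener = lzma.open if path.endswith((".xz", ".lzma")) else open
        with opener(path, "rt") as f:
            return yaml.safe_load(f)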
soksanichenko
58a16e5688 - The version is bumped
- The changelog is updated
- The test `create_packages_json` is fixed

@BS-NOBUILD
@BS-TARGET-CL8

Change-Id: I173013da990eb296e58ca8f3555a05913ca1c852
2021-12-20 14:11:17 +02:00
soksanichenko
f2ed64d952 ALBS-66: Prepare Jenkins jobs for building distribution of AlmaLinux 8.5
- Script `create_packages_json` can duplicate packages with the
  same version in different variants

Change-Id: I3c79ad06c4c22442423c12d5fa06baf82d663a3f
2021-11-10 15:29:59 +02:00
stepan_oksanichenko
b2c49dcaf6 - The version is bumped
- The changelog is updated

@BS-NOBUILD
@BS-TARGET-CL8

Change-Id: Iadbf3d7223db85a58ba82f41597de27dbfffe1ca
2021-06-18 14:47:09 +03:00
stepan_oksanichenko
14dd6a195f LNX-326: Add the ability to include any package by mask in packages.json to the generator
- The reference packages should be replaced only by newer reference packages
- The non-reference packages can be replaced by both types of packages

@BS-NOBUILD
@BS-TARGET-CL8

Change-Id: I881bd4e58527ae219ef6e1adbc6332b3b05933c1
2021-06-18 14:23:42 +03:00
stepan_oksanichenko
084321dd97 LNX-326: Add the ability to include any package by mask in packages.json to the generator
- The ability is added
- The generator also includes only the latest versions of packages in packages.json
- The generator has a key `--is-reference` for each repo. This key marks a repo as a reference.
  A reference repo is used as the main source of packages. A non-reference repo is used as a source
  of packages which don't exist in the reference repos.
- All cases are covered by the unittest

@BS-NOBUILD
@BS-TARGET-CL8

Change-Id: I2f80ba4fbfce27fb9a30500ae46c0b8a2f2aabcd
2021-06-15 17:42:12 +03:00
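The replacement rule described above, as a simplified sketch (the dict keys are made up for illustration, and RPM version comparison is reduced to a plain compare; the real script would need rpm's version ordering):

    def should_replace(current, candidate):
        # a reference package is only ever replaced by a newer reference one
        if current["is_reference"] and not candidate["is_reference"]:
            return False
        return candidate["version"] > current["version"]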
stepan_oksanichenko
941d6b064a LNX-318: Modify build scripts for building CloudLinux OS 8.4
- [Fixed] The script `create_packages_json` selects the first
          encountered package from a variant, but it should select
          the package with the highest version

@BS-NOBUILD
@BS-TARGET-CL8

Change-Id: I36268f2a493897fc11e787c040066d2d501a1c81
2021-06-04 12:36:03 +03:00
stepan_oksanichenko
aaeee7132d - Version is bumped
- Changelog is added

@BS-TARGET-CL8

Change-Id: I51eef1eb45ba54d034e6bed46d99b0470f4e9221
2021-05-25 21:28:47 +03:00
stepan_oksanichenko
cc4d99441c LNX-108: Add multiarch support to pungi
@BS-NOBUILD
@BS-TARGET-CL8

Change-Id: Ibfd540454941922d790ae4e56cc0992c0c85635d
2021-05-24 18:07:11 +03:00
stepan_oksanichenko
a435eeed06 - Changelog is added
@BS-NOBUILD

Change-Id: I3a0a0377f9c1cefabf52c33fbc0d19ab0e4fe4f1
2021-04-29 17:15:17 +03:00
stepan_oksanichenko
b9f554bf39 LNX-311: Add ability to productmd to set a main variant while dumping TreeInfo
@BS-NOBUILD
@BS-TARGET-CL8
@BS-LINKED-608ab56299ce8ac801a396c5  # python3-productmd

Change-Id: Id86d627ae8ae0b9a73b5ce6531c20538f3d040b1
2021-04-29 17:01:49 +03:00
stepan_oksanichenko
ebf028ca3b LNX-286: Prepare pungi configuration and setup Jenkins job for AlmaLinux 8.4 beta
- The modules from parsed FUS output should have a stream
  with dashes replaced by underscores

@BS-NOBUILD
@BS-TARGET-CL8

Change-Id: If36d3d0a1ef8010bf85a4a0218b9838e0888453c
2021-04-27 13:39:09 +03:00
stepan_oksanichenko
305103a38e LNX-286: Prepare pungi configuration and setup Jenkins job for AlmaLinux 8.4 beta
- Some modules can be absent in the koji env but present in variants.xml,
  and Pungi will fail in this case. So we must filter out those modules
  from the expected modules list using the list from the pungi build config

@BS-NOBUILD
@BS-TARGET-CL8

Change-Id: I22c15c42868412e34fd554030130bd7c3e25b8ef
2021-04-23 13:03:05 +03:00
stepan_oksanichenko
01bce26275 LNX-286: Prepare pungi configuration and setup Jenkins job for AlmaLinux 8.4 beta
- The script `gather_modules` should replace `-` with `_`
  in module streams, as pungi itself does

@BS-NOBUILD
@BS-TARGET-CL8

Change-Id: Iea05b70afbf80f3ccd20ad4943c9d86c7ed7aa90
2021-04-22 13:40:48 +03:00
soksanichenko
4d763514c1 - Version is bumped
- Changelog is added

Change-Id: I440b44f12c4a1aa41619acd3ba5ca354dc71b419
2021-02-24 17:42:22 +02:00
Danylo Kuropiatnyk
41381df6a5 LU-2202: Start unittests during installation or build of pungi
* added section with tests and pytest module to requires
IMPORTANT - build.sh script is commented
* added pyfakefs dependency
* fixed little mock_open issue for runroot test
* bumped version

@BS-TARGET-CL8

Change-Id: I036db225646875eb610736cd26f473850a78447c
2021-02-23 07:55:36 -05:00
soksanichenko
02686d7bdf LU-2186 .treeinfo file in AlmaLinux public kickstart repo should contain AppStream variant
- We are modifying the existing repo's .treeinfo:
-- Take info about included variants from the iso's .treeinfo and put it into the repo's .treeinfo

Change-Id: I29bf655d90994e8a1bda40ad04568dd7364f5dca
2021-02-23 06:48:15 -05:00
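A hedged sketch of that .treeinfo merge, assuming productmd's TreeInfo API (paths are placeholders):

    from productmd.treeinfo import TreeInfo

    iso_ti = TreeInfo()
    iso_ti.load("/mnt/iso/.treeinfo")

    repo_ti = TreeInfo()
    repo_ti.load("/repo/.treeinfo")

    # copy variant info (e.g. AppStream) from the iso into the repo treeinfo
    for variant in iso_ti.variants.variants.values():
        repo_ti.variants.add(variant)
    repo_ti.dump("/repo/.treeinfo")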
soksanichenko
2e48c9a56f LU-2195 Change path to sources and iso when generating repositories
- We should add images to the compose if they will be used only as a netinstall image,
  e.g. *-boot.iso.
- And we shouldn't add them if the images will be modified in the `extra_isos` phase,
  e.g. *-minimal.iso

Change-Id: I9095cfd87414ecca46b1213553589731c82dd2e2
2021-02-22 13:23:48 +02:00
soksanichenko
b3a8c3f28a - Version is bumped
- Changelog is added

Change-Id: Ib1366f1fe2639037db99b8e939537bb63801058e
2021-02-11 14:50:12 +02:00
soksanichenko
5434d24027 LU-2133: Prepare CI for iso builds of CLOSS 8
@BS-TARGET-CL8
@BS-NOBUILD

- Added a script which can collect packages/modules
  from remote repos (including BS repos) and merge them
  into one local repo with the right repodata (including
  modules.yaml.gz)
- The script `create_packages_json` can use regexps for the list of excluded packages

Change-Id: I1365b712460959db6bb451d1199d640bff6ffe5e
2021-02-09 10:47:46 +02:00
soksanichenko
3b5501b4bf LNX-133: Create a server for building nightly builds of AlmaLinux
- Added key argument '--json-output-path' to script `pungi-generate-package-json`

Change-Id: Ic18fa2708cc4913002023828b3be018d4907de25
2021-01-28 14:03:40 +02:00
soksanichenko
cea8d92906 Bump version for setup.py
Change-Id: I980e9ebb728c3a88597c987d585e1b5937499e81
2021-01-28 00:06:40 +02:00
soksanichenko
1a29de435e - Changelog is added
- Version is bumped

Change-Id: I4c7b8d9c64da3379a24d93837657cec2686a8511
2021-01-27 23:47:39 +02:00
soksanichenko
69ed7699e8 LNX-133: Create a server for building nightly builds of AlmaLinux
- Added dependency `python3-dataclasses` to spec

Change-Id: Id6b6f33ca6621ddc1408d9ab51e278801e4dd0a2
2021-01-27 07:47:07 -05:00
Stepan Oksanichenko
103c3dc608 LNX-133: Create a server for building nightly builds of AlmaLinux
- Script `pungi-gather-modules` can find valid *modules.yaml.gz in the repo dirs by itself

@BS-LINKED-5ffda6156f44affc6c5ea239  # pungi & dependencies
@BS-TARGET-CL8

Change-Id: I3cddc0cf41ea1087183e23de39126a52c69bc9ac
2021-01-25 16:17:35 +02:00
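Roughly, that lookup could amount to a recursive glob (illustrative only, not the script's actual code):

    import glob
    import os

    def find_module_yamls(repo_dirs):
        """Find *modules.yaml.gz files anywhere under the given repo dirs."""
        found = []
        for repo_dir in repo_dirs:
            pattern = os.path.join(repo_dir, "**", "*modules.yaml.gz")
            found.extend(glob.glob(pattern, recursive=True))
        return found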
Stepan Oksanichenko
94ad7603b8 LNX-104: Create gather_prepopulate file generator for Pungi
- Added a tool which can generate JSON like `centos-packages.json` using repodata from completed repos.

@BS-LINKED-5ffda6156f44affc6c5ea239  # pungi & dependencies
@BS-TARGET-CL8

Change-Id: Ib0466a1d8e06feb855e81fb7160fe170e2e82e04
2021-01-25 16:17:34 +02:00
oshyshatskyi
903db91c0f LNX-102: Patch pungi tool to use local koji mock
Instead of koji.mbox use local koji-like wrapper.

@BS-LINKED-5ff8b8cb6f44affc6c5e9a7a
@BS-TARGET-CL8

Change-Id: I82a2bc8bc71ae06240656898f3df71bb28bcb9e9
2021-01-25 16:17:33 +02:00
oshyshatskyi
552343fffe LNX-102: Add tool that gathers directory for all rpms
A tool that finds all available rpm files in a directory
and creates a special tree for pungi:
 # ls /mnt/koji/
   i686/  noarch/  x86_64/

Change-Id: Ibcf2d23c46411ad89477058f4d56e07ca117f0d1
2021-01-25 16:17:33 +02:00
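A minimal sketch of building such a tree (it assumes the `rpm` CLI is available; function and path names are illustrative):

    import os
    import shutil
    import subprocess

    def gather_rpms(src_dir, dest_dir):
        for root, _, files in os.walk(src_dir):
            for name in files:
                if not name.endswith(".rpm"):
                    continue
                path = os.path.join(root, name)
                # ask the package itself which architecture it was built for
                arch = subprocess.check_output(
                    ["rpm", "-qp", "--qf", "%{ARCH}", path]
                ).decode().strip()
                target = os.path.join(dest_dir, arch)
                os.makedirs(target, exist_ok=True)
                shutil.copy2(path, target)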
oshyshatskyi
5806217041 LNX-102: Add tool that collects information about modules
Add special tool that gathers given modules.tar.gz files
and collects information about modules into two dirs:
 - module_defaults
 - modules

 The first one is used by pungi during the repocreate phase and
 the second one is used by the koji mock to get the list of
 available modules and their versions.

Change-Id: I50a095a5f3bafa7e7a1effc2c0d4a2fc52ba603b
2021-01-25 16:17:33 +02:00
67eacf8483 LNX-103 Update .spec file for AlmaLinux
New binaries added to pungi rpm:
pungi-gather-rpms
pungi-gather-modules

Change-Id: Idb25dffb10d50fa9f566c99d714d32df962b6f52
2021-01-25 16:17:32 +02:00
Ken Dreyer
38789d07ee doc: remove default createrepo_checksum value from example
createrepo_checksum already defaults to sha256. Remove this setting from
the documented Minimal Example configuration to make it easier to read.

Signed-off-by: Ken Dreyer <kdreyer@redhat.com>
(cherry picked from commit 39b847094a)
2021-01-25 14:06:34 +02:00
Lubomír Sedlář
3735aaa443 comps: Preserve default arg on groupid
When the wrapper processes a comps file, it wasn't emitting the "default"
argument for groupid element. The default is false and most entries are
actually using the default, so let's only emit it if set to true.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1882358
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
(cherry picked from commit 9ea1098eae)
2021-01-25 14:06:33 +02:00
Haibo Lin
2c1603c414 Stop copying .git directory with module defaults
JIRA: RHELCMP-3016
Fixes: https://pagure.io/pungi/issue/1464

Signed-off-by: Haibo Lin <hlin@redhat.com>
(cherry picked from commit f518c1bb7c)
2021-01-25 14:06:33 +02:00
Haibo Lin
f2fd10b0ab React to SIGINT signal
ODCS sends SIGINT signal.

JIRA: RHELCMP-3687
Signed-off-by: Haibo Lin <hlin@redhat.com>
(cherry picked from commit f470599f6c)
2021-01-25 14:06:33 +02:00
Sergey Fokin
ac601ab8ea change Source0 in spec file 2021-01-08 13:30:17 +03:00
oshyshatskyi
757a6ed653 Revert unneeded commit to match upstream sources
This reverts commit b2e439e5

Change-Id: Ia6706415039681a6fe7b5ec6a735c3bda66d6bb1
2020-12-30 13:58:06 +02:00
Oleksandr Shyshatskyi
b2e439e561 current 2020-12-29 10:44:49 +02:00
155 changed files with 9488 additions and 4991 deletions

1715.patch Normal file

@ -0,0 +1,41 @@
From 432b0bce0401c4bbcd1a958a89305c475a794f26 Mon Sep 17 00:00:00 2001
From: Adam Williamson <awilliam@redhat.com>
Date: Jan 19 2024 07:25:09 +0000
Subject: checks: don't require "repo" in the "ostree" schema
Per @siosm in https://pagure.io/pungi-fedora/pull-request/1227
this option "is deprecated and not needed anymore", so Pungi
should not be requiring it.
Merges: https://pagure.io/pungi/pull-request/1714
Signed-off-by: Adam Williamson <awilliam@redhat.com>
---
diff --git a/pungi/checks.py b/pungi/checks.py
index a340f93..db8b297 100644
--- a/pungi/checks.py
+++ b/pungi/checks.py
@@ -1066,7 +1066,6 @@ def make_schema():
"required": [
"treefile",
"config_url",
- "repo",
"ostree_repo",
],
"additionalProperties": False,
diff --git a/pungi/phases/ostree.py b/pungi/phases/ostree.py
index 90578ae..2649cdb 100644
--- a/pungi/phases/ostree.py
+++ b/pungi/phases/ostree.py
@@ -85,7 +85,7 @@ class OSTreeThread(WorkerThread):
comps_repo = compose.paths.work.comps_repo(
"$basearch", variant=variant, create_dir=False
)
- repos = shortcuts.force_list(config["repo"]) + self.repos
+ repos = shortcuts.force_list(config.get("repo", [])) + self.repos
if compose.has_comps:
repos.append(translate_path(compose, comps_repo))
repos = get_repo_dicts(repos, logger=self.pool)

MANIFEST.in

@ -2,6 +2,7 @@ include AUTHORS
include COPYING include COPYING
include GPL include GPL
include pungi.spec include pungi.spec
include setup.cfg
include tox.ini include tox.ini
include share/* include share/*
include share/multilib/* include share/multilib/*

TODO

@ -47,7 +47,6 @@ Split Pungi into smaller well-defined tools
* create install images * create install images
* lorax * lorax
* buildinstall
* create isos * create isos
* isos * isos


@ -0,0 +1,2 @@
# Clean up pungi cache
d /var/cache/pungi/createrepo_c/ - - - 30d

doc/_static/phases.svg vendored

@ -1,22 +1,22 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?> <?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg <svg
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:cc="http://creativecommons.org/ns#"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns="http://www.w3.org/2000/svg"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
width="610.46454" width="610.46454"
height="301.1662" height="327.16599"
viewBox="0 0 610.46457 301.1662" viewBox="0 0 610.46457 327.16599"
id="svg2" id="svg2"
version="1.1" version="1.1"
inkscape:version="1.0.2 (e86c870879, 2021-01-15)" inkscape:version="1.3.2 (091e20e, 2023-11-25)"
sodipodi:docname="phases.svg" sodipodi:docname="phases.svg"
inkscape:export-filename="/home/lsedlar/repos/pungi/doc/_static/phases.png" inkscape:export-filename="/home/lsedlar/repos/pungi/doc/_static/phases.png"
inkscape:export-xdpi="90" inkscape:export-xdpi="90"
inkscape:export-ydpi="90"> inkscape:export-ydpi="90"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:cc="http://creativecommons.org/ns#"
xmlns:dc="http://purl.org/dc/elements/1.1/">
<sodipodi:namedview <sodipodi:namedview
id="base" id="base"
pagecolor="#ffffff" pagecolor="#ffffff"
@ -25,15 +25,15 @@
inkscape:pageopacity="1" inkscape:pageopacity="1"
inkscape:pageshadow="2" inkscape:pageshadow="2"
inkscape:zoom="1.5" inkscape:zoom="1.5"
inkscape:cx="9.4746397" inkscape:cx="268"
inkscape:cy="58.833855" inkscape:cy="260.66667"
inkscape:document-units="px" inkscape:document-units="px"
inkscape:current-layer="layer1" inkscape:current-layer="layer1"
showgrid="false" showgrid="false"
inkscape:window-width="2560" inkscape:window-width="1920"
inkscape:window-height="1376" inkscape:window-height="1027"
inkscape:window-x="0" inkscape:window-x="0"
inkscape:window-y="0" inkscape:window-y="25"
inkscape:window-maximized="1" inkscape:window-maximized="1"
units="px" units="px"
inkscape:document-rotation="0" inkscape:document-rotation="0"
@ -43,7 +43,10 @@
fit-margin-left="7.4" fit-margin-left="7.4"
fit-margin-right="7.4" fit-margin-right="7.4"
fit-margin-bottom="7.4" fit-margin-bottom="7.4"
lock-margins="true" /> lock-margins="true"
inkscape:showpageshadow="2"
inkscape:pagecheckerboard="0"
inkscape:deskcolor="#d1d1d1" />
<defs <defs
id="defs4"> id="defs4">
<marker <marker
@ -70,7 +73,6 @@
<dc:format>image/svg+xml</dc:format> <dc:format>image/svg+xml</dc:format>
<dc:type <dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" /> rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
<dc:title />
</cc:Work> </cc:Work>
</rdf:RDF> </rdf:RDF>
</metadata> </metadata>
@ -103,7 +105,7 @@
style="font-size:13.1479px;line-height:1.25">Pkgset</tspan></text> style="font-size:13.1479px;line-height:1.25">Pkgset</tspan></text>
</g> </g>
<g <g
transform="translate(58.253953,-80.817124)" transform="translate(56.378954,-80.817124)"
id="g3398"> id="g3398">
<rect <rect
y="553.98242" y="553.98242"
@ -301,13 +303,16 @@
</g> </g>
</g> </g>
</g> </g>
<g
id="g2"
transform="translate(-1.4062678e-8,9.3749966)">
<rect <rect
transform="matrix(0,1,1,0,0,0)" transform="matrix(0,1,1,0,0,0)"
style="fill:#e9b96e;fill-rule:evenodd;stroke:none;stroke-width:1.85901px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" style="fill:#e9b96e;fill-rule:evenodd;stroke:none;stroke-width:1.85901px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
id="rect3338-1" id="rect3338-1"
width="90.874992" width="103.12497"
height="115.80065" height="115.80065"
x="872.67383" x="863.29883"
y="486.55563" /> y="486.55563" />
<text <text
id="text3384-0" id="text3384-0"
@ -320,6 +325,7 @@
sodipodi:role="line" sodipodi:role="line"
x="489.56451" x="489.56451"
y="921.73846">ImageChecksum</tspan></text> y="921.73846">ImageChecksum</tspan></text>
</g>
<g <g
transform="translate(-42.209584,-80.817124)" transform="translate(-42.209584,-80.817124)"
id="g3458"> id="g3458">
@ -417,16 +423,16 @@
id="rect290" id="rect290"
width="26.295755" width="26.295755"
height="224.35098" height="224.35098"
x="1063.5973" x="1091.7223"
y="378.43698" y="378.43698"
transform="matrix(0,1,1,0,0,0)" /> transform="matrix(0,1,1,0,0,0)" />
<text <text
xml:space="preserve" xml:space="preserve"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:0%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:0%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="380.74133" x="380.74133"
y="1080.3723" y="1106.6223"
id="text294"><tspan id="text294"><tspan
y="1080.3723" y="1106.6223"
x="380.74133" x="380.74133"
sodipodi:role="line" sodipodi:role="line"
id="tspan301" id="tspan301"
@ -454,32 +460,9 @@
y="1069.0087" y="1069.0087"
id="tspan3812">ExtraIsos</tspan></text> id="tspan3812">ExtraIsos</tspan></text>
</g> </g>
<g
id="g1031"
transform="translate(-40.740337,29.23522)">
<rect
transform="matrix(0,1,1,0,0,0)"
style="fill:#5ed4ec;fill-opacity:1;fill-rule:evenodd;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
id="rect206"
width="26.295755"
height="102.36562"
x="1066.8611"
y="418.66275" />
<text
id="text210"
y="1084.9105"
x="421.51923"
style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
xml:space="preserve"><tspan
y="1084.9105"
x="421.51923"
id="tspan208"
sodipodi:role="line"
style="font-size:13.1479px;line-height:1.25">Repoclosure</tspan></text>
</g>
<rect <rect
y="377.92242" y="377.92242"
x="1096.0963" x="1122.3463"
height="224.24059" height="224.24059"
width="26.295755" width="26.295755"
id="rect87" id="rect87"
@ -489,17 +472,18 @@
xml:space="preserve" xml:space="preserve"
style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="380.7789" x="380.7789"
y="1114.1458" y="1140.3958"
id="text91"><tspan id="text91"><tspan
style="font-size:13.1479px;line-height:1.25" style="font-size:13.1479px;line-height:1.25"
sodipodi:role="line" sodipodi:role="line"
id="tspan89" id="tspan89"
x="380.7789" x="380.7789"
y="1114.1458">Repoclosure</tspan></text> y="1140.3958">Repoclosure</tspan></text>
<g <g
id="g206"> id="g206"
transform="translate(0,-1.8749994)">
<rect <rect
style="fill:#fcaf3e;fill-rule:evenodd;stroke:none;stroke-width:1.00033px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" style="fill:#fcd9a4;fill-opacity:1;fill-rule:evenodd;stroke:none;stroke-width:1.00033px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
id="rect290-6" id="rect290-6"
width="26.295755" width="26.295755"
height="101.91849" height="101.91849"
@ -516,19 +500,25 @@
x="380.23166" x="380.23166"
sodipodi:role="line" sodipodi:role="line"
id="tspan301-5" id="tspan301-5"
style="font-size:12px;line-height:0">OSBuild</tspan></text> style="font-size:12px;line-height:0">KiwiBuild</tspan></text>
</g> </g>
<g
id="g3">
<g
id="g1">
<g
id="g4">
<rect <rect
transform="matrix(0,1,1,0,0,0)" transform="matrix(0,1,1,0,0,0)"
style="fill:#729fcf;fill-rule:evenodd;stroke:none;stroke-width:1.83502px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" style="fill:#729fcf;fill-rule:evenodd;stroke:none;stroke-width:1.83502px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
id="rect3338-1-3" id="rect3338-1-3"
width="88.544876" width="103.12497"
height="115.80065" height="115.80065"
x="970.31763" x="983.44263"
y="486.55563" /> y="486.55563" />
<text <text
id="text3384-0-6" id="text3384-0-6"
y="1018.2172" y="1038.8422"
x="489.56451" x="489.56451"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:0%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:0%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
xml:space="preserve"><tspan xml:space="preserve"><tspan
@ -536,6 +526,32 @@
id="tspan3391-7" id="tspan3391-7"
sodipodi:role="line" sodipodi:role="line"
x="489.56451" x="489.56451"
y="1018.2172">ImageContainer</tspan></text> y="1038.8422">ImageContainer</tspan></text>
</g>
</g>
</g>
<g
id="g206-1"
transform="translate(-0.04628921,28.701853)">
<rect
style="fill:#fcaf3e;fill-rule:evenodd;stroke:none;stroke-width:1.00033px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
id="rect290-6-7"
width="26.295755"
height="101.91849"
x="1032.3469"
y="377.92731"
transform="matrix(0,1,1,0,0,0)" />
<text
xml:space="preserve"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:0%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="380.23166"
y="1049.1219"
id="text294-7-5"><tspan
y="1049.1219"
x="380.23166"
sodipodi:role="line"
id="tspan301-5-5"
style="font-size:12px;line-height:0">OSBuild</tspan></text>
</g>
</g> </g>
</svg> </svg>


doc/conf.py

@ -18,12 +18,12 @@ import os
# If extensions (or modules to document with autodoc) are in another directory, # If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the # add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here. # documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.')) # sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------ # -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here. # If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0' # needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be # Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
@ -31,207 +31,201 @@ import os
extensions = [] extensions = []
# Add any paths that contain templates here, relative to this directory. # Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates'] templates_path = ["_templates"]
# The suffix of source filenames. # The suffix of source filenames.
source_suffix = '.rst' source_suffix = ".rst"
# The encoding of source files. # The encoding of source files.
#source_encoding = 'utf-8-sig' # source_encoding = 'utf-8-sig'
# The master toctree document. # The master toctree document.
master_doc = 'index' master_doc = "index"
# General information about the project. # General information about the project.
project = u'Pungi' project = "Pungi"
copyright = u'2016, Red Hat, Inc.' copyright = "2016, Red Hat, Inc."
# The version info for the project you're documenting, acts as replacement for # The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the # |version| and |release|, also used in various other places throughout the
# built documents. # built documents.
# #
# The short X.Y version. # The short X.Y version.
version = '4.3' version = "4.7"
# The full version, including alpha/beta/rc tags. # The full version, including alpha/beta/rc tags.
release = '4.3.6' release = "4.7.0"
# The language for content autogenerated by Sphinx. Refer to documentation # The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages. # for a list of supported languages.
#language = None # language = None
# There are two options for replacing |today|: either, you set today to some # There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used: # non-false value, then it is used:
#today = '' # today = ''
# Else, today_fmt is used as the format for a strftime call. # Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y' # today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and # List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files. # directories to ignore when looking for source files.
exclude_patterns = ['_build'] exclude_patterns = ["_build"]
# The reST default role (used for this markup: `text`) to use for all # The reST default role (used for this markup: `text`) to use for all
# documents. # documents.
#default_role = None # default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text. # If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True # add_function_parentheses = True
# If true, the current module name will be prepended to all description # If true, the current module name will be prepended to all description
# unit titles (such as .. function::). # unit titles (such as .. function::).
#add_module_names = True # add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the # If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default. # output. They are ignored by default.
#show_authors = False # show_authors = False
# The name of the Pygments (syntax highlighting) style to use. # The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx' pygments_style = "sphinx"
# A list of ignored prefixes for module index sorting. # A list of ignored prefixes for module index sorting.
#modindex_common_prefix = [] # modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents. # If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False # keep_warnings = False
# -- Options for HTML output ---------------------------------------------- # -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for # The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes. # a list of builtin themes.
html_theme = 'default' html_theme = "default"
# Theme options are theme-specific and customize the look and feel of a theme # Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the # further. For a list of options available for each theme, see the
# documentation. # documentation.
#html_theme_options = {} # html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory. # Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = [] # html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to # The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation". # "<project> v<release> documentation".
#html_title = None # html_title = None
# A shorter title for the navigation bar. Default is the same as html_title. # A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None # html_short_title = None
# The name of an image file (relative to this directory) to place at the top # The name of an image file (relative to this directory) to place at the top
# of the sidebar. # of the sidebar.
#html_logo = None # html_logo = None
# The name of an image file (within the static path) to use as favicon of the # The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large. # pixels large.
#html_favicon = None # html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here, # Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files, # relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css". # so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static'] html_static_path = ["_static"]
# Add any extra paths that contain custom files (such as robots.txt or # Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied # .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation. # directly to the root of the documentation.
#html_extra_path = [] # html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format. # using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y' # html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to # If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities. # typographically correct entities.
#html_use_smartypants = True # html_use_smartypants = True
# Custom sidebar templates, maps document names to template names. # Custom sidebar templates, maps document names to template names.
#html_sidebars = {} # html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to # Additional templates that should be rendered to pages, maps page names to
# template names. # template names.
#html_additional_pages = {} # html_additional_pages = {}
# If false, no module index is generated. # If false, no module index is generated.
#html_domain_indices = True # html_domain_indices = True
# If false, no index is generated. # If false, no index is generated.
#html_use_index = True # html_use_index = True
# If true, the index is split into individual pages for each letter. # If true, the index is split into individual pages for each letter.
#html_split_index = False # html_split_index = False
# If true, links to the reST sources are added to the pages. # If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True # html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True. # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True # html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True # html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will # If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the # contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served. # base URL from which the finished HTML is served.
#html_use_opensearch = '' # html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml"). # This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None # html_file_suffix = None
# Output file base name for HTML help builder. # Output file base name for HTML help builder.
htmlhelp_basename = 'Pungidoc' htmlhelp_basename = "Pungidoc"
# -- Options for LaTeX output --------------------------------------------- # -- Options for LaTeX output ---------------------------------------------
latex_elements = { latex_elements = {
# The paper size ('letterpaper' or 'a4paper'). # The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper', #'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# The font size ('10pt', '11pt' or '12pt'). #'pointsize': '10pt',
#'pointsize': '10pt', # Additional stuff for the LaTeX preamble.
#'preamble': '',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
} }
# Grouping the document tree into LaTeX files. List of tuples # Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, # (source start file, target name, title,
# author, documentclass [howto, manual, or own class]). # author, documentclass [howto, manual, or own class]).
latex_documents = [ latex_documents = [
('index', 'Pungi.tex', u'Pungi Documentation', ("index", "Pungi.tex", "Pungi Documentation", "Daniel Mach", "manual"),
u'Daniel Mach', 'manual'),
] ]
# The name of an image file (relative to this directory) to place at the top of # The name of an image file (relative to this directory) to place at the top of
# the title page. # the title page.
#latex_logo = None # latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts, # For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters. # not chapters.
#latex_use_parts = False # latex_use_parts = False
# If true, show page references after internal links. # If true, show page references after internal links.
#latex_show_pagerefs = False # latex_show_pagerefs = False
# If true, show URL addresses after external links. # If true, show URL addresses after external links.
#latex_show_urls = False # latex_show_urls = False
# Documents to append as an appendix to all manuals. # Documents to append as an appendix to all manuals.
#latex_appendices = [] # latex_appendices = []
# If false, no module index is generated. # If false, no module index is generated.
#latex_domain_indices = True # latex_domain_indices = True
# -- Options for manual page output --------------------------------------- # -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples # One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section). # (source start file, name, description, authors, manual section).
man_pages = [ man_pages = [("index", "pungi", "Pungi Documentation", ["Daniel Mach"], 1)]
('index', 'pungi', u'Pungi Documentation',
[u'Daniel Mach'], 1)
]
# If true, show URL addresses after external links. # If true, show URL addresses after external links.
#man_show_urls = False # man_show_urls = False
# -- Options for Texinfo output ------------------------------------------- # -- Options for Texinfo output -------------------------------------------
@ -240,19 +234,25 @@ man_pages = [
# (source start file, target name, title, author, # (source start file, target name, title, author,
# dir menu entry, description, category) # dir menu entry, description, category)
texinfo_documents = [ texinfo_documents = [
('index', 'Pungi', u'Pungi Documentation', (
u'Daniel Mach', 'Pungi', 'One line description of project.', "index",
'Miscellaneous'), "Pungi",
"Pungi Documentation",
"Daniel Mach",
"Pungi",
"One line description of project.",
"Miscellaneous",
),
] ]
# Documents to append as an appendix to all manuals. # Documents to append as an appendix to all manuals.
#texinfo_appendices = [] # texinfo_appendices = []
# If false, no module index is generated. # If false, no module index is generated.
#texinfo_domain_indices = True # texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'. # How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote' # texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu. # If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False # texinfo_no_detailmenu = False

doc/configuration.rst

@ -194,6 +194,17 @@ Options
Tracking Service Kerberos authentication. If not defined, the default Tracking Service Kerberos authentication. If not defined, the default
Kerberos principal is used. Kerberos principal is used.
**cts_oidc_token_url**
(*str*) -- URL to the OIDC token endpoint.
For example ``https://oidc.example.com/openid-connect/token``.
This option can be overridden by the environment variable ``CTS_OIDC_TOKEN_URL``.
**cts_oidc_client_id**
(*str*) -- OIDC client ID.
This option can be overridden by the environment variable ``CTS_OIDC_CLIENT_ID``.
Note that environment variable ``CTS_OIDC_CLIENT_SECRET`` must be configured with
corresponding client secret to authenticate to CTS via OIDC.
**compose_type** **compose_type**
(*str*) -- Allows to set default compose type. Type set via a command-line (*str*) -- Allows to set default compose type. Type set via a command-line
option overwrites this. option overwrites this.
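Illustratively, the new CTS OIDC options would be set like this in a compose config (the URL and client ID are placeholders; the client secret stays in the environment)::

    cts_oidc_token_url = "https://oidc.example.com/openid-connect/token"
    cts_oidc_client_id = "pungi-compose"
    # export CTS_OIDC_CLIENT_SECRET=... before starting the compose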
@ -281,8 +292,8 @@ There a couple common format specifiers available for both the options:
format string. The pattern should not overlap, otherwise it is undefined format string. The pattern should not overlap, otherwise it is undefined
which one will be used. which one will be used.
This format will be used for all phases generating images. Currently that This format will be used for some phases generating images. Currently that
means ``createiso``, ``live_images`` and ``buildinstall``. means ``createiso``, ``buildinstall`` and ``ostree_installer``.
Available extra keys are: Available extra keys are:
* ``disc_num`` * ``disc_num``
@ -312,7 +323,6 @@ There a couple common format specifiers available for both the options:
Available keys are: Available keys are:
* ``boot`` -- for ``boot.iso`` images created in *buildinstall* phase * ``boot`` -- for ``boot.iso`` images created in *buildinstall* phase
* ``live`` -- for images created by *live_images* phase
* ``dvd`` -- for images created by *createiso* phase * ``dvd`` -- for images created by *createiso* phase
* ``ostree`` -- for ostree installer images * ``ostree`` -- for ostree installer images
@ -340,48 +350,10 @@ Example
disc_types = { disc_types = {
'boot': 'netinst', 'boot': 'netinst',
'live': 'Live',
'dvd': 'DVD', 'dvd': 'DVD',
} }
Signing
=======
If you want to sign deliverables generated during pungi run like RPM wrapped
images. You must provide few configuration options:
**signing_command** [optional]
(*str*) -- Command that will be run with a koji build as a single
argument. This command must not require any user interaction.
If you need to pass a password for a signing key to the command,
do this via command line option of the command and use string
formatting syntax ``%(signing_key_password)s``.
(See **signing_key_password_file**).
**signing_key_id** [optional]
(*str*) -- ID of the key that will be used for the signing.
This ID will be used when crafting koji paths to signed files
(``kojipkgs.fedoraproject.org/packages/NAME/VER/REL/data/signed/KEYID/..``).
**signing_key_password_file** [optional]
(*str*) -- Path to a file with password that will be formatted
into **signing_command** string via ``%(signing_key_password)s``
string format syntax (if used).
Because pungi config is usually stored in git and is part of compose
logs we don't want password to be included directly in the config.
Note: If ``-`` string is used instead of a filename, then you will be asked
for the password interactivelly right after pungi starts.
Example
-------
::
signing_command = '~/git/releng/scripts/sigulsign_unsigned.py -vv --password=%(signing_key_password)s fedora-24'
signing_key_id = '81b46521'
signing_key_password_file = '~/password_for_fedora-24_key'
.. _git-urls: .. _git-urls:
Git URLs Git URLs
@ -581,6 +553,16 @@ Options
with everything. Set this option to ``False`` to ignore ``noarch`` in with everything. Set this option to ``False`` to ignore ``noarch`` in
``ExclusiveArch`` and always consider only binary architectures. ``ExclusiveArch`` and always consider only binary architectures.
**pkgset_inherit_exclusive_arch_to_noarch** = True
(*bool*) -- When set to ``True``, the value of ``ExclusiveArch`` or
``ExcludeArch`` will be copied from source rpm to all its noarch packages.
That will then limit which architectures the noarch packages can be
included in.
By setting this option to ``False`` this step is skipped, and noarch
packages will by default land in all architectures. They can still be
excluded by listing them in a relevant section of ``filter_packages``.
**pkgset_allow_reuse** = True **pkgset_allow_reuse** = True
(*bool*) -- When set to ``True``, *Pungi* will try to reuse pkgset data (*bool*) -- When set to ``True``, *Pungi* will try to reuse pkgset data
from the old composes specified by ``--old-composes``. When enabled, this from the old composes specified by ``--old-composes``. When enabled, this
@ -621,7 +603,7 @@ Options
------- -------
**buildinstall_method** **buildinstall_method**
(*str*) -- "lorax" (f16+, rhel7+) or "buildinstall" (older releases) (*str*) -- "lorax" (f16+, rhel7+)
**lorax_options** **lorax_options**
(*list*) -- special options passed on to *lorax*. (*list*) -- special options passed on to *lorax*.
@ -920,6 +902,10 @@ Options
comps file can not be found in the package set. When disabled (the comps file can not be found in the package set. When disabled (the
default), such cases are still reported as warnings in the log. default), such cases are still reported as warnings in the log.
With ``dnf`` gather backend, this option will abort the compose on any
missing package no matter if it's listed in comps, ``additional_packages``
or prepopulate file.
**gather_source_mapping** **gather_source_mapping**
(*str*) -- JSON mapping with initial packages for the compose. The value (*str*) -- JSON mapping with initial packages for the compose. The value
should be a path to JSON file with following mapping: ``{variant: {arch: should be a path to JSON file with following mapping: ``{variant: {arch:
@ -1343,8 +1329,8 @@ All non-``RC`` milestones from label get appended to the version. For release
either label is used or date, type and respin. either label is used or date, type and respin.
Common options for Live Images, Live Media and Image Build Common options for Live Media and Image Build
========================================================== =============================================
All images can have ``ksurl``, ``version``, ``release`` and ``target`` All images can have ``ksurl``, ``version``, ``release`` and ``target``
specified. Since this can create a lot of duplication, there are global options specified. Since this can create a lot of duplication, there are global options
@ -1360,14 +1346,12 @@ The kickstart URL is configured by these options.
* ``global_ksurl`` -- global fallback setting
* ``live_media_ksurl``
* ``image_build_ksurl``
* ``live_images_ksurl``
Target is specified by these settings.
* ``global_target`` -- global fallback setting
* ``live_media_target``
* ``image_build_target``
* ``live_images_target``
* ``osbuild_target``
Version is specified by these options. If no version is set, a default value
@ -1376,7 +1360,6 @@ will be provided according to :ref:`automatic versioning <auto-version>`.
* ``global_version`` -- global fallback setting
* ``live_media_version``
* ``image_build_version``
* ``live_images_version``
* ``osbuild_version``
Release is specified by these options. If set to a magic value to
@ -1386,44 +1369,14 @@ to :ref:`automatic versioning <auto-version>`.
* ``global_release`` -- global fallback setting
* ``live_media_release``
* ``image_build_release``
* ``live_images_release``
* ``osbuild_release``
Each configuration block can also optionally specify a ``failable`` key. It
should be a list of strings containing architectures that are optional. If any
deliverable fails on an optional architecture, it will not abort the whole
compose. If the list contains only ``"*"``, all arches will be substituted.
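For illustration, a minimal sketch of how the global fallbacks and ``failable``
combine (all values are made up and the keys shown are not exhaustive):
::

    global_ksurl = "git://git.example.com/kickstarts.git?#HEAD"
    global_target = "f40"
    global_version = "Rawhide"

    live_media = {
        "^Workstation$": [
            {
                "name": "Fedora-Workstation-Live",
                "kickstart": "fedora-live-workstation.ks",
                # aarch64 is optional; a failure there will not abort the compose
                "failable": ["aarch64"],
            }
        ]
    }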
Live Images Settings
====================
**live_images**
(*list*) -- Configuration for the particular image. The elements of the
list should be tuples ``(variant_uid_regex, {arch|*: config})``. The config
should be a dict with these keys:
* ``kickstart`` (*str*)
* ``ksurl`` (*str*) [optional] -- where to get the kickstart from
* ``name`` (*str*)
* ``version`` (*str*)
* ``target`` (*str*)
* ``repo`` (*str|[str]*) -- repos specified by URL or variant UID
* ``specfile`` (*str*) -- for images wrapped in RPM
* ``scratch`` (*bool*) -- only RPM-wrapped images can use scratch builds,
but by default this is turned off
* ``type`` (*str*) -- what kind of task to start in Koji. Defaults to
``live`` meaning ``koji spin-livecd`` will be used. Alternative option
is ``appliance`` corresponding to ``koji spin-appliance``.
* ``sign`` (*bool*) -- only RPM-wrapped images can be signed
**live_images_no_rename**
(*bool*) -- When set to ``True``, filenames generated by Koji will be used.
When ``False``, filenames will be generated based on ``image_name_format``
configuration option.
Live Media Settings
===================
@ -1579,6 +1532,61 @@ Example
}
KiwiBuild Settings
==================
**kiwibuild**
(*dict*) -- configuration for building images with kiwi via a Koji plugin.
Pungi will trigger a Koji task delegating to kiwi, which will build the image
and import it to Koji via content generators.
Format: ``{variant_uid_regex: [{...}]}``.
Required keys in the configuration dict:
* ``kiwi_profile`` -- (*str*) select profile from description file.
Description scm, description path and target have to be provided too, but
instead of specifying them for each image separately, you can use the
``kiwibuild_*`` options or ``global_target``.
Optional keys:
* ``description_scm`` -- (*str*) scm URL of the kiwi description repository.
* ``description_path`` -- (*str*) path to kiwi description inside the scm
repo.
* ``repos`` -- additional repos used to install RPMs in the image. The
compose repository for the enclosing variant is added automatically.
Either variant name or a URL is supported.
* ``target`` -- (*str*) which build target to use for the task. If not
provided, then either ``kiwibuild_target`` or ``global_target`` is
needed.
* ``release`` -- (*str*) release of the output image.
* ``arches`` -- (*[str]*) List of architectures to build for. If not
provided, all variant architectures will be built.
* ``failable`` -- (*[str]*) List of architectures for which this
deliverable is not release blocking.
* ``type`` -- (*str*) override default type from the bundle with this value.
* ``type_attr`` -- (*[str]*) override default attributes for the build type
from description.
* ``bundle_name_format`` -- (*str*) override the default bundle name format.
**kiwibuild_description_scm**
(*str*) -- URL for scm containing the description files
**kiwibuild_description_path**
(*str*) -- path to a description file within the description scm
**kiwibuild_type**
(*str*) -- override default type from the bundle with this value.
**kiwibuild_type_attr**
(*[str]*) -- override default attributes for the build type from description.
**kiwibuild_bundle_name_format**
(*str*) -- override the default bundle name format.
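A minimal configuration sketch combining the per-image and phase-level options
(the profile name, URL and target below are illustrative assumptions, not
defaults):
::

    kiwibuild_description_scm = "https://git.example.com/kiwi-descriptions.git"
    kiwibuild_description_path = "fedora.kiwi"
    kiwibuild_target = "f40-image-build"

    kiwibuild = {
        "^Cloud$": [
            {
                # hypothetical profile defined in the description above
                "kiwi_profile": "Cloud-Base",
                "repos": ["Everything"],
                "failable": ["aarch64"],
            }
        ]
    }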
OSBuild Composer for building images
====================================
@ -1607,16 +1615,37 @@ OSBuild Composer for building images
* ``release`` -- release part of the final NVR. If neither this option nor
the global ``osbuild_release`` is set, Koji will automatically generate a
value.
* ``repo`` -- a list of repositories from which to consume packages for
building the image. By default only the variant repository is used. See
the sketch after this option list.
The list items may use one of the following formats:
* String with just the repository URL.
* Dictionary with the following keys:
* ``baseurl`` -- URL of the repository.
* ``package_sets`` -- a list of package set names to use for this
repository. Package sets are an internal concept of Image Builder
and are used in image definitions. If specified, the repository is
used by Image Builder only for the pipeline with the same name.
For example, specifying the ``build`` package set name will make
the repository be used only for the build environment in which
the image will be built. (optional)
* ``arches`` -- list of architectures for which to build the image. By
default, the variant arches are used. This option can only restrict it,
not add a new one.
* ``manifest_type`` -- the image type that is put into the manifest by
pungi. If not supplied then it is autodetected from the Koji output.
* ``ostree_url`` -- URL of the repository that's used to fetch the parent
commit from.
* ``ostree_ref`` -- name of the ostree branch
* ``ostree_parent`` -- commit hash or a branch-like reference to the
parent commit.
* ``customizations`` -- a dictionary with customizations to use for the
image build. For the list of supported customizations, see the **hosted**
variants in the `Image Builder documentation
<https://osbuild.org/docs/user-guide/blueprint-reference#installation-device>`_.
* ``upload_options`` -- a dictionary with upload options specific to the
target cloud environment. If provided, the image will be uploaded to the
cloud environment, in addition to the Koji server. One can't combine
@ -1641,13 +1670,13 @@ OSBuild Composer for building images
* ``tenant_id`` -- Azure tenant ID to upload the image to
* ``subscription_id`` -- Azure subscription ID to upload the image to
* ``resource_group`` -- Azure resource group to upload the image to
* ``location`` -- Azure location of the resource group (optional)
* ``image_name`` -- Image name of the uploaded Azure image (optional)
* **GCP upload options** -- upload to Google Cloud Platform.
* ``region`` -- GCP region to upload the image to
* ``bucket`` -- GCP bucket to upload the image to (optional)
* ``share_with_accounts`` -- list of GCP accounts to share the image
with
* ``image_name`` -- Image name of the uploaded GCP image (optional)
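A sketch of the two ``repo`` item formats described above (the URLs and the
``build`` package set name are illustrative; other required image keys are
omitted):
::

    osbuild = {
        "^Cloud$": [
            {
                # other required keys (name, distro, image_types, ...) omitted
                "repo": [
                    # plain URL form
                    "http://example.com/repo/x86_64/os",
                    # dict form, restricted to the hypothetical "build" pipeline
                    {
                        "baseurl": "http://example.com/buildroot/x86_64/os",
                        "package_sets": ["build"],
                    },
                ],
            }
        ]
    }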
@ -1724,16 +1753,16 @@ another directory. Any new packages in the compose will be added to the
repository with a new commit.
**ostree**
(*dict*) -- a mapping of configuration for each variant. The format should
be ``{variant_uid_regex: config_dict}``. It is possible to use a list of
configuration dicts as well.
The configuration dict for each variant arch pair must have these keys:
* ``treefile`` -- (*str*) Filename of configuration for ``rpm-ostree``.
* ``config_url`` -- (*str*) URL for Git repository with the ``treefile``.
* ``repo`` -- (*str|dict|[str|dict]*) repos specified by URL or a dict of
repo options, ``baseurl`` is required in the dict.
* ``ostree_repo`` -- (*str*) Where to put the ostree repository
These keys are optional:
@ -1764,6 +1793,8 @@ repository with a new commit.
* ``tag_ref`` -- (*bool*, default ``True``) If set to ``False``, a git
reference will not be created.
* ``ostree_ref`` -- (*str*) To override value ``ref`` from ``treefile``.
* ``runroot_packages`` -- (*list*) A list of additional package names to be
installed in the runroot environment in Koji.
Example config
--------------
@ -1773,13 +1804,11 @@ Example config
"^Atomic$": { "^Atomic$": {
"treefile": "fedora-atomic-docker-host.json", "treefile": "fedora-atomic-docker-host.json",
"config_url": "https://git.fedorahosted.org/git/fedora-atomic.git", "config_url": "https://git.fedorahosted.org/git/fedora-atomic.git",
"keep_original_sources": True,
"repo": [ "repo": [
"Server",
"http://example.com/repo/x86_64/os", "http://example.com/repo/x86_64/os",
{"baseurl": "Everything"},
{"baseurl": "http://example.com/linux/repo", "exclude": "systemd-container"}, {"baseurl": "http://example.com/linux/repo", "exclude": "systemd-container"},
], ],
"keep_original_sources": True,
"ostree_repo": "/mnt/koji/compose/atomic/Rawhide/", "ostree_repo": "/mnt/koji/compose/atomic/Rawhide/",
"update_summary": True, "update_summary": True,
# Automatically generate a reasonable version # Automatically generate a reasonable version
@ -1795,6 +1824,79 @@ Example config
has the pungi_ostree plugin installed.
OSTree Native Container Settings
================================
The ``ostree_container`` phase of *Pungi* can create an ostree native container
image as an OCI archive. This is done by running ``rpm-ostree compose image``
in a Koji runroot environment.
While rpm-ostree can use information from previously built images to improve
the split into container layers, we cannot use that functionality until
https://github.com/containers/skopeo/pull/2114 is resolved. Each invocation
will thus create a new OCI archive image *from scratch*.
**ostree_container**
(*dict*) -- a mapping of configuration for each variant. The format should
be ``{variant_uid_regex: config_dict}``. It is possible to use a list of
configuration dicts as well.
The configuration dict for each variant arch pair must have these keys:
* ``treefile`` -- (*str*) Filename of configuration for ``rpm-ostree``.
* ``config_url`` -- (*str*) URL for Git repository with the ``treefile``.
These keys are optional:
* ``repo`` -- (*str|dict|[str|dict]*) repos specified by URL or a dict of
repo options, ``baseurl`` is required in the dict.
* ``keep_original_sources`` -- (*bool*) Keep the existing source repos in
the tree config file. If not enabled, all the original source repos will
be removed from the tree config file.
* ``config_branch`` -- (*str*) Git branch of the repo to use. Defaults to
``main``.
* ``arches`` -- (*[str]*) List of architectures for which to generate
ostree native container images. There will be one task per architecture.
By default all architectures in the variant are used.
* ``failable`` -- (*[str]*) List of architectures for which this
deliverable is not release blocking.
* ``version`` -- (*str*) Version string to be added to the OCI archive name.
If this option is set to ``!OSTREE_VERSION_FROM_LABEL_DATE_TYPE_RESPIN``,
a value will be generated automatically as ``$VERSION.$RELEASE``.
If this option is set to ``!VERSION_FROM_VERSION_DATE_RESPIN``,
a value will be generated automatically as ``$VERSION.$DATE.$RESPIN``.
:ref:`See how those values are created <auto-version>`.
* ``tag_ref`` -- (*bool*, default ``True``) If set to ``False``, a git
reference will not be created.
* ``runroot_packages`` -- (*list*) A list of additional package names to be
installed in the runroot environment in Koji.
Example config
--------------
::
ostree_container = {
"^Sagano$": {
"treefile": "fedora-tier-0-38.yaml",
"config_url": "https://gitlab.com/CentOS/cloud/sagano.git",
"config_branch": "main",
"repo": [
"http://example.com/repo/x86_64/os",
{"baseurl": "http://example.com/linux/repo", "exclude": "systemd-container"},
],
# Automatically generate a reasonable version
"version": "!OSTREE_VERSION_FROM_LABEL_DATE_TYPE_RESPIN",
# Only run this for x86_64 even if Sagano has more arches
"arches": ["x86_64"],
}
}
**ostree_container_use_koji_plugin** = False
(*bool*) -- When set to ``True``, the Koji pungi_ostree task will be
used to execute rpm-ostree instead of runroot. Use only if the Koji instance
has the pungi_ostree plugin installed.
Ostree Installer Settings
=========================
@ -2145,9 +2247,9 @@ Miscellaneous Settings
format string accepting ``%(variant_name)s`` and ``%(arch)s`` placeholders.
**symlink_isos_to**
(*str*) -- If set, the ISO files from ``buildinstall`` and ``createiso``
phases will be put into this destination, and a symlink pointing to this
location will be created in the actual compose directory.
**dogpile_cache_backend**
(*str*) -- If set, Pungi will use the configured Dogpile cache backend to
@ -294,30 +294,6 @@ This is a shortened configuration for Fedora Rawhide compose as of 2019-10-14.
})
]
live_target = 'f32'
live_images_no_rename = True
live_images = [
('^Workstation$', {
'armhfp': {
'kickstart': 'fedora-arm-workstation.ks',
'name': 'Fedora-Workstation-armhfp',
# Again workstation takes packages from Everything.
'repo': 'Everything',
'type': 'appliance',
'failable': True,
}
}),
('^Server$', {
# But Server has its own repo.
'armhfp': {
'kickstart': 'fedora-arm-server.ks',
'name': 'Fedora-Server-armhfp',
'type': 'appliance',
'failable': True,
}
}),
]
ostree = {
"^Silverblue$": {
"version": "!OSTREE_VERSION_FROM_LABEL_DATE_TYPE_RESPIN",
@ -343,6 +319,20 @@ This is a shortened configuration for Fedora Rawhide compose as of 2019-10-14.
}
}
ostree_container = {
"^Sagano$": {
"treefile": "fedora-tier-0-38.yaml",
"config_url": "https://gitlab.com/CentOS/cloud/sagano.git",
"config_branch": "main",
# Consume packages from Everything
"repo": "Everything",
# Automatically generate a reasonable version
"version": "!OSTREE_VERSION_FROM_LABEL_DATE_TYPE_RESPIN",
# Only run this for x86_64 even if Sagano has more arches
"arches": ["x86_64"],
}
}
ostree_installer = [
("^Silverblue$", {
"x86_64": {
@ -19,7 +19,7 @@ Contents:
scm_support
messaging
gathering
koji
comps
contributing
testing
multi_compose
doc/koji.rst Normal file
@ -0,0 +1,105 @@
======================
Getting data from koji
======================
When Pungi is configured to get packages from a Koji tag, it somehow needs to
access the actual RPM files.
Historically, this required the storage used by Koji to be directly available
on the host where Pungi was running. This was usually achieved by using NFS for
the Koji volume, and mounting it on the compose host.
The compose could be created directly on the same volume. In such case the
packages would be hardlinked, significantly reducing space consumption.
The compose could also be created on a different storage, in which case the
packages would either need to be copied over or symlinked. Using symlinks
requires that anything that accesses the compose (e.g. a download server)
also mounts the Koji volume in the same location.
There is also a risk with symlinks that the package in Koji can change (due to
being resigned for example), which would invalidate composes linking to it.
Using Koji without direct mount
===============================
It is possible now to run a compose from a Koji tag without direct access to
Koji storage.
Pungi can download the packages over HTTP protocol, store them in a local
cache, and consume them from there.
The local cache has similar structure to what is on the Koji volume.
When Pungi needs a package, it knows its path on the Koji volume. It will
replace the ``topdir`` with the cache location. If such a file exists, it
will be used. If it doesn't exist, it will be downloaded from Koji (by
replacing the ``topdir`` with ``topurl``).
::
Koji path /mnt/koji/packages/foo/1/1.fc38/data/signed/abcdef/noarch/foo-1-1.fc38.noarch.rpm
Koji URL https://kojipkgs.fedoraproject.org/packages/foo/1/1.fc38/data/signed/abcdef/noarch/foo-1-1.fc38.noarch.rpm
Local path /mnt/compose/cache/packages/foo/1/1.fc38/data/signed/abcdef/noarch/foo-1-1.fc38.noarch.rpm
The packages can be hardlinked from this cache directory.
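A sketch of that substitution in Python (the values are illustrative; the real
``topdir`` and ``topurl`` come from the Koji profile, and the cache location
from the ``koji_cache`` option):
::

    KOJI_TOPDIR = "/mnt/koji"
    KOJI_TOPURL = "https://kojipkgs.fedoraproject.org"
    KOJI_CACHE = "/mnt/compose/cache"

    def local_path(koji_path):
        # Where the package is (or will be) stored in the local cache.
        return koji_path.replace(KOJI_TOPDIR, KOJI_CACHE, 1)

    def download_url(koji_path):
        # Where to download the package from on a cache miss.
        return koji_path.replace(KOJI_TOPDIR, KOJI_TOPURL, 1)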
Cleanup
-------
While the approach above allows each RPM to be downloaded only once, it will
eventually result in the Koji volume being mirrored locally. Most of the
packages will however no longer be needed.
There is a script ``pungi-cache-cleanup`` that can help with that. It can find
and remove files from the cache that are no longer needed.
A file is no longer needed if it has a single link (meaning it is only in the
cache, not in any compose), and it has mtime older than a given threshold.
It doesn't make sense to delete files that are hardlinked in an existing
compose as it would not save any space anyway.
The mtime check is meant to preserve files that are downloaded but not actually
used in a compose, like a subpackage that is not included in any variant. Every
time its existence in the local cache is checked, the mtime is updated.
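A minimal sketch of that rule (the actual ``pungi-cache-cleanup`` script may
differ in details):
::

    import os
    import time

    def cleanup_cache(cache_root, max_age_days):
        cutoff = time.time() - max_age_days * 24 * 3600
        for dirpath, _dirnames, filenames in os.walk(cache_root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                st = os.stat(path)
                # One link: the file exists only in the cache, not in any
                # compose. Old mtime: nothing has asked for it recently.
                if st.st_nlink == 1 and st.st_mtime < cutoff:
                    os.unlink(path)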
Race conditions?
----------------
It should be safe to have multiple compose hosts share the same storage volume
for generated composes and local cache.
If a cache file is accessed and it exists, there's no risk of race condition.
If two composes need the same file at the same time and it is not present yet,
one of them will take a lock on it and start downloading. The other will wait
until the download is finished.
The lock is only valid for a set amount of time (5 minutes) to avoid issues
where the downloading process is killed in a way that blocks it from releasing
the lock.
If the file is large and the network slow, the limit may not be enough to
finish downloading. In that case the second process will steal the lock while
the first process is still downloading. This will result in the same file
being downloaded twice.
When the first process finishes the download, it will put the file into the
local cache location. When the second process finishes, it will atomically
replace it, but since both downloads produced identical content, nothing
effectively changes.
If the first compose already managed to hardlink the file before it gets
replaced, there will be two copies of the file present locally.
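A sketch of the lock-with-expiry idea described above (the lock file layout is
an assumption; Pungi's actual implementation may differ):
::

    import os
    import time

    LOCK_TTL = 5 * 60  # the five minute validity mentioned above

    def acquire_lock(path):
        lock = path + ".lock"
        while True:
            try:
                fd = os.open(lock, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
                os.close(fd)
                return
            except FileExistsError:
                try:
                    age = time.time() - os.stat(lock).st_mtime
                except FileNotFoundError:
                    continue  # the lock was just released, retry
                if age > LOCK_TTL:
                    os.utime(lock)  # consider it stale and steal it
                    return
                time.sleep(1)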
Integrity checking
------------------
There is minimal integrity checking. RPM packages belonging to real builds
will be checked to match the checksum provided by the Koji hub.
There is no checking for scratch builds or any images.
@ -1,107 +0,0 @@
.. _multi_compose:
Managing compose from multiple parts
====================================
There may be cases where it makes sense to split a big compose into separate
parts, but create a compose output that links all output into one familiar
structure.
The `pungi-orchestrate` tools allows that.
It works with an INI-style configuration file. The ``[general]`` section
contains information about identity of the main compose. Other sections define
individual parts.
The parts are scheduled to run in parallel, with the minimal amount of
serialization. The final compose directory will contain hard-links to the
files.
General settings
----------------
**target**
Path to directory where the final compose should be created.
**compose_type**
Type of compose to make.
**release_name**
Name of the product for the final compose.
**release_short**
Short name of the product for the final compose.
**release_version**
Version of the product for the final compose.
**release_type**
Type of the product for the final compose.
**extra_args**
Additional arguments that will be passed to the child Pungi processes.
**koji_profile**
If specified, a current event will be retrieved from the Koji instance and
used for all parts.
**kerberos**
If set to yes, a kerberos ticket will be automatically created at the start.
Set keytab and principal as well.
**kerberos_keytab**
Path to keytab file used to create the kerberos ticket.
**kerberos_principal**
Kerberos principal for the ticket
**pre_compose_script**
Commands to execute before first part is started. Can contain multiple
commands on separate lines.
**post_compose_script**
Commands to execute after the last part finishes and final status is
updated. Can contain multiple commands on separate lines. ::
post_compose_script =
compose-latest-symlink $COMPOSE_PATH
custom-post-compose-script.sh
Multiple environment variables are defined for the scripts:
* ``COMPOSE_PATH``
* ``COMPOSE_ID``
* ``COMPOSE_DATE``
* ``COMPOSE_TYPE``
* ``COMPOSE_RESPIN``
* ``COMPOSE_LABEL``
* ``RELEASE_ID``
* ``RELEASE_NAME``
* ``RELEASE_SHORT``
* ``RELEASE_VERSION``
* ``RELEASE_TYPE``
* ``RELEASE_IS_LAYERED`` ``YES`` for layered products, empty otherwise
* ``BASE_PRODUCT_NAME`` only set for layered products
* ``BASE_PRODUCT_SHORT`` only set for layered products
* ``BASE_PRODUCT_VERSION`` only set for layered products
* ``BASE_PRODUCT_TYPE`` only set for layered products
**notification_script**
Executable name (or path to a script) that will be used to send a message
once the compose is finished. In order for a valid URL to be included in the
message, at least one part must configure path translation that would apply
to location of main compose.
Only two messages will be sent, one for start and one for finish (either
successful or not).
Partial compose settings
------------------------
Each part should have a separate section in the config file.
It can specify these options:
**config**
Path to configuration file that describes this part. If relative, it is
resolved relative to the file with parts configuration.
**just_phase**, **skip_phase**
Customize which phases should run for this part.
**depends_on**
A comma separated list of other parts that must be finished before this part
starts.
**failable**
A boolean toggle to mark a part as failable. A failure in such part will
mark the final compose as incomplete, but still successful.
@ -30,17 +30,14 @@ packages to architectures.
Buildinstall
------------
Spawns a bunch of threads, each of which runs the ``lorax`` command. The
command creates ``boot.iso`` and other boot configuration files. The image is
finally linked into the ``compose/`` directory as netinstall media.
The created images are also needed for creating live media or other images in
later phases.
With ``lorax`` this phase runs one task per variant.arch combination.
Gather
------
@ -115,6 +112,12 @@ ImageBuild
This phase wraps up ``koji image-build``. It also updates the metadata
ultimately responsible for the ``images.json`` manifest.
KiwiBuild
---------
Similarly to image build, this phase creates a koji `kiwiBuild` task. In the
background it uses Kiwi to create images.
OSBuild
-------
@ -41,6 +41,14 @@ which can contain following keys.
* ``command`` -- defines a shell command to run after Git clone to generate the
needed file (for example to run ``make``). Only supported in Git backend.
* ``options`` -- a dictionary of additional configuration options. These are
specific to different backends.
Currently supported values for Git:
* ``credential_helper`` -- path to a credential helper used to supply
username/password for remotes that require authentication.
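For example, a hypothetical scm dict using the new key could look like this
(the repo URL and helper path are made up):
::

    comps_file = {
        "scm": "git",
        "repo": "https://git.example.com/private/comps.git",
        "file": "comps-rawhide.xml",
        "options": {
            "credential_helper": "/usr/local/bin/print-git-credentials",
        },
    }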
Koji examples
-------------
pungi.spec
File diff suppressed because it is too large
@ -93,6 +93,11 @@ def split_name_arch(name_arch):
def is_excluded(package, arches, logger=None):
"""Check if package is excluded from given architectures."""
if any(
getBaseArch(exc_arch) == 'x86_64' for exc_arch in package.exclusivearch
) and 'x86_64_v2' not in package.exclusivearch:
# A package restricted to plain x86_64 is implicitly allowed on the
# x86_64_v2 microarchitecture level as well.
package.exclusivearch.append('x86_64_v2')
if package.excludearch and set(package.excludearch) & set(arches):
if logger:
logger.debug(
@ -34,6 +34,8 @@ arches = {
"x86_64": "athlon", "x86_64": "athlon",
"amd64": "x86_64", "amd64": "x86_64",
"ia32e": "x86_64", "ia32e": "x86_64",
# x86-64-v2
"x86_64_v2": "noarch",
# ppc64le # ppc64le
"ppc64le": "noarch", "ppc64le": "noarch",
# ppc # ppc
@ -227,8 +227,18 @@ def validate(config, offline=False, schema=None):
DefaultValidator = _extend_with_default_and_alias(
jsonschema.Draft4Validator, offline=offline
)
if hasattr(jsonschema.Draft4Validator, "TYPE_CHECKER"):
# jsonschema >= 3.0 has new interface for checking types
validator = DefaultValidator(schema)
else:
validator = DefaultValidator(
schema,
{
"array": (tuple, list),
"regex": six.string_types,
"url": six.string_types,
},
)
errors = []
warnings = []
@ -377,6 +387,7 @@ def _extend_with_default_and_alias(validator_class, offline=False):
instance[property]["branch"] = resolver( instance[property]["branch"] = resolver(
instance[property]["repo"], instance[property]["repo"],
instance[property].get("branch") or "HEAD", instance[property].get("branch") or "HEAD",
instance[property].get("options"),
) )
for error in _hook_errors(properties, instance, schema): for error in _hook_errors(properties, instance, schema):
@ -444,13 +455,16 @@ def _extend_with_default_and_alias(validator_class, offline=False):
context=all_errors,
)
kwargs = {}
if hasattr(validator_class, "TYPE_CHECKER"):
# jsonschema >= 3
def is_array(checker, instance):
return isinstance(instance, (tuple, list))
def is_string_type(checker, instance):
return isinstance(instance, six.string_types)
kwargs["type_checker"] = validator_class.TYPE_CHECKER.redefine_many(
{"array": is_array, "regex": is_string_type, "url": is_string_type}
)
@ -464,7 +478,7 @@ def _extend_with_default_and_alias(validator_class, offline=False):
"additionalProperties": _validate_additional_properties, "additionalProperties": _validate_additional_properties,
"anyOf": _validate_any_of, "anyOf": _validate_any_of,
}, },
type_checker=type_checker, **kwargs
) )
@ -507,6 +521,13 @@ def make_schema():
"file": {"type": "string"}, "file": {"type": "string"},
"dir": {"type": "string"}, "dir": {"type": "string"},
"command": {"type": "string"}, "command": {"type": "string"},
"options": {
"type": "object",
"properties": {
"credential_helper": {"type": "string"},
},
"additionalProperties": False,
},
}, },
"additionalProperties": False, "additionalProperties": False,
}, },
@ -532,27 +553,6 @@ def make_schema():
"list_of_strings": {"type": "array", "items": {"type": "string"}}, "list_of_strings": {"type": "array", "items": {"type": "string"}},
"strings": _one_or_list({"type": "string"}), "strings": _one_or_list({"type": "string"}),
"optional_string": {"anyOf": [{"type": "string"}, {"type": "null"}]}, "optional_string": {"anyOf": [{"type": "string"}, {"type": "null"}]},
"live_image_config": {
"type": "object",
"properties": {
"kickstart": {"type": "string"},
"ksurl": {"type": "url"},
"name": {"type": "string"},
"subvariant": {"type": "string"},
"target": {"type": "string"},
"version": {"type": "string"},
"repo": {"$ref": "#/definitions/repos"},
"specfile": {"type": "string"},
"scratch": {"type": "boolean"},
"type": {"type": "string"},
"sign": {"type": "boolean"},
"failable": {"type": "boolean"},
"release": {"$ref": "#/definitions/optional_string"},
},
"required": ["kickstart"],
"additionalProperties": False,
"type": "object",
},
"osbs_config": { "osbs_config": {
"type": "object", "type": "object",
"properties": { "properties": {
@ -588,6 +588,7 @@ def make_schema():
"release_discinfo_description": {"type": "string"}, "release_discinfo_description": {"type": "string"},
"treeinfo_version": {"type": "string"}, "treeinfo_version": {"type": "string"},
"compose_type": {"type": "string", "enum": COMPOSE_TYPES}, "compose_type": {"type": "string", "enum": COMPOSE_TYPES},
"label": {"type": "string"},
"base_product_name": {"type": "string"}, "base_product_name": {"type": "string"},
"base_product_short": {"type": "string"}, "base_product_short": {"type": "string"},
"base_product_version": {"type": "string"}, "base_product_version": {"type": "string"},
@ -665,7 +666,11 @@ def make_schema():
"pkgset_allow_reuse": {"type": "boolean", "default": True}, "pkgset_allow_reuse": {"type": "boolean", "default": True},
"createiso_allow_reuse": {"type": "boolean", "default": True}, "createiso_allow_reuse": {"type": "boolean", "default": True},
"extraiso_allow_reuse": {"type": "boolean", "default": True}, "extraiso_allow_reuse": {"type": "boolean", "default": True},
"pkgset_source": {"type": "string", "enum": ["koji", "repos"]}, "pkgset_source": {"type": "string", "enum": [
"koji",
"repos",
"kojimock",
]},
"createrepo_c": {"type": "boolean", "default": True}, "createrepo_c": {"type": "boolean", "default": True},
"createrepo_checksum": { "createrepo_checksum": {
"type": "string", "type": "string",
@ -792,7 +797,15 @@ def make_schema():
"buildinstall_allow_reuse": {"type": "boolean", "default": False}, "buildinstall_allow_reuse": {"type": "boolean", "default": False},
"buildinstall_method": { "buildinstall_method": {
"type": "string", "type": "string",
"enum": ["lorax", "buildinstall"], "enum": ["lorax"],
},
# In phase `buildinstall` we should add to compose only the
# images that will be used only as netinstall
"netinstall_variants": {
"$ref": "#/definitions/list_of_strings",
"default": [
"BaseOS",
],
},
"buildinstall_topdir": {"type": "string"},
"buildinstall_kickstart": {"$ref": "#/definitions/str_or_scm_dict"},
@ -811,8 +824,11 @@ def make_schema():
"pdc_insecure": {"deprecated": "Koji is queried instead"},
"cts_url": {"type": "string"},
"cts_keytab": {"type": "string"},
"cts_oidc_token_url": {"type": "url"},
"cts_oidc_client_id": {"type": "string"},
"koji_profile": {"type": "string"},
"koji_event": {"type": "number"},
"koji_cache": {"type": "string"},
"pkgset_koji_tag": {"$ref": "#/definitions/strings"},
"pkgset_koji_builds": {"$ref": "#/definitions/strings"},
"pkgset_koji_scratch_tasks": {"$ref": "#/definitions/strings"},
@ -830,6 +846,10 @@ def make_schema():
"type": "boolean",
"default": True,
},
"pkgset_inherit_exclusive_arch_to_noarch": {
"type": "boolean",
"default": True,
},
"pkgset_scratch_modules": {
"type": "object",
"patternProperties": {
@ -842,7 +862,10 @@ def make_schema():
"paths_module": {"type": "string"},
"skip_phases": {
"type": "array",
"items": {
"type": "string",
"enum": PHASES_NAMES + ["productimg", "live_images"],
},
"default": [],
},
"image_name_format": {
@ -876,11 +899,6 @@ def make_schema():
},
"restricted_volid": {"type": "boolean", "default": False},
"volume_id_substitutions": {"type": "object", "default": {}},
"live_images_no_rename": {"type": "boolean", "default": False},
"live_images_ksurl": {"type": "url"},
"live_images_target": {"type": "string"},
"live_images_release": {"$ref": "#/definitions/optional_string"},
"live_images_version": {"type": "string"},
"image_build_ksurl": {"type": "url"}, "image_build_ksurl": {"type": "url"},
"image_build_target": {"type": "string"}, "image_build_target": {"type": "string"},
"image_build_release": {"$ref": "#/definitions/optional_string"}, "image_build_release": {"$ref": "#/definitions/optional_string"},
@ -913,8 +931,6 @@ def make_schema():
"product_id": {"$ref": "#/definitions/str_or_scm_dict"}, "product_id": {"$ref": "#/definitions/str_or_scm_dict"},
"product_id_allow_missing": {"type": "boolean", "default": False}, "product_id_allow_missing": {"type": "boolean", "default": False},
"product_id_allow_name_prefix": {"type": "boolean", "default": True}, "product_id_allow_name_prefix": {"type": "boolean", "default": True},
# Deprecated in favour of regular local/phase/global setting.
"live_target": {"type": "string"},
"tree_arches": {"$ref": "#/definitions/list_of_strings", "default": []}, "tree_arches": {"$ref": "#/definitions/list_of_strings", "default": []},
"tree_variants": {"$ref": "#/definitions/list_of_strings", "default": []}, "tree_variants": {"$ref": "#/definitions/list_of_strings", "default": []},
"translate_paths": {"$ref": "#/definitions/string_pairs", "default": []}, "translate_paths": {"$ref": "#/definitions/string_pairs", "default": []},
@ -1032,11 +1048,13 @@ def make_schema():
"config_branch": {"type": "string"}, "config_branch": {"type": "string"},
"tag_ref": {"type": "boolean"}, "tag_ref": {"type": "boolean"},
"ostree_ref": {"type": "string"}, "ostree_ref": {"type": "string"},
"runroot_packages": {
"$ref": "#/definitions/list_of_strings",
},
}, },
"required": [ "required": [
"treefile", "treefile",
"config_url", "config_url",
"repo",
"ostree_repo", "ostree_repo",
], ],
"additionalProperties": False, "additionalProperties": False,
@ -1074,6 +1092,39 @@ def make_schema():
), ),
] ]
}, },
"ostree_container": {
"type": "object",
"patternProperties": {
# Warning: this pattern is a variant uid regex, but the
# format does not let us validate it as there is no regular
# expression to describe all regular expressions.
".+": _one_or_list(
{
"type": "object",
"properties": {
"treefile": {"type": "string"},
"config_url": {"type": "string"},
"repo": {"$ref": "#/definitions/repos"},
"keep_original_sources": {"type": "boolean"},
"config_branch": {"type": "string"},
"arches": {"$ref": "#/definitions/list_of_strings"},
"failable": {"$ref": "#/definitions/list_of_strings"},
"version": {"type": "string"},
"tag_ref": {"type": "boolean"},
"runroot_packages": {
"$ref": "#/definitions/list_of_strings",
},
},
"required": [
"treefile",
"config_url",
],
"additionalProperties": False,
}
),
},
"additionalProperties": False,
},
"ostree_installer": _variant_arch_mapping( "ostree_installer": _variant_arch_mapping(
{ {
"type": "object", "type": "object",
@ -1098,11 +1149,9 @@ def make_schema():
} }
), ),
"ostree_use_koji_plugin": {"type": "boolean", "default": False}, "ostree_use_koji_plugin": {"type": "boolean", "default": False},
"ostree_container_use_koji_plugin": {"type": "boolean", "default": False},
"ostree_installer_use_koji_plugin": {"type": "boolean", "default": False}, "ostree_installer_use_koji_plugin": {"type": "boolean", "default": False},
"ostree_installer_overwrite": {"type": "boolean", "default": False}, "ostree_installer_overwrite": {"type": "boolean", "default": False},
"live_images": _variant_arch_mapping(
_one_or_list({"$ref": "#/definitions/live_image_config"})
),
"image_build_allow_reuse": {"type": "boolean", "default": False}, "image_build_allow_reuse": {"type": "boolean", "default": False},
"image_build": { "image_build": {
"type": "object", "type": "object",
@ -1153,6 +1202,50 @@ def make_schema():
}, },
"additionalProperties": False, "additionalProperties": False,
}, },
"kiwibuild": {
"type": "object",
"patternProperties": {
# Warning: this pattern is a variant uid regex, but the
# format does not let us validate it as there is no regular
# expression to describe all regular expressions.
".+": {
"type": "array",
"items": {
"type": "object",
"properties": {
"target": {"type": "string"},
"description_scm": {"type": "url"},
"description_path": {"type": "string"},
"kiwi_profile": {"type": "string"},
"release": {"type": "string"},
"arches": {"$ref": "#/definitions/list_of_strings"},
"repos": {"$ref": "#/definitions/list_of_strings"},
"failable": {"$ref": "#/definitions/list_of_strings"},
"subvariant": {"type": "string"},
"type": {"type": "string"},
"type_attr": {"$ref": "#/definitions/list_of_strings"},
"bundle_name_format": {"type": "string"},
},
"required": [
# description_scm and description_path
# are really required, but as they can
# be set at the phase level we cannot
# enforce that here
"kiwi_profile",
],
"additionalProperties": False,
},
}
},
"additionalProperties": False,
},
"kiwibuild_description_scm": {"type": "url"},
"kiwibuild_description_path": {"type": "string"},
"kiwibuild_target": {"type": "string"},
"kiwibuild_release": {"$ref": "#/definitions/optional_string"},
"kiwibuild_type": {"type": "string"},
"kiwibuild_type_attr": {"$ref": "#/definitions/list_of_strings"},
"kiwibuild_bundle_name_format": {"type": "string"},
"osbuild_target": {"type": "string"}, "osbuild_target": {"type": "string"},
"osbuild_release": {"$ref": "#/definitions/optional_string"}, "osbuild_release": {"$ref": "#/definitions/optional_string"},
"osbuild_version": {"type": "string"}, "osbuild_version": {"type": "string"},
@ -1188,14 +1281,41 @@ def make_schema():
}, },
"arches": {"$ref": "#/definitions/list_of_strings"}, "arches": {"$ref": "#/definitions/list_of_strings"},
"release": {"type": "string"}, "release": {"type": "string"},
"repo": {"$ref": "#/definitions/list_of_strings"}, "repo": {
"type": "array",
"items": {
"oneOf": [
{
"type": "object",
"additionalProperties": False,
"required": ["baseurl"],
"properties": {
"baseurl": {"type": "string"},
"package_sets": {
"type": "array",
"items": {"type": "string"},
},
},
},
{"type": "string"},
]
},
},
"failable": {"$ref": "#/definitions/list_of_strings"}, "failable": {"$ref": "#/definitions/list_of_strings"},
"subvariant": {"type": "string"}, "subvariant": {"type": "string"},
"ostree_url": {"type": "string"}, "ostree_url": {"type": "string"},
"ostree_ref": {"type": "string"}, "ostree_ref": {"type": "string"},
"ostree_parent": {"type": "string"}, "ostree_parent": {"type": "string"},
"manifest_type": {"type": "string"},
"customizations": {
"type": "object",
"additionalProperties": True,
},
"upload_options": { "upload_options": {
"oneOf": [ # this should be really 'oneOf', but the minimal
# required properties in AWSEC2 and GCP options
# overlap.
"anyOf": [
# AWSEC2UploadOptions
{
"type": "object",
@ -1234,7 +1354,6 @@ def make_schema():
"tenant_id",
"subscription_id",
"resource_group",
"location",
],
"properties": {
"tenant_id": {"type": "string"},
@ -1250,7 +1369,7 @@ def make_schema():
{
"type": "object",
"additionalProperties": False,
"required": ["region"],
"properties": {
"region": {"type": "string"},
"bucket": {"type": "string"},
@ -1308,9 +1427,6 @@ def make_schema():
{"$ref": "#/definitions/strings"} {"$ref": "#/definitions/strings"}
), ),
"lorax_use_koji_plugin": {"type": "boolean", "default": False}, "lorax_use_koji_plugin": {"type": "boolean", "default": False},
"signing_key_id": {"type": "string"},
"signing_key_password_file": {"type": "string"},
"signing_command": {"type": "string"},
"productimg": { "productimg": {
"deprecated": "remove it. Productimg phase has been removed" "deprecated": "remove it. Productimg phase has been removed"
}, },
@ -1445,7 +1561,6 @@ def get_num_cpus():
CONFIG_DEPS = { CONFIG_DEPS = {
"buildinstall_method": { "buildinstall_method": {
"conflicts": ( "conflicts": (
(lambda val: val == "buildinstall", ["lorax_options"]),
(lambda val: not val, ["lorax_options", "buildinstall_kickstart"]), (lambda val: not val, ["lorax_options", "buildinstall_kickstart"]),
), ),
}, },
@ -17,6 +17,7 @@
__all__ = ("Compose",) __all__ = ("Compose",)
import contextlib
import errno
import logging
import os
@ -38,6 +39,7 @@ from dogpile.cache import make_region
from pungi.graph import SimpleAcyclicOrientedGraph
from pungi.wrappers.variants import VariantsXmlParser
from pungi.paths import Paths
from pungi.wrappers.kojiwrapper import KojiDownloadProxy
from pungi.wrappers.scm import get_file_from_scm
from pungi.util import (
makedirs,
@ -57,14 +59,101 @@ except ImportError:
SUPPORTED_MILESTONES = ["RC", "Update", "SecurityFix"]
def is_status_fatal(status_code):
"""Check if status code returned from CTS reports an error that is unlikely
to be fixed by retrying. Generally client errors (4XX) are fatal, with the
exception of 401 Unauthorized which could be caused by transient network
issue between compose host and KDC.
"""
if status_code == 401:
return False
return status_code >= 400 and status_code < 500
@retry(wait_on=RequestException)
def retry_request(method, url, data=None, json_data=None, auth=None):
"""
:param str method: Request method.
:param str url: Target URL.
:param dict data: form-urlencoded data to send in the body of the request.
:param dict json_data: json data to send in the body of the request.
"""
request_method = getattr(requests, method)
rv = request_method(url, data=data, json=json_data, auth=auth)
if is_status_fatal(rv.status_code):
try:
error = rv.json()
except ValueError:
error = rv.text
raise RuntimeError("%s responded with %d: %s" % (url, rv.status_code, error))
rv.raise_for_status()
return rv
class BearerAuth(requests.auth.AuthBase):
def __init__(self, token):
self.token = token
def __call__(self, r):
r.headers["authorization"] = "Bearer " + self.token
return r
@contextlib.contextmanager
def cts_auth(pungi_conf):
"""
:param dict pungi_conf: dict obj of pungi.json config.
"""
auth = None
token = None
cts_keytab = pungi_conf.get("cts_keytab")
cts_oidc_token_url = os.environ.get("CTS_OIDC_TOKEN_URL", "") or pungi_conf.get(
"cts_oidc_token_url"
)
try:
if cts_keytab:
# requests-kerberos cannot accept custom keytab, we need to use
# environment variable for this. But we need to change environment
# only temporarily just for this single requests.post.
# So at first backup the current environment and revert to it
# after the requests call.
from requests_kerberos import HTTPKerberosAuth
auth = HTTPKerberosAuth()
environ_copy = dict(os.environ)
if "$HOSTNAME" in cts_keytab:
cts_keytab = cts_keytab.replace("$HOSTNAME", socket.gethostname())
os.environ["KRB5_CLIENT_KTNAME"] = cts_keytab
os.environ["KRB5CCNAME"] = "DIR:%s" % tempfile.mkdtemp()
elif cts_oidc_token_url:
cts_oidc_client_id = os.environ.get(
"CTS_OIDC_CLIENT_ID", ""
) or pungi_conf.get("cts_oidc_client_id", "")
token = retry_request(
"post",
cts_oidc_token_url,
data={
"grant_type": "client_credentials",
"client_id": cts_oidc_client_id,
"client_secret": os.environ.get("CTS_OIDC_CLIENT_SECRET", ""),
},
).json()["access_token"]
auth = BearerAuth(token)
del token
yield auth
except Exception as e:
# Avoid leaking client secret in traceback
e.show_locals = False
raise e
finally:
if cts_keytab:
shutil.rmtree(os.environ["KRB5CCNAME"].split(":", 1)[1])
os.environ.clear()
os.environ.update(environ_copy)
def get_compose_info(
conf,
compose_type="production",
@ -94,38 +183,19 @@ def get_compose_info(
ci.compose.type = compose_type
ci.compose.date = compose_date or time.strftime("%Y%m%d", time.localtime())
ci.compose.respin = compose_respin or 0
cts_url = conf.get("cts_url", None)
if cts_url:
# Requests-kerberos cannot accept custom keytab, we need to use
# environment variable for this. But we need to change environment
# only temporarily just for this single requests.post.
# So at first backup the current environment and revert to it
# after the requests.post call.
cts_keytab = conf.get("cts_keytab", None)
authentication = get_authentication(conf)
if cts_keytab:
environ_copy = dict(os.environ)
if "$HOSTNAME" in cts_keytab:
cts_keytab = cts_keytab.replace("$HOSTNAME", socket.gethostname())
os.environ["KRB5_CLIENT_KTNAME"] = cts_keytab
os.environ["KRB5CCNAME"] = "DIR:%s" % tempfile.mkdtemp()
try:
# Create compose in CTS and get the reserved compose ID.
ci.compose.id = ci.create_compose_id()
cts_url = conf.get("cts_url")
if cts_url:
# Create compose in CTS and get the reserved compose ID.
url = os.path.join(cts_url, "api/1/composes/")
data = {
"compose_info": json.loads(ci.dumps()),
"parent_compose_ids": parent_compose_ids,
"respin_of": respin_of,
}
with cts_auth(conf) as authentication:
rv = retry_request("post", url, json_data=data, auth=authentication)
if cts_keytab:
shutil.rmtree(os.environ["KRB5CCNAME"].split(":", 1)[1])
os.environ.clear()
os.environ.update(environ_copy)
# Update local ComposeInfo with received ComposeInfo.
cts_ci = ComposeInfo()
@ -133,22 +203,9 @@ def get_compose_info(
ci.compose.respin = cts_ci.compose.respin
ci.compose.id = cts_ci.compose.id
else:
ci.compose.id = ci.create_compose_id()
return ci
def get_authentication(conf):
authentication = None
cts_keytab = conf.get("cts_keytab", None)
if cts_keytab:
from requests_kerberos import HTTPKerberosAuth
authentication = HTTPKerberosAuth()
return authentication
def write_compose_info(compose_dir, ci):
"""
Write ComposeInfo `ci` to `compose_dir` subdirectories.
@ -162,17 +219,20 @@ def write_compose_info(compose_dir, ci):
def update_compose_url(compose_id, compose_dir, conf):
authentication = get_authentication(conf)
cts_url = conf.get("cts_url", None) cts_url = conf.get("cts_url", None)
if cts_url: if cts_url:
url = os.path.join(cts_url, "api/1/composes", compose_id) url = os.path.join(cts_url, "api/1/composes", compose_id)
tp = conf.get("translate_paths", None) tp = conf.get("translate_paths", None)
compose_url = translate_path_raw(tp, compose_dir) compose_url = translate_path_raw(tp, compose_dir)
if compose_url == compose_dir:
# We do not have a URL, do not attempt the update.
return
data = { data = {
"action": "set_url", "action": "set_url",
"compose_url": compose_url, "compose_url": compose_url,
} }
return retry_request("patch", url, data=data, auth=authentication) with cts_auth(conf) as authentication:
return retry_request("patch", url, json_data=data, auth=authentication)
def get_compose_dir(
@ -183,11 +243,19 @@ def get_compose_dir(
compose_respin=None,
compose_label=None,
already_exists_callbacks=None,
parent_compose_ids=None,
respin_of=None,
):
already_exists_callbacks = already_exists_callbacks or []
ci = get_compose_info(
conf,
compose_type,
compose_date,
compose_respin,
compose_label,
parent_compose_ids,
respin_of,
)
cts_url = conf.get("cts_url", None)
@ -342,6 +410,8 @@ class Compose(kobo.log.LoggingBase):
else:
self.cache_region = make_region().configure("dogpile.cache.null")
self.koji_downloader = KojiDownloadProxy.from_config(self.conf, self._logger)
get_compose_info = staticmethod(get_compose_info)
write_compose_info = staticmethod(write_compose_info)
get_compose_dir = staticmethod(get_compose_dir)
@ -637,7 +707,7 @@ class Compose(kobo.log.LoggingBase):
separators=(",", ": "), separators=(",", ": "),
) )
def traceback(self, detail=None): def traceback(self, detail=None, show_locals=True):
"""Store an extended traceback. This method should only be called when """Store an extended traceback. This method should only be called when
handling an exception. handling an exception.
@ -648,8 +718,10 @@ class Compose(kobo.log.LoggingBase):
basename += "-" + detail basename += "-" + detail
tb_path = self.paths.log.log_file("global", basename) tb_path = self.paths.log.log_file("global", basename)
self.log_error("Extended traceback in: %s", tb_path) self.log_error("Extended traceback in: %s", tb_path)
with open(tb_path, "wb") as f: tback = kobo.tback.Traceback(show_locals=show_locals).get_traceback()
f.write(kobo.tback.Traceback().get_traceback()) # Kobo 0.36.0 returns traceback as str, older versions return bytes
with open(tb_path, "wb" if isinstance(tback, bytes) else "w") as f:
f.write(tback)
def load_old_compose_config(self):
"""
@ -5,11 +5,14 @@ from __future__ import print_function
import os
import six
from collections import namedtuple
from kobo.shortcuts import run
from six.moves import shlex_quote
from .wrappers import iso
from .wrappers.jigdo import JigdoWrapper
from .phases.buildinstall import BOOT_CONFIGS, BOOT_IMAGES
CreateIsoOpts = namedtuple(
"CreateIsoOpts",
@ -64,10 +67,6 @@ def make_image(f, opts):
os.path.join("$TEMPLATE", "config_files/ppc"), os.path.join("$TEMPLATE", "config_files/ppc"),
hfs_compat=opts.hfs_compat, hfs_compat=opts.hfs_compat,
) )
elif opts.buildinstall_method == "buildinstall":
mkisofs_kwargs["boot_args"] = iso.get_boot_options(
opts.arch, "/usr/lib/anaconda-runtime/boot"
)
# ppc(64) doesn't seem to support utf-8
if opts.arch in ("ppc", "ppc64", "ppc64le"):
@ -118,23 +117,65 @@ def make_jigdo(f, opts):
emit(f, cmd)
def _get_perms(fs_path):
"""Compute proper permissions for a file.
This mimics what the -rational-rock option of genisoimage does. All read bits
are set, so that files and directories are globally readable. If any
execute bit is set for a file, set them all. No writes are allowed and
special bits are erased too.
"""
statinfo = os.stat(fs_path)
perms = 0o444
if statinfo.st_mode & 0o111:
perms |= 0o111
return perms
def write_xorriso_commands(opts): def write_xorriso_commands(opts):
# Create manifest for the boot.iso listing all contents
boot_iso_manifest = "%s.manifest" % os.path.join(
opts.script_dir, os.path.basename(opts.boot_iso)
)
run(
iso.get_manifest_cmd(
opts.boot_iso, opts.use_xorrisofs, output_file=boot_iso_manifest
)
)
# Find which files may have been updated by pungi. This only includes a few
# files from tweaking buildinstall and .discinfo metadata. There's no good
# way to detect whether the boot config files actually changed, so we may
# be updating files in the ISO with the same data.
UPDATEABLE_FILES = set(BOOT_IMAGES + BOOT_CONFIGS + [".discinfo"])
updated_files = set()
excluded_files = set()
with open(boot_iso_manifest) as f:
for line in f:
path = line.lstrip("/").rstrip("\n")
if path in UPDATEABLE_FILES:
updated_files.add(path)
else:
excluded_files.add(path)
script = os.path.join(opts.script_dir, "xorriso-%s.txt" % id(opts)) script = os.path.join(opts.script_dir, "xorriso-%s.txt" % id(opts))
with open(script, "w") as f: with open(script, "w") as f:
emit(f, "-indev %s" % opts.boot_iso) for cmd in iso.xorriso_commands(
emit(f, "-outdev %s" % os.path.join(opts.output_dir, opts.iso_name)) opts.arch, opts.boot_iso, os.path.join(opts.output_dir, opts.iso_name)
emit(f, "-boot_image any replay") ):
emit(f, " ".join(cmd))
emit(f, "-volid %s" % opts.volid) emit(f, "-volid %s" % opts.volid)
with open(opts.graft_points) as gp: with open(opts.graft_points) as gp:
for line in gp: for line in gp:
iso_path, fs_path = line.strip().split("=", 1) iso_path, fs_path = line.strip().split("=", 1)
emit(f, "-map %s %s" % (fs_path, iso_path)) if iso_path in excluded_files:
continue
if opts.arch == "ppc64le": cmd = "-update" if iso_path in updated_files else "-map"
# This is needed for the image to be bootable. emit(f, "%s %s %s" % (cmd, fs_path, iso_path))
emit(f, "-as mkisofs -U --") emit(f, "-chmod 0%o %s" % (_get_perms(fs_path), iso_path))
emit(f, "-chown_r 0 /")
emit(f, "-chgrp_r 0 /")
emit(f, "-end") emit(f, "-end")
return script return script
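The -chmod line above relies on _get_perms normalizing modes the way genisoimage's -rational-rock option would: world-readable always, execute all-or-nothing, never writable. A small self-contained illustration of that rule (the temporary file and its starting mode are arbitrary):

import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    path = tmp.name
os.chmod(path, 0o640)          # rw-r----- on disk
statinfo = os.stat(path)
perms = 0o444                  # read for everyone, never write
if statinfo.st_mode & 0o111:   # if any execute bit is set...
    perms |= 0o111             # ...grant execute to everyone
print(oct(perms))              # 0o444 here; an executable would give 0o555
os.unlink(path)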

View File

@@ -1118,7 +1118,6 @@ class Pungi(PungiBase):
         self.logger.info("Finished gathering package objects.")

     def gather(self):
-
         # get package objects according to the input list
         self.getPackageObjects()
         if self.is_sources:

View File

@@ -15,17 +15,21 @@

 from enum import Enum
-from itertools import count
+from functools import cmp_to_key
+from itertools import count, groupby
+import errno
 import logging
 import os
 import re

 from kobo.rpmlib import parse_nvra
+import rpm

 import pungi.common
 import pungi.dnf_wrapper
 import pungi.multilib_dnf
 import pungi.util
+from pungi import arch_utils
 from pungi.linker import Linker
 from pungi.profiler import Profiler
 from pungi.util import DEBUG_PATTERNS
@@ -36,6 +40,20 @@ def get_source_name(pkg):
     return pkg.sourcerpm.rsplit("-", 2)[0]


+def filter_dotarch(queue, pattern, **kwargs):
+    """Filter queue for packages matching the pattern. If pattern matches the
+    dotarch format of <name>.<arch>, it is processed as such. Otherwise it is
+    treated as just a name.
+    """
+    kwargs["name__glob"] = pattern
+    if "." in pattern:
+        name, arch = pattern.split(".", 1)
+        if arch in arch_utils.arches or arch == "noarch":
+            kwargs["name__glob"] = name
+            kwargs["arch"] = arch
+    return queue.filter(**kwargs).apply()
+
+
 class GatherOptions(pungi.common.OptionsBase):
     def __init__(self, **kwargs):
         super(GatherOptions, self).__init__()
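The key detail in filter_dotarch is that a dot only marks an architecture when the suffix is a known arch; anything else stays part of the package name. A standalone sketch of that decision (KNOWN_ARCHES stands in for arch_utils.arches, which is not reproduced here):

KNOWN_ARCHES = {"x86_64", "i686", "aarch64", "ppc64le", "s390x"}

def parse_dotarch(pattern):
    """Split 'name.arch' into (name, arch); plain names get arch=None."""
    if "." in pattern:
        name, arch = pattern.split(".", 1)
        if arch in KNOWN_ARCHES or arch == "noarch":
            return name, arch
    return pattern, None

assert parse_dotarch("kernel.x86_64") == ("kernel", "x86_64")
assert parse_dotarch("python3.11") == ("python3.11", None)  # "11" is not an arch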
@@ -245,13 +263,37 @@ class Gather(GatherBase):
         # from lookaside. This can be achieved by removing any package that is
         # also in lookaside from the list.
         lookaside_pkgs = set()
+        if self.opts.lookaside_repos:
+            # We will call `latest()` to get the highest version packages only.
+            # However, that is per name and architecture. If a package switches
+            # from arched to noarch or the other way, it is possible that the
+            # package_list contains different versions in main repos and in
+            # lookaside repos.
+            # We need to manually filter the latest version.
+            def vercmp(x, y):
+                return rpm.labelCompare(x[1], y[1])
+
+            # Annotate the packages with their version.
+            versioned_packages = [
+                (pkg, (str(pkg.epoch) or "0", pkg.version, pkg.release))
+                for pkg in package_list
+            ]
+            # Sort the packages newest first.
+            sorted_packages = sorted(
+                versioned_packages, key=cmp_to_key(vercmp), reverse=True
+            )
+            # Group packages by version, take the first group and discard the
+            # version info from the tuple.
+            package_list = list(
+                x[0] for x in next(groupby(sorted_packages, key=lambda x: x[1]))[1]
+            )
+
+        # Now we can decide what is used from lookaside.
         for pkg in package_list:
             if pkg.repoid in self.opts.lookaside_repos:
                 lookaside_pkgs.add("{0.name}-{0.evr}".format(pkg))

-        if self.opts.greedy_method == "all":
-            return list(package_list)
-
         all_pkgs = []
         for pkg in package_list:
             # Remove packages that are also in lookaside
@@ -263,16 +305,21 @@ class Gather(GatherBase):
         if not debuginfo:
             native_pkgs = set(
-                self.q_native_binary_packages.filter(pkg=all_pkgs).apply()
+                self.q_native_binary_packages.filter(pkg=all_pkgs).latest().apply()
             )
             multilib_pkgs = set(
-                self.q_multilib_binary_packages.filter(pkg=all_pkgs).apply()
+                self.q_multilib_binary_packages.filter(pkg=all_pkgs).latest().apply()
             )
         else:
-            native_pkgs = set(self.q_native_debug_packages.filter(pkg=all_pkgs).apply())
-            multilib_pkgs = set(
-                self.q_multilib_debug_packages.filter(pkg=all_pkgs).apply()
+            native_pkgs = set(
+                self.q_native_debug_packages.filter(pkg=all_pkgs).latest().apply()
             )
+            multilib_pkgs = set(
+                self.q_multilib_debug_packages.filter(pkg=all_pkgs).latest().apply()
+            )
+
+        if self.opts.greedy_method == "all":
+            return list(native_pkgs | multilib_pkgs)

         result = set()
@@ -392,9 +439,7 @@ class Gather(GatherBase):
         """Given a name of a queue (stored as attribute in `self`), exclude
         all given packages and keep only the latest per package name and arch.
         """
-        setattr(
-            self, queue, getattr(self, queue).filter(pkg__neq=exclude).latest().apply()
-        )
+        setattr(self, queue, getattr(self, queue).filter(pkg__neq=exclude).apply())

     @Profiler("Gather._apply_excludes()")
     def _apply_excludes(self, excludes):
@@ -420,12 +465,16 @@ class Gather(GatherBase):
                     name__glob=pattern[:-4], reponame__neq=self.opts.lookaside_repos
                 )
             elif pungi.util.pkg_is_debug(pattern):
-                pkgs = self.q_debug_packages.filter(
-                    name__glob=pattern, reponame__neq=self.opts.lookaside_repos
+                pkgs = filter_dotarch(
+                    self.q_debug_packages,
+                    pattern,
+                    reponame__neq=self.opts.lookaside_repos,
                 )
             else:
-                pkgs = self.q_binary_packages.filter(
-                    name__glob=pattern, reponame__neq=self.opts.lookaside_repos
+                pkgs = filter_dotarch(
+                    self.q_binary_packages,
+                    pattern,
+                    reponame__neq=self.opts.lookaside_repos,
                 )

             exclude.update(pkgs)
@@ -491,21 +540,19 @@ class Gather(GatherBase):
                         name__glob=pattern[:-2]
                     ).apply()
                 else:
-                    pkgs = self.q_debug_packages.filter(
-                        name__glob=pattern
-                    ).apply()
+                    pkgs = filter_dotarch(self.q_debug_packages, pattern)
             else:
                 if pattern.endswith(".+"):
                     pkgs = self.q_multilib_binary_packages.filter(
                         name__glob=pattern[:-2]
                     ).apply()
                 else:
-                    pkgs = self.q_binary_packages.filter(
-                        name__glob=pattern
-                    ).apply()
+                    pkgs = filter_dotarch(self.q_binary_packages, pattern)

             if not pkgs:
-                self.logger.error("No package matches pattern %s" % pattern)
+                self.logger.error(
+                    "Could not find a match for %s in any configured repo", pattern
+                )

             # The pattern could have been a glob. In that case we want to
             # group the packages by name and get best match in those
@@ -616,7 +663,6 @@ class Gather(GatherBase):
             return added

-
         for pkg in self.result_debug_packages.copy():
             if pkg not in self.finished_add_debug_package_deps:
                 deps = self._get_package_deps(pkg, debuginfo=True)
                 for i, req in deps:
@@ -784,7 +830,6 @@ class Gather(GatherBase):
                 continue

             debug_pkgs = []
-            pkg_in_lookaside = pkg.repoid in self.opts.lookaside_repos
             for i in candidates:
                 if pkg.arch != i.arch:
                     continue
@@ -792,8 +837,14 @@ class Gather(GatherBase):
                     # If it's not debugsource package or does not match name of
                    # the package, we don't want it in.
                     continue
-                if i.repoid in self.opts.lookaside_repos or pkg_in_lookaside:
+                if self.is_from_lookaside(i):
                     self._set_flag(i, PkgFlag.lookaside)
+                srpm_name = i.sourcerpm.rsplit("-", 2)[0]
+                if srpm_name in self.opts.fulltree_excludes:
+                    self._set_flag(i, PkgFlag.fulltree_exclude)
+                if PkgFlag.input in self.result_package_flags.get(srpm_name, set()):
+                    # If src rpm is marked as input, mark debuginfo as input too
+                    self._set_flag(i, PkgFlag.input)
                 if i not in self.result_debug_packages:
                     added.add(i)
                     debug_pkgs.append(i)
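The srpm_name derivation above is the same trick get_source_name() uses: an NVR filename always has exactly two dashes separating version and release, so rsplit with maxsplit 2 recovers the name even when the name itself contains dashes or dots. For example:

print("bash-5.2.26-3.el9.src.rpm".rsplit("-", 2)[0])       # -> bash
print("python3.11-3.11.7-1.fc39".rsplit("-", 2)[0])        # -> python3.11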
@@ -1030,8 +1081,11 @@ class Gather(GatherBase):
             # Link downloaded package in (or link package from file repo)
             try:
                 linker.link(pkg.localPkg(), target)
-            except Exception:
-                self.logger.error("Unable to link %s from the yum cache." % pkg.name)
+            except Exception as ex:
+                if ex.errno == errno.EEXIST:
+                    self.logger.warning("Downloaded package exists in %s", target)
+                else:
+                    self.logger.error("Unable to link %s from the yum cache.", pkg.name)
                     raise

     def log_count(self, msg, method, *args):

View File

@@ -306,11 +306,6 @@ def write_tree_info(compose, arch, variant, timestamp=None, bi=None):
     if variant.type in ("addon",) or variant.is_empty:
         return

-    compose.log_debug(
-        "on arch '%s' looking at variant '%s' of type '%s'"
-        % (arch, variant, variant.type)
-    )
-
     if not timestamp:
         timestamp = int(time.time())
     else:

View File

@@ -19,6 +19,7 @@ import logging

 from .tree import Tree
 from .installer import Installer
+from .container import Container


 def main(args=None):
@@ -71,6 +72,43 @@ def main(args=None):
         help="use unified core mode in rpm-ostree",
     )

+    container = subparser.add_parser(
+        "container", help="Compose OSTree native container"
+    )
+    container.set_defaults(_class=Container, func="run")
+    container.add_argument(
+        "--name",
+        required=True,
+        help="the name of the OCI archive (required)",
+    )
+    container.add_argument(
+        "--path",
+        required=True,
+        help="where to output the OCI archive (required)",
+    )
+    container.add_argument(
+        "--treefile",
+        metavar="FILE",
+        required=True,
+        help="treefile for rpm-ostree (required)",
+    )
+    container.add_argument(
+        "--log-dir",
+        metavar="DIR",
+        required=True,
+        help="where to log output (required)",
+    )
+    container.add_argument(
+        "--extra-config", metavar="FILE", help="JSON file containing extra configurations"
+    )
+    container.add_argument(
+        "-v",
+        "--version",
+        metavar="VERSION",
+        required=True,
+        help="version identifier (required)",
+    )
+
     installerp = subparser.add_parser(
         "installer", help="Create an OSTree installer image"
     )
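A hypothetical invocation of the new sub-command through the module entry point; all argument values here are made up, only the flag names come from the argparse definitions above:

from pungi.ostree import main

main([
    "container",
    "--name", "Fedora-Container",     # name of the OCI archive
    "--path", "/tmp/ostree-out",       # where the .ociarchive is written
    "--treefile", "/tmp/fedora.json",
    "--log-dir", "/tmp/ostree-logs",
    "--version", "41",
])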

pungi/ostree/container.py (new file, 86 lines)
View File

@@ -0,0 +1,86 @@
# -*- coding: utf-8 -*-
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; version 2 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Library General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, see <https://gnu.org/licenses/>.
import os
import json
import six
from six.moves import shlex_quote
from .base import OSTree
from .utils import tweak_treeconf
def emit(cmd):
"""Print line of shell code into the stream."""
if isinstance(cmd, six.string_types):
print(cmd)
else:
print(" ".join([shlex_quote(x) for x in cmd]))
class Container(OSTree):
def _make_container(self):
"""Compose OSTree Container Native image"""
stamp_file = os.path.join(self.logdir, "%s.stamp" % self.name)
cmd = [
"rpm-ostree",
"compose",
"image",
# Always initialize for now
"--initialize",
# Touch the file if a new commit was created. This can help us tell
# if the commitid file is missing because no commit was created or
# because something went wrong.
"--touch-if-changed=%s" % stamp_file,
self.treefile,
]
fullpath = os.path.join(self.path, "%s.ociarchive" % self.name)
cmd.append(fullpath)
# Set the umask to be more permissive so directories get group write
# permissions. See https://pagure.io/releng/issue/8811#comment-629051
emit("umask 0002")
emit(cmd)
def run(self):
self.name = self.args.name
self.path = self.args.path
self.treefile = self.args.treefile
self.logdir = self.args.log_dir
self.extra_config = self.args.extra_config
if self.extra_config:
self.extra_config = json.load(open(self.extra_config, "r"))
repos = self.extra_config.get("repo", [])
keep_original_sources = self.extra_config.get(
"keep_original_sources", False
)
else:
# missing extra_config mustn't affect tweak_treeconf call
repos = []
keep_original_sources = True
update_dict = {"automatic-version-prefix": self.args.version}
self.treefile = tweak_treeconf(
self.treefile,
source_repos=repos,
keep_original_sources=keep_original_sources,
update_dict=update_dict,
)
self._make_container()

View File

@@ -25,9 +25,9 @@ from .buildinstall import BuildinstallPhase  # noqa
 from .extra_files import ExtraFilesPhase  # noqa
 from .createiso import CreateisoPhase  # noqa
 from .extra_isos import ExtraIsosPhase  # noqa
-from .live_images import LiveImagesPhase  # noqa
 from .image_build import ImageBuildPhase  # noqa
 from .image_container import ImageContainerPhase  # noqa
+from .kiwibuild import KiwiBuildPhase  # noqa
 from .osbuild import OSBuildPhase  # noqa
 from .repoclosure import RepoclosurePhase  # noqa
 from .test import TestPhase  # noqa
@@ -35,6 +35,7 @@ from .image_checksum import ImageChecksumPhase  # noqa
 from .livemedia_phase import LiveMediaPhase  # noqa
 from .ostree import OSTreePhase  # noqa
 from .ostree_installer import OstreeInstallerPhase  # noqa
+from .ostree_container import OSTreeContainerPhase  # noqa
 from .osbs import OSBSPhase  # noqa
 from .phases_metadata import gather_phases_metadata  # noqa

View File

@@ -31,14 +31,14 @@ from six.moves import shlex_quote
 from pungi.arch import get_valid_arches
 from pungi.util import get_volid, get_arch_variant_data
 from pungi.util import get_file_size, get_mtime, failable, makedirs
-from pungi.util import copy_all, translate_path, move_all
+from pungi.util import copy_all, translate_path
 from pungi.wrappers.lorax import LoraxWrapper
 from pungi.wrappers import iso
 from pungi.wrappers.scm import get_file
 from pungi.wrappers.scm import get_file_from_scm
 from pungi.wrappers import kojiwrapper
 from pungi.phases.base import PhaseBase
-from pungi.runroot import Runroot
+from pungi.runroot import Runroot, download_and_extract_archive


 class BuildinstallPhase(PhaseBase):
@@ -144,7 +144,7 @@ class BuildinstallPhase(PhaseBase):
             )
             if self.compose.has_comps:
                 comps_repo = self.compose.paths.work.comps_repo(arch, variant)
-                if final_output_dir != output_dir:
+                if final_output_dir != output_dir or self.lorax_use_koji_plugin:
                     comps_repo = translate_path(self.compose, comps_repo)
                 repos.append(comps_repo)
@@ -169,7 +169,6 @@ class BuildinstallPhase(PhaseBase):
                 "rootfs-size": rootfs_size,
                 "dracut-args": dracut_args,
                 "skip_branding": skip_branding,
-                "outputdir": output_dir,
                 "squashfs_only": squashfs_only,
                 "configuration_file": configuration_file,
             }
@@ -219,10 +218,6 @@ class BuildinstallPhase(PhaseBase):
         return repos

     def run(self):
-        lorax = LoraxWrapper()
-        product = self.compose.conf["release_name"]
-        version = self.compose.conf["release_version"]
-        release = self.compose.conf["release_version"]
         disc_type = self.compose.conf["disc_types"].get("dvd", "dvd")

         # Prepare kickstart file for final images.
@@ -239,7 +234,7 @@ class BuildinstallPhase(PhaseBase):
             )
             makedirs(final_output_dir)
             repo_baseurls = self.get_repos(arch)
-            if final_output_dir != output_dir:
+            if final_output_dir != output_dir or self.lorax_use_koji_plugin:
                 repo_baseurls = [translate_path(self.compose, r) for r in repo_baseurls]

             if self.buildinstall_method == "lorax":
@@ -275,29 +270,12 @@ class BuildinstallPhase(PhaseBase):
                         ),
                     )
                 )
-            elif self.buildinstall_method == "buildinstall":
-                volid = get_volid(self.compose, arch, disc_type=disc_type)
-                commands.append(
-                    (
-                        None,
-                        lorax.get_buildinstall_cmd(
-                            product,
-                            version,
-                            release,
-                            repo_baseurls,
-                            output_dir,
-                            is_final=self.compose.supported,
-                            buildarch=arch,
-                            volid=volid,
-                        ),
-                    )
-                )
             else:
                 raise ValueError(
                     "Unsupported buildinstall method: %s" % self.buildinstall_method
                 )

-        for (variant, cmd) in commands:
+        for variant, cmd in commands:
             self.pool.add(BuildinstallThread(self.pool))
             self.pool.queue_put(
                 (self.compose, arch, variant, cmd, self.pkgset_phase)
@@ -364,9 +342,17 @@ BOOT_CONFIGS = [
     "EFI/BOOT/BOOTX64.conf",
     "EFI/BOOT/grub.cfg",
 ]
+BOOT_IMAGES = [
+    "images/efiboot.img",
+]


 def tweak_configs(path, volid, ks_file, configs=BOOT_CONFIGS, logger=None):
+    """
+    Put escaped volume ID and possibly kickstart file into the boot
+    configuration files.
+    :returns: list of paths to modified config files
+    """
     volid_escaped = volid.replace(" ", r"\x20").replace("\\", "\\\\")
     volid_escaped_2 = volid_escaped.replace("\\", "\\\\")
     found_configs = []
@@ -374,7 +360,6 @@ def tweak_configs(path, volid, ks_file, configs=BOOT_CONFIGS, logger=None):
         config_path = os.path.join(path, config)
         if not os.path.exists(config_path):
             continue
-        found_configs.append(config)

         with open(config_path, "r") as f:
             data = original_data = f.read()
@@ -394,7 +379,12 @@ def tweak_configs(path, volid, ks_file, configs=BOOT_CONFIGS, logger=None):
         with open(config_path, "w") as f:
             f.write(data)

-        if logger and data != original_data:
+        if data != original_data:
+            found_configs.append(config)
+            if logger:
+                # Generally lorax should create file with correct volume id
+                # already. If we don't have a kickstart, this function should
+                # be a no-op.
                 logger.info("Boot config %s changed" % config_path)

     return found_configs
@@ -434,9 +424,8 @@ def tweak_buildinstall(
     if kickstart_file and found_configs:
         shutil.copy2(kickstart_file, os.path.join(dst, "ks.cfg"))

-    images = [
-        os.path.join(tmp_dir, "images", "efiboot.img"),
-    ]
+    images = [os.path.join(tmp_dir, img) for img in BOOT_IMAGES]
+    if found_configs:
         for image in images:
             if not os.path.isfile(image):
                 continue
@@ -446,7 +435,9 @@ def tweak_buildinstall(
                 logger=compose._logger,
                 use_guestmount=compose.conf.get("buildinstall_use_guestmount"),
             ) as mount_tmp_dir:
-                for config in BOOT_CONFIGS:
+                for config in found_configs:
+                    # Put each modified config file into the image (overwriting
+                    # the original).
                     config_path = os.path.join(tmp_dir, config)
                     config_in_image = os.path.join(mount_tmp_dir, config)
@@ -530,9 +521,19 @@ def link_boot_iso(compose, arch, variant, can_fail):
     setattr(img, "can_fail", can_fail)
     setattr(img, "deliverable", "buildinstall")
     try:
-        img.volume_id = iso.get_volume_id(new_boot_iso_path)
+        img.volume_id = iso.get_volume_id(
+            new_boot_iso_path,
+            compose.conf.get("createiso_use_xorrisofs"),
+        )
     except RuntimeError:
         pass
+    # In this phase we should add to the compose only the images that will be
+    # used purely as netinstall media. At this point lorax has generated the
+    # environment for creating ISOs and created them. In the `extra_isos` step
+    # the unneeded minimal boot ISO is overwritten by a new ISO that already
+    # contains the necessary packages from the included variants.
+    if variant.uid in compose.conf['netinstall_variants']:
         compose.im.add(variant.uid, arch, img)

     compose.log_info("[DONE ] %s" % msg)
@@ -718,8 +719,8 @@ class BuildinstallThread(WorkerThread):
         # input on RPM level.
         cmd_copy = copy(cmd)
         for key in ["outputdir", "sources"]:
-            del cmd_copy[key]
-            del old_metadata["cmd"][key]
+            cmd_copy.pop(key, None)
+            old_metadata["cmd"].pop(key, None)

         # Do not reuse if command line arguments are not the same.
         if old_metadata["cmd"] != cmd_copy:
@@ -814,8 +815,6 @@ class BuildinstallThread(WorkerThread):
         if buildinstall_method == "lorax":
             packages += ["lorax"]
             chown_paths.append(_get_log_dir(compose, variant, arch))
-        elif buildinstall_method == "buildinstall":
-            packages += ["anaconda"]
         packages += get_arch_variant_data(
             compose.conf, "buildinstall_packages", arch, variant
         )
@@ -836,13 +835,13 @@ class BuildinstallThread(WorkerThread):
         # Start the runroot task.
         runroot = Runroot(compose, phase="buildinstall")
+        task_id = None
         if buildinstall_method == "lorax" and lorax_use_koji_plugin:
-            runroot.run_pungi_buildinstall(
+            task_id = runroot.run_pungi_buildinstall(
                 cmd,
                 log_file=log_file,
                 arch=arch,
                 packages=packages,
-                mounts=[compose.topdir],
                 weight=compose.conf["runroot_weights"].get("buildinstall"),
             )
         else:
@@ -875,19 +874,17 @@ class BuildinstallThread(WorkerThread):
             log_dir = os.path.join(output_dir, "logs")
             copy_all(log_dir, final_log_dir)
         elif lorax_use_koji_plugin:
-            # If Koji pungi-buildinstall is used, then the buildinstall results are
-            # not stored directly in `output_dir` dir, but in "results" and "logs"
-            # subdirectories. We need to move them to final_output_dir.
-            results_dir = os.path.join(output_dir, "results")
-            move_all(results_dir, final_output_dir, rm_src_dir=True)
+            # If Koji pungi-buildinstall is used, then the buildinstall results
+            # are attached as outputs to the Koji task. Download and unpack
+            # them to the correct location.
+            download_and_extract_archive(
+                compose, task_id, "results.tar.gz", final_output_dir
+            )

-            # Get the log_dir into which we should copy the resulting log files.
+            # Download the logs into proper location too.
             log_fname = "buildinstall-%s-logs/dummy" % variant.uid
             final_log_dir = os.path.dirname(compose.paths.log.log_file(arch, log_fname))
-            if not os.path.exists(final_log_dir):
-                makedirs(final_log_dir)
-            log_dir = os.path.join(output_dir, "logs")
-            move_all(log_dir, final_log_dir, rm_src_dir=True)
+            download_and_extract_archive(compose, task_id, "logs.tar.gz", final_log_dir)

         rpms = runroot.get_buildroot_rpms()
         self._write_buildinstall_metadata(

View File

@@ -14,6 +14,7 @@
 # along with this program; if not, see <https://gnu.org/licenses/>.

+import itertools
 import os
 import random
 import shutil
@@ -23,7 +24,7 @@ import json
 import productmd.treeinfo
 from productmd.images import Image
 from kobo.threads import ThreadPool, WorkerThread
-from kobo.shortcuts import run, relative_path
+from kobo.shortcuts import run, relative_path, compute_file_checksums
 from six.moves import shlex_quote

 from pungi.wrappers import iso
@@ -154,6 +155,13 @@ class CreateisoPhase(PhaseLoggerMixin, PhaseBase):
                 disc_num=cmd["disc_num"],
                 disc_count=cmd["disc_count"],
             )
+            if self.compose.notifier:
+                self.compose.notifier.send(
+                    "createiso-imagedone",
+                    file=cmd["iso_path"],
+                    arch=arch,
+                    variant=str(variant),
+                )
     def try_reuse(self, cmd, variant, arch, opts):
         """Try to reuse image from previous compose.
@@ -181,6 +189,14 @@ class CreateisoPhase(PhaseLoggerMixin, PhaseBase):
         if not old_config:
             self.logger.info("%s - no config for old compose", log_msg)
             return False
+
+        # Disable reuse if unsigned packages are allowed. The older compose
+        # could have unsigned packages, and those may have been signed since
+        # then. We want to regenerate the ISO to have signatures.
+        if None in self.compose.conf["sigkeys"]:
+            self.logger.info("%s - unsigned packages are allowed", log_msg)
+            return False
+
         # Convert current configuration to JSON and back to encode it similarly
         # to the old one
         config = json.loads(json.dumps(self.compose.conf))
@@ -369,7 +385,7 @@ class CreateisoPhase(PhaseLoggerMixin, PhaseBase):
         if self.compose.notifier:
             self.compose.notifier.send("createiso-targets", deliverables=deliverables)

-        for (cmd, variant, arch) in commands:
+        for cmd, variant, arch in commands:
             self.pool.add(CreateIsoThread(self.pool))
             self.pool.queue_put((self.compose, cmd, variant, arch))
@@ -450,7 +466,14 @@ class CreateIsoThread(WorkerThread):
         try:
             run_createiso_command(
-                num, compose, bootable, arch, cmd["cmd"], mounts, log_file
+                num,
+                compose,
+                bootable,
+                arch,
+                cmd["cmd"],
+                mounts,
+                log_file,
+                cmd["iso_path"],
             )
         except Exception:
             self.fail(compose, cmd, variant, arch)
@@ -517,7 +540,10 @@ def add_iso_to_metadata(
     setattr(img, "can_fail", compose.can_fail(variant, arch, "iso"))
     setattr(img, "deliverable", "iso")
     try:
-        img.volume_id = iso.get_volume_id(iso_path)
+        img.volume_id = iso.get_volume_id(
+            iso_path,
+            compose.conf.get("createiso_use_xorrisofs"),
+        )
     except RuntimeError:
         pass
     if arch == "src":
@@ -528,7 +554,9 @@ def add_iso_to_metadata(
     return img


-def run_createiso_command(num, compose, bootable, arch, cmd, mounts, log_file):
+def run_createiso_command(
+    num, compose, bootable, arch, cmd, mounts, log_file, iso_path
+):
     packages = [
         "coreutils",
         "xorriso" if compose.conf.get("createiso_use_xorrisofs") else "genisoimage",
@@ -539,7 +567,6 @@ def run_createiso_command(
     if bootable:
         extra_packages = {
             "lorax": ["lorax", "which"],
-            "buildinstall": ["anaconda"],
         }
         packages.extend(extra_packages[compose.conf["buildinstall_method"]])
@@ -571,6 +598,76 @@ def run_createiso_command(
         weight=compose.conf["runroot_weights"].get("createiso"),
     )

+    if bootable and compose.conf.get("createiso_use_xorrisofs"):
+        fix_treeinfo_checksums(compose, iso_path, arch)
+
+
+def fix_treeinfo_checksums(compose, iso_path, arch):
+    """It is possible for the ISO to contain a .treeinfo file with incorrect
+    checksums. By modifying the ISO (adding files) some of the images may
+    change.
+
+    This function fixes that after the fact by looking for incorrect checksums,
+    recalculating them and updating the .treeinfo file. Since the size of the
+    file doesn't change, this seems to not change any images.
+    """
+    modified = False
+    with iso.mount(iso_path, compose._logger) as mountpoint:
+        ti = productmd.TreeInfo()
+        ti.load(os.path.join(mountpoint, ".treeinfo"))
+        for image, (type_, expected) in ti.checksums.checksums.items():
+            checksums = compute_file_checksums(os.path.join(mountpoint, image), [type_])
+            actual = checksums[type_]
+            if actual == expected:
+                # Everything fine here, skip to next image.
+                continue
+
+            compose.log_debug("%s: %s: checksum mismatch", iso_path, image)
+            # Update treeinfo with correct checksum
+            ti.checksums.checksums[image] = (type_, actual)
+            modified = True
+
+    if not modified:
+        compose.log_debug("%s: All checksums match, nothing to do.", iso_path)
+        return
+
+    try:
+        tmpdir = compose.mkdtemp(arch, prefix="fix-checksum-")
+        # Write modified .treeinfo
+        ti_path = os.path.join(tmpdir, ".treeinfo")
+        compose.log_debug("Storing modified .treeinfo in %s", ti_path)
+        ti.dump(ti_path)
+        # Write a modified DVD into a temporary path, that is atomically moved
+        # over the original file.
+        fixed_path = os.path.join(tmpdir, "fixed-checksum-dvd.iso")
+        cmd = ["xorriso"]
+        cmd.extend(
+            itertools.chain.from_iterable(
+                iso.xorriso_commands(arch, iso_path, fixed_path)
+            )
+        )
+        cmd.extend(["-map", ti_path, ".treeinfo"])
+        run(
+            cmd,
+            logfile=compose.paths.log.log_file(
+                arch, "checksum-fix_generate_%s" % os.path.basename(iso_path)
+            ),
+        )
+        # The modified ISO no longer has implanted MD5, so that needs to be
+        # fixed again.
+        compose.log_debug("Implanting new MD5 to %s", fixed_path)
+        run(
+            iso.get_implantisomd5_cmd(fixed_path, compose.supported),
+            logfile=compose.paths.log.log_file(
+                arch, "checksum-fix_implantisomd5_%s" % os.path.basename(iso_path)
+            ),
+        )
+        # All done, move the updated image to the final location.
+        compose.log_debug("Updating %s", iso_path)
+        os.rename(fixed_path, iso_path)
+    finally:
+        shutil.rmtree(tmpdir)
+
+
 def split_iso(compose, arch, variant, no_split=False, logger=None):
     """

View File

@@ -76,7 +76,7 @@ class ExtraIsosPhase(PhaseLoggerMixin, ConfigGuardedPhase, PhaseBase):
             for arch in sorted(arches):
                 commands.append((config, variant, arch))

-        for (config, variant, arch) in commands:
+        for config, variant, arch in commands:
             self.pool.add(ExtraIsosThread(self.pool, self.bi))
             self.pool.queue_put((self.compose, config, variant, arch))
@@ -166,6 +166,7 @@ class ExtraIsosThread(WorkerThread):
             log_file=compose.paths.log.log_file(
                 arch, "extraiso-%s" % os.path.basename(iso_path)
             ),
+            iso_path=iso_path,
         )

         img = add_iso_to_metadata(
@@ -204,6 +205,14 @@ class ExtraIsosThread(WorkerThread):
         if not old_config:
             self.pool.log_info("%s - no config for old compose", log_msg)
             return False
+
+        # Disable reuse if unsigned packages are allowed. The older compose
+        # could have unsigned packages, and those may have been signed since
+        # then. We want to regenerate the ISO to have signatures.
+        if None in compose.conf["sigkeys"]:
+            self.pool.log_info("%s - unsigned packages are allowed", log_msg)
+            return False
+
         # Convert current configuration to JSON and back to encode it similarly
         # to the old one
         config = json.loads(json.dumps(compose.conf))
@@ -420,6 +429,12 @@ def get_iso_contents(
             original_treeinfo,
             os.path.join(extra_files_dir, ".treeinfo"),
         )
+        tweak_repo_treeinfo(
+            compose,
+            include_variants,
+            original_treeinfo,
+            original_treeinfo,
+        )

     # Add extra files specific for the ISO
     files.update(
@@ -431,6 +446,45 @@ def get_iso_contents(
     return gp


+def tweak_repo_treeinfo(compose, include_variants, source_file, dest_file):
+    """
+    Add the variants listed in the `extra_isos -> include_variants` option to
+    the .treeinfo file of the main variant, pointing their package and
+    repository paths at the variant trees within the compose.
+    """
+    ti = productmd.treeinfo.TreeInfo()
+    ti.load(source_file)
+    main_variant = next(iter(ti.variants))
+    for variant_uid in include_variants:
+        variant = compose.all_variants[variant_uid]
+        var = productmd.treeinfo.Variant(ti)
+        var.id = variant.id
+        var.uid = variant.uid
+        var.name = variant.name
+        var.type = variant.type
+        ti.variants.add(var)
+
+    for variant_id in ti.variants:
+        var = ti.variants[variant_id]
+        if variant_id == main_variant:
+            var.paths.packages = 'Packages'
+            var.paths.repository = '.'
+        else:
+            var.paths.packages = os.path.join(
+                '../../..',
+                var.uid,
+                var.arch,
+                'os/Packages',
+            )
+            var.paths.repository = os.path.join(
+                '../../..',
+                var.uid,
+                var.arch,
+                'os',
+            )
+
+    ti.dump(dest_file, main_variant=main_variant)
+
+
 def tweak_treeinfo(compose, include_variants, source_file, dest_file):
     ti = load_and_tweak_treeinfo(source_file)
     for variant_uid in include_variants:
@@ -446,7 +500,6 @@ def tweak_treeinfo(compose, include_variants, source_file, dest_file):
         var = ti.variants[variant_id]
         var.paths.packages = os.path.join(var.uid, "Packages")
         var.paths.repository = var.uid
-
     ti.dump(dest_file)

View File

@@ -23,6 +23,7 @@ import threading
 from kobo.rpmlib import parse_nvra
 from kobo.shortcuts import run
 from productmd.rpms import Rpms
+from pungi.phases.pkgset.common import get_all_arches
 from six.moves import cPickle as pickle

 try:
@@ -90,7 +91,7 @@ class GatherPhase(PhaseBase):
         # check whether variants from configuration value
         # 'variant_as_lookaside' are correct
-        for (requiring, required) in variant_as_lookaside:
+        for requiring, required in variant_as_lookaside:
             if requiring in all_variants and required not in all_variants:
                 errors.append(
                     "variant_as_lookaside: variant %r doesn't exist but is "
@@ -99,7 +100,7 @@ class GatherPhase(PhaseBase):
         # check whether variants from configuration value
         # 'variant_as_lookaside' have same architectures
-        for (requiring, required) in variant_as_lookaside:
+        for requiring, required in variant_as_lookaside:
             if (
                 requiring in all_variants
                 and required in all_variants
@@ -235,7 +236,7 @@ def reuse_old_gather_packages(compose, arch, variant, package_sets, methods):
     if not hasattr(compose, "_gather_reused_variant_arch"):
         setattr(compose, "_gather_reused_variant_arch", [])
     variant_as_lookaside = compose.conf.get("variant_as_lookaside", [])
-    for (requiring, required) in variant_as_lookaside:
+    for requiring, required in variant_as_lookaside:
         if (
             requiring == variant.uid
             and (required, arch) not in compose._gather_reused_variant_arch
@@ -468,9 +469,7 @@ def gather_packages(compose, arch, variant, package_sets, fulltree_excludes=None
         )
     else:
         for source_name in ("module", "comps", "json"):
-
             packages, groups, filter_packages = get_variant_packages(
                 compose, arch, variant, source_name, package_sets
             )
@@ -575,7 +574,6 @@ def trim_packages(compose, arch, variant, pkg_map, parent_pkgs=None, remove_pkgs
     move_to_parent_pkgs = _mk_pkg_map()
     removed_pkgs = _mk_pkg_map()
     for pkg_type, pkgs in pkg_map.items():
-
         new_pkgs = []
         for pkg in pkgs:
             pkg_path = pkg["path"]
@@ -647,8 +645,14 @@ def _make_lookaside_repo(compose, variant, arch, pkg_map, package_sets=None):
             compose.paths.work.topdir(arch="global"), "download"
         )
         + "/",
-        "koji": lambda: pungi.wrappers.kojiwrapper.KojiWrapper(
-            compose
+        "koji": lambda: compose.conf.get(
+            "koji_cache",
+            pungi.wrappers.kojiwrapper.KojiWrapper(compose).koji_module.config.topdir,
+        ).rstrip("/")
+        + "/",
+        "kojimock": lambda: pungi.wrappers.kojiwrapper.KojiMockWrapper(
+            compose,
+            get_all_arches(compose),
         ).koji_module.config.topdir.rstrip("/")
         + "/",
     }
@@ -662,6 +666,11 @@ def _make_lookaside_repo(compose, variant, arch, pkg_map, package_sets=None):
                 # we need a union of all SRPMs.
                 if pkg_type == "srpm" or pkg_arch == arch:
                     for pkg in packages:
+                        if "lookaside" in pkg.get("flags", []):
+                            # We want to ignore lookaside packages, those will
+                            # be visible to the depending variants from the
+                            # lookaside repo directly.
+                            continue
                         pkg = pkg["path"]
                         if path_prefix and pkg.startswith(path_prefix):
                             pkg = pkg[len(path_prefix) :]

View File

@@ -47,9 +47,15 @@ class FakePackage(object):
     @property
     def files(self):
-        return [
-            os.path.join(dirname, basename) for (_, dirname, basename) in self.pkg.files
-        ]
+        paths = []
+        # createrepo_c.Package.files is a tuple, but its length differs across
+        # versions. The constants define index at which the related value is
+        # located.
+        for entry in self.pkg.files:
+            paths.append(
+                os.path.join(entry[cr.FILE_ENTRY_PATH], entry[cr.FILE_ENTRY_NAME])
+            )
+        return paths

     @property
     def provides(self):
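A small sketch of why indexing by the named constants is safer than tuple unpacking: the entry layout is looked up from the library instead of assumed. The fake entry below is built from the same constants, so it works whatever their values are (assumes the createrepo_c Python bindings are installed):

import os
import createrepo_c as cr

# Build a fake file entry using the library's own index constants.
entry = [None] * (max(cr.FILE_ENTRY_PATH, cr.FILE_ENTRY_NAME) + 1)
entry[cr.FILE_ENTRY_PATH] = "/usr/bin"
entry[cr.FILE_ENTRY_NAME] = "bash"
print(os.path.join(entry[cr.FILE_ENTRY_PATH], entry[cr.FILE_ENTRY_NAME]))
# -> /usr/bin/bash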

View File

@@ -25,6 +25,7 @@ from productmd.rpms import Rpms
 # results will be pulled into the compose.
 EXTENSIONS = {
     "docker": ["tar.gz", "tar.xz"],
+    "iso": ["iso"],
     "liveimg-squashfs": ["liveimg.squashfs"],
     "qcow": ["qcow"],
     "qcow2": ["qcow2"],
@@ -39,6 +40,7 @@ EXTENSIONS = {
     "vdi": ["vdi"],
     "vmdk": ["vmdk"],
     "vpc": ["vhd"],
+    "vhd-compressed": ["vhd.gz", "vhd.xz"],
     "vsphere-ova": ["vsphere.ova"],
 }
@@ -344,7 +346,9 @@ class CreateImageBuildThread(WorkerThread):
             # let's not change filename of koji outputs
             image_dest = os.path.join(image_dir, os.path.basename(image_info["path"]))

-            src_file = os.path.realpath(image_info["path"])
+            src_file = compose.koji_downloader.get_file(
+                os.path.realpath(image_info["path"])
+            )
             linker.link(src_file, image_dest, link_type=cmd["link_type"])

             # Update image manifest

View File

@@ -76,7 +76,7 @@ class ImageContainerThread(WorkerThread):
         )
         if koji.watch_task(task_id, log_file) != 0:
             raise RuntimeError(
-                "ImageContainer: task %s failed: see %s for details"
+                "ImageContainer task failed: %s. See %s for details"
                 % (task_id, log_file)
             )

pungi/phases/kiwibuild.py (new file, 229 lines)
View File

@@ -0,0 +1,229 @@
# -*- coding: utf-8 -*-
import os
from kobo.threads import ThreadPool, WorkerThread
from kobo import shortcuts
from productmd.images import Image
from . import base
from .. import util
from ..linker import Linker
from ..wrappers import kojiwrapper
from .image_build import EXTENSIONS
KIWIEXTENSIONS = [
("vhd-compressed", ["vhdfixed.xz"], "vhd.xz"),
("vagrant-libvirt", ["vagrant.libvirt.box"], "vagrant-libvirt.box"),
("vagrant-virtualbox", ["vagrant.virtualbox.box"], "vagrant-virtualbox.box"),
]
class KiwiBuildPhase(
base.PhaseLoggerMixin, base.ImageConfigMixin, base.ConfigGuardedPhase
):
name = "kiwibuild"
def __init__(self, compose):
super(KiwiBuildPhase, self).__init__(compose)
self.pool = ThreadPool(logger=self.logger)
def _get_arches(self, image_conf, arches):
"""Get an intersection of arches in the config dict and the given ones."""
if "arches" in image_conf:
arches = set(image_conf["arches"]) & arches
return sorted(arches)
@staticmethod
def _get_repo_urls(compose, repos, arch="$basearch"):
"""
Get list of repos with resolved repo URLs. Preserve repos defined
as dicts.
"""
resolved_repos = []
for repo in repos:
repo = util.get_repo_url(compose, repo, arch=arch)
if repo is None:
raise RuntimeError("Failed to resolve repo URL for %s" % repo)
resolved_repos.append(repo)
return resolved_repos
def _get_repo(self, image_conf, variant):
"""
        Get a list of repos. First included are those explicitly listed in
        config, followed by the repo for the current variant if it's not
        already in the list.
"""
repos = shortcuts.force_list(image_conf.get("repos", []))
if not variant.is_empty and variant.uid not in repos:
repos.append(variant.uid)
return KiwiBuildPhase._get_repo_urls(self.compose, repos, arch="$arch")
def run(self):
for variant in self.compose.get_variants():
arches = set([x for x in variant.arches if x != "src"])
for image_conf in self.get_config_block(variant):
build_arches = self._get_arches(image_conf, arches)
if not build_arches:
self.log_debug("skip: no arches")
continue
# these properties can be set per-image *or* as e.g.
# kiwibuild_description_scm or global_release in the config
generics = {
"release": self.get_release(image_conf),
"target": self.get_config(image_conf, "target"),
"descscm": self.get_config(image_conf, "description_scm"),
"descpath": self.get_config(image_conf, "description_path"),
"type": self.get_config(image_conf, "type"),
"type_attr": self.get_config(image_conf, "type_attr"),
"bundle_name_format": self.get_config(
image_conf, "bundle_name_format"
),
}
repo = self._get_repo(image_conf, variant)
failable_arches = image_conf.pop("failable", [])
if failable_arches == ["*"]:
failable_arches = image_conf["arches"]
self.pool.add(RunKiwiBuildThread(self.pool))
self.pool.queue_put(
(
self.compose,
variant,
image_conf,
build_arches,
generics,
repo,
failable_arches,
)
)
self.pool.start()
class RunKiwiBuildThread(WorkerThread):
def process(self, item, num):
(compose, variant, config, arches, generics, repo, failable_arches) = item
self.failable_arches = failable_arches
# the Koji task as a whole can only fail if *all* arches are failable
can_task_fail = set(failable_arches).issuperset(set(arches))
self.num = num
with util.failable(
compose,
can_task_fail,
variant,
"*",
"kiwibuild",
logger=self.pool._logger,
):
self.worker(compose, variant, config, arches, generics, repo)
def worker(self, compose, variant, config, arches, generics, repo):
msg = "kiwibuild task for variant %s" % variant.uid
self.pool.log_info("[BEGIN] %s" % msg)
koji = kojiwrapper.KojiWrapper(compose)
koji.login()
task_id = koji.koji_proxy.kiwiBuild(
generics["target"],
arches,
generics["descscm"],
generics["descpath"],
profile=config["kiwi_profile"],
release=generics["release"],
repos=repo,
type=generics["type"],
type_attr=generics["type_attr"],
result_bundle_name_format=generics["bundle_name_format"],
# this ensures the task won't fail if only failable arches fail
optional_arches=self.failable_arches,
)
koji.save_task_id(task_id)
# Wait for it to finish and capture the output into log file.
log_dir = os.path.join(compose.paths.log.topdir(), "kiwibuild")
util.makedirs(log_dir)
log_file = os.path.join(
log_dir, "%s-%s-watch-task.log" % (variant.uid, self.num)
)
if koji.watch_task(task_id, log_file) != 0:
raise RuntimeError(
"kiwiBuild task failed: %s. See %s for details" % (task_id, log_file)
)
# Refresh koji session which may have timed out while the task was
# running. Watching is done via a subprocess, so the session is
# inactive.
koji = kojiwrapper.KojiWrapper(compose)
linker = Linker(logger=self.pool._logger)
# Process all images in the build. There should be one for each
# architecture, but we don't verify that.
paths = koji.get_image_paths(task_id)
for arch, paths in paths.items():
for path in paths:
type_, format_ = _find_type_and_format(path)
if not format_:
# Path doesn't match any known type.
continue
# image_dir is absolute path to which the image should be copied.
# We also need the same path as relative to compose directory for
# including in the metadata.
image_dir = compose.paths.compose.image_dir(variant) % {"arch": arch}
rel_image_dir = compose.paths.compose.image_dir(
variant, relative=True
) % {"arch": arch}
util.makedirs(image_dir)
filename = os.path.basename(path)
image_dest = os.path.join(image_dir, filename)
src_file = compose.koji_downloader.get_file(path)
linker.link(src_file, image_dest, link_type=compose.conf["link_type"])
# Update image manifest
img = Image(compose.im)
# Get the manifest type from the config if supplied, otherwise we
# determine the manifest type based on the koji output
img.type = type_
img.format = format_
img.path = os.path.join(rel_image_dir, filename)
img.mtime = util.get_mtime(image_dest)
img.size = util.get_file_size(image_dest)
img.arch = arch
img.disc_number = 1 # We don't expect multiple disks
img.disc_count = 1
img.bootable = False
img.subvariant = config.get("subvariant", variant.uid)
setattr(img, "can_fail", arch in self.failable_arches)
setattr(img, "deliverable", "kiwibuild")
compose.im.add(variant=variant.uid, arch=arch, image=img)
self.pool.log_info("[DONE ] %s (task id: %s)" % (msg, task_id))
def _find_type_and_format(path):
for type_, suffixes in EXTENSIONS.items():
for suffix in suffixes:
if path.endswith(suffix):
return type_, suffix
# these are our kiwi-exclusive mappings for images whose extensions
# aren't quite the same as imagefactory
for type_, suffixes, format_ in KIWIEXTENSIONS:
if any(path.endswith(suffix) for suffix in suffixes):
return type_, format_
return None, None
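Illustrative file names run through _find_type_and_format, showing the fallback from the shared EXTENSIONS table to the kiwi-specific KIWIEXTENSIONS mappings (all paths are made up):

print(_find_type_and_format("Fedora-41.x86_64.qcow2"))
# -> ('qcow2', 'qcow2')            matched via EXTENSIONS
print(_find_type_and_format("Fedora-41.x86_64.vhdfixed.xz"))
# -> ('vhd-compressed', 'vhd.xz')  matched via KIWIEXTENSIONS
print(_find_type_and_format("Fedora-41.vagrant.libvirt.box"))
# -> ('vagrant-libvirt', 'vagrant-libvirt.box')
print(_find_type_and_format("build.log"))
# -> (None, None)                  not an image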

View File

@@ -1,406 +0,0 @@
# -*- coding: utf-8 -*-
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; version 2 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Library General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, see <https://gnu.org/licenses/>.
import os
import sys
import time
import shutil
from kobo.threads import ThreadPool, WorkerThread
from kobo.shortcuts import run, save_to_file, force_list
from productmd.images import Image
from six.moves import shlex_quote
from pungi.wrappers.kojiwrapper import KojiWrapper
from pungi.wrappers import iso
from pungi.phases import base
from pungi.util import makedirs, get_mtime, get_file_size, failable
from pungi.util import get_repo_urls
# HACK: define cmp in python3
if sys.version_info[0] == 3:
def cmp(a, b):
return (a > b) - (a < b)
class LiveImagesPhase(
base.PhaseLoggerMixin, base.ImageConfigMixin, base.ConfigGuardedPhase
):
name = "live_images"
def __init__(self, compose):
super(LiveImagesPhase, self).__init__(compose)
self.pool = ThreadPool(logger=self.logger)
def _get_repos(self, arch, variant, data):
repos = []
if not variant.is_empty:
repos.append(variant.uid)
repos.extend(force_list(data.get("repo", [])))
return get_repo_urls(self.compose, repos, arch=arch)
def run(self):
symlink_isos_to = self.compose.conf.get("symlink_isos_to")
commands = []
for variant in self.compose.all_variants.values():
for arch in variant.arches + ["src"]:
for data in self.get_config_block(variant, arch):
subvariant = data.get("subvariant", variant.uid)
type = data.get("type", "live")
if type == "live":
dest_dir = self.compose.paths.compose.iso_dir(
arch, variant, symlink_to=symlink_isos_to
)
elif type == "appliance":
dest_dir = self.compose.paths.compose.image_dir(
variant, symlink_to=symlink_isos_to
)
dest_dir = dest_dir % {"arch": arch}
makedirs(dest_dir)
else:
raise RuntimeError("Unknown live image type %s" % type)
if not dest_dir:
continue
cmd = {
"name": data.get("name"),
"version": self.get_version(data),
"release": self.get_release(data),
"dest_dir": dest_dir,
"build_arch": arch,
"ks_file": data["kickstart"],
"ksurl": self.get_ksurl(data),
# Used for images wrapped in RPM
"specfile": data.get("specfile", None),
# Scratch (only taken in consideration if specfile
# specified) For images wrapped in rpm is scratch
# disabled by default For other images is scratch
# always on
"scratch": data.get("scratch", False),
"sign": False,
"type": type,
"label": "", # currently not used
"subvariant": subvariant,
"failable_arches": data.get("failable", []),
# First see if live_target is specified, then fall back
# to regular setup of local, phase and global setting.
"target": self.compose.conf.get("live_target")
or self.get_config(data, "target"),
}
cmd["repos"] = self._get_repos(arch, variant, data)
# Signing of the rpm wrapped image
if not cmd["scratch"] and data.get("sign"):
cmd["sign"] = True
cmd["filename"] = self._get_file_name(
arch, variant, cmd["name"], cmd["version"]
)
commands.append((cmd, variant, arch))
for (cmd, variant, arch) in commands:
self.pool.add(CreateLiveImageThread(self.pool))
self.pool.queue_put((self.compose, cmd, variant, arch))
self.pool.start()
def _get_file_name(self, arch, variant, name=None, version=None):
if self.compose.conf["live_images_no_rename"]:
return None
disc_type = self.compose.conf["disc_types"].get("live", "live")
format = (
"%(compose_id)s-%(variant)s-%(arch)s-%(disc_type)s%(disc_num)s%(suffix)s"
)
# Custom name (prefix)
if name:
custom_iso_name = name
if version:
custom_iso_name += "-%s" % version
format = (
custom_iso_name
+ "-%(variant)s-%(arch)s-%(disc_type)s%(disc_num)s%(suffix)s"
)
# XXX: hardcoded disc_num
return self.compose.get_image_name(
arch, variant, disc_type=disc_type, disc_num=None, format=format
)
class CreateLiveImageThread(WorkerThread):
EXTS = (".iso", ".raw.xz")
def process(self, item, num):
compose, cmd, variant, arch = item
self.failable_arches = cmd.get("failable_arches", [])
self.can_fail = bool(self.failable_arches)
with failable(
compose,
self.can_fail,
variant,
arch,
"live",
cmd.get("subvariant"),
logger=self.pool._logger,
):
self.worker(compose, cmd, variant, arch, num)
def worker(self, compose, cmd, variant, arch, num):
self.basename = "%(name)s-%(version)s-%(release)s" % cmd
log_file = compose.paths.log.log_file(arch, "liveimage-%s" % self.basename)
subvariant = cmd.pop("subvariant")
imgname = "%s-%s-%s-%s" % (
compose.ci_base.release.short,
subvariant,
"Live" if cmd["type"] == "live" else "Disk",
arch,
)
msg = "Creating ISO (arch: %s, variant: %s): %s" % (
arch,
variant,
self.basename,
)
self.pool.log_info("[BEGIN] %s" % msg)
koji_wrapper = KojiWrapper(compose)
_, version = compose.compose_id.rsplit("-", 1)
name = cmd["name"] or imgname
version = cmd["version"] or version
archive = False
if cmd["specfile"] and not cmd["scratch"]:
# Non scratch build are allowed only for rpm wrapped images
archive = True
koji_cmd = koji_wrapper.get_create_image_cmd(
name,
version,
cmd["target"],
cmd["build_arch"],
cmd["ks_file"],
cmd["repos"],
image_type=cmd["type"],
wait=True,
archive=archive,
specfile=cmd["specfile"],
release=cmd["release"],
ksurl=cmd["ksurl"],
)
# avoid race conditions?
# Kerberos authentication failed:
# Permission denied in replay cache code (-1765328215)
time.sleep(num * 3)
output = koji_wrapper.run_blocking_cmd(koji_cmd, log_file=log_file)
if output["retcode"] != 0:
raise RuntimeError(
"LiveImage task failed: %s. See %s for more details."
% (output["task_id"], log_file)
)
# copy finished image to isos/
image_path = [
path
for path in koji_wrapper.get_image_path(output["task_id"])
if self._is_image(path)
]
if len(image_path) != 1:
raise RuntimeError(
"Got %d images from task %d, expected 1."
% (len(image_path), output["task_id"])
)
image_path = image_path[0]
filename = cmd.get("filename") or os.path.basename(image_path)
destination = os.path.join(cmd["dest_dir"], filename)
shutil.copy2(image_path, destination)
# copy finished rpm to isos/ (if rpm wrapped ISO was built)
if cmd["specfile"]:
rpm_paths = koji_wrapper.get_wrapped_rpm_path(output["task_id"])
if cmd["sign"]:
# Sign the rpm wrapped images and get their paths
self.pool.log_info(
"Signing rpm wrapped images in task_id: %s (expected key ID: %s)"
% (output["task_id"], compose.conf.get("signing_key_id"))
)
signed_rpm_paths = self._sign_image(
koji_wrapper, compose, cmd, output["task_id"]
)
if signed_rpm_paths:
rpm_paths = signed_rpm_paths
for rpm_path in rpm_paths:
shutil.copy2(rpm_path, cmd["dest_dir"])
if cmd["type"] == "live":
# ISO manifest only makes sense for live images
self._write_manifest(destination)
self._add_to_images(
compose,
variant,
subvariant,
arch,
cmd["type"],
self._get_format(image_path),
destination,
)
self.pool.log_info("[DONE ] %s (task id: %s)" % (msg, output["task_id"]))
def _add_to_images(self, compose, variant, subvariant, arch, type, format, path):
"""Adds the image to images.json"""
img = Image(compose.im)
img.type = "raw-xz" if type == "appliance" else type
img.format = format
img.path = os.path.relpath(path, compose.paths.compose.topdir())
img.mtime = get_mtime(path)
img.size = get_file_size(path)
img.arch = arch
img.disc_number = 1 # We don't expect multiple disks
img.disc_count = 1
img.bootable = True
img.subvariant = subvariant
setattr(img, "can_fail", self.can_fail)
setattr(img, "deliverable", "live")
compose.im.add(variant=variant.uid, arch=arch, image=img)
def _is_image(self, path):
for ext in self.EXTS:
if path.endswith(ext):
return True
return False
def _get_format(self, path):
"""Get format based on extension."""
for ext in self.EXTS:
if path.endswith(ext):
return ext[1:]
raise RuntimeError("Getting format for unknown image %s" % path)
def _write_manifest(self, iso_path):
"""Generate manifest for ISO at given path.
:param iso_path: (str) absolute path to the ISO
"""
dir, filename = os.path.split(iso_path)
run("cd %s && %s" % (shlex_quote(dir), iso.get_manifest_cmd(filename)))
def _sign_image(self, koji_wrapper, compose, cmd, koji_task_id):
signing_key_id = compose.conf.get("signing_key_id")
signing_command = compose.conf.get("signing_command")
if not signing_key_id:
self.pool.log_warning(
"Signing is enabled but signing_key_id is not specified"
)
self.pool.log_warning("Signing skipped")
return None
if not signing_command:
self.pool.log_warning(
"Signing is enabled but signing_command is not specified"
)
self.pool.log_warning("Signing skipped")
return None
# Prepare signing log file
signing_log_file = compose.paths.log.log_file(
cmd["build_arch"], "live_images-signing-%s" % self.basename
)
# Sign the rpm wrapped images
try:
sign_builds_in_task(
koji_wrapper,
koji_task_id,
signing_command,
log_file=signing_log_file,
signing_key_password=compose.conf.get("signing_key_password"),
)
except RuntimeError:
self.pool.log_error(
"Error while signing rpm wrapped images. See log: %s" % signing_log_file
)
raise
# Get pats to the signed rpms
signing_key_id = signing_key_id.lower() # Koji uses lowercase in paths
rpm_paths = koji_wrapper.get_signed_wrapped_rpms_paths(
koji_task_id, signing_key_id
)
# Wait until files are available
if wait_paths(rpm_paths, 60 * 15):
# Files are ready
return rpm_paths
# Signed RPMs are not available
self.pool.log_warning("Signed files are not available: %s" % rpm_paths)
self.pool.log_warning("Unsigned files will be used")
return None
def wait_paths(paths, timeout=60):
started = time.time()
remaining = paths[:]
while True:
for path in remaining[:]:
if os.path.exists(path):
remaining.remove(path)
if not remaining:
break
time.sleep(1)
if timeout >= 0 and (time.time() - started) > timeout:
return False
return True
def sign_builds_in_task(
koji_wrapper, task_id, signing_command, log_file=None, signing_key_password=None
):
# Get list of nvrs that should be signed
nvrs = koji_wrapper.get_build_nvrs(task_id)
if not nvrs:
# No builds are available (scratch build, etc.?)
return
# Append builds to sign_cmd
for nvr in nvrs:
signing_command += " '%s'" % nvr
# Log signing command before password is filled in it
if log_file:
save_to_file(log_file, signing_command, append=True)
# Fill password into the signing command
if signing_key_password:
signing_command = signing_command % {
"signing_key_password": signing_key_password
}
# Sign the builds
run(signing_command, can_fail=False, show_cmd=False, logfile=log_file)
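For context, sign_builds_in_task treats signing_command as a shell template: quoted NVRs are appended and a %(signing_key_password)s placeholder is filled in right before execution. A minimal sketch of matching configuration values (the command and key name are illustrative, not taken from this repository):

# Hypothetical compose configuration (illustrative only):
#   signing_key_id = "deadbeef"
#   signing_command = "rpm-sign --key release-key --password '%(signing_key_password)s'"
#
# With two builds in the task, the executed command would look roughly like:
#   rpm-sign --key release-key --password '<secret>' 'foo-1.0-1.el8' 'bar-2.0-3.el8'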

View File

@@ -182,7 +182,9 @@ class LiveMediaThread(WorkerThread):
             # let's not change filename of koji outputs
             image_dest = os.path.join(image_dir, os.path.basename(image_info["path"]))
-            src_file = os.path.realpath(image_info["path"])
+            src_file = compose.koji_downloader.get_file(
+                os.path.realpath(image_info["path"])
+            )
             linker.link(src_file, image_dest, link_type=link_type)

         # Update image manifest

View File

@@ -134,7 +134,7 @@ class OSBSThread(WorkerThread):
         # though there is not much there).
         if koji.watch_task(task_id, log_file) != 0:
             raise RuntimeError(
-                "OSBS: task %s failed: see %s for details" % (task_id, log_file)
+                "OSBS task failed: %s. See %s for details" % (task_id, log_file)
             )

         scratch = config.get("scratch", False)
@@ -154,7 +154,7 @@ class OSBSThread(WorkerThread):
                 reuse_file,
             )

-        self.pool.log_info("[DONE ] %s" % msg)
+        self.pool.log_info("[DONE ] %s (task id: %s)" % (msg, task_id))

     def _get_image_conf(self, compose, config):
         """Get image-build.conf from git repo.

View File

@@ -27,6 +27,35 @@ class OSBuildPhase(
         arches = set(image_conf["arches"]) & arches
         return sorted(arches)

+    @staticmethod
+    def _get_repo_urls(compose, repos, arch="$basearch"):
+        """
+        Get list of repos with resolved repo URLs. Preserve repos defined
+        as dicts.
+        """
+        resolved_repos = []
+
+        for repo in repos:
+            if isinstance(repo, dict):
+                try:
+                    url = repo["baseurl"]
+                except KeyError:
+                    raise RuntimeError(
+                        "`baseurl` is required in repo dict %s" % str(repo)
+                    )
+                url = util.get_repo_url(compose, url, arch=arch)
+                if url is None:
+                    raise RuntimeError("Failed to resolve repo URL for %s" % str(repo))
+                repo["baseurl"] = url
+                resolved_repos.append(repo)
+            else:
+                repo = util.get_repo_url(compose, repo, arch=arch)
+                if repo is None:
+                    raise RuntimeError("Failed to resolve repo URL for %s" % repo)
+                resolved_repos.append(repo)
+
+        return resolved_repos
+
     def _get_repo(self, image_conf, variant):
         """
         Get a list of repos. First included are those explicitly listed in
@@ -38,7 +67,7 @@ class OSBuildPhase(
         if not variant.is_empty and variant.uid not in repos:
             repos.append(variant.uid)

-        return util.get_repo_urls(self.compose, repos, arch="$arch")
+        return OSBuildPhase._get_repo_urls(self.compose, repos, arch="$arch")

     def run(self):
         for variant in self.compose.get_variants():
@@ -130,6 +159,10 @@ class RunOSBuildThread(WorkerThread):
         if upload_options:
             opts["upload_options"] = upload_options

+        customizations = config.get("customizations")
+        if customizations:
+            opts["customizations"] = customizations
+
         if release:
             opts["release"] = release

         task_id = koji.koji_proxy.osbuildImage(
@@ -152,7 +185,7 @@ class RunOSBuildThread(WorkerThread):
         )
         if koji.watch_task(task_id, log_file) != 0:
             raise RuntimeError(
-                "OSBuild: task %s failed: see %s for details" % (task_id, log_file)
+                "OSBuild task failed: %s. See %s for details" % (task_id, log_file)
             )

         # Refresh koji session which may have timed out while the task was
@@ -183,16 +216,27 @@ class RunOSBuildThread(WorkerThread):
             # image_dir is absolute path to which the image should be copied.
             # We also need the same path as relative to compose directory for
             # including in the metadata.
-            image_dir = compose.paths.compose.image_dir(variant) % {"arch": arch}
-            rel_image_dir = compose.paths.compose.image_dir(variant, relative=True) % {
-                "arch": arch
-            }
+            if archive["type_name"] == "iso":
+                # If the produced image is actually an ISO, it should go to
+                # iso/ subdirectory.
+                image_dir = compose.paths.compose.iso_dir(arch, variant)
+                rel_image_dir = compose.paths.compose.iso_dir(
+                    arch, variant, relative=True
+                )
+            else:
+                image_dir = compose.paths.compose.image_dir(variant) % {"arch": arch}
+                rel_image_dir = compose.paths.compose.image_dir(
+                    variant, relative=True
+                ) % {"arch": arch}
             util.makedirs(image_dir)

             image_dest = os.path.join(image_dir, archive["filename"])
-            src_file = os.path.join(
-                koji.koji_module.pathinfo.imagebuild(build_info), archive["filename"]
-            )
+            src_file = compose.koji_downloader.get_file(
+                os.path.join(
+                    koji.koji_module.pathinfo.imagebuild(build_info),
+                    archive["filename"],
+                ),
+            )

             linker.link(src_file, image_dest, link_type=compose.conf["link_type"])
@@ -209,7 +253,24 @@ class RunOSBuildThread(WorkerThread):
             # Update image manifest
             img = Image(compose.im)
+            # Get the manifest type from the config if supplied, otherwise we
+            # determine the manifest type based on the koji output
+            img.type = config.get("manifest_type")
+            if not img.type:
+                if archive["type_name"] != "iso":
-            img.type = archive["type_name"]
+                    img.type = archive["type_name"]
+                else:
+                    fn = archive["filename"].lower()
+                    if "ostree" in fn:
+                        img.type = "dvd-ostree-osbuild"
+                    elif "live" in fn:
+                        img.type = "live-osbuild"
+                    elif "netinst" in fn or "boot" in fn:
+                        img.type = "boot"
+                    else:
+                        img.type = "dvd"
             img.format = suffix
             img.path = os.path.join(rel_image_dir, archive["filename"])
             img.mtime = util.get_mtime(image_dest)
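A hedged sketch of an osbuild configuration block exercising the options this diff touches (customizations, dict-style repo entries requiring "baseurl", and manifest_type); the variant name and all values are illustrative, not taken from the source:

osbuild = {
    "^Server$": [
        {
            "name": "server-image",
            "distro": "rhel-9",
            "image_types": ["qcow2"],
            # Passed straight through to the Koji osbuildImage task.
            "customizations": {"installation_device": "/dev/vda"},
            # Plain URLs/variant UIDs or dicts; dicts must carry "baseurl"
            # (see _get_repo_urls above).
            "repo": ["Server", {"baseurl": "https://example.com/extra/$basearch/"}],
            # Overrides the image type otherwise guessed from the Koji output.
            "manifest_type": "qcow2",
        }
    ]
}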

View File

@@ -85,7 +85,7 @@ class OSTreeThread(WorkerThread):
         comps_repo = compose.paths.work.comps_repo(
             "$basearch", variant=variant, create_dir=False
         )
-        repos = shortcuts.force_list(config["repo"]) + self.repos
+        repos = shortcuts.force_list(config.get("repo", [])) + self.repos
         if compose.has_comps:
             repos.append(translate_path(compose, comps_repo))
         repos = get_repo_dicts(repos, logger=self.pool)
@@ -168,7 +168,9 @@ class OSTreeThread(WorkerThread):
                 ("unified-core", config.get("unified_core", False)),
             ]
         )

-        packages = ["pungi", "ostree", "rpm-ostree"]
+        default_packages = ["pungi", "ostree", "rpm-ostree"]
+        additional_packages = config.get("runroot_packages", [])
+        packages = default_packages + additional_packages
         log_file = os.path.join(self.logdir, "runroot.log")
         mounts = [compose.topdir, config["ostree_repo"]]
         runroot = Runroot(compose, phase="ostree")
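Correspondingly, an ostree configuration block could extend the runroot with extra packages via the new option (a sketch; values are illustrative):

ostree = {
    "^Atomic$": [
        {
            "treefile": "fedora-atomic.yaml",
            "config_url": "https://example.com/ostree-config.git",
            "ostree_repo": "/srv/ostree/repo",
            # New: installed on top of the defaults
            # ["pungi", "ostree", "rpm-ostree"].
            "runroot_packages": ["selinux-policy-targeted"],
        }
    ]
}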

View File

@@ -0,0 +1,190 @@
# -*- coding: utf-8 -*-

import copy
import json
import os

from kobo import shortcuts
from kobo.threads import ThreadPool, WorkerThread
from productmd.images import Image

from pungi.runroot import Runroot
from .base import ConfigGuardedPhase
from .. import util
from ..util import get_repo_dicts, translate_path
from ..wrappers import scm


class OSTreeContainerPhase(ConfigGuardedPhase):
    name = "ostree_container"

    def __init__(self, compose, pkgset_phase=None):
        super(OSTreeContainerPhase, self).__init__(compose)
        self.pool = ThreadPool(logger=self.compose._logger)
        self.pkgset_phase = pkgset_phase

    def get_repos(self):
        return [
            translate_path(
                self.compose,
                self.compose.paths.work.pkgset_repo(
                    pkgset.name, "$basearch", create_dir=False
                ),
            )
            for pkgset in self.pkgset_phase.package_sets
        ]

    def _enqueue(self, variant, arch, conf):
        self.pool.add(OSTreeContainerThread(self.pool, self.get_repos()))
        self.pool.queue_put((self.compose, variant, arch, conf))

    def run(self):
        if isinstance(self.compose.conf.get(self.name), dict):
            for variant in self.compose.get_variants():
                for conf in self.get_config_block(variant):
                    for arch in conf.get("arches", []) or variant.arches:
                        self._enqueue(variant, arch, conf)
        else:
            # Legacy code path to support original configuration.
            for variant in self.compose.get_variants():
                for arch in variant.arches:
                    for conf in self.get_config_block(variant, arch):
                        self._enqueue(variant, arch, conf)

        self.pool.start()


class OSTreeContainerThread(WorkerThread):
    def __init__(self, pool, repos):
        super(OSTreeContainerThread, self).__init__(pool)
        self.repos = repos

    def process(self, item, num):
        compose, variant, arch, config = item
        self.num = num
        failable_arches = config.get("failable", [])
        self.can_fail = util.can_arch_fail(failable_arches, arch)
        with util.failable(compose, self.can_fail, variant, arch, "ostree-container"):
            self.worker(compose, variant, arch, config)

    def worker(self, compose, variant, arch, config):
        msg = "OSTree container phase for variant %s, arch %s" % (variant.uid, arch)
        self.pool.log_info("[BEGIN] %s" % msg)
        workdir = compose.paths.work.topdir("ostree-container-%d" % self.num)
        self.logdir = compose.paths.log.topdir(
            "%s/%s/ostree-container-%d" % (arch, variant.uid, self.num)
        )
        repodir = os.path.join(workdir, "config_repo")
        self._clone_repo(
            compose,
            repodir,
            config["config_url"],
            config.get("config_branch", "main"),
        )

        repos = shortcuts.force_list(config.get("repo", [])) + self.repos
        repos = get_repo_dicts(repos, logger=self.pool)

        # copy the original config and update before save to a json file
        new_config = copy.copy(config)

        # repos in configuration can have repo url set to variant UID,
        # update it to have the actual url that we just translated.
        new_config.update({"repo": repos})

        # remove elements that are unnecessary for the
        # 'pungi-make-ostree container' script; it doesn't hurt to keep
        # them, but removing them reduces confusion
        for k in [
            "treefile",
            "config_url",
            "config_branch",
            "failable",
            "version",
        ]:
            new_config.pop(k, None)

        # write a json file to save the configuration, so the
        # 'pungi-make-ostree container' command can make use of it
        extra_config_file = os.path.join(workdir, "extra_config.json")
        with open(extra_config_file, "w") as f:
            json.dump(new_config, f, indent=4)

        self._run_ostree_container_cmd(
            compose, variant, arch, config, repodir, extra_config_file=extra_config_file
        )

        self.pool.log_info("[DONE ] %s" % (msg))

    def _run_ostree_container_cmd(
        self, compose, variant, arch, config, config_repo, extra_config_file=None
    ):
        target_dir = compose.paths.compose.image_dir(variant) % {"arch": arch}
        util.makedirs(target_dir)
        version = util.version_generator(compose, config.get("version"))
        archive_name = "%s-%s-%s" % (
            compose.conf["release_short"],
            variant.uid,
            version,
        )

        # Run the pungi-make-ostree command locally to create a script to
        # execute in runroot environment.
        cmd = [
            "pungi-make-ostree",
            "container",
            "--log-dir=%s" % self.logdir,
            "--name=%s" % archive_name,
            "--path=%s" % target_dir,
            "--treefile=%s" % os.path.join(config_repo, config["treefile"]),
            "--extra-config=%s" % extra_config_file,
            "--version=%s" % version,
        ]
        _, runroot_script = shortcuts.run(cmd, universal_newlines=True)

        default_packages = ["ostree", "rpm-ostree", "selinux-policy-targeted"]
        additional_packages = config.get("runroot_packages", [])
        packages = default_packages + additional_packages
        log_file = os.path.join(self.logdir, "runroot.log")
        # TODO: Use to get previous build
        mounts = [compose.topdir]

        runroot = Runroot(compose, phase="ostree_container")
        runroot.run(
            " && ".join(runroot_script.splitlines()),
            log_file=log_file,
            arch=arch,
            packages=packages,
            mounts=mounts,
            new_chroot=True,
            weight=compose.conf["runroot_weights"].get("ostree"),
        )

        fullpath = os.path.join(target_dir, "%s.ociarchive" % archive_name)

        # Update image manifest. The output of this phase is always an OCI
        # archive, so type and format are fixed.
        img = Image(compose.im)
        img.type = "ociarchive"
        img.format = "ociarchive"
        img.path = os.path.relpath(fullpath, compose.paths.compose.topdir())
        img.mtime = util.get_mtime(fullpath)
        img.size = util.get_file_size(fullpath)
        img.arch = arch
        img.disc_number = 1
        img.disc_count = 1
        img.bootable = False
        img.subvariant = config.get("subvariant", variant.uid)
        setattr(img, "can_fail", self.can_fail)
        setattr(img, "deliverable", "ostree-container")
        compose.im.add(variant=variant.uid, arch=arch, image=img)

    def _clone_repo(self, compose, repodir, url, branch):
        scm.get_dir_from_scm(
            {"scm": "git", "repo": url, "branch": branch, "dir": "."},
            repodir,
            compose=compose,
        )
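A minimal sketch of the new-style (dict) configuration this phase accepts; the option names come from the code above, the variant name and values are illustrative:

ostree_container = {
    "^Minimal$": [
        {
            "treefile": "minimal.yaml",
            "config_url": "https://example.com/ostree-config.git",
            "config_branch": "main",
            # Variant UIDs or repo URLs/dicts; pkgset repos are added on top.
            "repo": ["Server"],
            "arches": ["x86_64"],
            "version": "41",
            "runroot_packages": ["skopeo"],
        }
    ]
}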

View File

@@ -38,12 +38,17 @@ from pungi.phases.createrepo import add_modular_metadata

 def populate_arch_pkgsets(compose, path_prefix, global_pkgset):
     result = {}
-    exclusive_noarch = compose.conf["pkgset_exclusive_arch_considers_noarch"]
     for arch in compose.get_arches():
         compose.log_info("Populating package set for arch: %s", arch)
         is_multilib = is_arch_multilib(compose.conf, arch)
         arches = get_valid_arches(arch, is_multilib, add_src=True)
-        pkgset = global_pkgset.subset(arch, arches, exclusive_noarch=exclusive_noarch)
+        pkgset = global_pkgset.subset(
+            arch,
+            arches,
+            exclusive_noarch=compose.conf["pkgset_exclusive_arch_considers_noarch"],
+            inherit_to_noarch=compose.conf["pkgset_inherit_exclusive_arch_to_noarch"],
+        )
         pkgset.save_file_list(
             compose.paths.work.package_list(arch=arch, pkgset=global_pkgset),
             remove_path_prefix=path_prefix,
View File

@@ -23,11 +23,15 @@
 import itertools
 import json
 import os
 import time
+
+import pgpy
+import rpm
 from six.moves import cPickle as pickle
+from functools import partial

 import kobo.log
 import kobo.pkgset
 import kobo.rpmlib
+from kobo.shortcuts import compute_file_checksums
 from kobo.threads import WorkerThread, ThreadPool
@@ -150,9 +154,15 @@ class PackageSetBase(kobo.log.LoggingBase):
         """

         def nvr_formatter(package_info):
-            # joins NVR parts of the package with '-' character.
-            return "-".join(
-                (package_info["name"], package_info["version"], package_info["release"])
-            )
+            epoch_suffix = ''
+            if package_info['epoch'] is not None:
+                epoch_suffix = ':' + package_info['epoch']
+            return (
+                f"{package_info['name']}"
+                f"{epoch_suffix}-"
+                f"{package_info['version']}-"
+                f"{package_info['release']}."
+                f"{package_info['arch']}"
+            )
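Tracing the new formatter with made-up package_info values shows the epoch handling:

# {'name': 'bash', 'epoch': '1', 'version': '5.2.15',
#  'release': '1.el9', 'arch': 'x86_64'}  ->  'bash:1-5.2.15-1.el9.x86_64'
# With 'epoch': None the suffix is dropped ->  'bash-5.2.15-1.el9.x86_64'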
         def get_error(sigkeys, infos):
@@ -203,16 +213,31 @@ class PackageSetBase(kobo.log.LoggingBase):

         return self.rpms_by_arch

-    def subset(self, primary_arch, arch_list, exclusive_noarch=True):
+    def subset(
+        self, primary_arch, arch_list, exclusive_noarch=True, inherit_to_noarch=True
+    ):
         """Create a subset of this package set that only includes
         packages compatible with"""
         pkgset = PackageSetBase(
             self.name, self.sigkey_ordering, logger=self._logger, arches=arch_list
         )
-        pkgset.merge(self, primary_arch, arch_list, exclusive_noarch=exclusive_noarch)
+        pkgset.merge(
+            self,
+            primary_arch,
+            arch_list,
+            exclusive_noarch=exclusive_noarch,
+            inherit_to_noarch=inherit_to_noarch,
+        )
         return pkgset

-    def merge(self, other, primary_arch, arch_list, exclusive_noarch=True):
+    def merge(
+        self,
+        other,
+        primary_arch,
+        arch_list,
+        exclusive_noarch=True,
+        inherit_to_noarch=True,
+    ):
         """
         Merge ``other`` package set into this instance.
         """
@@ -251,7 +276,7 @@ class PackageSetBase(kobo.log.LoggingBase):
             if i.file_path in self.file_cache:
                 # TODO: test if it really works
                 continue
-            if exclusivearch_list and arch == "noarch":
+            if inherit_to_noarch and exclusivearch_list and arch == "noarch":
                 if is_excluded(i, exclusivearch_list, logger=self._logger):
                     continue

@@ -318,6 +343,11 @@ class FilelistPackageSet(PackageSetBase):
         return result


+# This is a marker to indicate a package set with only extra builds/tasks
+# and no Koji tag.
+MISSING_KOJI_TAG = object()
+
+
 class KojiPackageSet(PackageSetBase):
     def __init__(
         self,
@@ -334,6 +364,7 @@ class KojiPackageSet(PackageSetBase):
         extra_tasks=None,
         signed_packages_retries=0,
         signed_packages_wait=30,
+        downloader=None,
     ):
         """
         Creates new KojiPackageSet.
@@ -371,7 +402,7 @@ class KojiPackageSet(PackageSetBase):
         :param int signed_packages_wait: How long to wait between search attempts.
         """
         super(KojiPackageSet, self).__init__(
-            name,
+            name if name != MISSING_KOJI_TAG else "no-tag",
             sigkey_ordering=sigkey_ordering,
             arches=arches,
             logger=logger,
@@ -388,6 +419,8 @@ class KojiPackageSet(PackageSetBase):
         self.signed_packages_retries = signed_packages_retries
         self.signed_packages_wait = signed_packages_wait

+        self.downloader = downloader
+
     def __getstate__(self):
         result = self.__dict__.copy()
         del result["koji_wrapper"]
@@ -478,7 +511,8 @@ class KojiPackageSet(PackageSetBase):
         response = None
         if self.cache_region:
-            cache_key = "KojiPackageSet.get_latest_rpms_%s_%s_%s" % (
+            cache_key = "%s.get_latest_rpms_%s_%s_%s" % (
+                str(self.__class__.__name__),
                 str(tag),
                 str(event),
                 str(inherit),
return response return response
def get_package_path(self, queue_item): def get_package_path(self, queue_item):
rpm_info, build_info = queue_item rpm_info, build_info = queue_item
# Check if this RPM is coming from scratch task. In this case, we already # Check if this RPM is coming from scratch task. In this case, we already
# know the path. # know the path.
if "path_from_task" in rpm_info: if "path_from_task" in rpm_info:
return rpm_info["path_from_task"] return self.downloader.get_file(rpm_info["path_from_task"])
pathinfo = self.koji_wrapper.koji_module.pathinfo pathinfo = self.koji_wrapper.koji_module.pathinfo
paths = [] paths = []
if "getRPMChecksums" in self.koji_proxy.system.listMethods():
def checksum_validator(keyname, pkg_path):
checksums = self.koji_proxy.getRPMChecksums(
rpm_info["id"], checksum_types=("sha256",)
)
if "sha256" in checksums.get(keyname, {}):
computed = compute_file_checksums(pkg_path, ("sha256",))
if computed["sha256"] != checksums[keyname]["sha256"]:
raise RuntimeError("Checksum mismatch for %s" % pkg_path)
else:
def checksum_validator(keyname, pkg_path):
# Koji doesn't support checksums yet
pass
attempts_left = self.signed_packages_retries + 1 attempts_left = self.signed_packages_retries + 1
while attempts_left > 0: while attempts_left > 0:
for sigkey in self.sigkey_ordering: for sigkey in self.sigkey_ordering:
@ -523,8 +576,11 @@ class KojiPackageSet(PackageSetBase):
) )
if rpm_path not in paths: if rpm_path not in paths:
paths.append(rpm_path) paths.append(rpm_path)
if os.path.isfile(rpm_path): path = self.downloader.get_file(
return rpm_path rpm_path, partial(checksum_validator, sigkey)
)
if path:
return path
# No signed copy was found, wait a little and try again. # No signed copy was found, wait a little and try again.
attempts_left -= 1 attempts_left -= 1
@ -537,16 +593,18 @@ class KojiPackageSet(PackageSetBase):
# use an unsigned copy (if allowed) # use an unsigned copy (if allowed)
rpm_path = os.path.join(pathinfo.build(build_info), pathinfo.rpm(rpm_info)) rpm_path = os.path.join(pathinfo.build(build_info), pathinfo.rpm(rpm_info))
paths.append(rpm_path) paths.append(rpm_path)
if os.path.isfile(rpm_path): path = self.downloader.get_file(rpm_path, partial(checksum_validator, ""))
return rpm_path if path:
return path
if self._allow_invalid_sigkeys and rpm_info["name"] not in self.packages: if self._allow_invalid_sigkeys and rpm_info["name"] not in self.packages:
# use an unsigned copy (if allowed) # use an unsigned copy (if allowed)
rpm_path = os.path.join(pathinfo.build(build_info), pathinfo.rpm(rpm_info)) rpm_path = os.path.join(pathinfo.build(build_info), pathinfo.rpm(rpm_info))
paths.append(rpm_path) paths.append(rpm_path)
if os.path.isfile(rpm_path): path = self.downloader.get_file(rpm_path)
if path:
self._invalid_sigkey_rpms.append(rpm_info) self._invalid_sigkey_rpms.append(rpm_info)
return rpm_path return path
self._invalid_sigkey_rpms.append(rpm_info) self._invalid_sigkey_rpms.append(rpm_info)
self.log_error( self.log_error(
@@ -567,7 +625,7 @@ class KojiPackageSet(PackageSetBase):
         result_srpms = []
         include_packages = set(include_packages or [])

-        if type(event) is dict:
+        if isinstance(event, dict):
             event = event["id"]

         msg = "Getting latest RPMs (tag: %s, event: %s, inherit: %s)" % (
@@ -576,6 +634,8 @@ class KojiPackageSet(PackageSetBase):
             inherit,
         )
         self.log_info("[BEGIN] %s" % msg)
-        rpms, builds = self.get_latest_rpms(tag, event, inherit=inherit)
+        rpms, builds = [], []
+        if tag != MISSING_KOJI_TAG:
+            rpms, builds = self.get_latest_rpms(tag, event, inherit=inherit)
         extra_rpms, extra_builds = self.get_extra_rpms()
         rpms += extra_rpms
@@ -681,6 +741,15 @@ class KojiPackageSet(PackageSetBase):
         :param include_packages: an iterable of tuples (package name, arch) that should
             be included.
         """
+        if len(self.sigkey_ordering) > 1 and (
+            None in self.sigkey_ordering or "" in self.sigkey_ordering
+        ):
+            self.log_warning(
+                "Stop writing reuse file as unsigned packages are allowed "
+                "in the compose."
+            )
+            return
+
         reuse_file = compose.paths.work.pkgset_reuse_file(self.name)
         self.log_info("Writing pkgset reuse file: %s" % reuse_file)
         try:
@@ -697,6 +766,13 @@ class KojiPackageSet(PackageSetBase):
                     "srpms_by_name": self.srpms_by_name,
                     "extra_builds": self.extra_builds,
                     "include_packages": include_packages,
+                    "inherit_to_noarch": compose.conf[
+                        "pkgset_inherit_exclusive_arch_to_noarch"
+                    ],
+                    "exclusive_noarch": compose.conf[
+                        "pkgset_exclusive_arch_considers_noarch"
+                    ],
+                    "module_defaults_dir": compose.conf.get("module_defaults_dir"),
                 },
                 f,
                 protocol=pickle.HIGHEST_PROTOCOL,
@@ -791,6 +867,9 @@ class KojiPackageSet(PackageSetBase):
             self.log_debug("Failed to load reuse file: %s" % str(e))
             return False

+        inherit_to_noarch = compose.conf["pkgset_inherit_exclusive_arch_to_noarch"]
+        exclusive_noarch = compose.conf["pkgset_exclusive_arch_considers_noarch"]
+        module_defaults_dir = compose.conf.get("module_defaults_dir")
         if (
             reuse_data["allow_invalid_sigkeys"] == self._allow_invalid_sigkeys
             and reuse_data["packages"] == self.packages
@@ -798,6 +877,11 @@ class KojiPackageSet(PackageSetBase):
             and reuse_data["extra_builds"] == self.extra_builds
             and reuse_data["sigkeys"] == self.sigkey_ordering
             and reuse_data["include_packages"] == include_packages
+            # If the value is not present in reuse data, the compose was
+            # generated with older version of Pungi. Best to not reuse.
+            and reuse_data.get("inherit_to_noarch") == inherit_to_noarch
+            and reuse_data.get("exclusive_noarch") == exclusive_noarch
+            and reuse_data.get("module_defaults_dir") == module_defaults_dir
         ):
             self.log_info("Copying repo data for reuse: %s" % old_repo_dir)
             copy_all(old_repo_dir, repo_dir)
@@ -812,6 +896,67 @@ class KojiPackageSet(PackageSetBase):
        return False


class KojiMockPackageSet(KojiPackageSet):
    def _is_rpm_signed(self, rpm_path) -> bool:
        ts = rpm.TransactionSet()
        ts.setVSFlags(rpm._RPMVSF_NOSIGNATURES)
        sigkeys = [
            sigkey.lower() for sigkey in self.sigkey_ordering
            if sigkey is not None
        ]
        if not sigkeys:
            return True
        with open(rpm_path, 'rb') as fd:
            header = ts.hdrFromFdno(fd)
            signature = header[rpm.RPMTAG_SIGGPG] or header[rpm.RPMTAG_SIGPGP]
        if signature is None:
            return False
        pgp_msg = pgpy.PGPMessage.from_blob(signature)
        return any(
            signature.signer.lower() in sigkeys
            for signature in pgp_msg.signatures
        )

    def get_package_path(self, queue_item):
        rpm_info, build_info = queue_item

        # Check if this RPM is coming from scratch task.
        # In this case, we already know the path.
        if "path_from_task" in rpm_info:
            return rpm_info["path_from_task"]

        # This replaces the upstream logic: Pungi guesses the package path
        # on Koji based on the sigkey, but we don't need that because all
        # our packages are ready for release. Signature verification is
        # still done during dependency resolution.
        pathinfo = self.koji_wrapper.koji_module.pathinfo
        rpm_path = os.path.join(pathinfo.topdir, pathinfo.rpm(rpm_info))
        if os.path.isfile(rpm_path):
            if not self._is_rpm_signed(rpm_path):
                self._invalid_sigkey_rpms.append(rpm_info)
                self.log_error(
                    'RPM "%s" not found for sigs: "%s". Path checked: "%s"',
                    rpm_info, self.sigkey_ordering, rpm_path
                )
                return
            return rpm_path
        else:
            self.log_warning("RPM %s not found" % rpm_path)
        return None

    def populate(self, tag, event=None, inherit=True, include_packages=None):
        result = super().populate(
            tag=tag,
            event=event,
            inherit=inherit,
            include_packages=include_packages,
        )
        return result


def _is_src(rpm_info):
    """Check if rpm info object returned by Koji refers to source packages."""
    return rpm_info["arch"] in ("src", "nosrc")

View File

@@ -15,8 +15,10 @@
 from .source_koji import PkgsetSourceKoji
 from .source_repos import PkgsetSourceRepos
+from .source_kojimock import PkgsetSourceKojiMock

 ALL_SOURCES = {
     "koji": PkgsetSourceKoji,
     "repos": PkgsetSourceRepos,
+    "kojimock": PkgsetSourceKojiMock,
 }

View File

@@ -193,17 +193,13 @@ class PkgsetSourceKoji(pungi.phases.pkgset.source.PkgsetSourceBase):
     def __call__(self):
         compose = self.compose
         self.koji_wrapper = pungi.wrappers.kojiwrapper.KojiWrapper(compose)
-        # path prefix must contain trailing '/'
-        path_prefix = self.koji_wrapper.koji_module.config.topdir.rstrip("/") + "/"
-        package_sets = get_pkgset_from_koji(
-            self.compose, self.koji_wrapper, path_prefix
-        )
-        return (package_sets, path_prefix)
+        package_sets = get_pkgset_from_koji(self.compose, self.koji_wrapper)
+        return (package_sets, self.compose.koji_downloader.path_prefix)


-def get_pkgset_from_koji(compose, koji_wrapper, path_prefix):
+def get_pkgset_from_koji(compose, koji_wrapper):
     event_info = get_koji_event_info(compose, koji_wrapper)
-    return populate_global_pkgset(compose, koji_wrapper, path_prefix, event_info)
+    return populate_global_pkgset(compose, koji_wrapper, event_info)
 def _add_module_to_variant(
@@ -226,20 +222,23 @@ def _add_module_to_variant(
     """
     mmds = {}
     archives = koji_wrapper.koji_proxy.listArchives(build["id"])
+    available_arches = set()
     for archive in archives:
         if archive["btype"] != "module":
             # Skip non module archives
             continue
         typedir = koji_wrapper.koji_module.pathinfo.typedir(build, archive["btype"])
         filename = archive["filename"]
-        file_path = os.path.join(typedir, filename)
+        file_path = compose.koji_downloader.get_file(os.path.join(typedir, filename))
         try:
             # If there are two dots, the arch is in the middle. MBS uploads
             # files with actual architecture in the filename, but Pungi deals
             # in basearch. This assumes that each arch in the build maps to a
             # unique basearch.
             _, arch, _ = filename.split(".")
-            filename = "modulemd.%s.txt" % getBaseArch(arch)
+            basearch = getBaseArch(arch)
+            filename = "modulemd.%s.txt" % basearch
+            available_arches.add(basearch)
         except ValueError:
             pass
         mmds[filename] = file_path
@@ -264,15 +263,26 @@ def _add_module_to_variant(
             compose.log_debug("Module %s is filtered from %s.%s", nsvc, variant, arch)
             continue

+        if arch not in available_arches:
+            compose.log_debug(
+                "Module %s is not available for arch %s.%s", nsvc, variant, arch
+            )
+            continue
+
         filename = "modulemd.%s.txt" % arch
         if filename not in mmds:
             raise RuntimeError(
                 "Module %s does not have metadata for arch %s and is not filtered "
                 "out via filter_modules option." % (nsvc, arch)
             )
-        mod_stream = read_single_module_stream_from_file(
-            mmds[filename], compose, arch, build
-        )
+        try:
+            mod_stream = read_single_module_stream_from_file(
+                mmds[filename], compose, arch, build
+            )
+        except Exception as exc:
+            # libmodulemd raises various GLib exceptions with not very helpful
+            # messages. Let's replace it with something more useful.
+            raise RuntimeError("Failed to read %s: %s" % (mmds[filename], str(exc)))
         if mod_stream:
             added = True
             variant.arch_mmds.setdefault(arch, {})[nsvc] = mod_stream
@@ -395,7 +405,13 @@ def _is_filtered_out(compose, variant, arch, module_name, module_stream):

 def _get_modules_from_koji(
-    compose, koji_wrapper, event, variant, variant_tags, tag_to_mmd, exclude_module_ns
+    compose,
+    koji_wrapper,
+    event,
+    variant,
+    variant_tags,
+    tag_to_mmd,
+    exclude_module_ns,
 ):
     """
     Loads modules for given `variant` from koji `session`, adds them to
@@ -480,7 +496,16 @@ def filter_inherited(koji_proxy, event, module_builds, top_tag):

         # And keep only builds from that topmost tag
         result.extend(build for build in builds if build["tag_name"] == tag)

-    return result
+    # If the same module was inherited multiple times, it will be in result
+    # multiple times. We need to deduplicate.
+    deduplicated_result = []
+    included_nvrs = set()
+    for build in result:
+        if build["nvr"] not in included_nvrs:
+            deduplicated_result.append(build)
+            included_nvrs.add(build["nvr"])
+
+    return deduplicated_result


 def filter_by_whitelist(compose, module_builds, input_modules, expected_modules):
@@ -670,7 +695,7 @@ def _get_modules_from_koji_tags(
     )


-def populate_global_pkgset(compose, koji_wrapper, path_prefix, event):
+def populate_global_pkgset(compose, koji_wrapper, event):
     all_arches = get_all_arches(compose)

     # List of compose tags from which we create this compose
@@ -764,7 +789,12 @@ def populate_global_pkgset(compose, koji_wrapper, event):
         if extra_modules:
             _add_extra_modules_to_variant(
-                compose, koji_wrapper, variant, extra_modules, variant_tags, tag_to_mmd
+                compose,
+                koji_wrapper,
+                variant,
+                extra_modules,
+                variant_tags,
+                tag_to_mmd,
             )

         variant_scratch_modules = get_variant_data(
@@ -791,17 +821,23 @@ def populate_global_pkgset(compose, koji_wrapper, event):

     pkgsets = []

+    extra_builds = force_list(compose.conf.get("pkgset_koji_builds", []))
+    extra_tasks = force_list(compose.conf.get("pkgset_koji_scratch_tasks", []))
+
+    if not pkgset_koji_tags and (extra_builds or extra_tasks):
+        # We have extra packages to pull in, but no tag to merge them with.
+        compose_tags.append(pungi.phases.pkgset.pkgsets.MISSING_KOJI_TAG)
+        pkgset_koji_tags.append(pungi.phases.pkgset.pkgsets.MISSING_KOJI_TAG)
+
     # Get package set for each compose tag and merge it to global package
     # list. Also prepare per-variant pkgset, because we do not have list
     # of binary RPMs in module definition - there is just list of SRPMs.
     for compose_tag in compose_tags:
         compose.log_info("Loading package set for tag %s", compose_tag)
+        kwargs = {}
         if compose_tag in pkgset_koji_tags:
-            extra_builds = force_list(compose.conf.get("pkgset_koji_builds", []))
-            extra_tasks = force_list(compose.conf.get("pkgset_koji_scratch_tasks", []))
-        else:
-            extra_builds = []
-            extra_tasks = []
+            kwargs["extra_builds"] = extra_builds
+            kwargs["extra_tasks"] = extra_tasks

         pkgset = pungi.phases.pkgset.pkgsets.KojiPackageSet(
             compose_tag,
@@ -813,10 +849,10 @@ def populate_global_pkgset(compose, koji_wrapper, event):
             allow_invalid_sigkeys=allow_invalid_sigkeys,
             populate_only_packages=populate_only_packages_to_gather,
             cache_region=compose.cache_region,
-            extra_builds=extra_builds,
-            extra_tasks=extra_tasks,
             signed_packages_retries=compose.conf["signed_packages_retries"],
             signed_packages_wait=compose.conf["signed_packages_wait"],
+            downloader=compose.koji_downloader,
+            **kwargs
         )

         # Check if we have cache for this tag from previous compose. If so, use
@@ -874,13 +910,18 @@ def populate_global_pkgset(compose, koji_wrapper, event):
         if pkgset.reuse is None:
             pkgset.populate(
                 compose_tag,
-                event,
+                # We care about packages as they existed on the specified
+                # event. However, modular content tags are not expected to
+                # change, so the event doesn't matter there. If an exact NSVC
+                # of a module is specified, the code above would happily find
+                # its content tag, but fail here if the content tag doesn't
+                # exist at the given event.
+                event=event if is_traditional else None,
                 inherit=should_inherit,
                 include_packages=modular_packages,
             )
         for variant in compose.all_variants.values():
             if compose_tag in variant_tags[variant]:
                 # If it's a modular tag, store the package set for the module.
                 for nsvc, koji_tag in variant.module_uid_to_koji_tag.items():
                     if compose_tag == koji_tag:
@@ -903,7 +944,7 @@ def populate_global_pkgset(compose, koji_wrapper, event):
                 MaterializedPackageSet.create,
                 compose,
                 pkgset,
-                path_prefix,
+                compose.koji_downloader.path_prefix,
                 mmd=tag_to_mmd.get(pkgset.name),
             )
         )

File diff suppressed because it is too large

View File

@@ -13,13 +13,19 @@
 # You should have received a copy of the GNU General Public License
 # along with this program; if not, see <https://gnu.org/licenses/>.

+import contextlib
 import os
 import re
+import shutil
+import tarfile
+
+import requests
 import six
 from six.moves import shlex_quote

 import kobo.log
 from kobo.shortcuts import run

+from pungi import util
 from pungi.wrappers import kojiwrapper

@@ -94,7 +100,7 @@ class Runroot(kobo.log.LoggingBase):
         log_file = os.path.join(log_dir, "program.log")
         try:
             with open(log_file) as f:
-                for line in f:
+                for line in f.readlines():
                     if "losetup: cannot find an unused loop device" in line:
                         return True
                     if re.match("losetup: .* failed to set up loop device", line):
@@ -230,9 +236,9 @@ class Runroot(kobo.log.LoggingBase):
             fmt_dict["runroot_key"] = runroot_key
         self._ssh_run(hostname, user, run_template, fmt_dict, log_file=log_file)

-        fmt_dict[
-            "command"
-        ] = "rpm -qa --qf='%{name}-%{version}-%{release}.%{arch}\n'"
+        fmt_dict["command"] = (
+            "rpm -qa --qf='%{name}-%{version}-%{release}.%{arch}\n'"
+        )
         buildroot_rpms = self._ssh_run(
             hostname,
             user,
@@ -314,7 +320,8 @@ class Runroot(kobo.log.LoggingBase):
             arch,
             args,
             channel=runroot_channel,
-            chown_uid=os.getuid(),
+            # We want to change owner only if shared NFS directory is used.
+            chown_uid=os.getuid() if kwargs.get("mounts") else None,
             **kwargs
         )

@@ -325,6 +332,7 @@ class Runroot(kobo.log.LoggingBase):
                 % (output["task_id"], log_file)
             )
         self._result = output
+        return output["task_id"]

     def run_pungi_ostree(self, args, log_file=None, arch=None, **kwargs):
         """
@@ -381,3 +389,72 @@ class Runroot(kobo.log.LoggingBase):
             return self._result
         else:
             raise ValueError("Unknown runroot_method %r." % self.runroot_method)
@util.retry(wait_on=requests.exceptions.RequestException)
def _download_file(url, dest):
    # contextlib.closing is only needed in requests<2.18
    with contextlib.closing(requests.get(url, stream=True, timeout=5)) as r:
        if r.status_code == 404:
            raise RuntimeError("Archive %s not found" % url)
        r.raise_for_status()
        with open(dest, "wb") as f:
            shutil.copyfileobj(r.raw, f)


def _download_archive(task_id, fname, archive_url, dest_dir):
    """Download file from URL to a destination, with retries."""
    temp_file = os.path.join(dest_dir, fname)
    _download_file(archive_url, temp_file)
    return temp_file


def _extract_archive(task_id, fname, archive_file, dest_path):
    """Extract the archive into given destination.

    All items of the archive must match the name of the archive, i.e. all
    paths in foo.tar.gz must start with foo/.
    """
    basename = os.path.basename(fname).split(".")[0]
    strip_prefix = basename + "/"
    with tarfile.open(archive_file, "r") as archive:
        for member in archive.getmembers():
            # Check if each item is either the root directory or is within it.
            if member.name != basename and not member.name.startswith(strip_prefix):
                raise RuntimeError(
                    "Archive %s from task %s contains file without expected prefix: %s"
                    % (fname, task_id, member)
                )
            dest = os.path.join(dest_path, member.name[len(strip_prefix) :])
            if member.isdir():
                # Create directories where needed...
                util.makedirs(dest)
            elif member.isfile():
                # ... and extract files into them.
                with open(dest, "wb") as dest_obj:
                    shutil.copyfileobj(archive.extractfile(member), dest_obj)
            elif member.islnk():
                # We have a hardlink. Let's also link it.
                linked_file = os.path.join(
                    dest_path, member.linkname[len(strip_prefix) :]
                )
                os.link(linked_file, dest)
            else:
                # Any other file type is an error.
                raise RuntimeError(
                    "Unexpected file type in %s from task %s: %s"
                    % (fname, task_id, member)
                )


def download_and_extract_archive(compose, task_id, fname, destination):
    """Download a tar archive from task outputs and extract it to the destination."""
    koji = kojiwrapper.KojiWrapper(compose).koji_module
    # Koji API provides downloadTaskOutput method, but it's not usable as it
    # will attempt to load the entire file into memory.
    # So instead let's generate a path and attempt to convert it to a URL.
    server_path = os.path.join(koji.pathinfo.task(task_id), fname)
    archive_url = server_path.replace(koji.config.topdir, koji.config.topurl)
    with util.temp_dir(prefix="buildinstall-download") as tmp_dir:
        local_path = _download_archive(task_id, fname, archive_url, tmp_dir)
        _extract_archive(task_id, fname, local_path, destination)
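A hypothetical call, e.g. from a phase that needs a task's tarball output (task id, file name and destination are illustrative, not from the source):

# download_and_extract_archive(
#     compose, 987654, "buildinstall-output.tar.gz",
#     "/compose/work/x86_64/buildinstall"
# )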

View File

@@ -0,0 +1,63 @@
import argparse
import os
import re
import time

from pungi.util import format_size

LOCK_RE = re.compile(r".*\.lock(\|[A-Za-z0-9]+)*$")


def should_be_cleaned_up(path, st, threshold):
    if st.st_nlink == 1 and st.st_mtime < threshold:
        # No other instances, older than limit
        return True

    if LOCK_RE.match(path) and st.st_mtime < threshold:
        # Suspiciously old lock
        return True

    return False
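For illustration, paths that LOCK_RE would treat as locks (names made up):

#   /var/cache/pungi/pkg.rpm.lock
#   /var/cache/pungi/pkg.rpm.lock|host1|4f3a
# Anything else is only removed once its hardlink count drops to 1
# and it is older than the threshold.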
def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("CACHE_DIR")
    parser.add_argument("-n", "--dry-run", action="store_true")
    parser.add_argument("--verbose", action="store_true")
    parser.add_argument(
        "--max-age",
        help="how old files should be considered for deletion",
        default=7,
        type=int,
    )
    args = parser.parse_args()

    topdir = os.path.abspath(args.CACHE_DIR)
    max_age = args.max_age * 24 * 3600

    cleaned_up = 0

    threshold = time.time() - max_age
    for dirpath, dirnames, filenames in os.walk(topdir):
        for f in filenames:
            filepath = os.path.join(dirpath, f)
            st = os.stat(filepath)
            if should_be_cleaned_up(filepath, st, threshold):
                if args.verbose:
                    print("RM %s" % filepath)
                cleaned_up += st.st_size
                if not args.dry_run:
                    os.remove(filepath)
        if not dirnames and not filenames:
            if args.verbose:
                print("RMDIR %s" % dirpath)
            if not args.dry_run:
                os.rmdir(dirpath)

    if args.dry_run:
        print("Would reclaim %s bytes." % format_size(cleaned_up))
    else:
        print("Reclaimed %s bytes." % format_size(cleaned_up))

View File

@@ -171,32 +171,11 @@ def main():
     group.add_argument(
         "--offline", action="store_true", help="Do not resolve git references."
     )
-    parser.add_argument(
-        "--multi",
-        metavar="DIR",
-        help=(
-            "Treat source as config for pungi-orchestrate and store dump into "
-            "given directory."
-        ),
-    )

     args = parser.parse_args()

     defines = config_utils.extract_defines(args.define)

-    if args.multi:
-        if len(args.sources) > 1:
-            parser.error("Only one multi config can be specified.")
-
-        return dump_multi_config(
-            args.sources[0],
-            dest=args.multi,
-            defines=defines,
-            just_dump=args.just_dump,
-            event=args.freeze_event,
-            offline=args.offline,
-        )
-
     return process_file(
         args.sources,
         defines=defines,
View File

@@ -128,7 +128,6 @@ def run(config, topdir, has_old, offline, defined_variables, schema_overrides):
         pungi.phases.OSTreePhase(compose),
         pungi.phases.CreateisoPhase(compose, buildinstall_phase),
         pungi.phases.ExtraIsosPhase(compose, buildinstall_phase),
-        pungi.phases.LiveImagesPhase(compose),
         pungi.phases.LiveMediaPhase(compose),
         pungi.phases.ImageBuildPhase(compose),
         pungi.phases.ImageChecksumPhase(compose),

View File

@@ -0,0 +1,441 @@
# coding=utf-8
import argparse
import os
import subprocess
import tempfile
from shutil import rmtree
from typing import (
AnyStr,
List,
Dict,
Optional,
)
import createrepo_c as cr
import requests
import yaml
from dataclasses import dataclass, field
from .create_packages_json import (
PackagesGenerator,
RepoInfo,
VariantInfo,
)
@dataclass
class ExtraVariantInfo(VariantInfo):
modules: List[AnyStr] = field(default_factory=list)
packages: List[AnyStr] = field(default_factory=list)
class CreateExtraRepo(PackagesGenerator):
def __init__(
self,
variants: List[ExtraVariantInfo],
bs_auth_token: AnyStr,
local_repository_path: AnyStr,
clear_target_repo: bool = True,
):
self.variants = [] # type: List[ExtraVariantInfo]
super().__init__(variants, [], [])
self.auth_headers = {
'Authorization': f'Bearer {bs_auth_token}',
}
# modules data of modules.yaml.gz from an existing local repo
self.local_modules_data = []
self.local_repository_path = local_repository_path
# path to modules.yaml, which generated by the class
self.default_modules_yaml_path = os.path.join(
local_repository_path,
'modules.yaml',
)
if clear_target_repo:
if os.path.exists(self.local_repository_path):
rmtree(self.local_repository_path)
os.makedirs(self.local_repository_path, exist_ok=True)
else:
self._read_local_modules_yaml()
def _read_local_modules_yaml(self):
"""
Read modules data from an existing local repo
"""
repomd_file_path = os.path.join(
self.local_repository_path,
'repodata',
'repomd.xml',
)
repomd_object = self._parse_repomd(repomd_file_path)
for repomd_record in repomd_object.records:
if repomd_record.type != 'modules':
continue
modules_yaml_path = os.path.join(
self.local_repository_path,
repomd_record.location_href,
)
self.local_modules_data = list(self._parse_modules_file(
modules_yaml_path,
))
break
def _dump_local_modules_yaml(self):
"""
Dump merged modules data to a local repo
"""
if self.local_modules_data:
with open(self.default_modules_yaml_path, 'w') as yaml_file:
yaml.dump_all(
self.local_modules_data,
yaml_file,
)
@staticmethod
def get_repo_info_from_bs_repo(
auth_token: AnyStr,
build_id: AnyStr,
arch: AnyStr,
packages: Optional[List[AnyStr]] = None,
modules: Optional[List[AnyStr]] = None,
) -> List[ExtraVariantInfo]:
"""
Get info about a BS repo and save it to
an object of class ExtraVariantInfo
:param auth_token: Auth token to Build System
:param build_id: ID of a build from BS
:param arch: an architecture of repo which will be used
:param packages: list of names of packages which will be put into a
local repo from a BS repo
:param modules: list of names of modules which will be put into a
local repo from a BS repo
:return: list of ExtraVariantInfo with info about the BS repos
"""
bs_url = 'https://build.cloudlinux.com'
api_uri = 'api/v1'
bs_repo_suffix = 'build_repos'
variants_info = []
# get the full info about a BS repo
repo_request = requests.get(
url=os.path.join(
bs_url,
api_uri,
'builds',
build_id,
),
headers={
'Authorization': f'Bearer {auth_token}',
},
)
repo_request.raise_for_status()
result = repo_request.json()
for build_platform in result['build_platforms']:
platform_name = build_platform['name']
for architecture in build_platform['architectures']:
# skip repo with unsuitable architecture
if architecture != arch:
continue
variant_info = ExtraVariantInfo(
name=f'{build_id}-{platform_name}-{architecture}',
arch=architecture,
packages=packages,
modules=modules,
repos=[
RepoInfo(
path=os.path.join(
bs_url,
bs_repo_suffix,
build_id,
platform_name,
),
folder=architecture,
is_remote=True,
)
]
)
variants_info.append(variant_info)
return variants_info
def _create_local_extra_repo(self):
"""
Call `createrepo_c <path_to_repo>` for creating a local repo
"""
subprocess.call(
f'createrepo_c {self.local_repository_path}',
shell=True,
)
# remove an unnecessary temporary modules.yaml
if os.path.exists(self.default_modules_yaml_path):
os.remove(self.default_modules_yaml_path)
def get_remote_file_content(
self,
file_url: AnyStr,
) -> AnyStr:
"""
Get content from a remote file and write it to a temp file
:param file_url: url of a remote file
:return: path to a temp file
"""
file_request = requests.get(
url=file_url,
# for the case when we get a file from BS
headers=self.auth_headers,
)
file_request.raise_for_status()
with tempfile.NamedTemporaryFile(delete=False) as file_stream:
file_stream.write(file_request.content)
return file_stream.name
def _download_rpm_to_local_repo(
self,
package_location: AnyStr,
repo_info: RepoInfo,
) -> None:
"""
Download a rpm package from a remote repo and save it to a local repo
:param package_location: relative uri of a package in a remote repo
:param repo_info: info about a remote repo which contains a specific
rpm package
"""
rpm_package_remote_path = os.path.join(
repo_info.path,
repo_info.folder,
package_location,
)
rpm_package_local_path = os.path.join(
self.local_repository_path,
os.path.basename(package_location),
)
rpm_request = requests.get(
url=rpm_package_remote_path,
headers=self.auth_headers,
)
rpm_request.raise_for_status()
with open(rpm_package_local_path, 'wb') as rpm_file:
rpm_file.write(rpm_request.content)
def _download_packages(
self,
packages: Dict[AnyStr, cr.Package],
variant_info: ExtraVariantInfo
):
"""
Download all defined packages from a remote repo
:param packages: information about all packages (including
modularity) in a remote repo
:param variant_info: information about a remote variant
"""
for package in packages.values():
package_name = package.name
# Skip the current package from a remote repo if we defined
# the list of packages and the current package doesn't belong to it
if variant_info.packages and \
package_name not in variant_info.packages:
continue
for repo_info in variant_info.repos:
self._download_rpm_to_local_repo(
package_location=package.location_href,
repo_info=repo_info,
)
def _download_modules(
self,
modules_data: List[Dict],
variant_info: ExtraVariantInfo,
packages: Dict[AnyStr, cr.Package]
):
"""
Download all defined modularity packages and their data from
a remote repo
:param modules_data: information about all modules in a remote repo
:param variant_info: information about a remote variant
:param packages: information about all packages (including
modularity) in a remote repo
"""
for module in modules_data:
module_data = module['data']
# Skip the current module from a remote repo if we defined
# the list of modules and the current module doesn't belong to it
if variant_info.modules and \
module_data['name'] not in variant_info.modules:
continue
            # add info about the module if the local repodata
            # doesn't have it yet
if module not in self.local_modules_data:
self.local_modules_data.append(module)
            # skip a module record if it has no rpm artifacts
if module['document'] != 'modulemd' or \
'artifacts' not in module_data or \
'rpms' not in module_data['artifacts']:
continue
for rpm in module['data']['artifacts']['rpms']:
                # An empty variant_info.packages means that we download
                # all packages from the repo, including
                # the modular packages
if not variant_info.packages:
break
                # skip an rpm that doesn't belong to the repo being processed
if rpm not in packages:
continue
for repo_info in variant_info.repos:
self._download_rpm_to_local_repo(
package_location=packages[rpm].location_href,
repo_info=repo_info,
)
def create_extra_repo(self):
"""
        1. Get the specific (or all) packages/modules from the remote repos
        2. Save them to a local repo
        3. Save info about the modules to the local repo
        4. Call `createrepo_c`, which creates a local repo
           with the right repodata
"""
for variant_info in self.variants:
for repo_info in variant_info.repos:
repomd_records = self._get_repomd_records(
repo_info=repo_info,
)
packages_iterator = self.get_packages_iterator(repo_info)
# parse the repodata (including modules.yaml.gz)
modules_data = self._parse_module_repomd_record(
repo_info=repo_info,
repomd_records=repomd_records,
)
# convert the packages dict to more usable form
# for future checking that a rpm from the module's artifacts
# belongs to a processed repository
packages = {
f'{package.name}-{package.epoch}:{package.version}-'
f'{package.release}.{package.arch}':
package for package in packages_iterator
}
self._download_modules(
modules_data=modules_data,
variant_info=variant_info,
packages=packages,
)
self._download_packages(
packages=packages,
variant_info=variant_info,
)
self._dump_local_modules_yaml()
self._create_local_extra_repo()
def create_parser():
parser = argparse.ArgumentParser()
parser.add_argument(
'--bs-auth-token',
help='Auth token for Build System',
)
parser.add_argument(
'--local-repo-path',
help='Path to a local repo. E.g. /var/repo/test_repo',
required=True,
)
parser.add_argument(
'--clear-local-repo',
        help='Clear the local repo before creating a new one',
action='store_true',
default=False,
)
parser.add_argument(
'--repo',
action='append',
        help='Path to a folder with repo folders, or a Build System '
             'build id. E.g. '
             '"http://koji.cloudlinux.com/mirrors/rhel_mirror" or '
             '"601809b3c2f5b0e458b14cd3"',
required=True,
)
parser.add_argument(
'--repo-folder',
action='append',
        help='A folder which contains a repodata folder. E.g. "baseos-stream"',
required=True,
)
parser.add_argument(
'--repo-arch',
action='append',
        help='Architecture of the packages in a repository. E.g. "x86_64"',
required=True,
)
parser.add_argument(
'--packages',
action='append',
type=str,
default=[],
        help='A space-separated list of package names to download to the '
             'local extra repo. All packages are downloaded if it is empty',
required=True,
)
parser.add_argument(
'--modules',
action='append',
type=str,
default=[],
        help='A space-separated list of module names to download to the '
             'local extra repo. All modules are downloaded if it is empty',
required=True,
)
return parser
def cli_main():
args = create_parser().parse_args()
repos_info = []
for repo, repo_folder, repo_arch, packages, modules in zip(
args.repo,
args.repo_folder,
args.repo_arch,
args.packages,
args.modules,
):
modules = modules.split()
packages = packages.split()
if repo.startswith('http://'):
repos_info.append(
ExtraVariantInfo(
name=repo_folder,
arch=repo_arch,
repos=[
RepoInfo(
path=repo,
folder=repo_folder,
is_remote=True,
)
],
modules=modules,
packages=packages,
)
)
else:
repos_info.extend(
CreateExtraRepo.get_repo_info_from_bs_repo(
auth_token=args.bs_auth_token,
build_id=repo,
arch=repo_arch,
modules=modules,
packages=packages,
)
)
cer = CreateExtraRepo(
variants=repos_info,
bs_auth_token=args.bs_auth_token,
local_repository_path=args.local_repo_path,
clear_target_repo=args.clear_local_repo,
)
cer.create_extra_repo()
if __name__ == '__main__':
cli_main()
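# Illustrative programmatic use (not part of the source; the URL, package
# name and path are made-up placeholders), mirroring what cli_main() builds
# from its arguments:
#
#     >>> variant = ExtraVariantInfo(
#     ...     name='baseos-stream',
#     ...     arch='x86_64',
#     ...     repos=[RepoInfo(
#     ...         path='http://example.com/mirror',
#     ...         folder='baseos-stream',
#     ...         is_remote=True,
#     ...     )],
#     ...     packages=['bash'],
#     ...     modules=[],
#     ... )
#     >>> cer = CreateExtraRepo(
#     ...     variants=[variant],
#     ...     bs_auth_token=None,
#     ...     local_repository_path='/var/repo/test_repo',
#     ...     clear_target_repo=True,
#     ... )
#     >>> cer.create_extra_repo()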


@@ -0,0 +1,514 @@
# coding=utf-8
"""
The tool allows generating packages.json. This file is used by pungi
as the parameter `gather_prepopulate`
Sample of using repodata files taken from
https://github.com/rpm-software-management/createrepo_c/blob/master/examples/python/repodata_parsing.py
"""
import argparse
import gzip
import json
import logging
import lzma
import os
import re
import tempfile
from collections import defaultdict
from itertools import tee
from pathlib import Path
from typing import (
AnyStr,
Dict,
List,
Any,
Iterator,
Optional,
Tuple,
Union,
)
import binascii
from urllib.parse import urljoin
import requests
import rpm
import yaml
from createrepo_c import (
Package,
PackageIterator,
Repomd,
RepomdRecord,
)
from dataclasses import dataclass, field
from kobo.rpmlib import parse_nvra
logging.basicConfig(level=logging.INFO)
def _is_compressed_file(first_two_bytes: bytes, initial_bytes: bytes):
return binascii.hexlify(first_two_bytes) == initial_bytes
def is_gzip_file(first_two_bytes):
return _is_compressed_file(
first_two_bytes=first_two_bytes,
initial_bytes=b'1f8b',
)
def is_xz_file(first_two_bytes):
return _is_compressed_file(
first_two_bytes=first_two_bytes,
initial_bytes=b'fd37',
)
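# Illustration (not in the original source): the two helpers compare the
# first two bytes of a stream against the gzip (1f 8b) and xz (fd 37)
# magic numbers:
#
#     >>> is_gzip_file(b'\x1f\x8b')
#     True
#     >>> is_xz_file(b'\xfd7')
#     True
#     >>> is_gzip_file(b'PK')
#     False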
@dataclass
class RepoInfo:
# path to a directory with repo directories. E.g. '/var/repos' contains
# 'appstream', 'baseos', etc.
# Or 'http://koji.cloudlinux.com/mirrors/rhel_mirror' if you are
# using remote repo
path: str
# name of folder with a repodata folder. E.g. 'baseos', 'appstream', etc
folder: str
# Is a repo remote or local
is_remote: bool
    # Is this a reference repository (usually a RHEL repo)?
    # The layout of packages from a reference repository is taken as the
    # example; the layout of a specific package is taken from a
    # non-reference repository only if that package doesn't exist
    # in any reference repository
# The packages from 'present' repo will be added to a variant.
# The packages from 'absent' repo will be removed from a variant.
repo_type: str = 'present'
@dataclass
class VariantInfo:
# name of variant. E.g. 'BaseOS', 'AppStream', etc
name: AnyStr
# architecture of variant. E.g. 'x86_64', 'i686', etc
arch: AnyStr
# The packages which will be not added to a variant
excluded_packages: List[str] = field(default_factory=list)
# Repos of a variant
repos: List[RepoInfo] = field(default_factory=list)
class PackagesGenerator:
repo_arches = defaultdict(lambda: list(('noarch',)))
addon_repos = {
'x86_64': ['i686'],
'ppc64le': [],
'aarch64': [],
's390x': [],
'i686': [],
}
def __init__(
self,
variants: List[VariantInfo],
excluded_packages: List[AnyStr],
included_packages: List[AnyStr],
):
self.variants = variants
self.pkgs = dict()
self.excluded_packages = excluded_packages
self.included_packages = included_packages
self.tmp_files = [] # type: list[Path]
for arch, arch_list in self.addon_repos.items():
self.repo_arches[arch].extend(arch_list)
self.repo_arches[arch].append(arch)
def __del__(self):
for tmp_file in self.tmp_files:
if tmp_file.exists():
tmp_file.unlink()
@staticmethod
def _get_full_repo_path(repo_info: RepoInfo):
result = os.path.join(
repo_info.path,
repo_info.folder
)
if repo_info.is_remote:
result = urljoin(
repo_info.path + '/',
repo_info.folder,
)
return result
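    # Note (illustrative): the trailing slash added above matters for urljoin:
    #
    #     >>> from urllib.parse import urljoin
    #     >>> urljoin('http://host/mirror' + '/', 'baseos')
    #     'http://host/mirror/baseos'
    #     >>> urljoin('http://host/mirror', 'baseos')
    #     'http://host/baseos'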
@staticmethod
def _warning_callback(warning_type, message):
"""
Warning callback for createrepo_c parsing functions
"""
print(f'Warning message: "{message}"; warning type: "{warning_type}"')
return True
def get_remote_file_content(self, file_url: AnyStr) -> AnyStr:
"""
Get content from a remote file and write it to a temp file
:param file_url: url of a remote file
:return: path to a temp file
"""
file_request = requests.get(
url=file_url,
)
file_request.raise_for_status()
with tempfile.NamedTemporaryFile(delete=False) as file_stream:
file_stream.write(file_request.content)
self.tmp_files.append(Path(file_stream.name))
return file_stream.name
@staticmethod
def _parse_repomd(repomd_file_path: AnyStr) -> Repomd:
"""
Parse file repomd.xml and create object Repomd
:param repomd_file_path: path to local repomd.xml
"""
return Repomd(repomd_file_path)
@classmethod
def _parse_modules_file(
cls,
modules_file_path: AnyStr,
) -> Iterator[Any]:
"""
        Parse modules.yaml.gz and return the parsed data
:param modules_file_path: path to local modules.yaml.gz
:return: List of dict for each module in a repo
"""
with open(modules_file_path, 'rb') as modules_file:
data = modules_file.read()
if is_gzip_file(data[:2]):
data = gzip.decompress(data)
elif is_xz_file(data[:2]):
data = lzma.decompress(data)
return yaml.load_all(
data,
Loader=yaml.BaseLoader,
)
def _get_repomd_records(
self,
repo_info: RepoInfo,
) -> List[RepomdRecord]:
"""
        Fetch and parse repomd.xml, then extract its repomd records
:param repo_info: structure which contains info about a current repo
:return: list with repomd records
"""
repomd_file_path = os.path.join(
repo_info.path,
repo_info.folder,
'repodata',
'repomd.xml',
)
if repo_info.is_remote:
repomd_file_path = urljoin(
urljoin(
repo_info.path + '/',
repo_info.folder
) + '/',
'repodata/repomd.xml'
)
repomd_file_path = self.get_remote_file_content(repomd_file_path)
repomd_object = self._parse_repomd(repomd_file_path)
if repo_info.is_remote:
os.remove(repomd_file_path)
return repomd_object.records
def _download_repomd_records(
self,
repo_info: RepoInfo,
repomd_records: List[RepomdRecord],
repomd_records_dict: Dict[str, str],
):
"""
Download repomd records
:param repo_info: structure which contains info about a current repo
:param repomd_records: list with repomd records
:param repomd_records_dict: dict with paths to repodata files
"""
for repomd_record in repomd_records:
if repomd_record.type not in (
'primary',
'filelists',
'other',
):
continue
repomd_record_file_path = os.path.join(
repo_info.path,
repo_info.folder,
repomd_record.location_href,
)
if repo_info.is_remote:
repomd_record_file_path = self.get_remote_file_content(
repomd_record_file_path)
repomd_records_dict[repomd_record.type] = repomd_record_file_path
def _parse_module_repomd_record(
self,
repo_info: RepoInfo,
repomd_records: List[RepomdRecord],
) -> List[Dict]:
"""
Download repomd records
:param repo_info: structure which contains info about a current repo
:param repomd_records: list with repomd records
"""
for repomd_record in repomd_records:
if repomd_record.type != 'modules':
continue
repomd_record_file_path = os.path.join(
repo_info.path,
repo_info.folder,
repomd_record.location_href,
)
if repo_info.is_remote:
repomd_record_file_path = self.get_remote_file_content(
repomd_record_file_path)
return list(self._parse_modules_file(
repomd_record_file_path,
))
return []
@staticmethod
def compare_pkgs_version(package_1: Package, package_2: Package) -> int:
version_tuple_1 = (
package_1.epoch,
package_1.version,
package_1.release,
)
version_tuple_2 = (
package_2.epoch,
package_2.version,
package_2.release,
)
return rpm.labelCompare(version_tuple_1, version_tuple_2)
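    # Illustration (not in the original source): rpm.labelCompare() takes
    # (epoch, version, release) triples and returns -1, 0 or 1, so a
    # positive result from compare_pkgs_version() means package_1 is newer:
    #
    #     >>> import rpm
    #     >>> rpm.labelCompare(('1', '1.0', '1'), ('1', '1.0', '2'))
    #     -1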
def get_packages_iterator(
self,
repo_info: RepoInfo,
) -> Union[PackageIterator, Iterator]:
full_repo_path = self._get_full_repo_path(repo_info)
pkgs_iterator = self.pkgs.get(full_repo_path)
if pkgs_iterator is None:
repomd_records = self._get_repomd_records(
repo_info=repo_info,
)
repomd_records_dict = {} # type: Dict[str, str]
self._download_repomd_records(
repo_info=repo_info,
repomd_records=repomd_records,
repomd_records_dict=repomd_records_dict,
)
pkgs_iterator = PackageIterator(
primary_path=repomd_records_dict['primary'],
filelists_path=repomd_records_dict['filelists'],
other_path=repomd_records_dict['other'],
warningcb=self._warning_callback,
)
pkgs_iterator, self.pkgs[full_repo_path] = tee(pkgs_iterator)
return pkgs_iterator
def get_package_arch(
self,
package: Package,
variant_arch: str,
) -> str:
result = variant_arch
if package.arch in self.repo_arches[variant_arch]:
result = package.arch
return result
def is_skipped_module_package(
self,
package: Package,
variant_arch: str,
) -> bool:
package_key = self.get_package_key(package, variant_arch)
        # Even a module package will be added to packages.json if
        # it is present in the list of included packages
return 'module' in package.release and not any(
re.search(
f'^{included_pkg}$',
package_key,
) or included_pkg in (package.name, package_key)
for included_pkg in self.included_packages
)
def is_excluded_package(
self,
package: Package,
variant_arch: str,
excluded_packages: List[str],
) -> bool:
package_key = self.get_package_key(package, variant_arch)
return any(
re.search(
f'^{excluded_pkg}$',
package_key,
) or excluded_pkg in (package.name, package_key)
for excluded_pkg in excluded_packages
)
@staticmethod
def get_source_rpm_name(package: Package) -> str:
source_rpm_nvra = parse_nvra(package.rpm_sourcerpm)
return source_rpm_nvra['name']
def get_package_key(self, package: Package, variant_arch: str) -> str:
return (
f'{package.name}.'
f'{self.get_package_arch(package, variant_arch)}'
)
def generate_packages_json(
self
) -> Dict[AnyStr, Dict[AnyStr, Dict[AnyStr, List[AnyStr]]]]:
"""
Generate packages.json
"""
packages = defaultdict(lambda: defaultdict(lambda: {
'variants': list(),
}))
for variant_info in self.variants:
for repo_info in variant_info.repos:
is_reference = repo_info.is_reference
for package in self.get_packages_iterator(repo_info=repo_info):
if self.is_skipped_module_package(
package=package,
variant_arch=variant_info.arch,
):
continue
if self.is_excluded_package(
package=package,
variant_arch=variant_info.arch,
excluded_packages=self.excluded_packages,
):
continue
if self.is_excluded_package(
package=package,
variant_arch=variant_info.arch,
excluded_packages=variant_info.excluded_packages,
):
continue
package_key = self.get_package_key(
package,
variant_info.arch,
)
source_rpm_name = self.get_source_rpm_name(package)
package_info = packages[source_rpm_name][package_key]
if 'is_reference' not in package_info:
package_info['variants'].append(variant_info.name)
package_info['is_reference'] = is_reference
package_info['package'] = package
elif not package_info['is_reference'] or \
package_info['is_reference'] == is_reference and \
self.compare_pkgs_version(
package_1=package,
package_2=package_info['package'],
) > 0:
package_info['variants'] = [variant_info.name]
package_info['is_reference'] = is_reference
package_info['package'] = package
elif self.compare_pkgs_version(
package_1=package,
package_2=package_info['package'],
) == 0 and repo_info.repo_type != 'absent':
package_info['variants'].append(variant_info.name)
result = defaultdict(lambda: defaultdict(
lambda: defaultdict(list),
))
for variant_info in self.variants:
for source_rpm_name, packages_info in packages.items():
for package_key, package_info in packages_info.items():
variant_pkgs = result[variant_info.name][variant_info.arch]
if variant_info.name not in package_info['variants']:
continue
variant_pkgs[source_rpm_name].append(package_key)
return result
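    # The returned mapping nests variant -> arch -> source rpm name -> list
    # of package keys; an illustrative (made-up) result:
    #
    #     {
    #         "BaseOS": {
    #             "x86_64": {
    #                 "bash": ["bash.x86_64", "bash-debuginfo.x86_64"]
    #             }
    #         }
    #     }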
def create_parser():
parser = argparse.ArgumentParser()
parser.add_argument(
'-c',
'--config',
type=Path,
default=Path('config.yaml'),
required=False,
help='Path to a config',
)
parser.add_argument(
'-o',
'--json-output-path',
type=str,
help='Full path to output json file',
required=True,
)
return parser
def read_config(config_path: Path) -> Optional[Dict]:
if not config_path.exists():
logging.error('A config by path "%s" does not exist', config_path)
exit(1)
with config_path.open('r') as config_fd:
return yaml.safe_load(config_fd)
def process_config(config_data: Dict) -> Tuple[
List[VariantInfo],
List[str],
List[str],
]:
excluded_packages = config_data.get('excluded_packages', [])
included_packages = config_data.get('included_packages', [])
variants = [VariantInfo(
name=variant_name,
arch=variant_info['arch'],
excluded_packages=variant_info.get('excluded_packages', []),
repos=[RepoInfo(
path=variant_repo['path'],
folder=variant_repo['folder'],
is_remote=variant_repo['remote'],
is_reference=variant_repo['reference'],
repo_type=variant_repo.get('repo_type', 'present'),
) for variant_repo in variant_info['repos']]
) for variant_name, variant_info in config_data['variants'].items()]
return variants, excluded_packages, included_packages
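# A hedged sketch (field names inferred from process_config() above) of the
# structure the YAML config must deserialize to; every value is made up:
#
#     {
#         'excluded_packages': ['some-pkg'],
#         'included_packages': [],
#         'variants': {
#             'BaseOS': {
#                 'arch': 'x86_64',
#                 'excluded_packages': [],
#                 'repos': [{
#                     'path': '/var/repos',
#                     'folder': 'baseos',
#                     'remote': False,
#                     'reference': True,
#                     'repo_type': 'present',
#                 }],
#             },
#         },
#     }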
def cli_main():
args = create_parser().parse_args()
variants, excluded_packages, included_packages = process_config(
config_data=read_config(args.config)
)
pg = PackagesGenerator(
variants=variants,
excluded_packages=excluded_packages,
included_packages=included_packages,
)
result = pg.generate_packages_json()
with open(args.json_output_path, 'w') as packages_file:
json.dump(
result,
packages_file,
indent=4,
sort_keys=True,
)
if __name__ == '__main__':
cli_main()


@@ -14,6 +14,9 @@ def send(cmd, data):
     topic = "compose.%s" % cmd.replace("-", ".").lower()
     try:
         msg = fedora_messaging.api.Message(topic="pungi.{}".format(topic), body=data)
+        if cmd == "ostree":
+            # https://pagure.io/fedora-infrastructure/issue/10899
+            msg.priority = 3
         fedora_messaging.api.publish(msg)
     except fedora_messaging.exceptions.PublishReturned as e:
         print("Fedora Messaging broker rejected message %s: %s" % (msg.id, e))


@@ -0,0 +1,255 @@
import gzip
import lzma
import os
from argparse import ArgumentParser, FileType
from glob import iglob
from io import BytesIO
from pathlib import Path
from typing import List, AnyStr, Iterable, Union, Optional
import logging
from urllib.parse import urljoin
import yaml
import createrepo_c as cr
from typing.io import BinaryIO
from .create_packages_json import PackagesGenerator, is_gzip_file, is_xz_file
EMPTY_FILE = '.empty'
def read_modules_yaml(modules_yaml_path: Union[str, Path]) -> BytesIO:
with open(modules_yaml_path, 'rb') as fp:
return BytesIO(fp.read())
def grep_list_of_modules_yaml(repos_path: AnyStr) -> Iterable[BytesIO]:
"""
    Find all valid *modules.yaml.* files in the repos
:param repos_path: path to a directory which contains repo dirs
:return: iterable object of content from *modules.yaml.*
"""
return (
read_modules_yaml_from_specific_repo(repo_path=Path(path).parent)
for path in iglob(
str(Path(repos_path).joinpath('**/repodata')),
recursive=True
)
)
def _is_remote(path: str):
return any(str(path).startswith(protocol)
for protocol in ('http', 'https'))
def read_modules_yaml_from_specific_repo(
repo_path: Union[str, Path]
) -> Optional[BytesIO]:
"""
Read modules_yaml from a specific repo (remote or local)
:param repo_path: path/url to a specific repo
(final dir should contain dir `repodata`)
:return: iterable object of content from *modules.yaml.*
"""
if _is_remote(repo_path):
repomd_url = urljoin(
repo_path + '/',
'repodata/repomd.xml',
)
packages_generator = PackagesGenerator(
variants=[],
excluded_packages=[],
included_packages=[],
)
repomd_file_path = packages_generator.get_remote_file_content(
file_url=repomd_url
)
else:
repomd_file_path = os.path.join(
repo_path,
'repodata/repomd.xml',
)
repomd_obj = cr.Repomd(str(repomd_file_path))
for record in repomd_obj.records:
if record.type != 'modules':
continue
else:
if _is_remote(repo_path):
modules_yaml_url = urljoin(
repo_path + '/',
record.location_href,
)
packages_generator = PackagesGenerator(
variants=[],
excluded_packages=[],
included_packages=[],
)
modules_yaml_path = packages_generator.get_remote_file_content(
file_url=modules_yaml_url
)
else:
modules_yaml_path = os.path.join(
repo_path,
record.location_href,
)
return read_modules_yaml(modules_yaml_path=modules_yaml_path)
else:
return None
def _should_grep_defaults(
document_type: str,
grep_only_modules_data: bool = False,
grep_only_modules_defaults_data: bool = False,
) -> bool:
xor_flag = grep_only_modules_data == grep_only_modules_defaults_data
if document_type == 'modulemd' and (xor_flag or grep_only_modules_data):
return True
return False
def _should_grep_modules(
document_type: str,
grep_only_modules_data: bool = False,
grep_only_modules_defaults_data: bool = False,
) -> bool:
xor_flag = grep_only_modules_data == grep_only_modules_defaults_data
if document_type == 'modulemd-defaults' and \
(xor_flag or grep_only_modules_defaults_data):
return True
return False
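# Taken together (a summary, not in the original source): when neither or
# both flags are set, both document types are collected; setting exactly one
# flag restricts collection to the matching type. Note that, despite their
# names, _should_grep_defaults() matches 'modulemd' documents and
# _should_grep_modules() matches 'modulemd-defaults' documents, mirroring
# how collect_modules() below uses them:
#
#     >>> _should_grep_defaults('modulemd')
#     True
#     >>> _should_grep_modules('modulemd-defaults')
#     True
#     >>> _should_grep_defaults('modulemd', grep_only_modules_defaults_data=True)
#     False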
def collect_modules(
modules_paths: List[BinaryIO],
target_dir: str,
grep_only_modules_data: bool = False,
grep_only_modules_defaults_data: bool = False,
):
"""
Read given modules.yaml.gz files and export modules
and modulemd files from it.
Returns:
object:
"""
xor_flag = grep_only_modules_defaults_data is grep_only_modules_data
modules_path = os.path.join(target_dir, 'modules')
module_defaults_path = os.path.join(target_dir, 'module_defaults')
if grep_only_modules_data or xor_flag:
os.makedirs(modules_path, exist_ok=True)
if grep_only_modules_defaults_data or xor_flag:
os.makedirs(module_defaults_path, exist_ok=True)
            # Module defaults can be empty, but pungi raises an exception
            # when it copies an empty folder, so create a placeholder file
Path(os.path.join(module_defaults_path, EMPTY_FILE)).touch()
for module_file in modules_paths:
data = module_file.read()
if is_gzip_file(data[:2]):
data = gzip.decompress(data)
elif is_xz_file(data[:2]):
data = lzma.decompress(data)
documents = yaml.load_all(data, Loader=yaml.BaseLoader)
for doc in documents:
path = None
if _should_grep_modules(
doc['document'],
grep_only_modules_data,
grep_only_modules_defaults_data,
):
name = f"{doc['data']['module']}.yaml"
path = os.path.join(module_defaults_path, name)
logging.info('Found %s module defaults', name)
elif _should_grep_defaults(
doc['document'],
grep_only_modules_data,
grep_only_modules_defaults_data,
):
# pungi.phases.pkgset.sources.source_koji.get_koji_modules
stream = doc['data']['stream'].replace('-', '_')
doc_data = doc['data']
name = f"{doc_data['name']}-{stream}-" \
f"{doc_data['version']}.{doc_data['context']}"
arch_dir = os.path.join(
modules_path,
doc_data['arch']
)
os.makedirs(arch_dir, exist_ok=True)
path = os.path.join(
arch_dir,
name,
)
logging.info('Found module %s', name)
if 'artifacts' not in doc['data']:
logging.warning(
'RPM %s does not have explicit list of artifacts',
name
)
if path is not None:
with open(path, 'w') as f:
yaml.dump(doc, f, default_flow_style=False)
def cli_main():
parser = ArgumentParser()
content_type_group = parser.add_mutually_exclusive_group(required=False)
content_type_group.add_argument(
'--get-only-modules-data',
action='store_true',
help='Parse and get only modules data',
)
content_type_group.add_argument(
'--get-only-modules-defaults-data',
action='store_true',
help='Parse and get only modules_defaults data',
)
path_group = parser.add_mutually_exclusive_group(required=True)
path_group.add_argument(
'-p', '--path',
type=FileType('rb'), nargs='+',
help='Path to modules.yaml.gz file. '
'You may pass multiple files by passing -p path1 path2'
)
path_group.add_argument(
'-rp', '--repo-path',
required=False,
type=str,
default=None,
help='Path to a directory which contains repodirs. E.g. /var/repos'
)
path_group.add_argument(
'-rd', '--repodata-paths',
required=False,
type=str,
nargs='+',
default=[],
help='Paths/urls to the directories with directory `repodata`',
)
parser.add_argument('-t', '--target', required=True)
namespace = parser.parse_args()
if namespace.repodata_paths:
modules = []
for repodata_path in namespace.repodata_paths:
modules.append(read_modules_yaml_from_specific_repo(
repodata_path,
))
elif namespace.path is not None:
modules = namespace.path
else:
modules = grep_list_of_modules_yaml(namespace.repo_path)
modules = list(filter(lambda i: i is not None, modules))
collect_modules(
modules,
namespace.target,
namespace.get_only_modules_data,
namespace.get_only_modules_defaults_data,
)
if __name__ == '__main__':
cli_main()


@@ -0,0 +1,96 @@
import re
from argparse import ArgumentParser
import os
from glob import iglob
from typing import List
from pathlib import Path
from dataclasses import dataclass
from productmd.common import parse_nvra
@dataclass
class Package:
nvra: dict
path: Path
def search_rpms(top_dir: Path) -> List[Package]:
"""
Search for all *.rpm files recursively
in given top directory
    Returns:
        list: list of Package objects
"""
return [Package(
nvra=parse_nvra(Path(path).stem),
path=Path(path),
) for path in iglob(str(top_dir.joinpath('**/*.rpm')), recursive=True)]
def is_excluded_package(
package: Package,
excluded_packages: List[str],
) -> bool:
package_key = f'{package.nvra["name"]}.{package.nvra["arch"]}'
return any(
re.search(
f'^{excluded_pkg}$',
package_key,
) or excluded_pkg in (package.nvra['name'], package_key)
for excluded_pkg in excluded_packages
)
def copy_rpms(
packages: List[Package],
target_top_dir: Path,
excluded_packages: List[str],
):
"""
Search synced repos for rpms and prepare
koji-like structure for pungi
Instead of repos, use following structure:
# ls /mnt/koji/
i686/ noarch/ x86_64/
Returns:
Nothing:
"""
for package in packages:
if is_excluded_package(package, excluded_packages):
continue
target_arch_dir = target_top_dir.joinpath(package.nvra['arch'])
target_file = target_arch_dir.joinpath(package.path.name)
os.makedirs(target_arch_dir, exist_ok=True)
if not target_file.exists():
try:
os.link(package.path, target_file)
except OSError:
                # hardlink failed, fall back to a symlink pointing
                # at the source package
                target_file.symlink_to(package.path)
def cli_main():
parser = ArgumentParser()
parser.add_argument('-p', '--path', required=True, type=Path)
parser.add_argument('-t', '--target', required=True, type=Path)
parser.add_argument(
'-e',
'--excluded-packages',
required=False,
nargs='+',
type=str,
default=[],
)
namespace = parser.parse_args()
rpms = search_rpms(namespace.path)
copy_rpms(rpms, namespace.target, namespace.excluded_packages)
if __name__ == '__main__':
cli_main()
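# Example programmatic use (hypothetical paths; the same as running
# cli_main() with -p /mnt/repos -t /mnt/koji -e '.*-debuginfo'):
#
#     >>> rpms = search_rpms(Path('/mnt/repos'))
#     >>> copy_rpms(rpms, Path('/mnt/koji'), ['.*-debuginfo'])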


@@ -319,7 +319,6 @@ def get_arguments(config):
 def main():
     config = pungi.config.Config()
     opts = get_arguments(config)
@@ -479,8 +478,7 @@ def main():
     print("RPM size: %s MiB" % (mypungi.size_packages() / 1024**2))
     if not opts.nodebuginfo:
         print(
-            "DEBUGINFO size: %s MiB"
-            % (mypungi.size_debuginfo() / 1024**2)
+            "DEBUGINFO size: %s MiB" % (mypungi.size_debuginfo() / 1024**2)
         )
     if not opts.nosource:
         print("SRPM size: %s MiB" % (mypungi.size_srpms() / 1024**2))


@@ -97,6 +97,7 @@ def main(ns, persistdir, cachedir):
     dnf_conf = Conf(ns.arch)
     dnf_conf.persistdir = persistdir
     dnf_conf.cachedir = cachedir
+    dnf_conf.optional_metadata_types = ["filelists"]
     dnf_obj = DnfWrapper(dnf_conf)
     gather_opts = GatherOptions()


@@ -23,6 +23,7 @@ from pungi.phases import PHASES_NAMES
 from pungi import get_full_version, util
 from pungi.errors import UnsignedPackagesError
 from pungi.wrappers import kojiwrapper
+from pungi.util import rmtree
 # force C locales
@@ -251,9 +252,15 @@ def main():
     kobo.log.add_stderr_logger(logger)
     conf = util.load_config(opts.config)
     compose_type = opts.compose_type or conf.get("compose_type", "production")
-    if compose_type == "production" and not opts.label and not opts.no_label:
+    label = opts.label or conf.get("label")
+    if label:
+        try:
+            productmd.composeinfo.verify_label(label)
+        except ValueError as ex:
+            abort(str(ex))
+    if compose_type == "production" and not label and not opts.no_label:
         abort("must specify label for a production compose")
     if (
@@ -300,7 +307,12 @@ def main():
     if opts.target_dir:
         compose_dir = Compose.get_compose_dir(
-            opts.target_dir, conf, compose_type=compose_type, compose_label=opts.label
+            opts.target_dir,
+            conf,
+            compose_type=compose_type,
+            compose_label=label,
+            parent_compose_ids=opts.parent_compose_id,
+            respin_of=opts.respin_of,
         )
     else:
         compose_dir = opts.compose_dir
@@ -309,7 +321,7 @@ def main():
     ci = Compose.get_compose_info(
         conf,
         compose_type=compose_type,
-        compose_label=opts.label,
+        compose_label=label,
         parent_compose_ids=opts.parent_compose_id,
         respin_of=opts.respin_of,
     )
@@ -380,6 +392,14 @@ def run_compose(
     compose.log_info("Current timezone offset: %s" % pungi.util.get_tz_offset())
     compose.log_info("COMPOSE_ID=%s" % compose.compose_id)
+    installed_pkgs_log = compose.paths.log.log_file("global", "installed-pkgs")
+    compose.log_info("Logging installed packages to %s" % installed_pkgs_log)
+    try:
+        with open(installed_pkgs_log, "w") as f:
+            subprocess.Popen(["rpm", "-qa"], stdout=f)
+    except Exception as e:
+        compose.log_warning("Failed to log installed packages: %s" % str(e))
     compose.read_variants()
     # dump the config file
@@ -403,11 +423,12 @@ def run_compose(
         compose, buildinstall_phase, pkgset_phase
     )
     ostree_phase = pungi.phases.OSTreePhase(compose, pkgset_phase)
+    ostree_container_phase = pungi.phases.OSTreeContainerPhase(compose, pkgset_phase)
     createiso_phase = pungi.phases.CreateisoPhase(compose, buildinstall_phase)
     extra_isos_phase = pungi.phases.ExtraIsosPhase(compose, buildinstall_phase)
-    liveimages_phase = pungi.phases.LiveImagesPhase(compose)
     livemedia_phase = pungi.phases.LiveMediaPhase(compose)
     image_build_phase = pungi.phases.ImageBuildPhase(compose, buildinstall_phase)
+    kiwibuild_phase = pungi.phases.KiwiBuildPhase(compose)
     osbuild_phase = pungi.phases.OSBuildPhase(compose)
     osbs_phase = pungi.phases.OSBSPhase(compose, pkgset_phase, buildinstall_phase)
     image_container_phase = pungi.phases.ImageContainerPhase(compose)
@@ -424,17 +445,18 @@ def run_compose(
         gather_phase,
         extrafiles_phase,
         createiso_phase,
-        liveimages_phase,
         livemedia_phase,
         image_build_phase,
         image_checksum_phase,
         test_phase,
         ostree_phase,
         ostree_installer_phase,
+        ostree_container_phase,
         extra_isos_phase,
         osbs_phase,
         osbuild_phase,
         image_container_phase,
+        kiwibuild_phase,
     ):
         if phase.skip():
             continue
@@ -449,50 +471,6 @@ def run_compose(
             print(i)
         raise RuntimeError("Configuration is not valid")
-    # PREP
-    # Note: This may be put into a new method of phase classes (e.g. .prep())
-    # in same way as .validate() or .run()
-    # Prep for liveimages - Obtain a password for signing rpm wrapped images
-    if (
-        "signing_key_password_file" in compose.conf
-        and "signing_command" in compose.conf
-        and "%(signing_key_password)s" in compose.conf["signing_command"]
-        and not liveimages_phase.skip()
-    ):
-        # TODO: Don't require key if signing is turned off
-        # Obtain signing key password
-        signing_key_password = None
-        # Use appropriate method
-        if compose.conf["signing_key_password_file"] == "-":
-            # Use stdin (by getpass module)
-            try:
-                signing_key_password = getpass.getpass("Signing key password: ")
-            except EOFError:
-                compose.log_debug("Ignoring signing key password")
-                pass
-        else:
-            # Use text file with password
-            try:
-                signing_key_password = (
-                    open(compose.conf["signing_key_password_file"], "r")
-                    .readline()
-                    .rstrip("\n")
-                )
-            except IOError:
-                # Filename is not print intentionally in case someone puts
-                # password directly into the option
-                err_msg = "Cannot load password from file specified by 'signing_key_password_file' option"  # noqa: E501
-                compose.log_error(err_msg)
-                print(err_msg)
-                raise RuntimeError(err_msg)
-        if signing_key_password:
-            # Store the password
-            compose.conf["signing_key_password"] = signing_key_password
     init_phase.start()
     init_phase.stop()
@@ -505,6 +483,7 @@ def run_compose(
         (gather_phase, createrepo_phase),
         extrafiles_phase,
         (ostree_phase, ostree_installer_phase),
+        ostree_container_phase,
     )
     essentials_phase = pungi.phases.WeaverPhase(compose, essentials_schema)
     essentials_phase.start()
@@ -529,10 +508,10 @@ def run_compose(
     compose_images_schema = (
         createiso_phase,
         extra_isos_phase,
-        liveimages_phase,
         image_build_phase,
         livemedia_phase,
         osbuild_phase,
+        kiwibuild_phase,
     )
     post_image_phase = pungi.phases.WeaverPhase(
         compose, (image_checksum_phase, image_container_phase)
@@ -554,10 +533,11 @@ def run_compose(
         and ostree_installer_phase.skip()
         and createiso_phase.skip()
         and extra_isos_phase.skip()
-        and liveimages_phase.skip()
         and livemedia_phase.skip()
         and image_build_phase.skip()
+        and kiwibuild_phase.skip()
         and osbuild_phase.skip()
+        and ostree_container_phase.skip()
     ):
         compose.im.dump(compose.paths.compose.metadata("images.json"))
         compose.dump_containers_metadata()
@@ -671,7 +651,7 @@ def cli_main():
     except (Exception, KeyboardInterrupt) as ex:
         if COMPOSE:
             COMPOSE.log_error("Compose run failed: %s" % ex)
-            COMPOSE.traceback()
+            COMPOSE.traceback(show_locals=getattr(ex, "show_locals", True))
             COMPOSE.log_critical("Compose failed: %s" % COMPOSE.topdir)
             COMPOSE.write_status("DOOMED")
         else:
@@ -680,3 +660,8 @@ def cli_main():
         sys.stdout.flush()
         sys.stderr.flush()
         sys.exit(1)
+    finally:
+        # Remove repositories cloned during ExtraFiles phase
+        process_id = os.getpid()
+        directoy_to_remove = "/tmp/pungi-temp-git-repos-" + str(process_id) + "/"
+        rmtree(directoy_to_remove)


@@ -279,7 +279,7 @@ class GitUrlResolveError(RuntimeError):
     pass
-def resolve_git_ref(repourl, ref):
+def resolve_git_ref(repourl, ref, credential_helper=None):
     """Resolve a reference in a Git repo to a commit.
     Raises RuntimeError if there was an error. Most likely cause is failure to
@@ -289,7 +289,7 @@ def resolve_git_ref(repourl, ref):
         # This looks like a commit ID already.
         return ref
     try:
-        _, output = git_ls_remote(repourl, ref)
+        _, output = git_ls_remote(repourl, ref, credential_helper)
     except RuntimeError as e:
         raise GitUrlResolveError(
             "ref does not exist in remote repo %s with the error %s %s"
@@ -316,7 +316,7 @@ def resolve_git_ref(repourl, ref):
     return lines[0].split()[0]
-def resolve_git_url(url):
+def resolve_git_url(url, credential_helper=None):
     """Given a url to a Git repo specifying HEAD or origin/<branch> as a ref,
     replace that specifier with actual SHA1 of the commit.
@@ -335,7 +335,7 @@ def resolve_git_url(url):
     scheme = r.scheme.replace("git+", "")
     baseurl = urllib.parse.urlunsplit((scheme, r.netloc, r.path, "", ""))
-    fragment = resolve_git_ref(baseurl, ref)
+    fragment = resolve_git_ref(baseurl, ref, credential_helper)
     result = urllib.parse.urlunsplit((r.scheme, r.netloc, r.path, r.query, fragment))
     if "?#" in url:
@@ -354,13 +354,18 @@ class GitUrlResolver(object):
         self.offline = offline
         self.cache = {}
-    def __call__(self, url, branch=None):
+    def __call__(self, url, branch=None, options=None):
+        credential_helper = options.get("credential_helper") if options else None
         if self.offline:
             return branch or url
         key = (url, branch)
         if key not in self.cache:
             try:
-                res = resolve_git_ref(url, branch) if branch else resolve_git_url(url)
+                res = (
+                    resolve_git_ref(url, branch, credential_helper)
+                    if branch
+                    else resolve_git_url(url, credential_helper)
+                )
                 self.cache[key] = res
             except GitUrlResolveError as exc:
                 self.cache[key] = exc
@@ -456,6 +461,9 @@ def get_volid(compose, arch, variant=None, disc_type=False, formats=None, **kwargs):
         if not variant_uid and "%(variant)s" in i:
             continue
         try:
+            # fmt: off
+            # Black wants to add a comma after kwargs, but that's not valid in
+            # Python 2.7
             args = get_format_substs(
                 compose,
                 variant=variant_uid,
@@ -467,6 +475,7 @@ def get_volid(compose, arch, variant=None, disc_type=False, formats=None, **kwargs):
                 base_product_version=base_product_version,
                 **kwargs
             )
+            # fmt: on
             volid = (i % args).format(**args)
         except KeyError as err:
             raise RuntimeError(
@@ -478,10 +487,7 @@ def get_volid(compose, arch, variant=None, disc_type=False, formats=None, **kwargs):
         tried.add(volid)
     if volid and len(volid) > 32:
-        raise ValueError(
-            "Could not create volume ID longer than 32 bytes, options are %r",
-            sorted(tried, key=len),
-        )
+        volid = volid[:32]
     if compose.conf["restricted_volid"]:
         # Replace all non-alphanumeric characters and non-underscores) with
@@ -991,8 +997,12 @@ def retry(timeout=120, interval=30, wait_on=Exception):
 @retry(wait_on=RuntimeError)
-def git_ls_remote(baseurl, ref):
-    return run(["git", "ls-remote", baseurl, ref], universal_newlines=True)
+def git_ls_remote(baseurl, ref, credential_helper=None):
+    cmd = ["git"]
+    if credential_helper:
+        cmd.extend(["-c", "credential.useHttpPath=true"])
+        cmd.extend(["-c", "credential.helper=%s" % credential_helper])
+    return run(cmd + ["ls-remote", baseurl, ref], universal_newlines=True)
 def get_tz_offset():
@@ -1137,3 +1147,16 @@ def read_json_file(file_path):
     """A helper function to read a JSON file."""
     with open(file_path) as f:
         return json.load(f)
+UNITS = ["", "Ki", "Mi", "Gi", "Ti"]
+def format_size(sz):
+    sz = float(sz)
+    unit = 0
+    while sz > 1024:
+        sz /= 1024
+        unit += 1
+    return "%.3g %sB" % (sz, UNITS[unit])


@@ -183,11 +183,12 @@ class CompsFilter(object):
         """
        all_groups = self.tree.xpath("/comps/group/id/text()") + lookaside_groups
         for environment in self.tree.xpath("/comps/environment"):
-            for group in environment.xpath("grouplist/groupid"):
+            for parent_tag in ("grouplist", "optionlist"):
+                for group in environment.xpath("%s/groupid" % parent_tag):
                     if group.text not in all_groups:
                         group.getparent().remove(group)
-            for group in environment.xpath("grouplist/groupid[@arch]"):
+                for group in environment.xpath("%s/groupid[@arch]" % parent_tag):
                     value = group.attrib.get("arch")
                     values = [v for v in re.split(r"[, ]+", value) if v]
                     if arch not in values:
@@ -305,6 +306,8 @@ class CompsWrapper(object):
             append_common_info(doc, group_node, group, force_description=True)
             append_bool(doc, group_node, "default", group.default)
             append_bool(doc, group_node, "uservisible", group.uservisible)
+            if group.display_order is not None:
+                append(doc, group_node, "display_order", str(group.display_order))
             if group.lang_only:
                 append(doc, group_node, "langonly", group.lang_only)


@@ -88,5 +88,12 @@ def parse_output(output):
             packages.add((name, arch, frozenset(flags)))
         else:
             name, arch = nevra.rsplit(".", 1)
-            modules.add(name.split(":", 1)[1])
+            # replace dashes with underscores in the stream part of the
+            # module's nevra; the name looks like
+            # module:llvm-toolset:rhel8:8040020210411062713:9f9e2e7e.x86_64
+            name = ':'.join(
+                item.replace('-', '_') if i == 1 else item for
+                i, item in enumerate(name.split(':')[1:])
+            )
+            modules.add(name)
     return packages, modules


@@ -260,24 +260,34 @@ def get_isohybrid_cmd(iso_path, arch):
     return cmd
-def get_manifest_cmd(iso_name, xorriso=False):
+def get_manifest_cmd(iso_name, xorriso=False, output_file=None):
+    if not output_file:
+        output_file = "%s.manifest" % iso_name
     if xorriso:
         return """xorriso -dev %s --find |
         tail -n+2 |
         tr -d "'" |
         cut -c2- |
-        sort >> %s.manifest""" % (
+        sort >> %s""" % (
             shlex_quote(iso_name),
-            shlex_quote(iso_name),
+            shlex_quote(output_file),
         )
     else:
-        return "isoinfo -R -f -i %s | grep -v '/TRANS.TBL$' | sort >> %s.manifest" % (
+        return "isoinfo -R -f -i %s | grep -v '/TRANS.TBL$' | sort >> %s" % (
             shlex_quote(iso_name),
-            shlex_quote(iso_name),
+            shlex_quote(output_file),
         )
-def get_volume_id(path):
+def get_volume_id(path, xorriso=False):
+    if xorriso:
+        cmd = ["xorriso", "-indev", path]
+        retcode, output = run(cmd, universal_newlines=True)
+        for line in output.splitlines():
+            if line.startswith("Volume id"):
+                return line.split("'")[1]
+    else:
-    cmd = ["isoinfo", "-d", "-i", path]
-    retcode, output = run(cmd, universal_newlines=True)
+        cmd = ["isoinfo", "-d", "-i", path]
+        retcode, output = run(cmd, universal_newlines=True)
@@ -506,3 +516,21 @@ def mount(image, logger=None, use_guestmount=True):
         util.run_unmount_cmd(["fusermount", "-u", mount_dir], path=mount_dir)
     else:
         util.run_unmount_cmd(["umount", mount_dir], path=mount_dir)
+def xorriso_commands(arch, input, output):
+    """List of xorriso commands to modify a bootable image."""
+    commands = [
+        ("-indev", input),
+        ("-outdev", output),
+        # isoinfo -J uses the Joliet tree, and it's used by virt-install
+        ("-joliet", "on"),
+        # Support long filenames in the Joliet trees. Repodata is particularly
+        # likely to run into this limit.
+        ("-compliance", "joliet_long_names"),
+        ("-boot_image", "any", "replay"),
+    ]
+    if arch == "ppc64le":
+        # This is needed for the image to be bootable.
+        commands.append(("-as", "mkisofs", "-U", "--"))
+    return commands
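(For reference, not part of the diff: xorriso_commands("x86_64", "in.iso", "out.iso") yields ("-indev", "in.iso"), ("-outdev", "out.iso"), ("-joliet", "on"), ("-compliance", "joliet_long_names") and ("-boot_image", "any", "replay"); on ppc64le an extra ("-as", "mkisofs", "-U", "--") is appended to keep the image bootable.)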

pungi/wrappers/kojimock.py (new file, 299 lines)

@@ -0,0 +1,299 @@
import os
import time
from pathlib import Path
from attr import dataclass
from kobo.rpmlib import parse_nvra
from pungi.module_util import Modulemd
# just a random value which we don't
# currently use in the mock;
# originally builds are filtered by this value
# to get a consistent snapshot of tags and packages
from pungi.scripts.gather_rpms import search_rpms
LAST_EVENT_ID = 999999
# the last event time is not important, but the build
# time should be less than it
LAST_EVENT_TIME = time.time()
BUILD_TIME = 0
# virtual build that collects all
# packages built for some arch
RELEASE_BUILD_ID = 15270
# tag that should have all packages available
ALL_PACKAGES_TAG = 'dist-c8-compose'
# tag that should have all modules available
ALL_MODULES_TAG = 'dist-c8-module-compose'
@dataclass
class Module:
build_id: int
name: str
nvr: str
stream: str
version: str
context: str
arch: str
class KojiMock:
"""
Class that acts like real koji (for some needed methods)
but uses local storage as data source
"""
def __init__(self, packages_dir, modules_dir, all_arches):
self._modules = self._gather_modules(modules_dir)
self._modules_dir = modules_dir
self._packages_dir = packages_dir
self._all_arches = all_arches
@staticmethod
def _gather_modules(modules_dir):
modules = {}
for index, (f, arch) in enumerate(
(sub_path.name, sub_path.parent.name)
for path in Path(modules_dir).glob('*')
for sub_path in path.iterdir()
):
parsed = parse_nvra(f)
modules[index] = Module(
name=parsed['name'],
nvr=f,
version=parsed['release'],
context=parsed['arch'],
stream=parsed['version'],
build_id=index,
arch=arch,
)
return modules
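    # Expected on-disk layout (illustrative, not from the source):
    #
    #     <modules_dir>/x86_64/perl-DBI-1.641-8030020200831222609.b967a9a2
    #
    # parse_nvra() splits that filename so that 'version' holds the module
    # stream, 'release' the module version and 'arch' the context, while the
    # real architecture comes from the parent directory name.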
@staticmethod
def getLastEvent(*args, **kwargs):
return {'id': LAST_EVENT_ID, 'ts': LAST_EVENT_TIME}
def listTagged(self, tag_name, *args, **kwargs):
"""
        Return a list of virtual 'builds' that contain packages with the given tag.
        There are two kinds of tags: modular and distributive.
        For now, only one kind, the distributive one, is needed.
"""
if tag_name != ALL_MODULES_TAG:
raise ValueError("I don't know what tag is %s" % tag_name)
builds = []
for module in self._modules.values():
builds.append({
'build_id': module.build_id,
'owner_name': 'centos',
'package_name': module.name,
'nvr': module.nvr,
'version': module.stream,
'release': '%s.%s' % (module.version, module.context),
'name': module.name,
'id': module.build_id,
'tag_name': tag_name,
'arch': module.arch,
# Following fields are currently not
# used but returned by real koji
# left them here just for reference
#
# 'task_id': None,
# 'state': 1,
# 'start_time': '2020-12-23 16:43:59',
# 'creation_event_id': 309485,
# 'creation_time': '2020-12-23 17:05:33.553748',
# 'epoch': None, 'tag_id': 533,
# 'completion_time': '2020-12-23 17:05:23',
# 'volume_id': 0,
# 'package_id': 3221,
# 'owner_id': 11,
# 'volume_name': 'DEFAULT',
})
return builds
@staticmethod
def getFullInheritance(*args, **kwargs):
"""
Unneeded because we use local storage.
"""
return []
def getBuild(self, build_id, *args, **kwargs):
"""
Used to get information about build
(used in pungi only for modules currently)
"""
module = self._modules[build_id]
result = {
'id': build_id,
'name': module.name,
'version': module.stream,
'release': '%s.%s' % (module.version, module.context),
'completion_ts': BUILD_TIME,
'state': 'COMPLETE',
'arch': module.arch,
'extra': {
'typeinfo': {
'module': {
'stream': module.stream,
'version': module.version,
'name': module.name,
'context': module.context,
'content_koji_tag': '-'.join([
module.name,
module.stream,
module.version
]) + '.' + module.context
}
}
}
}
return result
def listArchives(self, build_id, *args, **kwargs):
"""
        Originally lists artifacts for a build; pungi uses it
        only to get the list of modulemd files for a module
"""
module = self._modules[build_id]
return [
{
'build_id': module.build_id,
'filename': f'modulemd.{module.arch}.txt',
'btype': 'module'
},
            # no one ever uses this file,
            # but it must be present because pungi ignores builds
            # with len(files) <= 1
{
'build_id': module.build_id,
'filename': 'modulemd.txt',
'btype': 'module'
}
]
def listTaggedRPMS(self, tag_name, *args, **kwargs):
"""
Get information about packages that are tagged by tag.
        There are two kinds of tags: per-module and per-distro.
"""
if tag_name == ALL_PACKAGES_TAG:
builds, packages = self._get_release_packages()
else:
builds, packages = self._get_module_packages(tag_name)
return [
packages,
builds
]
def _get_release_packages(self):
"""
Search packages dir and keep only
packages that are non-modular.
        This is quite close to how real koji works:
- modular packages are tagged by module-* tag
- all other packages are tagged with dist* tag
"""
packages = []
# get all rpms in folder
rpms = search_rpms(Path(self._packages_dir))
for rpm in rpms:
info = parse_nvra(rpm.path.stem)
if 'module' in info['release']:
continue
packages.append({
"build_id": RELEASE_BUILD_ID,
"name": info['name'],
"extra": None,
"arch": info['arch'],
"epoch": info['epoch'] or None,
"version": info['version'],
"metadata_only": False,
"release": info['release'],
# not used currently
# "id": 262555,
# "size": 0
})
builds = []
return builds, packages
def _get_module_packages(self, tag_name):
"""
Get list of builds for module and given module tag name.
"""
builds = []
packages = []
modules = self._get_modules_by_name(tag_name)
for module in modules:
if module is None:
raise ValueError('Module %s is not found' % tag_name)
path = os.path.join(
self._modules_dir,
module.arch,
tag_name,
)
builds.append({
"build_id": module.build_id,
"package_name": module.name,
"nvr": module.nvr,
"tag_name": module.nvr,
"version": module.stream,
"release": module.version,
"id": module.build_id,
"name": module.name,
"volume_name": "DEFAULT",
# Following fields are currently not
# used but returned by real koji
# left them here just for reference
#
# "owner_name": "mbox-mbs-backend",
# "task_id": 195937,
# "state": 1,
# "start_time": "2020-12-22 19:20:12.504578",
# "creation_event_id": 306731,
# "creation_time": "2020-12-22 19:20:12.504578",
# "epoch": None,
# "tag_id": 1192,
# "completion_time": "2020-12-22 19:34:34.716615",
# "volume_id": 0,
# "package_id": 104,
# "owner_id": 6,
})
if os.path.exists(path):
info = Modulemd.ModuleStream.read_string(open(path).read(), strict=True)
for art in info.get_rpm_artifacts():
data = parse_nvra(art)
packages.append({
"build_id": module.build_id,
"name": data['name'],
"extra": None,
"arch": data['arch'],
"epoch": data['epoch'] or None,
"version": data['version'],
"metadata_only": False,
"release": data['release'],
"id": 262555,
"size": 0
})
else:
raise RuntimeError('Unable to find module %s' % path)
return builds, packages
def _get_modules_by_name(self, tag_name):
modules = []
for arch in self._all_arches:
for module in self._modules.values():
if module.nvr != tag_name or module.arch != arch:
continue
modules.append(module)
return modules


@@ -14,18 +14,25 @@
 # along with this program; if not, see <https://gnu.org/licenses/>.
+import contextlib
 import os
 import re
+import socket
+import shutil
 import time
 import threading
-import contextlib
+import requests
 import koji
 from kobo.shortcuts import run, force_list
 import six
 from six.moves import configparser, shlex_quote
 import six.moves.xmlrpc_client as xmlrpclib
+from flufl.lock import Lock
+from datetime import timedelta
+from .kojimock import KojiMock
 from .. import util
 from ..arch_utils import getBaseArch
@@ -407,92 +414,6 @@ class KojiWrapper(object):
         return cmd
-    def get_create_image_cmd(
-        self,
-        name,
-        version,
-        target,
-        arch,
-        ks_file,
-        repos,
-        image_type="live",
-        image_format=None,
-        release=None,
-        wait=True,
-        archive=False,
-        specfile=None,
-        ksurl=None,
-    ):
-        # Usage: koji spin-livecd [options] <name> <version> <target> <arch> <kickstart-file>  # noqa: E501
-        # Usage: koji spin-appliance [options] <name> <version> <target> <arch> <kickstart-file>  # noqa: E501
-        # Examples:
-        #  * name: RHEL-7.0
-        #  * name: Satellite-6.0.1-RHEL-6
-        #  ** -<type>.<arch>
-        #  * version: YYYYMMDD[.n|.t].X
-        #  * release: 1
-        cmd = self._get_cmd()
-        if image_type == "live":
-            cmd.append("spin-livecd")
-        elif image_type == "appliance":
-            cmd.append("spin-appliance")
-        else:
-            raise ValueError("Invalid image type: %s" % image_type)
-        if not archive:
-            cmd.append("--scratch")
-        cmd.append("--noprogress")
-        if wait:
-            cmd.append("--wait")
-        else:
-            cmd.append("--nowait")
-        if specfile:
-            cmd.append("--specfile=%s" % specfile)
-        if ksurl:
-            cmd.append("--ksurl=%s" % ksurl)
-        if isinstance(repos, list):
-            for repo in repos:
-                cmd.append("--repo=%s" % repo)
-        else:
-            cmd.append("--repo=%s" % repos)
-        if image_format:
-            if image_type != "appliance":
-                raise ValueError("Format can be specified only for appliance images'")
-            supported_formats = ["raw", "qcow", "qcow2", "vmx"]
-            if image_format not in supported_formats:
-                raise ValueError(
-                    "Format is not supported: %s. Supported formats: %s"
-                    % (image_format, " ".join(sorted(supported_formats)))
-                )
-            cmd.append("--format=%s" % image_format)
-        if release is not None:
-            cmd.append("--release=%s" % release)
-        # IMPORTANT: all --opts have to be provided *before* args
-        # Usage:
-        # koji spin-livecd [options] <name> <version> <target> <arch> <kickstart-file>
-        cmd.append(name)
-        cmd.append(version)
-        cmd.append(target)
-        # i686 -> i386 etc.
-        arch = getBaseArch(arch)
-        cmd.append(arch)
-        cmd.append(ks_file)
-        return cmd
     def _has_connection_error(self, output):
         """Checks if output indicates connection error."""
         return re.search("error: failed to connect\n$", output)
@@ -606,6 +527,7 @@ class KojiWrapper(object):
             "createImage",
             "createLiveMedia",
             "createAppliance",
+            "createKiwiImage",
         ]:
             continue
@@ -785,11 +707,10 @@ class KojiWrapper(object):
         if list_of_args is None and list_of_kwargs is None:
             raise ValueError("One of list_of_args or list_of_kwargs must be set.")
-        if type(list_of_args) not in [type(None), list] or type(list_of_kwargs) not in [
-            type(None),
-            list,
-        ]:
-            raise ValueError("list_of_args and list_of_kwargs must be list or None.")
+        if list_of_args is not None and not isinstance(list_of_args, list):
+            raise ValueError("list_of_args must be list or None.")
+        if list_of_kwargs is not None and not isinstance(list_of_kwargs, list):
+            raise ValueError("list_of_kwargs must be list or None.")
         if list_of_kwargs is None:
             list_of_kwargs = [{}] * len(list_of_args)
@@ -803,9 +724,9 @@ class KojiWrapper(object):
         koji_session.multicall = True
         for args, kwargs in zip(list_of_args, list_of_kwargs):
-            if type(args) != list:
+            if not isinstance(args, list):
                 args = [args]
-            if type(kwargs) != dict:
+            if not isinstance(kwargs, dict):
                 raise ValueError("Every item in list_of_kwargs must be a dict")
             koji_session_fnc(*args, **kwargs)
@@ -813,7 +734,7 @@ class KojiWrapper(object):
         if not responses:
             return None
-        if type(responses) != list:
+        if not isinstance(responses, list):
             raise ValueError(
                 "Fault element was returned for multicall of method %r: %r"
                 % (koji_session_fnc, responses)
@@ -829,7 +750,7 @@ class KojiWrapper(object):
         # a one-item array containing the result value,
         # or a struct of the form found inside the standard <fault> element.
         for response, args, kwargs in zip(responses, list_of_args, list_of_kwargs):
-            if type(response) == list:
+            if isinstance(response, list):
                 if not response:
                     raise ValueError(
                         "Empty list returned for multicall of method %r with args %r, %r"  # noqa: E501
@ -864,6 +785,45 @@ class KojiWrapper(object):
pass pass
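
The hunks above harden the multicall plumbing, which batches many XML-RPC calls into a single round trip. A minimal usage sketch, assuming the enclosing method is KojiWrapper's multicall_map as in upstream pungi (the name is not visible in these hunks):

# Assumed method name; each entry in list_of_args becomes one batched call.
results = koji_wrapper.multicall_map(
    koji_session, koji_session.getBuild, list_of_args=[123, 124, 125]
)
# `results` holds one value per call; a <fault> element in any response
# surfaces through the ValueError branches shown above.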
class KojiMockWrapper(object):
lock = threading.Lock()
def __init__(self, compose, all_arches):
self.all_arches = all_arches
self.compose = compose
try:
self.profile = self.compose.conf["koji_profile"]
except KeyError:
raise RuntimeError("Koji profile must be configured")
with self.lock:
self.koji_module = koji.get_profile_module(self.profile)
session_opts = {}
for key in (
"timeout",
"keepalive",
"max_retries",
"retry_interval",
"anon_retry",
"offline_retry",
"offline_retry_interval",
"debug",
"debug_xmlrpc",
"serverca",
"use_fast_upload",
):
value = getattr(self.koji_module.config, key, None)
if value is not None:
session_opts[key] = value
self.koji_proxy = KojiMock(
packages_dir=self.koji_module.config.topdir,
modules_dir=os.path.join(
self.koji_module.config.topdir,
'modules',
),
all_arches=self.all_arches,
)
def get_buildroot_rpms(compose, task_id):
"""Get build root RPMs - either from runroot or local"""
result = []

@@ -895,3 +855,177 @@ def get_buildroot_rpms(compose, task_id):
continue
result.append(i)
return sorted(result)
class KojiDownloadProxy:
def __init__(self, topdir, topurl, cache_dir, logger):
if not topdir:
# This will only happen if there is either no koji_profile
# configured, or the profile doesn't have a topdir. In the first
# case there will be no koji interaction, and the second indicates
# broken koji configuration.
# We can pretend to have local access in both cases to avoid any
# external requests.
self.has_local_access = True
return
self.cache_dir = cache_dir
self.logger = logger
self.topdir = topdir
self.topurl = topurl
# If cache directory is configured, we want to use it (even if we
# actually have local access to the storage).
self.has_local_access = not bool(cache_dir)
# This is used for temporary downloaded files. The suffix is unique
# per-process. To prevent threads in the same process from colliding, a
# thread id is added later.
self.unique_suffix = "%s.%s" % (socket.gethostname(), os.getpid())
self.session = None
if not self.has_local_access:
self.session = requests.Session()
@property
def path_prefix(self):
dir = self.topdir if self.has_local_access else self.cache_dir
return dir.rstrip("/") + "/"
@classmethod
def from_config(klass, conf, logger):
topdir = None
topurl = None
cache_dir = None
if "koji_profile" in conf:
koji_module = koji.get_profile_module(conf["koji_profile"])
topdir = koji_module.config.topdir
topurl = koji_module.config.topurl
cache_dir = conf.get("koji_cache")
if cache_dir:
cache_dir = cache_dir.rstrip("/") + "/"
return klass(topdir, topurl, cache_dir, logger)
@util.retry(wait_on=requests.exceptions.RequestException)
def _download(self, url, dest):
"""Download file into given location
:param str url: URL of the file to download
:param str dest: file path to store the result in
:returns: path to the downloaded file (same as dest) or None if the URL
returned 404.
"""
# contextlib.closing is only needed in requests<2.18
with contextlib.closing(self.session.get(url, stream=True)) as r:
if r.status_code == 404:
self.logger.warning("GET %s NOT FOUND", url)
return None
if r.status_code != 200:
self.logger.error("GET %s %s", url, r.status_code)
r.raise_for_status()
# The exception from here will be retried by the decorator.
file_size = int(r.headers.get("Content-Length", 0))
self.logger.info("GET %s OK %s", url, util.format_size(file_size))
with open(dest, "wb") as f:
shutil.copyfileobj(r.raw, f)
return dest
def _delete(self, path):
"""Try to delete file at given path and ignore errors."""
try:
os.remove(path)
except Exception:
self.logger.warning("Failed to delete %s", path)
def _atomic_download(self, url, dest, validator):
"""Atomically download a file
:param str url: URL of the file to download
:param str dest: file path to store the result in
:returns: path to the downloaded file (same as dest) or None if the URL
returned 404.
"""
temp_file = "%s.%s.%s" % (dest, self.unique_suffix, threading.get_ident())
# First download to the temporary location.
try:
if self._download(url, temp_file) is None:
# The file was not found.
return None
except Exception:
# Download failed, let's make sure to clean up potentially partial
# temporary file.
self._delete(temp_file)
raise
# Check if the temporary file is correct (assuming we were provided a
# validator function).
try:
if validator:
validator(temp_file)
except Exception:
# Validation failed. Let's delete the problematic file and re-raise
# the exception.
self._delete(temp_file)
raise
# Atomically move the temporary file into final location
os.rename(temp_file, dest)
return dest
def _download_file(self, path, validator):
"""Ensure file on Koji volume in ``path`` is present in the local
cache.
:returns: path to the local file or None if file is not found
"""
url = path.replace(self.topdir, self.topurl)
destination_file = path.replace(self.topdir, self.cache_dir)
util.makedirs(os.path.dirname(destination_file))
lock = Lock(destination_file + ".lock")
# Hold the lock for this file for 5 minutes. If another compose needs
# the same file but it's not downloaded yet, the process will wait.
#
# If the download finishes in time, the downloaded file will be used
# here.
#
# If the download takes longer, this process will steal the lock and
# start its own download.
#
# That should not be a problem: the same file will be downloaded and
# then replaced atomically on the filesystem. If the original process
# managed to hardlink the first file already, that hardlink will be
# broken, but that will only result in the same file stored twice.
lock.lifetime = timedelta(minutes=5)
with lock:
# Check if the file already exists. If yes, return the path.
if os.path.exists(destination_file):
# Update mtime of the file. This covers the case of packages in the
# tag that are not included in the compose. Updating mtime will
# exempt them from cleanup for extra time.
os.utime(destination_file)
return destination_file
return self._atomic_download(url, destination_file, validator)
def get_file(self, path, validator=None):
"""
If path refers to an existing file in Koji, return a valid local path
to it. If no such file exists, return None.
:param validator: A callable that will be called with the path to the
downloaded file if and only if the file was actually downloaded.
Any exception raised from there will abort the download and be
propagated.
"""
if self.has_local_access:
# We have koji volume mounted locally. No transformation needed for
# the path, just check it exists.
if os.path.exists(path):
return path
return None
else:
# We need to download the file.
return self._download_file(path, validator)
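
Taken together, the class resolves a path on the Koji volume either directly (local mount, no cache configured) or via a locked, atomic download into the cache directory. A minimal usage sketch; `conf` stands for the parsed compose configuration, and the package path is invented for illustration:

# conf provides "koji_profile" and, optionally, "koji_cache".
proxy = KojiDownloadProxy.from_config(conf, logger)
local_path = proxy.get_file(
    "/mnt/koji/packages/bash/5.2.26/3.fc41/x86_64/bash-5.2.26-3.fc41.x86_64.rpm"
)
if local_path is None:
    logger.error("File disappeared from Koji")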

View File

@@ -109,55 +109,3 @@ class LoraxWrapper(object):
# TODO: workdir
return cmd
def get_buildinstall_cmd(
self,
product,
version,
release,
repo_baseurl,
output_dir,
variant=None,
bugurl=None,
nomacboot=False,
noupgrade=False,
is_final=False,
buildarch=None,
volid=None,
brand=None,
):
# RHEL 6 compatibility
# Usage: buildinstall [--debug] --version <version> --brand <brand> --product <product> --release <comment> --final [--output outputdir] [--discs <discstring>] <root> # noqa: E501
brand = brand or "redhat"
# HACK: ignore provided release
release = "%s %s" % (brand, version)
bugurl = bugurl or "https://bugzilla.redhat.com"
cmd = ["/usr/lib/anaconda-runtime/buildinstall"]
cmd.append("--debug")
cmd.extend(["--version", version])
cmd.extend(["--brand", brand])
cmd.extend(["--product", product])
cmd.extend(["--release", release])
if is_final:
cmd.append("--final")
if buildarch:
cmd.extend(["--buildarch", buildarch])
if bugurl:
cmd.extend(["--bugurl", bugurl])
output_dir = os.path.abspath(output_dir)
cmd.extend(["--output", output_dir])
for i in force_list(repo_baseurl):
if "://" not in i:
i = "file://%s" % os.path.abspath(i)
cmd.append(i)
return cmd

View File

@@ -20,6 +20,7 @@ import os
import shutil
import glob
import six
+import threading
from six.moves import shlex_quote
from six.moves.urllib.request import urlretrieve
from fnmatch import fnmatch

@@ -29,12 +30,15 @@ from kobo.shortcuts import run, force_list
from pungi.util import explode_rpm_package, makedirs, copy_all, temp_dir, retry
from .kojiwrapper import KojiWrapper

+lock = threading.Lock()

class ScmBase(kobo.log.LoggingBase):
-def __init__(self, logger=None, command=None, compose=None):
+def __init__(self, logger=None, command=None, compose=None, options=None):
kobo.log.LoggingBase.__init__(self, logger=logger)
self.command = command
self.compose = compose
+self.options = options or {}

@retry(interval=60, timeout=300, wait_on=RuntimeError)
def retry_run(self, cmd, **kwargs):
@@ -156,22 +160,31 @@ class GitWrapper(ScmBase):
if "://" not in repo:
repo = "file://%s" % repo

+git_cmd = ["git"]
+if "credential_helper" in self.options:
+git_cmd.extend(["-c", "credential.useHttpPath=true"])
+git_cmd.extend(
+["-c", "credential.helper=%s" % self.options["credential_helper"]]
+)

run(["git", "init"], workdir=destdir)
try:
-run(["git", "fetch", "--depth=1", repo, branch], workdir=destdir)
+run(git_cmd + ["fetch", "--depth=1", repo, branch], workdir=destdir)
run(["git", "checkout", "FETCH_HEAD"], workdir=destdir)
except RuntimeError as e:
# Fetch failed, to do a full clone we add a remote to our empty
# repo, get its content and check out the reference we want.
self.log_debug(
"Trying to do a full clone because shallow clone failed: %s %s"
-% (e, e.output)
+% (e, getattr(e, "output", ""))
)
try:
# Re-run git init in case of previous failure breaking .git dir
run(["git", "init"], workdir=destdir)
run(["git", "remote", "add", "origin", repo], workdir=destdir)
-self.retry_run(["git", "remote", "update", "origin"], workdir=destdir)
+self.retry_run(
+git_cmd + ["remote", "update", "origin"], workdir=destdir
+)
run(["git", "checkout", branch], workdir=destdir)
except RuntimeError:
if self.compose:

@@ -185,19 +198,38 @@ class GitWrapper(ScmBase):
copy_all(destdir, debugdir)
raise

-self.run_process_command(destdir)
+def get_temp_repo_path(self, scm_root, scm_branch):
scm_repo = scm_root.split("/")[-1]
process_id = os.getpid()
tmp_dir = (
"/tmp/pungi-temp-git-repos-"
+ str(process_id)
+ "/"
+ scm_repo
+ "-"
+ scm_branch
)
return tmp_dir
def setup_repo(self, scm_root, scm_branch):
tmp_dir = self.get_temp_repo_path(scm_root, scm_branch)
if not os.path.isdir(tmp_dir):
makedirs(tmp_dir)
self._clone(scm_root, scm_branch, tmp_dir)
self.run_process_command(tmp_dir)
return tmp_dir
def export_dir(self, scm_root, scm_dir, target_dir, scm_branch=None):
scm_dir = scm_dir.lstrip("/")
scm_branch = scm_branch or "master"

-with temp_dir() as tmp_dir:
self.log_debug(
"Exporting directory %s from git %s (branch %s)..."
% (scm_dir, scm_root, scm_branch)
)

-self._clone(scm_root, scm_branch, tmp_dir)
+with lock:
+tmp_dir = self.setup_repo(scm_root, scm_branch)

copy_all(os.path.join(tmp_dir, scm_dir), target_dir)
@@ -205,7 +237,6 @@ class GitWrapper(ScmBase):
scm_file = scm_file.lstrip("/")
scm_branch = scm_branch or "master"

-with temp_dir() as tmp_dir:
target_path = os.path.join(target_dir, os.path.basename(scm_file))
self.log_debug(

@@ -213,7 +244,8 @@ class GitWrapper(ScmBase):
% (scm_file, scm_root, scm_branch)
)

-self._clone(scm_root, scm_branch, tmp_dir)
+with lock:
+tmp_dir = self.setup_repo(scm_root, scm_branch)

makedirs(target_dir)
shutil.copy2(os.path.join(tmp_dir, scm_file), target_path)
@@ -361,15 +393,19 @@ def get_file_from_scm(scm_dict, target_path, compose=None):
scm_file = os.path.abspath(scm_dict)
scm_branch = None
command = None
+options = {}
else:
scm_type = scm_dict["scm"]
scm_repo = scm_dict["repo"]
scm_file = scm_dict["file"]
scm_branch = scm_dict.get("branch", None)
command = scm_dict.get("command")
+options = scm_dict.get("options", {})

logger = compose._logger if compose else None
-scm = _get_wrapper(scm_type, logger=logger, command=command, compose=compose)
+scm = _get_wrapper(
+scm_type, logger=logger, command=command, compose=compose, options=options
+)

files_copied = []
for i in force_list(scm_file):

@@ -450,15 +486,19 @@ def get_dir_from_scm(scm_dict, target_path, compose=None):
scm_dir = os.path.abspath(scm_dict)
scm_branch = None
command = None
+options = {}
else:
scm_type = scm_dict["scm"]
scm_repo = scm_dict.get("repo", None)
scm_dir = scm_dict["dir"]
scm_branch = scm_dict.get("branch", None)
command = scm_dict.get("command")
+options = scm_dict.get("options", {})

logger = compose._logger if compose else None
-scm = _get_wrapper(scm_type, logger=logger, command=command, compose=compose)
+scm = _get_wrapper(
+scm_type, logger=logger, command=command, compose=compose, options=options
+)

with temp_dir(prefix="scm_checkout_") as tmp_dir:
scm.export_dir(scm_repo, scm_dir, scm_branch=scm_branch, target_dir=tmp_dir)
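
The new "options" key is how callers opt into the credential helper wired into GitWrapper above. A hypothetical scm_dict for illustration; the repository URL is invented, and the "!ch" helper value mirrors the test fixture further down:

scm_dict = {
    "scm": "git",
    "repo": "https://git.example.com/pungi-fedora.git",
    "file": "general.conf",
    "branch": "main",
    "options": {"credential_helper": "!ch"},
}
get_file_from_scm(scm_dict, target_path="/tmp/checkout", compose=compose)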

View File

@@ -276,7 +276,6 @@ class Variant(object):
modules=None,
modular_koji_tags=None,
):
environments = environments or []
buildinstallpackages = buildinstallpackages or []

View File

@@ -1,705 +0,0 @@
# -*- coding: utf-8 -*-
from __future__ import print_function
import argparse
import atexit
import errno
import json
import logging
import os
import re
import shutil
import subprocess
import sys
import tempfile
import time
import threading
from collections import namedtuple
import kobo.conf
import kobo.log
import productmd
from kobo import shortcuts
from six.moves import configparser, shlex_quote
import pungi.util
from pungi.compose import get_compose_dir
from pungi.linker import linker_pool
from pungi.phases.pkgset.sources.source_koji import get_koji_event_raw
from pungi.util import find_old_compose, parse_koji_event, temp_dir
from pungi.wrappers.kojiwrapper import KojiWrapper
Config = namedtuple(
"Config",
[
# Path to directory with the compose
"target",
"compose_type",
"label",
# Path to the selected old compose that will be reused
"old_compose",
# Path to directory with config file copies
"config_dir",
# Which koji event to use (if any)
"event",
# Additional arguments to pungi-koji executable
"extra_args",
],
)
log = logging.getLogger(__name__)
class Status(object):
# Ready to start
READY = "READY"
# Waiting for dependencies to finish.
WAITING = "WAITING"
# Part is currently running
STARTED = "STARTED"
# A dependency failed, this one will never start.
BLOCKED = "BLOCKED"
class ComposePart(object):
def __init__(self, name, config, just_phase=[], skip_phase=[], dependencies=[]):
self.name = name
self.config = config
self.status = Status.WAITING if dependencies else Status.READY
self.just_phase = just_phase
self.skip_phase = skip_phase
self.blocked_on = set(dependencies)
self.depends_on = set(dependencies)
self.path = None
self.log_file = None
self.failable = False
def __str__(self):
return self.name
def __repr__(self):
return (
"ComposePart({0.name!r},"
" {0.config!r},"
" {0.status!r},"
" just_phase={0.just_phase!r},"
" skip_phase={0.skip_phase!r},"
" dependencies={0.depends_on!r})"
).format(self)
def refresh_status(self):
"""Refresh status of this part with the result of the compose. This
should only be called once the compose finished.
"""
try:
with open(os.path.join(self.path, "STATUS")) as fh:
self.status = fh.read().strip()
except IOError as exc:
log.error("Failed to update status of %s: %s", self.name, exc)
log.error("Assuming %s is DOOMED", self.name)
self.status = "DOOMED"
def is_finished(self):
return "FINISHED" in self.status
def unblock_on(self, finished_part):
"""Update set of blockers for this part. If it's empty, mark us as ready."""
self.blocked_on.discard(finished_part)
if self.status == Status.WAITING and not self.blocked_on:
log.debug("%s is ready to start", self)
self.status = Status.READY
def setup_start(self, global_config, parts):
substitutions = dict(
("part-%s" % name, p.path) for name, p in parts.items() if p.is_finished()
)
substitutions["configdir"] = global_config.config_dir
config = pungi.util.load_config(self.config)
for f in config.opened_files:
# apply substitutions
fill_in_config_file(f, substitutions)
self.status = Status.STARTED
self.path = get_compose_dir(
os.path.join(global_config.target, "parts"),
config,
compose_type=global_config.compose_type,
compose_label=global_config.label,
)
self.log_file = os.path.join(global_config.target, "logs", "%s.log" % self.name)
log.info("Starting %s in %s", self.name, self.path)
def get_cmd(self, global_config):
cmd = ["pungi-koji", "--config", self.config, "--compose-dir", self.path]
cmd.append("--%s" % global_config.compose_type)
if global_config.label:
cmd.extend(["--label", global_config.label])
for phase in self.just_phase:
cmd.extend(["--just-phase", phase])
for phase in self.skip_phase:
cmd.extend(["--skip-phase", phase])
if global_config.old_compose:
cmd.extend(
["--old-compose", os.path.join(global_config.old_compose, "parts")]
)
if global_config.event:
cmd.extend(["--koji-event", str(global_config.event)])
if global_config.extra_args:
cmd.extend(global_config.extra_args)
cmd.extend(["--no-latest-link"])
return cmd
@classmethod
def from_config(cls, config, section, config_dir):
part = cls(
name=section,
config=os.path.join(config_dir, config.get(section, "config")),
just_phase=_safe_get_list(config, section, "just_phase", []),
skip_phase=_safe_get_list(config, section, "skip_phase", []),
dependencies=_safe_get_list(config, section, "depends_on", []),
)
if config.has_option(section, "failable"):
part.failable = config.getboolean(section, "failable")
return part
def _safe_get_list(config, section, option, default=None):
"""Get a value from config parser. The result is split into a list on
commas or spaces, and `default` is returned if the key does not exist.
"""
if config.has_option(section, option):
value = config.get(section, option)
return [x.strip() for x in re.split(r"[, ]+", value) if x]
return default
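
To make the section handling concrete, here is a hypothetical orchestrator configuration of the kind ComposePart.from_config and _safe_get_list consume; all names and values are invented:

# [general] is read by run() further below; every other section becomes a part.
EXAMPLE_CONFIG = """
[general]
target = /mnt/composes
compose_type = production
release_short = Fedora
release_version = 41

[Server]
config = server.conf

[Everything]
config = everything.conf
depends_on = Server
failable = yes
"""
# depends_on is split on commas or spaces by _safe_get_list, and failable
# is parsed with configparser's getboolean.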
def fill_in_config_file(fp, substs):
"""Templating function. It works with Jinja2 style placeholders such as
{{foo}}. Whitespace around the key name is fine. The file is modified in place.
:param fp string: path to the file to process
:param substs dict: a mapping for values to put into the file
"""
def repl(match):
try:
return substs[match.group(1)]
except KeyError as exc:
raise RuntimeError(
"Unknown placeholder %s in %s" % (exc, os.path.basename(fp))
)
with open(fp, "r") as f:
contents = re.sub(r"{{ *([a-zA-Z-_]+) *}}", repl, f.read())
with open(fp, "w") as f:
f.write(contents)
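
A worked example of the substitution, using a throwaway file with invented content:

import tempfile

with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as tf:
    tf.write("repo = {{ part-Server }}/compose/Server/os\n")
fill_in_config_file(tf.name, {"part-Server": "/mnt/parts/Server"})
# The file now reads: repo = /mnt/parts/Server/compose/Server/os
# An unknown placeholder such as {{ typo }} raises RuntimeError instead.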
def start_part(global_config, parts, part):
part.setup_start(global_config, parts)
fh = open(part.log_file, "w")
cmd = part.get_cmd(global_config)
log.debug("Running command %r", " ".join(shlex_quote(x) for x in cmd))
return subprocess.Popen(cmd, stdout=fh, stderr=subprocess.STDOUT)
def handle_finished(global_config, linker, parts, proc, finished_part):
finished_part.refresh_status()
log.info("%s finished with status %s", finished_part, finished_part.status)
if proc.returncode == 0:
# Success, unblock other parts...
for part in parts.values():
part.unblock_on(finished_part.name)
# ...and link the results into final destination.
copy_part(global_config, linker, finished_part)
update_metadata(global_config, finished_part)
else:
# Failure, other stuff may be blocked.
log.info("See details in %s", finished_part.log_file)
block_on(parts, finished_part.name)
def copy_part(global_config, linker, part):
c = productmd.Compose(part.path)
for variant in c.info.variants:
data_path = os.path.join(part.path, "compose", variant)
link = os.path.join(global_config.target, "compose", variant)
log.info("Hardlinking content %s -> %s", data_path, link)
hardlink_dir(linker, data_path, link)
def hardlink_dir(linker, srcdir, dstdir):
for root, dirs, files in os.walk(srcdir):
root = os.path.relpath(root, srcdir)
for f in files:
src = os.path.normpath(os.path.join(srcdir, root, f))
dst = os.path.normpath(os.path.join(dstdir, root, f))
linker.queue_put((src, dst))
def update_metadata(global_config, part):
part_metadata_dir = os.path.join(part.path, "compose", "metadata")
final_metadata_dir = os.path.join(global_config.target, "compose", "metadata")
for f in os.listdir(part_metadata_dir):
# Load the metadata
with open(os.path.join(part_metadata_dir, f)) as fh:
part_metadata = json.load(fh)
final_metadata = os.path.join(final_metadata_dir, f)
if os.path.exists(final_metadata):
# We already have this file, will need to merge.
merge_metadata(final_metadata, part_metadata)
else:
# A new file, just copy it.
copy_metadata(global_config, final_metadata, part_metadata)
def copy_metadata(global_config, final_metadata, source):
"""Copy file to final location, but update compose information."""
with open(
os.path.join(global_config.target, "compose/metadata/composeinfo.json")
) as f:
composeinfo = json.load(f)
try:
source["payload"]["compose"].update(composeinfo["payload"]["compose"])
except KeyError:
# No [payload][compose], probably OSBS metadata
pass
with open(final_metadata, "w") as f:
json.dump(source, f, indent=2, sort_keys=True)
def merge_metadata(final_metadata, source):
with open(final_metadata) as f:
metadata = json.load(f)
try:
key = {
"productmd.composeinfo": "variants",
"productmd.modules": "modules",
"productmd.images": "images",
"productmd.rpms": "rpms",
}[source["header"]["type"]]
# TODO what if multiple parts create images for the same variant
metadata["payload"][key].update(source["payload"][key])
except KeyError:
# OSBS metadata, merge whole file
metadata.update(source)
with open(final_metadata, "w") as f:
json.dump(metadata, f, indent=2, sort_keys=True)
def block_on(parts, name):
"""Part ``name`` failed, mark everything depending on it as blocked."""
for part in parts.values():
if name in part.blocked_on:
log.warning("%s is blocked now and will not run", part)
part.status = Status.BLOCKED
block_on(parts, part.name)
def check_finished_processes(processes):
"""Walk through all active processes and check if something finished."""
for proc in processes.keys():
proc.poll()
if proc.returncode is not None:
yield proc, processes[proc]
def run_all(global_config, parts):
# Mapping subprocess.Popen -> ComposePart
processes = dict()
remaining = set(p.name for p in parts.values() if not p.is_finished())
with linker_pool("hardlink") as linker:
while remaining or processes:
update_status(global_config, parts)
for proc, part in check_finished_processes(processes):
del processes[proc]
handle_finished(global_config, linker, parts, proc, part)
# Start new available processes.
for name in list(remaining):
part = parts[name]
# Start all ready parts
if part.status == Status.READY:
remaining.remove(name)
processes[start_part(global_config, parts, part)] = part
# Remove blocked parts from todo list
elif part.status == Status.BLOCKED:
remaining.remove(part.name)
# Wait for any child process to finish if there is any.
if processes:
pid, reason = os.wait()
for proc in processes.keys():
# Set the return code for process that we caught by os.wait().
# Calling poll() on it would not set the return code properly
# since the value was already consumed by os.wait().
if proc.pid == pid:
proc.returncode = (reason >> 8) & 0xFF
log.info("Waiting for linking to finish...")
return update_status(global_config, parts)
def get_target_dir(config, compose_info, label, reldir=""):
"""Find directory where this compose will be.
@param reldir: if target path in config is relative, it will be resolved
against this directory
"""
dir = os.path.realpath(os.path.join(reldir, config.get("general", "target")))
target_dir = get_compose_dir(
dir,
compose_info,
compose_type=config.get("general", "compose_type"),
compose_label=label,
)
return target_dir
def setup_logging(debug=False):
FORMAT = "%(asctime)s: %(levelname)s: %(message)s"
level = logging.DEBUG if debug else logging.INFO
kobo.log.add_stderr_logger(log, log_level=level, format=FORMAT)
log.setLevel(level)
def compute_status(statuses):
if any(map(lambda x: x[0] in ("STARTED", "WAITING"), statuses)):
# If there is anything still running or waiting to start, the whole is
# still running.
return "STARTED"
elif any(map(lambda x: x[0] in ("DOOMED", "BLOCKED") and not x[1], statuses)):
# If any required part is doomed or blocked, the whole is doomed
return "DOOMED"
elif all(map(lambda x: x[0] == "FINISHED", statuses)):
# If all parts are complete, the whole is complete
return "FINISHED"
else:
return "FINISHED_INCOMPLETE"
def update_status(global_config, parts):
log.debug("Updating status metadata")
metadata = {}
statuses = set()
for part in parts.values():
metadata[part.name] = {"status": part.status, "path": part.path}
statuses.add((part.status, part.failable))
metadata_path = os.path.join(
global_config.target, "compose", "metadata", "parts.json"
)
with open(metadata_path, "w") as fh:
json.dump(metadata, fh, indent=2, sort_keys=True, separators=(",", ": "))
status = compute_status(statuses)
log.info("Overall status is %s", status)
with open(os.path.join(global_config.target, "STATUS"), "w") as fh:
fh.write(status)
return status != "DOOMED"
def prepare_compose_dir(config, args, main_config_file, compose_info):
if not hasattr(args, "compose_path"):
# Creating a brand new compose
target_dir = get_target_dir(
config, compose_info, args.label, reldir=os.path.dirname(main_config_file)
)
for dir in ("logs", "parts", "compose/metadata", "work/global"):
try:
os.makedirs(os.path.join(target_dir, dir))
except OSError as exc:
if exc.errno != errno.EEXIST:
raise
with open(os.path.join(target_dir, "STATUS"), "w") as fh:
fh.write("STARTED")
# Copy initial composeinfo for new compose
shutil.copy(
os.path.join(target_dir, "work/global/composeinfo-base.json"),
os.path.join(target_dir, "compose/metadata/composeinfo.json"),
)
else:
# Restarting a particular compose
target_dir = args.compose_path
return target_dir
def load_parts_metadata(global_config):
parts_metadata = os.path.join(global_config.target, "compose/metadata/parts.json")
with open(parts_metadata) as f:
return json.load(f)
def setup_for_restart(global_config, parts, to_restart):
has_stuff_to_do = False
metadata = load_parts_metadata(global_config)
for key in metadata:
# Update state to match what is on disk
log.debug(
"Reusing %s (%s) from %s",
key,
metadata[key]["status"],
metadata[key]["path"],
)
parts[key].status = metadata[key]["status"]
parts[key].path = metadata[key]["path"]
for key in to_restart:
# Set restarted parts to run again
parts[key].status = Status.WAITING
parts[key].path = None
for key in to_restart:
# Remove blockers that are already finished
for blocker in list(parts[key].blocked_on):
if parts[blocker].is_finished():
parts[key].blocked_on.discard(blocker)
if not parts[key].blocked_on:
log.debug("Part %s in not blocked", key)
# Nothing blocks it; let's go
parts[key].status = Status.READY
has_stuff_to_do = True
if not has_stuff_to_do:
raise RuntimeError("All restarted parts are blocked. Nothing to do.")
def run_kinit(config):
if not config.getboolean("general", "kerberos"):
return
keytab = config.get("general", "kerberos_keytab")
principal = config.get("general", "kerberos_principal")
fd, fname = tempfile.mkstemp(prefix="krb5cc_pungi-orchestrate_")
os.close(fd)
os.environ["KRB5CCNAME"] = fname
shortcuts.run(["kinit", "-k", "-t", keytab, principal])
log.debug("Created a kerberos ticket for %s", principal)
atexit.register(os.remove, fname)
def get_compose_data(compose_path):
try:
compose = productmd.compose.Compose(compose_path)
data = {
"compose_id": compose.info.compose.id,
"compose_date": compose.info.compose.date,
"compose_type": compose.info.compose.type,
"compose_respin": str(compose.info.compose.respin),
"compose_label": compose.info.compose.label,
"release_id": compose.info.release_id,
"release_name": compose.info.release.name,
"release_short": compose.info.release.short,
"release_version": compose.info.release.version,
"release_type": compose.info.release.type,
"release_is_layered": compose.info.release.is_layered,
}
if compose.info.release.is_layered:
data.update(
{
"base_product_name": compose.info.base_product.name,
"base_product_short": compose.info.base_product.short,
"base_product_version": compose.info.base_product.version,
"base_product_type": compose.info.base_product.type,
}
)
return data
except Exception:
return {}
def get_script_env(compose_path):
env = os.environ.copy()
env["COMPOSE_PATH"] = compose_path
for key, value in get_compose_data(compose_path).items():
if isinstance(value, bool):
env[key.upper()] = "YES" if value else ""
else:
env[key.upper()] = str(value) if value else ""
return env
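
The resulting environment handed to the pre/post scripts looks like this, assuming valid compose metadata; the path and compose ID are invented:

env = get_script_env("/mnt/composes/Fedora-41-20241009.0")
# env["COMPOSE_PATH"] == "/mnt/composes/Fedora-41-20241009.0"
# env["COMPOSE_ID"]   == "Fedora-41-20241009.0"  (from compose metadata)
# Boolean fields such as RELEASE_IS_LAYERED become "YES" or "".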
def run_scripts(prefix, compose_dir, scripts):
env = get_script_env(compose_dir)
for idx, script in enumerate(scripts.strip().splitlines()):
command = script.strip()
logfile = os.path.join(compose_dir, "logs", "%s%s.log" % (prefix, idx))
log.debug("Running command: %r", command)
log.debug("See output in %s", logfile)
shortcuts.run(command, env=env, logfile=logfile)
def try_translate_path(parts, path):
translation = []
for part in parts.values():
conf = pungi.util.load_config(part.config)
translation.extend(conf.get("translate_paths", []))
return pungi.util.translate_path_raw(translation, path)
def send_notification(compose_dir, command, parts):
if not command:
return
from pungi.notifier import PungiNotifier
data = get_compose_data(compose_dir)
data["location"] = try_translate_path(parts, compose_dir)
notifier = PungiNotifier([command])
with open(os.path.join(compose_dir, "STATUS")) as f:
status = f.read().strip()
notifier.send("status-change", workdir=compose_dir, status=status, **data)
def setup_progress_monitor(global_config, parts):
"""Update configuration so that each part send notifications about its
progress to the orchestrator.
There is a file to which the notification is written. The orchestrator is
reading it and mapping the entries to particular parts. The path to this
file is stored in an environment variable.
"""
tmp_file = tempfile.NamedTemporaryFile(prefix="pungi-progress-monitor_")
os.environ["_PUNGI_ORCHESTRATOR_PROGRESS_MONITOR"] = tmp_file.name
atexit.register(os.remove, tmp_file.name)
global_config.extra_args.append(
"--notification-script=pungi-notification-report-progress"
)
def reader():
while True:
line = tmp_file.readline()
if not line:
time.sleep(0.1)
continue
path, msg = line.split(":", 1)
for part in parts:
if parts[part].path == os.path.dirname(path):
log.debug("%s: %s", part, msg.strip())
break
monitor = threading.Thread(target=reader)
monitor.daemon = True
monitor.start()
def run(work_dir, main_config_file, args):
config_dir = os.path.join(work_dir, "config")
shutil.copytree(os.path.dirname(main_config_file), config_dir)
# Read main config
parser = configparser.RawConfigParser(
defaults={
"kerberos": "false",
"pre_compose_script": "",
"post_compose_script": "",
"notification_script": "",
}
)
parser.read(main_config_file)
# Create kerberos ticket
run_kinit(parser)
compose_info = dict(parser.items("general"))
compose_type = parser.get("general", "compose_type")
target_dir = prepare_compose_dir(parser, args, main_config_file, compose_info)
kobo.log.add_file_logger(log, os.path.join(target_dir, "logs", "orchestrator.log"))
log.info("Composing %s", target_dir)
run_scripts("pre_compose_", target_dir, parser.get("general", "pre_compose_script"))
old_compose = find_old_compose(
os.path.dirname(target_dir),
compose_info["release_short"],
compose_info["release_version"],
"",
)
if old_compose:
log.info("Reusing old compose %s", old_compose)
global_config = Config(
target=target_dir,
compose_type=compose_type,
label=args.label,
old_compose=old_compose,
config_dir=os.path.dirname(main_config_file),
event=args.koji_event,
extra_args=_safe_get_list(parser, "general", "extra_args"),
)
if not global_config.event and parser.has_option("general", "koji_profile"):
koji_wrapper = KojiWrapper(parser.get("general", "koji_profile"))
event_file = os.path.join(global_config.target, "work/global/koji-event")
result = get_koji_event_raw(koji_wrapper, None, event_file)
global_config = global_config._replace(event=result["id"])
parts = {}
for section in parser.sections():
if section == "general":
continue
parts[section] = ComposePart.from_config(parser, section, config_dir)
if hasattr(args, "part"):
setup_for_restart(global_config, parts, args.part)
setup_progress_monitor(global_config, parts)
send_notification(target_dir, parser.get("general", "notification_script"), parts)
retcode = run_all(global_config, parts)
if retcode:
# Only run the script if we are not doomed.
run_scripts(
"post_compose_", target_dir, parser.get("general", "post_compose_script")
)
send_notification(target_dir, parser.get("general", "notification_script"), parts)
return retcode
def parse_args(argv):
parser = argparse.ArgumentParser()
parser.add_argument("--debug", action="store_true")
parser.add_argument("--koji-event", metavar="ID", type=parse_koji_event)
subparsers = parser.add_subparsers()
start = subparsers.add_parser("start")
start.add_argument("config", metavar="CONFIG")
start.add_argument("--label")
restart = subparsers.add_parser("restart")
restart.add_argument("config", metavar="CONFIG")
restart.add_argument("compose_path", metavar="COMPOSE_PATH")
restart.add_argument(
"part", metavar="PART", nargs="*", help="which parts to restart"
)
restart.add_argument("--label")
return parser.parse_args(argv)
def main(argv=None):
args = parse_args(argv)
setup_logging(args.debug)
main_config_file = os.path.abspath(args.config)
with temp_dir() as work_dir:
try:
if not run(work_dir, main_config_file, args):
sys.exit(1)
except Exception:
log.exception("Unhandled exception!")
sys.exit(1)

View File

@@ -148,6 +148,15 @@ class UnifiedISO(object):
new_path = os.path.join(self.temp_dir, "trees", arch, old_relpath)
makedirs(os.path.dirname(new_path))

+# Resolve symlinks to external files. Symlinks within the
+# provided `dir` are kept.
+if os.path.islink(old_path):
+real_path = os.readlink(old_path)
+abspath = os.path.normpath(
+os.path.join(os.path.dirname(old_path), real_path)
+)
+if not abspath.startswith(dir):
+old_path = real_path

try:
self.linker.link(old_path, new_path)
except OSError as exc:

View File

@@ -1,8 +1,7 @@
# Some packages must be installed via dnf/yum first, see doc/contributing.rst
-dict.sorted
dogpile.cache
-fedmsg
+flufl.lock ; python_version >= '3.0'
-funcsigs
+flufl.lock < 3.0 ; python_version <= '2.7'
jsonschema
kobo
koji

@@ -13,4 +12,4 @@ ordered_set
productmd
pykickstart
python-multilib
-urlgrabber
+urlgrabber ; python_version < '3.0'

setup.cfg Normal file
View File

@@ -0,0 +1,2 @@
[sdist]
formats=bztar

View File

@@ -5,14 +5,9 @@
import os
import glob

-import distutils.command.sdist

from setuptools import setup

-# override default tarball format with bzip2
-distutils.command.sdist.sdist.default_format = {"posix": "bztar"}

# recursively scan for python modules to be included
package_root_dirs = ["pungi", "pungi_utils"]
packages = set()

@@ -25,7 +20,7 @@ packages = sorted(packages)
setup(
name="pungi",
-version="4.3.6",
+version="4.7.0",
description="Distribution compose tool",
url="https://pagure.io/pungi",
author="Dennis Gilmore",

@@ -41,16 +36,21 @@ setup(
"pungi-patch-iso = pungi.scripts.patch_iso:cli_main",
"pungi-make-ostree = pungi.ostree:main",
"pungi-notification-report-progress = pungi.scripts.report_progress:main",
-"pungi-orchestrate = pungi_utils.orchestrator:main",
"pungi-wait-for-signed-ostree-handler = pungi.scripts.wait_for_signed_ostree_handler:main",  # noqa: E501
"pungi-koji = pungi.scripts.pungi_koji:cli_main",
"pungi-gather = pungi.scripts.pungi_gather:cli_main",
"pungi-config-dump = pungi.scripts.config_dump:cli_main",
"pungi-config-validate = pungi.scripts.config_validate:cli_main",
+"pungi-cache-cleanup = pungi.scripts.cache_cleanup:main",
+"pungi-gather-modules = pungi.scripts.gather_modules:cli_main",
+"pungi-gather-rpms = pungi.scripts.gather_rpms:cli_main",
+"pungi-generate-packages-json = pungi.scripts.create_packages_json:cli_main",  # noqa: E501
+"pungi-create-extra-repo = pungi.scripts.create_extra_repo:cli_main"
]
},
scripts=["contrib/yum-dnf-compare/pungi-compare-depsolving"],
data_files=[
+("/usr/lib/tmpfiles.d", glob.glob("contrib/tmpfiles.d/*.conf")),
("/usr/share/pungi", glob.glob("share/*.xsl")),
("/usr/share/pungi", glob.glob("share/*.ks")),
("/usr/share/pungi", glob.glob("share/*.dtd")),

@@ -66,5 +66,5 @@ setup(
"dogpile.cache",
],
extras_require={':python_version=="2.7"': ["enum34", "lockfile"]},
-tests_require=["mock", "pytest", "pytest-cov"],
+tests_require=["pytest", "pytest-cov", "pyfakefs"],
)

sources Normal file
View File

@@ -0,0 +1 @@
SHA512 (pungi-4.7.0.tar.bz2) = 55c7527a0dff6efa8ed13b1ccdfd3628686fadb55b78fb456e552f4972b831aa96f3ff37ac54837462d91df834157f38426e6b66b52216e1e5861628df724eca

View File

@@ -1,5 +1,5 @@
-mock
+mock; python_version < '3.3'
parameterized
pytest
pytest-cov
-unittest2
+unittest2; python_version < '3.0'

View File

@@ -1,4 +1,4 @@
-FROM fedora:33
+FROM registry.fedoraproject.org/fedora:latest
LABEL \
name="Pungi test" \
description="Run tests using tox with Python 3" \

View File

@@ -108,6 +108,7 @@
<groupid>core</groupid>
</grouplist>
<optionlist>
+<groupid arch="x86_64">standard</groupid>
</optionlist>
</environment>

View File

@@ -0,0 +1,36 @@
<?xml version="1.0" encoding="UTF-8"?>
<repomd xmlns="http://linux.duke.edu/metadata/repo" xmlns:rpm="http://linux.duke.edu/metadata/rpm">
<revision>1612479076</revision>
<data type="primary">
<checksum type="sha256">08941fae6bdb14f3b22bfad38b9d7dcb685a9df58fe8f515a3a0b2fe1af903bb</checksum>
<open-checksum type="sha256">2a15e618f049a883d360ccbf3e764b30640255f47dc526c633b1722fe23cbcbc</open-checksum>
<location href="repodata/08941fae6bdb14f3b22bfad38b9d7dcb685a9df58fe8f515a3a0b2fe1af903bb-primary.xml.gz"/>
<timestamp>1612479075</timestamp>
<size>1240</size>
<open-size>3888</open-size>
</data>
<data type="filelists">
<checksum type="sha256">e37a0b4a63b2b245dca1727195300cd3961f80aebc82ae7b9849dbf7482f5d0f</checksum>
<open-checksum type="sha256">b1782bc4207a5b7c3e64115d5a1d001802e8d363f022ea165df7cdab6f14651c</open-checksum>
<location href="repodata/e37a0b4a63b2b245dca1727195300cd3961f80aebc82ae7b9849dbf7482f5d0f-filelists.xml.gz"/>
<timestamp>1612479075</timestamp>
<size>439</size>
<open-size>1295</open-size>
</data>
<data type="other">
<checksum type="sha256">92992176bce71dcde9e4b6ad1442e7b5c7f3de9b7f019a2cd27d042ab38ea2b1</checksum>
<open-checksum type="sha256">3b847919691ad32279b13463de6c08f1f8b32f51e87b7d8d7e95a3ec2f46ef51</open-checksum>
<location href="repodata/92992176bce71dcde9e4b6ad1442e7b5c7f3de9b7f019a2cd27d042ab38ea2b1-other.xml.gz"/>
<timestamp>1612479075</timestamp>
<size>630</size>
<open-size>1911</open-size>
</data>
<data type="modules">
<checksum type="sha256">e7a671401f8e207e4cd3b90b4ac92d621f84a34dc9026f57c3f427fbed444c57</checksum>
<open-checksum type="sha256">d59fee86c18018cc18bb7325aa74aa0abf923c64d29a4ec45e08dcd01a0c3966</open-checksum>
<location href="repodata/e7a671401f8e207e4cd3b90b4ac92d621f84a34dc9026f57c3f427fbed444c57-modules.yaml.gz"/>
<timestamp>1612479075</timestamp>
<size>920</size>
<open-size>3308</open-size>
</data>
</repomd>

View File

@@ -0,0 +1,55 @@
<?xml version="1.0" encoding="UTF-8"?>
<repomd xmlns="http://linux.duke.edu/metadata/repo" xmlns:rpm="http://linux.duke.edu/metadata/rpm">
<revision>1666177486</revision>
<data type="primary">
<checksum type="sha256">89cb9cc1181635c9147864a7076d91fb81072641d481cd202832a2d257453576</checksum>
<open-checksum type="sha256">07255d9856f7531b52a6459f6fc7701c6d93c6d6c29d1382d83afcc53f13494a</open-checksum>
<location href="repodata/89cb9cc1181635c9147864a7076d91fb81072641d481cd202832a2d257453576-primary.xml.gz"/>
<timestamp>1666177486</timestamp>
<size>1387</size>
<open-size>6528</open-size>
</data>
<data type="filelists">
<checksum type="sha256">f69ca03957574729fd5150335b0d87afddcfb37a97aed5b06272212854f1773d</checksum>
<open-checksum type="sha256">c2e1e674d7d48bccaa16cae0a5f70cb55ef4cd7352b4d9d4fdaa619075d07dbc</open-checksum>
<location href="repodata/f69ca03957574729fd5150335b0d87afddcfb37a97aed5b06272212854f1773d-filelists.xml.gz"/>
<timestamp>1666177486</timestamp>
<size>1252</size>
<open-size>5594</open-size>
</data>
<data type="other">
<checksum type="sha256">b3827bd6c9ea67ffa3912002515c64e4d9fe5c4dacbf7c46b0d8768b7abbb84f</checksum>
<open-checksum type="sha256">9ce24c526239e349d023c577b2ae3872c8b0f1888aed1fb24b9b9aa12063fdf3</open-checksum>
<location href="repodata/b3827bd6c9ea67ffa3912002515c64e4d9fe5c4dacbf7c46b0d8768b7abbb84f-other.xml.gz"/>
<timestamp>1666177486</timestamp>
<size>999</size>
<open-size>6320</open-size>
</data>
<data type="primary_db">
<checksum type="sha256">ab8df35061dfa0285069b843f24a7076e31266d9a8abe8282340bcb936aa61d7</checksum>
<open-checksum type="sha256">2bce9554ce4496cef34b5cd69f186f7f3143c7cabae8fa384fc5c9eeab326f7f</open-checksum>
<location href="repodata/ab8df35061dfa0285069b843f24a7076e31266d9a8abe8282340bcb936aa61d7-primary.sqlite.bz2"/>
<timestamp>1666177486</timestamp>
<size>3558</size>
<open-size>106496</open-size>
<database_version>10</database_version>
</data>
<data type="filelists_db">
<checksum type="sha256">8bcf6d40db4e922934ac47e8ac7fb8d15bdacf579af8c819d2134ed54d30550b</checksum>
<open-checksum type="sha256">f7001d1df7f5f7e4898919b15710bea8ed9711ce42faf68e22b757e63169b1fb</open-checksum>
<location href="repodata/8bcf6d40db4e922934ac47e8ac7fb8d15bdacf579af8c819d2134ed54d30550b-filelists.sqlite.bz2"/>
<timestamp>1666177486</timestamp>
<size>2360</size>
<open-size>28672</open-size>
<database_version>10</database_version>
</data>
<data type="other_db">
<checksum type="sha256">01b82e9eb7ee9151f283c6e761ae450de18ed2d64b5e32de88689eaf95216a80</checksum>
<open-checksum type="sha256">07f5b9750af1e440d37ca216e719dd288149e79e9132f2fdccb6f73b2e5dd541</open-checksum>
<location href="repodata/01b82e9eb7ee9151f283c6e761ae450de18ed2d64b5e32de88689eaf95216a80-other.sqlite.bz2"/>
<timestamp>1666177486</timestamp>
<size>2196</size>
<open-size>32768</open-size>
<database_version>10</database_version>
</data>
</repomd>

View File

@@ -0,0 +1,55 @@
<?xml version="1.0" encoding="UTF-8"?>
<repomd xmlns="http://linux.duke.edu/metadata/repo" xmlns:rpm="http://linux.duke.edu/metadata/rpm">
<revision>1666177500</revision>
<data type="primary">
<checksum type="sha256">a1d342aa7cef3a2034fc3f9d6ee02d63572780bc76e61749a57e50b6b3ca9869</checksum>
<open-checksum type="sha256">a9e3eae447dd44282d7d96db5f15f049b757925397adb752f4df982176bab7e0</open-checksum>
<location href="repodata/a1d342aa7cef3a2034fc3f9d6ee02d63572780bc76e61749a57e50b6b3ca9869-primary.xml.gz"/>
<timestamp>1666177500</timestamp>
<size>3501</size>
<open-size>37296</open-size>
</data>
<data type="filelists">
<checksum type="sha256">6778922d5853d20f213ae7702699a76f1e87e55d6bfb5e4ac6a117d904d47b3c</checksum>
<open-checksum type="sha256">e30b666d9d88a70de69a08f45e6696bcd600c45485d856bd0213395d7da7bd49</open-checksum>
<location href="repodata/6778922d5853d20f213ae7702699a76f1e87e55d6bfb5e4ac6a117d904d47b3c-filelists.xml.gz"/>
<timestamp>1666177500</timestamp>
<size>27624</size>
<open-size>318187</open-size>
</data>
<data type="other">
<checksum type="sha256">5a60d79d8bce6a805f4fdb22fd891524359dce8ccc665c0b54e7299e79debe84</checksum>
<open-checksum type="sha256">b18138f4a3de45714e578fb1f30b7ec54fdcdaf1a22585891625b6af0894388e</open-checksum>
<location href="repodata/5a60d79d8bce6a805f4fdb22fd891524359dce8ccc665c0b54e7299e79debe84-other.xml.gz"/>
<timestamp>1666177500</timestamp>
<size>1876</size>
<open-size>28701</open-size>
</data>
<data type="primary_db">
<checksum type="sha256">c27bc2ce947173aba305041552c3c6d8db71442c1a2e5dcaf35ff750fe0469fc</checksum>
<open-checksum type="sha256">586e1af8934229925adb9e746ae5ced119859dfd97f4e3237399bb36a7d7f071</open-checksum>
<location href="repodata/c27bc2ce947173aba305041552c3c6d8db71442c1a2e5dcaf35ff750fe0469fc-primary.sqlite.bz2"/>
<timestamp>1666177500</timestamp>
<size>11528</size>
<open-size>126976</open-size>
<database_version>10</database_version>
</data>
<data type="filelists_db">
<checksum type="sha256">ed350865982e7a1e45b144839b56eac888e5d8f680571dd2cd06b37dc83e0fd8</checksum>
<open-checksum type="sha256">697903989d0f77de2d44a2b603e75c9b4ca23b3795eb136d175caf5666ce6459</open-checksum>
<location href="repodata/ed350865982e7a1e45b144839b56eac888e5d8f680571dd2cd06b37dc83e0fd8-filelists.sqlite.bz2"/>
<timestamp>1666177500</timestamp>
<size>20440</size>
<open-size>163840</open-size>
<database_version>10</database_version>
</data>
<data type="other_db">
<checksum type="sha256">35eff699131e0976429144c6f4514d21568177dc64bb4091c3ff62f76b293725</checksum>
<open-checksum type="sha256">3bd999a1bdf300df836a4607b7b75f845d8e1432e3e4e1ab6f0c7cc8a853db39</open-checksum>
<location href="repodata/35eff699131e0976429144c6f4514d21568177dc64bb4091c3ff62f76b293725-other.sqlite.bz2"/>
<timestamp>1666177500</timestamp>
<size>4471</size>
<open-size>49152</open-size>
<database_version>10</database_version>
</data>
</repomd>

View File

@@ -0,0 +1,58 @@
[checksums]
images/boot.iso = sha256:fc8a4be604b6425746f12fa706116eb940f93358f036b8fbbe518b516cb6870c
[general]
; WARNING.0 = This section provides compatibility with pre-productmd treeinfos.
; WARNING.1 = Read productmd documentation for details about new format.
arch = x86_64
family = Test
name = Test 1.0
packagedir = Packages
platforms = x86_64,xen
repository = .
timestamp = 1531881582
variant = Server
variants = Client,Server
version = 1.0
[header]
type = productmd.treeinfo
version = 1.2
[images-x86_64]
boot.iso = images/boot.iso
[images-xen]
initrd = images/pxeboot/initrd.img
kernel = images/pxeboot/vmlinuz
[release]
name = Test
short = T
version = 1.0
[stage2]
mainimage = images/install.img
[tree]
arch = x86_64
build_timestamp = 1531881582
platforms = x86_64,xen
variants = Client,Server
[variant-Client]
id = Client
name = Client
packages = ../../../Client/x86_64/os/Packages
repository = ../../../Client/x86_64/os
type = variant
uid = Client
[variant-Server]
id = Server
name = Server
packages = Packages
repository = .
type = variant
uid = Server

View File

@@ -0,0 +1,20 @@
---
document: modulemd
version: 2
data:
name: module
stream: master
version: 20190318
context: abcdef
arch: x86_64
summary: Dummy module
description: Dummy module
license:
module:
- Beerware
content:
- Beerware
artifacts:
rpms:
- foobar-0:1.0-1.noarch
...

View File

@@ -0,0 +1,20 @@
---
document: modulemd
version: 2
data:
name: module
stream: master
version: 20190318
context: abcdef
arch: x86_64
summary: Dummy module
description: Dummy module
license:
module:
- Beerware
content:
- Beerware
artifacts:
rpms:
- foobar-0:1.0-1.noarch
...

View File

@@ -0,0 +1,20 @@
---
document: modulemd
version: 2
data:
name: scratch-module
stream: master
version: 20200710
context: abcdef
arch: x86_64
summary: Dummy module
description: Dummy module
license:
module:
- Beerware
content:
- Beerware
artifacts:
rpms:
- foobar-0:1.0-1.noarch
...

View File

@@ -0,0 +1,20 @@
---
document: modulemd
version: 2
data:
name: scratch-module
stream: master
version: 20200710
context: abcdef
arch: x86_64
summary: Dummy module
description: Dummy module
license:
module:
- Beerware
content:
- Beerware
artifacts:
rpms:
- foobar-0:1.0-1.noarch
...

View File

@@ -2,12 +2,13 @@
import difflib
import errno
+import hashlib
import os
import shutil
import tempfile
from collections import defaultdict

-import mock
+from unittest import mock
import six
from kobo.rpmlib import parse_nvr

@@ -21,6 +22,15 @@ from pungi import paths, checks
from pungi.module_util import Modulemd

+GIT_WITH_CREDS = [
+"git",
+"-c",
+"credential.useHttpPath=true",
+"-c",
+"credential.helper=!ch",
+]

class BaseTestCase(unittest.TestCase):
def assertFilesEqual(self, fn1, fn2):
with open(fn1, "rb") as f1:

@@ -158,6 +168,20 @@ class IterableMock(mock.Mock):
return iter([])

+class FSKojiDownloader(object):
+"""Mock for KojiDownloadProxy that checks provided path."""
+
+def get_file(self, path, validator=None):
+return path if os.path.isfile(path) else None

+class DummyKojiDownloader(object):
+"""Mock for KojiDownloadProxy that always finds the file in original location."""
+
+def get_file(self, path, validator=None):
+return path

class DummyCompose(object):
def __init__(self, topdir, config):
self.supported = True

@@ -232,6 +256,8 @@ class DummyCompose(object):
self.cache_region = None
self.containers_metadata = {}
self.load_old_compose_config = mock.Mock(return_value=None)
+self.koji_downloader = DummyKojiDownloader()
+self.koji_downloader.path_prefix = "/prefix"

def setup_optional(self):
self.all_variants["Server-optional"] = MockVariant(

@@ -272,7 +298,7 @@ class DummyCompose(object):
return tempfile.mkdtemp(suffix=suffix, prefix=prefix, dir=self.topdir)

-def touch(path, content=None):
+def touch(path, content=None, mode=None):
"""Helper utility that creates a dummy file in given location. Directories
will be created."""
content = content or (path + "\n")

@@ -284,6 +310,8 @@ def touch(path, content=None):
content = content.encode()
with open(path, "wb") as f:
f.write(content)
+if mode:
+os.chmod(path, mode)
return path

@@ -334,3 +362,9 @@ def fake_run_in_threads(func, params, threads=None):
"""Like run_in_threads from Kobo, but actually runs tasks serially."""
for num, param in enumerate(params):
func(None, param, num)

+def hash_string(alg, s):
+m = hashlib.new(alg)
+m.update(s.encode("utf-8"))
+return m.hexdigest()

View File

@@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-

-import mock
+from unittest import mock
import unittest

from pungi.arch import (

Some files were not shown because too many files have changed in this diff.