Compare commits

..

66 Commits

Author SHA1 Message Date
Adam Williamson
aa7fcc1c20 Improve autodetection of productmd image type for osbuild images
I don't love inferring the type from the filename like this -
it's kinda backwards - but it's an improvement on the current
logic (I don't think 'dvd' is ever currently the correct value
here, I don't think osbuild *can* currently build the type of
image that 'dvd' is meant to indicate). I can't immediately see
any better source of data here (we could use the 'name' or
'package_name' from 'build_info', but those are pretty much
just inputs to the filenames anyway).

Types that are possible in productmd but not covered here are
'cd' (never likely to be used again in Fedora at least, not sure
about RHEL), 'dvd-debuginfo' (again not used in Fedora, may be
used in RHEL), 'ec2', 'kvm' (not sure about those), 'netinst'
(this is a synonym for 'boot', we use 'boot' in practice in
Fedora metadata), 'p2v' and 'rescue' (not sure).

Signed-off-by: Adam Williamson <awilliam@redhat.com>
2023-11-06 08:24:12 -10:00
Lubomír Sedlář
b32c8f3e5e pkgset: ignore events for modular content tags
Generally we want all packages to come from a particular event.

There are two exceptions: packages configured via `pkgset_koji_builds`
are pulled in by exact NVR and skip event; and modules in
`pkgset_koji_modules` are pulled in by NSVC and also ignore events.

However, the modular content tag did honor event, and could lead to a
crashed compose if the content tag did not exist at the configured
event.

This patch is a slightly too big hammer. It ignores events for all
modules, not just ones configured by explicit NSVC. It's not a huge deal
as the content tags are created before the corresponding module build is
created, and once all rpm builds are tagged into the content tag, MBS
will never change it again.
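
As a sketch, "ignoring the event" for a content tag boils down to querying
Koji without pinning the event (`listTaggedRPMS` is a real Koji API call;
the surrounding names are illustrative):

```python
# Query the modular content tag at its current state instead of at the
# compose event; event=None means "use the latest tag content".
rpms, builds = koji_proxy.listTaggedRPMS(
    content_tag, event=None, inherit=True, latest=True
)
```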

JIRA: RHELCMP-12765
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-10-27 08:16:30 +02:00
Lubomír Sedlář
935da7c246 pkgset: Ignore duplicated module builds
If the module tag contains the same module build multiple times (because
it's in multiple tags in the inheritance), Pungi will not process that
correctly and will try to include the same NSVC in the compose multiple
times. That leads to a crash.

This patch adds another step to the inheritance filter to ensure the
result contains each module only once.
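
A minimal sketch of such a deduplication step (variable names are
illustrative):

```python
seen = set()
unique_builds = []
for build in module_builds:
    nsvc = (build["name"], build["stream"], build["version"], build["context"])
    if nsvc not in seen:
        seen.add(nsvc)
        unique_builds.append(build)
```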

JIRA: RHELCMP-12768
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-10-26 11:09:26 +02:00
Aditya Bisoi
b513c8cd00 Drop buildinstall method
JIRA: RHELCMP-12388

Signed-off-by: Aditya Bisoi <abisoi@redhat.com>
2023-10-18 06:38:14 +00:00
Lingyan Zhuang
8cf1d98312 Add step to send UMB message
If reusing an old ISO finished, send out a UMB message.

Signed-off-by: Lingyan Zhuang <lzhuang@redhat.com>
2023-10-11 18:18:28 +08:00
Timothée Ravier
2534ddee99 Fix minor Ruff/flake8 warnings
```
pungi/checks.py:575:17: F601 [*] Dictionary key literal `"type"` repeated
pungi/phases/pkgset/pkgsets.py:617:12: E721 Do not compare types, use `isinstance()`
tests/test_pkgset_source_koji.py:241:16: E721 Do not compare types, use `isinstance()`
tests/test_pkgset_source_koji.py:244:16: E721 Do not compare types, use `isinstance()`
tests/test_pkgset_source_koji.py:370:16: E721 Do not compare types, use `isinstance()`
tests/test_pkgset_source_koji.py:374:20: E721 Do not compare types, use `isinstance()`
```
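
The E721 fixes amount to replacing direct type comparisons with
`isinstance()`, for example:

```python
# Before (flagged by E721):
if type(value) == dict:
    pass

# After:
if isinstance(value, dict):
    pass
```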

Signed-off-by: Timothée Ravier <tim@siosm.fr>
2023-10-03 13:36:19 +00:00
Simon de Vlieger
f30a8b4d15 osbuild: manifest type in config
Allow the manifest type used to be specified in the pungi configuration
instead of always selecting the manifest type based on the koji output.

Signed-off-by: Simon de Vlieger <cmdr@supakeen.com>
2023-09-25 06:26:53 +00:00
Lubomír Sedlář
3ffb991bac 4.5.1 release
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-09-07 15:00:59 +02:00
Ozan Unsal
dbc0e531b2 gather_dnf.py: Do not raise error when the downloaded package already exists
If packages are pulled from different repos and a package already exists
in the target directory, Pungi raises a "File exists" error and breaks.
This error can be skipped when the package is already available.
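
A minimal sketch of the tolerant behavior (illustrative, not the actual
patch):

```python
import os
import shutil


def fetch_package(src, dst):
    # A package already pulled in from another repo may be in place; keep it.
    if os.path.exists(dst):
        return
    shutil.copy2(src, dst)
```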

Merges: https://pagure.io/pungi/pull-request/1696
Signed-off-by: Ozan Unsal <ounsal@redhat.com>
2023-09-07 14:54:18 +02:00
Aditya Bisoi
4c7611291d 4.5.0 release
Signed-off-by: Aditya Bisoi <abisoi@redhat.com>
2023-08-31 11:26:37 +05:30
Lubomír Sedlář
0d3cd150bd kojiwrapper: Stop being smart about local access
Rather than trying to use local access whenever it's available, let the
user make the decision:

 * if koji_cache is configured use it and download stuff
 * if not, fall back to local access
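
A hedged configuration example (the `koji_cache` option is the one named
above; the path is made up):

```python
# Opt in to downloading packages over HTTP into a local cache instead of
# reading them from a mounted Koji volume.
koji_cache = "/var/cache/pungi/koji"
```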

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-08-23 07:26:56 +00:00
Ozan Unsal
aa0aae3d3e Fix unittest errors
Signed-off-by: Ozan Unsal <ounsal@redhat.com>
2023-08-23 07:26:56 +00:00
Lubomír Sedlář
77f8fa25ad Add integrity checking for builds
When a real build is downloaded, Koji can provide a checksum via API.
This commit adds verification of that checksum.

A mismatch will abort the compose. If Koji doesn't provide a checksum
for the particular sigkey, no checking will happen.

Scratch builds and images are still not checked at all.

This patch requires Koji 1.32. When talking to an older version, no
checking is done.
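
A minimal sketch of this kind of verification (assuming a hex digest is
provided by Koji; names are illustrative):

```python
import hashlib


def verify_checksum(path, expected, algorithm="md5"):
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    if digest.hexdigest() != expected:
        raise RuntimeError("Checksum mismatch for %s" % path)
```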

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-08-23 07:26:56 +00:00
Lubomír Sedlář
e6d9f31ef4 Add script for cleaning up the cache
Pungi would by default only ever add files to the cache. That would
eventually result in essentially a mirror of the Koji volume.

This patch adds a helper cleanup script. When called, it goes through
files in the cache and deletes anything that is not hardlinked from
elsewhere and with mtime not updated recently.

Cleaning up files that are hardlinked from some compose would not save
any space anyway. The mtime check should account for cases like a
subpackage being downloaded but not included in any compose; it avoids
the file being downloaded over and over again.

When a compose fails or is aborted, there can be a stale lock file left
behind in the cache. This script cleans that up too.
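
The core heuristic could look roughly like this (a sketch under the
assumptions above, not the actual script):

```python
import os
import time


def is_stale(path, max_age_days=30):
    st = os.stat(path)
    # Files still hardlinked from a compose (st_nlink > 1) save no space
    # if deleted; recently touched files may be reused by the next compose.
    unused = st.st_nlink == 1
    old = st.st_mtime < time.time() - max_age_days * 24 * 3600
    return unused and old
```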

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-08-23 07:26:56 +00:00
Lubomír Sedlář
bf3e9bc53a Add ability to download images
This patch extends the ability to download files from Koji to image
building phases too.

There is no integrity checking for the downloaded images.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-08-23 07:26:56 +00:00
Lubomír Sedlář
631bb01d8f Add support for not having koji volume mounted locally
With this patch, Pungi can be configured with a local directory to be
used as a cache for RPMs, and it will download packages from Koji over
HTTP instead of reading them from filesystem directly.

The files from the cache can then be hardlinked as usual.

There is locking in place to prevent composes running at the same time
from stepping on each other.

This is now supported for RPMs only, be it real builds or scratch
builds.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-08-23 07:26:56 +00:00
Aditya Bisoi
b6296bdfcd Remove repository cloning multiple times
JIRA: RHELCMP-8913
Signed-off-by: Aditya Bisoi <abisoi@redhat.com>
2023-08-23 07:20:35 +00:00
Lubomír Sedlář
1c4275bbfa Support require_all_comps_packages on DNF backend
It's not a great name anymore though, because it will fail the compose
if any input package is missing, no matter whether it's from comps,
prepopulate or additional_packages.

JIRA: RHELCMP-12484
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-08-03 08:58:51 +00:00
Lubomír Sedlář
fe2dad3b3c Fix new warnings from flake8
Use isinstance rather than directly comparing types.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-08-03 10:10:30 +02:00
Aditya Bisoi
7128021654 4.4.1 release
Signed-off-by: Aditya Bisoi <abisoi@redhat.com>
2023-07-25 11:59:23 +05:30
Lubomír Sedlář
bd64894a03 ostree: Add configuration for custom runroot packages
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-07-18 08:44:26 +02:00
Lubomír Sedlář
14e025a5a1 pkgset: Emit better error for missing modulemd file
The exceptions from libmodulemd are not particularly helpful as they do
not contain information about which file caused them.

   modulemd-yaml-error-quark: Failed to open file: Permission denied (0)

This patch should add the path to the problematic file into the message.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-07-10 11:59:26 +02:00
Lubomír Sedlář
ada8f4e346 Add support for git-credential-helper
This patch adds an additional field `options` to scm_dict, which can be
used to provide additional information to the backends.

It implements a single new option for GitWrapper. This option allows
setting a custom git credentials wrapper. This can be useful if Pungi
needs to get files from a git repository that requires authentication.

The helper can be as simple as this (assuming the username is already
provided in the url):

    #!/bin/sh
    echo password=i-am-secret

The helper would need to be referenced by an absolute path from the
pungi configuration, or prefixed with ! to have git interpret it as a
shell script and look it up in PATH.

See https://git-scm.com/docs/gitcredentials for more details.
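
Put together, a hedged example of an scm_dict using the new field (the
`credential_helper` key name is an assumption for illustration; see the
patch for the exact schema):

```python
extra_files = [
    {
        "scm": "git",
        "repo": "https://git.example.com/private-configs.git",
        "file": "extra.json",
        # Hypothetical option key, pointing at the helper script above.
        "options": {"credential_helper": "/usr/local/bin/print-git-password"},
    }
]
```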

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
JIRA: RHELCMP-11808
2023-06-28 09:44:40 +00:00
Haibo Lin
e4c525ecbf Support OIDC Client Credentials authentication to CTS
JIRA: RHELCMP-11324
Signed-off-by: Haibo Lin <hlin@redhat.com>
2023-06-28 15:49:08 +08:00
Lubomír Sedlář
091d228219 4.4.0 release
JIRA: RHELCMP-11764
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-06-06 15:50:31 +02:00
Lubomír Sedlář
bcc440491e gather-dnf: Run latest() later
The initial version of the code filtered the latest builds at the start. That
doesn't matter in many cases:

* When there are no lookaside repos, there is generally a single version
  of each package.
* When lookaside repos do not overlap with compose repos, or contain
  only older versions.

It is however a problem when the lookaside repos contain higher version
of a package than what is in a compose repo, and some package explicitly
requires the older version.

Consider this scenario:

* lookaside contains bar-1.1
* compose repo contains bar-1.0 and foo-1.0
* foo-1.0 `Requires: bar < 1.1`

The original code would filter out the bar-1.0 package, and then fail on
unresolved dependencies.

This patch moves the computation of latest packages much later, to part
of code where all options to satisfy a dependency are selected and the
best match is chosen. At that point if there are multiple versions
available, we do want the latest one.

JIRA: SPMM-13483
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-06-06 12:48:36 +00:00
Lubomír Sedlář
fa50eedfad iso: Support joliet long names
Without this option the names reported by the Joliet tree are truncated.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-06-05 15:30:18 +02:00
Lubomír Sedlář
b7adbf8a91 Drop pungi-orchestrator code
This was never actually used.

JIRA: RHELCMP-10218
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-06-02 06:14:10 +00:00
Lubomír Sedlář
82ae9e86d5 isos: Ensure proper file ownership and permissions
The genisoimage backend uses the -rational-rock option, which sets uid
and gid to 0, and makes files readable by everyone.

With xorriso this must be done explicitly. Setting ownership is a single
command, but the permissions require a per-file command to not make
files executable where not needed.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2203888
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-06-01 06:29:02 +00:00
Lubomír Sedlář
2ad341a01c gather: Always get latest packages
If lookaside contains an older version of a package, but with a
different arch, the depsolver doesn't notice that and prefers the
lookaside version.

This is not correct. The latest package should be used no matter if
there are different arches available.

The filtering in DNF doesn't ensure this, so we have to build it
ourselves. To limit the performance impact, only run this filtering when
there actually are some lookaside repos configured.

JIRA: RHELCMP-11728

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-05-31 12:32:35 +00:00
Lubomír Sedlář
e888e76992 Add back compatibility with jsonschema <3.0.0
Resolves: https://pagure.io/pungi/issue/1667
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-05-31 07:29:51 +00:00
Lubomír Sedlář
6e72de7efe Remove useless debug message
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-05-30 15:46:17 +02:00
Lubomír Sedlář
c8263fcd39 Remove fedmsg from requirements
The code for sending messages in Fedora actually relies on
fedora-messaging library now. However, we do not have any tests for
that, so there's little reason to pull the library in via
requirements.txt

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-05-17 09:29:55 +02:00
Lubomír Sedlář
82ca4f4e65 gather: Support dotarch in DNF backend
The documentation claims that dotarch syntax is supported for additional
packages. For the yum backend this seems to be handled automatically, but
the dnf backend could not interpret it.

This patch checks if a package is specified in the syntax and contains a
valid architecture. If so, the query will honor the arch.
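
For example, a dotarch entry in `additional_packages` pins a package to one
architecture (hedged sketch):

```python
additional_packages = [
    # name.arch form; the arch after the dot is honored by the query
    ("^Server$", {"*": ["somepackage.x86_64"]}),
]
```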

JIRA: RHELCMP-11728
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-05-15 08:42:21 +02:00
Aurélien Bompard
b8b6b46ce7 Set the priority in the fedora-messaging notifier
According to [infra ticket #10899](https://pagure.io/fedora-infrastructure/issue/10899),
ostree messages should have priority 3.

Signed-off-by: Aurélien Bompard <aurelien@bompard.org>
2023-05-03 14:20:57 +02:00
Lubomír Sedlář
e9d836c115 Fix compatibility with createrepo_c 0.21.1
The length of the file entry tuple has changed, so it can no longer be
unpacked reliably.

Relates: https://github.com/rpm-software-management/createrepo_c/issues/360
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-04-25 10:25:19 +00:00
Lubomír Sedlář
d3f0701e01 comps: Apply arch filtering to environment/optionlist
Let's filter this list too, not just the grouplist tag.

JIRA: RHELCMP-7926
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-04-24 08:29:15 +02:00
Haibo Lin
8f6f0f463f Add config file for cleaning up cache files
systemd-tmpfiles is required to enable the auto clean up.

JIRA: RHELCMP-6327
Signed-off-by: Haibo Lin <hlin@redhat.com>
2023-04-12 09:56:43 +08:00
Haibo Lin
467c7a7f6a 4.3.8 release
JIRA: RHELCMP-11448
Signed-off-by: Haibo Lin <hlin@redhat.com>
2023-03-28 18:05:15 +08:00
Lubomír Sedlář
e1d7544c2b createiso: Update possibly changed file on DVD
There's no good way of detecting if buildinstall phase tweaked boot
configuration (and efiboot.img). We should update those files in the DVD
just to be sure.

The .discinfo file is always different and needs to be updated.

Relates: https://pagure.io/pungi/issue/1647
JIRA: RHELCMP-10811
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-03-27 12:40:39 +00:00
Lubomír Sedlář
a71c8e23be pkgset: Stop reuse if configuration changed
When options controlling excluding arches change, it should break reuse.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-03-22 12:56:02 +00:00
Lubomír Sedlář
ab508c1511 Allow disabling inheriting ExcludeArch to noarch packages
Copying ExcludeArch/ExclusiveArch from source rpm to noarch is an easy
option to block shipping that particular noarch package from a certain
architecture. However, there is no way to bypass it, and it is rather
confusing and not discoverable.

An alternative way to remove an unwanted package is to use the good old
`filter_packages`, which has enough granularity to remove pretty much
anything from anywhere. The only downside is that it requires a change
in configuration, so it can't be done by a packager directly from a spec
file.
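
For reference, a hedged `filter_packages` sketch that removes one package
everywhere:

```python
filter_packages = [
    ("^.*$", {"*": ["unwanted-noarch-package"]}),
]
```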

When we decide to break backwards compatibility, this option should be
removed and the entire ExcludeArch/ExclusiveArch inheritance removed
completely.

JIRA: ENGCMP-2606
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-03-22 12:56:02 +00:00
Lubomír Sedlář
f960b4d155 pkgset: Support extra builds with no tags
This is a rather fringe use case. If the configuration contains
pkgset_koji_builds or pkgset_koji_scratch_tasks but no pkgset_koji_tag,
the compose will be empty.

The expectation though is that the packages should be pulled.

The extra RPMs are added to all non-modular tags because they are
supposed to mask builds of the same packages (e.g. a user may want to
explicitly pull in an older version than the tagged one).

This patch adds support for composes containing only explicitly listed
builds by creating a dummy package set that is not actually using any
tag.
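
With this change, a configuration along these lines should produce a
(minimal) compose (hedged example; the NVRs are made up):

```python
pkgset_source = "koji"
# No pkgset_koji_tag at all: only explicitly listed builds are pulled in.
pkgset_koji_builds = ["bash-5.2.21-1.fc40", "coreutils-9.4-2.fc40"]
```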

JIRA: RHELCMP-11385
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-03-17 15:10:35 +01:00
Lubomír Sedlář
602b698080 buildinstall: Avoid pointlessly tweaking the boot images
Only modify boot images if there actually is some change.

The tweak function updates config files with volume id and kickstart
file. Even if we don't have a kickstart and there is no change in the
config files, the image will be regenerated. This leads to a change in
checksum for no good reason.

This patch keeps track of modified config files. If there are none, it
avoids touching anything else.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-03-16 07:46:56 +00:00
Haibo Lin
b30f7e0d83 Prevent reuse if unsigned packages are allowed
JIRA: RHELCMP-8415
Signed-off-by: Haibo Lin <hlin@redhat.com>
2023-03-16 15:32:09 +08:00
Lubomír Sedlář
0c3b6e22f9 Pass parent id/respin id to CTS
When the --target-dir option is used, the compose can be created in CTS,
but the parent and respin information is not passed through. That leads
to data missing later on.

JIRA: RHELCMP-11411
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-03-14 10:51:34 +01:00
Haibo Lin
3175ede38a Exclude existing files in boot.iso
JIRA: RHELCMP-10811
Fixes: https://pagure.io/pungi/issue/1647
Signed-off-by: Haibo Lin <hlin@redhat.com>
2023-03-09 15:33:25 +08:00
Lubomír Sedlář
8920eef339 image-build/osbuild: Pull ISOs into the compose
OSBuild tasks can produce ISO files. If they do, we should include them
in the compose, and we should pull them into the iso/ subdirectory
together with other ISOs.

Fixes: https://pagure.io/pungi/issue/1657
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-03-06 09:35:47 +01:00
Lubomír Sedlář
58036eab84 Retry 401 error from CTS
This could be a transient error caused by kerberos server instability.

JIRA: RHELCMP-11251
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-02-28 10:14:02 +01:00
Lubomír Sedlář
a4476f2570 gather: Better detection of debuginfo in lookaside
If the depsolver wants to include a package that is present in both the
source repo and a lookaside repo, it reliably detects binary packages
present in lookaside, but for debuginfo it's not so reliable.

There is a separate package object for each package in each repo.
Depending on which one is used, debuginfo could be included in the
result or not. This patch fixes that by actually looking if the same
package is present in any lookaside repo.

JIRA: RHELCMP-9373
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-02-27 15:33:19 +01:00
Haibo Lin
8c06b7a3f1 Log versions of all installed packages
JIRA: RHELCMP-9493
Signed-off-by: Haibo Lin <hlin@redhat.com>
2023-02-06 18:24:20 +08:00
Lubomír Sedlář
64ae81b416 Use authentication for all CTS calls
The update of the compose URL relied on the environment being set from
the initial import. This broke when a unique credentials cache started
to be used and was cleaned up after the import.

JIRA: RHELCMP-11072
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-02-02 13:52:11 +00:00
Lubomír Sedlář
826169af7c Fix black complaints
These are newly detected by black 23.1.0.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-02-02 12:53:32 +01:00
Lubomír Sedlář
d97b8bdd33 Add vhd.gz extension to compressed VHD images
JIRA: RHELCMP-11027
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-01-31 11:16:58 +01:00
Lubomír Sedlář
8768b23cbe Add vhd-compressed image type
JIRA: RHELCMP-11027
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-01-30 09:27:22 +00:00
Lubomír Sedlář
51628a974d Update to work with latest mock
The `called_once` attribute now raises an exception. Switch to
`assert_called_once` method. Also replace `assertTrue(x.called)` with
`x.assert_called()`.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-01-26 13:05:48 +01:00
Ondrej Nosek
88327d5784 Default bztar format for sdist command
Usage of the 'bztar' format is unchanged; only the way it is configured
changes, as the previous method was deprecated.

Signed-off-by: Ondrej Nosek <onosek@redhat.com>
2022-12-12 12:10:54 +01:00
Ondrej Nosek
6e0a9385f2 4.3.7 release
Signed-off-by: Ondrej Nosek <onosek@redhat.com>
2022-12-09 13:50:53 +01:00
Lubomír Sedlář
8be0d84f8a osbuild: test passing of rich repos from configuration
Test that "rich" repositories defined as dicts in the configuration
stay as dicts in the arguments passed to the osbuild phase.

Signed-off-by: Tomáš Hozza <thozza@redhat.com>
2022-11-28 14:47:11 +01:00
Tomáš Hozza
8f0906be53 osbuild: support specifying package_sets for repos
The `koji-osbuild` plugin supports additional formats for the `repo`
property since v4 [1]. Specifically, a repo can be specified as a
dictionary with a `baseurl` key and a `package_sets` list containing
the names of the package sets the repository should be used for.

Extend the configuration schema to reflect the plugin change.
Extend the documentation to cover the new repository format.
Extend an existing unit test to specify additional repository using the
added format.

[1] https://github.com/osbuild/koji-osbuild/pull/82
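
A hedged example of the extended format in a Pungi osbuild block (keys
other than `repo` are illustrative):

```python
osbuild = {
    "^Everything$": [
        {
            "name": "example-image",
            "distro": "fedora-39",
            "image_types": ["qcow2"],
            "repo": [
                # plain URL, as before
                "http://example.com/repo/$arch/os/",
                # new dict form, limited to the named package sets
                {
                    "baseurl": "http://example.com/build/$arch/os/",
                    "package_sets": ["build"],
                },
            ],
        }
    ],
}
```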

Signed-off-by: Tomáš Hozza <thozza@redhat.com>
2022-11-28 14:47:11 +01:00
Tomáš Hozza
e3072c3d5f osbuild: don't use util.get_repo_urls()
Don't use `util.get_repo_urls()` to resolve provided repositories, but
implement an osbuild-specific variant of the function named
`_get_repo_urls()`. The reason is that the function from `utils`
transforms repositories defined as dicts to strings, which is
undesired for osbuild. The requirement for osbuild is to preserve the
dict as is, just to resolve the string in `baseurl` to the actual
repository URL.

Add a unit test covering the newly added function. It is inspired by a
similar test from `test_util.py`.

Signed-off-by: Tomáš Hozza <thozza@redhat.com>
2022-11-28 14:47:11 +01:00
Tomáš Hozza
ef6d40dce4 osbuild: update schema and config documentation
The `koji-osbuild` Hub schema has been relaxed a bit in the latest
release (v11). Adjust the schema in Pungi to reflect changes in
`koji-osbuild`.

For more information on the changes in `koji-osbuild`, see:
https://github.com/osbuild/koji-osbuild/pull/108

Signed-off-by: Tomáš Hozza <thozza@redhat.com>
2022-11-28 14:17:42 +01:00
Lubomír Sedlář
df6664098d Speed up tests by 30 seconds
The retry test for CTS doesn't actually need to wait. Let's mock the
sleep function.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2022-11-23 11:48:12 +01:00
Lubomír Sedlář
147df93f75 Stop sending compose paths to CTS
The tracking service will reject it as it's not an HTTP URL. Let's not
even try.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2022-11-23 11:48:12 +01:00
Lubomír Sedlář
dd8c1002d4 Report errors from CTS
If the service returns a status code indicating a user error, report
that and do not retry.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2022-11-23 11:48:12 +01:00
Lubomír Sedlář
12e3a46390 createiso: Create Joliet tree with xorriso
This structure is important for isoinfo -J, which is in turn called by
virt-install.

This can be tested by taking a bootable ISO and modifying it with a
dummy additional file while preserving boot records:

    $ xorriso -indev netinst.iso -outdev test.iso -boot_image any replay -map setup.py setup.py -end
    ...
    $ isoinfo -J -i test.iso
    isoinfo: Unable to find Joliet SVD
    $ rm test.iso
    $ xorriso -indev netinst.iso -outdev test.iso -joliet on -boot_image any replay -map setup.py setup.py -end
    ...
    $ isoinfo -J -i test.iso
    $

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2144105
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2022-11-22 12:58:46 +01:00
132 changed files with 2443 additions and 7685 deletions


@@ -1,41 +0,0 @@
From 432b0bce0401c4bbcd1a958a89305c475a794f26 Mon Sep 17 00:00:00 2001
From: Adam Williamson <awilliam@redhat.com>
Date: Jan 19 2024 07:25:09 +0000
Subject: checks: don't require "repo" in the "ostree" schema
Per @siosm in https://pagure.io/pungi-fedora/pull-request/1227
this option "is deprecated and not needed anymore", so Pungi
should not be requiring it.
Merges: https://pagure.io/pungi/pull-request/1714
Signed-off-by: Adam Williamson <awilliam@redhat.com>
---
diff --git a/pungi/checks.py b/pungi/checks.py
index a340f93..db8b297 100644
--- a/pungi/checks.py
+++ b/pungi/checks.py
@@ -1066,7 +1066,6 @@ def make_schema():
"required": [
"treefile",
"config_url",
- "repo",
"ostree_repo",
],
"additionalProperties": False,
diff --git a/pungi/phases/ostree.py b/pungi/phases/ostree.py
index 90578ae..2649cdb 100644
--- a/pungi/phases/ostree.py
+++ b/pungi/phases/ostree.py
@@ -85,7 +85,7 @@ class OSTreeThread(WorkerThread):
comps_repo = compose.paths.work.comps_repo(
"$basearch", variant=variant, create_dir=False
)
- repos = shortcuts.force_list(config["repo"]) + self.repos
+ repos = shortcuts.force_list(config.get("repo", [])) + self.repos
if compose.has_comps:
repos.append(translate_path(compose, comps_repo))
repos = get_repo_dicts(repos, logger=self.pool)

doc/_static/phases.svg (vendored, 190 changed lines)

(Side-by-side diff of the rendered phase diagram omitted: the changes are SVG
markup churn, namely Inkscape metadata updates, a canvas height change from
327 px to 301 px, and repositioned or resized phase boxes; the KiwiBuild
label appears on only one side of the diff. Image size: 23 KiB before,
22 KiB after.)


@@ -51,9 +51,9 @@ copyright = "2016, Red Hat, Inc."
 # built documents.
 #
 # The short X.Y version.
-version = "4.7"
+version = "4.5"
 # The full version, including alpha/beta/rc tags.
-release = "4.7.0"
+release = "4.5.1"
 # The language for content autogenerated by Sphinx. Refer to documentation
 # for a list of supported languages.


@@ -292,8 +292,8 @@ There a couple common format specifiers available for both the options:
    format string. The pattern should not overlap, otherwise it is undefined
    which one will be used.

-   This format will be used for some phases generating images. Currently that
-   means ``createiso``, ``buildinstall`` and ``ostree_installer``.
+   This format will be used for all phases generating images. Currently that
+   means ``createiso``, ``live_images`` and ``buildinstall``.

   Available extra keys are:

   * ``disc_num``
@@ -323,6 +323,7 @@ There a couple common format specifiers available for both the options:
    Available keys are:

    * ``boot`` -- for ``boot.iso`` images created in *buildinstall* phase
+   * ``live`` -- for images created by *live_images* phase
    * ``dvd`` -- for images created by *createiso* phase
    * ``ostree`` -- for ostree installer images
@@ -350,10 +351,48 @@ Example
     disc_types = {
         'boot': 'netinst',
+        'live': 'Live',
         'dvd': 'DVD',
     }

+Signing
+=======
+
+If you want to sign deliverables generated during pungi run like RPM wrapped
+images. You must provide few configuration options:
+
+**signing_command** [optional]
+    (*str*) -- Command that will be run with a koji build as a single
+    argument. This command must not require any user interaction.
+    If you need to pass a password for a signing key to the command,
+    do this via command line option of the command and use string
+    formatting syntax ``%(signing_key_password)s``.
+    (See **signing_key_password_file**).
+
+**signing_key_id** [optional]
+    (*str*) -- ID of the key that will be used for the signing.
+    This ID will be used when crafting koji paths to signed files
+    (``kojipkgs.fedoraproject.org/packages/NAME/VER/REL/data/signed/KEYID/..``).
+
+**signing_key_password_file** [optional]
+    (*str*) -- Path to a file with password that will be formatted
+    into **signing_command** string via ``%(signing_key_password)s``
+    string format syntax (if used).
+
+    Because pungi config is usually stored in git and is part of compose
+    logs we don't want password to be included directly in the config.
+
+    Note: If ``-`` string is used instead of a filename, then you will be asked
+    for the password interactivelly right after pungi starts.
+
+Example
+-------
+::
+
+    signing_command = '~/git/releng/scripts/sigulsign_unsigned.py -vv --password=%(signing_key_password)s fedora-24'
+    signing_key_id = '81b46521'
+    signing_key_password_file = '~/password_for_fedora-24_key'

 .. _git-urls:

 Git URLs
@@ -1329,8 +1368,8 @@ All non-``RC`` milestones from label get appended to the version. For release
 either label is used or date, type and respin.

-Common options for Live Media and Image Build
-=============================================
+Common options for Live Images, Live Media and Image Build
+==========================================================

 All images can have ``ksurl``, ``version``, ``release`` and ``target``
 specified. Since this can create a lot of duplication, there are global options
@@ -1346,12 +1385,14 @@ The kickstart URL is configured by these options.
 * ``global_ksurl`` -- global fallback setting
 * ``live_media_ksurl``
 * ``image_build_ksurl``
+* ``live_images_ksurl``

 Target is specified by these settings.

 * ``global_target`` -- global fallback setting
 * ``live_media_target``
 * ``image_build_target``
+* ``live_images_target``
 * ``osbuild_target``

 Version is specified by these options. If no version is set, a default value
@@ -1360,6 +1401,7 @@ will be provided according to :ref:`automatic versioning <auto-version>`.
 * ``global_version`` -- global fallback setting
 * ``live_media_version``
 * ``image_build_version``
+* ``live_images_version``
 * ``osbuild_version``

 Release is specified by these options. If set to a magic value to
@@ -1369,14 +1411,44 @@ to :ref:`automatic versioning <auto-version>`.
 * ``global_release`` -- global fallback setting
 * ``live_media_release``
 * ``image_build_release``
+* ``live_images_release``
 * ``osbuild_release``

-Each configuration block can also optionally specify a ``failable`` key. It
+Each configuration block can also optionally specify a ``failable`` key. For
+live images it should have a boolean value. For live media and image build it
 should be a list of strings containing architectures that are optional. If any
 deliverable fails on an optional architecture, it will not abort the whole
 compose. If the list contains only ``"*"``, all arches will be substituted.

+Live Images Settings
+====================
+
+**live_images**
+    (*list*) -- Configuration for the particular image. The elements of the
+    list should be tuples ``(variant_uid_regex, {arch|*: config})``. The config
+    should be a dict with these keys:
+
+    * ``kickstart`` (*str*)
+    * ``ksurl`` (*str*) [optional] -- where to get the kickstart from
+    * ``name`` (*str*)
+    * ``version`` (*str*)
+    * ``target`` (*str*)
+    * ``repo`` (*str|[str]*) -- repos specified by URL or variant UID
+    * ``specfile`` (*str*) -- for images wrapped in RPM
+    * ``scratch`` (*bool*) -- only RPM-wrapped images can use scratch builds,
+      but by default this is turned off
+    * ``type`` (*str*) -- what kind of task to start in Koji. Defaults to
+      ``live`` meaning ``koji spin-livecd`` will be used. Alternative option
+      is ``appliance`` corresponding to ``koji spin-appliance``.
+    * ``sign`` (*bool*) -- only RPM-wrapped images can be signed
+
+**live_images_no_rename**
+    (*bool*) -- When set to ``True``, filenames generated by Koji will be used.
+    When ``False``, filenames will be generated based on ``image_name_format``
+    configuration option.
+
 Live Media Settings
 ===================
@@ -1532,61 +1604,6 @@ Example
     }

-KiwiBuild Settings
-==================
-
-**kiwibuild**
-    (*dict*) -- configuration for building images using kiwi by a Koji plugin.
-    Pungi will trigger a Koji task delegating to kiwi, which will build the image,
-    import it to Koji via content generators.
-
-    Format: ``{variant_uid_regex: [{...}]}``.
-
-    Required keys in the configuration dict:
-
-    * ``kiwi_profile`` -- (*str*) select profile from description file.
-
-    Description scm, description path and target have to be provided too, but
-    instead of specifying them for each image separately, you can use the
-    ``kiwibuild_*`` options or ``global_target``.
-
-    Optional keys:
-
-    * ``description_scm`` -- (*str*) scm URL of description kiwi description.
-    * ``description_path`` -- (*str*) path to kiwi description inside the scm
-      repo.
-    * ``repos`` -- additional repos used to install RPMs in the image. The
-      compose repository for the enclosing variant is added automatically.
-      Either variant name or a URL is supported.
-    * ``target`` -- (*str*) which build target to use for the task. If not
-      provided, then either ``kiwibuild_target`` or ``global_target`` is
-      needed.
-    * ``release`` -- (*str*) release of the output image.
-    * ``arches`` -- (*[str]*) List of architectures to build for. If not
-      provided, all variant architectures will be built.
-    * ``failable`` -- (*[str]*) List of architectures for which this
-      deliverable is not release blocking.
-    * ``type`` -- (*str*) override default type from the bundle with this value.
-    * ``type_attr`` -- (*[str]*) override default attributes for the build type
-      from description.
-    * ``bundle_name_format`` -- (*str*) override default bundle format name.
-
-**kiwibuild_description_scm**
-    (*str*) -- URL for scm containing the description files
-
-**kiwibuild_description_path**
-    (*str*) -- path to a description file within the description scm
-
-**kiwibuild_type**
-    (*str*) -- override default type from the bundle with this value.
-
-**kiwibuild_type_attr**
-    (*[str]*) -- override default attributes for the build type from description.
-
-**kiwibuild_bundle_name_format**
-    (*str*) -- override default bundle format name.
-
 OSBuild Composer for building images
 ====================================
@@ -1642,10 +1659,6 @@ OSBuild Composer for building images
       * ``ostree_ref`` -- name of the ostree branch
      * ``ostree_parent`` -- commit hash or a a branch-like reference to the
        parent commit.
-    * ``customizations`` -- a dictionary with customizations to use for the
-      image build. For the list of supported customizations, see the **hosted**
-      variants in the `Image Builder documentation
-      <https://osbuild.org/docs/user-guide/blueprint-reference#installation-device>`.
     * ``upload_options`` -- a dictionary with upload options specific to the
       target cloud environment. If provided, the image will be uploaded to the
       cloud environment, in addition to the Koji server. One can't combine
@@ -1753,16 +1766,16 @@ another directory. Any new packages in the compose will be added to the
 repository with a new commit.

 **ostree**
-    (*dict*) -- a mapping of configuration for each variant. The format should
-    be ``{variant_uid_regex: config_dict}``. It is possible to use a list of
+    (*dict*) -- a mapping of configuration for each. The format should be
+    ``{variant_uid_regex: config_dict}``. It is possible to use a list of
     configuration dicts as well.

     The configuration dict for each variant arch pair must have these keys:

     * ``treefile`` -- (*str*) Filename of configuration for ``rpm-ostree``.
     * ``config_url`` -- (*str*) URL for Git repository with the ``treefile``.
-    * ``repo`` -- (*str|dict|[str|dict]*) repos specified by URL or a dict of
-      repo options, ``baseurl`` is required in the dict.
+    * ``repo`` -- (*str|dict|[str|dict]*) repos specified by URL or variant UID
+      or a dict of repo options, ``baseurl`` is required in the dict.
     * ``ostree_repo`` -- (*str*) Where to put the ostree repository

     These keys are optional:
@@ -1804,11 +1817,13 @@ Example config
         "^Atomic$": {
             "treefile": "fedora-atomic-docker-host.json",
             "config_url": "https://git.fedorahosted.org/git/fedora-atomic.git",
-            "keep_original_sources": True,
             "repo": [
+                "Server",
                 "http://example.com/repo/x86_64/os",
+                {"baseurl": "Everything"},
                 {"baseurl": "http://example.com/linux/repo", "exclude": "systemd-container"},
             ],
+            "keep_original_sources": True,
             "ostree_repo": "/mnt/koji/compose/atomic/Rawhide/",
             "update_summary": True,
             # Automatically generate a reasonable version
@@ -1824,79 +1839,6 @@ Example config
     has the pungi_ostree plugin installed.

-OSTree Native Container Settings
-================================
-
-The ``ostree_container`` phase of *Pungi* can create an ostree native container
-image as an OCI archive. This is done by running ``rpm-ostree compose image``
-in a Koji runroot environment.
-
-While rpm-ostree can use information from previously built images to improve
-the split in container layers, we can not use that functionnality until
-https://github.com/containers/skopeo/pull/2114 is resolved. Each invocation
-will thus create a new OCI archive image *from scratch*.
-
-**ostree_container**
-    (*dict*) -- a mapping of configuration for each variant. The format should
-    be ``{variant_uid_regex: config_dict}``. It is possible to use a list of
-    configuration dicts as well.
-
-    The configuration dict for each variant arch pair must have these keys:
-
-    * ``treefile`` -- (*str*) Filename of configuration for ``rpm-ostree``.
-    * ``config_url`` -- (*str*) URL for Git repository with the ``treefile``.
-
-    These keys are optional:
-
-    * ``repo`` -- (*str|dict|[str|dict]*) repos specified by URL or a dict of
-      repo options, ``baseurl`` is required in the dict.
-    * ``keep_original_sources`` -- (*bool*) Keep the existing source repos in
-      the tree config file. If not enabled, all the original source repos will
-      be removed from the tree config file.
-    * ``config_branch`` -- (*str*) Git branch of the repo to use. Defaults to
-      ``main``.
-    * ``arches`` -- (*[str]*) List of architectures for which to generate
-      ostree native container images. There will be one task per architecture.
-      By default all architectures in the variant are used.
-    * ``failable`` -- (*[str]*) List of architectures for which this
-      deliverable is not release blocking.
-    * ``version`` -- (*str*) Version string to be added to the OCI archive name.
-      If this option is set to ``!OSTREE_VERSION_FROM_LABEL_DATE_TYPE_RESPIN``,
-      a value will be generated automatically as ``$VERSION.$RELEASE``.
-      If this option is set to ``!VERSION_FROM_VERSION_DATE_RESPIN``,
-      a value will be generated automatically as ``$VERSION.$DATE.$RESPIN``.
-      :ref:`See how those values are created <auto-version>`.
-    * ``tag_ref`` -- (*bool*, default ``True``) If set to ``False``, a git
-      reference will not be created.
-    * ``runroot_packages`` -- (*list*) A list of additional package names to be
-      installed in the runroot environment in Koji.
-
-Example config
---------------
-::
-
-    ostree_container = {
-        "^Sagano$": {
-            "treefile": "fedora-tier-0-38.yaml",
-            "config_url": "https://gitlab.com/CentOS/cloud/sagano.git",
-            "config_branch": "main",
-            "repo": [
-                "http://example.com/repo/x86_64/os",
-                {"baseurl": "http://example.com/linux/repo", "exclude": "systemd-container"},
-            ],
-            # Automatically generate a reasonable version
-            "version": "!OSTREE_VERSION_FROM_LABEL_DATE_TYPE_RESPIN",
-            # Only run this for x86_64 even if Sagano has more arches
-            "arches": ["x86_64"],
-        }
-    }
-
-**ostree_container_use_koji_plugin** = False
-    (*bool*) -- When set to ``True``, the Koji pungi_ostree task will be
-    used to execute rpm-ostree instead of runroot. Use only if the Koji instance
-    has the pungi_ostree plugin installed.
-
 Ostree Installer Settings
 =========================
@@ -2247,9 +2189,9 @@ Miscellaneous Settings
     format string accepting ``%(variant_name)s`` and ``%(arch)s`` placeholders.

 **symlink_isos_to**
-    (*str*) -- If set, the ISO files from ``buildinstall`` and ``createiso``
-    phases will be put into this destination, and a symlink pointing to this
-    location will be created in actual compose directory.
+    (*str*) -- If set, the ISO files from ``buildinstall``, ``createiso`` and
+    ``live_images`` phases will be put into this destination, and a symlink
+    pointing to this location will be created in actual compose directory.

 **dogpile_cache_backend**
     (*str*) -- If set, Pungi will use the configured Dogpile cache backend to

@@ -294,6 +294,30 @@ This is a shortened configuration for Fedora Radhide compose as of 2019-10-14.
         })
     ]

+    live_target = 'f32'
+    live_images_no_rename = True
+    live_images = [
+        ('^Workstation$', {
+            'armhfp': {
+                'kickstart': 'fedora-arm-workstation.ks',
+                'name': 'Fedora-Workstation-armhfp',
+                # Again workstation takes packages from Everything.
+                'repo': 'Everything',
+                'type': 'appliance',
+                'failable': True,
+            }
+        }),
+        ('^Server$', {
+            # But Server has its own repo.
+            'armhfp': {
+                'kickstart': 'fedora-arm-server.ks',
+                'name': 'Fedora-Server-armhfp',
+                'type': 'appliance',
+                'failable': True,
+            }
+        }),
+    ]
+
     ostree = {
         "^Silverblue$": {
             "version": "!OSTREE_VERSION_FROM_LABEL_DATE_TYPE_RESPIN",
@@ -319,20 +343,6 @@ This is a shortened configuration for Fedora Radhide compose as of 2019-10-14.
         }
     }

-    ostree_container = {
-        "^Sagano$": {
-            "treefile": "fedora-tier-0-38.yaml",
-            "config_url": "https://gitlab.com/CentOS/cloud/sagano.git",
-            "config_branch": "main",
-            # Consume packages from Everything
-            "repo": "Everything",
-            # Automatically generate a reasonable version
-            "version": "!OSTREE_VERSION_FROM_LABEL_DATE_TYPE_RESPIN",
-            # Only run this for x86_64 even if Sagano has more arches
-            "arches": ["x86_64"],
-        }
-    }
-
     ostree_installer = [
         ("^Silverblue$", {
             "x86_64": {


@@ -112,12 +112,6 @@ ImageBuild
 This phase wraps up ``koji image-build``. It also updates the metadata
 ultimately responsible for ``images.json`` manifest.

-KiwiBuild
----------
-
-Similarly to image build, this phases creates a koji `kiwiBuild` task. In the
-background it uses Kiwi to create images.
-
 OSBuild
 -------

pungi.spec (1065 changed lines)

File diff suppressed because it is too large.


@@ -93,11 +93,6 @@ def split_name_arch(name_arch):

 def is_excluded(package, arches, logger=None):
     """Check if package is excluded from given architectures."""
-    if any(
-        getBaseArch(exc_arch) == 'x86_64' for exc_arch in package.exclusivearch
-    ) and 'x86_64_v2' not in package.exclusivearch:
-        package.exclusivearch.append('x86_64_v2')
     if package.excludearch and set(package.excludearch) & set(arches):
         if logger:
             logger.debug(

View File

@ -34,8 +34,6 @@ arches = {
    "x86_64": "athlon",
    "amd64": "x86_64",
    "ia32e": "x86_64",
    # x86-64-v2
    "x86_64_v2": "noarch",
    # ppc64le
    "ppc64le": "noarch",
    # ppc

View File

@ -553,6 +553,26 @@ def make_schema():
"list_of_strings": {"type": "array", "items": {"type": "string"}},
"strings": _one_or_list({"type": "string"}),
"optional_string": {"anyOf": [{"type": "string"}, {"type": "null"}]},
"live_image_config": {
"type": "object",
"properties": {
"kickstart": {"type": "string"},
"ksurl": {"type": "url"},
"name": {"type": "string"},
"subvariant": {"type": "string"},
"target": {"type": "string"},
"version": {"type": "string"},
"repo": {"$ref": "#/definitions/repos"},
"specfile": {"type": "string"},
"scratch": {"type": "boolean"},
"type": {"type": "string"},
"sign": {"type": "boolean"},
"failable": {"type": "boolean"},
"release": {"$ref": "#/definitions/optional_string"},
},
"required": ["kickstart"],
"additionalProperties": False,
},
"osbs_config": { "osbs_config": {
"type": "object", "type": "object",
"properties": { "properties": {
@ -588,7 +608,6 @@ def make_schema():
"release_discinfo_description": {"type": "string"},
"treeinfo_version": {"type": "string"},
"compose_type": {"type": "string", "enum": COMPOSE_TYPES},
"label": {"type": "string"},
"base_product_name": {"type": "string"},
"base_product_short": {"type": "string"},
"base_product_version": {"type": "string"},
@ -666,11 +685,7 @@ def make_schema():
"pkgset_allow_reuse": {"type": "boolean", "default": True},
"createiso_allow_reuse": {"type": "boolean", "default": True},
"extraiso_allow_reuse": {"type": "boolean", "default": True},
"pkgset_source": {"type": "string", "enum": [
    "koji",
    "repos",
    "kojimock",
]},
"pkgset_source": {"type": "string", "enum": ["koji", "repos"]},
"createrepo_c": {"type": "boolean", "default": True},
"createrepo_checksum": {
    "type": "string",
@ -799,14 +814,6 @@ def make_schema():
    "type": "string",
    "enum": ["lorax"],
},
# In phase `buildinstall` we should add to compose only the
# images that will be used only as netinstall
"netinstall_variants": {
"$ref": "#/definitions/list_of_strings",
"default": [
"BaseOS",
],
},
"buildinstall_topdir": {"type": "string"}, "buildinstall_topdir": {"type": "string"},
"buildinstall_kickstart": {"$ref": "#/definitions/str_or_scm_dict"}, "buildinstall_kickstart": {"$ref": "#/definitions/str_or_scm_dict"},
"buildinstall_use_guestmount": {"type": "boolean", "default": True}, "buildinstall_use_guestmount": {"type": "boolean", "default": True},
@ -862,10 +869,7 @@ def make_schema():
"paths_module": {"type": "string"},
"skip_phases": {
    "type": "array",
    "items": {
        "type": "string",
        "enum": PHASES_NAMES + ["productimg", "live_images"],
    },
    "items": {"type": "string", "enum": PHASES_NAMES + ["productimg"]},
    "default": [],
},
"image_name_format": {
@ -899,6 +903,11 @@ def make_schema():
},
"restricted_volid": {"type": "boolean", "default": False},
"volume_id_substitutions": {"type": "object", "default": {}},
"live_images_no_rename": {"type": "boolean", "default": False},
"live_images_ksurl": {"type": "url"},
"live_images_target": {"type": "string"},
"live_images_release": {"$ref": "#/definitions/optional_string"},
"live_images_version": {"type": "string"},
"image_build_ksurl": {"type": "url"}, "image_build_ksurl": {"type": "url"},
"image_build_target": {"type": "string"}, "image_build_target": {"type": "string"},
"image_build_release": {"$ref": "#/definitions/optional_string"}, "image_build_release": {"$ref": "#/definitions/optional_string"},
@ -931,6 +940,8 @@ def make_schema():
"product_id": {"$ref": "#/definitions/str_or_scm_dict"},
"product_id_allow_missing": {"type": "boolean", "default": False},
"product_id_allow_name_prefix": {"type": "boolean", "default": True},
# Deprecated in favour of regular local/phase/global setting.
"live_target": {"type": "string"},
"tree_arches": {"$ref": "#/definitions/list_of_strings", "default": []}, "tree_arches": {"$ref": "#/definitions/list_of_strings", "default": []},
"tree_variants": {"$ref": "#/definitions/list_of_strings", "default": []}, "tree_variants": {"$ref": "#/definitions/list_of_strings", "default": []},
"translate_paths": {"$ref": "#/definitions/string_pairs", "default": []}, "translate_paths": {"$ref": "#/definitions/string_pairs", "default": []},
@ -1055,6 +1066,7 @@ def make_schema():
"required": [
    "treefile",
    "config_url",
"repo",
"ostree_repo", "ostree_repo",
], ],
"additionalProperties": False, "additionalProperties": False,
@ -1092,39 +1104,6 @@ def make_schema():
    ),
]
},
"ostree_container": {
"type": "object",
"patternProperties": {
# Warning: this pattern is a variant uid regex, but the
# format does not let us validate it as there is no regular
# expression to describe all regular expressions.
".+": _one_or_list(
{
"type": "object",
"properties": {
"treefile": {"type": "string"},
"config_url": {"type": "string"},
"repo": {"$ref": "#/definitions/repos"},
"keep_original_sources": {"type": "boolean"},
"config_branch": {"type": "string"},
"arches": {"$ref": "#/definitions/list_of_strings"},
"failable": {"$ref": "#/definitions/list_of_strings"},
"version": {"type": "string"},
"tag_ref": {"type": "boolean"},
"runroot_packages": {
"$ref": "#/definitions/list_of_strings",
},
},
"required": [
"treefile",
"config_url",
],
"additionalProperties": False,
}
),
},
"additionalProperties": False,
},
"ostree_installer": _variant_arch_mapping( "ostree_installer": _variant_arch_mapping(
{ {
"type": "object", "type": "object",
@ -1149,9 +1128,11 @@ def make_schema():
    }
),
"ostree_use_koji_plugin": {"type": "boolean", "default": False},
"ostree_container_use_koji_plugin": {"type": "boolean", "default": False},
"ostree_installer_use_koji_plugin": {"type": "boolean", "default": False},
"ostree_installer_overwrite": {"type": "boolean", "default": False},
"live_images": _variant_arch_mapping(
_one_or_list({"$ref": "#/definitions/live_image_config"})
),
"image_build_allow_reuse": {"type": "boolean", "default": False}, "image_build_allow_reuse": {"type": "boolean", "default": False},
"image_build": { "image_build": {
"type": "object", "type": "object",
@ -1202,50 +1183,6 @@ def make_schema():
    },
    "additionalProperties": False,
},
"kiwibuild": {
"type": "object",
"patternProperties": {
# Warning: this pattern is a variant uid regex, but the
# format does not let us validate it as there is no regular
# expression to describe all regular expressions.
".+": {
"type": "array",
"items": {
"type": "object",
"properties": {
"target": {"type": "string"},
"description_scm": {"type": "url"},
"description_path": {"type": "string"},
"kiwi_profile": {"type": "string"},
"release": {"type": "string"},
"arches": {"$ref": "#/definitions/list_of_strings"},
"repos": {"$ref": "#/definitions/list_of_strings"},
"failable": {"$ref": "#/definitions/list_of_strings"},
"subvariant": {"type": "string"},
"type": {"type": "string"},
"type_attr": {"$ref": "#/definitions/list_of_strings"},
"bundle_name_format": {"type": "string"},
},
"required": [
# description_scm and description_path
# are really required, but as they can
# be set at the phase level we cannot
# enforce that here
"kiwi_profile",
],
"additionalProperties": False,
},
}
},
"additionalProperties": False,
},
"kiwibuild_description_scm": {"type": "url"},
"kiwibuild_description_path": {"type": "string"},
"kiwibuild_target": {"type": "string"},
"kiwibuild_release": {"$ref": "#/definitions/optional_string"},
"kiwibuild_type": {"type": "string"},
"kiwibuild_type_attr": {"$ref": "#/definitions/list_of_strings"},
"kiwibuild_bundle_name_format": {"type": "string"},
"osbuild_target": {"type": "string"}, "osbuild_target": {"type": "string"},
"osbuild_release": {"$ref": "#/definitions/optional_string"}, "osbuild_release": {"$ref": "#/definitions/optional_string"},
"osbuild_version": {"type": "string"}, "osbuild_version": {"type": "string"},
@ -1307,10 +1244,6 @@ def make_schema():
"ostree_ref": {"type": "string"},
"ostree_parent": {"type": "string"},
"manifest_type": {"type": "string"},
"customizations": {
"type": "object",
"additionalProperties": True,
},
"upload_options": { "upload_options": {
# this should be really 'oneOf', but the minimal # this should be really 'oneOf', but the minimal
# required properties in AWSEC2 and GCP options # required properties in AWSEC2 and GCP options
@ -1427,6 +1360,9 @@ def make_schema():
    {"$ref": "#/definitions/strings"}
),
"lorax_use_koji_plugin": {"type": "boolean", "default": False},
"signing_key_id": {"type": "string"},
"signing_key_password_file": {"type": "string"},
"signing_command": {"type": "string"},
"productimg": { "productimg": {
"deprecated": "remove it. Productimg phase has been removed" "deprecated": "remove it. Productimg phase has been removed"
}, },

View File

@ -718,10 +718,8 @@ class Compose(kobo.log.LoggingBase):
            basename += "-" + detail
        tb_path = self.paths.log.log_file("global", basename)
        self.log_error("Extended traceback in: %s", tb_path)
        tback = kobo.tback.Traceback(show_locals=show_locals).get_traceback()
        # Kobo 0.36.0 returns traceback as str, older versions return bytes
        with open(tb_path, "wb" if isinstance(tback, bytes) else "w") as f:
            f.write(tback)
        with open(tb_path, "wb") as f:
            f.write(kobo.tback.Traceback(show_locals=show_locals).get_traceback())

    def load_old_compose_config(self):
        """

View File

@ -159,11 +159,15 @@ def write_xorriso_commands(opts):
    script = os.path.join(opts.script_dir, "xorriso-%s.txt" % id(opts))
    with open(script, "w") as f:
        for cmd in iso.xorriso_commands(
            opts.arch, opts.boot_iso, os.path.join(opts.output_dir, opts.iso_name)
        ):
            emit(f, " ".join(cmd))
        emit(f, "-indev %s" % opts.boot_iso)
        emit(f, "-outdev %s" % os.path.join(opts.output_dir, opts.iso_name))
        emit(f, "-boot_image any replay")
        emit(f, "-volid %s" % opts.volid)
        # isoinfo -J uses the Joliet tree, and it's used by virt-install
        emit(f, "-joliet on")
        # Support long filenames in the Joliet trees. Repodata is particularly
        # likely to run into this limit.
        emit(f, "-compliance joliet_long_names")
    with open(opts.graft_points) as gp:
        for line in gp:
@ -174,6 +178,10 @@ def write_xorriso_commands(opts):
            emit(f, "%s %s %s" % (cmd, fs_path, iso_path))
            emit(f, "-chmod 0%o %s" % (_get_perms(fs_path), iso_path))

        if opts.arch == "ppc64le":
            # This is needed for the image to be bootable.
            emit(f, "-as mkisofs -U --")

        emit(f, "-chown_r 0 /")
        emit(f, "-chgrp_r 0 /")
        emit(f, "-end")

View File

@ -40,20 +40,6 @@ def get_source_name(pkg):
    return pkg.sourcerpm.rsplit("-", 2)[0]
def filter_dotarch(queue, pattern, **kwargs):
"""Filter queue for packages matching the pattern. If pattern matches the
dotarch format of <name>.<arch>, it is processed as such. Otherwise it is
treated as just a name.
"""
kwargs["name__glob"] = pattern
if "." in pattern:
name, arch = pattern.split(".", 1)
if arch in arch_utils.arches or arch == "noarch":
kwargs["name__glob"] = name
kwargs["arch"] = arch
return queue.filter(**kwargs).apply()
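A minimal, self-contained sketch of the dotarch rule that filter_dotarch()
implements; KNOWN_ARCHES stands in for arch_utils.arches and is trimmed for
the example:

    # "<name>.<arch>" is split only when the suffix is a known architecture,
    # otherwise the whole pattern stays a name glob.
    KNOWN_ARCHES = {"x86_64", "i686", "aarch64"}

    def parse_dotarch(pattern):
        if "." in pattern:
            name, arch = pattern.split(".", 1)
            if arch in KNOWN_ARCHES or arch == "noarch":
                return name, arch
        return pattern, None

    assert parse_dotarch("kernel.x86_64") == ("kernel", "x86_64")
    assert parse_dotarch("python3.11") == ("python3.11", None)  # "11" is not an arch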
class GatherOptions(pungi.common.OptionsBase):
    def __init__(self, **kwargs):
        super(GatherOptions, self).__init__()
@ -465,16 +451,12 @@ class Gather(GatherBase):
                name__glob=pattern[:-4], reponame__neq=self.opts.lookaside_repos
            )
        elif pungi.util.pkg_is_debug(pattern):
            pkgs = filter_dotarch(
                self.q_debug_packages,
                pattern,
                reponame__neq=self.opts.lookaside_repos,
            )
            pkgs = self.q_debug_packages.filter(
                name__glob=pattern, reponame__neq=self.opts.lookaside_repos
            )
        else:
            pkgs = filter_dotarch(
                self.q_binary_packages,
                pattern,
                reponame__neq=self.opts.lookaside_repos,
            )
            pkgs = self.q_binary_packages.filter(
                name__glob=pattern, reponame__neq=self.opts.lookaside_repos
            )

        exclude.update(pkgs)
@ -540,14 +522,25 @@ class Gather(GatherBase):
                        name__glob=pattern[:-2]
                    ).apply()
                else:
                    pkgs = filter_dotarch(self.q_debug_packages, pattern)
                    pkgs = self.q_debug_packages.filter(
                        name__glob=pattern
                    ).apply()
            else:
                if pattern.endswith(".+"):
                    pkgs = self.q_multilib_binary_packages.filter(
                        name__glob=pattern[:-2]
                    ).apply()
                else:
                    pkgs = filter_dotarch(self.q_binary_packages, pattern)
                    kwargs = {"name__glob": pattern}
                    if "." in pattern:
                        # The pattern could be name.arch. Check if the
                        # arch is valid, and if yes, make a more
                        # specific query.
                        name, arch = pattern.split(".", 1)
                        if arch in arch_utils.arches:
                            kwargs["name__glob"] = name
                            kwargs["arch__eq"] = arch
                    pkgs = self.q_binary_packages.filter(**kwargs).apply()

            if not pkgs:
                self.logger.error(
@ -839,12 +832,6 @@ class Gather(GatherBase):
                continue
            if self.is_from_lookaside(i):
                self._set_flag(i, PkgFlag.lookaside)
srpm_name = i.sourcerpm.rsplit("-", 2)[0]
if srpm_name in self.opts.fulltree_excludes:
self._set_flag(i, PkgFlag.fulltree_exclude)
if PkgFlag.input in self.result_package_flags.get(srpm_name, set()):
# If src rpm is marked as input, mark debuginfo as input too
self._set_flag(i, PkgFlag.input)
            if i not in self.result_debug_packages:
                added.add(i)
                debug_pkgs.append(i)

View File

@ -19,7 +19,6 @@ import logging
from .tree import Tree
from .installer import Installer
from .container import Container


def main(args=None):
@ -72,43 +71,6 @@ def main(args=None):
        help="use unified core mode in rpm-ostree",
    )
container = subparser.add_parser(
"container", help="Compose OSTree native container"
)
container.set_defaults(_class=Container, func="run")
container.add_argument(
"--name",
required=True,
help="the name of the the OCI archive (required)",
)
container.add_argument(
"--path",
required=True,
help="where to output the OCI archive (required)",
)
container.add_argument(
"--treefile",
metavar="FILE",
required=True,
help="treefile for rpm-ostree (required)",
)
container.add_argument(
"--log-dir",
metavar="DIR",
required=True,
help="where to log output (required).",
)
container.add_argument(
"--extra-config", metavar="FILE", help="JSON file contains extra configurations"
)
container.add_argument(
"-v",
"--version",
metavar="VERSION",
required=True,
help="version identifier (required)",
)
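Judging purely from the argparse definitions removed above, the subcommand
was driven along these lines (the entry-point name and all argument values
are illustrative assumptions, not taken from the patch):

    pungi-make-ostree container \
        --name Fedora-Sagano \
        --path /compose/work/ostree-container \
        --treefile fedora-tier-0-38.yaml \
        --log-dir /compose/logs/ostree-container \
        --version 38.20231106.0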
    installerp = subparser.add_parser(
        "installer", help="Create an OSTree installer image"
    )

View File

@ -1,86 +0,0 @@
# -*- coding: utf-8 -*-
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; version 2 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Library General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, see <https://gnu.org/licenses/>.
import os
import json
import six
from six.moves import shlex_quote
from .base import OSTree
from .utils import tweak_treeconf
def emit(cmd):
"""Print line of shell code into the stream."""
if isinstance(cmd, six.string_types):
print(cmd)
else:
print(" ".join([shlex_quote(x) for x in cmd]))
class Container(OSTree):
def _make_container(self):
"""Compose OSTree Container Native image"""
stamp_file = os.path.join(self.logdir, "%s.stamp" % self.name)
cmd = [
"rpm-ostree",
"compose",
"image",
# Always initialize for now
"--initialize",
# Touch the file if a new commit was created. This can help us tell
# if the commitid file is missing because no commit was created or
# because something went wrong.
"--touch-if-changed=%s" % stamp_file,
self.treefile,
]
fullpath = os.path.join(self.path, "%s.ociarchive" % self.name)
cmd.append(fullpath)
# Set the umask to be more permissive so directories get group write
# permissions. See https://pagure.io/releng/issue/8811#comment-629051
emit("umask 0002")
emit(cmd)
def run(self):
self.name = self.args.name
self.path = self.args.path
self.treefile = self.args.treefile
self.logdir = self.args.log_dir
self.extra_config = self.args.extra_config
if self.extra_config:
self.extra_config = json.load(open(self.extra_config, "r"))
repos = self.extra_config.get("repo", [])
keep_original_sources = self.extra_config.get(
"keep_original_sources", False
)
else:
# missing extra_config mustn't affect tweak_treeconf call
repos = []
keep_original_sources = True
update_dict = {"automatic-version-prefix": self.args.version}
self.treefile = tweak_treeconf(
self.treefile,
source_repos=repos,
keep_original_sources=keep_original_sources,
update_dict=update_dict,
)
self._make_container()
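Since run() only emits shell code, the net effect of _make_container() is a
short script like the following (paths and name hypothetical, flags taken
from the cmd list above):

    umask 0002
    rpm-ostree compose image --initialize \
        --touch-if-changed=/logs/Fedora-Sagano.stamp \
        treefile.json /output/Fedora-Sagano.ociarchive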

View File

@ -25,9 +25,9 @@ from .buildinstall import BuildinstallPhase  # noqa
from .extra_files import ExtraFilesPhase  # noqa
from .createiso import CreateisoPhase  # noqa
from .extra_isos import ExtraIsosPhase  # noqa
from .live_images import LiveImagesPhase  # noqa
from .image_build import ImageBuildPhase  # noqa
from .image_container import ImageContainerPhase  # noqa
from .kiwibuild import KiwiBuildPhase  # noqa
from .osbuild import OSBuildPhase  # noqa
from .repoclosure import RepoclosurePhase  # noqa
from .test import TestPhase  # noqa
@ -35,7 +35,6 @@ from .image_checksum import ImageChecksumPhase  # noqa
from .livemedia_phase import LiveMediaPhase  # noqa
from .ostree import OSTreePhase  # noqa
from .ostree_installer import OstreeInstallerPhase  # noqa
from .ostree_container import OSTreeContainerPhase  # noqa
from .osbs import OSBSPhase  # noqa
from .phases_metadata import gather_phases_metadata  # noqa

View File

@ -31,14 +31,14 @@ from six.moves import shlex_quote
from pungi.arch import get_valid_arches
from pungi.util import get_volid, get_arch_variant_data
from pungi.util import get_file_size, get_mtime, failable, makedirs
from pungi.util import copy_all, translate_path
from pungi.util import copy_all, translate_path, move_all
from pungi.wrappers.lorax import LoraxWrapper
from pungi.wrappers import iso
from pungi.wrappers.scm import get_file
from pungi.wrappers.scm import get_file_from_scm
from pungi.wrappers import kojiwrapper
from pungi.phases.base import PhaseBase
from pungi.runroot import Runroot, download_and_extract_archive
from pungi.runroot import Runroot


class BuildinstallPhase(PhaseBase):
@ -144,7 +144,7 @@ class BuildinstallPhase(PhaseBase):
            )
            if self.compose.has_comps:
                comps_repo = self.compose.paths.work.comps_repo(arch, variant)
                if final_output_dir != output_dir or self.lorax_use_koji_plugin:
                if final_output_dir != output_dir:
                    comps_repo = translate_path(self.compose, comps_repo)
                repos.append(comps_repo)
@ -169,6 +169,7 @@ class BuildinstallPhase(PhaseBase):
                "rootfs-size": rootfs_size,
                "dracut-args": dracut_args,
                "skip_branding": skip_branding,
                "outputdir": output_dir,
                "squashfs_only": squashfs_only,
                "configuration_file": configuration_file,
            }
@ -234,7 +235,7 @@ class BuildinstallPhase(PhaseBase):
        )
        makedirs(final_output_dir)
        repo_baseurls = self.get_repos(arch)
        if final_output_dir != output_dir or self.lorax_use_koji_plugin:
        if final_output_dir != output_dir:
            repo_baseurls = [translate_path(self.compose, r) for r in repo_baseurls]

        if self.buildinstall_method == "lorax":
@ -521,20 +522,10 @@ def link_boot_iso(compose, arch, variant, can_fail):
    setattr(img, "can_fail", can_fail)
    setattr(img, "deliverable", "buildinstall")
    try:
        img.volume_id = iso.get_volume_id(
            new_boot_iso_path,
            compose.conf.get("createiso_use_xorrisofs"),
        )
        img.volume_id = iso.get_volume_id(new_boot_iso_path)
    except RuntimeError:
        pass
    # In this phase we should add to compose only the images that
    # will be used only as netinstall.
    # On this step lorax generates environment
    # for creating isos and create them.
    # On step `extra_isos` we overwrite the not needed iso `boot Minimal` by
    # new iso. It already contains necessary packages from incldued variants.
    if variant.uid in compose.conf['netinstall_variants']:
        compose.im.add(variant.uid, arch, img)
    compose.im.add(variant.uid, arch, img)

    compose.log_info("[DONE ] %s" % msg)
@ -719,8 +710,8 @@ class BuildinstallThread(WorkerThread):
        # input on RPM level.
        cmd_copy = copy(cmd)
        for key in ["outputdir", "sources"]:
            cmd_copy.pop(key, None)
            old_metadata["cmd"].pop(key, None)
            del cmd_copy[key]
            del old_metadata["cmd"][key]

        # Do not reuse if command line arguments are not the same.
        if old_metadata["cmd"] != cmd_copy:
@ -835,13 +826,13 @@ class BuildinstallThread(WorkerThread):
        # Start the runroot task.
        runroot = Runroot(compose, phase="buildinstall")
        task_id = None
        if buildinstall_method == "lorax" and lorax_use_koji_plugin:
            task_id = runroot.run_pungi_buildinstall(
            runroot.run_pungi_buildinstall(
                cmd,
                log_file=log_file,
                arch=arch,
                packages=packages,
                mounts=[compose.topdir],
                weight=compose.conf["runroot_weights"].get("buildinstall"),
            )
        else:
@ -874,17 +865,19 @@ class BuildinstallThread(WorkerThread):
            log_dir = os.path.join(output_dir, "logs")
            copy_all(log_dir, final_log_dir)
        elif lorax_use_koji_plugin:
            # If Koji pungi-buildinstall is used, then the buildinstall results
            # are attached as outputs to the Koji task. Download and unpack
            # them to the correct location.
            download_and_extract_archive(
                compose, task_id, "results.tar.gz", final_output_dir
            )

            # Download the logs into proper location too.
            log_fname = "buildinstall-%s-logs/dummy" % variant.uid
            final_log_dir = os.path.dirname(compose.paths.log.log_file(arch, log_fname))
            download_and_extract_archive(compose, task_id, "logs.tar.gz", final_log_dir)
            # If Koji pungi-buildinstall is used, then the buildinstall results are
            # not stored directly in `output_dir` dir, but in "results" and "logs"
            # subdirectories. We need to move them to final_output_dir.
            results_dir = os.path.join(output_dir, "results")
            move_all(results_dir, final_output_dir, rm_src_dir=True)

            # Get the log_dir into which we should copy the resulting log files.
            log_fname = "buildinstall-%s-logs/dummy" % variant.uid
            final_log_dir = os.path.dirname(compose.paths.log.log_file(arch, log_fname))
            if not os.path.exists(final_log_dir):
                makedirs(final_log_dir)
            log_dir = os.path.join(output_dir, "logs")
            move_all(log_dir, final_log_dir, rm_src_dir=True)

        rpms = runroot.get_buildroot_rpms()
        self._write_buildinstall_metadata(

View File

@ -14,7 +14,6 @@
# along with this program; if not, see <https://gnu.org/licenses/>.

import itertools
import os
import random
import shutil
@ -24,7 +23,7 @@ import json
import productmd.treeinfo
from productmd.images import Image
from kobo.threads import ThreadPool, WorkerThread
from kobo.shortcuts import run, relative_path, compute_file_checksums
from kobo.shortcuts import run, relative_path
from six.moves import shlex_quote
from pungi.wrappers import iso from pungi.wrappers import iso
@ -189,14 +188,6 @@ class CreateisoPhase(PhaseLoggerMixin, PhaseBase):
        if not old_config:
            self.logger.info("%s - no config for old compose", log_msg)
            return False
        # Disable reuse if unsigned packages are allowed. The older compose
        # could have unsigned packages, and those may have been signed since
        # then. We want to regenerate the ISO to have signatures.
        if None in self.compose.conf["sigkeys"]:
            self.logger.info("%s - unsigned packages are allowed", log_msg)
            return False
        # Convert current configuration to JSON and back to encode it similarly
        # to the old one
        config = json.loads(json.dumps(self.compose.conf))
@ -466,14 +457,7 @@ class CreateIsoThread(WorkerThread):
        try:
            run_createiso_command(
                num,
                compose,
                bootable,
                arch,
                cmd["cmd"],
                mounts,
                log_file,
                cmd["iso_path"],
            )
            run_createiso_command(
                num, compose, bootable, arch, cmd["cmd"], mounts, log_file
            )
        except Exception:
            self.fail(compose, cmd, variant, arch)
@ -540,10 +524,7 @@ def add_iso_to_metadata(
    setattr(img, "can_fail", compose.can_fail(variant, arch, "iso"))
    setattr(img, "deliverable", "iso")
    try:
        img.volume_id = iso.get_volume_id(
            iso_path,
            compose.conf.get("createiso_use_xorrisofs"),
        )
        img.volume_id = iso.get_volume_id(iso_path)
    except RuntimeError:
        pass
    if arch == "src":
@ -554,9 +535,7 @@ def add_iso_to_metadata(
    return img


def run_createiso_command(
    num, compose, bootable, arch, cmd, mounts, log_file, iso_path
):
def run_createiso_command(num, compose, bootable, arch, cmd, mounts, log_file):
    packages = [
        "coreutils",
        "xorriso" if compose.conf.get("createiso_use_xorrisofs") else "genisoimage",
@ -598,76 +577,6 @@ def run_createiso_command(
        weight=compose.conf["runroot_weights"].get("createiso"),
    )
if bootable and compose.conf.get("createiso_use_xorrisofs"):
fix_treeinfo_checksums(compose, iso_path, arch)
def fix_treeinfo_checksums(compose, iso_path, arch):
"""It is possible for the ISO to contain a .treefile with incorrect
checksums. By modifying the ISO (adding files) some of the images may
change.
This function fixes that after the fact by looking for incorrect checksums,
recalculating them and updating the .treeinfo file. Since the size of the
file doesn't change, this seems to not change any images.
"""
modified = False
with iso.mount(iso_path, compose._logger) as mountpoint:
ti = productmd.TreeInfo()
ti.load(os.path.join(mountpoint, ".treeinfo"))
for image, (type_, expected) in ti.checksums.checksums.items():
checksums = compute_file_checksums(os.path.join(mountpoint, image), [type_])
actual = checksums[type_]
if actual == expected:
# Everything fine here, skip to next image.
continue
compose.log_debug("%s: %s: checksum mismatch", iso_path, image)
# Update treeinfo with correct checksum
ti.checksums.checksums[image] = (type_, actual)
modified = True
if not modified:
compose.log_debug("%s: All checksums match, nothing to do.", iso_path)
return
try:
tmpdir = compose.mkdtemp(arch, prefix="fix-checksum-")
# Write modified .treeinfo
ti_path = os.path.join(tmpdir, ".treeinfo")
compose.log_debug("Storing modified .treeinfo in %s", ti_path)
ti.dump(ti_path)
# Write a modified DVD into a temporary path, that is atomically moved
# over the original file.
fixed_path = os.path.join(tmpdir, "fixed-checksum-dvd.iso")
cmd = ["xorriso"]
cmd.extend(
itertools.chain.from_iterable(
iso.xorriso_commands(arch, iso_path, fixed_path)
)
)
cmd.extend(["-map", ti_path, ".treeinfo"])
run(
cmd,
logfile=compose.paths.log.log_file(
arch, "checksum-fix_generate_%s" % os.path.basename(iso_path)
),
)
# The modified ISO no longer has implanted MD5, so that needs to be
# fixed again.
compose.log_debug("Implanting new MD5 to %s", fixed_path)
run(
iso.get_implantisomd5_cmd(fixed_path, compose.supported),
logfile=compose.paths.log.log_file(
arch, "checksum-fix_implantisomd5_%s" % os.path.basename(iso_path)
),
)
# All done, move the updated image to the final location.
compose.log_debug("Updating %s", iso_path)
os.rename(fixed_path, iso_path)
finally:
shutil.rmtree(tmpdir)
def split_iso(compose, arch, variant, no_split=False, logger=None):
    """

View File

@ -166,7 +166,6 @@ class ExtraIsosThread(WorkerThread):
            log_file=compose.paths.log.log_file(
                arch, "extraiso-%s" % os.path.basename(iso_path)
            ),
            iso_path=iso_path,
        )

        img = add_iso_to_metadata(
@ -205,14 +204,6 @@ class ExtraIsosThread(WorkerThread):
        if not old_config:
            self.pool.log_info("%s - no config for old compose", log_msg)
            return False
        # Disable reuse if unsigned packages are allowed. The older compose
        # could have unsigned packages, and those may have been signed since
        # then. We want to regenerate the ISO to have signatures.
        if None in compose.conf["sigkeys"]:
            self.pool.log_info("%s - unsigned packages are allowed", log_msg)
            return False
        # Convert current configuration to JSON and back to encode it similarly
        # to the old one
        config = json.loads(json.dumps(compose.conf))
@ -429,12 +420,6 @@ def get_iso_contents(
            original_treeinfo,
            os.path.join(extra_files_dir, ".treeinfo"),
        )
        tweak_repo_treeinfo(
            compose,
            include_variants,
            original_treeinfo,
            original_treeinfo,
        )

    # Add extra files specific for the ISO
    files.update(
@ -446,45 +431,6 @@ def get_iso_contents(
    return gp
def tweak_repo_treeinfo(compose, include_variants, source_file, dest_file):
"""
The method includes the variants to file .treeinfo of a variant. It takes
the variants which are described
by options `extra_isos -> include_variants`.
"""
ti = productmd.treeinfo.TreeInfo()
ti.load(source_file)
main_variant = next(iter(ti.variants))
for variant_uid in include_variants:
variant = compose.all_variants[variant_uid]
var = productmd.treeinfo.Variant(ti)
var.id = variant.id
var.uid = variant.uid
var.name = variant.name
var.type = variant.type
ti.variants.add(var)
for variant_id in ti.variants:
var = ti.variants[variant_id]
if variant_id == main_variant:
var.paths.packages = 'Packages'
var.paths.repository = '.'
else:
var.paths.packages = os.path.join(
'../../..',
var.uid,
var.arch,
'os/Packages',
)
var.paths.repository = os.path.join(
'../../..',
var.uid,
var.arch,
'os',
)
ti.dump(dest_file, main_variant=main_variant)
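Rendered into the .treeinfo format, the relative paths computed by the
removed tweak_repo_treeinfo() come out roughly like this for a hypothetical
included variant AppStream on x86_64:

    [variant-AppStream]
    id = AppStream
    uid = AppStream
    name = AppStream
    type = variant
    packages = ../../../AppStream/x86_64/os/Packages
    repository = ../../../AppStream/x86_64/os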
def tweak_treeinfo(compose, include_variants, source_file, dest_file):
    ti = load_and_tweak_treeinfo(source_file)
    for variant_uid in include_variants:
@ -500,6 +446,7 @@ def tweak_treeinfo(compose, include_variants, source_file, dest_file):
        var = ti.variants[variant_id]
        var.paths.packages = os.path.join(var.uid, "Packages")
        var.paths.repository = var.uid

    ti.dump(dest_file)

View File

@ -23,7 +23,6 @@ import threading
from kobo.rpmlib import parse_nvra
from kobo.shortcuts import run
from productmd.rpms import Rpms
from pungi.phases.pkgset.common import get_all_arches
from six.moves import cPickle as pickle

try:
@ -650,11 +649,6 @@ def _make_lookaside_repo(compose, variant, arch, pkg_map, package_sets=None):
            pungi.wrappers.kojiwrapper.KojiWrapper(compose).koji_module.config.topdir,
        ).rstrip("/")
        + "/",
        "kojimock": lambda: pungi.wrappers.kojiwrapper.KojiMockWrapper(
            compose,
            get_all_arches(compose),
        ).koji_module.config.topdir.rstrip("/")
        + "/",
    }
    path_prefix = prefixes[compose.conf["pkgset_source"]]()
    package_list = set()
@ -666,11 +660,6 @@ def _make_lookaside_repo(compose, variant, arch, pkg_map, package_sets=None):
        # we need a union of all SRPMs.
        if pkg_type == "srpm" or pkg_arch == arch:
            for pkg in packages:
if "lookaside" in pkg.get("flags", []):
# We want to ignore lookaside packages, those will
# be visible to the depending variants from the
# lookaside repo directly.
continue
                pkg = pkg["path"]
                if path_prefix and pkg.startswith(path_prefix):
                    pkg = pkg[len(path_prefix) :]

View File

@ -76,7 +76,7 @@ class ImageContainerThread(WorkerThread):
        )
        if koji.watch_task(task_id, log_file) != 0:
            raise RuntimeError(
                "ImageContainer task failed: %s. See %s for details"
                "ImageContainer: task %s failed: see %s for details"
                % (task_id, log_file)
            )

View File

@ -1,229 +0,0 @@
# -*- coding: utf-8 -*-
import os
from kobo.threads import ThreadPool, WorkerThread
from kobo import shortcuts
from productmd.images import Image
from . import base
from .. import util
from ..linker import Linker
from ..wrappers import kojiwrapper
from .image_build import EXTENSIONS
KIWIEXTENSIONS = [
("vhd-compressed", ["vhdfixed.xz"], "vhd.xz"),
("vagrant-libvirt", ["vagrant.libvirt.box"], "vagrant-libvirt.box"),
("vagrant-virtualbox", ["vagrant.virtualbox.box"], "vagrant-virtualbox.box"),
]
class KiwiBuildPhase(
base.PhaseLoggerMixin, base.ImageConfigMixin, base.ConfigGuardedPhase
):
name = "kiwibuild"
def __init__(self, compose):
super(KiwiBuildPhase, self).__init__(compose)
self.pool = ThreadPool(logger=self.logger)
def _get_arches(self, image_conf, arches):
"""Get an intersection of arches in the config dict and the given ones."""
if "arches" in image_conf:
arches = set(image_conf["arches"]) & arches
return sorted(arches)
@staticmethod
def _get_repo_urls(compose, repos, arch="$basearch"):
"""
Get list of repos with resolved repo URLs. Preserve repos defined
as dicts.
"""
resolved_repos = []
for repo in repos:
repo = util.get_repo_url(compose, repo, arch=arch)
if repo is None:
raise RuntimeError("Failed to resolve repo URL for %s" % repo)
resolved_repos.append(repo)
return resolved_repos
def _get_repo(self, image_conf, variant):
"""
Get a list of repos. First included are those explicitly listed in
config, followed by by repo for current variant if it's not included in
the list already.
"""
repos = shortcuts.force_list(image_conf.get("repos", []))
if not variant.is_empty and variant.uid not in repos:
repos.append(variant.uid)
return KiwiBuildPhase._get_repo_urls(self.compose, repos, arch="$arch")
def run(self):
for variant in self.compose.get_variants():
arches = set([x for x in variant.arches if x != "src"])
for image_conf in self.get_config_block(variant):
build_arches = self._get_arches(image_conf, arches)
if not build_arches:
self.log_debug("skip: no arches")
continue
# these properties can be set per-image *or* as e.g.
# kiwibuild_description_scm or global_release in the config
generics = {
"release": self.get_release(image_conf),
"target": self.get_config(image_conf, "target"),
"descscm": self.get_config(image_conf, "description_scm"),
"descpath": self.get_config(image_conf, "description_path"),
"type": self.get_config(image_conf, "type"),
"type_attr": self.get_config(image_conf, "type_attr"),
"bundle_name_format": self.get_config(
image_conf, "bundle_name_format"
),
}
repo = self._get_repo(image_conf, variant)
failable_arches = image_conf.pop("failable", [])
if failable_arches == ["*"]:
failable_arches = image_conf["arches"]
self.pool.add(RunKiwiBuildThread(self.pool))
self.pool.queue_put(
(
self.compose,
variant,
image_conf,
build_arches,
generics,
repo,
failable_arches,
)
)
self.pool.start()
class RunKiwiBuildThread(WorkerThread):
def process(self, item, num):
(compose, variant, config, arches, generics, repo, failable_arches) = item
self.failable_arches = failable_arches
# the Koji task as a whole can only fail if *all* arches are failable
can_task_fail = set(failable_arches).issuperset(set(arches))
self.num = num
with util.failable(
compose,
can_task_fail,
variant,
"*",
"kiwibuild",
logger=self.pool._logger,
):
self.worker(compose, variant, config, arches, generics, repo)
def worker(self, compose, variant, config, arches, generics, repo):
msg = "kiwibuild task for variant %s" % variant.uid
self.pool.log_info("[BEGIN] %s" % msg)
koji = kojiwrapper.KojiWrapper(compose)
koji.login()
task_id = koji.koji_proxy.kiwiBuild(
generics["target"],
arches,
generics["descscm"],
generics["descpath"],
profile=config["kiwi_profile"],
release=generics["release"],
repos=repo,
type=generics["type"],
type_attr=generics["type_attr"],
result_bundle_name_format=generics["bundle_name_format"],
# this ensures the task won't fail if only failable arches fail
optional_arches=self.failable_arches,
)
koji.save_task_id(task_id)
# Wait for it to finish and capture the output into log file.
log_dir = os.path.join(compose.paths.log.topdir(), "kiwibuild")
util.makedirs(log_dir)
log_file = os.path.join(
log_dir, "%s-%s-watch-task.log" % (variant.uid, self.num)
)
if koji.watch_task(task_id, log_file) != 0:
raise RuntimeError(
"kiwiBuild task failed: %s. See %s for details" % (task_id, log_file)
)
# Refresh koji session which may have timed out while the task was
# running. Watching is done via a subprocess, so the session is
# inactive.
koji = kojiwrapper.KojiWrapper(compose)
linker = Linker(logger=self.pool._logger)
# Process all images in the build. There should be one for each
# architecture, but we don't verify that.
paths = koji.get_image_paths(task_id)
for arch, paths in paths.items():
for path in paths:
type_, format_ = _find_type_and_format(path)
if not format_:
# Path doesn't match any known type.
continue
# image_dir is absolute path to which the image should be copied.
# We also need the same path as relative to compose directory for
# including in the metadata.
image_dir = compose.paths.compose.image_dir(variant) % {"arch": arch}
rel_image_dir = compose.paths.compose.image_dir(
variant, relative=True
) % {"arch": arch}
util.makedirs(image_dir)
filename = os.path.basename(path)
image_dest = os.path.join(image_dir, filename)
src_file = compose.koji_downloader.get_file(path)
linker.link(src_file, image_dest, link_type=compose.conf["link_type"])
# Update image manifest
img = Image(compose.im)
# Get the manifest type from the config if supplied, otherwise we
# determine the manifest type based on the koji output
img.type = type_
img.format = format_
img.path = os.path.join(rel_image_dir, filename)
img.mtime = util.get_mtime(image_dest)
img.size = util.get_file_size(image_dest)
img.arch = arch
img.disc_number = 1 # We don't expect multiple disks
img.disc_count = 1
img.bootable = False
img.subvariant = config.get("subvariant", variant.uid)
setattr(img, "can_fail", arch in self.failable_arches)
setattr(img, "deliverable", "kiwibuild")
compose.im.add(variant=variant.uid, arch=arch, image=img)
self.pool.log_info("[DONE ] %s (task id: %s)" % (msg, task_id))
def _find_type_and_format(path):
for type_, suffixes in EXTENSIONS.items():
for suffix in suffixes:
if path.endswith(suffix):
return type_, suffix
# these are our kiwi-exclusive mappings for images whose extensions
# aren't quite the same as imagefactory
for type_, suffixes, format_ in KIWIEXTENSIONS:
if any(path.endswith(suffix) for suffix in suffixes):
return type_, format_
return None, None
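The fallback table above means, for instance (a sketch of the mapping; the
filenames are invented):

    # _find_type_and_format("Fedora.x86_64.vagrant.libvirt.box")
    #   -> ("vagrant-libvirt", "vagrant-libvirt.box")
    # _find_type_and_format("Fedora.x86_64.qcow2") is resolved by the shared
    #   EXTENSIONS table imported from image_build.
    # _find_type_and_format("README.txt") -> (None, None)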

406
pungi/phases/live_images.py Normal file
View File

@ -0,0 +1,406 @@
# -*- coding: utf-8 -*-
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; version 2 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Library General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, see <https://gnu.org/licenses/>.
import os
import sys
import time
import shutil
from kobo.threads import ThreadPool, WorkerThread
from kobo.shortcuts import run, save_to_file, force_list
from productmd.images import Image
from six.moves import shlex_quote
from pungi.wrappers.kojiwrapper import KojiWrapper
from pungi.wrappers import iso
from pungi.phases import base
from pungi.util import makedirs, get_mtime, get_file_size, failable
from pungi.util import get_repo_urls
# HACK: define cmp in python3
if sys.version_info[0] == 3:
def cmp(a, b):
return (a > b) - (a < b)
class LiveImagesPhase(
base.PhaseLoggerMixin, base.ImageConfigMixin, base.ConfigGuardedPhase
):
name = "live_images"
def __init__(self, compose):
super(LiveImagesPhase, self).__init__(compose)
self.pool = ThreadPool(logger=self.logger)
def _get_repos(self, arch, variant, data):
repos = []
if not variant.is_empty:
repos.append(variant.uid)
repos.extend(force_list(data.get("repo", [])))
return get_repo_urls(self.compose, repos, arch=arch)
def run(self):
symlink_isos_to = self.compose.conf.get("symlink_isos_to")
commands = []
for variant in self.compose.all_variants.values():
for arch in variant.arches + ["src"]:
for data in self.get_config_block(variant, arch):
subvariant = data.get("subvariant", variant.uid)
type = data.get("type", "live")
if type == "live":
dest_dir = self.compose.paths.compose.iso_dir(
arch, variant, symlink_to=symlink_isos_to
)
elif type == "appliance":
dest_dir = self.compose.paths.compose.image_dir(
variant, symlink_to=symlink_isos_to
)
dest_dir = dest_dir % {"arch": arch}
makedirs(dest_dir)
else:
raise RuntimeError("Unknown live image type %s" % type)
if not dest_dir:
continue
cmd = {
"name": data.get("name"),
"version": self.get_version(data),
"release": self.get_release(data),
"dest_dir": dest_dir,
"build_arch": arch,
"ks_file": data["kickstart"],
"ksurl": self.get_ksurl(data),
# Used for images wrapped in RPM
"specfile": data.get("specfile", None),
# Scratch (only taken into consideration if specfile is
# specified). For images wrapped in RPM, scratch is disabled
# by default; for other images, scratch is always on.
"scratch": data.get("scratch", False),
"sign": False,
"type": type,
"label": "", # currently not used
"subvariant": subvariant,
"failable_arches": data.get("failable", []),
# First see if live_target is specified, then fall back
# to regular setup of local, phase and global setting.
"target": self.compose.conf.get("live_target")
or self.get_config(data, "target"),
}
cmd["repos"] = self._get_repos(arch, variant, data)
# Signing of the rpm wrapped image
if not cmd["scratch"] and data.get("sign"):
cmd["sign"] = True
cmd["filename"] = self._get_file_name(
arch, variant, cmd["name"], cmd["version"]
)
commands.append((cmd, variant, arch))
for cmd, variant, arch in commands:
self.pool.add(CreateLiveImageThread(self.pool))
self.pool.queue_put((self.compose, cmd, variant, arch))
self.pool.start()
def _get_file_name(self, arch, variant, name=None, version=None):
if self.compose.conf["live_images_no_rename"]:
return None
disc_type = self.compose.conf["disc_types"].get("live", "live")
format = (
"%(compose_id)s-%(variant)s-%(arch)s-%(disc_type)s%(disc_num)s%(suffix)s"
)
# Custom name (prefix)
if name:
custom_iso_name = name
if version:
custom_iso_name += "-%s" % version
format = (
custom_iso_name
+ "-%(variant)s-%(arch)s-%(disc_type)s%(disc_num)s%(suffix)s"
)
# XXX: hardcoded disc_num
return self.compose.get_image_name(
arch, variant, disc_type=disc_type, disc_num=None, format=format
)
class CreateLiveImageThread(WorkerThread):
EXTS = (".iso", ".raw.xz")
def process(self, item, num):
compose, cmd, variant, arch = item
self.failable_arches = cmd.get("failable_arches", [])
self.can_fail = bool(self.failable_arches)
with failable(
compose,
self.can_fail,
variant,
arch,
"live",
cmd.get("subvariant"),
logger=self.pool._logger,
):
self.worker(compose, cmd, variant, arch, num)
def worker(self, compose, cmd, variant, arch, num):
self.basename = "%(name)s-%(version)s-%(release)s" % cmd
log_file = compose.paths.log.log_file(arch, "liveimage-%s" % self.basename)
subvariant = cmd.pop("subvariant")
imgname = "%s-%s-%s-%s" % (
compose.ci_base.release.short,
subvariant,
"Live" if cmd["type"] == "live" else "Disk",
arch,
)
msg = "Creating ISO (arch: %s, variant: %s): %s" % (
arch,
variant,
self.basename,
)
self.pool.log_info("[BEGIN] %s" % msg)
koji_wrapper = KojiWrapper(compose)
_, version = compose.compose_id.rsplit("-", 1)
name = cmd["name"] or imgname
version = cmd["version"] or version
archive = False
if cmd["specfile"] and not cmd["scratch"]:
# Non scratch build are allowed only for rpm wrapped images
archive = True
koji_cmd = koji_wrapper.get_create_image_cmd(
name,
version,
cmd["target"],
cmd["build_arch"],
cmd["ks_file"],
cmd["repos"],
image_type=cmd["type"],
wait=True,
archive=archive,
specfile=cmd["specfile"],
release=cmd["release"],
ksurl=cmd["ksurl"],
)
# avoid race conditions?
# Kerberos authentication failed:
# Permission denied in replay cache code (-1765328215)
time.sleep(num * 3)
output = koji_wrapper.run_blocking_cmd(koji_cmd, log_file=log_file)
if output["retcode"] != 0:
raise RuntimeError(
"LiveImage task failed: %s. See %s for more details."
% (output["task_id"], log_file)
)
# copy finished image to isos/
image_path = [
path
for path in koji_wrapper.get_image_path(output["task_id"])
if self._is_image(path)
]
if len(image_path) != 1:
raise RuntimeError(
"Got %d images from task %d, expected 1."
% (len(image_path), output["task_id"])
)
image_path = compose.koji_downloader.get_file(image_path[0])
filename = cmd.get("filename") or os.path.basename(image_path)
destination = os.path.join(cmd["dest_dir"], filename)
shutil.copy2(image_path, destination)
# copy finished rpm to isos/ (if rpm wrapped ISO was built)
if cmd["specfile"]:
rpm_paths = koji_wrapper.get_wrapped_rpm_path(output["task_id"])
if cmd["sign"]:
# Sign the rpm wrapped images and get their paths
self.pool.log_info(
"Signing rpm wrapped images in task_id: %s (expected key ID: %s)"
% (output["task_id"], compose.conf.get("signing_key_id"))
)
signed_rpm_paths = self._sign_image(
koji_wrapper, compose, cmd, output["task_id"]
)
if signed_rpm_paths:
rpm_paths = signed_rpm_paths
for rpm_path in rpm_paths:
shutil.copy2(rpm_path, cmd["dest_dir"])
if cmd["type"] == "live":
# ISO manifest only makes sense for live images
self._write_manifest(destination)
self._add_to_images(
compose,
variant,
subvariant,
arch,
cmd["type"],
self._get_format(image_path),
destination,
)
self.pool.log_info("[DONE ] %s (task id: %s)" % (msg, output["task_id"]))
def _add_to_images(self, compose, variant, subvariant, arch, type, format, path):
"""Adds the image to images.json"""
img = Image(compose.im)
img.type = "raw-xz" if type == "appliance" else type
img.format = format
img.path = os.path.relpath(path, compose.paths.compose.topdir())
img.mtime = get_mtime(path)
img.size = get_file_size(path)
img.arch = arch
img.disc_number = 1 # We don't expect multiple disks
img.disc_count = 1
img.bootable = True
img.subvariant = subvariant
setattr(img, "can_fail", self.can_fail)
setattr(img, "deliverable", "live")
compose.im.add(variant=variant.uid, arch=arch, image=img)
def _is_image(self, path):
for ext in self.EXTS:
if path.endswith(ext):
return True
return False
def _get_format(self, path):
"""Get format based on extension."""
for ext in self.EXTS:
if path.endswith(ext):
return ext[1:]
raise RuntimeError("Getting format for unknown image %s" % path)
def _write_manifest(self, iso_path):
"""Generate manifest for ISO at given path.
:param iso_path: (str) absolute path to the ISO
"""
dir, filename = os.path.split(iso_path)
run("cd %s && %s" % (shlex_quote(dir), iso.get_manifest_cmd(filename)))
def _sign_image(self, koji_wrapper, compose, cmd, koji_task_id):
signing_key_id = compose.conf.get("signing_key_id")
signing_command = compose.conf.get("signing_command")
if not signing_key_id:
self.pool.log_warning(
"Signing is enabled but signing_key_id is not specified"
)
self.pool.log_warning("Signing skipped")
return None
if not signing_command:
self.pool.log_warning(
"Signing is enabled but signing_command is not specified"
)
self.pool.log_warning("Signing skipped")
return None
# Prepare signing log file
signing_log_file = compose.paths.log.log_file(
cmd["build_arch"], "live_images-signing-%s" % self.basename
)
# Sign the rpm wrapped images
try:
sign_builds_in_task(
koji_wrapper,
koji_task_id,
signing_command,
log_file=signing_log_file,
signing_key_password=compose.conf.get("signing_key_password"),
)
except RuntimeError:
self.pool.log_error(
"Error while signing rpm wrapped images. See log: %s" % signing_log_file
)
raise
# Get paths to the signed rpms
signing_key_id = signing_key_id.lower() # Koji uses lowercase in paths
rpm_paths = koji_wrapper.get_signed_wrapped_rpms_paths(
koji_task_id, signing_key_id
)
# Wait until files are available
if wait_paths(rpm_paths, 60 * 15):
# Files are ready
return rpm_paths
# Signed RPMs are not available
self.pool.log_warning("Signed files are not available: %s" % rpm_paths)
self.pool.log_warning("Unsigned files will be used")
return None
def wait_paths(paths, timeout=60):
started = time.time()
remaining = paths[:]
while True:
for path in remaining[:]:
if os.path.exists(path):
remaining.remove(path)
if not remaining:
break
time.sleep(1)
if timeout >= 0 and (time.time() - started) > timeout:
return False
return True
def sign_builds_in_task(
koji_wrapper, task_id, signing_command, log_file=None, signing_key_password=None
):
# Get list of nvrs that should be signed
nvrs = koji_wrapper.get_build_nvrs(task_id)
if not nvrs:
# No builds are available (scratch build, etc.?)
return
# Append builds to sign_cmd
for nvr in nvrs:
signing_command += " '%s'" % nvr
# Log signing command before password is filled in it
if log_file:
save_to_file(log_file, signing_command, append=True)
# Fill password into the signing command
if signing_key_password:
signing_command = signing_command % {
"signing_key_password": signing_key_password
}
# Sign the builds
run(signing_command, can_fail=False, show_cmd=False, logfile=log_file)
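Tying this back to the signing options added to the schema in checks.py, a
configuration would look something like this (the rpm-sign invocation is a
made-up stand-in; the only contract visible in this code is that quoted NVRs
are appended to the command and %(signing_key_password)s is substituted):

    signing_key_id = "a1b2c3d4"
    signing_key_password_file = "~/.sign-password"
    signing_command = "rpm-sign --key a1b2c3d4 --pass %(signing_key_password)s"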

View File

@ -134,7 +134,7 @@ class OSBSThread(WorkerThread):
        # though there is not much there).
        if koji.watch_task(task_id, log_file) != 0:
            raise RuntimeError(
                "OSBS task failed: %s. See %s for details" % (task_id, log_file)
                "OSBS: task %s failed: see %s for details" % (task_id, log_file)
            )

        scratch = config.get("scratch", False)
@ -154,7 +154,7 @@ class OSBSThread(WorkerThread):
            reuse_file,
        )

        self.pool.log_info("[DONE ] %s (task id: %s)" % (msg, task_id))
        self.pool.log_info("[DONE ] %s" % msg)

    def _get_image_conf(self, compose, config):
        """Get image-build.conf from git repo.

View File

@ -159,10 +159,6 @@ class RunOSBuildThread(WorkerThread):
        if upload_options:
            opts["upload_options"] = upload_options

        customizations = config.get("customizations")
        if customizations:
            opts["customizations"] = customizations

        if release:
            opts["release"] = release

        task_id = koji.koji_proxy.osbuildImage(
@ -185,7 +181,7 @@ class RunOSBuildThread(WorkerThread):
) )
if koji.watch_task(task_id, log_file) != 0: if koji.watch_task(task_id, log_file) != 0:
raise RuntimeError( raise RuntimeError(
"OSBuild task failed: %s. See %s for details" % (task_id, log_file) "OSBuild: task %s failed: see %s for details" % (task_id, log_file)
) )
# Refresh koji session which may have timed out while the task was # Refresh koji session which may have timed out while the task was

View File

@@ -85,7 +85,7 @@ class OSTreeThread(WorkerThread):
 comps_repo = compose.paths.work.comps_repo(
 "$basearch", variant=variant, create_dir=False
 )
-repos = shortcuts.force_list(config.get("repo", [])) + self.repos
+repos = shortcuts.force_list(config["repo"]) + self.repos
 if compose.has_comps:
 repos.append(translate_path(compose, comps_repo))
 repos = get_repo_dicts(repos, logger=self.pool)

View File

@@ -1,190 +0,0 @@
# -*- coding: utf-8 -*-
import copy
import json
import os
from kobo import shortcuts
from kobo.threads import ThreadPool, WorkerThread
from productmd.images import Image
from pungi.runroot import Runroot
from .base import ConfigGuardedPhase
from .. import util
from ..util import get_repo_dicts, translate_path
from ..wrappers import scm
class OSTreeContainerPhase(ConfigGuardedPhase):
name = "ostree_container"
def __init__(self, compose, pkgset_phase=None):
super(OSTreeContainerPhase, self).__init__(compose)
self.pool = ThreadPool(logger=self.compose._logger)
self.pkgset_phase = pkgset_phase
def get_repos(self):
return [
translate_path(
self.compose,
self.compose.paths.work.pkgset_repo(
pkgset.name, "$basearch", create_dir=False
),
)
for pkgset in self.pkgset_phase.package_sets
]
def _enqueue(self, variant, arch, conf):
self.pool.add(OSTreeContainerThread(self.pool, self.get_repos()))
self.pool.queue_put((self.compose, variant, arch, conf))
def run(self):
if isinstance(self.compose.conf.get(self.name), dict):
for variant in self.compose.get_variants():
for conf in self.get_config_block(variant):
for arch in conf.get("arches", []) or variant.arches:
self._enqueue(variant, arch, conf)
else:
# Legacy code path to support original configuration.
for variant in self.compose.get_variants():
for arch in variant.arches:
for conf in self.get_config_block(variant, arch):
self._enqueue(variant, arch, conf)
self.pool.start()
class OSTreeContainerThread(WorkerThread):
def __init__(self, pool, repos):
super(OSTreeContainerThread, self).__init__(pool)
self.repos = repos
def process(self, item, num):
compose, variant, arch, config = item
self.num = num
failable_arches = config.get("failable", [])
self.can_fail = util.can_arch_fail(failable_arches, arch)
with util.failable(compose, self.can_fail, variant, arch, "ostree-container"):
self.worker(compose, variant, arch, config)
def worker(self, compose, variant, arch, config):
msg = "OSTree container phase for variant %s, arch %s" % (variant.uid, arch)
self.pool.log_info("[BEGIN] %s" % msg)
workdir = compose.paths.work.topdir("ostree-container-%d" % self.num)
self.logdir = compose.paths.log.topdir(
"%s/%s/ostree-container-%d" % (arch, variant.uid, self.num)
)
repodir = os.path.join(workdir, "config_repo")
self._clone_repo(
compose,
repodir,
config["config_url"],
config.get("config_branch", "main"),
)
repos = shortcuts.force_list(config.get("repo", [])) + self.repos
repos = get_repo_dicts(repos, logger=self.pool)
# copy the original config and update before save to a json file
new_config = copy.copy(config)
# repos in configuration can have repo url set to variant UID,
# update it to have the actual url that we just translated.
new_config.update({"repo": repos})
# remove elements that are unnecessary for the 'pungi-make-ostree container'
# script; keeping them would not hurt, but removing them
# reduces confusion
for k in [
"treefile",
"config_url",
"config_branch",
"failable",
"version",
]:
new_config.pop(k, None)
# write a json file to save the configuration, so 'pungi-make-ostree
# container' can make use of it
extra_config_file = os.path.join(workdir, "extra_config.json")
with open(extra_config_file, "w") as f:
json.dump(new_config, f, indent=4)
self._run_ostree_container_cmd(
compose, variant, arch, config, repodir, extra_config_file=extra_config_file
)
self.pool.log_info("[DONE ] %s" % (msg))
def _run_ostree_container_cmd(
self, compose, variant, arch, config, config_repo, extra_config_file=None
):
target_dir = compose.paths.compose.image_dir(variant) % {"arch": arch}
util.makedirs(target_dir)
version = util.version_generator(compose, config.get("version"))
archive_name = "%s-%s-%s" % (
compose.conf["release_short"],
variant.uid,
version,
)
# Run the pungi-make-ostree command locally to create a script to
# execute in runroot environment.
cmd = [
"pungi-make-ostree",
"container",
"--log-dir=%s" % self.logdir,
"--name=%s" % archive_name,
"--path=%s" % target_dir,
"--treefile=%s" % os.path.join(config_repo, config["treefile"]),
"--extra-config=%s" % extra_config_file,
"--version=%s" % version,
]
_, runroot_script = shortcuts.run(cmd, universal_newlines=True)
default_packages = ["ostree", "rpm-ostree", "selinux-policy-targeted"]
additional_packages = config.get("runroot_packages", [])
packages = default_packages + additional_packages
log_file = os.path.join(self.logdir, "runroot.log")
# TODO: Use to get previous build
mounts = [compose.topdir]
runroot = Runroot(compose, phase="ostree_container")
runroot.run(
" && ".join(runroot_script.splitlines()),
log_file=log_file,
arch=arch,
packages=packages,
mounts=mounts,
new_chroot=True,
weight=compose.conf["runroot_weights"].get("ostree"),
)
fullpath = os.path.join(target_dir, "%s.ociarchive" % archive_name)
# Update image manifest
img = Image(compose.im)
# Get the manifest type from the config if supplied, otherwise we
# determine the manifest type based on the koji output
img.type = "ociarchive"
img.format = "ociarchive"
img.path = os.path.relpath(fullpath, compose.paths.compose.topdir())
img.mtime = util.get_mtime(fullpath)
img.size = util.get_file_size(fullpath)
img.arch = arch
img.disc_number = 1
img.disc_count = 1
img.bootable = False
img.subvariant = config.get("subvariant", variant.uid)
setattr(img, "can_fail", self.can_fail)
setattr(img, "deliverable", "ostree-container")
compose.im.add(variant=variant.uid, arch=arch, image=img)
def _clone_repo(self, compose, repodir, url, branch):
scm.get_dir_from_scm(
{"scm": "git", "repo": url, "branch": branch, "dir": "."},
repodir,
compose=compose,
)
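Judging from the keys the thread reads above (config_url, config_branch, treefile, repo, version, subvariant, failable, runroot_packages), a configuration block for this phase would have looked roughly like the sketch below; the exact schema is not part of this diff, so treat every value as illustrative:
```
# Hypothetical 'ostree_container' config block, inferred from the keys
# consumed by OSTreeContainerThread; all values are placeholders.
ostree_container = [
    {
        "treefile": "example-ostree.yaml",
        "config_url": "https://example.com/ostree-config.git",
        "config_branch": "main",
        "repo": ["Everything"],
        "version": "1.0",
        "subvariant": "Atomic",
        "failable": ["*"],
        "runroot_packages": ["skopeo"],
    }
]
```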

View File

@@ -23,8 +23,6 @@ import itertools
 import json
 import os
 import time
-import pgpy
-import rpm
 from six.moves import cPickle as pickle
 from functools import partial
@@ -154,15 +152,9 @@ class PackageSetBase(kobo.log.LoggingBase):
 """
 def nvr_formatter(package_info):
-epoch_suffix = ''
-if package_info['epoch'] is not None:
-epoch_suffix = ':' + package_info['epoch']
-return (
-f"{package_info['name']}"
-f"{epoch_suffix}-"
-f"{package_info['version']}-"
-f"{package_info['release']}."
-f"{package_info['arch']}"
+# joins NVR parts of the package with '-' character.
+return "-".join(
+(package_info["name"], package_info["version"], package_info["release"])
 )
 def get_error(sigkeys, infos):
@@ -511,8 +503,7 @@ class KojiPackageSet(PackageSetBase):
 response = None
 if self.cache_region:
-cache_key = "%s.get_latest_rpms_%s_%s_%s" % (
-str(self.__class__.__name__),
+cache_key = "KojiPackageSet.get_latest_rpms_%s_%s_%s" % (
 str(tag),
 str(event),
 str(inherit),
@@ -534,8 +525,6 @@ class KojiPackageSet(PackageSetBase):
 return response
 def get_package_path(self, queue_item):
 rpm_info, build_info = queue_item
@@ -772,7 +761,6 @@ class KojiPackageSet(PackageSetBase):
 "exclusive_noarch": compose.conf[
 "pkgset_exclusive_arch_considers_noarch"
 ],
-"module_defaults_dir": compose.conf.get("module_defaults_dir"),
 },
 f,
 protocol=pickle.HIGHEST_PROTOCOL,
@@ -869,7 +857,6 @@ class KojiPackageSet(PackageSetBase):
 inherit_to_noarch = compose.conf["pkgset_inherit_exclusive_arch_to_noarch"]
 exclusive_noarch = compose.conf["pkgset_exclusive_arch_considers_noarch"]
-module_defaults_dir = compose.conf.get("module_defaults_dir")
 if (
 reuse_data["allow_invalid_sigkeys"] == self._allow_invalid_sigkeys
 and reuse_data["packages"] == self.packages
@@ -881,7 +868,6 @@ class KojiPackageSet(PackageSetBase):
 # generated with older version of Pungi. Best to not reuse.
 and reuse_data.get("inherit_to_noarch") == inherit_to_noarch
 and reuse_data.get("exclusive_noarch") == exclusive_noarch
-and reuse_data.get("module_defaults_dir") == module_defaults_dir
 ):
 self.log_info("Copying repo data for reuse: %s" % old_repo_dir)
 copy_all(old_repo_dir, repo_dir)
@@ -896,67 +882,6 @@ class KojiPackageSet(PackageSetBase):
 return False
class KojiMockPackageSet(KojiPackageSet):
def _is_rpm_signed(self, rpm_path) -> bool:
ts = rpm.TransactionSet()
ts.setVSFlags(rpm._RPMVSF_NOSIGNATURES)
sigkeys = [
sigkey.lower() for sigkey in self.sigkey_ordering
if sigkey is not None
]
if not sigkeys:
return True
with open(rpm_path, 'rb') as fd:
header = ts.hdrFromFdno(fd)
signature = header[rpm.RPMTAG_SIGGPG] or header[rpm.RPMTAG_SIGPGP]
if signature is None:
return False
pgp_msg = pgpy.PGPMessage.from_blob(signature)
return any(
signature.signer.lower() in sigkeys
for signature in pgp_msg.signatures
)
def get_package_path(self, queue_item):
rpm_info, build_info = queue_item
# Check if this RPM is coming from scratch task.
# In this case, we already know the path.
if "path_from_task" in rpm_info:
return rpm_info["path_from_task"]
# We replaced this part because pungi guesses the package path
# on koji based on the sigkey; we don't need that, because all
# our packages will be ready for release.
# Signature verification is still done during deps resolution.
pathinfo = self.koji_wrapper.koji_module.pathinfo
rpm_path = os.path.join(pathinfo.topdir, pathinfo.rpm(rpm_info))
if os.path.isfile(rpm_path):
if not self._is_rpm_signed(rpm_path):
self._invalid_sigkey_rpms.append(rpm_info)
self.log_error(
'RPM "%s" not found for sigs: "%s". Path checked: "%s"',
rpm_info, self.sigkey_ordering, rpm_path
)
return
return rpm_path
else:
self.log_warning("RPM %s not found" % rpm_path)
return None
def populate(self, tag, event=None, inherit=True, include_packages=None):
result = super().populate(
tag=tag,
event=event,
inherit=inherit,
include_packages=include_packages,
)
return result
 def _is_src(rpm_info):
 """Check if rpm info object returned by Koji refers to source packages."""
 return rpm_info["arch"] in ("src", "nosrc")

View File

@@ -15,10 +15,8 @@
 from .source_koji import PkgsetSourceKoji
 from .source_repos import PkgsetSourceRepos
-from .source_kojimock import PkgsetSourceKojiMock
 ALL_SOURCES = {
 "koji": PkgsetSourceKoji,
 "repos": PkgsetSourceRepos,
-"kojimock": PkgsetSourceKojiMock,
 }

View File

@@ -222,7 +222,6 @@ def _add_module_to_variant(
 """
 mmds = {}
 archives = koji_wrapper.koji_proxy.listArchives(build["id"])
-available_arches = set()
 for archive in archives:
 if archive["btype"] != "module":
 # Skip non module archives
@@ -236,9 +235,7 @@ def _add_module_to_variant(
 # in basearch. This assumes that each arch in the build maps to a
 # unique basearch.
 _, arch, _ = filename.split(".")
-basearch = getBaseArch(arch)
-filename = "modulemd.%s.txt" % basearch
-available_arches.add(basearch)
+filename = "modulemd.%s.txt" % getBaseArch(arch)
 except ValueError:
 pass
 mmds[filename] = file_path
@@ -263,12 +260,6 @@ def _add_module_to_variant(
 compose.log_debug("Module %s is filtered from %s.%s", nsvc, variant, arch)
 continue
-if arch not in available_arches:
-compose.log_debug(
-"Module %s is not available for arch %s.%s", nsvc, variant, arch
-)
-continue
 filename = "modulemd.%s.txt" % arch
 if filename not in mmds:
 raise RuntimeError(

File diff suppressed because it is too large.

View File

@@ -13,19 +13,13 @@
 # You should have received a copy of the GNU General Public License
 # along with this program; if not, see <https://gnu.org/licenses/>.
-import contextlib
 import os
 import re
-import shutil
-import tarfile
-import requests
 import six
 from six.moves import shlex_quote
 import kobo.log
 from kobo.shortcuts import run
-from pungi import util
 from pungi.wrappers import kojiwrapper
@@ -100,7 +94,7 @@ class Runroot(kobo.log.LoggingBase):
 log_file = os.path.join(log_dir, "program.log")
 try:
 with open(log_file) as f:
-for line in f.readlines():
+for line in f:
 if "losetup: cannot find an unused loop device" in line:
 return True
 if re.match("losetup: .* failed to set up loop device", line):
@@ -236,9 +230,9 @@ class Runroot(kobo.log.LoggingBase):
 fmt_dict["runroot_key"] = runroot_key
 self._ssh_run(hostname, user, run_template, fmt_dict, log_file=log_file)
-fmt_dict["command"] = (
-"rpm -qa --qf='%{name}-%{version}-%{release}.%{arch}\n'"
-)
+fmt_dict[
+"command"
+] = "rpm -qa --qf='%{name}-%{version}-%{release}.%{arch}\n'"
 buildroot_rpms = self._ssh_run(
 hostname,
 user,
@@ -320,8 +314,7 @@ class Runroot(kobo.log.LoggingBase):
 arch,
 args,
 channel=runroot_channel,
-# We want to change owner only if shared NFS directory is used.
-chown_uid=os.getuid() if kwargs.get("mounts") else None,
+chown_uid=os.getuid(),
 **kwargs
 )
@@ -332,7 +325,6 @@
 % (output["task_id"], log_file)
 )
 self._result = output
-return output["task_id"]
 def run_pungi_ostree(self, args, log_file=None, arch=None, **kwargs):
 """
@@ -389,72 +381,3 @@
 return self._result
 else:
 raise ValueError("Unknown runroot_method %r." % self.runroot_method)
@util.retry(wait_on=requests.exceptions.RequestException)
def _download_file(url, dest):
# contextlib.closing is only needed in requests<2.18
with contextlib.closing(requests.get(url, stream=True, timeout=5)) as r:
if r.status_code == 404:
raise RuntimeError("Archive %s not found" % url)
r.raise_for_status()
with open(dest, "wb") as f:
shutil.copyfileobj(r.raw, f)
def _download_archive(task_id, fname, archive_url, dest_dir):
"""Download file from URL to a destination, with retries."""
temp_file = os.path.join(dest_dir, fname)
_download_file(archive_url, temp_file)
return temp_file
def _extract_archive(task_id, fname, archive_file, dest_path):
"""Extract the archive into given destination.
All items of the archive must match the name of the archive, i.e. all
paths in foo.tar.gz must start with foo/.
"""
basename = os.path.basename(fname).split(".")[0]
strip_prefix = basename + "/"
with tarfile.open(archive_file, "r") as archive:
for member in archive.getmembers():
# Check if each item is either the root directory or is within it.
if member.name != basename and not member.name.startswith(strip_prefix):
raise RuntimeError(
"Archive %s from task %s contains file without expected prefix: %s"
% (fname, task_id, member)
)
dest = os.path.join(dest_path, member.name[len(strip_prefix) :])
if member.isdir():
# Create directories where needed...
util.makedirs(dest)
elif member.isfile():
# ... and extract files into them.
with open(dest, "wb") as dest_obj:
shutil.copyfileobj(archive.extractfile(member), dest_obj)
elif member.islnk():
# We have a hardlink. Let's also link it.
linked_file = os.path.join(
dest_path, member.linkname[len(strip_prefix) :]
)
os.link(linked_file, dest)
else:
# Any other file type is an error.
raise RuntimeError(
"Unexpected file type in %s from task %s: %s"
% (fname, task_id, member)
)
def download_and_extract_archive(compose, task_id, fname, destination):
"""Download a tar archive from task outputs and extract it to the destination."""
koji = kojiwrapper.KojiWrapper(compose).koji_module
# Koji API provides a downloadTaskOutput method, but it's not usable as it
# will attempt to load the entire file into memory.
# So instead let's generate a path and attempt to convert it to a URL.
server_path = os.path.join(koji.pathinfo.task(task_id), fname)
archive_url = server_path.replace(koji.config.topdir, koji.config.topurl)
with util.temp_dir(prefix="buildinstall-download") as tmp_dir:
local_path = _download_archive(task_id, fname, archive_url, tmp_dir)
_extract_archive(task_id, fname, local_path, destination)

View File

@@ -128,6 +128,7 @@ def run(config, topdir, has_old, offline, defined_variables, schema_overrides):
 pungi.phases.OSTreePhase(compose),
 pungi.phases.CreateisoPhase(compose, buildinstall_phase),
 pungi.phases.ExtraIsosPhase(compose, buildinstall_phase),
+pungi.phases.LiveImagesPhase(compose),
 pungi.phases.LiveMediaPhase(compose),
 pungi.phases.ImageBuildPhase(compose),
 pungi.phases.ImageChecksumPhase(compose),

View File

@@ -1,441 +0,0 @@
# coding=utf-8
import argparse
import os
import subprocess
import tempfile
from shutil import rmtree
from typing import (
AnyStr,
List,
Dict,
Optional,
)
import createrepo_c as cr
import requests
import yaml
from dataclasses import dataclass, field
from .create_packages_json import (
PackagesGenerator,
RepoInfo,
VariantInfo,
)
@dataclass
class ExtraVariantInfo(VariantInfo):
modules: List[AnyStr] = field(default_factory=list)
packages: List[AnyStr] = field(default_factory=list)
class CreateExtraRepo(PackagesGenerator):
def __init__(
self,
variants: List[ExtraVariantInfo],
bs_auth_token: AnyStr,
local_repository_path: AnyStr,
clear_target_repo: bool = True,
):
self.variants = [] # type: List[ExtraVariantInfo]
super().__init__(variants, [], [])
self.auth_headers = {
'Authorization': f'Bearer {bs_auth_token}',
}
# modules data of modules.yaml.gz from an existing local repo
self.local_modules_data = []
self.local_repository_path = local_repository_path
# path to modules.yaml, which is generated by the class
self.default_modules_yaml_path = os.path.join(
local_repository_path,
'modules.yaml',
)
if clear_target_repo:
if os.path.exists(self.local_repository_path):
rmtree(self.local_repository_path)
os.makedirs(self.local_repository_path, exist_ok=True)
else:
self._read_local_modules_yaml()
def _read_local_modules_yaml(self):
"""
Read modules data from an existing local repo
"""
repomd_file_path = os.path.join(
self.local_repository_path,
'repodata',
'repomd.xml',
)
repomd_object = self._parse_repomd(repomd_file_path)
for repomd_record in repomd_object.records:
if repomd_record.type != 'modules':
continue
modules_yaml_path = os.path.join(
self.local_repository_path,
repomd_record.location_href,
)
self.local_modules_data = list(self._parse_modules_file(
modules_yaml_path,
))
break
def _dump_local_modules_yaml(self):
"""
Dump merged modules data to a local repo
"""
if self.local_modules_data:
with open(self.default_modules_yaml_path, 'w') as yaml_file:
yaml.dump_all(
self.local_modules_data,
yaml_file,
)
@staticmethod
def get_repo_info_from_bs_repo(
auth_token: AnyStr,
build_id: AnyStr,
arch: AnyStr,
packages: Optional[List[AnyStr]] = None,
modules: Optional[List[AnyStr]] = None,
) -> List[ExtraVariantInfo]:
"""
Get info about a BS repo and save it to
an object of class ExtraRepoInfo
:param auth_token: Auth token to Build System
:param build_id: ID of a build from BS
:param arch: an architecture of repo which will be used
:param packages: list of names of packages which will be put to an
local repo from a BS repo
:param modules: list of names of modules which will be put to an
local repo from a BS repo
:return: list of ExtraRepoInfo with info about the BS repos
"""
bs_url = 'https://build.cloudlinux.com'
api_uri = 'api/v1'
bs_repo_suffix = 'build_repos'
variants_info = []
# get the full info about a BS repo
repo_request = requests.get(
url=os.path.join(
bs_url,
api_uri,
'builds',
build_id,
),
headers={
'Authorization': f'Bearer {auth_token}',
},
)
repo_request.raise_for_status()
result = repo_request.json()
for build_platform in result['build_platforms']:
platform_name = build_platform['name']
for architecture in build_platform['architectures']:
# skip repo with unsuitable architecture
if architecture != arch:
continue
variant_info = ExtraVariantInfo(
name=f'{build_id}-{platform_name}-{architecture}',
arch=architecture,
packages=packages,
modules=modules,
repos=[
RepoInfo(
path=os.path.join(
bs_url,
bs_repo_suffix,
build_id,
platform_name,
),
folder=architecture,
is_remote=True,
)
]
)
variants_info.append(variant_info)
return variants_info
def _create_local_extra_repo(self):
"""
Call `createrepo_c <path_to_repo>` for creating a local repo
"""
subprocess.call(
f'createrepo_c {self.local_repository_path}',
shell=True,
)
# remove an unnecessary temporary modules.yaml
if os.path.exists(self.default_modules_yaml_path):
os.remove(self.default_modules_yaml_path)
def get_remote_file_content(
self,
file_url: AnyStr,
) -> AnyStr:
"""
Get content from a remote file and write it to a temp file
:param file_url: url of a remote file
:return: path to a temp file
"""
file_request = requests.get(
url=file_url,
# for the case when we get a file from BS
headers=self.auth_headers,
)
file_request.raise_for_status()
with tempfile.NamedTemporaryFile(delete=False) as file_stream:
file_stream.write(file_request.content)
return file_stream.name
def _download_rpm_to_local_repo(
self,
package_location: AnyStr,
repo_info: RepoInfo,
) -> None:
"""
Download a rpm package from a remote repo and save it to a local repo
:param package_location: relative uri of a package in a remote repo
:param repo_info: info about a remote repo which contains a specific
rpm package
"""
rpm_package_remote_path = os.path.join(
repo_info.path,
repo_info.folder,
package_location,
)
rpm_package_local_path = os.path.join(
self.local_repository_path,
os.path.basename(package_location),
)
rpm_request = requests.get(
url=rpm_package_remote_path,
headers=self.auth_headers,
)
rpm_request.raise_for_status()
with open(rpm_package_local_path, 'wb') as rpm_file:
rpm_file.write(rpm_request.content)
def _download_packages(
self,
packages: Dict[AnyStr, cr.Package],
variant_info: ExtraVariantInfo
):
"""
Download all defined packages from a remote repo
:param packages: information about all packages (including
modularity) in a remote repo
:param variant_info: information about a remote variant
"""
for package in packages.values():
package_name = package.name
# Skip a package from the remote repo if we defined
# the packages list and the package doesn't belong to it
if variant_info.packages and \
package_name not in variant_info.packages:
continue
for repo_info in variant_info.repos:
self._download_rpm_to_local_repo(
package_location=package.location_href,
repo_info=repo_info,
)
def _download_modules(
self,
modules_data: List[Dict],
variant_info: ExtraVariantInfo,
packages: Dict[AnyStr, cr.Package]
):
"""
Download all defined modularity packages and their data from
a remote repo
:param modules_data: information about all modules in a remote repo
:param variant_info: information about a remote variant
:param packages: information about all packages (including
modularity) in a remote repo
"""
for module in modules_data:
module_data = module['data']
# Skip a module from the remote repo if we defined
# the modules list and the module doesn't belong to it
if variant_info.modules and \
module_data['name'] not in variant_info.modules:
continue
# we should add info about a module if the local repodata
# doesn't have it
if module not in self.local_modules_data:
self.local_modules_data.append(module)
# just skip a module's record if it doesn't have RPM artifacts
if module['document'] != 'modulemd' or \
'artifacts' not in module_data or \
'rpms' not in module_data['artifacts']:
continue
for rpm in module['data']['artifacts']['rpms']:
# An empty variant_info.packages means that we will download
# all packages from the repo, including
# the modularity packages
if not variant_info.packages:
break
# skip an rpm if it doesn't belong to the processed repo
if rpm not in packages:
continue
for repo_info in variant_info.repos:
self._download_rpm_to_local_repo(
package_location=packages[rpm].location_href,
repo_info=repo_info,
)
def create_extra_repo(self):
"""
1. Get the specific (or all) packages/modules from the remote repos
2. Save them to a local repo
3. Save info about the modules to the local repo
4. Call `createrepo_c`, which creates a local repo
with the right repodata
"""
for variant_info in self.variants:
for repo_info in variant_info.repos:
repomd_records = self._get_repomd_records(
repo_info=repo_info,
)
packages_iterator = self.get_packages_iterator(repo_info)
# parse the repodata (including modules.yaml.gz)
modules_data = self._parse_module_repomd_record(
repo_info=repo_info,
repomd_records=repomd_records,
)
# convert the packages dict to more usable form
# for future checking that a rpm from the module's artifacts
# belongs to a processed repository
packages = {
f'{package.name}-{package.epoch}:{package.version}-'
f'{package.release}.{package.arch}':
package for package in packages_iterator
}
self._download_modules(
modules_data=modules_data,
variant_info=variant_info,
packages=packages,
)
self._download_packages(
packages=packages,
variant_info=variant_info,
)
self._dump_local_modules_yaml()
self._create_local_extra_repo()
def create_parser():
parser = argparse.ArgumentParser()
parser.add_argument(
'--bs-auth-token',
help='Auth token for Build System',
)
parser.add_argument(
'--local-repo-path',
help='Path to a local repo. E.g. /var/repo/test_repo',
required=True,
)
parser.add_argument(
'--clear-local-repo',
help='Clear a local repo before creating a new one',
action='store_true',
default=False,
)
parser.add_argument(
'--repo',
action='append',
help='Path to a folder with repofolders or build id. E.g. '
'"http://koji.cloudlinux.com/mirrors/rhel_mirror" or '
'"601809b3c2f5b0e458b14cd3"',
required=True,
)
parser.add_argument(
'--repo-folder',
action='append',
help='A folder which contains a repodata folder. E.g. "baseos-stream"',
required=True,
)
parser.add_argument(
'--repo-arch',
action='append',
help='What architecture packages a repository contains. E.g. "x86_64"',
required=True,
)
parser.add_argument(
'--packages',
action='append',
type=str,
default=[],
help='A list of package names which we want to download to the local '
'extra repo. All packages are downloaded if the param is empty',
required=True,
)
parser.add_argument(
'--modules',
action='append',
type=str,
default=[],
help='A list of module names which we want to download to the local '
'extra repo. All modules are downloaded if the param is empty',
required=True,
)
return parser
def cli_main():
args = create_parser().parse_args()
repos_info = []
for repo, repo_folder, repo_arch, packages, modules in zip(
args.repo,
args.repo_folder,
args.repo_arch,
args.packages,
args.modules,
):
modules = modules.split()
packages = packages.split()
if repo.startswith('http://'):
repos_info.append(
ExtraVariantInfo(
name=repo_folder,
arch=repo_arch,
repos=[
RepoInfo(
path=repo,
folder=repo_folder,
is_remote=True,
)
],
modules=modules,
packages=packages,
)
)
else:
repos_info.extend(
CreateExtraRepo.get_repo_info_from_bs_repo(
auth_token=args.bs_auth_token,
build_id=repo,
arch=repo_arch,
modules=modules,
packages=packages,
)
)
cer = CreateExtraRepo(
variants=repos_info,
bs_auth_token=args.bs_auth_token,
local_repository_path=args.local_repo_path,
clear_target_repo=args.clear_local_repo,
)
cer.create_extra_repo()
if __name__ == '__main__':
cli_main()

View File

@@ -1,514 +0,0 @@
# coding=utf-8
"""
The tool allow to generate package.json. This file is used by pungi
# as parameter `gather_prepopulate`
Sample of using repodata files taken from
https://github.com/rpm-software-management/createrepo_c/blob/master/examples/python/repodata_parsing.py
"""
import argparse
import gzip
import json
import logging
import lzma
import os
import re
import tempfile
from collections import defaultdict
from itertools import tee
from pathlib import Path
from typing import (
AnyStr,
Dict,
List,
Any,
Iterator,
Optional,
Tuple,
Union,
)
import binascii
from urllib.parse import urljoin
import requests
import rpm
import yaml
from createrepo_c import (
Package,
PackageIterator,
Repomd,
RepomdRecord,
)
from dataclasses import dataclass, field
from kobo.rpmlib import parse_nvra
logging.basicConfig(level=logging.INFO)
def _is_compressed_file(first_two_bytes: bytes, initial_bytes: bytes):
return binascii.hexlify(first_two_bytes) == initial_bytes
def is_gzip_file(first_two_bytes):
return _is_compressed_file(
first_two_bytes=first_two_bytes,
initial_bytes=b'1f8b',
)
def is_xz_file(first_two_bytes):
return _is_compressed_file(
first_two_bytes=first_two_bytes,
initial_bytes=b'fd37',
)
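The byte pairs being matched are the standard magic numbers: a gzip stream starts with bytes 0x1f 0x8b and an xz stream with 0xfd 0x37 (the first two bytes of the xz header). A quick sanity check of the helpers:
```
# The helpers compare hexlified two-byte prefixes against known magic numbers.
assert is_gzip_file(b"\x1f\x8b")
assert is_xz_file(b"\xfd\x37")
assert not is_gzip_file(b"PK")  # e.g. a zip archive
```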
@dataclass
class RepoInfo:
# path to a directory with repo directories. E.g. '/var/repos' contains
# 'appstream', 'baseos', etc.
# Or 'http://koji.cloudlinux.com/mirrors/rhel_mirror' if you are
# using remote repo
path: str
# name of folder with a repodata folder. E.g. 'baseos', 'appstream', etc
folder: str
# Is a repo remote or local
is_remote: bool
# Is a reference repository (usually it's a RHEL repo)?
# The layout of packages from such a repository will be taken as the example;
# only the layout of a specific package (one which doesn't exist
# in a reference repository) will be taken from a non-reference repo
# The packages from 'present' repo will be added to a variant.
# The packages from 'absent' repo will be removed from a variant.
repo_type: str = 'present'
@dataclass
class VariantInfo:
# name of variant. E.g. 'BaseOS', 'AppStream', etc
name: AnyStr
# architecture of variant. E.g. 'x86_64', 'i686', etc
arch: AnyStr
# The packages which will be not added to a variant
excluded_packages: List[str] = field(default_factory=list)
# Repos of a variant
repos: List[RepoInfo] = field(default_factory=list)
class PackagesGenerator:
repo_arches = defaultdict(lambda: list(('noarch',)))
addon_repos = {
'x86_64': ['i686'],
'ppc64le': [],
'aarch64': [],
's390x': [],
'i686': [],
}
def __init__(
self,
variants: List[VariantInfo],
excluded_packages: List[AnyStr],
included_packages: List[AnyStr],
):
self.variants = variants
self.pkgs = dict()
self.excluded_packages = excluded_packages
self.included_packages = included_packages
self.tmp_files = [] # type: list[Path]
for arch, arch_list in self.addon_repos.items():
self.repo_arches[arch].extend(arch_list)
self.repo_arches[arch].append(arch)
def __del__(self):
for tmp_file in self.tmp_files:
if tmp_file.exists():
tmp_file.unlink()
@staticmethod
def _get_full_repo_path(repo_info: RepoInfo):
result = os.path.join(
repo_info.path,
repo_info.folder
)
if repo_info.is_remote:
result = urljoin(
repo_info.path + '/',
repo_info.folder,
)
return result
@staticmethod
def _warning_callback(warning_type, message):
"""
Warning callback for createrepo_c parsing functions
"""
print(f'Warning message: "{message}"; warning type: "{warning_type}"')
return True
def get_remote_file_content(self, file_url: AnyStr) -> AnyStr:
"""
Get content from a remote file and write it to a temp file
:param file_url: url of a remote file
:return: path to a temp file
"""
file_request = requests.get(
url=file_url,
)
file_request.raise_for_status()
with tempfile.NamedTemporaryFile(delete=False) as file_stream:
file_stream.write(file_request.content)
self.tmp_files.append(Path(file_stream.name))
return file_stream.name
@staticmethod
def _parse_repomd(repomd_file_path: AnyStr) -> Repomd:
"""
Parse file repomd.xml and create object Repomd
:param repomd_file_path: path to local repomd.xml
"""
return Repomd(repomd_file_path)
@classmethod
def _parse_modules_file(
cls,
modules_file_path: AnyStr,
) -> Iterator[Any]:
"""
Parse modules.yaml.gz and returns parsed data
:param modules_file_path: path to local modules.yaml.gz
:return: List of dict for each module in a repo
"""
with open(modules_file_path, 'rb') as modules_file:
data = modules_file.read()
if is_gzip_file(data[:2]):
data = gzip.decompress(data)
elif is_xz_file(data[:2]):
data = lzma.decompress(data)
return yaml.load_all(
data,
Loader=yaml.BaseLoader,
)
def _get_repomd_records(
self,
repo_info: RepoInfo,
) -> List[RepomdRecord]:
"""
Get, parse file repomd.xml and extract from it repomd records
:param repo_info: structure which contains info about a current repo
:return: list with repomd records
"""
repomd_file_path = os.path.join(
repo_info.path,
repo_info.folder,
'repodata',
'repomd.xml',
)
if repo_info.is_remote:
repomd_file_path = urljoin(
urljoin(
repo_info.path + '/',
repo_info.folder
) + '/',
'repodata/repomd.xml'
)
repomd_file_path = self.get_remote_file_content(repomd_file_path)
repomd_object = self._parse_repomd(repomd_file_path)
if repo_info.is_remote:
os.remove(repomd_file_path)
return repomd_object.records
def _download_repomd_records(
self,
repo_info: RepoInfo,
repomd_records: List[RepomdRecord],
repomd_records_dict: Dict[str, str],
):
"""
Download repomd records
:param repo_info: structure which contains info about a current repo
:param repomd_records: list with repomd records
:param repomd_records_dict: dict with paths to repodata files
"""
for repomd_record in repomd_records:
if repomd_record.type not in (
'primary',
'filelists',
'other',
):
continue
repomd_record_file_path = os.path.join(
repo_info.path,
repo_info.folder,
repomd_record.location_href,
)
if repo_info.is_remote:
repomd_record_file_path = self.get_remote_file_content(
repomd_record_file_path)
repomd_records_dict[repomd_record.type] = repomd_record_file_path
def _parse_module_repomd_record(
self,
repo_info: RepoInfo,
repomd_records: List[RepomdRecord],
) -> List[Dict]:
"""
Download repomd records
:param repo_info: structure which contains info about a current repo
:param repomd_records: list with repomd records
"""
for repomd_record in repomd_records:
if repomd_record.type != 'modules':
continue
repomd_record_file_path = os.path.join(
repo_info.path,
repo_info.folder,
repomd_record.location_href,
)
if repo_info.is_remote:
repomd_record_file_path = self.get_remote_file_content(
repomd_record_file_path)
return list(self._parse_modules_file(
repomd_record_file_path,
))
return []
@staticmethod
def compare_pkgs_version(package_1: Package, package_2: Package) -> int:
version_tuple_1 = (
package_1.epoch,
package_1.version,
package_1.release,
)
version_tuple_2 = (
package_2.epoch,
package_2.version,
package_2.release,
)
return rpm.labelCompare(version_tuple_1, version_tuple_2)
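`rpm.labelCompare` performs standard RPM EVR comparison on (epoch, version, release) string triples, returning 1, 0, or -1. For instance:
```
import rpm

# 1.0-1 is older than 1.0-2, so the comparison yields -1 ...
assert rpm.labelCompare(("0", "1.0", "1"), ("0", "1.0", "2")) == -1
# ... and a higher epoch trumps version and release.
assert rpm.labelCompare(("1", "0.9", "1"), ("0", "1.0", "2")) == 1
```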
def get_packages_iterator(
self,
repo_info: RepoInfo,
) -> Union[PackageIterator, Iterator]:
full_repo_path = self._get_full_repo_path(repo_info)
pkgs_iterator = self.pkgs.get(full_repo_path)
if pkgs_iterator is None:
repomd_records = self._get_repomd_records(
repo_info=repo_info,
)
repomd_records_dict = {} # type: Dict[str, str]
self._download_repomd_records(
repo_info=repo_info,
repomd_records=repomd_records,
repomd_records_dict=repomd_records_dict,
)
pkgs_iterator = PackageIterator(
primary_path=repomd_records_dict['primary'],
filelists_path=repomd_records_dict['filelists'],
other_path=repomd_records_dict['other'],
warningcb=self._warning_callback,
)
pkgs_iterator, self.pkgs[full_repo_path] = tee(pkgs_iterator)
return pkgs_iterator
def get_package_arch(
self,
package: Package,
variant_arch: str,
) -> str:
result = variant_arch
if package.arch in self.repo_arches[variant_arch]:
result = package.arch
return result
def is_skipped_module_package(
self,
package: Package,
variant_arch: str,
) -> bool:
package_key = self.get_package_key(package, variant_arch)
# Even a module package will be added to packages.json if
# it is present in the list of included packages
return 'module' in package.release and not any(
re.search(
f'^{included_pkg}$',
package_key,
) or included_pkg in (package.name, package_key)
for included_pkg in self.included_packages
)
def is_excluded_package(
self,
package: Package,
variant_arch: str,
excluded_packages: List[str],
) -> bool:
package_key = self.get_package_key(package, variant_arch)
return any(
re.search(
f'^{excluded_pkg}$',
package_key,
) or excluded_pkg in (package.name, package_key)
for excluded_pkg in excluded_packages
)
@staticmethod
def get_source_rpm_name(package: Package) -> str:
source_rpm_nvra = parse_nvra(package.rpm_sourcerpm)
return source_rpm_nvra['name']
def get_package_key(self, package: Package, variant_arch: str) -> str:
return (
f'{package.name}.'
f'{self.get_package_arch(package, variant_arch)}'
)
def generate_packages_json(
self
) -> Dict[AnyStr, Dict[AnyStr, Dict[AnyStr, List[AnyStr]]]]:
"""
Generate packages.json
"""
packages = defaultdict(lambda: defaultdict(lambda: {
'variants': list(),
}))
for variant_info in self.variants:
for repo_info in variant_info.repos:
is_reference = repo_info.is_reference
for package in self.get_packages_iterator(repo_info=repo_info):
if self.is_skipped_module_package(
package=package,
variant_arch=variant_info.arch,
):
continue
if self.is_excluded_package(
package=package,
variant_arch=variant_info.arch,
excluded_packages=self.excluded_packages,
):
continue
if self.is_excluded_package(
package=package,
variant_arch=variant_info.arch,
excluded_packages=variant_info.excluded_packages,
):
continue
package_key = self.get_package_key(
package,
variant_info.arch,
)
source_rpm_name = self.get_source_rpm_name(package)
package_info = packages[source_rpm_name][package_key]
if 'is_reference' not in package_info:
package_info['variants'].append(variant_info.name)
package_info['is_reference'] = is_reference
package_info['package'] = package
elif not package_info['is_reference'] or \
package_info['is_reference'] == is_reference and \
self.compare_pkgs_version(
package_1=package,
package_2=package_info['package'],
) > 0:
package_info['variants'] = [variant_info.name]
package_info['is_reference'] = is_reference
package_info['package'] = package
elif self.compare_pkgs_version(
package_1=package,
package_2=package_info['package'],
) == 0 and repo_info.repo_type != 'absent':
package_info['variants'].append(variant_info.name)
result = defaultdict(lambda: defaultdict(
lambda: defaultdict(list),
))
for variant_info in self.variants:
for source_rpm_name, packages_info in packages.items():
for package_key, package_info in packages_info.items():
variant_pkgs = result[variant_info.name][variant_info.arch]
if variant_info.name not in package_info['variants']:
continue
variant_pkgs[source_rpm_name].append(package_key)
return result
def create_parser():
parser = argparse.ArgumentParser()
parser.add_argument(
'-c',
'--config',
type=Path,
default=Path('config.yaml'),
required=False,
help='Path to a config',
)
parser.add_argument(
'-o',
'--json-output-path',
type=str,
help='Full path to output json file',
required=True,
)
return parser
def read_config(config_path: Path) -> Optional[Dict]:
if not config_path.exists():
logging.error('A config by path "%s" does not exist', config_path)
exit(1)
with config_path.open('r') as config_fd:
return yaml.safe_load(config_fd)
def process_config(config_data: Dict) -> Tuple[
List[VariantInfo],
List[str],
List[str],
]:
excluded_packages = config_data.get('excluded_packages', [])
included_packages = config_data.get('included_packages', [])
variants = [VariantInfo(
name=variant_name,
arch=variant_info['arch'],
excluded_packages=variant_info.get('excluded_packages', []),
repos=[RepoInfo(
path=variant_repo['path'],
folder=variant_repo['folder'],
is_remote=variant_repo['remote'],
is_reference=variant_repo['reference'],
repo_type=variant_repo.get('repo_type', 'present'),
) for variant_repo in variant_info['repos']]
) for variant_name, variant_info in config_data['variants'].items()]
return variants, excluded_packages, included_packages
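For reference, the structure `process_config` expects after `yaml.safe_load` looks like the following sketch (names and paths are illustrative only):
```
# Hypothetical config data as consumed by process_config() above.
config_data = {
    "excluded_packages": ["kernel-debug*"],
    "included_packages": [],
    "variants": {
        "BaseOS": {
            "arch": "x86_64",
            "excluded_packages": [],
            "repos": [
                {
                    "path": "/var/repos",
                    "folder": "baseos",
                    "remote": False,
                    "reference": True,
                    "repo_type": "present",
                }
            ],
        }
    },
}
variants, excluded, included = process_config(config_data)
```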
def cli_main():
args = create_parser().parse_args()
variants, excluded_packages, included_packages = process_config(
config_data=read_config(args.config)
)
pg = PackagesGenerator(
variants=variants,
excluded_packages=excluded_packages,
included_packages=included_packages,
)
result = pg.generate_packages_json()
with open(args.json_output_path, 'w') as packages_file:
json.dump(
result,
packages_file,
indent=4,
sort_keys=True,
)
if __name__ == '__main__':
cli_main()

View File

@@ -1,255 +0,0 @@
import gzip
import lzma
import os
from argparse import ArgumentParser, FileType
from glob import iglob
from io import BytesIO
from pathlib import Path
from typing import List, AnyStr, Iterable, Union, Optional
import logging
from urllib.parse import urljoin
import yaml
import createrepo_c as cr
from typing.io import BinaryIO
from .create_packages_json import PackagesGenerator, is_gzip_file, is_xz_file
EMPTY_FILE = '.empty'
def read_modules_yaml(modules_yaml_path: Union[str, Path]) -> BytesIO:
with open(modules_yaml_path, 'rb') as fp:
return BytesIO(fp.read())
def grep_list_of_modules_yaml(repos_path: AnyStr) -> Iterable[BytesIO]:
"""
Find all valid *modules.yaml.* files in the repos
:param repos_path: path to a directory which contains repo dirs
:return: iterable object of content from *modules.yaml.*
"""
return (
read_modules_yaml_from_specific_repo(repo_path=Path(path).parent)
for path in iglob(
str(Path(repos_path).joinpath('**/repodata')),
recursive=True
)
)
def _is_remote(path: str):
return any(str(path).startswith(protocol)
for protocol in ('http', 'https'))
def read_modules_yaml_from_specific_repo(
repo_path: Union[str, Path]
) -> Optional[BytesIO]:
"""
Read modules_yaml from a specific repo (remote or local)
:param repo_path: path/url to a specific repo
(final dir should contain dir `repodata`)
:return: iterable object of content from *modules.yaml.*
"""
if _is_remote(repo_path):
repomd_url = urljoin(
repo_path + '/',
'repodata/repomd.xml',
)
packages_generator = PackagesGenerator(
variants=[],
excluded_packages=[],
included_packages=[],
)
repomd_file_path = packages_generator.get_remote_file_content(
file_url=repomd_url
)
else:
repomd_file_path = os.path.join(
repo_path,
'repodata/repomd.xml',
)
repomd_obj = cr.Repomd(str(repomd_file_path))
for record in repomd_obj.records:
if record.type != 'modules':
continue
else:
if _is_remote(repo_path):
modules_yaml_url = urljoin(
repo_path + '/',
record.location_href,
)
packages_generator = PackagesGenerator(
variants=[],
excluded_packages=[],
included_packages=[],
)
modules_yaml_path = packages_generator.get_remote_file_content(
file_url=modules_yaml_url
)
else:
modules_yaml_path = os.path.join(
repo_path,
record.location_href,
)
return read_modules_yaml(modules_yaml_path=modules_yaml_path)
else:
return None
def _should_grep_defaults(
document_type: str,
grep_only_modules_data: bool = False,
grep_only_modules_defaults_data: bool = False,
) -> bool:
xor_flag = grep_only_modules_data == grep_only_modules_defaults_data
if document_type == 'modulemd' and (xor_flag or grep_only_modules_data):
return True
return False
def _should_grep_modules(
document_type: str,
grep_only_modules_data: bool = False,
grep_only_modules_defaults_data: bool = False,
) -> bool:
xor_flag = grep_only_modules_data == grep_only_modules_defaults_data
if document_type == 'modulemd-defaults' and \
(xor_flag or grep_only_modules_defaults_data):
return True
return False
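In both helpers `xor_flag` is true exactly when the two CLI flags are equal, i.e. when the caller asked for no filtering (or, degenerately, for both kinds), so each document type passes its own check; note the helpers' names are crossed relative to the document types they match. A compact illustration:
```
# With neither flag set, both document kinds are collected.
assert _should_grep_defaults("modulemd")
assert _should_grep_modules("modulemd-defaults")
# With only --get-only-modules-data, defaults documents are skipped.
assert _should_grep_defaults("modulemd", grep_only_modules_data=True)
assert not _should_grep_modules("modulemd-defaults", grep_only_modules_data=True)
```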
def collect_modules(
modules_paths: List[BinaryIO],
target_dir: str,
grep_only_modules_data: bool = False,
grep_only_modules_defaults_data: bool = False,
):
"""
Read the given modules.yaml.gz files and export module
and module-defaults documents from them.
"""
xor_flag = grep_only_modules_defaults_data is grep_only_modules_data
modules_path = os.path.join(target_dir, 'modules')
module_defaults_path = os.path.join(target_dir, 'module_defaults')
if grep_only_modules_data or xor_flag:
os.makedirs(modules_path, exist_ok=True)
if grep_only_modules_defaults_data or xor_flag:
os.makedirs(module_defaults_path, exist_ok=True)
# Module defaults can be empty, but pungi detects an empty
# folder while copying and raises an exception in that case
Path(os.path.join(module_defaults_path, EMPTY_FILE)).touch()
for module_file in modules_paths:
data = module_file.read()
if is_gzip_file(data[:2]):
data = gzip.decompress(data)
elif is_xz_file(data[:2]):
data = lzma.decompress(data)
documents = yaml.load_all(data, Loader=yaml.BaseLoader)
for doc in documents:
path = None
if _should_grep_modules(
doc['document'],
grep_only_modules_data,
grep_only_modules_defaults_data,
):
name = f"{doc['data']['module']}.yaml"
path = os.path.join(module_defaults_path, name)
logging.info('Found %s module defaults', name)
elif _should_grep_defaults(
doc['document'],
grep_only_modules_data,
grep_only_modules_defaults_data,
):
# pungi.phases.pkgset.sources.source_koji.get_koji_modules
stream = doc['data']['stream'].replace('-', '_')
doc_data = doc['data']
name = f"{doc_data['name']}-{stream}-" \
f"{doc_data['version']}.{doc_data['context']}"
arch_dir = os.path.join(
modules_path,
doc_data['arch']
)
os.makedirs(arch_dir, exist_ok=True)
path = os.path.join(
arch_dir,
name,
)
logging.info('Found module %s', name)
if 'artifacts' not in doc['data']:
logging.warning(
'Module %s does not have an explicit list of artifacts',
name
)
if path is not None:
with open(path, 'w') as f:
yaml.dump(doc, f, default_flow_style=False)
def cli_main():
parser = ArgumentParser()
content_type_group = parser.add_mutually_exclusive_group(required=False)
content_type_group.add_argument(
'--get-only-modules-data',
action='store_true',
help='Parse and get only modules data',
)
content_type_group.add_argument(
'--get-only-modules-defaults-data',
action='store_true',
help='Parse and get only modules_defaults data',
)
path_group = parser.add_mutually_exclusive_group(required=True)
path_group.add_argument(
'-p', '--path',
type=FileType('rb'), nargs='+',
help='Path to modules.yaml.gz file. '
'You may pass multiple files by passing -p path1 path2'
)
path_group.add_argument(
'-rp', '--repo-path',
required=False,
type=str,
default=None,
help='Path to a directory which contains repodirs. E.g. /var/repos'
)
path_group.add_argument(
'-rd', '--repodata-paths',
required=False,
type=str,
nargs='+',
default=[],
help='Paths/urls to the directories with directory `repodata`',
)
parser.add_argument('-t', '--target', required=True)
namespace = parser.parse_args()
if namespace.repodata_paths:
modules = []
for repodata_path in namespace.repodata_paths:
modules.append(read_modules_yaml_from_specific_repo(
repodata_path,
))
elif namespace.path is not None:
modules = namespace.path
else:
modules = grep_list_of_modules_yaml(namespace.repo_path)
modules = list(filter(lambda i: i is not None, modules))
collect_modules(
modules,
namespace.target,
namespace.get_only_modules_data,
namespace.get_only_modules_defaults_data,
)
if __name__ == '__main__':
cli_main()

View File

@@ -1,96 +0,0 @@
import re
from argparse import ArgumentParser
import os
from glob import iglob
from typing import List
from pathlib import Path
from dataclasses import dataclass
from productmd.common import parse_nvra
@dataclass
class Package:
nvra: dict
path: Path
def search_rpms(top_dir: Path) -> List[Package]:
"""
Search for all *.rpm files recursively
in the given top directory
Returns:
list: list of paths
"""
return [Package(
nvra=parse_nvra(Path(path).stem),
path=Path(path),
) for path in iglob(str(top_dir.joinpath('**/*.rpm')), recursive=True)]
def is_excluded_package(
package: Package,
excluded_packages: List[str],
) -> bool:
package_key = f'{package.nvra["name"]}.{package.nvra["arch"]}'
return any(
re.search(
f'^{excluded_pkg}$',
package_key,
) or excluded_pkg in (package.nvra['name'], package_key)
for excluded_pkg in excluded_packages
)
def copy_rpms(
packages: List[Package],
target_top_dir: Path,
excluded_packages: List[str],
):
"""
Search synced repos for rpms and prepare
a koji-like structure for pungi.
Instead of repos, use the following structure:
# ls /mnt/koji/
i686/ noarch/ x86_64/
Returns:
Nothing:
"""
for package in packages:
if is_excluded_package(package, excluded_packages):
continue
target_arch_dir = target_top_dir.joinpath(package.nvra['arch'])
target_file = target_arch_dir.joinpath(package.path.name)
os.makedirs(target_arch_dir, exist_ok=True)
if not target_file.exists():
try:
os.link(package.path, target_file)
except OSError:
# hardlink failed, fall back to creating a symlink at the
# target that points back to the source package
target_file.symlink_to(package.path)
def cli_main():
parser = ArgumentParser()
parser.add_argument('-p', '--path', required=True, type=Path)
parser.add_argument('-t', '--target', required=True, type=Path)
parser.add_argument(
'-e',
'--excluded-packages',
required=False,
nargs='+',
type=str,
default=[],
)
namespace = parser.parse_args()
rpms = search_rpms(namespace.path)
copy_rpms(rpms, namespace.target, namespace.excluded_packages)
if __name__ == '__main__':
cli_main()

View File

@@ -478,7 +478,8 @@ def main():
 print("RPM size: %s MiB" % (mypungi.size_packages() / 1024**2))
 if not opts.nodebuginfo:
 print(
-"DEBUGINFO size: %s MiB" % (mypungi.size_debuginfo() / 1024**2)
+"DEBUGINFO size: %s MiB"
+% (mypungi.size_debuginfo() / 1024**2)
 )
 if not opts.nosource:
 print("SRPM size: %s MiB" % (mypungi.size_srpms() / 1024**2))

View File

@@ -97,7 +97,6 @@ def main(ns, persistdir, cachedir):
 dnf_conf = Conf(ns.arch)
 dnf_conf.persistdir = persistdir
 dnf_conf.cachedir = cachedir
-dnf_conf.optional_metadata_types = ["filelists"]
 dnf_obj = DnfWrapper(dnf_conf)
 gather_opts = GatherOptions()

View File

@@ -252,15 +252,9 @@ def main():
 kobo.log.add_stderr_logger(logger)
 conf = util.load_config(opts.config)
-compose_type = opts.compose_type or conf.get("compose_type", "production")
-label = opts.label or conf.get("label")
-if label:
-try:
-productmd.composeinfo.verify_label(label)
-except ValueError as ex:
-abort(str(ex))
-if compose_type == "production" and not label and not opts.no_label:
+compose_type = opts.compose_type or conf.get("compose_type", "production")
+if compose_type == "production" and not opts.label and not opts.no_label:
 abort("must specify label for a production compose")
 if (
@@ -310,7 +304,7 @@ def main():
 opts.target_dir,
 conf,
 compose_type=compose_type,
-compose_label=label,
+compose_label=opts.label,
 parent_compose_ids=opts.parent_compose_id,
 respin_of=opts.respin_of,
 )
@@ -321,7 +315,7 @@ def main():
 ci = Compose.get_compose_info(
 conf,
 compose_type=compose_type,
-compose_label=label,
+compose_label=opts.label,
 parent_compose_ids=opts.parent_compose_id,
 respin_of=opts.respin_of,
 )
@@ -423,12 +417,11 @@ def run_compose(
 compose, buildinstall_phase, pkgset_phase
 )
 ostree_phase = pungi.phases.OSTreePhase(compose, pkgset_phase)
-ostree_container_phase = pungi.phases.OSTreeContainerPhase(compose, pkgset_phase)
 createiso_phase = pungi.phases.CreateisoPhase(compose, buildinstall_phase)
 extra_isos_phase = pungi.phases.ExtraIsosPhase(compose, buildinstall_phase)
+liveimages_phase = pungi.phases.LiveImagesPhase(compose)
 livemedia_phase = pungi.phases.LiveMediaPhase(compose)
 image_build_phase = pungi.phases.ImageBuildPhase(compose, buildinstall_phase)
-kiwibuild_phase = pungi.phases.KiwiBuildPhase(compose)
 osbuild_phase = pungi.phases.OSBuildPhase(compose)
 osbs_phase = pungi.phases.OSBSPhase(compose, pkgset_phase, buildinstall_phase)
 image_container_phase = pungi.phases.ImageContainerPhase(compose)
@@ -445,18 +438,17 @@ def run_compose(
 gather_phase,
 extrafiles_phase,
 createiso_phase,
+liveimages_phase,
 livemedia_phase,
 image_build_phase,
 image_checksum_phase,
 test_phase,
 ostree_phase,
 ostree_installer_phase,
-ostree_container_phase,
 extra_isos_phase,
 osbs_phase,
 osbuild_phase,
 image_container_phase,
-kiwibuild_phase,
 ):
 if phase.skip():
 continue
@@ -471,6 +463,50 @@ def run_compose(
 print(i)
 raise RuntimeError("Configuration is not valid")
# PREP
# Note: This may be put into a new method of phase classes (e.g. .prep())
# in same way as .validate() or .run()
# Prep for liveimages - Obtain a password for signing rpm wrapped images
if (
"signing_key_password_file" in compose.conf
and "signing_command" in compose.conf
and "%(signing_key_password)s" in compose.conf["signing_command"]
and not liveimages_phase.skip()
):
# TODO: Don't require key if signing is turned off
# Obtain signing key password
signing_key_password = None
# Use appropriate method
if compose.conf["signing_key_password_file"] == "-":
# Use stdin (by getpass module)
try:
signing_key_password = getpass.getpass("Signing key password: ")
except EOFError:
compose.log_debug("Ignoring signing key password")
pass
else:
# Use text file with password
try:
signing_key_password = (
open(compose.conf["signing_key_password_file"], "r")
.readline()
.rstrip("\n")
)
except IOError:
# The filename is intentionally not printed, in case someone put
# the password directly into the option
err_msg = "Cannot load password from file specified by 'signing_key_password_file' option" # noqa: E501
compose.log_error(err_msg)
print(err_msg)
raise RuntimeError(err_msg)
if signing_key_password:
# Store the password
compose.conf["signing_key_password"] = signing_key_password
init_phase.start() init_phase.start()
init_phase.stop() init_phase.stop()
@ -483,7 +519,6 @@ def run_compose(
(gather_phase, createrepo_phase), (gather_phase, createrepo_phase),
extrafiles_phase, extrafiles_phase,
(ostree_phase, ostree_installer_phase), (ostree_phase, ostree_installer_phase),
ostree_container_phase,
) )
essentials_phase = pungi.phases.WeaverPhase(compose, essentials_schema) essentials_phase = pungi.phases.WeaverPhase(compose, essentials_schema)
essentials_phase.start() essentials_phase.start()
@ -508,10 +543,10 @@ def run_compose(
compose_images_schema = ( compose_images_schema = (
createiso_phase, createiso_phase,
extra_isos_phase, extra_isos_phase,
liveimages_phase,
image_build_phase, image_build_phase,
livemedia_phase, livemedia_phase,
osbuild_phase, osbuild_phase,
kiwibuild_phase,
) )
post_image_phase = pungi.phases.WeaverPhase( post_image_phase = pungi.phases.WeaverPhase(
compose, (image_checksum_phase, image_container_phase) compose, (image_checksum_phase, image_container_phase)
@ -533,11 +568,10 @@ def run_compose(
and ostree_installer_phase.skip() and ostree_installer_phase.skip()
and createiso_phase.skip() and createiso_phase.skip()
and extra_isos_phase.skip() and extra_isos_phase.skip()
and liveimages_phase.skip()
and livemedia_phase.skip() and livemedia_phase.skip()
and image_build_phase.skip() and image_build_phase.skip()
and kiwibuild_phase.skip()
and osbuild_phase.skip() and osbuild_phase.skip()
and ostree_container_phase.skip()
): ):
compose.im.dump(compose.paths.compose.metadata("images.json")) compose.im.dump(compose.paths.compose.metadata("images.json"))
compose.dump_containers_metadata() compose.dump_containers_metadata()

View File

@ -487,7 +487,10 @@ def get_volid(compose, arch, variant=None, disc_type=False, formats=None, **kwar
tried.add(volid) tried.add(volid)
if volid and len(volid) > 32: if volid and len(volid) > 32:
volid = volid[:32] raise ValueError(
"Could not create a volume ID of at most 32 bytes, options are %r"
% sorted(tried, key=len)
)
if compose.conf["restricted_volid"]: if compose.conf["restricted_volid"]:
# Replace all non-alphanumeric characters (and non-underscores) with # Replace all non-alphanumeric characters (and non-underscores) with
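Background for the hunk above: ISO 9660 caps volume identifiers at 32 characters, and the replacement raises instead of silently truncating, since a cut-off label can collide with another image. A tiny illustration with a hypothetical candidate:

```python
volid = "Fedora-Server-dvd-x86_64-Rawhide-20231106.n.0"  # hypothetical, 45 chars
print(len(volid) > 32)  # True
print(volid[:32])       # old behaviour: silently truncated label
# new behaviour: ValueError listing every candidate volume ID that was tried
```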

View File

@ -306,8 +306,6 @@ class CompsWrapper(object):
append_common_info(doc, group_node, group, force_description=True) append_common_info(doc, group_node, group, force_description=True)
append_bool(doc, group_node, "default", group.default) append_bool(doc, group_node, "default", group.default)
append_bool(doc, group_node, "uservisible", group.uservisible) append_bool(doc, group_node, "uservisible", group.uservisible)
if group.display_order is not None:
append(doc, group_node, "display_order", str(group.display_order))
if group.lang_only: if group.lang_only:
append(doc, group_node, "langonly", group.lang_only) append(doc, group_node, "langonly", group.lang_only)

View File

@ -88,12 +88,5 @@ def parse_output(output):
packages.add((name, arch, frozenset(flags))) packages.add((name, arch, frozenset(flags)))
else: else:
name, arch = nevra.rsplit(".", 1) name, arch = nevra.rsplit(".", 1)
# replace dashes with underscores in the stream of the module's nevra modules.add(name.split(":", 1)[1])
# source of name looks like
# module:llvm-toolset:rhel8:8040020210411062713:9f9e2e7e.x86_64
name = ':'.join(
item.replace('-', '_') if i == 1 else item for
i, item in enumerate(name.split(':')[1:])
)
modules.add(name)
return packages, modules return packages, modules
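To make the change concrete: the retained line records the module NSVC as-is, dropping the dash-to-underscore rewrite of the stream. A worked example using the sample value from the removed comment:

```python
nevra = "module:llvm-toolset:rhel8:8040020210411062713:9f9e2e7e.x86_64"
name, arch = nevra.rsplit(".", 1)
print(name.split(":", 1)[1])
# llvm-toolset:rhel8:8040020210411062713:9f9e2e7e
```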

View File

@ -280,21 +280,14 @@ def get_manifest_cmd(iso_name, xorriso=False, output_file=None):
) )
def get_volume_id(path, xorriso=False): def get_volume_id(path):
if xorriso: cmd = ["isoinfo", "-d", "-i", path]
cmd = ["xorriso", "-indev", path] retcode, output = run(cmd, universal_newlines=True)
retcode, output = run(cmd, universal_newlines=True)
for line in output.splitlines():
if line.startswith("Volume id"):
return line.split("'")[1]
else:
cmd = ["isoinfo", "-d", "-i", path]
retcode, output = run(cmd, universal_newlines=True)
for line in output.splitlines(): for line in output.splitlines():
line = line.strip() line = line.strip()
if line.startswith("Volume id:"): if line.startswith("Volume id:"):
return line[11:].strip() return line[11:].strip()
raise RuntimeError("Could not read Volume ID") raise RuntimeError("Could not read Volume ID")
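This hunk drops the xorriso branch, so volume IDs are always read from isoinfo. A minimal sketch of the surviving parsing, fed with made-up output:

```python
output = "Volume size is: 151552\nVolume id: Fedora-WS-Live-rawh\n"
for line in output.splitlines():
    line = line.strip()
    if line.startswith("Volume id:"):
        print(line[11:].strip())  # Fedora-WS-Live-rawh
```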
@ -516,21 +509,3 @@ def mount(image, logger=None, use_guestmount=True):
util.run_unmount_cmd(["fusermount", "-u", mount_dir], path=mount_dir) util.run_unmount_cmd(["fusermount", "-u", mount_dir], path=mount_dir)
else: else:
util.run_unmount_cmd(["umount", mount_dir], path=mount_dir) util.run_unmount_cmd(["umount", mount_dir], path=mount_dir)
def xorriso_commands(arch, input, output):
"""List of xorriso commands to modify a bootable image."""
commands = [
("-indev", input),
("-outdev", output),
# isoinfo -J uses the Joliet tree, and it's used by virt-install
("-joliet", "on"),
# Support long filenames in the Joliet trees. Repodata is particularly
# likely to run into this limit.
("-compliance", "joliet_long_names"),
("-boot_image", "any", "replay"),
]
if arch == "ppc64le":
# This is needed for the image to be bootable.
commands.append(("-as", "mkisofs", "-U", "--"))
return commands

View File

@ -1,299 +0,0 @@
import os
import time
from pathlib import Path
from attr import dataclass
from kobo.rpmlib import parse_nvra
from pungi.module_util import Modulemd
from pungi.scripts.gather_rpms import search_rpms
# just a random value which we don't use in the mock currently;
# originally builds are filtered by this value
# to get a consistent snapshot of tags and packages
LAST_EVENT_ID = 999999
# the last event time is not important, but build
# time should be less than it
LAST_EVENT_TIME = time.time()
BUILD_TIME = 0
# virtual build that collects all
# packages built for some arch
RELEASE_BUILD_ID = 15270
# tag that should have all packages available
ALL_PACKAGES_TAG = 'dist-c8-compose'
# tag that should have all modules available
ALL_MODULES_TAG = 'dist-c8-module-compose'
@dataclass
class Module:
build_id: int
name: str
nvr: str
stream: str
version: str
context: str
arch: str
class KojiMock:
"""
Class that acts like real koji (for the methods we need)
but uses local storage as the data source
"""
def __init__(self, packages_dir, modules_dir, all_arches):
self._modules = self._gather_modules(modules_dir)
self._modules_dir = modules_dir
self._packages_dir = packages_dir
self._all_arches = all_arches
@staticmethod
def _gather_modules(modules_dir):
modules = {}
for index, (f, arch) in enumerate(
(sub_path.name, sub_path.parent.name)
for path in Path(modules_dir).glob('*')
for sub_path in path.iterdir()
):
parsed = parse_nvra(f)
modules[index] = Module(
name=parsed['name'],
nvr=f,
version=parsed['release'],
context=parsed['arch'],
stream=parsed['version'],
build_id=index,
arch=arch,
)
return modules
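To illustrate the remapping above: parse_nvra treats a modulemd file name as an NVRA, so the RPM-ish fields land in module terms like this (the NSVC is hypothetical):

```python
from kobo.rpmlib import parse_nvra

parsed = parse_nvra("perl-App-cpanminus-1.7044-8030020210126085450.3a33b840")
# NVRA field  -> Module field used above
# name        -> name:    perl-App-cpanminus
# version     -> stream:  1.7044
# release     -> version: 8030020210126085450
# arch        -> context: 3a33b840
```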
@staticmethod
def getLastEvent(*args, **kwargs):
return {'id': LAST_EVENT_ID, 'ts': LAST_EVENT_TIME}
def listTagged(self, tag_name, *args, **kwargs):
"""
Returns a list of virtual 'builds' that contain packages for the given tag.
There are two kinds of tags: modular and distributive.
For now, only one kind, the distributive one, is needed.
"""
if tag_name != ALL_MODULES_TAG:
raise ValueError("I don't know what tag is %s" % tag_name)
builds = []
for module in self._modules.values():
builds.append({
'build_id': module.build_id,
'owner_name': 'centos',
'package_name': module.name,
'nvr': module.nvr,
'version': module.stream,
'release': '%s.%s' % (module.version, module.context),
'name': module.name,
'id': module.build_id,
'tag_name': tag_name,
'arch': module.arch,
# Following fields are currently not
# used but returned by real koji
# left them here just for reference
#
# 'task_id': None,
# 'state': 1,
# 'start_time': '2020-12-23 16:43:59',
# 'creation_event_id': 309485,
# 'creation_time': '2020-12-23 17:05:33.553748',
# 'epoch': None, 'tag_id': 533,
# 'completion_time': '2020-12-23 17:05:23',
# 'volume_id': 0,
# 'package_id': 3221,
# 'owner_id': 11,
# 'volume_name': 'DEFAULT',
})
return builds
@staticmethod
def getFullInheritance(*args, **kwargs):
"""
Unneeded because we use local storage.
"""
return []
def getBuild(self, build_id, *args, **kwargs):
"""
Used to get information about a build
(currently used in pungi only for modules)
"""
module = self._modules[build_id]
result = {
'id': build_id,
'name': module.name,
'version': module.stream,
'release': '%s.%s' % (module.version, module.context),
'completion_ts': BUILD_TIME,
'state': 'COMPLETE',
'arch': module.arch,
'extra': {
'typeinfo': {
'module': {
'stream': module.stream,
'version': module.version,
'name': module.name,
'context': module.context,
'content_koji_tag': '-'.join([
module.name,
module.stream,
module.version
]) + '.' + module.context
}
}
}
}
return result
def listArchives(self, build_id, *args, **kwargs):
"""
Originally lists artifacts for a build, but in pungi it is used
only to get the list of modulemd files for a module
"""
module = self._modules[build_id]
return [
{
'build_id': module.build_id,
'filename': f'modulemd.{module.arch}.txt',
'btype': 'module'
},
# no one ever uses this file,
# but it must be present because pungi ignores builds
# with len(files) <= 1
{
'build_id': module.build_id,
'filename': 'modulemd.txt',
'btype': 'module'
}
]
def listTaggedRPMS(self, tag_name, *args, **kwargs):
"""
Get information about packages that are tagged with the given tag.
There are two kinds of tags: per-module and per-distro.
"""
if tag_name == ALL_PACKAGES_TAG:
builds, packages = self._get_release_packages()
else:
builds, packages = self._get_module_packages(tag_name)
return [
packages,
builds
]
def _get_release_packages(self):
"""
Search the packages dir and keep only
packages that are non-modular.
This mirrors how real koji works:
- modular packages are tagged with a module-* tag
- all other packages are tagged with a dist* tag
"""
packages = []
# get all rpms in folder
rpms = search_rpms(Path(self._packages_dir))
for rpm in rpms:
info = parse_nvra(rpm.path.stem)
if 'module' in info['release']:
continue
packages.append({
"build_id": RELEASE_BUILD_ID,
"name": info['name'],
"extra": None,
"arch": info['arch'],
"epoch": info['epoch'] or None,
"version": info['version'],
"metadata_only": False,
"release": info['release'],
# not used currently
# "id": 262555,
# "size": 0
})
builds = []
return builds, packages
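The modularity check keys off the release field of the parsed NVRA; a hedged example using a release string shaped like the module artifacts elsewhere in this diff:

```python
from kobo.rpmlib import parse_nvra

info = parse_nvra("perl-File-pushd-1.014-6.module_el8.3.0+2027+c8990d1d.noarch")
print("module" in info["release"])  # True -> skipped as modular content
```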
def _get_module_packages(self, tag_name):
"""
Get the list of builds for the module matching the given module tag name.
"""
builds = []
packages = []
modules = self._get_modules_by_name(tag_name)
for module in modules:
if module is None:
raise ValueError('Module %s is not found' % tag_name)
path = os.path.join(
self._modules_dir,
module.arch,
tag_name,
)
builds.append({
"build_id": module.build_id,
"package_name": module.name,
"nvr": module.nvr,
"tag_name": module.nvr,
"version": module.stream,
"release": module.version,
"id": module.build_id,
"name": module.name,
"volume_name": "DEFAULT",
# Following fields are currently not
# used but returned by real koji
# left them here just for reference
#
# "owner_name": "mbox-mbs-backend",
# "task_id": 195937,
# "state": 1,
# "start_time": "2020-12-22 19:20:12.504578",
# "creation_event_id": 306731,
# "creation_time": "2020-12-22 19:20:12.504578",
# "epoch": None,
# "tag_id": 1192,
# "completion_time": "2020-12-22 19:34:34.716615",
# "volume_id": 0,
# "package_id": 104,
# "owner_id": 6,
})
if os.path.exists(path):
info = Modulemd.ModuleStream.read_string(open(path).read(), strict=True)
for art in info.get_rpm_artifacts():
data = parse_nvra(art)
packages.append({
"build_id": module.build_id,
"name": data['name'],
"extra": None,
"arch": data['arch'],
"epoch": data['epoch'] or None,
"version": data['version'],
"metadata_only": False,
"release": data['release'],
"id": 262555,
"size": 0
})
else:
raise RuntimeError('Unable to find module %s' % path)
return builds, packages
def _get_modules_by_name(self, tag_name):
modules = []
for arch in self._all_arches:
for module in self._modules.values():
if module.nvr != tag_name or module.arch != arch:
continue
modules.append(module)
return modules
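A hedged usage sketch of this mock (directories hypothetical; the call shapes follow the methods above):

```python
km = KojiMock(
    packages_dir="/srv/koji",            # hypothetical local storage
    modules_dir="/srv/koji/modules",
    all_arches=["x86_64", "i686"],
)
module_builds = km.listTagged(ALL_MODULES_TAG)
rpms, builds = km.listTaggedRPMS(ALL_PACKAGES_TAG)
```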

View File

@ -32,7 +32,6 @@ import six.moves.xmlrpc_client as xmlrpclib
from flufl.lock import Lock from flufl.lock import Lock
from datetime import timedelta from datetime import timedelta
from .kojimock import KojiMock
from .. import util from .. import util
from ..arch_utils import getBaseArch from ..arch_utils import getBaseArch
@ -414,6 +413,92 @@ class KojiWrapper(object):
return cmd return cmd
def get_create_image_cmd(
self,
name,
version,
target,
arch,
ks_file,
repos,
image_type="live",
image_format=None,
release=None,
wait=True,
archive=False,
specfile=None,
ksurl=None,
):
# Usage: koji spin-livecd [options] <name> <version> <target> <arch> <kickstart-file> # noqa: E501
# Usage: koji spin-appliance [options] <name> <version> <target> <arch> <kickstart-file> # noqa: E501
# Examples:
# * name: RHEL-7.0
# * name: Satellite-6.0.1-RHEL-6
# ** -<type>.<arch>
# * version: YYYYMMDD[.n|.t].X
# * release: 1
cmd = self._get_cmd()
if image_type == "live":
cmd.append("spin-livecd")
elif image_type == "appliance":
cmd.append("spin-appliance")
else:
raise ValueError("Invalid image type: %s" % image_type)
if not archive:
cmd.append("--scratch")
cmd.append("--noprogress")
if wait:
cmd.append("--wait")
else:
cmd.append("--nowait")
if specfile:
cmd.append("--specfile=%s" % specfile)
if ksurl:
cmd.append("--ksurl=%s" % ksurl)
if isinstance(repos, list):
for repo in repos:
cmd.append("--repo=%s" % repo)
else:
cmd.append("--repo=%s" % repos)
if image_format:
if image_type != "appliance":
raise ValueError("Format can be specified only for appliance images'")
supported_formats = ["raw", "qcow", "qcow2", "vmx"]
if image_format not in supported_formats:
raise ValueError(
"Format is not supported: %s. Supported formats: %s"
% (image_format, " ".join(sorted(supported_formats)))
)
cmd.append("--format=%s" % image_format)
if release is not None:
cmd.append("--release=%s" % release)
# IMPORTANT: all --opts have to be provided *before* args
# Usage:
# koji spin-livecd [options] <name> <version> <target> <arch> <kickstart-file>
cmd.append(name)
cmd.append(version)
cmd.append(target)
# i686 -> i386 etc.
arch = getBaseArch(arch)
cmd.append(arch)
cmd.append(ks_file)
return cmd
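An illustrative invocation (all values hypothetical), showing the options-before-arguments ordering the comment above insists on:

```python
cmd = koji_wrapper.get_create_image_cmd(
    name="Fedora-Live",
    version="20231106.0",
    target="f39-candidate",
    arch="x86_64",
    ks_file="fedora-live.ks",
    repos=["https://example.com/compose/repo"],
)
# [..., "spin-livecd", "--scratch", "--noprogress", "--wait",
#  "--repo=https://example.com/compose/repo",
#  "Fedora-Live", "20231106.0", "f39-candidate", "x86_64", "fedora-live.ks"]
```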
def _has_connection_error(self, output): def _has_connection_error(self, output):
"""Checks if output indicates connection error.""" """Checks if output indicates connection error."""
return re.search("error: failed to connect\n$", output) return re.search("error: failed to connect\n$", output)
@ -527,7 +612,6 @@ class KojiWrapper(object):
"createImage", "createImage",
"createLiveMedia", "createLiveMedia",
"createAppliance", "createAppliance",
"createKiwiImage",
]: ]:
continue continue
@ -785,45 +869,6 @@ class KojiWrapper(object):
pass pass
class KojiMockWrapper(object):
lock = threading.Lock()
def __init__(self, compose, all_arches):
self.all_arches = all_arches
self.compose = compose
try:
self.profile = self.compose.conf["koji_profile"]
except KeyError:
raise RuntimeError("Koji profile must be configured")
with self.lock:
self.koji_module = koji.get_profile_module(self.profile)
session_opts = {}
for key in (
"timeout",
"keepalive",
"max_retries",
"retry_interval",
"anon_retry",
"offline_retry",
"offline_retry_interval",
"debug",
"debug_xmlrpc",
"serverca",
"use_fast_upload",
):
value = getattr(self.koji_module.config, key, None)
if value is not None:
session_opts[key] = value
self.koji_proxy = KojiMock(
packages_dir=self.koji_module.config.topdir,
modules_dir=os.path.join(
self.koji_module.config.topdir,
'modules',
),
all_arches=self.all_arches,
)
def get_buildroot_rpms(compose, task_id): def get_buildroot_rpms(compose, task_id):
"""Get build root RPMs - either from runroot or local""" """Get build root RPMs - either from runroot or local"""
result = [] result = []
@ -914,8 +959,7 @@ class KojiDownloadProxy:
:param str dest: file path to store the result in :param str dest: file path to store the result in
:returns: path to the downloaded file (same as dest) or None if the URL :returns: path to the downloaded file (same as dest) or None if the URL
""" """
# contextlib.closing is only needed in requests<2.18 with self.session.get(url, stream=True) as r:
with contextlib.closing(self.session.get(url, stream=True)) as r:
if r.status_code == 404: if r.status_code == 404:
self.logger.warning("GET %s NOT FOUND", url) self.logger.warning("GET %s NOT FOUND", url)
return None return None
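Background on this hunk: the bare `with` form requires requests >= 2.18, where Response became a context manager, while the contextlib.closing wrapper also covers older versions. A sketch (URL hypothetical):

```python
import contextlib
import requests

session = requests.Session()
url = "https://kojipkgs.example.com/work/foo.rpm"  # hypothetical

with contextlib.closing(session.get(url, stream=True)) as r:  # any requests version
    pass

with session.get(url, stream=True) as r:  # requests >= 2.18 only
    pass
```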

View File

@ -148,15 +148,6 @@ class UnifiedISO(object):
new_path = os.path.join(self.temp_dir, "trees", arch, old_relpath) new_path = os.path.join(self.temp_dir, "trees", arch, old_relpath)
makedirs(os.path.dirname(new_path)) makedirs(os.path.dirname(new_path))
# Resolve symlinks to external files. Symlinks within the
# provided `dir` are kept.
if os.path.islink(old_path):
real_path = os.readlink(old_path)
abspath = os.path.normpath(
os.path.join(os.path.dirname(old_path), real_path)
)
if not abspath.startswith(dir):
old_path = real_path
try: try:
self.linker.link(old_path, new_path) self.linker.link(old_path, new_path)
except OSError as exc: except OSError as exc:
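For reference, the block removed here resolved symlinks whose targets fall outside the tree being merged, so the linker would operate on the real file; a compressed sketch of that logic (paths hypothetical):

```python
import os

dir = "/compose/Server/x86_64/os"          # hypothetical tree root, as in the method
old_path = os.path.join(dir, "image.iso")  # may be a symlink

if os.path.islink(old_path):
    real_path = os.readlink(old_path)
    abspath = os.path.normpath(
        os.path.join(os.path.dirname(old_path), real_path)
    )
    if not abspath.startswith(dir):
        old_path = real_path  # external target: link the real file instead
```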

View File

@ -1,7 +1,9 @@
# Some packages must be installed via dnf/yum first, see doc/contributing.rst # Some packages must be installed via dnf/yum first, see doc/contributing.rst
dict.sorted
dogpile.cache dogpile.cache
flufl.lock ; python_version >= '3.0' flufl.lock ; python_version >= '3.0'
flufl.lock < 3.0 ; python_version <= '2.7' flufl.lock < 3.0 ; python_version <= '2.7'
funcsigs
jsonschema jsonschema
kobo kobo
koji koji
@ -12,4 +14,4 @@ ordered_set
productmd productmd
pykickstart pykickstart
python-multilib python-multilib
urlgrabber ; python_version < '3.0' urlgrabber

View File

@ -20,7 +20,7 @@ packages = sorted(packages)
setup( setup(
name="pungi", name="pungi",
version="4.7.0", version="4.5.1",
description="Distribution compose tool", description="Distribution compose tool",
url="https://pagure.io/pungi", url="https://pagure.io/pungi",
author="Dennis Gilmore", author="Dennis Gilmore",
@ -42,10 +42,6 @@ setup(
"pungi-config-dump = pungi.scripts.config_dump:cli_main", "pungi-config-dump = pungi.scripts.config_dump:cli_main",
"pungi-config-validate = pungi.scripts.config_validate:cli_main", "pungi-config-validate = pungi.scripts.config_validate:cli_main",
"pungi-cache-cleanup = pungi.scripts.cache_cleanup:main", "pungi-cache-cleanup = pungi.scripts.cache_cleanup:main",
"pungi-gather-modules = pungi.scripts.gather_modules:cli_main",
"pungi-gather-rpms = pungi.scripts.gather_rpms:cli_main",
"pungi-generate-packages-json = pungi.scripts.create_packages_json:cli_main", # noqa: E501
"pungi-create-extra-repo = pungi.scripts.create_extra_repo:cli_main"
] ]
}, },
scripts=["contrib/yum-dnf-compare/pungi-compare-depsolving"], scripts=["contrib/yum-dnf-compare/pungi-compare-depsolving"],
@ -66,5 +62,5 @@ setup(
"dogpile.cache", "dogpile.cache",
], ],
extras_require={':python_version=="2.7"': ["enum34", "lockfile"]}, extras_require={':python_version=="2.7"': ["enum34", "lockfile"]},
tests_require=["pytest", "pytest-cov", "pyfakefs"], tests_require=["mock", "pytest", "pytest-cov"],
) )

View File

@ -1 +0,0 @@
SHA512 (pungi-4.7.0.tar.bz2) = 55c7527a0dff6efa8ed13b1ccdfd3628686fadb55b78fb456e552f4972b831aa96f3ff37ac54837462d91df834157f38426e6b66b52216e1e5861628df724eca

View File

@ -1,5 +1,5 @@
mock; python_version < '3.3' mock
parameterized parameterized
pytest pytest
pytest-cov pytest-cov
unittest2; python_version < '3.0' unittest2

View File

@ -1,4 +1,4 @@
FROM registry.fedoraproject.org/fedora:latest FROM fedora:33
LABEL \ LABEL \
name="Pungi test" \ name="Pungi test" \
description="Run tests using tox with Python 3" \ description="Run tests using tox with Python 3" \

View File

@ -1,36 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<repomd xmlns="http://linux.duke.edu/metadata/repo" xmlns:rpm="http://linux.duke.edu/metadata/rpm">
<revision>1612479076</revision>
<data type="primary">
<checksum type="sha256">08941fae6bdb14f3b22bfad38b9d7dcb685a9df58fe8f515a3a0b2fe1af903bb</checksum>
<open-checksum type="sha256">2a15e618f049a883d360ccbf3e764b30640255f47dc526c633b1722fe23cbcbc</open-checksum>
<location href="repodata/08941fae6bdb14f3b22bfad38b9d7dcb685a9df58fe8f515a3a0b2fe1af903bb-primary.xml.gz"/>
<timestamp>1612479075</timestamp>
<size>1240</size>
<open-size>3888</open-size>
</data>
<data type="filelists">
<checksum type="sha256">e37a0b4a63b2b245dca1727195300cd3961f80aebc82ae7b9849dbf7482f5d0f</checksum>
<open-checksum type="sha256">b1782bc4207a5b7c3e64115d5a1d001802e8d363f022ea165df7cdab6f14651c</open-checksum>
<location href="repodata/e37a0b4a63b2b245dca1727195300cd3961f80aebc82ae7b9849dbf7482f5d0f-filelists.xml.gz"/>
<timestamp>1612479075</timestamp>
<size>439</size>
<open-size>1295</open-size>
</data>
<data type="other">
<checksum type="sha256">92992176bce71dcde9e4b6ad1442e7b5c7f3de9b7f019a2cd27d042ab38ea2b1</checksum>
<open-checksum type="sha256">3b847919691ad32279b13463de6c08f1f8b32f51e87b7d8d7e95a3ec2f46ef51</open-checksum>
<location href="repodata/92992176bce71dcde9e4b6ad1442e7b5c7f3de9b7f019a2cd27d042ab38ea2b1-other.xml.gz"/>
<timestamp>1612479075</timestamp>
<size>630</size>
<open-size>1911</open-size>
</data>
<data type="modules">
<checksum type="sha256">e7a671401f8e207e4cd3b90b4ac92d621f84a34dc9026f57c3f427fbed444c57</checksum>
<open-checksum type="sha256">d59fee86c18018cc18bb7325aa74aa0abf923c64d29a4ec45e08dcd01a0c3966</open-checksum>
<location href="repodata/e7a671401f8e207e4cd3b90b4ac92d621f84a34dc9026f57c3f427fbed444c57-modules.yaml.gz"/>
<timestamp>1612479075</timestamp>
<size>920</size>
<open-size>3308</open-size>
</data>
</repomd>

View File

@ -1,55 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<repomd xmlns="http://linux.duke.edu/metadata/repo" xmlns:rpm="http://linux.duke.edu/metadata/rpm">
<revision>1666177486</revision>
<data type="primary">
<checksum type="sha256">89cb9cc1181635c9147864a7076d91fb81072641d481cd202832a2d257453576</checksum>
<open-checksum type="sha256">07255d9856f7531b52a6459f6fc7701c6d93c6d6c29d1382d83afcc53f13494a</open-checksum>
<location href="repodata/89cb9cc1181635c9147864a7076d91fb81072641d481cd202832a2d257453576-primary.xml.gz"/>
<timestamp>1666177486</timestamp>
<size>1387</size>
<open-size>6528</open-size>
</data>
<data type="filelists">
<checksum type="sha256">f69ca03957574729fd5150335b0d87afddcfb37a97aed5b06272212854f1773d</checksum>
<open-checksum type="sha256">c2e1e674d7d48bccaa16cae0a5f70cb55ef4cd7352b4d9d4fdaa619075d07dbc</open-checksum>
<location href="repodata/f69ca03957574729fd5150335b0d87afddcfb37a97aed5b06272212854f1773d-filelists.xml.gz"/>
<timestamp>1666177486</timestamp>
<size>1252</size>
<open-size>5594</open-size>
</data>
<data type="other">
<checksum type="sha256">b3827bd6c9ea67ffa3912002515c64e4d9fe5c4dacbf7c46b0d8768b7abbb84f</checksum>
<open-checksum type="sha256">9ce24c526239e349d023c577b2ae3872c8b0f1888aed1fb24b9b9aa12063fdf3</open-checksum>
<location href="repodata/b3827bd6c9ea67ffa3912002515c64e4d9fe5c4dacbf7c46b0d8768b7abbb84f-other.xml.gz"/>
<timestamp>1666177486</timestamp>
<size>999</size>
<open-size>6320</open-size>
</data>
<data type="primary_db">
<checksum type="sha256">ab8df35061dfa0285069b843f24a7076e31266d9a8abe8282340bcb936aa61d7</checksum>
<open-checksum type="sha256">2bce9554ce4496cef34b5cd69f186f7f3143c7cabae8fa384fc5c9eeab326f7f</open-checksum>
<location href="repodata/ab8df35061dfa0285069b843f24a7076e31266d9a8abe8282340bcb936aa61d7-primary.sqlite.bz2"/>
<timestamp>1666177486</timestamp>
<size>3558</size>
<open-size>106496</open-size>
<database_version>10</database_version>
</data>
<data type="filelists_db">
<checksum type="sha256">8bcf6d40db4e922934ac47e8ac7fb8d15bdacf579af8c819d2134ed54d30550b</checksum>
<open-checksum type="sha256">f7001d1df7f5f7e4898919b15710bea8ed9711ce42faf68e22b757e63169b1fb</open-checksum>
<location href="repodata/8bcf6d40db4e922934ac47e8ac7fb8d15bdacf579af8c819d2134ed54d30550b-filelists.sqlite.bz2"/>
<timestamp>1666177486</timestamp>
<size>2360</size>
<open-size>28672</open-size>
<database_version>10</database_version>
</data>
<data type="other_db">
<checksum type="sha256">01b82e9eb7ee9151f283c6e761ae450de18ed2d64b5e32de88689eaf95216a80</checksum>
<open-checksum type="sha256">07f5b9750af1e440d37ca216e719dd288149e79e9132f2fdccb6f73b2e5dd541</open-checksum>
<location href="repodata/01b82e9eb7ee9151f283c6e761ae450de18ed2d64b5e32de88689eaf95216a80-other.sqlite.bz2"/>
<timestamp>1666177486</timestamp>
<size>2196</size>
<open-size>32768</open-size>
<database_version>10</database_version>
</data>
</repomd>

View File

@ -1,55 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<repomd xmlns="http://linux.duke.edu/metadata/repo" xmlns:rpm="http://linux.duke.edu/metadata/rpm">
<revision>1666177500</revision>
<data type="primary">
<checksum type="sha256">a1d342aa7cef3a2034fc3f9d6ee02d63572780bc76e61749a57e50b6b3ca9869</checksum>
<open-checksum type="sha256">a9e3eae447dd44282d7d96db5f15f049b757925397adb752f4df982176bab7e0</open-checksum>
<location href="repodata/a1d342aa7cef3a2034fc3f9d6ee02d63572780bc76e61749a57e50b6b3ca9869-primary.xml.gz"/>
<timestamp>1666177500</timestamp>
<size>3501</size>
<open-size>37296</open-size>
</data>
<data type="filelists">
<checksum type="sha256">6778922d5853d20f213ae7702699a76f1e87e55d6bfb5e4ac6a117d904d47b3c</checksum>
<open-checksum type="sha256">e30b666d9d88a70de69a08f45e6696bcd600c45485d856bd0213395d7da7bd49</open-checksum>
<location href="repodata/6778922d5853d20f213ae7702699a76f1e87e55d6bfb5e4ac6a117d904d47b3c-filelists.xml.gz"/>
<timestamp>1666177500</timestamp>
<size>27624</size>
<open-size>318187</open-size>
</data>
<data type="other">
<checksum type="sha256">5a60d79d8bce6a805f4fdb22fd891524359dce8ccc665c0b54e7299e79debe84</checksum>
<open-checksum type="sha256">b18138f4a3de45714e578fb1f30b7ec54fdcdaf1a22585891625b6af0894388e</open-checksum>
<location href="repodata/5a60d79d8bce6a805f4fdb22fd891524359dce8ccc665c0b54e7299e79debe84-other.xml.gz"/>
<timestamp>1666177500</timestamp>
<size>1876</size>
<open-size>28701</open-size>
</data>
<data type="primary_db">
<checksum type="sha256">c27bc2ce947173aba305041552c3c6d8db71442c1a2e5dcaf35ff750fe0469fc</checksum>
<open-checksum type="sha256">586e1af8934229925adb9e746ae5ced119859dfd97f4e3237399bb36a7d7f071</open-checksum>
<location href="repodata/c27bc2ce947173aba305041552c3c6d8db71442c1a2e5dcaf35ff750fe0469fc-primary.sqlite.bz2"/>
<timestamp>1666177500</timestamp>
<size>11528</size>
<open-size>126976</open-size>
<database_version>10</database_version>
</data>
<data type="filelists_db">
<checksum type="sha256">ed350865982e7a1e45b144839b56eac888e5d8f680571dd2cd06b37dc83e0fd8</checksum>
<open-checksum type="sha256">697903989d0f77de2d44a2b603e75c9b4ca23b3795eb136d175caf5666ce6459</open-checksum>
<location href="repodata/ed350865982e7a1e45b144839b56eac888e5d8f680571dd2cd06b37dc83e0fd8-filelists.sqlite.bz2"/>
<timestamp>1666177500</timestamp>
<size>20440</size>
<open-size>163840</open-size>
<database_version>10</database_version>
</data>
<data type="other_db">
<checksum type="sha256">35eff699131e0976429144c6f4514d21568177dc64bb4091c3ff62f76b293725</checksum>
<open-checksum type="sha256">3bd999a1bdf300df836a4607b7b75f845d8e1432e3e4e1ab6f0c7cc8a853db39</open-checksum>
<location href="repodata/35eff699131e0976429144c6f4514d21568177dc64bb4091c3ff62f76b293725-other.sqlite.bz2"/>
<timestamp>1666177500</timestamp>
<size>4471</size>
<open-size>49152</open-size>
<database_version>10</database_version>
</data>
</repomd>

View File

@ -1,58 +0,0 @@
[checksums]
images/boot.iso = sha256:fc8a4be604b6425746f12fa706116eb940f93358f036b8fbbe518b516cb6870c
[general]
; WARNING.0 = This section provides compatibility with pre-productmd treeinfos.
; WARNING.1 = Read productmd documentation for details about new format.
arch = x86_64
family = Test
name = Test 1.0
packagedir = Packages
platforms = x86_64,xen
repository = .
timestamp = 1531881582
variant = Server
variants = Client,Server
version = 1.0
[header]
type = productmd.treeinfo
version = 1.2
[images-x86_64]
boot.iso = images/boot.iso
[images-xen]
initrd = images/pxeboot/initrd.img
kernel = images/pxeboot/vmlinuz
[release]
name = Test
short = T
version = 1.0
[stage2]
mainimage = images/install.img
[tree]
arch = x86_64
build_timestamp = 1531881582
platforms = x86_64,xen
variants = Client,Server
[variant-Client]
id = Client
name = Client
packages = ../../../Client/x86_64/os/Packages
repository = ../../../Client/x86_64/os
type = variant
uid = Client
[variant-Server]
id = Server
name = Server
packages = Packages
repository = .
type = variant
uid = Server

View File

@ -1,20 +0,0 @@
---
document: modulemd
version: 2
data:
name: module
stream: master
version: 20190318
context: abcdef
arch: x86_64
summary: Dummy module
description: Dummy module
license:
module:
- Beerware
content:
- Beerware
artifacts:
rpms:
- foobar-0:1.0-1.noarch
...

View File

@ -1,20 +0,0 @@
---
document: modulemd
version: 2
data:
name: module
stream: master
version: 20190318
context: abcdef
arch: x86_64
summary: Dummy module
description: Dummy module
license:
module:
- Beerware
content:
- Beerware
artifacts:
rpms:
- foobar-0:1.0-1.noarch
...

View File

@ -1,20 +0,0 @@
---
document: modulemd
version: 2
data:
name: scratch-module
stream: master
version: 20200710
context: abcdef
arch: x86_64
summary: Dummy module
description: Dummy module
license:
module:
- Beerware
content:
- Beerware
artifacts:
rpms:
- foobar-0:1.0-1.noarch
...

View File

@ -1,20 +0,0 @@
---
document: modulemd
version: 2
data:
name: scratch-module
stream: master
version: 20200710
context: abcdef
arch: x86_64
summary: Dummy module
description: Dummy module
license:
module:
- Beerware
content:
- Beerware
artifacts:
rpms:
- foobar-0:1.0-1.noarch
...

View File

@ -2,13 +2,12 @@
import difflib import difflib
import errno import errno
import hashlib
import os import os
import shutil import shutil
import tempfile import tempfile
from collections import defaultdict from collections import defaultdict
from unittest import mock import mock
import six import six
from kobo.rpmlib import parse_nvr from kobo.rpmlib import parse_nvr
@ -362,9 +361,3 @@ def fake_run_in_threads(func, params, threads=None):
"""Like run_in_threads from Kobo, but actually runs tasks serially.""" """Like run_in_threads from Kobo, but actually runs tasks serially."""
for num, param in enumerate(params): for num, param in enumerate(params):
func(None, param, num) func(None, param, num)
def hash_string(alg, s):
m = hashlib.new(alg)
m.update(s.encode("utf-8"))
return m.hexdigest()

View File

@ -1,6 +1,6 @@
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
from unittest import mock import mock
import unittest import unittest
from pungi.arch import ( from pungi.arch import (

View File

@ -1,4 +1,4 @@
from unittest import mock import mock
try: try:
import unittest2 as unittest import unittest2 as unittest

View File

@ -1,15 +1,15 @@
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
try: try:
import unittest2 as unittest import unittest2 as unittest
except ImportError: except ImportError:
import unittest import unittest
from unittest import mock import mock
import six import six
from copy import copy from copy import copy
from six.moves import StringIO from six.moves import StringIO
from ddt import ddt, data
import os import os
@ -254,7 +254,6 @@ class TestBuildinstallPhase(PungiTestCase):
def test_starts_threads_for_each_cmd_with_lorax_koji_plugin( def test_starts_threads_for_each_cmd_with_lorax_koji_plugin(
self, get_volid, poolCls self, get_volid, poolCls
): ):
topurl = "https://example.com/composes/"
compose = BuildInstallCompose( compose = BuildInstallCompose(
self.topdir, self.topdir,
{ {
@ -265,7 +264,6 @@ class TestBuildinstallPhase(PungiTestCase):
"buildinstall_method": "lorax", "buildinstall_method": "lorax",
"lorax_use_koji_plugin": True, "lorax_use_koji_plugin": True,
"disc_types": {"dvd": "DVD"}, "disc_types": {"dvd": "DVD"},
"translate_paths": [(self.topdir, topurl)],
}, },
) )
@ -282,9 +280,9 @@ class TestBuildinstallPhase(PungiTestCase):
"version": "1", "version": "1",
"release": "1", "release": "1",
"sources": [ "sources": [
topurl + "work/amd64/repo/p1", self.topdir + "/work/amd64/repo/p1",
topurl + "work/amd64/repo/p2", self.topdir + "/work/amd64/repo/p2",
topurl + "work/amd64/comps_repo_Server", self.topdir + "/work/amd64/comps_repo_Server",
], ],
"variant": "Server", "variant": "Server",
"installpkgs": ["bash", "vim"], "installpkgs": ["bash", "vim"],
@ -301,6 +299,7 @@ class TestBuildinstallPhase(PungiTestCase):
"rootfs-size": None, "rootfs-size": None,
"dracut-args": [], "dracut-args": [],
"skip_branding": False, "skip_branding": False,
"outputdir": self.topdir + "/work/amd64/buildinstall/Server",
"squashfs_only": False, "squashfs_only": False,
"configuration_file": None, "configuration_file": None,
}, },
@ -309,9 +308,9 @@ class TestBuildinstallPhase(PungiTestCase):
"version": "1", "version": "1",
"release": "1", "release": "1",
"sources": [ "sources": [
topurl + "work/amd64/repo/p1", self.topdir + "/work/amd64/repo/p1",
topurl + "work/amd64/repo/p2", self.topdir + "/work/amd64/repo/p2",
topurl + "work/amd64/comps_repo_Client", self.topdir + "/work/amd64/comps_repo_Client",
], ],
"variant": "Client", "variant": "Client",
"installpkgs": [], "installpkgs": [],
@ -328,6 +327,7 @@ class TestBuildinstallPhase(PungiTestCase):
"rootfs-size": None, "rootfs-size": None,
"dracut-args": [], "dracut-args": [],
"skip_branding": False, "skip_branding": False,
"outputdir": self.topdir + "/work/amd64/buildinstall/Client",
"squashfs_only": False, "squashfs_only": False,
"configuration_file": None, "configuration_file": None,
}, },
@ -336,9 +336,9 @@ class TestBuildinstallPhase(PungiTestCase):
"version": "1", "version": "1",
"release": "1", "release": "1",
"sources": [ "sources": [
topurl + "work/x86_64/repo/p1", self.topdir + "/work/x86_64/repo/p1",
topurl + "work/x86_64/repo/p2", self.topdir + "/work/x86_64/repo/p2",
topurl + "work/x86_64/comps_repo_Server", self.topdir + "/work/x86_64/comps_repo_Server",
], ],
"variant": "Server", "variant": "Server",
"installpkgs": ["bash", "vim"], "installpkgs": ["bash", "vim"],
@ -355,6 +355,7 @@ class TestBuildinstallPhase(PungiTestCase):
"rootfs-size": None, "rootfs-size": None,
"dracut-args": [], "dracut-args": [],
"skip_branding": False, "skip_branding": False,
"outputdir": self.topdir + "/work/x86_64/buildinstall/Server",
"squashfs_only": False, "squashfs_only": False,
"configuration_file": None, "configuration_file": None,
}, },
@ -1233,9 +1234,9 @@ class BuildinstallThreadTestCase(PungiTestCase):
@mock.patch("pungi.wrappers.kojiwrapper.KojiWrapper") @mock.patch("pungi.wrappers.kojiwrapper.KojiWrapper")
@mock.patch("pungi.wrappers.kojiwrapper.get_buildroot_rpms") @mock.patch("pungi.wrappers.kojiwrapper.get_buildroot_rpms")
@mock.patch("pungi.phases.buildinstall.run") @mock.patch("pungi.phases.buildinstall.run")
@mock.patch("pungi.phases.buildinstall.download_and_extract_archive") @mock.patch("pungi.phases.buildinstall.move_all")
def test_buildinstall_thread_with_lorax_using_koji_plugin( def test_buildinstall_thread_with_lorax_using_koji_plugin(
self, download, run, get_buildroot_rpms, KojiWrapperMock, mock_tweak, mock_link self, move_all, run, get_buildroot_rpms, KojiWrapperMock, mock_tweak, mock_link
): ):
compose = BuildInstallCompose( compose = BuildInstallCompose(
self.topdir, self.topdir,
@ -1281,8 +1282,9 @@ class BuildinstallThreadTestCase(PungiTestCase):
self.cmd, self.cmd,
channel=None, channel=None,
packages=["lorax"], packages=["lorax"],
mounts=[self.topdir],
weight=123, weight=123,
chown_uid=None, chown_uid=os.getuid(),
) )
], ],
) )
@ -1323,14 +1325,13 @@ class BuildinstallThreadTestCase(PungiTestCase):
[mock.call(compose, "x86_64", compose.variants["Server"], False)], [mock.call(compose, "x86_64", compose.variants["Server"], False)],
) )
self.assertEqual( self.assertEqual(
download.call_args_list, move_all.call_args_list,
[ [
mock.call(compose, 1234, "results.tar.gz", destdir), mock.call(os.path.join(destdir, "results"), destdir, rm_src_dir=True),
mock.call( mock.call(
compose, os.path.join(destdir, "logs"),
1234,
"logs.tar.gz",
os.path.join(self.topdir, "logs/x86_64/buildinstall-Server-logs"), os.path.join(self.topdir, "logs/x86_64/buildinstall-Server-logs"),
rm_src_dir=True,
), ),
], ],
) )
@ -1813,7 +1814,6 @@ class BuildinstallThreadTestCase(PungiTestCase):
self.assertEqual(ret, None) self.assertEqual(ret, None)
@ddt
class TestSymlinkIso(PungiTestCase): class TestSymlinkIso(PungiTestCase):
def setUp(self): def setUp(self):
super(TestSymlinkIso, self).setUp() super(TestSymlinkIso, self).setUp()
@ -1829,13 +1829,8 @@ class TestSymlinkIso(PungiTestCase):
@mock.patch("pungi.phases.buildinstall.get_file_size") @mock.patch("pungi.phases.buildinstall.get_file_size")
@mock.patch("pungi.phases.buildinstall.iso") @mock.patch("pungi.phases.buildinstall.iso")
@mock.patch("pungi.phases.buildinstall.run") @mock.patch("pungi.phases.buildinstall.run")
@data(['Server'], ['BaseOS']) def test_hardlink(self, run, iso, get_file_size, get_mtime, ImageCls):
def test_hardlink(self, netinstall_variants, run, iso, get_file_size, get_mtime, ImageCls): self.compose.conf = {"buildinstall_symlink": False, "disc_types": {}}
self.compose.conf = {
"buildinstall_symlink": False,
"disc_types": {},
"netinstall_variants": netinstall_variants,
}
get_file_size.return_value = 1024 get_file_size.return_value = 1024
get_mtime.return_value = 13579 get_mtime.return_value = 13579
@ -1862,7 +1857,7 @@ class TestSymlinkIso(PungiTestCase):
) )
self.assertEqual(iso.get_implanted_md5.mock_calls, [mock.call(tgt)]) self.assertEqual(iso.get_implanted_md5.mock_calls, [mock.call(tgt)])
self.assertEqual(iso.get_manifest_cmd.mock_calls, [mock.call("image-name")]) self.assertEqual(iso.get_manifest_cmd.mock_calls, [mock.call("image-name")])
self.assertEqual(iso.get_volume_id.mock_calls, [mock.call(tgt, None)]) self.assertEqual(iso.get_volume_id.mock_calls, [mock.call(tgt)])
self.assertEqual( self.assertEqual(
run.mock_calls, run.mock_calls,
[ [
@ -1885,14 +1880,9 @@ class TestSymlinkIso(PungiTestCase):
self.assertEqual(image.bootable, True) self.assertEqual(image.bootable, True)
self.assertEqual(image.implant_md5, iso.get_implanted_md5.return_value) self.assertEqual(image.implant_md5, iso.get_implanted_md5.return_value)
self.assertEqual(image.can_fail, False) self.assertEqual(image.can_fail, False)
if 'Server' in netinstall_variants: self.assertEqual(
self.assertEqual( self.compose.im.add.mock_calls, [mock.call("Server", "x86_64", image)]
self.compose.im.add.mock_calls, [mock.call("Server", "x86_64", image)] )
)
else:
self.assertEqual(
self.compose.im.add.mock_calls, []
)
@mock.patch("pungi.phases.buildinstall.Image") @mock.patch("pungi.phases.buildinstall.Image")
@mock.patch("pungi.phases.buildinstall.get_mtime") @mock.patch("pungi.phases.buildinstall.get_mtime")
@ -1905,7 +1895,6 @@ class TestSymlinkIso(PungiTestCase):
self.compose.conf = { self.compose.conf = {
"buildinstall_symlink": False, "buildinstall_symlink": False,
"disc_types": {"boot": "netinst"}, "disc_types": {"boot": "netinst"},
"netinstall_variants": ['Server'],
} }
get_file_size.return_value = 1024 get_file_size.return_value = 1024
get_mtime.return_value = 13579 get_mtime.return_value = 13579
@ -1933,7 +1922,7 @@ class TestSymlinkIso(PungiTestCase):
) )
self.assertEqual(iso.get_implanted_md5.mock_calls, [mock.call(tgt)]) self.assertEqual(iso.get_implanted_md5.mock_calls, [mock.call(tgt)])
self.assertEqual(iso.get_manifest_cmd.mock_calls, [mock.call("image-name")]) self.assertEqual(iso.get_manifest_cmd.mock_calls, [mock.call("image-name")])
self.assertEqual(iso.get_volume_id.mock_calls, [mock.call(tgt, None)]) self.assertEqual(iso.get_volume_id.mock_calls, [mock.call(tgt)])
self.assertEqual( self.assertEqual(
run.mock_calls, run.mock_calls,
[ [

View File

@ -1,6 +1,6 @@
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
from unittest import mock import mock
try: try:
import unittest2 as unittest import unittest2 as unittest

View File

@ -1,7 +1,7 @@
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
import logging import logging
from unittest import mock import mock
try: try:
import unittest2 as unittest import unittest2 as unittest

View File

@ -7,7 +7,7 @@ except ImportError:
import unittest import unittest
import six import six
from unittest import mock import mock
from pungi import checks from pungi import checks
from tests.helpers import load_config, PKGSET_REPOS from tests.helpers import load_config, PKGSET_REPOS

View File

@ -1,7 +1,7 @@
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
from unittest import mock import mock
import os import os
import six import six

View File

@ -1,228 +0,0 @@
# coding=utf-8
import os
from unittest import TestCase, mock, main
import yaml
from pungi.scripts.create_extra_repo import CreateExtraRepo, ExtraVariantInfo, RepoInfo
FOLDER_WITH_TEST_DATA = os.path.join(
os.path.dirname(
os.path.abspath(__file__)
),
'data/test_create_extra_repo/',
)
TEST_MODULE_INFO = yaml.load("""
---
document: modulemd
version: 2
data:
name: perl-App-cpanminus
stream: 1.7044
version: 8030020210126085450
context: 3a33b840
arch: x86_64
summary: Get, unpack, build and install CPAN modules
description: >
This is a CPAN client that requires zero configuration, and stands alone but it's
maintainable and extensible with plug-ins and friendly to shell scripting.
license:
module:
- MIT
content:
- (GPL+ or Artistic) and GPLv2+
- ASL 2.0
- GPL+ or Artistic
dependencies:
- buildrequires:
perl: [5.30]
platform: [el8.3.0]
requires:
perl: [5.30]
perl-YAML: []
platform: [el8]
references:
community: https://metacpan.org/release/App-cpanminus
profiles:
common:
description: App-cpanminus distribution
rpms:
- perl-App-cpanminus
api:
rpms:
- perl-App-cpanminus
filter:
rpms:
- perl-CPAN-DistnameInfo-dummy
- perl-Test-Deep
buildopts:
rpms:
macros: >
%_without_perl_CPAN_Meta_Check_enables_extra_test 1
components:
rpms:
perl-App-cpanminus:
rationale: The API.
ref: perl-App-cpanminus-1.7044-5.module+el8.2.0+4278+abcfa81a.src.rpm
buildorder: 1
arches: [i686, x86_64]
perl-CPAN-DistnameInfo:
rationale: Run-time dependency.
ref: stream-0.12-rhel-8.3.0
arches: [i686, x86_64]
perl-CPAN-Meta-Check:
rationale: Run-time dependency.
ref: perl-CPAN-Meta-Check-0.014-6.module+el8.2.0+4278+abcfa81a.src.rpm
buildorder: 1
arches: [i686, x86_64]
perl-File-pushd:
rationale: Run-time dependency.
ref: perl-File-pushd-1.014-6.module+el8.2.0+4278+abcfa81a.src.rpm
arches: [i686, x86_64]
perl-Module-CPANfile:
rationale: Run-time dependency.
ref: perl-Module-CPANfile-1.1002-7.module+el8.2.0+4278+abcfa81a.src.rpm
arches: [i686, x86_64]
perl-Parse-PMFile:
rationale: Run-time dependency.
ref: perl-Parse-PMFile-0.41-7.module+el8.2.0+4278+abcfa81a.src.rpm
arches: [i686, x86_64]
perl-String-ShellQuote:
rationale: Run-time dependency.
ref: perl-String-ShellQuote-1.04-24.module+el8.2.0+4278+abcfa81a.src.rpm
arches: [i686, x86_64]
perl-Test-Deep:
rationale: Build-time dependency.
ref: stream-1.127-rhel-8.3.0
arches: [i686, x86_64]
artifacts:
rpms:
- perl-App-cpanminus-0:1.7044-5.module_el8.3.0+2027+c8990d1d.noarch
- perl-App-cpanminus-0:1.7044-5.module_el8.3.0+2027+c8990d1d.src
- perl-CPAN-Meta-Check-0:0.014-6.module_el8.3.0+2027+c8990d1d.noarch
- perl-CPAN-Meta-Check-0:0.014-6.module_el8.3.0+2027+c8990d1d.src
- perl-File-pushd-0:1.014-6.module_el8.3.0+2027+c8990d1d.noarch
- perl-File-pushd-0:1.014-6.module_el8.3.0+2027+c8990d1d.src
- perl-Module-CPANfile-0:1.1002-7.module_el8.3.0+2027+c8990d1d.noarch
- perl-Module-CPANfile-0:1.1002-7.module_el8.3.0+2027+c8990d1d.src
- perl-Parse-PMFile-0:0.41-7.module_el8.3.0+2027+c8990d1d.noarch
- perl-Parse-PMFile-0:0.41-7.module_el8.3.0+2027+c8990d1d.src
- perl-String-ShellQuote-0:1.04-24.module_el8.3.0+2027+c8990d1d.noarch
- perl-String-ShellQuote-0:1.04-24.module_el8.3.0+2027+c8990d1d.src
...
""", Loader=yaml.BaseLoader)
TEST_REPO_INFO = RepoInfo(
path=FOLDER_WITH_TEST_DATA,
folder='test_repo',
is_remote=False,
)
TEST_VARIANT_INFO = ExtraVariantInfo(
name='TestRepo',
arch='x86_64',
packages=[],
modules=[],
repos=[TEST_REPO_INFO]
)
BS_BUILD_INFO = {
'build_platforms': [
{
'architectures': ['non_fake_arch', 'fake_arch'],
'name': 'fake_platform'
}
]
}
class TestCreteExtraRepo(TestCase):
maxDiff = None
def test_01_get_repo_info_from_bs_repo(self):
auth_token = 'fake_auth_token'
build_id = 'fake_build_id'
arch = 'fake_arch'
packages = ['fake_package1', 'fake_package2']
modules = ['fake_module1', 'fake_module2']
request_object = mock.Mock()
request_object.raise_for_status = lambda: True
request_object.json = lambda: BS_BUILD_INFO
with mock.patch(
'pungi.scripts.create_extra_repo.requests.get',
return_value=request_object,
) as mock_request_get:
repos_info = CreateExtraRepo.get_repo_info_from_bs_repo(
auth_token=auth_token,
build_id=build_id,
arch=arch,
packages=packages,
modules=modules,
)
self.assertEqual(
[
ExtraVariantInfo(
name=f'{build_id}-fake_platform-{arch}',
arch=arch,
packages=packages,
modules=modules,
repos=[
RepoInfo(
path='https://build.cloudlinux.com/'
f'build_repos/{build_id}/fake_platform',
folder=arch,
is_remote=True,
)
]
)
],
repos_info,
)
mock_request_get.assert_called_once_with(
url=f'https://build.cloudlinux.com/api/v1/builds/{build_id}',
headers={
'Authorization': f'Bearer {auth_token}',
}
)
def test_02_create_extra_repo(self):
with mock.patch(
'pungi.scripts.create_extra_repo.'
'CreateExtraRepo._read_local_modules_yaml',
return_value=[],
) as mock__read_local_modules_yaml, mock.patch(
'pungi.scripts.create_extra_repo.'
'CreateExtraRepo._download_rpm_to_local_repo',
) as mock__download_rpm_to_local_repo, mock.patch(
'pungi.scripts.create_extra_repo.'
'CreateExtraRepo._dump_local_modules_yaml'
) as mock__dump_local_modules_yaml, mock.patch(
'pungi.scripts.create_extra_repo.'
'CreateExtraRepo._create_local_extra_repo'
) as mock__create_local_extra_repo:
cer = CreateExtraRepo(
variants=[TEST_VARIANT_INFO],
bs_auth_token='fake_auth_token',
local_repository_path='/path/to/local/repo',
clear_target_repo=False,
)
mock__read_local_modules_yaml.assert_called_once_with()
cer.create_extra_repo()
mock__download_rpm_to_local_repo.assert_called_once_with(
package_location='perl-App-cpanminus-1.7044-5.'
'module_el8.3.0+2027+c8990d1d.noarch.rpm',
repo_info=TEST_REPO_INFO,
)
mock__dump_local_modules_yaml.assert_called_once_with()
mock__create_local_extra_repo.assert_called_once_with()
self.assertEqual(
[TEST_MODULE_INFO],
cer.local_modules_data,
)
if __name__ == '__main__':
main()

View File

@ -1,112 +0,0 @@
# coding=utf-8
import os
from collections import defaultdict
from unittest import TestCase, mock, main
from pungi.scripts.create_packages_json import (
PackagesGenerator,
RepoInfo,
VariantInfo,
)
FOLDER_WITH_TEST_DATA = os.path.join(
os.path.dirname(
os.path.abspath(__file__)
),
'data/test_create_packages_json/',
)
test_repo_info = RepoInfo(
path=FOLDER_WITH_TEST_DATA,
folder='test_repo',
is_remote=False,
is_reference=True,
)
test_repo_info_2 = RepoInfo(
path=FOLDER_WITH_TEST_DATA,
folder='test_repo_2',
is_remote=False,
is_reference=True,
)
variant_info_1 = VariantInfo(
name='TestRepo',
arch='x86_64',
repos=[test_repo_info]
)
variant_info_2 = VariantInfo(
name='TestRepo2',
arch='x86_64',
repos=[test_repo_info_2]
)
class TestPackagesJson(TestCase):
def test_01_get_remote_file_content(self):
"""
Test getting the content of a remote file
"""
request_object = mock.Mock()
request_object.raise_for_status = lambda: True
request_object.content = b'TestContent'
with mock.patch(
'pungi.scripts.create_packages_json.requests.get',
return_value=request_object,
) as mock_requests_get, mock.patch(
'pungi.scripts.create_packages_json.tempfile.NamedTemporaryFile',
) as mock_tempfile:
mock_tempfile.return_value.__enter__.return_value.name = 'tmpfile'
packages_generator = PackagesGenerator(
variants=[],
excluded_packages=[],
included_packages=[],
)
file_name = packages_generator.get_remote_file_content(
file_url='fakeurl')
mock_requests_get.assert_called_once_with(url='fakeurl')
mock_tempfile.assert_called_once_with(delete=False)
mock_tempfile.return_value.__enter__().\
write.assert_called_once_with(b'TestContent')
self.assertEqual(
file_name,
'tmpfile',
)
def test_02_generate_additional_packages(self):
pg = PackagesGenerator(
variants=[
variant_info_1,
variant_info_2,
],
excluded_packages=['zziplib-utils'],
included_packages=['vim-file*'],
)
test_packages = defaultdict(
lambda: defaultdict(
lambda: defaultdict(
list,
)
)
)
test_packages['TestRepo']['x86_64']['zziplib'] = \
[
'zziplib.i686',
'zziplib.x86_64',
]
test_packages['TestRepo2']['x86_64']['vim'] = \
[
'vim-X11.i686',
'vim-common.i686',
'vim-enhanced.i686',
'vim-filesystem.noarch',
]
result = pg.generate_packages_json()
self.assertEqual(
test_packages,
result,
)
if __name__ == '__main__':
main()

View File

@ -2,11 +2,9 @@
import logging import logging
from unittest import mock import mock
import contextlib
import six import six
import productmd
import os import os
from tests import helpers from tests import helpers
@ -611,9 +609,7 @@ class CreateisoThreadTest(helpers.PungiTestCase):
iso.get_implanted_md5.call_args_list, iso.get_implanted_md5.call_args_list,
[mock.call(cmd["iso_path"], logger=compose._logger)], [mock.call(cmd["iso_path"], logger=compose._logger)],
) )
self.assertEqual( self.assertEqual(iso.get_volume_id.call_args_list, [mock.call(cmd["iso_path"])])
iso.get_volume_id.call_args_list, [mock.call(cmd["iso_path"], False)]
)
self.assertEqual(len(compose.im.add.call_args_list), 1) self.assertEqual(len(compose.im.add.call_args_list), 1)
args, _ = compose.im.add.call_args_list[0] args, _ = compose.im.add.call_args_list[0]
@ -696,9 +692,7 @@ class CreateisoThreadTest(helpers.PungiTestCase):
iso.get_implanted_md5.call_args_list, iso.get_implanted_md5.call_args_list,
[mock.call(cmd["iso_path"], logger=compose._logger)], [mock.call(cmd["iso_path"], logger=compose._logger)],
) )
self.assertEqual( self.assertEqual(iso.get_volume_id.call_args_list, [mock.call(cmd["iso_path"])])
iso.get_volume_id.call_args_list, [mock.call(cmd["iso_path"], False)]
)
self.assertEqual(len(compose.im.add.call_args_list), 2) self.assertEqual(len(compose.im.add.call_args_list), 2)
for args, _ in compose.im.add.call_args_list: for args, _ in compose.im.add.call_args_list:
@ -789,9 +783,7 @@ class CreateisoThreadTest(helpers.PungiTestCase):
iso.get_implanted_md5.call_args_list, iso.get_implanted_md5.call_args_list,
[mock.call(cmd["iso_path"], logger=compose._logger)], [mock.call(cmd["iso_path"], logger=compose._logger)],
) )
self.assertEqual( self.assertEqual(iso.get_volume_id.call_args_list, [mock.call(cmd["iso_path"])])
iso.get_volume_id.call_args_list, [mock.call(cmd["iso_path"], False)]
)
self.assertEqual(len(compose.im.add.call_args_list), 1) self.assertEqual(len(compose.im.add.call_args_list), 1)
args, _ = compose.im.add.call_args_list[0] args, _ = compose.im.add.call_args_list[0]
@ -972,9 +964,7 @@ class CreateisoThreadTest(helpers.PungiTestCase):
iso.get_implanted_md5.call_args_list, iso.get_implanted_md5.call_args_list,
[mock.call(cmd["iso_path"], logger=compose._logger)], [mock.call(cmd["iso_path"], logger=compose._logger)],
) )
self.assertEqual( self.assertEqual(iso.get_volume_id.call_args_list, [mock.call(cmd["iso_path"])])
iso.get_volume_id.call_args_list, [mock.call(cmd["iso_path"], False)]
)
self.assertEqual(len(compose.im.add.call_args_list), 1) self.assertEqual(len(compose.im.add.call_args_list), 1)
args, _ = compose.im.add.call_args_list[0] args, _ = compose.im.add.call_args_list[0]
@ -1386,9 +1376,7 @@ class CreateisoTryReusePhaseTest(helpers.PungiTestCase):
) )
def test_old_config_changed(self): def test_old_config_changed(self):
compose = helpers.DummyCompose( compose = helpers.DummyCompose(self.topdir, {"createiso_allow_reuse": True})
self.topdir, {"createiso_allow_reuse": True, "sigkeys": ["abcdef"]}
)
old_config = compose.conf.copy() old_config = compose.conf.copy()
old_config["release_version"] = "2" old_config["release_version"] = "2"
compose.load_old_compose_config.return_value = old_config compose.load_old_compose_config.return_value = old_config
@ -1401,26 +1389,8 @@ class CreateisoTryReusePhaseTest(helpers.PungiTestCase):
phase.try_reuse(cmd, compose.variants["Server"], "x86_64", opts) phase.try_reuse(cmd, compose.variants["Server"], "x86_64", opts)
) )
@mock.patch("pungi.phases.createiso.read_json_file")
def test_unsigned_packages_allowed(self, read_json_file):
compose = helpers.DummyCompose(self.topdir, {"createiso_allow_reuse": True})
compose.load_old_compose_config.return_value = compose.conf.copy()
phase = createiso.CreateisoPhase(compose, mock.Mock())
phase.logger = self.logger
cmd = {"disc_num": 1, "disc_count": 1}
opts = CreateIsoOpts(volid="new-volid")
read_json_file.return_value = {"opts": {"volid": "old-volid"}}
self.assertFalse(
phase.try_reuse(cmd, compose.variants["Server"], "x86_64", opts)
)
def test_no_old_metadata(self): def test_no_old_metadata(self):
compose = helpers.DummyCompose( compose = helpers.DummyCompose(self.topdir, {"createiso_allow_reuse": True})
self.topdir, {"createiso_allow_reuse": True, "sigkeys": ["abcdef"]}
)
compose.load_old_compose_config.return_value = compose.conf.copy() compose.load_old_compose_config.return_value = compose.conf.copy()
phase = createiso.CreateisoPhase(compose, mock.Mock()) phase = createiso.CreateisoPhase(compose, mock.Mock())
phase.logger = self.logger phase.logger = self.logger
@ -1433,9 +1403,7 @@ class CreateisoTryReusePhaseTest(helpers.PungiTestCase):
@mock.patch("pungi.phases.createiso.read_json_file") @mock.patch("pungi.phases.createiso.read_json_file")
def test_volume_id_differs(self, read_json_file): def test_volume_id_differs(self, read_json_file):
compose = helpers.DummyCompose( compose = helpers.DummyCompose(self.topdir, {"createiso_allow_reuse": True})
self.topdir, {"createiso_allow_reuse": True, "sigkeys": ["abcdef"]}
)
compose.load_old_compose_config.return_value = compose.conf.copy() compose.load_old_compose_config.return_value = compose.conf.copy()
phase = createiso.CreateisoPhase(compose, mock.Mock()) phase = createiso.CreateisoPhase(compose, mock.Mock())
phase.logger = self.logger phase.logger = self.logger
@ -1451,9 +1419,7 @@ class CreateisoTryReusePhaseTest(helpers.PungiTestCase):
@mock.patch("pungi.phases.createiso.read_json_file") @mock.patch("pungi.phases.createiso.read_json_file")
def test_packages_differ(self, read_json_file): def test_packages_differ(self, read_json_file):
compose = helpers.DummyCompose( compose = helpers.DummyCompose(self.topdir, {"createiso_allow_reuse": True})
self.topdir, {"createiso_allow_reuse": True, "sigkeys": ["abcdef"]}
)
compose.load_old_compose_config.return_value = compose.conf.copy() compose.load_old_compose_config.return_value = compose.conf.copy()
phase = createiso.CreateisoPhase(compose, mock.Mock()) phase = createiso.CreateisoPhase(compose, mock.Mock())
phase.logger = self.logger phase.logger = self.logger
@ -1475,9 +1441,7 @@ class CreateisoTryReusePhaseTest(helpers.PungiTestCase):
@mock.patch("pungi.phases.createiso.read_json_file") @mock.patch("pungi.phases.createiso.read_json_file")
def test_runs_perform_reuse(self, read_json_file): def test_runs_perform_reuse(self, read_json_file):
compose = helpers.DummyCompose( compose = helpers.DummyCompose(self.topdir, {"createiso_allow_reuse": True})
self.topdir, {"createiso_allow_reuse": True, "sigkeys": ["abcdef"]}
)
compose.load_old_compose_config.return_value = compose.conf.copy() compose.load_old_compose_config.return_value = compose.conf.copy()
phase = createiso.CreateisoPhase(compose, mock.Mock()) phase = createiso.CreateisoPhase(compose, mock.Mock())
phase.logger = self.logger phase.logger = self.logger
@@ -1631,103 +1595,3 @@ class ComposeConfGetIsoLevelTest(helpers.PungiTestCase):
                 compose, compose.variants["Client"], "x86_64"
             ),
         )
-
-
-def mk_mount(topdir, images):
-    @contextlib.contextmanager
-    def dummy_mount(path, logger):
-        treeinfo = [
-            "[general]",
-            "family = Test",
-            "version = 1.0",
-            "arch = x86_64",
-            "variant = Server",
-            "[checksums]",
-        ]
-        for image in images:
-            helpers.touch(os.path.join(topdir, image.path), image.content)
-            treeinfo.append("%s = sha256:%s" % (image.path, image.checksum))
-        helpers.touch(os.path.join(topdir, ".treeinfo"), "\n".join(treeinfo))
-        yield topdir
-
-    return dummy_mount
-
-
-class _MockRun:
-    """This class replaces kobo.shortcuts.run and validates that the correct
-    two commands are called. The assertions can not be done after the tested
-    function finishes because it will clean up the .treeinfo file that needs to
-    be checked.
-    """
-
-    def __init__(self):
-        self.num_calls = 0
-        self.asserts = [self._assert_xorriso, self._assert_implantisomd5]
-
-    def __call__(self, cmd, logfile):
-        self.num_calls += 1
-        self.asserts.pop(0)(cmd)
-
-    def _assert_xorriso(self, cmd):
-        assert cmd[0] == "xorriso"
-        ti = productmd.TreeInfo()
-        input_iso = None
-        for i, arg in enumerate(cmd):
-            if arg == "-map":
-                ti.load(cmd[i + 1])
-            if arg == "-outdev":
-                self.temp_iso = cmd[i + 1]
-            if arg == "-indev":
-                input_iso = cmd[i + 1]
-        assert self.input_iso == input_iso
-        assert ti.checksums.checksums[self.image_relative_path] == self.image_checksum
-
-    def _assert_implantisomd5(self, cmd):
-        assert cmd[0] == "/usr/bin/implantisomd5"
-        assert cmd[-1] == self.temp_iso
-
-
-class DummyImage:
-    def __init__(self, path, content, checksum=None):
-        self.path = path
-        self.content = content
-        self.checksum = checksum or helpers.hash_string("sha256", content)
-
-
-@mock.patch("os.rename")
-@mock.patch("pungi.phases.createiso.run", new_callable=_MockRun)
-class FixChecksumsTest(helpers.PungiTestCase):
-    def test_checksum_matches(self, mock_run, mock_rename):
-        compose = helpers.DummyCompose(self.topdir, {})
-        arch = "x86_64"
-        iso_path = "DUMMY_ISO"
-
-        with mock.patch(
-            "pungi.wrappers.iso.mount",
-            new=mk_mount(self.topdir, [DummyImage("images/eltorito.img", "eltorito")]),
-        ):
-            createiso.fix_treeinfo_checksums(compose, iso_path, arch)
-
-        self.assertEqual(mock_run.num_calls, 0)
-        self.assertEqual(mock_rename.call_args_list, [])
-
-    def test_checksum_fix(self, mock_run, mock_rename):
-        compose = helpers.DummyCompose(self.topdir, {})
-        arch = "x86_64"
-        img = "images/eltorito.img"
-        content = "eltorito"
-        iso_path = "DUMMY_ISO"
-        mock_run.input_iso = iso_path
-        mock_run.image_relative_path = "images/eltorito.img"
-        mock_run.image_checksum = ("sha256", helpers.hash_string("sha256", content))
-
-        with mock.patch(
-            "pungi.wrappers.iso.mount",
-            new=mk_mount(self.topdir, [DummyImage(img, content, "abc")]),
-        ):
-            createiso.fix_treeinfo_checksums(compose, iso_path, arch)
-
-        # The new image was copied over the old one
-        self.assertEqual(
-            mock_rename.call_args_list, [mock.call(mock_run.temp_iso, iso_path)]
-        )
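
The `_MockRun` helper removed above is a compact pattern for verifying a fixed sequence of external commands: `mock.patch(..., new_callable=_MockRun)` instantiates the class once and injects the instance in place of the patched function, so each assertion runs at call time, before the code under test deletes the temporary files being checked. A minimal, self-contained sketch of the same idea (the patched target and commands here are illustrative, not Pungi API):

```python
from unittest import mock
import subprocess


class RunRecorder:
    """Stand-in for a shell-out helper; checks each command in call order."""

    def __init__(self):
        self.num_calls = 0
        self.expected = [self._check_mkiso, self._check_implant]

    def __call__(self, cmd, logfile=None):
        self.num_calls += 1
        # Pop the next expectation so every call is validated exactly once.
        self.expected.pop(0)(cmd)

    def _check_mkiso(self, cmd):
        assert cmd[0] == "xorriso"

    def _check_implant(self, cmd):
        assert cmd[0] == "/usr/bin/implantisomd5"


# new_callable is invoked with no arguments to build the replacement object.
with mock.patch("subprocess.call", new_callable=RunRecorder) as recorder:
    subprocess.call(["xorriso", "-indev", "in.iso"])
    subprocess.call(["/usr/bin/implantisomd5", "out.iso"])

assert recorder.num_calls == 2
```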

View File

@@ -1,6 +1,6 @@
 # -*- coding: utf-8 -*-
-from unittest import mock
+import mock
 from parameterized import parameterized
 import os

View File

@@ -8,7 +8,7 @@ except ImportError:
 import glob
 import os
-from unittest import mock
+import mock
 import six

 from pungi.module_util import Modulemd

View File

@@ -1,6 +1,6 @@
 # -*- coding: utf-8 -*-
-from unittest import mock
+import mock
 import os
 from productmd.extra_files import ExtraFiles

View File

@@ -1,8 +1,8 @@
 # -*- coding: utf-8 -*-
-from typing import AnyStr, List
-from unittest import mock
-import six
 import logging
+import mock
+import six
 import os
@@ -130,7 +130,6 @@ class ExtraIsosThreadTest(helpers.PungiTestCase):
                 ),
             ],
         )
-        iso_path = os.path.join(self.topdir, "compose/Server/x86_64/iso/my.iso")
         self.assertEqual(
             rcc.call_args_list,
             [
@@ -149,7 +148,6 @@ class ExtraIsosThreadTest(helpers.PungiTestCase):
                     log_file=os.path.join(
                         self.topdir, "logs/x86_64/extraiso-my.iso.x86_64.log"
                     ),
-                    iso_path=iso_path,
                 )
             ],
         )
@@ -160,7 +158,7 @@ class ExtraIsosThreadTest(helpers.PungiTestCase):
             compose,
             server,
             "x86_64",
-            iso_path,
+            os.path.join(self.topdir, "compose/Server/x86_64/iso/my.iso"),
             True,
             additional_variants=["Client"],
         )
@@ -207,7 +205,6 @@ class ExtraIsosThreadTest(helpers.PungiTestCase):
                 ),
             ],
         )
-        iso_path = os.path.join(self.topdir, "compose/Server/x86_64/iso/my.iso")
         self.assertEqual(
             rcc.call_args_list,
             [
@@ -226,7 +223,6 @@ class ExtraIsosThreadTest(helpers.PungiTestCase):
                     log_file=os.path.join(
                         self.topdir, "logs/x86_64/extraiso-my.iso.x86_64.log"
                     ),
-                    iso_path=iso_path,
                 )
             ],
         )
@@ -237,7 +233,7 @@ class ExtraIsosThreadTest(helpers.PungiTestCase):
             compose,
             server,
             "x86_64",
-            iso_path,
+            os.path.join(self.topdir, "compose/Server/x86_64/iso/my.iso"),
             True,
             additional_variants=["Client"],
         )
@@ -282,7 +278,6 @@ class ExtraIsosThreadTest(helpers.PungiTestCase):
                 ),
             ],
         )
-        iso_path = os.path.join(self.topdir, "compose/Server/x86_64/iso/my.iso")
         self.assertEqual(
             rcc.call_args_list,
             [
@@ -301,7 +296,6 @@ class ExtraIsosThreadTest(helpers.PungiTestCase):
                     log_file=os.path.join(
                         self.topdir, "logs/x86_64/extraiso-my.iso.x86_64.log"
                     ),
-                    iso_path=iso_path,
                 )
             ],
         )
@@ -312,7 +306,7 @@ class ExtraIsosThreadTest(helpers.PungiTestCase):
             compose,
             server,
             "x86_64",
-            iso_path,
+            os.path.join(self.topdir, "compose/Server/x86_64/iso/my.iso"),
             True,
             additional_variants=["Client"],
         )
@@ -359,7 +353,6 @@ class ExtraIsosThreadTest(helpers.PungiTestCase):
                 ),
             ],
         )
-        iso_path = os.path.join(self.topdir, "compose/Server/x86_64/iso/my.iso")
         self.assertEqual(
             rcc.call_args_list,
             [
@@ -378,7 +371,6 @@ class ExtraIsosThreadTest(helpers.PungiTestCase):
                     log_file=os.path.join(
                         self.topdir, "logs/x86_64/extraiso-my.iso.x86_64.log"
                     ),
-                    iso_path=iso_path,
                 )
             ],
         )
@@ -389,7 +381,7 @@ class ExtraIsosThreadTest(helpers.PungiTestCase):
             compose,
             server,
             "x86_64",
-            iso_path,
+            os.path.join(self.topdir, "compose/Server/x86_64/iso/my.iso"),
             False,
             additional_variants=["Client"],
         )
@@ -431,7 +423,6 @@ class ExtraIsosThreadTest(helpers.PungiTestCase):
                 ),
             ],
         )
-        iso_path = os.path.join(self.topdir, "compose/Server/source/iso/my.iso")
         self.assertEqual(
             rcc.call_args_list,
             [
@@ -450,7 +441,6 @@ class ExtraIsosThreadTest(helpers.PungiTestCase):
                     log_file=os.path.join(
                         self.topdir, "logs/src/extraiso-my.iso.src.log"
                     ),
-                    iso_path=iso_path,
                 )
             ],
         )
@@ -461,7 +451,7 @@ class ExtraIsosThreadTest(helpers.PungiTestCase):
             compose,
             server,
             "src",
-            iso_path,
+            os.path.join(self.topdir, "compose/Server/source/iso/my.iso"),
             False,
             additional_variants=["Client"],
         )
@@ -624,7 +614,6 @@ class GetExtraFilesTest(helpers.PungiTestCase):
         )


-@mock.patch("pungi.phases.extra_isos.tweak_repo_treeinfo")
 @mock.patch("pungi.phases.extra_isos.tweak_treeinfo")
 @mock.patch("pungi.wrappers.iso.write_graft_points")
 @mock.patch("pungi.wrappers.iso.get_graft_points")
@@ -634,7 +623,7 @@ class GetIsoContentsTest(helpers.PungiTestCase):
         self.compose = helpers.DummyCompose(self.topdir, {})
         self.variant = self.compose.variants["Server"]

-    def test_non_bootable_binary(self, ggp, wgp, tt, trt):
+    def test_non_bootable_binary(self, ggp, wgp, tt):
         gp = {
             "compose/Client/x86_64/os/Packages": {"f/foo.rpm": "/mnt/f/foo.rpm"},
             "compose/Client/x86_64/os/repodata": {
@@ -704,15 +693,7 @@ class GetIsoContentsTest(helpers.PungiTestCase):
             ],
         )

-        # Check correct call to tweak_repo_treeinfo
-        self._tweak_repo_treeinfo_call_list_checker(
-            trt_mock=trt,
-            main_variant='Server',
-            addon_variants=['Client'],
-            sub_path='x86_64/os',
-        )
-
-    def test_inherit_extra_files(self, ggp, wgp, tt, trt):
+    def test_inherit_extra_files(self, ggp, wgp, tt):
         gp = {
             "compose/Client/x86_64/os/Packages": {"f/foo.rpm": "/mnt/f/foo.rpm"},
             "compose/Client/x86_64/os/repodata": {
@@ -786,15 +767,7 @@ class GetIsoContentsTest(helpers.PungiTestCase):
             ],
         )

-        # Check correct call to tweak_repo_treeinfo
-        self._tweak_repo_treeinfo_call_list_checker(
-            trt_mock=trt,
-            main_variant='Server',
-            addon_variants=['Client'],
-            sub_path='x86_64/os',
-        )
-
-    def test_source(self, ggp, wgp, tt, trt):
+    def test_source(self, ggp, wgp, tt):
         gp = {
             "compose/Client/source/tree/Packages": {"f/foo.rpm": "/mnt/f/foo.rpm"},
             "compose/Client/source/tree/repodata": {
@@ -864,15 +837,7 @@ class GetIsoContentsTest(helpers.PungiTestCase):
             ],
         )

-        # Check correct call to tweak_repo_treeinfo
-        self._tweak_repo_treeinfo_call_list_checker(
-            trt_mock=trt,
-            main_variant='Server',
-            addon_variants=['Client'],
-            sub_path='source/tree',
-        )
-
-    def test_bootable(self, ggp, wgp, tt, trt):
+    def test_bootable(self, ggp, wgp, tt):
         self.compose.conf["buildinstall_method"] = "lorax"
         bi_dir = os.path.join(self.topdir, "work/x86_64/buildinstall/Server")
@@ -898,8 +863,10 @@ class GetIsoContentsTest(helpers.PungiTestCase):
             "images/efiboot.img": os.path.join(iso_dir, "images/efiboot.img"),
         }

-        ggp.side_effect = lambda compose, x: (
-            gp[x[0][len(self.topdir) + 1 :]] if len(x) == 1 else bi_gp
+        ggp.side_effect = (
+            lambda compose, x: gp[x[0][len(self.topdir) + 1 :]]
+            if len(x) == 1
+            else bi_gp
         )

         gp_file = os.path.join(self.topdir, "work/x86_64/iso/my.iso-graft-points")
@@ -972,42 +939,6 @@ class GetIsoContentsTest(helpers.PungiTestCase):
             ],
         )

-        # Check correct call to tweak_repo_treeinfo
-        self._tweak_repo_treeinfo_call_list_checker(
-            trt_mock=trt,
-            main_variant='Server',
-            addon_variants=['Client'],
-            sub_path='x86_64/os',
-        )
-
-    def _tweak_repo_treeinfo_call_list_checker(
-            self,
-            trt_mock: mock.Mock,
-            main_variant: AnyStr,
-            addon_variants: List[AnyStr],
-            sub_path: AnyStr) -> None:
-        """
-        Check correct call to tweak_repo_treeinfo
-        """
-        path_to_treeinfo = os.path.join(
-            self.topdir,
-            'compose',
-            main_variant,
-            sub_path,
-            '.treeinfo',
-        )
-        self.assertEqual(
-            trt_mock.call_args_list,
-            [
-                mock.call(
-                    self.compose,
-                    addon_variants,
-                    path_to_treeinfo,
-                    path_to_treeinfo,
-                )
-            ]
-        )

 class GetFilenameTest(helpers.PungiTestCase):
     def test_use_original_name(self):
@@ -1085,15 +1016,6 @@ class TweakTreeinfoTest(helpers.PungiTestCase):
         self.assertFilesEqual(output, expected)

-    def test_repo_tweak(self):
-        compose = helpers.DummyCompose(self.topdir, {})
-        input = os.path.join(helpers.FIXTURE_DIR, "extraiso.treeinfo")
-        output = os.path.join(self.topdir, "actual-treeinfo")
-        expected = os.path.join(helpers.FIXTURE_DIR, "extraiso-tweaked-expected.treeinfo")
-
-        extra_isos.tweak_repo_treeinfo(compose, ["Client"], input, output)
-
-        self.assertFilesEqual(output, expected)


 class PrepareMetadataTest(helpers.PungiTestCase):
     @mock.patch("pungi.metadata.create_media_repo")
@@ -1156,9 +1078,7 @@ class ExtraisoTryReusePhaseTest(helpers.PungiTestCase):
         )

     def test_buildinstall_changed(self):
-        compose = helpers.DummyCompose(
-            self.topdir, {"extraiso_allow_reuse": True, "sigkeys": ["abcdef"]}
-        )
+        compose = helpers.DummyCompose(self.topdir, {"extraiso_allow_reuse": True})
         thread = extra_isos.ExtraIsosThread(compose, mock.Mock())
         thread.logger = self.logger
         thread.bi = mock.Mock()
@@ -1172,9 +1092,7 @@ class ExtraisoTryReusePhaseTest(helpers.PungiTestCase):
         )

     def test_no_old_config(self):
-        compose = helpers.DummyCompose(
-            self.topdir, {"extraiso_allow_reuse": True, "sigkeys": ["abcdef"]}
-        )
+        compose = helpers.DummyCompose(self.topdir, {"extraiso_allow_reuse": True})
         thread = extra_isos.ExtraIsosThread(compose, mock.Mock())
         thread.logger = self.logger
         opts = CreateIsoOpts()
@@ -1186,9 +1104,7 @@ class ExtraisoTryReusePhaseTest(helpers.PungiTestCase):
         )

     def test_old_config_changed(self):
-        compose = helpers.DummyCompose(
-            self.topdir, {"extraiso_allow_reuse": True, "sigkeys": ["abcdef"]}
-        )
+        compose = helpers.DummyCompose(self.topdir, {"extraiso_allow_reuse": True})
         old_config = compose.conf.copy()
         old_config["release_version"] = "2"
         compose.load_old_compose_config.return_value = old_config
@@ -1203,9 +1119,7 @@ class ExtraisoTryReusePhaseTest(helpers.PungiTestCase):
         )

     def test_no_old_metadata(self):
-        compose = helpers.DummyCompose(
-            self.topdir, {"extraiso_allow_reuse": True, "sigkeys": ["abcdef"]}
-        )
+        compose = helpers.DummyCompose(self.topdir, {"extraiso_allow_reuse": True})
         compose.load_old_compose_config.return_value = compose.conf.copy()
         thread = extra_isos.ExtraIsosThread(compose, mock.Mock())
         thread.logger = self.logger
@@ -1219,9 +1133,7 @@ class ExtraisoTryReusePhaseTest(helpers.PungiTestCase):

     @mock.patch("pungi.phases.extra_isos.read_json_file")
     def test_volume_id_differs(self, read_json_file):
-        compose = helpers.DummyCompose(
-            self.topdir, {"extraiso_allow_reuse": True, "sigkeys": ["abcdef"]}
-        )
+        compose = helpers.DummyCompose(self.topdir, {"extraiso_allow_reuse": True})
         compose.load_old_compose_config.return_value = compose.conf.copy()
         thread = extra_isos.ExtraIsosThread(compose, mock.Mock())
         thread.logger = self.logger
@@ -1238,9 +1150,7 @@ class ExtraisoTryReusePhaseTest(helpers.PungiTestCase):

     @mock.patch("pungi.phases.extra_isos.read_json_file")
     def test_packages_differ(self, read_json_file):
-        compose = helpers.DummyCompose(
-            self.topdir, {"extraiso_allow_reuse": True, "sigkeys": ["abcdef"]}
-        )
+        compose = helpers.DummyCompose(self.topdir, {"extraiso_allow_reuse": True})
         compose.load_old_compose_config.return_value = compose.conf.copy()
         thread = extra_isos.ExtraIsosThread(compose, mock.Mock())
         thread.logger = self.logger
@@ -1261,41 +1171,9 @@ class ExtraisoTryReusePhaseTest(helpers.PungiTestCase):
             )
         )

-    @mock.patch("pungi.phases.extra_isos.read_json_file")
-    def test_unsigned_packages(self, read_json_file):
-        compose = helpers.DummyCompose(self.topdir, {"extraiso_allow_reuse": True})
-        compose.load_old_compose_config.return_value = compose.conf.copy()
-        thread = extra_isos.ExtraIsosThread(compose, mock.Mock())
-        thread.logger = self.logger
-        thread.perform_reuse = mock.Mock()
-
-        new_graft_points = os.path.join(self.topdir, "new_graft_points")
-        helpers.touch(new_graft_points)
-        opts = CreateIsoOpts(graft_points=new_graft_points, volid="volid")
-
-        old_graft_points = os.path.join(self.topdir, "old_graft_points")
-        helpers.touch(old_graft_points)
-        dummy_iso_path = "dummy-iso-path/dummy.iso"
-        read_json_file.return_value = {
-            "opts": {
-                "graft_points": old_graft_points,
-                "volid": "volid",
-                "output_dir": os.path.dirname(dummy_iso_path),
-                "iso_name": os.path.basename(dummy_iso_path),
-            },
-        }
-
-        self.assertFalse(
-            thread.try_reuse(
-                compose, compose.variants["Server"], "x86_64", "abcdef", opts
-            )
-        )
-
     @mock.patch("pungi.phases.extra_isos.read_json_file")
     def test_runs_perform_reuse(self, read_json_file):
-        compose = helpers.DummyCompose(
-            self.topdir, {"extraiso_allow_reuse": True, "sigkeys": ["abcdef"]}
-        )
+        compose = helpers.DummyCompose(self.topdir, {"extraiso_allow_reuse": True})
         compose.load_old_compose_config.return_value = compose.conf.copy()
         thread = extra_isos.ExtraIsosThread(compose, mock.Mock())
         thread.logger = self.logger

View File

@@ -153,10 +153,7 @@ class TestParseOutput(unittest.TestCase):
         self.assertEqual(modules, set())

     def test_extracts_modules(self):
-        touch(
-            self.file,
-            "module:mod:master-1:20181003:cafebeef.x86_64@repo-0\n"
-        )
+        touch(self.file, "module:mod:master:20181003:cafebeef.x86_64@repo-0\n")
         packages, modules = fus.parse_output(self.file)
         self.assertEqual(packages, set())
-        self.assertEqual(modules, set(["mod:master_1:20181003:cafebeef"]))
+        self.assertEqual(modules, set(["mod:master:20181003:cafebeef"]))
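
For context, fus emits resolved modules as lines of the form `module:<name>:<stream>:<version>:<context>.<arch>@<repo>`, and the test asserts that `parse_output` reduces such a line to the bare NSVC. The replaced variant of the test additionally covered streams containing a dash (`master-1`), normalized to `master_1` in the parsed result. A rough, illustrative re-implementation of the reduction (not Pungi's actual `parse_output`):

```python
def parse_module_line(line):
    """Reduce "module:N:S:V:C.arch@repo" to "N:S:V:C".

    Illustrative only; assumes the line shape shown in the test above.
    """
    line = line.strip()
    if not line.startswith("module:"):
        return None
    nsvc_arch = line[len("module:"):].split("@", 1)[0]  # drop "@repo-0"
    nsvc, _, _arch = nsvc_arch.rpartition(".")  # drop trailing ".x86_64"
    return nsvc


assert (
    parse_module_line("module:mod:master:20181003:cafebeef.x86_64@repo-0\n")
    == "mod:master:20181003:cafebeef"
)
```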

View File

@@ -2286,7 +2286,6 @@ class DNFDepsolvingTestCase(DepsolvingBase, unittest.TestCase):
         conf = Conf(base_arch)
         conf.persistdir = persistdir
         conf.cachedir = self.cachedir
-        conf.optional_metadata_types = ["filelists"]
         if exclude:
             conf.exclude = exclude
         dnf = DnfWrapper(conf)
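
The dropped `conf.optional_metadata_types = ["filelists"]` line is worth noting: on recent DNF releases filelists metadata is optional and not loaded by default, so depsolving that resolves file-path dependencies must opt back in. A minimal sketch of the opt-in (assuming a DNF version, roughly 4.14 or later, where the option exists):

```python
import dnf

base = dnf.Base()
# Ask DNF to download filelists metadata so file-path dependencies
# (e.g. /usr/bin/bash) can be resolved during depsolving.
base.conf.optional_metadata_types = ["filelists"]
base.read_all_repos()
base.fill_sack()
```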

View File

@@ -1,6 +1,6 @@
 # -*- coding: utf-8 -*-
-from unittest import mock
+import mock
 from pungi.phases.gather.methods import method_deps as deps
 from tests import helpers

View File

@@ -2,7 +2,7 @@
 from collections import namedtuple
 import copy
-from unittest import mock
+import mock
 import os
 import six

View File

@@ -1,6 +1,6 @@
 # -*- coding: utf-8 -*-
-from unittest import mock
+import mock
 import os
 import six

View File

@@ -1,124 +0,0 @@
-# -*- coding: utf-8 -*-
-
-import gzip
-import os
-from io import StringIO
-
-import yaml
-from pungi.scripts.gather_modules import collect_modules, EMPTY_FILE
-import unittest
-from pyfakefs.fake_filesystem_unittest import TestCase
-
-MARIADB_MODULE = yaml.load("""
----
-document: modulemd
-version: 2
-data:
-  name: mariadb-devel
-  stream: 10.3-1
-  version: 8010020200108182321
-  context: cdc1202b
-  arch: x86_64
-  summary: MariaDB Module
-  description: >-
-    MariaDB is a community developed branch of MySQL.
-  components:
-    rpms:
-      Judy:
-        rationale: MariaDB dependency for OQgraph computation engine
-        ref: a3583b33f939e74a530f2a1dff0552dff2c8ea73
-        buildorder: 4
-        arches: [aarch64, i686, ppc64le, x86_64]
-  artifacts:
-    rpms:
-    - Judy-0:1.0.5-18.module_el8.1.0+217+4d875839.i686
-    - Judy-debuginfo-0:1.0.5-18.module_el8.1.0+217+4d875839.i686
-""", Loader=yaml.BaseLoader)
-
-JAVAPACKAGES_TOOLS_MODULE = yaml.load("""
----
-document: modulemd
-version: 2
-data:
-  name: javapackages-tools
-  stream: 201801
-  version: 8000020190628172923
-  context: b07bea58
-  arch: x86_64
-  summary: Tools and macros for Java packaging support
-  description: >-
-    Java Packages Tools is a collection of tools that make it easier to build RPM
-    packages containing software running on Java platform.
-  components:
-    rpms:
-      ant:
-        rationale: "Runtime dependency of ant-contrib"
-        ref: 2eaf095676540e2805ee7e8c7f6f78285c428fdc
-        arches: [aarch64, i686, ppc64le, x86_64]
-  artifacts:
-    rpms:
-    - ant-0:1.10.5-1.module_el8.0.0+30+832da3a1.noarch
-    - ant-0:1.10.5-1.module_el8.0.0+30+832da3a1.src
-""", Loader=yaml.BaseLoader)
-
-ANT_DEFAULTS = yaml.load("""
-data:
-  module: ant
-  profiles:
-    '1.10':
-    - common
-  stream: '1.10'
-document: modulemd-defaults
-version: '1'
-""", Loader=yaml.BaseLoader)
-
-PATH_TO_KOJI = '/path/to/koji'
-MODULES_YAML_GZ = 'modules.yaml.gz'
-
-
-class TestModulesYamlParser(TestCase):
-    maxDiff = None
-
-    def setUp(self):
-        self.setUpPyfakefs()
-
-    def _prepare_test_data(self):
-        """
-        Create modules.yaml.gz with some test data
-        """
-        os.makedirs(PATH_TO_KOJI)
-        modules_gz_path = os.path.join(PATH_TO_KOJI, MODULES_YAML_GZ)
-        # dump modules into compressed file as in generic repos for rpm
-        io = StringIO()
-        yaml.dump_all([MARIADB_MODULE, JAVAPACKAGES_TOOLS_MODULE, ANT_DEFAULTS], io)
-        with open(os.path.join(PATH_TO_KOJI, MODULES_YAML_GZ), 'wb') as f:
-            f.write(gzip.compress(io.getvalue().encode()))
-        return modules_gz_path
-
-    def test_export_modules(self):
-        modules_gz_path = self._prepare_test_data()
-
-        paths = [open(modules_gz_path, 'rb')]
-        collect_modules(paths, PATH_TO_KOJI)
-
-        # check directory structure matches expected
-        self.assertEqual([MODULES_YAML_GZ, 'modules', 'module_defaults'], os.listdir(PATH_TO_KOJI))
-        self.assertEqual(['mariadb-devel-10.3_1-8010020200108182321.cdc1202b',
-                          'javapackages-tools-201801-8000020190628172923.b07bea58'],
-                         os.listdir(os.path.join(PATH_TO_KOJI, 'modules/x86_64')))
-        self.assertEqual([EMPTY_FILE, 'ant.yaml'],
-                         os.listdir(os.path.join(PATH_TO_KOJI, 'module_defaults')))
-
-        # check that modules were exported
-        self.assertEqual(MARIADB_MODULE, yaml.safe_load(
-            open(os.path.join(PATH_TO_KOJI, 'modules/x86_64', 'mariadb-devel-10.3_1-8010020200108182321.cdc1202b'))))
-        self.assertEqual(JAVAPACKAGES_TOOLS_MODULE, yaml.safe_load(
-            open(os.path.join(PATH_TO_KOJI, 'modules/x86_64', 'javapackages-tools-201801-8000020190628172923.b07bea58'))))
-
-        # check that defaults were copied
-        self.assertEqual(ANT_DEFAULTS, yaml.safe_load(
-            open(os.path.join(PATH_TO_KOJI, 'module_defaults', 'ant.yaml'))))
-
-
-if __name__ == '__main__':
-    unittest.main()
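
The expected directory names in the deleted test encode each module's NSVC as `<name>-<stream>-<version>.<context>`, with dashes in the stream replaced by underscores (stream `10.3-1` becomes `10.3_1`) so the field separators stay unambiguous. A sketch of that naming rule as implied by the assertions above:

```python
def module_dir_name(name, stream, version, context):
    # Replace dashes in the stream so the N-S-V.C separators stay unambiguous.
    return "%s-%s-%s.%s" % (name, stream.replace("-", "_"), version, context)


assert (
    module_dir_name("mariadb-devel", "10.3-1", "8010020200108182321", "cdc1202b")
    == "mariadb-devel-10.3_1-8010020200108182321.cdc1202b"
)
```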

View File

@@ -4,7 +4,7 @@ import copy
 import json
 import os

-from unittest import mock
+import mock

 try:
     import unittest2 as unittest
@@ -1057,8 +1057,10 @@ class TestGatherPackages(helpers.PungiTestCase):
     @mock.patch("pungi.phases.gather.get_gather_method")
     def test_hybrid_method(self, get_gather_method, get_variant_packages):
         packages, groups, filters = mock.Mock(), mock.Mock(), mock.Mock()
-        get_variant_packages.side_effect = lambda c, v, a, s, p: (
-            (packages, groups, filters) if s == "comps" else (None, None, None)
+        get_variant_packages.side_effect = (
+            lambda c, v, a, s, p: (packages, groups, filters)
+            if s == "comps"
+            else (None, None, None)
         )
         get_gather_method.return_value.return_value.return_value = {
             "rpm": [],

View File

@@ -1,151 +0,0 @@
-# -*- coding: utf-8 -*-
-
-import os
-import unittest
-from pathlib import Path
-
-from pyfakefs.fake_filesystem_unittest import TestCase
-
-from pungi.scripts.gather_rpms import search_rpms, copy_rpms, Package
-from productmd.common import parse_nvra
-
-PATH_TO_REPOS = '/path/to/repos'
-MODULES_YAML_GZ = 'modules.yaml.gz'
-
-
-class TestGatherRpms(TestCase):
-    maxDiff = None
-
-    FILES_TO_CREATE = [
-        'powertools/Packages/libvirt-6.0.0-28.module_el'
-        '8.3.0+555+a55c8938.i686.rpm',
-        'powertools/Packages/libgit2-devel-0.26.8-2.el8.x86_64.rpm',
-        'powertools/Packages/xalan-j2-2.7.1-38.module_el'
-        '8.0.0+30+832da3a1.noarch.rpm',
-        'appstream/Packages/bnd-maven-plugin-3.5.0-4.module_el'
-        '8.0.0+30+832da3a1.noarch.rpm',
-        'appstream/Packages/OpenEXR-devel-2.2.0-11.el8.i686.rpm',
-        'appstream/Packages/mingw-binutils-generic-2.30-1.el8.x86_64.rpm',
-        'appstream/Packages/somenonrpm',
-    ]
-
-    def setUp(self):
-        self.setUpPyfakefs()
-
-        os.makedirs(PATH_TO_REPOS)
-
-        for filepath in self.FILES_TO_CREATE:
-            os.makedirs(
-                os.path.join(PATH_TO_REPOS, os.path.dirname(filepath)),
-                exist_ok=True,
-            )
-            open(os.path.join(PATH_TO_REPOS, filepath), 'w').close()
-
-    def test_gather_rpms(self):
-        self.assertEqual(
-            [Package(nvra=parse_nvra('libvirt-6.0.0-28.module_'
-                                     'el8.3.0+555+a55c8938.i686'),
-                     path=Path(
-                         f'{PATH_TO_REPOS}/powertools/Packages/'
-                         f'libvirt-6.0.0-28.module_el'
-                         f'8.3.0+555+a55c8938.i686.rpm'
-                     )),
-             Package(nvra=parse_nvra('libgit2-devel-0.26.8-2.el8.x86_64'),
-                     path=Path(
-                         f'{PATH_TO_REPOS}/powertools/Packages/'
-                         f'libgit2-devel-0.26.8-2.el8.x86_64.rpm'
-                     )),
-             Package(nvra=parse_nvra('xalan-j2-2.7.1-38.module_el'
-                                     '8.0.0+30+832da3a1.noarch'),
-                     path=Path(
-                         f'{PATH_TO_REPOS}/powertools/Packages/'
-                         f'xalan-j2-2.7.1-38.module_el'
-                         f'8.0.0+30+832da3a1.noarch.rpm'
-                     )),
-             Package(nvra=parse_nvra('bnd-maven-plugin-3.5.0-4.module_el'
-                                     '8.0.0+30+832da3a1.noarch'),
-                     path=Path(
-                         '/path/to/repos/appstream/Packages/'
-                         'bnd-maven-plugin-3.5.0-4.module_el'
-                         '8.0.0+30+832da3a1.noarch.rpm'
-                     )),
-             Package(nvra=parse_nvra('OpenEXR-devel-2.2.0-11.el8.i686'),
-                     path=Path(
-                         f'{PATH_TO_REPOS}/appstream/Packages/'
-                         f'OpenEXR-devel-2.2.0-11.el8.i686.rpm'
-                     )),
-             Package(nvra=parse_nvra('mingw-binutils-generic-'
-                                     '2.30-1.el8.x86_64'),
-                     path=Path(
-                         f'{PATH_TO_REPOS}/appstream/Packages/'
-                         f'mingw-binutils-generic-2.30-1.el8.x86_64.rpm'
-                     ))
-             ],
-            search_rpms(Path(PATH_TO_REPOS))
-        )
-
-    def test_copy_rpms(self):
-        target_path = Path('/mnt/koji')
-        packages = [
-            Package(nvra=parse_nvra('libvirt-6.0.0-28.module_'
-                                    'el8.3.0+555+a55c8938.i686'),
-                    path=Path(
-                        f'{PATH_TO_REPOS}/powertools/Packages/'
-                        f'libvirt-6.0.0-28.module_el'
-                        f'8.3.0+555+a55c8938.i686.rpm'
-                    )),
-            Package(nvra=parse_nvra('libgit2-devel-0.26.8-2.el8.x86_64'),
-                    path=Path(
-                        f'{PATH_TO_REPOS}/powertools/Packages/'
-                        f'libgit2-devel-0.26.8-2.el8.x86_64.rpm'
-                    )),
-            Package(nvra=parse_nvra('xalan-j2-2.7.1-38.module_'
-                                    'el8.0.0+30+832da3a1.noarch'),
-                    path=Path(
-                        f'{PATH_TO_REPOS}/powertools/Packages/'
-                        f'xalan-j2-2.7.1-38.module_el'
-                        f'8.0.0+30+832da3a1.noarch.rpm'
-                    )),
-            Package(nvra=parse_nvra('bnd-maven-plugin-3.5.0-4.module_el'
-                                    '8.0.0+30+832da3a1.noarch'),
-                    path=Path(
-                        '/path/to/repos/appstream/Packages/'
-                        'bnd-maven-plugin-3.5.0-4.module_el'
-                        '8.0.0+30+832da3a1.noarch.rpm'
-                    )),
-            Package(nvra=parse_nvra('OpenEXR-devel-2.2.0-11.el8.i686'),
-                    path=Path(
-                        f'{PATH_TO_REPOS}/appstream/Packages/'
-                        f'OpenEXR-devel-2.2.0-11.el8.i686.rpm'
-                    )),
-            Package(nvra=parse_nvra('mingw-binutils-generic-'
-                                    '2.30-1.el8.x86_64'),
-                    path=Path(
-                        f'{PATH_TO_REPOS}/appstream/Packages/'
-                        f'mingw-binutils-generic-2.30-1.el8.x86_64.rpm'
-                    ))
-        ]
-
-        copy_rpms(packages, target_path, [])
-
-        self.assertCountEqual([
-            'xalan-j2-2.7.1-38.module_el8.0.0+30+832da3a1.noarch.rpm',
-            'bnd-maven-plugin-3.5.0-4.module_el8.0.0+30+832da3a1.noarch.rpm'
-        ], os.listdir(target_path / 'noarch'))
-
-        self.assertCountEqual([
-            'libgit2-devel-0.26.8-2.el8.x86_64.rpm',
-            'mingw-binutils-generic-2.30-1.el8.x86_64.rpm'
-        ], os.listdir(target_path / 'x86_64'))
-
-        self.assertCountEqual([
-            'libvirt-6.0.0-28.module_el8.3.0+555+a55c8938.i686.rpm',
-            'OpenEXR-devel-2.2.0-11.el8.i686.rpm'
-        ], os.listdir(target_path / 'i686'))
-
-        self.assertCountEqual([
-            'i686', 'x86_64', 'noarch'
-        ], os.listdir(target_path))
-
-
-if __name__ == '__main__':
-    unittest.main()
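
As the deleted `test_copy_rpms` asserts, `copy_rpms` buckets packages into per-architecture subdirectories derived from the parsed NVRA. A rough sketch of that layout logic (illustrative, not the script's actual implementation):

```python
import shutil
from pathlib import Path

from productmd.common import parse_nvra


def copy_by_arch(rpm_paths, target):
    """Copy each RPM into <target>/<arch>/, with arch parsed from the filename."""
    for path in rpm_paths:
        path = Path(path)
        arch = parse_nvra(path.stem)["arch"]  # e.g. "noarch", "x86_64", "i686"
        dest = Path(target) / arch
        dest.mkdir(parents=True, exist_ok=True)
        shutil.copy2(path, dest / path.name)
```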

View File

@@ -5,7 +5,7 @@ try:
 except ImportError:
     import unittest

-from unittest import mock
+import mock
 import six

 from pungi.phases.gather.sources.source_module import GatherSourceModule

Some files were not shown because too many files have changed in this diff.