Compare commits

...

28 Commits

Author SHA1 Message Date
Haibo Lin 467c7a7f6a 4.3.8 release
JIRA: RHELCMP-11448
Signed-off-by: Haibo Lin <hlin@redhat.com>
2023-03-28 18:05:15 +08:00
Lubomír Sedlář e1d7544c2b createiso: Update possibly changed file on DVD
There's no good way of detecting if the buildinstall phase tweaked the boot
configuration (and efiboot.img). We should update those files on the DVD
just to be sure.

The .discinfo file is always different and needs to be updated.

Relates: https://pagure.io/pungi/issue/1647
JIRA: RHELCMP-10811
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-03-27 12:40:39 +00:00
Lubomír Sedlář a71c8e23be pkgset: Stop reuse if configuration changed
When the options controlling excluded arches change, reuse should be broken.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-03-22 12:56:02 +00:00
Lubomír Sedlář ab508c1511 Allow disabling inheriting ExcludeArch to noarch packages
Copying ExcludeArch/ExclusiveArch from the source rpm to noarch packages is
an easy way to block shipping a particular noarch package on a certain
architecture. However, there is no way to bypass it, and it is rather
confusing and not discoverable.

An alternative way to remove an unwanted package is to use the good old
`filter_packages`, which has enough granularity to remove pretty much
anything from anywhere. The only downside is that it requires a change
in configuration, so it can't be done by a packager directly from a spec
file.

When we decide to break backwards compatibility, this option should be
removed along with the entire ExcludeArch/ExclusiveArch inheritance.

JIRA: ENGCMP-2606
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-03-22 12:56:02 +00:00
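For illustration, here is a minimal sketch of what the two approaches look like in a Pungi configuration file (Python syntax). The option names come from this change; the variant regex, architecture, and package name are hypothetical:

    # Stop copying ExcludeArch/ExclusiveArch from source rpms to noarch packages.
    pkgset_inherit_exclusive_arch_to_noarch = False

    # Exclude an unwanted noarch package explicitly instead; the variant
    # regex, architecture, and package name below are hypothetical.
    filter_packages = [
        ("^Server$", {"s390x": ["example-noarch-pkg"]}),
    ]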
Lubomír Sedlář f960b4d155 pkgset: Support extra builds with no tags
This is a rather fringe use case. If the configuration contains
pkgset_koji_builds or pkgset_koji_scratch_tasks but no pkgset_koji_tag,
the compose will be empty.

The expectation, though, is that the packages should still be pulled in.

The extra RPMs are added to all non-modular tags because they are
supposed to mask builds of the same packages (e.g. the user may want to
explicitly pull in an older version than the tagged one).

This patch adds support for composes containing only explicitly listed
builds by creating a dummy package set that is not actually using any
tag.

JIRA: RHELCMP-11385
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-03-17 15:10:35 +01:00
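A minimal sketch of such a configuration (the option names are the real ones referenced above; the build NVR is hypothetical):

    pkgset_source = "koji"
    # No pkgset_koji_tag is set. Before this patch such a compose would be
    # empty; with the dummy package set it contains just the listed builds.
    pkgset_koji_builds = ["bash-5.2.15-1.fc38"]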
Lubomír Sedlář 602b698080 buildinstall: Avoid pointlessly tweaking the boot images
Only modify boot images if there actually is some change.

The tweak function updates config files with volume id and kickstart
file. Even if we don't have a kickstart and there is no change in the
config files, the image will be regenerated. This leads to a change in
checksum for no good reason.

This patch keeps track of modified config files. If there are none, it
avoids touching anything else.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-03-16 07:46:56 +00:00
Haibo Lin b30f7e0d83 Prevent to reuse if unsigned packages are allowed
JIRA: RHELCMP-8415
Signed-off-by: Haibo Lin <hlin@redhat.com>
2023-03-16 15:32:09 +08:00
Lubomír Sedlář 0c3b6e22f9 Pass parent id/respin id to CTS
When the --target-dir option is used, the compose can be created in CTS,
but the parent and respin information is not passed through. That leads
to data missing later on.

JIRA: RHELCMP-11411
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-03-14 10:51:34 +01:00
Haibo Lin 3175ede38a Exclude existing files in boot.iso
JIRA: RHELCMP-10811
Fixes: https://pagure.io/pungi/issue/1647
Signed-off-by: Haibo Lin <hlin@redhat.com>
2023-03-09 15:33:25 +08:00
Lubomír Sedlář 8920eef339 image-build/osbuild: Pull ISOs into the compose
OSBuild tasks can produce ISO files. If they do, we should include them
in the compose, and we should pull them into the iso/ subdirectory
together with other ISOs.

Fixes: https://pagure.io/pungi/issue/1657
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-03-06 09:35:47 +01:00
Lubomír Sedlář 58036eab84 Retry 401 error from CTS
This could be a transient error caused by Kerberos server instability.

JIRA: RHELCMP-11251
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-02-28 10:14:02 +01:00
Lubomír Sedlář a4476f2570 gather: Better detection of debuginfo in lookaside
If the depsolver wants to include a package that is present in both the
source repo and a lookaside repo, it reliably detects binary packages
present in lookaside, but for debuginfo it's not so reliable.

There is a separate package object for each package in each repo.
Depending on which one is used, debuginfo could be included in the
result or not. This patch fixes that by actually checking whether the same
package is present in any lookaside repo.

JIRA: RHELCMP-9373
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-02-27 15:33:19 +01:00
Haibo Lin 8c06b7a3f1 Log versions of all installed packages
JIRA: RHELCMP-9493
Signed-off-by: Haibo Lin <hlin@redhat.com>
2023-02-06 18:24:20 +08:00
Lubomír Sedlář 64ae81b416 Use authentication for all CTS calls
The update of the compose URL relied on the environment being set up by the
initial import. This broke when a unique credentials cache started to be
used and was cleaned up after the import.

JIRA: RHELCMP-11072
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-02-02 13:52:11 +00:00
Lubomír Sedlář 826169af7c Fix black complaints
These are newly detected by black 23.1.0.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-02-02 12:53:32 +01:00
Lubomír Sedlář d97b8bdd33 Add vhd.gz extension to compressed VHD images
JIRA: RHELCMP-11027
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-01-31 11:16:58 +01:00
Lubomír Sedlář 8768b23cbe Add vhd-compressed image type
JIRA: RHELCMP-11027
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-01-30 09:27:22 +00:00
Lubomír Sedlář 51628a974d Update to work with latest mock
The `called_once` attribute now raises an exception. Switch to the
`assert_called_once` method. Also replace `assertTrue(x.called)` with
`x.assert_called()`.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2023-01-26 13:05:48 +01:00
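A minimal illustration of the migration, using only the standard unittest.mock API:

    from unittest import mock

    m = mock.Mock()
    m()

    # Old style: inspecting attributes. `called_once` was never a real Mock
    # attribute, and recent mock releases raise AttributeError for it instead
    # of silently returning a truthy Mock.
    assert m.called

    # New style: dedicated assertion methods.
    m.assert_called()       # passes if m was called at least once
    m.assert_called_once()  # passes if m was called exactly once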
Ondrej Nosek 88327d5784 Default bztar format for sdist command
Usage of the 'bztar' format is unchanged; only the way it is configured
changes, since the previous method was deprecated.

Signed-off-by: Ondrej Nosek <onosek@redhat.com>
2022-12-12 12:10:54 +01:00
Ondrej Nosek 6e0a9385f2 4.3.7 release
Signed-off-by: Ondrej Nosek <onosek@redhat.com>
2022-12-09 13:50:53 +01:00
Lubomír Sedlář 8be0d84f8a
osbuild: test passing of rich repos from configuration
Test that "rich" repositories defined as dicts in the configuration
stay as dicts in the arguments passed to the osbuild phase.

Signed-off-by: Tomáš Hozza <thozza@redhat.com>
2022-11-28 14:47:11 +01:00
Tomáš Hozza 8f0906be53
osbuild: support specifying `package_sets` for repos
The `koji-osbuild` plugin supports additional formats for the `repo`
property since v4 [1]. Specifically, a repo can be specified as a
dictionary with `baseurl` key and `package_sets` list containing
specific package set names, that the repository should be used for.

Extend the configuration schema to reflect the plugin change.
Extend the documentation to cover the new repository format.
Extend an existing unit test to specify an additional repository using the
added format.

[1] https://github.com/osbuild/koji-osbuild/pull/82

Signed-off-by: Tomáš Hozza <thozza@redhat.com>
2022-11-28 14:47:11 +01:00
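A sketch of how the two repo formats might look in a Pungi osbuild configuration (the `baseurl` and `package_sets` keys are the ones described above; the variant, image attributes, and URLs are hypothetical):

    osbuild = {
        "^Server$": [
            {
                "name": "Server-Image",
                "distro": "rhel-9",
                "image_types": ["qcow2"],
                "repo": [
                    # Plain string: repository used for all pipelines.
                    "https://example.com/repo/os/",
                    # Dict form: used only for the "build" pipeline.
                    {
                        "baseurl": "https://example.com/repo/build-tools/",
                        "package_sets": ["build"],
                    },
                ],
            }
        ],
    }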
Tomáš Hozza e3072c3d5f
osbuild: don't use `util.get_repo_urls()`
Don't use `util.get_repo_urls()` to resolve provided repositories, but
implement an osbuild-specific variant of the function named
`_get_repo_urls()`. The reason is that the function from `utils`
transforms repositories defined as dicts to strings, which is
undesired for osbuild. The requirement for osbuild is to preserve the
dict as is and only resolve the string in `baseurl` to the actual
repository URL.

Add a unit test covering the newly added function. It is inspired by a
similar test from `test_util.py`.

Signed-off-by: Tomáš Hozza <thozza@redhat.com>
2022-11-28 14:47:11 +01:00
Tomáš Hozza ef6d40dce4
osbuild: update schema and config documentation
The `koji-osbuild` Hub schema has been relaxed a bit in the latest
release (v11). Adjust the schema in Pungi to reflect changes in
`koji-osbuild`.

For more information on the changes in `koji-osbuild`, see:
https://github.com/osbuild/koji-osbuild/pull/108

Signed-off-by: Tomáš Hozza <thozza@redhat.com>
2022-11-28 14:17:42 +01:00
Lubomír Sedlář df6664098d Speed up tests by 30 seconds
The retry test for CTS doesn't actually need to wait. Let's mock the
sleep function.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2022-11-23 11:48:12 +01:00
Lubomír Sedlář 147df93f75 Stop sending compose paths to CTS
The tracking service will reject it as it's not an HTTP URL. Let's not
even try.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2022-11-23 11:48:12 +01:00
Lubomír Sedlář dd8c1002d4 Report errors from CTS
If the service returns a status code indicating a user error, report
that and do not retry.

Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2022-11-23 11:48:12 +01:00
Lubomír Sedlář 12e3a46390 createiso: Create Joliet tree with xorriso
This structure is important for isoinfo -J, which is in turn called by
virt-install.

This can be tested on a bootable ISO by modifying it with a dummy
additional file while preserving boot records:

    $ xorriso -indev netinst.iso -outdev test.iso -boot_image any replay -map setup.py setup.py -end
    ...
    $ isoinfo -J -i test.iso
    isoinfo: Unable to find Joliet SVD
    $ rm test.iso
    $ xorriso -indev netinst.iso -outdev test.iso -joliet on -boot_image any replay -map setup.py setup.py -end
    ...
    $ isoinfo -J -i test.iso
    $

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2144105
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
2022-11-22 12:58:46 +01:00
36 changed files with 589 additions and 175 deletions

View File

@@ -2,6 +2,7 @@ include AUTHORS
 include COPYING
 include GPL
 include pungi.spec
+include setup.cfg
 include tox.ini
 include share/*
 include share/multilib/*

View File

@@ -53,7 +53,7 @@ copyright = u'2016, Red Hat, Inc.'
 # The short X.Y version.
 version = '4.3'
 # The full version, including alpha/beta/rc tags.
-release = '4.3.6'
+release = '4.3.8'
 # The language for content autogenerated by Sphinx. Refer to documentation
 # for a list of supported languages.

View File

@@ -581,6 +581,16 @@ Options
     with everything. Set this option to ``False`` to ignore ``noarch`` in
     ``ExclusiveArch`` and always consider only binary architectures.
 
+**pkgset_inherit_exclusive_arch_to_noarch** = True
+    (*bool*) -- When set to ``True``, the value of ``ExclusiveArch`` or
+    ``ExcludeArch`` will be copied from source rpm to all its noarch packages.
+    That will then limit which architectures the noarch packages can be
+    included in.
+
+    By setting this option to ``False`` this step is skipped, and noarch
+    packages will by default land in all architectures. They can still be
+    excluded by listing them in a relevant section of ``filter_packages``.
+
 **pkgset_allow_reuse** = True
     (*bool*) -- When set to ``True``, *Pungi* will try to reuse pkgset data
     from the old composes specified by ``--old-composes``. When enabled, this

@@ -1607,8 +1617,23 @@ OSBuild Composer for building images
     * ``release`` -- release part of the final NVR. If neither this option nor
       the global ``osbuild_release`` is set, Koji will automatically generate a
       value.
-    * ``repo`` -- a list of repository URLs from which to consume packages for
+    * ``repo`` -- a list of repositories from which to consume packages for
       building the image. By default only the variant repository is used.
+      The list items may use one of the following formats:
+
+      * String with just the repository URL.
+
+      * Dictionary with the following keys:
+
+        * ``baseurl`` -- URL of the repository.
+        * ``package_sets`` -- a list of package set names to use for this
+          repository. Package sets are an internal concept of Image Builder
+          and are used in image definitions. If specified, the repository is
+          used by Image Builder only for the pipeline with the same name.
+          For example, specifying the ``build`` package set name will make
+          the repository be used only for the build environment in which
+          the image will be built. (optional)
+
     * ``arches`` -- list of architectures for which to build the image. By
       default, the variant arches are used. This option can only restrict it,
       not add a new one.

@@ -1641,13 +1666,13 @@ OSBuild Composer for building images
         * ``tenant_id`` -- Azure tenant ID to upload the image to
         * ``subscription_id`` -- Azure subscription ID to upload the image to
         * ``resource_group`` -- Azure resource group to upload the image to
-        * ``location`` -- Azure location to upload the image to
+        * ``location`` -- Azure location of the resource group (optional)
        * ``image_name`` -- Image name of the uploaded Azure image (optional)
    * **GCP upload options** -- upload to Google Cloud Platform.
        * ``region`` -- GCP region to upload the image to
-        * ``bucket`` -- GCP bucket to upload the image to
+        * ``bucket`` -- GCP bucket to upload the image to (optional)
        * ``share_with_accounts`` -- list of GCP accounts to share the image
          with
        * ``image_name`` -- Image name of the uploaded GCP image (optional)

View File

@@ -1,5 +1,5 @@
 Name: pungi
-Version: 4.3.6
+Version: 4.3.8
 Release: 1%{?dist}
 Summary: Distribution compose tool

@@ -111,6 +111,43 @@ pytest
 cd tests && ./test_compose.sh
 
 %changelog
+* Tue Mar 28 2023 Haibo Lin <hlin@redhat.com> - 4.3.8-1
+- createiso: Update possibly changed file on DVD (lsedlar)
+- pkgset: Stop reuse if configuration changed (lsedlar)
+- Allow disabling inheriting ExcludeArch to noarch packages (lsedlar)
+- pkgset: Support extra builds with no tags (lsedlar)
+- buildinstall: Avoid pointlessly tweaking the boot images (lsedlar)
+- Prevent to reuse if unsigned packages are allowed (hlin)
+- Pass parent id/respin id to CTS (lsedlar)
+- Exclude existing files in boot.iso (hlin)
+- image-build/osbuild: Pull ISOs into the compose (lsedlar)
+- Retry 401 error from CTS (lsedlar)
+- gather: Better detection of debuginfo in lookaside (lsedlar)
+- Log versions of all installed packages (hlin)
+- Use authentication for all CTS calls (lsedlar)
+- Fix black complaints (lsedlar)
+- Add vhd.gz extension to compressed VHD images (lsedlar)
+- Add vhd-compressed image type (lsedlar)
+- Update to work with latest mock (lsedlar)
+- Default bztar format for sdist command (onosek)
+
+* Fri Dec 09 2022 Ondřej Nosek <onosek@redhat.com>
+- osbuild: test passing of rich repos from configuration (lsedlar)
+- osbuild: support specifying `package_sets` for repos (thozza)
+- osbuild: don't use `util.get_repo_urls()` (thozza)
+- osbuild: update schema and config documentation (thozza)
+- Speed up tests by 30 seconds (lsedlar)
+- Stop sending compose paths to CTS (lsedlar)
+- Report errors from CTS (lsedlar)
+- createiso: Create Joliet tree with xorriso (lsedlar)
+- init: Filter comps for modular variants with tags (lsedlar)
+- Retry failed cts requests (hlin)
+- Ignore existing kerberos ticket for CTS auth (lsedlar)
+- osbuild: support specifying upload_options (thozza)
+- osbuild: accept only a single image type in the configuration (thozza)
+- Add Jenkinsfile for CI (hlin)
+- profiler: Flush stdout before printing (lsedlar)
+
 * Fri Aug 26 2022 Lubomír Sedlář <lsedlar@redhat.com> - 4.3.6-1
 - pkgset: Report better error when module is missing an arch (lsedlar)
 - osbuild: add support for building ostree artifacts (ondrej)

View File

@@ -830,6 +830,10 @@ def make_schema():
                 "type": "boolean",
                 "default": True,
             },
+            "pkgset_inherit_exclusive_arch_to_noarch": {
+                "type": "boolean",
+                "default": True,
+            },
             "pkgset_scratch_modules": {
                 "type": "object",
                 "patternProperties": {

@@ -1188,14 +1192,36 @@ def make_schema():
                     },
                     "arches": {"$ref": "#/definitions/list_of_strings"},
                     "release": {"type": "string"},
-                    "repo": {"$ref": "#/definitions/list_of_strings"},
+                    "repo": {
+                        "type": "array",
+                        "items": {
+                            "oneOf": [
+                                {
+                                    "type": "object",
+                                    "additionalProperties": False,
+                                    "required": ["baseurl"],
+                                    "properties": {
+                                        "baseurl": {"type": "string"},
+                                        "package_sets": {
+                                            "type": "array",
+                                            "items": {"type": "string"},
+                                        },
+                                    },
+                                },
+                                {"type": "string"},
+                            ]
+                        },
+                    },
                     "failable": {"$ref": "#/definitions/list_of_strings"},
                     "subvariant": {"type": "string"},
                     "ostree_url": {"type": "string"},
                     "ostree_ref": {"type": "string"},
                     "ostree_parent": {"type": "string"},
                     "upload_options": {
-                        "oneOf": [
+                        # this should be really 'oneOf', but the minimal
+                        # required properties in AWSEC2 and GCP options
+                        # overlap.
+                        "anyOf": [
                             # AWSEC2UploadOptions
                             {
                                 "type": "object",

@@ -1234,7 +1260,6 @@ def make_schema():
                                     "tenant_id",
                                     "subscription_id",
                                     "resource_group",
-                                    "location",
                                 ],
                                 "properties": {
                                     "tenant_id": {"type": "string"},

@@ -1250,7 +1275,7 @@ def make_schema():
                             {
                                 "type": "object",
                                 "additionalProperties": False,
-                                "required": ["region", "bucket"],
+                                "required": ["region"],
                                 "properties": {
                                     "region": {"type": "string"},
                                     "bucket": {"type": "string"},

View File

@@ -17,6 +17,7 @@
 
 __all__ = ("Compose",)
 
+import contextlib
 import errno
 import logging
 import os

@@ -57,14 +58,58 @@ except ImportError:
 
 SUPPORTED_MILESTONES = ["RC", "Update", "SecurityFix"]
 
+
+def is_status_fatal(status_code):
+    """Check if status code returned from CTS reports an error that is unlikely
+    to be fixed by retrying. Generally client errors (4XX) are fatal, with the
+    exception of 401 Unauthorized which could be caused by transient network
+    issue between compose host and KDC.
+    """
+    if status_code == 401:
+        return False
+    return status_code >= 400 and status_code < 500
+
+
 @retry(wait_on=RequestException)
 def retry_request(method, url, data=None, auth=None):
     request_method = getattr(requests, method)
     rv = request_method(url, json=data, auth=auth)
+    if is_status_fatal(rv.status_code):
+        try:
+            error = rv.json()["message"]
+        except ValueError:
+            error = rv.text
+        raise RuntimeError("CTS responded with %d: %s" % (rv.status_code, error))
     rv.raise_for_status()
     return rv
+
+
+@contextlib.contextmanager
+def cts_auth(cts_keytab):
+    auth = None
+    if cts_keytab:
+        # requests-kerberos cannot accept custom keytab, we need to use
+        # environment variable for this. But we need to change environment
+        # only temporarily just for this single requests.post.
+        # So at first backup the current environment and revert to it
+        # after the requests call.
+        from requests_kerberos import HTTPKerberosAuth
+
+        auth = HTTPKerberosAuth()
+        environ_copy = dict(os.environ)
+        if "$HOSTNAME" in cts_keytab:
+            cts_keytab = cts_keytab.replace("$HOSTNAME", socket.gethostname())
+        os.environ["KRB5_CLIENT_KTNAME"] = cts_keytab
+        os.environ["KRB5CCNAME"] = "DIR:%s" % tempfile.mkdtemp()
+
+    try:
+        yield auth
+    finally:
+        if cts_keytab:
+            shutil.rmtree(os.environ["KRB5CCNAME"].split(":", 1)[1])
+            os.environ.clear()
+            os.environ.update(environ_copy)
+
 
 def get_compose_info(
     conf,
     compose_type="production",

@@ -94,38 +139,19 @@ def get_compose_info(
     ci.compose.type = compose_type
     ci.compose.date = compose_date or time.strftime("%Y%m%d", time.localtime())
     ci.compose.respin = compose_respin or 0
+    ci.compose.id = ci.create_compose_id()
 
-    cts_url = conf.get("cts_url", None)
+    cts_url = conf.get("cts_url")
     if cts_url:
-        # Requests-kerberos cannot accept custom keytab, we need to use
-        # environment variable for this. But we need to change environment
-        # only temporarily just for this single requests.post.
-        # So at first backup the current environment and revert to it
-        # after the requests.post call.
-        cts_keytab = conf.get("cts_keytab", None)
-        authentication = get_authentication(conf)
-        if cts_keytab:
-            environ_copy = dict(os.environ)
-            if "$HOSTNAME" in cts_keytab:
-                cts_keytab = cts_keytab.replace("$HOSTNAME", socket.gethostname())
-            os.environ["KRB5_CLIENT_KTNAME"] = cts_keytab
-            os.environ["KRB5CCNAME"] = "DIR:%s" % tempfile.mkdtemp()
-
-        try:
-            # Create compose in CTS and get the reserved compose ID.
-            ci.compose.id = ci.create_compose_id()
-            url = os.path.join(cts_url, "api/1/composes/")
-            data = {
-                "compose_info": json.loads(ci.dumps()),
-                "parent_compose_ids": parent_compose_ids,
-                "respin_of": respin_of,
-            }
+        # Create compose in CTS and get the reserved compose ID.
+        url = os.path.join(cts_url, "api/1/composes/")
+        data = {
+            "compose_info": json.loads(ci.dumps()),
+            "parent_compose_ids": parent_compose_ids,
+            "respin_of": respin_of,
+        }
+        with cts_auth(conf.get("cts_keytab")) as authentication:
             rv = retry_request("post", url, data=data, auth=authentication)
-        finally:
-            if cts_keytab:
-                shutil.rmtree(os.environ["KRB5CCNAME"].split(":", 1)[1])
-                os.environ.clear()
-                os.environ.update(environ_copy)
 
         # Update local ComposeInfo with received ComposeInfo.
         cts_ci = ComposeInfo()

@@ -133,22 +159,9 @@ def get_compose_info(
         ci.compose.respin = cts_ci.compose.respin
         ci.compose.id = cts_ci.compose.id
 
-    else:
-        ci.compose.id = ci.create_compose_id()
-
     return ci
 
 
-def get_authentication(conf):
-    authentication = None
-    cts_keytab = conf.get("cts_keytab", None)
-    if cts_keytab:
-        from requests_kerberos import HTTPKerberosAuth
-
-        authentication = HTTPKerberosAuth()
-    return authentication
-
-
 def write_compose_info(compose_dir, ci):
     """
     Write ComposeInfo `ci` to `compose_dir` subdirectories.

@@ -162,17 +175,20 @@ def write_compose_info(compose_dir, ci):
 
 def update_compose_url(compose_id, compose_dir, conf):
-    authentication = get_authentication(conf)
     cts_url = conf.get("cts_url", None)
     if cts_url:
         url = os.path.join(cts_url, "api/1/composes", compose_id)
         tp = conf.get("translate_paths", None)
         compose_url = translate_path_raw(tp, compose_dir)
+        if compose_url == compose_dir:
+            # We do not have a URL, do not attempt the update.
+            return
         data = {
             "action": "set_url",
            "compose_url": compose_url,
        }
-        return retry_request("patch", url, data=data, auth=authentication)
+        with cts_auth(conf.get("cts_keytab")) as authentication:
+            return retry_request("patch", url, data=data, auth=authentication)

@@ -183,11 +199,19 @@ def get_compose_dir(
     compose_respin=None,
     compose_label=None,
     already_exists_callbacks=None,
+    parent_compose_ids=None,
+    respin_of=None,
 ):
     already_exists_callbacks = already_exists_callbacks or []
 
-    ci = get_compose_info(
-        conf, compose_type, compose_date, compose_respin, compose_label
-    )
+    ci = get_compose_info(
+        conf,
+        compose_type,
+        compose_date,
+        compose_respin,
+        compose_label,
+        parent_compose_ids,
+        respin_of,
+    )
 
     cts_url = conf.get("cts_url", None)

View File

@@ -5,11 +5,14 @@ from __future__ import print_function
 
 import os
 import six
 from collections import namedtuple
+
+from kobo.shortcuts import run
 from six.moves import shlex_quote
 
 from .wrappers import iso
 from .wrappers.jigdo import JigdoWrapper
+from .phases.buildinstall import BOOT_CONFIGS, BOOT_IMAGES
 
 CreateIsoOpts = namedtuple(
     "CreateIsoOpts",

@@ -119,17 +122,46 @@ def make_jigdo(f, opts):
 
 def write_xorriso_commands(opts):
+    # Create manifest for the boot.iso listing all contents
+    boot_iso_manifest = "%s.manifest" % os.path.join(
+        opts.script_dir, os.path.basename(opts.boot_iso)
+    )
+    run(
+        iso.get_manifest_cmd(
+            opts.boot_iso, opts.use_xorrisofs, output_file=boot_iso_manifest
+        )
+    )
+    # Find which files may have been updated by pungi. This only includes a few
+    # files from tweaking buildinstall and .discinfo metadata. There's no good
+    # way to detect whether the boot config files actually changed, so we may
+    # be updating files in the ISO with the same data.
+    UPDATEABLE_FILES = set(BOOT_IMAGES + BOOT_CONFIGS + [".discinfo"])
+    updated_files = set()
+    excluded_files = set()
+    with open(boot_iso_manifest) as f:
+        for line in f:
+            path = line.lstrip("/").rstrip("\n")
+            if path in UPDATEABLE_FILES:
+                updated_files.add(path)
+            else:
+                excluded_files.add(path)
+
     script = os.path.join(opts.script_dir, "xorriso-%s.txt" % id(opts))
     with open(script, "w") as f:
         emit(f, "-indev %s" % opts.boot_iso)
         emit(f, "-outdev %s" % os.path.join(opts.output_dir, opts.iso_name))
         emit(f, "-boot_image any replay")
         emit(f, "-volid %s" % opts.volid)
+        # isoinfo -J uses the Joliet tree, and it's used by virt-install
+        emit(f, "-joliet on")
 
         with open(opts.graft_points) as gp:
             for line in gp:
                 iso_path, fs_path = line.strip().split("=", 1)
-                emit(f, "-map %s %s" % (fs_path, iso_path))
+                if iso_path in excluded_files:
+                    continue
+                cmd = "-update" if iso_path in updated_files else "-map"
+                emit(f, "%s %s %s" % (cmd, fs_path, iso_path))
 
     if opts.arch == "ppc64le":
         # This is needed for the image to be bootable.

View File

@@ -1118,7 +1118,6 @@ class Pungi(PungiBase):
         self.logger.info("Finished gathering package objects.")
 
     def gather(self):
-
         # get package objects according to the input list
         self.getPackageObjects()
         if self.is_sources:

View File

@@ -616,7 +616,6 @@ class Gather(GatherBase):
             return added
 
         for pkg in self.result_debug_packages.copy():
             if pkg not in self.finished_add_debug_package_deps:
-
                 deps = self._get_package_deps(pkg, debuginfo=True)
                 for i, req in deps:

@@ -784,7 +783,6 @@ class Gather(GatherBase):
                 continue
 
             debug_pkgs = []
-            pkg_in_lookaside = pkg.repoid in self.opts.lookaside_repos
             for i in candidates:
                 if pkg.arch != i.arch:
                     continue

@@ -792,7 +790,7 @@ class Gather(GatherBase):
                     # If it's not debugsource package or does not match name of
                     # the package, we don't want it in.
                     continue
-                if i.repoid in self.opts.lookaside_repos or pkg_in_lookaside:
+                if self.is_from_lookaside(i):
                     self._set_flag(i, PkgFlag.lookaside)
                 if i not in self.result_debug_packages:
                     added.add(i)

View File

@@ -297,7 +297,7 @@ class BuildinstallPhase(PhaseBase):
                 "Unsupported buildinstall method: %s" % self.buildinstall_method
             )
 
-        for (variant, cmd) in commands:
+        for variant, cmd in commands:
             self.pool.add(BuildinstallThread(self.pool))
             self.pool.queue_put(
                 (self.compose, arch, variant, cmd, self.pkgset_phase)

@@ -364,9 +364,17 @@ BOOT_CONFIGS = [
     "EFI/BOOT/BOOTX64.conf",
     "EFI/BOOT/grub.cfg",
 ]
 
+BOOT_IMAGES = [
+    "images/efiboot.img",
+]
+
 
 def tweak_configs(path, volid, ks_file, configs=BOOT_CONFIGS, logger=None):
+    """
+    Put escaped volume ID and possibly kickstart file into the boot
+    configuration files.
+
+    :returns: list of paths to modified config files
+    """
     volid_escaped = volid.replace(" ", r"\x20").replace("\\", "\\\\")
     volid_escaped_2 = volid_escaped.replace("\\", "\\\\")
     found_configs = []

@@ -374,7 +382,6 @@ def tweak_configs(path, volid, ks_file, configs=BOOT_CONFIGS, logger=None):
         config_path = os.path.join(path, config)
         if not os.path.exists(config_path):
             continue
-        found_configs.append(config)
 
         with open(config_path, "r") as f:
             data = original_data = f.read()

@@ -394,8 +401,13 @@ def tweak_configs(path, volid, ks_file, configs=BOOT_CONFIGS, logger=None):
         with open(config_path, "w") as f:
             f.write(data)
 
-        if logger and data != original_data:
-            logger.info("Boot config %s changed" % config_path)
+        if data != original_data:
+            found_configs.append(config)
+            if logger:
+                # Generally lorax should create file with correct volume id
+                # already. If we don't have a kickstart, this function should
+                # be a no-op.
+                logger.info("Boot config %s changed" % config_path)
 
     return found_configs

@@ -434,31 +446,32 @@ def tweak_buildinstall(
     if kickstart_file and found_configs:
         shutil.copy2(kickstart_file, os.path.join(dst, "ks.cfg"))
 
-    images = [
-        os.path.join(tmp_dir, "images", "efiboot.img"),
-    ]
-    for image in images:
-        if not os.path.isfile(image):
-            continue
-
-        with iso.mount(
-            image,
-            logger=compose._logger,
-            use_guestmount=compose.conf.get("buildinstall_use_guestmount"),
-        ) as mount_tmp_dir:
-            for config in BOOT_CONFIGS:
-                config_path = os.path.join(tmp_dir, config)
-                config_in_image = os.path.join(mount_tmp_dir, config)
-
-                if os.path.isfile(config_in_image):
-                    cmd = [
-                        "cp",
-                        "-v",
-                        "--remove-destination",
-                        config_path,
-                        config_in_image,
-                    ]
-                    run(cmd)
+    images = [os.path.join(tmp_dir, img) for img in BOOT_IMAGES]
+    if found_configs:
+        for image in images:
+            if not os.path.isfile(image):
+                continue
+
+            with iso.mount(
+                image,
+                logger=compose._logger,
+                use_guestmount=compose.conf.get("buildinstall_use_guestmount"),
+            ) as mount_tmp_dir:
+                for config in found_configs:
+                    # Put each modified config file into the image (overwriting
+                    # the original).
+                    config_path = os.path.join(tmp_dir, config)
+                    config_in_image = os.path.join(mount_tmp_dir, config)
+
+                    if os.path.isfile(config_in_image):
+                        cmd = [
+                            "cp",
+                            "-v",
+                            "--remove-destination",
+                            config_path,
+                            config_in_image,
+                        ]
+                        run(cmd)
 
     # HACK: make buildinstall files world readable
     run("chmod -R a+rX %s" % shlex_quote(tmp_dir))

View File

@@ -369,7 +369,7 @@ class CreateisoPhase(PhaseLoggerMixin, PhaseBase):
         if self.compose.notifier:
             self.compose.notifier.send("createiso-targets", deliverables=deliverables)
 
-        for (cmd, variant, arch) in commands:
+        for cmd, variant, arch in commands:
             self.pool.add(CreateIsoThread(self.pool))
             self.pool.queue_put((self.compose, cmd, variant, arch))

View File

@@ -76,7 +76,7 @@ class ExtraIsosPhase(PhaseLoggerMixin, ConfigGuardedPhase, PhaseBase):
         for arch in sorted(arches):
             commands.append((config, variant, arch))
 
-        for (config, variant, arch) in commands:
+        for config, variant, arch in commands:
             self.pool.add(ExtraIsosThread(self.pool, self.bi))
             self.pool.queue_put((self.compose, config, variant, arch))

View File

@@ -90,7 +90,7 @@ class GatherPhase(PhaseBase):
         # check whether variants from configuration value
         # 'variant_as_lookaside' are correct
-        for (requiring, required) in variant_as_lookaside:
+        for requiring, required in variant_as_lookaside:
             if requiring in all_variants and required not in all_variants:
                 errors.append(
                     "variant_as_lookaside: variant %r doesn't exist but is "

@@ -99,7 +99,7 @@ class GatherPhase(PhaseBase):
         # check whether variants from configuration value
         # 'variant_as_lookaside' have same architectures
-        for (requiring, required) in variant_as_lookaside:
+        for requiring, required in variant_as_lookaside:
             if (
                 requiring in all_variants
                 and required in all_variants

@@ -235,7 +235,7 @@ def reuse_old_gather_packages(compose, arch, variant, package_sets, methods):
     if not hasattr(compose, "_gather_reused_variant_arch"):
         setattr(compose, "_gather_reused_variant_arch", [])
     variant_as_lookaside = compose.conf.get("variant_as_lookaside", [])
-    for (requiring, required) in variant_as_lookaside:
+    for requiring, required in variant_as_lookaside:
         if (
             requiring == variant.uid
             and (required, arch) not in compose._gather_reused_variant_arch

@@ -468,9 +468,7 @@ def gather_packages(compose, arch, variant, package_sets, fulltree_excludes=None
             )
         else:
             for source_name in ("module", "comps", "json"):
-
-
                 packages, groups, filter_packages = get_variant_packages(
                     compose, arch, variant, source_name, package_sets
                 )

@@ -575,7 +573,6 @@ def trim_packages(compose, arch, variant, pkg_map, parent_pkgs=None, remove_pkgs
     move_to_parent_pkgs = _mk_pkg_map()
     removed_pkgs = _mk_pkg_map()
     for pkg_type, pkgs in pkg_map.items():
-
         new_pkgs = []
         for pkg in pkgs:
             pkg_path = pkg["path"]

View File

@@ -25,6 +25,7 @@ from productmd.rpms import Rpms
 # results will be pulled into the compose.
 EXTENSIONS = {
     "docker": ["tar.gz", "tar.xz"],
+    "iso": ["iso"],
     "liveimg-squashfs": ["liveimg.squashfs"],
     "qcow": ["qcow"],
     "qcow2": ["qcow2"],

@@ -39,6 +40,7 @@ EXTENSIONS = {
     "vdi": ["vdi"],
     "vmdk": ["vmdk"],
     "vpc": ["vhd"],
+    "vhd-compressed": ["vhd.gz", "vhd.xz"],
     "vsphere-ova": ["vsphere.ova"],
 }

View File

@@ -117,7 +117,7 @@ class LiveImagesPhase(
             commands.append((cmd, variant, arch))
 
-        for (cmd, variant, arch) in commands:
+        for cmd, variant, arch in commands:
             self.pool.add(CreateLiveImageThread(self.pool))
             self.pool.queue_put((self.compose, cmd, variant, arch))

View File

@@ -27,6 +27,35 @@ class OSBuildPhase(
         arches = set(image_conf["arches"]) & arches
 
         return sorted(arches)
 
+    @staticmethod
+    def _get_repo_urls(compose, repos, arch="$basearch"):
+        """
+        Get list of repos with resolved repo URLs. Preserve repos defined
+        as dicts.
+        """
+        resolved_repos = []
+
+        for repo in repos:
+            if isinstance(repo, dict):
+                try:
+                    url = repo["baseurl"]
+                except KeyError:
+                    raise RuntimeError(
+                        "`baseurl` is required in repo dict %s" % str(repo)
+                    )
+                url = util.get_repo_url(compose, url, arch=arch)
+                if url is None:
+                    raise RuntimeError("Failed to resolve repo URL for %s" % str(repo))
+                repo["baseurl"] = url
+                resolved_repos.append(repo)
+            else:
+                repo = util.get_repo_url(compose, repo, arch=arch)
+                if repo is None:
+                    raise RuntimeError("Failed to resolve repo URL for %s" % repo)
+                resolved_repos.append(repo)
+
+        return resolved_repos
+
     def _get_repo(self, image_conf, variant):
         """
         Get a list of repos. First included are those explicitly listed in

@@ -38,7 +67,7 @@ class OSBuildPhase(
         if not variant.is_empty and variant.uid not in repos:
             repos.append(variant.uid)
 
-        return util.get_repo_urls(self.compose, repos, arch="$arch")
+        return OSBuildPhase._get_repo_urls(self.compose, repos, arch="$arch")
 
     def run(self):
         for variant in self.compose.get_variants():

@@ -183,10 +212,18 @@ class RunOSBuildThread(WorkerThread):
         # image_dir is absolute path to which the image should be copied.
         # We also need the same path as relative to compose directory for
         # including in the metadata.
-        image_dir = compose.paths.compose.image_dir(variant) % {"arch": arch}
-        rel_image_dir = compose.paths.compose.image_dir(variant, relative=True) % {
-            "arch": arch
-        }
+        if archive["type_name"] == "iso":
+            # If the produced image is actually an ISO, it should go to
+            # iso/ subdirectory.
+            image_dir = compose.paths.compose.iso_dir(arch, variant)
+            rel_image_dir = compose.paths.compose.iso_dir(
+                arch, variant, relative=True
+            )
+        else:
+            image_dir = compose.paths.compose.image_dir(variant) % {"arch": arch}
+            rel_image_dir = compose.paths.compose.image_dir(
+                variant, relative=True
+            ) % {"arch": arch}
         util.makedirs(image_dir)
 
         image_dest = os.path.join(image_dir, archive["filename"])

@@ -209,7 +246,7 @@ class RunOSBuildThread(WorkerThread):
             # Update image manifest
             img = Image(compose.im)
 
-            img.type = archive["type_name"]
+            img.type = archive["type_name"] if archive["type_name"] != "iso" else "dvd"
             img.format = suffix
             img.path = os.path.join(rel_image_dir, archive["filename"])
             img.mtime = util.get_mtime(image_dest)

View File

@@ -38,12 +38,17 @@ from pungi.phases.createrepo import add_modular_metadata
 
 def populate_arch_pkgsets(compose, path_prefix, global_pkgset):
     result = {}
-    exclusive_noarch = compose.conf["pkgset_exclusive_arch_considers_noarch"]
     for arch in compose.get_arches():
         compose.log_info("Populating package set for arch: %s", arch)
         is_multilib = is_arch_multilib(compose.conf, arch)
         arches = get_valid_arches(arch, is_multilib, add_src=True)
-        pkgset = global_pkgset.subset(arch, arches, exclusive_noarch=exclusive_noarch)
+        pkgset = global_pkgset.subset(
+            arch,
+            arches,
+            exclusive_noarch=compose.conf["pkgset_exclusive_arch_considers_noarch"],
+            inherit_to_noarch=compose.conf["pkgset_inherit_exclusive_arch_to_noarch"],
+        )
         pkgset.save_file_list(
             compose.paths.work.package_list(arch=arch, pkgset=global_pkgset),
             remove_path_prefix=path_prefix,

View File

@@ -203,16 +203,31 @@ class PackageSetBase(kobo.log.LoggingBase):
         return self.rpms_by_arch
 
-    def subset(self, primary_arch, arch_list, exclusive_noarch=True):
+    def subset(
+        self, primary_arch, arch_list, exclusive_noarch=True, inherit_to_noarch=True
+    ):
         """Create a subset of this package set that only includes
         packages compatible with"""
         pkgset = PackageSetBase(
             self.name, self.sigkey_ordering, logger=self._logger, arches=arch_list
         )
-        pkgset.merge(self, primary_arch, arch_list, exclusive_noarch=exclusive_noarch)
+        pkgset.merge(
+            self,
+            primary_arch,
+            arch_list,
+            exclusive_noarch=exclusive_noarch,
+            inherit_to_noarch=inherit_to_noarch,
+        )
         return pkgset
 
-    def merge(self, other, primary_arch, arch_list, exclusive_noarch=True):
+    def merge(
+        self,
+        other,
+        primary_arch,
+        arch_list,
+        exclusive_noarch=True,
+        inherit_to_noarch=True,
+    ):
         """
         Merge ``other`` package set into this instance.
         """

@@ -251,7 +266,7 @@ class PackageSetBase(kobo.log.LoggingBase):
             if i.file_path in self.file_cache:
                 # TODO: test if it really works
                 continue
-            if exclusivearch_list and arch == "noarch":
+            if inherit_to_noarch and exclusivearch_list and arch == "noarch":
                 if is_excluded(i, exclusivearch_list, logger=self._logger):
                     continue

@@ -318,6 +333,11 @@ class FilelistPackageSet(PackageSetBase):
         return result
 
 
+# This is a marker to indicate package set with only extra builds/tasks and no
+# tags.
+MISSING_KOJI_TAG = object()
+
+
 class KojiPackageSet(PackageSetBase):
     def __init__(
         self,

@@ -371,7 +391,7 @@ class KojiPackageSet(PackageSetBase):
         :param int signed_packages_wait: How long to wait between search attemts.
         """
         super(KojiPackageSet, self).__init__(
-            name,
+            name if name != MISSING_KOJI_TAG else "no-tag",
             sigkey_ordering=sigkey_ordering,
             arches=arches,
             logger=logger,

@@ -576,7 +596,9 @@ class KojiPackageSet(PackageSetBase):
             inherit,
         )
         self.log_info("[BEGIN] %s" % msg)
-        rpms, builds = self.get_latest_rpms(tag, event, inherit=inherit)
+        rpms, builds = [], []
+        if tag != MISSING_KOJI_TAG:
+            rpms, builds = self.get_latest_rpms(tag, event, inherit=inherit)
         extra_rpms, extra_builds = self.get_extra_rpms()
         rpms += extra_rpms
         builds += extra_builds

@@ -681,6 +703,15 @@ class KojiPackageSet(PackageSetBase):
         :param include_packages: an iterable of tuples (package name, arch) that should
             be included.
         """
+        if len(self.sigkey_ordering) > 1 and (
+            None in self.sigkey_ordering or "" in self.sigkey_ordering
+        ):
+            self.log_warning(
+                "Stop writing reuse file as unsigned packages are allowed "
+                "in the compose."
+            )
+            return
+
         reuse_file = compose.paths.work.pkgset_reuse_file(self.name)
         self.log_info("Writing pkgset reuse file: %s" % reuse_file)
         try:

@@ -697,6 +728,12 @@ class KojiPackageSet(PackageSetBase):
                     "srpms_by_name": self.srpms_by_name,
                     "extra_builds": self.extra_builds,
                     "include_packages": include_packages,
+                    "inherit_to_noarch": compose.conf[
+                        "pkgset_inherit_exclusive_arch_to_noarch"
+                    ],
+                    "exclusive_noarch": compose.conf[
+                        "pkgset_exclusive_arch_considers_noarch"
+                    ],
                 },
                 f,
                 protocol=pickle.HIGHEST_PROTOCOL,

@@ -791,6 +828,8 @@ class KojiPackageSet(PackageSetBase):
             self.log_debug("Failed to load reuse file: %s" % str(e))
             return False
 
+        inherit_to_noarch = compose.conf["pkgset_inherit_exclusive_arch_to_noarch"]
+        exclusive_noarch = compose.conf["pkgset_exclusive_arch_considers_noarch"]
         if (
             reuse_data["allow_invalid_sigkeys"] == self._allow_invalid_sigkeys
             and reuse_data["packages"] == self.packages

@@ -798,6 +837,10 @@ class KojiPackageSet(PackageSetBase):
             and reuse_data["extra_builds"] == self.extra_builds
             and reuse_data["sigkeys"] == self.sigkey_ordering
             and reuse_data["include_packages"] == include_packages
+            # If the value is not present in reuse data, the compose was
+            # generated with older version of Pungi. Best to not reuse.
+            and reuse_data.get("inherit_to_noarch") == inherit_to_noarch
+            and reuse_data.get("exclusive_noarch") == exclusive_noarch
         ):
             self.log_info("Copying repo data for reuse: %s" % old_repo_dir)
             copy_all(old_repo_dir, repo_dir)

View File

@@ -791,17 +791,23 @@ def populate_global_pkgset(compose, koji_wrapper, path_prefix, event):
 
     pkgsets = []
 
+    extra_builds = force_list(compose.conf.get("pkgset_koji_builds", []))
+    extra_tasks = force_list(compose.conf.get("pkgset_koji_scratch_tasks", []))
+
+    if not pkgset_koji_tags and (extra_builds or extra_tasks):
+        # We have extra packages to pull in, but no tag to merge them with.
+        compose_tags.append(pungi.phases.pkgset.pkgsets.MISSING_KOJI_TAG)
+        pkgset_koji_tags.append(pungi.phases.pkgset.pkgsets.MISSING_KOJI_TAG)
+
     # Get package set for each compose tag and merge it to global package
     # list. Also prepare per-variant pkgset, because we do not have list
     # of binary RPMs in module definition - there is just list of SRPMs.
     for compose_tag in compose_tags:
         compose.log_info("Loading package set for tag %s", compose_tag)
+        kwargs = {}
         if compose_tag in pkgset_koji_tags:
-            extra_builds = force_list(compose.conf.get("pkgset_koji_builds", []))
-            extra_tasks = force_list(compose.conf.get("pkgset_koji_scratch_tasks", []))
-        else:
-            extra_builds = []
-            extra_tasks = []
+            kwargs["extra_builds"] = extra_builds
+            kwargs["extra_tasks"] = extra_tasks
 
         pkgset = pungi.phases.pkgset.pkgsets.KojiPackageSet(
             compose_tag,

@@ -813,10 +819,9 @@ def populate_global_pkgset(compose, koji_wrapper, path_prefix, event):
             allow_invalid_sigkeys=allow_invalid_sigkeys,
             populate_only_packages=populate_only_packages_to_gather,
             cache_region=compose.cache_region,
-            extra_builds=extra_builds,
-            extra_tasks=extra_tasks,
             signed_packages_retries=compose.conf["signed_packages_retries"],
             signed_packages_wait=compose.conf["signed_packages_wait"],
+            **kwargs
         )
 
         # Check if we have cache for this tag from previous compose. If so, use

@@ -880,7 +885,6 @@ def populate_global_pkgset(compose, koji_wrapper, path_prefix, event):
         )
 
         for variant in compose.all_variants.values():
             if compose_tag in variant_tags[variant]:
-
                 # If it's a modular tag, store the package set for the module.
                 for nsvc, koji_tag in variant.module_uid_to_koji_tag.items():
                     if compose_tag == koji_tag:

View File

@@ -319,7 +319,6 @@ def get_arguments(config):
 
 def main():
-
     config = pungi.config.Config()
     opts = get_arguments(config)

View File

@@ -300,7 +300,12 @@ def main():
     if opts.target_dir:
         compose_dir = Compose.get_compose_dir(
-            opts.target_dir, conf, compose_type=compose_type, compose_label=opts.label
+            opts.target_dir,
+            conf,
+            compose_type=compose_type,
+            compose_label=opts.label,
+            parent_compose_ids=opts.parent_compose_id,
+            respin_of=opts.respin_of,
         )
     else:
         compose_dir = opts.compose_dir

@@ -380,6 +385,14 @@ def run_compose(
     compose.log_info("Current timezone offset: %s" % pungi.util.get_tz_offset())
     compose.log_info("COMPOSE_ID=%s" % compose.compose_id)
 
+    installed_pkgs_log = compose.paths.log.log_file("global", "installed-pkgs")
+    compose.log_info("Logging installed packages to %s" % installed_pkgs_log)
+    try:
+        with open(installed_pkgs_log, "w") as f:
+            subprocess.Popen(["rpm", "-qa"], stdout=f)
+    except Exception as e:
+        compose.log_warning("Failed to log installed packages: %s" % str(e))
+
     compose.read_variants()
 
     # dump the config file

View File

@@ -260,20 +260,23 @@ def get_isohybrid_cmd(iso_path, arch):
     return cmd
 
 
-def get_manifest_cmd(iso_name, xorriso=False):
+def get_manifest_cmd(iso_name, xorriso=False, output_file=None):
+    if not output_file:
+        output_file = "%s.manifest" % iso_name
+
     if xorriso:
         return """xorriso -dev %s --find |
             tail -n+2 |
             tr -d "'" |
             cut -c2- |
-            sort >> %s.manifest""" % (
+            sort >> %s""" % (
             shlex_quote(iso_name),
-            shlex_quote(iso_name),
+            shlex_quote(output_file),
         )
     else:
-        return "isoinfo -R -f -i %s | grep -v '/TRANS.TBL$' | sort >> %s.manifest" % (
+        return "isoinfo -R -f -i %s | grep -v '/TRANS.TBL$' | sort >> %s" % (
             shlex_quote(iso_name),
-            shlex_quote(iso_name),
+            shlex_quote(output_file),
         )

View File

@@ -276,7 +276,6 @@ class Variant(object):
         modules=None,
         modular_koji_tags=None,
     ):
-
         environments = environments or []
         buildinstallpackages = buildinstallpackages or []

setup.cfg (new file)

View File

@@ -0,0 +1,2 @@
+[sdist]
+formats=bztar

View File

@@ -5,14 +5,9 @@
 
 import os
 import glob
 
-import distutils.command.sdist
 from setuptools import setup
 
-# override default tarball format with bzip2
-distutils.command.sdist.sdist.default_format = {"posix": "bztar"}
-
 # recursively scan for python modules to be included
 package_root_dirs = ["pungi", "pungi_utils"]
 packages = set()

@@ -25,7 +20,7 @@ packages = sorted(packages)
 
 setup(
     name="pungi",
-    version="4.3.6",
+    version="4.3.8",
     description="Distribution compose tool",
     url="https://pagure.io/pungi",
     author="Dennis Gilmore",

View File

@@ -628,6 +628,7 @@ class ComposeTestCase(unittest.TestCase):
         ci_copy = dict(self.ci_json)
         ci_copy["header"]["version"] = "1.2"
         mocked_response = mock.MagicMock()
+        mocked_response.status_code = 200
         mocked_response.text = json.dumps(self.ci_json)
         mocked_requests.post.return_value = mocked_response

@@ -811,6 +812,7 @@ class TracebackTest(unittest.TestCase):
 
 class RetryRequestTest(unittest.TestCase):
+    @mock.patch("time.sleep", new=lambda x: x)
     @mock.patch("pungi.compose.requests")
     def test_retry_timeout(self, mocked_requests):
         mocked_requests.post.side_effect = [

@@ -827,3 +829,17 @@ class RetryRequestTest(unittest.TestCase):
             ],
         )
         self.assertEqual(rv.status_code, 200)
+
+    @mock.patch("pungi.compose.requests")
+    def test_no_retry_on_client_error(self, mocked_requests):
+        mocked_requests.post.side_effect = [
+            mock.Mock(status_code=400, json=lambda: {"message": "You made a mistake"}),
+        ]
+
+        url = "http://locahost/api/1/composes/"
+        with self.assertRaises(RuntimeError):
+            retry_request("post", url)
+
+        self.assertEqual(
+            mocked_requests.mock_calls,
+            [mock.call.post(url, json=None, auth=None)],
+        )

View File

@@ -45,7 +45,7 @@ class TestImageBuildPhase(PungiTestCase):
         phase.run()

         # assert at least one thread was started
-        self.assertTrue(phase.pool.add.called)
+        phase.pool.add.assert_called()
         client_args = {
             "original_image_conf": original_image_conf,
             "image_conf": {
@@ -137,7 +137,7 @@ class TestImageBuildPhase(PungiTestCase):
         phase.run()

         # assert at least one thread was started
-        self.assertTrue(phase.pool.add.called)
+        phase.pool.add.assert_called()
         server_args = {
             "original_image_conf": original_image_conf,
             "image_conf": {
@@ -196,7 +196,7 @@ class TestImageBuildPhase(PungiTestCase):
         phase.run()

         # assert at least one thread was started
-        self.assertTrue(phase.pool.add.called)
+        phase.pool.add.assert_called()
         server_args = {
             "original_image_conf": original_image_conf,
             "image_conf": {
@@ -261,8 +261,8 @@ class TestImageBuildPhase(PungiTestCase):
         phase.run()

         # assert at least one thread was started
-        self.assertFalse(phase.pool.add.called)
-        self.assertFalse(phase.pool.queue_put.called)
+        phase.pool.add.assert_not_called()
+        phase.pool.queue_put.assert_not_called()

     @mock.patch("pungi.phases.image_build.ThreadPool")
     def test_image_build_set_install_tree(self, ThreadPool):
@@ -297,9 +297,9 @@ class TestImageBuildPhase(PungiTestCase):
         phase.run()

         # assert at least one thread was started
-        self.assertTrue(phase.pool.add.called)
-        self.assertTrue(phase.pool.queue_put.called_once)
+        phase.pool.add.assert_called()
+        phase.pool.queue_put.assert_called_once()

         args, kwargs = phase.pool.queue_put.call_args
         self.assertEqual(args[0][0], compose)
         self.assertDictEqual(
@@ -364,9 +364,9 @@ class TestImageBuildPhase(PungiTestCase):
         phase.run()

         # assert at least one thread was started
-        self.assertTrue(phase.pool.add.called)
-        self.assertTrue(phase.pool.queue_put.called_once)
+        phase.pool.add.assert_called()
+        phase.pool.queue_put.assert_called_once()

         args, kwargs = phase.pool.queue_put.call_args
         self.assertEqual(args[0][0], compose)
         self.assertDictEqual(
@@ -430,9 +430,9 @@ class TestImageBuildPhase(PungiTestCase):
         phase.run()

         # assert at least one thread was started
-        self.assertTrue(phase.pool.add.called)
-        self.assertTrue(phase.pool.queue_put.called_once)
+        phase.pool.add.assert_called()
+        phase.pool.queue_put.assert_called_once()

         args, kwargs = phase.pool.queue_put.call_args
         self.assertEqual(args[0][0], compose)
         self.assertDictEqual(
@@ -501,9 +501,9 @@ class TestImageBuildPhase(PungiTestCase):
         phase.run()

         # assert at least one thread was started
-        self.assertTrue(phase.pool.add.called)
-        self.assertTrue(phase.pool.queue_put.called_once)
+        phase.pool.add.assert_called()
+        phase.pool.queue_put.assert_called_once()

         args, kwargs = phase.pool.queue_put.call_args
         self.assertEqual(args[0][0], compose)
         self.assertDictEqual(
@@ -569,9 +569,9 @@ class TestImageBuildPhase(PungiTestCase):
         phase.run()

         # assert at least one thread was started
-        self.assertTrue(phase.pool.add.called)
-        self.assertTrue(phase.pool.queue_put.called_once)
+        phase.pool.add.assert_called()
+        phase.pool.queue_put.assert_called_once()

         args, kwargs = phase.pool.queue_put.call_args
         self.assertEqual(
             args[0][1].get("image_conf", {}).get("image-build", {}).get("release"),
@@ -612,9 +612,9 @@ class TestImageBuildPhase(PungiTestCase):
         phase.run()

         # assert at least one thread was started
-        self.assertTrue(phase.pool.add.called)
-        self.assertTrue(phase.pool.queue_put.called_once)
+        phase.pool.add.assert_called()
+        phase.pool.queue_put.assert_called_once()

         args, kwargs = phase.pool.queue_put.call_args
         self.assertEqual(
             args[0][1].get("image_conf", {}).get("image-build", {}).get("release"),
@@ -655,9 +655,9 @@ class TestImageBuildPhase(PungiTestCase):
         phase.run()

         # assert at least one thread was started
-        self.assertTrue(phase.pool.add.called)
-        self.assertTrue(phase.pool.queue_put.called_once)
+        phase.pool.add.assert_called()
+        phase.pool.queue_put.assert_called_once()

         args, kwargs = phase.pool.queue_put.call_args
         self.assertTrue(args[0][1].get("scratch"))
@@ -692,7 +692,7 @@ class TestImageBuildPhase(PungiTestCase):
         phase.run()

         # assert at least one thread was started
-        self.assertTrue(phase.pool.add.called)
+        phase.pool.add.assert_called()
         server_args = {
             "original_image_conf": original_image_conf,
             "image_conf": {
@@ -755,7 +755,7 @@ class TestImageBuildPhase(PungiTestCase):
         phase.run()

         # assert at least one thread was started
-        self.assertTrue(phase.pool.add.called)
+        phase.pool.add.assert_called()
         server_args = {
             "original_image_conf": original_image_conf,
             "image_conf": {


@@ -28,6 +28,7 @@ def fake_listdir(pattern, result=None, exc=None):
     """Create a function that mocks os.listdir. If the path contains pattern,
     result will be returned or exc raised. Otherwise it's normal os.listdir
     """
+
     # The point of this is to avoid issues on Python 2, where apparently
     # isdir() is using listdir(), so the mocking is breaking it.
     def worker(path):


@ -121,7 +121,6 @@ class KojiWrapperTest(KojiWrapperBaseTestCase):
) )
def test_get_image_paths(self): def test_get_image_paths(self):
# The data for this tests is obtained from the actual Koji build. It # The data for this tests is obtained from the actual Koji build. It
# includes lots of fields that are not used, but for the sake of # includes lots of fields that are not used, but for the sake of
# completeness is fully preserved. # completeness is fully preserved.
@ -321,7 +320,6 @@ class KojiWrapperTest(KojiWrapperBaseTestCase):
) )
def test_get_image_paths_failed_subtask(self): def test_get_image_paths_failed_subtask(self):
failed = set() failed = set()
def failed_callback(arch): def failed_callback(arch):


@@ -43,7 +43,7 @@ class TestLiveImagesPhase(PungiTestCase):
         phase.run()

         # assert at least one thread was started
-        self.assertTrue(phase.pool.add.called)
+        phase.pool.add.assert_called()
         self.maxDiff = None
         six.assertCountEqual(
             self,
@@ -124,7 +124,7 @@ class TestLiveImagesPhase(PungiTestCase):
         phase.run()

         # assert at least one thread was started
-        self.assertTrue(phase.pool.add.called)
+        phase.pool.add.assert_called()
         self.maxDiff = None
         six.assertCountEqual(
             self,
@@ -192,7 +192,7 @@ class TestLiveImagesPhase(PungiTestCase):
         phase.run()

         # assert at least one thread was started
-        self.assertTrue(phase.pool.add.called)
+        phase.pool.add.assert_called()
         self.maxDiff = None
         six.assertCountEqual(
             self,
@@ -265,7 +265,7 @@ class TestLiveImagesPhase(PungiTestCase):
         phase.run()

         # assert at least one thread was started
-        self.assertTrue(phase.pool.add.called)
+        phase.pool.add.assert_called()
         self.maxDiff = None
         six.assertCountEqual(
             self,
@@ -363,7 +363,7 @@ class TestLiveImagesPhase(PungiTestCase):
         phase.run()

         # assert at least one thread was started
-        self.assertTrue(phase.pool.add.called)
+        phase.pool.add.assert_called()
         self.maxDiff = None
         six.assertCountEqual(
             self,
@@ -433,7 +433,7 @@ class TestLiveImagesPhase(PungiTestCase):
         phase.run()

         # assert at least one thread was started
-        self.assertTrue(phase.pool.add.called)
+        phase.pool.add.assert_called()
         self.maxDiff = None
         six.assertCountEqual(
             self,
@@ -503,7 +503,7 @@ class TestLiveImagesPhase(PungiTestCase):
         phase.run()

         # assert at least one thread was started
-        self.assertTrue(phase.pool.add.called)
+        phase.pool.add.assert_called()
         self.maxDiff = None
         six.assertCountEqual(
             self,
@@ -571,7 +571,7 @@ class TestLiveImagesPhase(PungiTestCase):
         phase.run()

         # assert at least one thread was started
-        self.assertTrue(phase.pool.add.called)
+        phase.pool.add.assert_called()
         self.maxDiff = None
         six.assertCountEqual(
             self,


@@ -36,7 +36,7 @@ class TestLiveMediaPhase(PungiTestCase):
         phase = LiveMediaPhase(compose)

         phase.run()
-        self.assertTrue(phase.pool.add.called)
+        phase.pool.add.assert_called()
         self.assertEqual(
             phase.pool.queue_put.call_args_list,
             [
@@ -93,7 +93,7 @@ class TestLiveMediaPhase(PungiTestCase):
         phase = LiveMediaPhase(compose)

         phase.run()
-        self.assertTrue(phase.pool.add.called)
+        phase.pool.add.assert_called()
         self.assertEqual(
             phase.pool.queue_put.call_args_list,
             [
@@ -156,7 +156,7 @@ class TestLiveMediaPhase(PungiTestCase):
         phase = LiveMediaPhase(compose)

         phase.run()
-        self.assertTrue(phase.pool.add.called)
+        phase.pool.add.assert_called()
         self.assertEqual(
             phase.pool.queue_put.call_args_list,
             [
@@ -267,7 +267,7 @@ class TestLiveMediaPhase(PungiTestCase):
         phase = LiveMediaPhase(compose)

         phase.run()
-        self.assertTrue(phase.pool.add.called)
+        phase.pool.add.assert_called()
         self.assertEqual(
             phase.pool.queue_put.call_args_list,
             [
@@ -444,7 +444,7 @@ class TestLiveMediaPhase(PungiTestCase):
         phase = LiveMediaPhase(compose)

         phase.run()
-        self.assertTrue(phase.pool.add.called)
+        phase.pool.add.assert_called()
         self.assertEqual(
             phase.pool.queue_put.call_args_list,


@@ -133,7 +133,7 @@ class TestNotifier(unittest.TestCase):
     def test_does_not_run_without_config(self, run, makedirs):
         n = PungiNotifier(None)
         n.send("cmd", foo="bar", baz="quux")
-        self.assertFalse(run.called)
+        run.assert_not_called()

     @mock.patch("pungi.util.translate_path")
     @mock.patch("kobo.shortcuts.run")
@@ -146,4 +146,4 @@ class TestNotifier(unittest.TestCase):
         n.send("cmd", **self.data)

         self.assertEqual(run.call_args_list, [self._call("run-notify", "cmd")])
-        self.assertTrue(self.compose.log_warning.called)
+        self.compose.log_warning.assert_called()


@@ -3,14 +3,76 @@

 import mock
 import os
+import shutil
+import tempfile
+import unittest

 import koji as orig_koji

 from tests import helpers
+from pungi import compose
 from pungi.phases import osbuild
 from pungi.checks import validate


+class OSBuildPhaseHelperFuncsTest(unittest.TestCase):
+    @mock.patch("pungi.compose.ComposeInfo")
+    def setUp(self, ci):
+        self.tmp_dir = tempfile.mkdtemp()
+        conf = {"translate_paths": [(self.tmp_dir, "http://example.com")]}
+        ci.return_value.compose.respin = 0
+        ci.return_value.compose.id = "RHEL-8.0-20180101.n.0"
+        ci.return_value.compose.date = "20160101"
+        ci.return_value.compose.type = "nightly"
+        ci.return_value.compose.type_suffix = ".n"
+        ci.return_value.compose.label = "RC-1.0"
+        ci.return_value.compose.label_major_version = "1"
+
+        compose_dir = os.path.join(self.tmp_dir, ci.return_value.compose.id)
+        self.compose = compose.Compose(conf, compose_dir)
+        server_variant = mock.Mock(uid="Server", type="variant")
+        client_variant = mock.Mock(uid="Client", type="variant")
+        self.compose.all_variants = {
+            "Server": server_variant,
+            "Client": client_variant,
+        }
+
+    def tearDown(self):
+        shutil.rmtree(self.tmp_dir)
+
+    def test__get_repo_urls(self):
+        repos = [
+            "http://example.com/repo",
+            "Server",
+            {
+                "baseurl": "Client",
+                "package_sets": ["build"],
+            },
+            {
+                "baseurl": "ftp://example.com/linux/repo",
+                "package_sets": ["build"],
+            },
+        ]
+        expect = [
+            "http://example.com/repo",
+            "http://example.com/RHEL-8.0-20180101.n.0/compose/Server/$basearch/os",
+            {
+                "baseurl": "http://example.com/RHEL-8.0-20180101.n.0/compose/Client/"
+                + "$basearch/os",
+                "package_sets": ["build"],
+            },
+            {
+                "baseurl": "ftp://example.com/linux/repo",
+                "package_sets": ["build"],
+            },
+        ]
+        self.assertEqual(
+            osbuild.OSBuildPhase._get_repo_urls(self.compose, repos), expect
+        )
+
+
 class OSBuildPhaseTest(helpers.PungiTestCase):
     @mock.patch("pungi.phases.osbuild.ThreadPool")
     def test_run(self, ThreadPool):
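
The new helper test documents the repo-resolution rules: real URLs pass through untouched, a bare variant UID ("Server", "Client") resolves to that variant's os tree under the translated compose path, and dict-style repos get the same resolution applied to their baseurl while extra keys such as package_sets survive. A condensed sketch of the rule being pinned down (hypothetical helper; the actual logic is OSBuildPhase._get_repo_urls):

    def resolve_repo(repo, compose_id, variants, translate):
        # Dict repos resolve through "baseurl"; other keys are kept as-is.
        if isinstance(repo, dict):
            return dict(
                repo, baseurl=resolve_repo(repo["baseurl"], compose_id, variants, translate)
            )
        # A bare variant UID becomes that variant's os tree URL.
        if repo in variants:
            return translate("%s/compose/%s/$basearch/os" % (compose_id, repo))
        return repo  # already a URL; passed through untouched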
@@ -124,6 +186,49 @@ class OSBuildPhaseTest(helpers.PungiTestCase):
         )
         self.assertNotEqual(validate(compose.conf), ([], []))

+    @mock.patch("pungi.phases.osbuild.ThreadPool")
+    def test_rich_repos(self, ThreadPool):
+        repo = {"baseurl": "http://example.com/repo", "package_sets": ["build"]}
+        cfg = {
+            "name": "test-image",
+            "distro": "rhel-8",
+            "version": "1",
+            "target": "image-target",
+            "arches": ["x86_64"],
+            "image_types": ["qcow2"],
+            "repo": [repo],
+        }
+        compose = helpers.DummyCompose(
+            self.topdir, {"osbuild": {"^Everything$": [cfg]}}
+        )
+
+        self.assertValidConfig(compose.conf)
+
+        pool = ThreadPool.return_value
+
+        phase = osbuild.OSBuildPhase(compose)
+        phase.run()
+
+        self.assertEqual(len(pool.add.call_args_list), 1)
+        self.assertEqual(
+            pool.queue_put.call_args_list,
+            [
+                mock.call(
+                    (
+                        compose,
+                        compose.variants["Everything"],
+                        cfg,
+                        ["x86_64"],
+                        "1",
+                        None,
+                        "image-target",
+                        [repo, self.topdir + "/compose/Everything/$arch/os"],
+                        [],
+                    ),
+                ),
+            ],
+        )
+

 class RunOSBuildThreadTest(helpers.PungiTestCase):
     def setUp(self):
@@ -189,7 +294,13 @@ class RunOSBuildThreadTest(helpers.PungiTestCase):
                 "1",  # version
                 "15",  # release
                 "image-target",
-                [self.topdir + "/compose/Everything/$arch/os"],
+                [
+                    self.topdir + "/compose/Everything/$arch/os",
+                    {
+                        "baseurl": self.topdir + "/compose/Everything/$arch/os",
+                        "package_sets": ["build"],
+                    },
+                ],
                 ["x86_64"],
             ),
             1,
@@ -211,7 +322,13 @@ class RunOSBuildThreadTest(helpers.PungiTestCase):
                 ["aarch64", "x86_64"],
                 opts={
                     "release": "15",
-                    "repo": [self.topdir + "/compose/Everything/$arch/os"],
+                    "repo": [
+                        self.topdir + "/compose/Everything/$arch/os",
+                        {
+                            "baseurl": self.topdir + "/compose/Everything/$arch/os",
+                            "package_sets": ["build"],
+                        },
+                    ],
                 },
             ),
             mock.call.save_task_id(1234),
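
Downstream, the thread test shows the effect of the richer format: a dict-style repo reaches the Koji osbuild call inside opts["repo"] as-is, baseurl plus package_sets, alongside the plain compose tree URL, so the structured form is preserved end to end rather than being flattened to a string.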


@@ -315,7 +315,6 @@ class OstreeTreeScriptTest(helpers.PungiTestCase):
     @mock.patch("kobo.shortcuts.run")
     def test_extra_config_with_keep_original_sources(self, run):
         configdir = os.path.join(self.topdir, "config")
-
         self._make_dummy_config_dir(configdir)
         treefile = os.path.join(configdir, "fedora-atomic-docker-host.json")


@@ -47,7 +47,7 @@ class TestMaterializedPkgsetCreate(helpers.PungiTestCase):
         pkgset.name = name
         pkgset.reuse = None

-        def mock_subset(primary, arch_list, exclusive_noarch):
+        def mock_subset(primary, arch_list, **kwargs):
             self.subsets[primary] = mock.Mock()
             return self.subsets[primary]
@@ -73,10 +73,16 @@ class TestMaterializedPkgsetCreate(helpers.PungiTestCase):
         self.assertEqual(result["amd64"], self.subsets["amd64"])

         self.pkgset.subset.assert_any_call(
-            "x86_64", ["x86_64", "noarch", "src"], exclusive_noarch=True
+            "x86_64",
+            ["x86_64", "noarch", "src"],
+            exclusive_noarch=True,
+            inherit_to_noarch=True,
         )
         self.pkgset.subset.assert_any_call(
-            "amd64", ["amd64", "x86_64", "noarch", "src"], exclusive_noarch=True
+            "amd64",
+            ["amd64", "x86_64", "noarch", "src"],
+            exclusive_noarch=True,
+            inherit_to_noarch=True,
         )

         for arch, pkgset in result.package_sets.items():
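
Loosening mock_subset to **kwargs is what keeps this stub alive as subset() grows options: it now receives inherit_to_noarch=True next to exclusive_noarch=True, and any flag added later will be absorbed without another test edit. The updated assert_any_call expectations then pin the new default, namely that ExcludeArch/ExclusiveArch inheritance to noarch packages stays on unless explicitly disabled in the configuration.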


@@ -853,6 +853,8 @@ class TestReuseKojiPkgset(helpers.PungiTestCase):
                 "include_packages": None,
                 "rpms_by_arch": mock.Mock(),
                 "srpms_by_name": mock.Mock(),
+                "exclusive_noarch": True,
+                "inherit_to_noarch": True,
             }
         )
         self.pkgset.old_file_cache = mock.Mock()
@@ -934,6 +936,28 @@ class TestMergePackageSets(PkgsetCompareMixin, unittest.TestCase):
             first.rpms_by_arch, {"i686": ["rpms/bash@4.3.42@4.fc24@i686"], "noarch": []}
         )

+    def test_merge_doesnt_exclude_noarch_exclude_arch_when_configured(self):
+        first = pkgsets.PackageSetBase("first", [None])
+        second = pkgsets.PackageSetBase("second", [None])
+
+        pkg = first.file_cache.add("rpms/bash@4.3.42@4.fc24@i686")
+        first.rpms_by_arch.setdefault(pkg.arch, []).append(pkg)
+        pkg = second.file_cache.add("rpms/pungi@4.1.3@3.fc25@noarch")
+        pkg.excludearch = ["i686"]
+        second.rpms_by_arch.setdefault(pkg.arch, []).append(pkg)
+
+        first.merge(second, "i386", ["i686", "noarch"], inherit_to_noarch=False)
+
+        print(first.rpms_by_arch)
+        self.assertPkgsetEqual(
+            first.rpms_by_arch,
+            {
+                "i686": ["rpms/bash@4.3.42@4.fc24@i686"],
+                "noarch": ["rpms/pungi@4.1.3@3.fc25@noarch"],
+            },
+        )
+
     def test_merge_excludes_noarch_exclusive_arch(self):
         first = pkgsets.PackageSetBase("first", [None])
         second = pkgsets.PackageSetBase("second", [None])
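
The new merge test spells out what disabling inheritance means: with inherit_to_noarch=False, a noarch package whose source rpm carries ExcludeArch: i686 is still merged into the i686 tree. A condensed sketch of the guard the flag presumably adds on the merge path (simplified names, not the real implementation in pungi/phases/pkgset/pkgsets.py):

    def skip_for_arch(pkg, arch, inherit_to_noarch=True):
        # ExcludeArch/ExclusiveArch inherited from the source rpm only
        # applies to noarch packages while inheritance is enabled.
        if pkg.arch == "noarch" and not inherit_to_noarch:
            return False
        return arch in getattr(pkg, "excludearch", [])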