A new `buildinstall.metadata` file is created once the buildinstall
phase is done. This file contains:
- list of lorax command line arguments.
- list of RPMs installed in the buildinstall buildroot.
- list of RPMs installed in the resulting boot.iso.
This file is checked during the next compose run to find out whether
the result of the buildinstall phase from the previous compose
can be reused. The following is checked:
- The lorax command line arguments are the same (except for expected
differences).
- The NVRAs of RPMs in the runroot_tag are the same as the ones
installed in the old buildinstall buildroot.
- The NVRAs of RPMs installed in the boot.iso are the same as
the ones in package sets in the current compose.
Due to the way it is implemented, this reuse strategy is applied only
when the pungi_buildinstall Koji plugin is used.
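A minimal config sketch of how this would typically be enabled; the
option name comes from the follow-up commit below, and the plain
boolean form is an assumption:
```
# Assumed boolean toggle enabling reuse of old buildinstall results.
buildinstall_allow_reuse = True
```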
Signed-off-by: Jan Kaluza <jkaluza@redhat.com>
Add tests for buildinstall reuse and the buildinstall_allow_reuse option.
Signed-off-by: Jan Kaluza <jkaluza@redhat.com>
Move repoclosure out of the test phase into its own phase and
run it in parallel with the image building phases (osbs, imagebuild, ...)
to speed things up.
JIRA: RHELCMP-8
Signed-off-by: Haibo Lin <hlin@redhat.com>
I've analysed multiple nightly composes and often the only difference
in the configuration between two nightly composes is a different
`product_id` commit hash.
The `product_id` is used in the later `createrepo` phase and does not
influence the gather phase at all. I therefore think it can be
whitelisted in the gather phase reuse code.
Signed-off-by: Jan Kaluza <jkaluza@redhat.com>
- Also get requires/provides of RPMs in the package set.
- Store the results of the gather phase as a pickle file.
- Reuse old gather phase results if the Pungi configuration,
the "names" of RPMs in the global package set, and their
requires/provides did not change.
- Add the `gather_allow_reuse` option to enable this feature (see the
sketch below).
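A minimal config sketch, assuming the option is a plain boolean toggle:
```
# Enables reusing gather results from the previous compose when the
# configuration and the relevant package set data are unchanged.
gather_allow_reuse = True
```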
Signed-off-by: Jan Kaluza <jkaluza@redhat.com>
Add gather_allow_reuse, more tests, and better handling of gather_lookaside_repos.
The current code calls `find_old_compose` followed by multiple `os.path.*`
calls to find out whether a particular file exists in the old compose. This
duplicates a lot of code and makes it harder to read.
In this commit, a new `Compose.old_compose_path` helper is introduced and
used instead of direct calls to `find_old_compose`.
Signed-off-by: Jan Kaluza <jkaluza@redhat.com>
ODCS creates symlinks to the real directories containing the composes.
The directory structure is similar to the following:
- `./nightly/Fedora-Rawhide-20200304.n.0` -> `../odcs-3`
- `./nightly/Fedora-Rawhide-20200305.n.0` -> `../odcs-4`
- `./nightly/latest-Fedora-Rawhide` -> `../odcs-5`
The current Pungi code searching for old composes skips symlinks,
and therefore old ODCS composes are not found.
This commit removes this check, so symlinks are allowed
when searching for an old compose.
I think this check existed to prevent using the `latest-*` symlink as
a source for the compose. But that is not possible anyway, because the
code checks that the old compose directory name matches a certain pattern
constructed from release_short, release_version, ... The `latest-*`
symlink definitely does not match this pattern.
I also executed a test compose and it worked as expected.
Signed-off-by: Jan Kaluza <jkaluza@redhat.com>
If the Koji pungi-buildinstall plugin is used, the buildinstall results are
stored in the `output_dir`, but in "results" and "logs" subdirectories.
We need to move them to `final_output_dir`.
Signed-off-by: Jan Kaluza <jkaluza@redhat.com>
E231 missing whitespace after ','
E265 block comment should start with '# '
E266 too many leading '#' for block comment
E302 expected 2 blank lines, found 1
E501 line too long (115 > 88 characters)
E713 test for membership should be 'not in'
E722 do not use bare 'except'
F812 list comprehension redefines 'g' from line 1499
F821 undefined name 'cmp'
F841 local variable 'ex' is assigned to but never used
JIRA: COMPOSE-4108
Signed-off-by: Haibo Lin <hlin@redhat.com>
Some composes might need extra validation to ensure they follow
certain strict rules - for example containing only signed packages or
packages only from a particular Koji tag.
There is currently no way to check that a Pungi configuration fulfills
these extra requirements.
This commit adds a new `--schema-override` option to the
`pungi-config-validate` script which allows the caller to specify a path
to a JSON schema that overrides the default JSON schema and therefore
limits it further.
For example, to limit `pkgset_source` to `koji`, one can use the
following JSON schema override:
```
{
    "properties": {
        "pkgset_source": {
            "enum": ["koji"]
        }
    }
}
```
It is possible to use `--schema-override` multiple times to apply
multiple schema overrides.
Merges: https://pagure.io/pungi/pull-request/1341
Signed-off-by: Jan Kaluza <jkaluza@redhat.com>
When partial cleanup messes up the guestfs cache, the call to guestmount
will fail. To fix that, let's check if there is a problem first and
clean up everything if needed.
Relates: https://bugzilla.redhat.com/show_bug.cgi?id=1771976
JIRA: COMPOSE-3932
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
We would like to start running the buildinstall phase using the safer
Koji Pungi Buildinstall plugin and stop using the Runroot plugin directly.
So far, the plugin exists only as a PR for Koji:
https://pagure.io/koji/pull-request/1939
This commit adds support for this plugin when `lorax_use_koji_plugin`
is set to `True`.
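A short config sketch of the new toggle (the boolean value is taken from
the wording above):
```
# Run the buildinstall/lorax step through the pungi_buildinstall Koji
# plugin instead of a plain runroot task.
lorax_use_koji_plugin = True
```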
Signed-off-by: Jan Kaluza <jkaluza@redhat.com>
When `link_type = "symlink"` is used, the packages are in fact symlinks
to /mnt/koji. When the graft points file is generated, the paths in this
graft points file point to the symlinks, and therefore the symlinks are
copied into the generated ISO file instead of the real files.
In this commit, the code generating the graft points file is changed
so that it resolves each symlink to the real file stored on /mnt/koji.
To make this safer, it only does the resolving when the symlink points
outside of `compose.paths.compose.topdir()`. Therefore you can still
generate an ISO file with a symlink pointing to a file stored within the
ISO file itself, although this is not done currently, afaik.
The main reason for this is to be able to generate ISO files even
without hardlinks (which would need read-write access on /mnt/koji)
and without copying all the packages from /mnt/koji to local storage.
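A rough Python sketch of the resolving rule described above; the function
and variable names are illustrative, not the actual Pungi code:
```
import os

def resolve_external_symlink(path, compose_topdir):
    # Resolve the symlink only when the target lives outside the compose
    # tree; in-tree symlinks are kept as-is in the graft points.
    if os.path.islink(path):
        real = os.path.realpath(path)
        topdir = os.path.realpath(compose_topdir)
        if not real.startswith(topdir + os.sep):
            return real
    return path
```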
Signed-off-by: Jan Kaluza <jkaluza@redhat.com>
The `runroot_method` option now accepts a `dict` value with a phase name
as key and a runroot method as value. For backward compatibility, a plain
`str` value is still supported.
A new `global_runroot_method` option has been added which defines
the runroot method for phases not listed in the `runroot_method` dict.
This commit allows running the `createiso` phase locally while keeping the
other phases in Koji.
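A hedged config example of the use case mentioned above (the method
values are illustrative):
```
# Default runroot method for phases not listed in the dict below.
global_runroot_method = "koji"
# Per-phase override: run createiso locally, everything else in Koji.
runroot_method = {
    "createiso": "local",
}
```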
Signed-off-by: Jan Kaluza <jkaluza@redhat.com>
Originally the list of solvables for fus was growing with each iteration
and nothing was ever removed. That later changed so that fus iterations
are only done on newly added stuff. It's great for performance, but
means that the last log is not a superset of all others.
To get all dependency problems we need to look into all log files, not
just the last one.
JIRA: COMPOSE-3964
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
When running repoclosure as the root user, it will use a different dir
instead of the one returned by getCacheDir().
For yum, the --tempcache option makes it always use the cache dir
returned by getCacheDir().
For dnf, there is no such option and we have to handle it specially.
JIRA: COMPOSE-3922
Signed-off-by: Haibo Lin <hlin@redhat.com>
When probing lookasides for platform definition, we need to make sure it
works for repos specified as HTTP urls. Createrepo doesn't seem to
automatically download the repodata, so we have to help it.
JIRA: COMPOSE-3958
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
Each depsolved tree will be using its own cache for fus. This should
still allow for faster loading of metadata after first iteration, but
should prevent errors from using cached files meant for another variant
or architecture. The cache is deleted after the last iteration.
JIRA: COMPOSE-3959
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
This is basically collecting all individual extra_files.json and putting
their content into a single location in
compose/metadata/extra_files.json. The file format is part of productmd
1.23.
JIRA: COMPOSE-3831
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
A lorax template used for the ostree-installer might need an additional
package dependency (e.g., flatpak to embed a flatpak repository) - add
a config key 'extra_runroot_pkgs' to the ostree installer configuration
to allow supplementing the set of packages installed into the runroot.
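A hedged example, assuming the usual Pungi variant/arch mapping for
ostree_installer; the variant name is made up, and flatpak is the use
case mentioned above:
```
ostree_installer = [
    ("^Silverblue$", {
        "x86_64": {
            # New key: extra packages installed into the runroot in
            # addition to the default set.
            "extra_runroot_pkgs": ["flatpak"],
        },
    }),
]
```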
Signed-off-by: Owen W. Taylor <otaylor@fishsoup.net>
It was needed to provide the assertItemsEqual method. Starting with Python
3.2, there's assertCountEqual, which does the same thing. Six provides a
helper that dispatches to whichever method exists. With this change,
unittest2 is only needed on Python 2.6 to backport the method.
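A small illustration of the dispatching helper (the test itself is made
up):
```
import unittest

import six

class DummyTest(unittest.TestCase):
    def test_order_does_not_matter(self):
        # Calls assertCountEqual on Python 3 and falls back to
        # assertItemsEqual on Python 2.
        six.assertCountEqual(self, ["a", "b", "b"], ["b", "a", "b"])
```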
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
Until now, the behaviour was that all debuginfo from a build would be
included if at least one package with the same arch was included.
This resulted in many debuginfo packages being included even though
their corresponding package was not present.
With this patch, we will always pull in debugsource, but foo-debuginfo
will only be included if foo is included for the same arch. As a
consequence, it is necessary to resolve dependencies of debuginfo
packages. There are cases where, for example, foo-debuginfo needs
foo-debuginfo-common.
This change means that the DNF and YUM backends no longer produce
identical output. The tests where this is demonstrated are duplicated and
their expected results are updated.
JIRA: COMPOSE-3823
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
Make the dummy-bash -> dummy-glibc dependency an archful requirement. This
avoids a potential race condition where the order of dependency processing
could result in different packages being pulled in. The tests where this
could happen are updated.
Make dummy-glibc-debuginfo depend on dummy-glibc-debuginfo-common.
The filenames in the repo no longer include a hash, and sqlite databases
are not generated.
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
This should make it possible to import the library only when it's
really needed.
DNF does not work with libmodulemd v2. If we import libmodulemd v2 and
then dnf, the program will just hang forever. We only need DNF in
pungi-gather, where libmodulemd is not needed, and in the places where
we do need libmodulemd, we don't use DNF.
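A minimal sketch of the deferred-import pattern this describes (not the
actual Pungi code):
```
def load_modulemd():
    # Importing inside the function keeps libmodulemd out of module
    # scope, so code paths that only need DNF never load it.
    import gi
    gi.require_version("Modulemd", "2.0")
    from gi.repository import Modulemd
    return Modulemd
```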
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
Add a configuration option to enable skipping some modules found in the
configured tag.
Fixes: https://pagure.io/pungi/issue/1260
JIRA: COMPOSE-3794
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
This patch adds a new config option, which is expected to be the name of
a subdirectory in the repo with module defaults. If supplied, overrides
from that location are loaded every time defaults are loaded.
This raises the minimal required version of libmodulemd to 2.8.0.
JIRA: COMPOSE-3828
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
There will be a new log file logs/global/excluding-arch.global.log
Fixes: https://pagure.io/pungi/issue/1251
Signed-off-by: Haibo Lin <hlin@redhat.com>
This was a workaround to make some packages from the global repo
invisible for depsolving. This is now handled by packages being in
different repos. We can select which repos are enabled at which point.
This achieves the same result, but much faster.
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
The repo was used to speed up creating lookaside repo from a variant.
This uses a similar approach as createrepo phase: selecting the last
available package set and using that data.
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
Simply use all existing package set repos as input for the runroot
task. The command line gets a bit long, but the actual behaviour should
remain the same.
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
There is no longer a single repo with all packages. This means that the
metadata has to be loaded from another location.
When taking packages from Koji, we can assume that the non-modular
package tag will be processed last. The repo for this tag will be used.
This has a better chance of being useful than using a random module.
For repo sources, there is only one package set anyway, so this change
makes no difference.
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
With this patch, there should be a separate package set for each tag
that is consumed.
Generally each module will create a separate package set, with the
exception of -devel modules that will be in the same set as their
non-devel version.
Variants no longer need to keep their own package set objects. Instead
they now include a set of package set names that should be used for the
variant. This can replace the whitelist mechanism of the deps gather method.
JIRA: COMPOSE-3620
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
Once a package set repo is written to disk, let's use this object that
connects the repository path with the mapping of packages.
This change also makes it explicit where the dependencies on package set
repos are. In the original codebase, any part of the code could generate a
path to the repo, even if that repo had not yet been written.
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
This name will serve as an identifier for the group of packages.
For Koji package sets, it should be the name of the tag from which the
packages come. For package sets based on repos, a dummy constant name is
used.
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
This opens up a path to having multiple package sets in the compose. The
pkgset phase now creates a list of them (although at this time there is
always a single item in that list).
Any consumer of the package set objects is updated to handle a list.
Generally this means an extra loop.
JIRA: COMPOSE-3620
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
For the 'yum' backend, only cache dirs following the
repoclosure-$COMPOSE_ID-$variant.$arch naming convention are created, e.g.
repoclosure-DP-1.0-20190822.t.0-Bar-Tools.x86_64
But for the 'dnf' backend, the dir name looks like
repoclosure-$COMPOSE_ID-$variant.$arch-$suffix and there are other files
created, e.g.
repoclosure-DP-1.0-20190822.t.0-Bar-Tools.x86_64-df9fe164317e314e
repoclosure-DP-1.0-20190822.t.0-Bar-Tools.x86_64-filenames.solvx
repoclosure-DP-1.0-20190822.t.0-Bar-Tools.x86_64.solv
JIRA: COMPOSE-2565
Signed-off-by: Haibo Lin <hlin@redhat.com>
Both pkgset sources use the same logic to create per-arch repos. There
is no reason to have that code in both places.
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
The data parsed from variants.xml uses a different format than what we
added in `_add_module_to_variant`. This leads to crashes later.
JIRA: COMPOSE-3746
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
Using this was already discouraged, and it is a bad idea in the current
setup anyway. Removing it simplifies the code.
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
Sometimes it's practical to not just warn when an ISO is larger than
expected, but to also abort the compose.
JIRA: COMPOSE-3658
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
We can avoid parsing source modulemd information since we can get the
same information from the Koji build info.
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
Historically each variant had a list of modules. This is no longer
needed and can be dropped. We can also stop logging the modulemd since
we know it was retrieved from Koji and not modified locally.
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
On Python 3 it is not possible to sort a mix of str, None, and RpmWrapper
values. First convert everything to strings and then sort. The sorting
really just simplifies diffing the files, so the exact order does not
have to be preserved.
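A tiny sketch of the fix, with illustrative data:
```
# Coercing everything to str first makes the sort total on Python 3,
# even when None or wrapper objects are mixed in with strings.
items = ["dummy-bash-4.4.12-7.noarch.rpm", None]
lines = sorted(str(i) for i in items)
```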
Fixes: https://pagure.io/pungi/issue/1227
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
This also cleans up the runroot method detection code to not rely on the
now removed option.
JIRA: COMPOSE-2634
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
If the package set repo contains any modular package, the module
metadata is added there as well.
This is needed to accommodate a change in DNF that refuses to work with
a repo containing modular packages if the module metadata is not there.
This DNF change can cause issues in the buildinstall phase.
Related: https://bugzilla.redhat.com/show_bug.cgi?id=1623128
The hybrid solver is modified to not create a separate repo with the
module metadata anymore, since it will be available in the repo with
packages. This also allows us to drop code to look into lookaside repos.
We still need to iterate over local modules in order to find out what
platform should be used.
JIRA: COMPOSE-3621
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
If the validation or dumping script is given some options, they should
only be removed if they are not valid. We have to remove the invalid
ones, otherwise they would cause a warning about unknown options.
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
When trying to validate a template that should later be filled in with
`pungi-config-dump`, there will be errors about undefined variables.
These are meant to be set when the template is populated.
This patch adds support for the `-e`/`--define` argument to the validation
script, which can be used to suppress these errors.
Alternatively, a JSON file containing values for the variables can be
read from the directory with the config file.
The `--define` option is changed in both validation and dumping to accept
an empty string as a value.
JIRA: COMPOSE-3599
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
This patch reuses the existing createrepo_num_threads option to limit the
maximum number of parallel createrepo processes.
Fixes: https://pagure.io/pungi/issue/955
JIRA: COMPOSE-2575
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
Starting tests just to run mock functions slows the tests down for no
good reason. Let's instead mock the runner and run the dummy tasks
serially.
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
Higher protocols should be more efficient in terms of performance and
storage size. Since we don't really care about interoperability with
different Python versions, we can safely go to the highest version.
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
Since there can be multiple tags, the check must be done once for all of
them at the same time. Otherwise any module found only in some and not
all tags would raise this error.
The code builds a set of all existing patterns and then removes items
from it. If there is something left once all tags are processed, it
means such patterns were not matched by anything.
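A rough sketch of the approach with illustrative data; the real code
works on modules coming from the configured Koji tags:
```
import fnmatch

# Sample modules found per tag and the configured patterns.
tag_modules = {
    "f33-modular": ["nodejs:14", "perl:5.30"],
    "f33-modular-updates": ["nodejs:15"],
}
patterns = {"nodejs:*", "ruby:*"}

# Start with every pattern and drop the ones matched in any tag.
unmatched = set(patterns)
for modules in tag_modules.values():
    unmatched = {
        p for p in unmatched
        if not any(fnmatch.fnmatch(m, p) for m in modules)
    }

# Whatever is left after all tags were processed matched nothing.
print(sorted(unmatched))  # ['ruby:*']
```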
JIRA: COMPOSE-3609
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
For scm dict resolving, the return value should be the git ref (or the
source branch in offline mode).
JIRA: COMPOSE-3614
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>