The scheme for generating versions has changed multiple times. MBS is
careful to only modify them so that they always compare correctly. This
only works, though, if the versions are treated as numbers.
This should be safe, since non-numbers should never be encountered as a
module version: libmodulemd internally stores the version as an int (or
some variant of int).
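For illustration, this is why treating the version as a number matters
(the values are made up):

    # Integer comparison orders the versions correctly; string comparison
    # does not, because it is lexicographic.
    old, new = 3, 20190304
    assert new > old
    assert not (str(new) > str(old))   # "20190304" sorts before "3"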
JIRA: COMPOSE-3540
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
The hardlink command can be installed in /usr/sbin, where it is not
visible to non-privileged users. They can still run it, but it is not in
their PATH.
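A minimal sketch of one way to locate the binary regardless of PATH (the
fallback directory and helper name are illustrative, not necessarily what
the code does):

    import os
    import shutil

    def find_hardlink():
        # Search PATH first, then fall back to /usr/sbin, where the binary
        # may live without being visible to non-privileged users.
        path = os.environ.get("PATH", "") + os.pathsep + "/usr/sbin"
        return shutil.which("hardlink", path=path)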
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
Even if we want to break hardlinks from the Koji volume, there may be
files that can still be hardlinked to save some space on the media.
JIRA: COMPOSE-3482
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
Python 3.8 no longer sorts attributes automatically, which causes some
of the tests to fail. The easiest fix is to update the code to make sure
the sorting happens explicitly.
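Assuming the failures come from XML attribute ordering in ElementTree (an
assumption based on the linked bug), a sketch of keeping the output stable
across Python versions by inserting attributes already sorted:

    import xml.etree.ElementTree as ET

    def add_element(parent, tag, attrs):
        # Python < 3.8 sorted attributes on serialization; 3.8+ preserves
        # insertion order, so inserting them sorted keeps output identical.
        return ET.SubElement(parent, tag, dict(sorted(attrs.items())))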
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1698514
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
This adds a few new config options, which are described in the
configuration documentation. Please refer to it for more information.
Merges: https://pagure.io/pungi/pull-request/1170
Signed-off-by: Jan Kaluza <jkaluza@redhat.com>
Embedding the registry configuration into OSBS config itself is
simple, but makes it impossible to reuse the same configuration for
multiple different composes.
A nice example is a nightly compose pushing images to a testing
registry, and a production compose building the same images but pushing
them to a staging location. The original design requires duplicating all
the configuration just because the registries are different.
With this option, the push information is stored in a separate option as
a mapping from NVR patterns to arbitrary data. The patterns are used to
match finished builds to a registry.
The old configuration is marked as deprecated in code and will
eventually be removed. The deprecation handling in config validation
does not allow emitting warnings for nested values.
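A hypothetical example of what the mapping could look like (the option
name and the data keys are illustrative; see the configuration
documentation for the real schema):

    # Map NVR glob patterns to arbitrary push data for matching builds.
    osbs_registries = {
        "my-container-*-candidate": [
            {"registry": "registry.stage.example.com", "tags": ["latest"]},
        ],
    }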
JIRA: COMPOSE-3394
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
Lockfile and dict.sorted are only used in the pungi.gather module, which
does not work with Python 3 due to its dependency on yum. There's no
point in always pulling in these two libraries.
This change fixes issues with the Python dependency generator, which
puts python3.6dist(lockfile) into the RPM Requires.
For builds on Python 2.6 these libraries are actually needed, but there
are no dependency generators on RHEL 6, so this change can only break
development on RHEL 6. And if someone needs to do that, they have bigger
problems...
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
This can be used to fill particular values into a predefined template
without editing any file on disk.
JIRA: COMPOSE-3316
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
The parts in the multi compose are now named reasonably.
The package set repos are selected only once, not per variant. We can't
use repos with packages for a specific arch only, as that would require
more transformations when generating test data.
Lookasides for the ResilientStorage part are added to point to the
Server part.
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
Use a method to add repos that will apply $arch and $basearch
substitution automatically. The Yum backend only applies $basearch, so
if compatibility is needed, that one should be used.
Drop the code for handling mirrorlists, since Pungi never uses it and
external use is not really supported.
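A minimal sketch of the substitution itself (the function name is
illustrative):

    def substitute_repo_url(url, arch, basearch):
        # $basearch works with both backends; $arch only with the DNF one.
        return url.replace("$arch", arch).replace("$basearch", basearch)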
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
Tests are not passing in Fedora builds with this code, since it was
deleted in DNF. I think it's not actually doing anything, so we should
be able to drop it as well.
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
When a phase is started or stopped, add a line to the output. This
should help users keep track of what is happening in case a part of the
compose takes a long time to run.
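Roughly, the intent is the following (the message format is illustrative,
based on the existing log style):

    import logging

    log = logging.getLogger(__name__)

    def run_phase(phase):
        # Explicit markers make long-running phases visible in the log.
        log.info("[BEGIN] %s phase", phase.name)
        phase.run()
        log.info("[DONE ] %s phase", phase.name)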
JIRA: COMPOSE-3287
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
A module build can create packages that are tagged in the content tag,
but should not be included in the module. Originally Pungi didn't know
exactly what the module contained, so it needed to apply filters to
exclude stuff that was definitely out.
Now that we get the final MMD from Koji, we can make this a bit stricter
by only keeping packages that we know we need.
When processing each content tag, we can put into the package set only
packages that are included in some module using that tag. This should
work with -devel modules as well: both the regular and -devel modules
will contribute to the set, and thus all packages will go to the package
set.
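A simplified sketch of the idea, assuming each modulemd document lists
its RPM artifacts as NVRAs (attribute and method names are illustrative):

    def filter_by_modules(tagged_rpms, modules_using_tag):
        # Keep only RPMs that some module built from this content tag
        # (including its -devel counterpart) actually lists as an artifact.
        wanted = set()
        for mmd in modules_using_tag:
            wanted.update(mmd.get_rpm_artifacts())
        return [rpm for rpm in tagged_rpms if rpm.nvra in wanted]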
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
This source does not really return anything useful. It was necessary to
process the source modulemd to fill in the list of RPMs. Since we now
get the final files from Koji, this is not needed anymore and the source
can be dropped.
This change requires a lot of tweaks to the tests.
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
The file we get from Koji has all important bits already filled in.
There is no need to add anything.
This patch also stops filtering the artifacts to only contain packages
that are actually in the repo. Thus debuginfo and source packages will
appear there.
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
Instead of loading the "source" modulemd, always get the final file for
each architecture from Koji.
Logging the downloaded files locally is no longer necessary; when
debugging a problem we can find the files in the respective builds in
Koji.
Tests are updated to work with new code, and an obsolete test is
removed.
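Roughly, the per-arch files can be located among the build's archives; a
sketch using the Koji client API (the archive naming convention here is
an assumption):

    import koji

    def get_modulemd_filenames(hub_url, nvr):
        session = koji.ClientSession(hub_url)
        build = session.getBuild(nvr)
        # Module builds are expected to carry one modulemd.<arch>.txt
        # archive per architecture.
        return [
            a["filename"]
            for a in session.listArchives(buildID=build["id"])
            if a["filename"].startswith("modulemd.")
        ]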
JIRA: COMPOSE-3147
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
Sometimes the release version can be more specific than what should be
exposed to users of the boot iso.
JIRA: COMPOSE-3295
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
This adds a new `Runroot` class and a new `runroot_method` option, which
makes it possible to choose between the two currently available runroot
methods:
- Local
- Koji
The main goal of this commit is to make it possible to add new runroot
methods in the future; this is the first step in that direction.
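In rough terms, the class dispatches based on the new option; a
simplified sketch (the option values follow the list above, the internals
are illustrative):

    class Runroot(object):
        # Run a command via the configured runroot method.

        def __init__(self, compose):
            self.compose = compose
            self.method = compose.conf.get("runroot_method", "local")

        def run(self, command, **kwargs):
            if self.method == "koji":
                return self._run_koji(command, **kwargs)
            return self._run_local(command, **kwargs)

        def _run_local(self, command, **kwargs):
            ...  # run the command directly on the local machine

        def _run_koji(self, command, **kwargs):
            ...  # submit a runroot task to Koji and wait for it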
Signed-off-by: Jan Kaluza <jkaluza@redhat.com>
Run arbitrary commands before and after the compose.
The example config is updated to generate a "latest" symlink with a
post-compose script. The pre-compose script always runs; the
post-compose script runs only if the compose is not doomed.
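A hypothetical illustration of the two hooks (option names and variables
are made up; consult the documentation for the real ones):

    # Runs before the compose starts, regardless of outcome.
    pre_compose_script = "echo 'compose starting' >> /tmp/compose.log"
    # Runs only when the compose finishes successfully, e.g. to maintain
    # a "latest" symlink.
    post_compose_script = "ln -sfn $COMPOSE_PATH /mnt/composes/latest"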
JIRA: COMPOSE-3288
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
If the configuration sets a keytab path and principal, run kinit with a
custom cache file, and delete the file at the end of the run.
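Roughly what the kinit handling can look like (the helper name and cache
location are illustrative):

    import os
    import subprocess
    import tempfile

    def log_in(keytab, principal):
        # Use a private credential cache so the user's default one is not
        # touched; the caller removes the file at the end of the run.
        fd, ccache = tempfile.mkstemp(prefix="krb5cc_pungi_")
        os.close(fd)
        subprocess.check_call(
            ["kinit", "-k", "-t", keytab, "-c", ccache, principal]
        )
        os.environ["KRB5CCNAME"] = ccache
        return ccache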
JIRA: COMPOSE-3288
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
Behavior before this change: debuginfo was added based on source RPMs.
Pungi would get all debuginfo for all architectures that include at
least one binary package. This had the consequence that if foo-x.x86_64
and foo-x.i686 from the same build were included, foo-y-debuginfo.i686
would be included despite foo-y only being present for x86_64.
The patch changes this to work on binary package level: for each
included package `x`, check if there is `x-debuginfo` or
`x-debugsource`, and include them. Packages added in this way have to go
through one iteration of the solver, to account for cases such as
`x-libs-debuginfo` depending on `x-debuginfo` (with only `x` starting
this chain).
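A sketch of the per-binary-package lookup (the names are illustrative):

    def find_debug_packages(pkg, all_packages):
        # For an included binary package, pick up its -debuginfo and
        # -debugsource counterparts of the same version and architecture.
        wanted = {pkg.name + "-debuginfo", pkg.name + "-debugsource"}
        return [
            p for p in all_packages
            if p.name in wanted
            and p.arch == pkg.arch
            and p.version == pkg.version
        ]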
To make it slightly faster, the invocation of fus is changed to only
process newly added packages in each iteration. It should have no effect
on the results, and subsequent iterations are much faster if they don't
process the same stuff again and again.
JIRA: COMPOSE-3247
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
The file in the old compose does not change; there is no need to load it
again for each tag.
One consequence of this change is that the same cache will be used by
all package sets. They add stuff to the cache as they use it. However
that should not be a problem for the same reason it's okay to use the
cache in the first place. If two packages have the same path, they are
actually the same file and it's okay to reuse the data.
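Conceptually the cache is just a mapping keyed by file path and shared by
all package sets; a trivial sketch:

    class FileCache(object):
        def __init__(self):
            self._cache = {}

        def get_or_load(self, path, loader):
            # Two entries with the same path are the same file, so reusing
            # the parsed data across package sets is safe.
            if path not in self._cache:
                self._cache[path] = loader(path)
            return self._cache[path]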
JIRA: COMPOSE-3374
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
Also remove the useless listTaggedRPMs call, which has been used *only*
to catch a very rare error that happened from time to time when we used
PDC in Pungi a few years ago. This rare error is no longer relevant.
Signed-off-by: Jan Kaluza <jkaluza@redhat.com>
If it fails, we can't really tell if it's a transient error or just a
git client that is too old. Fall back to a full clone immediately and
retry there.
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
SOURCE_DATE_EPOCH is preferred over the time.time() call in metadata
generation, which makes mocking time.time() ineffective. Set
SOURCE_DATE_EPOCH instead of mocking time.time(). This additionally
tests whether SOURCE_DATE_EPOCH is really used.
Adjust the expected output, as SOURCE_DATE_EPOCH is defined without a
fractional component.
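In a test this is roughly the difference; instead of patching time.time,
the environment variable is set (the test name is illustrative):

    import mock

    # Inside a test case class:
    @mock.patch.dict("os.environ", {"SOURCE_DATE_EPOCH": "1531881600"})
    def test_metadata_timestamp(self):
        # The metadata code reads SOURCE_DATE_EPOCH, so the expected value
        # is an integer timestamp with no fractional part.
        ...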
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
If the config is loaded from JSON, we will get a list instead of a
tuple. The validation rule does not really care whether it's a list or a
tuple; it only enforces that there are two strings. The definition is
renamed to be a bit more descriptive.
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
If the config uses SCM dicts that include branch or tag names, they will
be resolved to specific commit ids.
It goes through the caching resolver. The main motivation for that is to
correctly support the --offline flag. It's highly unlikely there will be
two scm_dicts in the config with the same repo.
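For example, an SCM dict like the following (the repo URL and commit id
are made up) would roughly have its branch value replaced by the commit
it currently points to:

    # As written in the config:
    {"scm": "git", "repo": "https://example.com/templates.git",
     "branch": "master"}
    # After resolution, the rest of the compose sees a pinned commit:
    {"scm": "git", "repo": "https://example.com/templates.git",
     "branch": "a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0"}  # made-up id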
JIRA: COMPOSE-3279
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
Split out a smaller function that receives a git repo URL and ref and
returns the commit id. The caching resolver class is modified to use
this second function if a branch is given to it.
The new function checks whether the received ref already looks like a
commit ID, and if so it does not attempt to resolve it further.
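A sketch of such a helper, using git ls-remote for the actual lookup (the
function name is illustrative):

    import re
    import subprocess

    def resolve_git_ref(repo_url, ref):
        # Anything that already looks like a full commit hash is returned
        # unchanged.
        if re.match(r"^[0-9a-f]{40}$", ref):
            return ref
        out = subprocess.check_output(["git", "ls-remote", repo_url, ref])
        return out.split()[0].decode("utf-8")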
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
Before this patch, there were two code paths: either getting only the
wanted content by calling git-archive, or cloning the repository and
copying the files.
Both these approaches have the downside of not allowing retrieving
content from a specific git commit.
The workaround is to create a new empty repo (in the location to which
we cloned previously), fetch the specific commit into it, and then check
it out.
This supports any commit and works identically for any protocol. The
downside is that all files in that commit will be downloaded. It should
be no worse than the git-clone path, but can result in bigger transfers
than git-archive.
Unfortunately this is only supported with git 2.5+. On older versions
the fetch will fail with no error message (tested with 1.8.3). This
failure can be used to fall back to a full clone. The fallback is
clearly suboptimal in terms of data transfer, but it should work
reliably.
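The sequence of git commands is roughly this (error handling and the
fallback to a full clone are omitted; --depth=1 is an assumption to keep
the transfer small):

    import subprocess

    def checkout_commit(repo_url, commit, destdir):
        # destdir must already exist and be empty.
        def git(*args):
            subprocess.check_call(["git"] + list(args), cwd=destdir)

        git("init")
        git("fetch", "--depth=1", repo_url, commit)
        git("checkout", "FETCH_HEAD")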
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>
We run a thread for each task, not one per variant. If there are
multiple containers in a single variant, the logs contain confusing
entries about starting the phase multiple times.
2019-02-28 01:58:54 ... [BEGIN] OSBS phase for variant Foo
2019-02-28 01:58:54 ... [BEGIN] OSBS phase for variant Foo
Signed-off-by: Lubomír Sedlář <lsedlar@redhat.com>