It is not actually needed. projects_info deduplicates the package list,
placing other builds into the builds list instead of making a new
package entry. So it returns a sorted and deduped list of packages, as
expected.
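A rough sketch of the idea (the field names and structure here are
illustrative, not the exact projects_info data):

    # Collapse a flat list of builds into one entry per package name,
    # collecting extra builds under that entry instead of creating a
    # duplicate package entry.
    def dedupe_builds(builds):
        packages = {}
        for build in builds:
            name = build["name"]
            entry = packages.setdefault(name, {"name": name, "builds": []})
            entry["builds"].append(build)
        # One entry per package, sorted by name.
        return sorted(packages.values(), key=lambda p: p["name"])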
(cherry picked from commit 6443f34337)
It has been failing to load on Fedora for quite some time and no one
is complaining about it, so it is easier to disable it than to fix it.
(cherry picked from commit d64a320ba1)
- on some arches (including Fedora x86_64) systemd-nspawn may not be
available
- delete composes from other tests in rlPhaseStartCleanup because
we're seeing the tar compose hang in Jenkins and, since that test
script is executed last, the slave may be running out of disk space.
Be a good citizen and clean up after the previous tests.
(cherry picked from commit ea78cce882)
Because we've migrated to Upshift we must use a different instance
type, specify the desired network to connect to, and update how we
get the IP address of the launched VM.
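As a hedged illustration (not the exact test code), assuming a
novaclient/openstacksdk-style server object whose addresses attribute
maps network names to lists of address dicts, and with
"my-upshift-net" standing in for the real network name:

    # Pick the first address on the requested network; server.addresses
    # is assumed to look like {"my-upshift-net": [{"addr": "10.0.0.5"}]}.
    def get_vm_ip(server, network="my-upshift-net"):
        addresses = server.addresses.get(network, [])
        return addresses[0]["addr"] if addresses else None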
(cherry picked from commit c95d7084a6)
The reason for the 3G minimum was a bug in how anaconda calculated
the minimum disk size when using kickstart. The fix for this has been
in Anaconda since 29.19-1, so we can now remove our limit and create
somewhat smaller disk images.
(cherry picked from commit 7e78dc368f)
We need to be root to read the certificates that give access to the
package repos. Right now the alternative seems to be changing
permissions on the certs themselves, which seems worse. We're running
anaconda as root anyway.
(cherry picked from commit 022e9eba3e)
If a repository has `sslcacert`, `sslclientcert`, or `sslclientkey`
set, pass them to anaconda through the kickstart file. This is mostly
the case when using RHEL repositories that are accessed through a
subscription.
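Roughly speaking (an illustrative sketch, not the actual
lorax-composer code), the values end up as extra options on the
kickstart repo line:

    # Turn the ssl settings of a repo source (a plain dict here) into
    # kickstart repo options such as --sslcacert="...".
    def repo_ssl_options(source):
        opts = []
        for key in ("sslcacert", "sslclientcert", "sslclientkey"):
            value = source.get(key)
            if value:
                opts.append('--%s="%s"' % (key, value))
        return " ".join(opts)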
(cherry picked from commit e194b5926c)
The OS_PROJECT_NAME (or OS_TENANT_NAME) environment variable needs to
be defined. Use OS_PROJECT_NAME, since the documentation recommends
it over the older OS_TENANT_NAME.
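For example, a check along these lines (illustrative, not the exact
test script) makes the preference explicit:

    import os

    # Prefer OS_PROJECT_NAME but fall back to the older OS_TENANT_NAME;
    # fail early if neither is defined.
    project = os.environ.get("OS_PROJECT_NAME") or os.environ.get("OS_TENANT_NAME")
    if not project:
        raise RuntimeError("OS_PROJECT_NAME (or OS_TENANT_NAME) must be set")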
(cherry picked from commit cc6fdb2fac)
this will serve as a reminder that Jenkins jobs can sometimes be
missing or failing, and it also lists the comments team members can
use to trigger Jenkins jobs, especially for PRs from non-members.
(cherry picked from commit de6419f0d1)
Jenkins uses templates to define all jobs, which means they need to
have the same make targets even if the targets don't do anything.
(cherry picked from commit 57b4f2e8f3)
otherwise composer-cli is unable to glob() the kickstart files and
we're left without supported compose types. Seen during AWS testing,
for example.
Helps with running some of the tests via sudo because this is what
Jenkins requires.
(cherry picked from commit b88466fd74)
If we try to execute test_cli.sh inside a git checkout we get the
following exception:
Traceback (most recent call last):
  File "./src/sbin/lorax-composer", line 251, in <module>
    repo = open_or_create_repo(server.config["REPO_DIR"])
  File "/home/jenkins/lorax/src/pylorax/api/recipes.py", line 306, in open_or_create_repo
gi.repository.GLib.Error: ggit-error: failed to stat '/home/jenkins/lorax/tests/pylorax/blueprints': Permission denied (-1)
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/usr/lib64/python3.7/multiprocessing/popen_fork.py", line 54, in _send_signal
    os.kill(self.pid, sig)
From what I can tell open_or_create_repo() is trying to initialize
a git repository inside the blueprints directory, which fails when
we have an active git checkout.
This doesn't happen when we run the tests in Travis CI because
rsync excludes .git/ inside the Docker container.
(cherry picked from commit c9d706a382)
these targets help hook things up in Jenkins and enable us to
perform build & deploy tests for cloud images.
NOTE: use sudo -E to preserve the environment
(cherry picked from commit 366ae55abe)
this will be used to invoke scripts that build/push cloud images
without having to duplicate the setup/teardown/report parts!
(cherry picked from commit af2ae790ce)
When I re-arranged the test-in-docker I didn't realize how .travis.yml
was extracting the results. This should fix it.
When running with test-in-docker we mount the source read-only on
/linux-ro/ inside the container and copy it over to /lorax/ for running
the tests.
The local directory ./.test-results/ is mounted on /test-results/ in
the container and the .coverage file is copied there so that it is
available on the host.
(cherry picked from commit b61a91954a)
Some of these can only run as root on a real system with access to loop
devices. They are skipped when running in a container.
(cherry picked from commit 063a1770e1)
Add a /.in-container file to the container root so that tests requiring root
and loop device support will be skipped when running in a container.
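The check itself can be as simple as this sketch (the class and test
names are hypothetical, not the real test suite):

    import os
    import unittest

    # Tests that need real root and loop device support look for the
    # /.in-container marker and skip themselves when it is present.
    IN_CONTAINER = os.path.exists("/.in-container")

    class LoopDeviceTest(unittest.TestCase):
        @unittest.skipIf(IN_CONTAINER, "loop devices are not available in a container")
        def test_needs_loop_device(self):
            self.assertTrue(True)  # placeholder for the real test body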
(cherry picked from commit bab4b20d0d)
Apparently nobody has used these since the switch to py3: xrange is
now range, and the file needs to be read in binary mode when
generating the sha256.
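The fix boils down to something like this simplified sketch of the
py3-safe version:

    import hashlib

    # Read the file in binary mode so hashlib.sha256() is fed bytes
    # instead of str (and any old xrange() loops become range()).
    def file_sha256(path, blocksize=1024 * 1024):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            chunk = f.read(blocksize)
            while chunk:
                digest.update(chunk)
                chunk = f.read(blocksize)
        return digest.hexdigest()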
(cherry picked from commit 8e749efbbf)