When the repository has multiple arches, e.g. i686 and x86_64, it should
add a new entry to the project's builds list, not create a new project
in the list.
This handles that by adding a modified insort_left function and
examining the packages returned from dnf to make sure they aren't
already listed in the results. It also handles adding them in sorted
order so that no further sorting needs to be done on the results.
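As an illustration only (the real implementation may differ in detail), the keyed insort plus duplicate check looks roughly like this:

    def insort_left(a, x, key=None, lo=0, hi=None):
        """Insert x into the sorted list a, using key(item) for comparisons."""
        if key is None:
            key = lambda i: i
        if hi is None:
            hi = len(a)
        x_key = key(x)
        while lo < hi:
            mid = (lo + hi) // 2
            if key(a[mid]) < x_key:
                lo = mid + 1
            else:
                hi = mid
        a.insert(lo, x)

    # Keep one project entry per name, appending extra arch builds to it
    projects = []
    for name, arch in [("glibc", "x86_64"), ("bash", "x86_64"), ("glibc", "i686")]:
        match = next((p for p in projects if p["name"] == name), None)
        if match:
            match["builds"].append(arch)
        else:
            insort_left(projects, {"name": name, "builds": [arch]},
                        key=lambda p: p["name"])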
Resolves: rhbz#1656642
Depending on which version of dnf is installed it may return a tuple, or it
may return a list. Work around this by comparing only the 1st element.
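For context, a Python list never compares equal to a tuple with the same contents, which is why comparing the whole value breaks; comparing only the first element works for either type (the values below are made up):

    value = ["x86_64", "i686"]       # newer dnf; older dnf returns ("x86_64", "i686")
    expected = ("x86_64", "i686")
    print(value == expected)         # False: a list and a tuple never compare equal
    print(value[0] == expected[0])   # True: comparing the 1st element works for both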
Related: rhbz#1655876
Stop using the fake dnf object and use the real thing. This will help catch
problems with dnf returning unexpected types like VectorString.
(cherry picked from commit 27aff75aa3)
Related: rhbz#1655876
Anaconda is currently not able to handle CDN repo URLs in the kickstart.
Add a new function that checks for extra repos and returns True.
Related: rhbz#1655623
If the system ran out of space, or was rebooted unexpectedly, the state
of the queue symlinks, or the results STATUS files may be inconsistent.
This checks them and:
* Removes broken symlinks from queue/new and queue/run
* Removes symlinks from queue/run and sets those builds to FAILED
* Sets builds w/o a STATUS to FAILED
* Sets builds with STATUS of RUNNING to FAILED
* Creates missing queue/new symlinks to results with STATUS of WAITING
So, any builds that were running during the reboot will be FAILED, and
any that were waiting to be started will be started upon rebooting.
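A rough sketch of that startup check (the directory layout and STATUS handling here are simplified; the real code lives in the lorax-composer queue handling):

    import os

    def check_queues(share_dir="/var/lib/lorax/composer"):
        """Make the queue symlinks and results STATUS files consistent again."""
        queue_new = os.path.join(share_dir, "queue/new")
        queue_run = os.path.join(share_dir, "queue/run")
        results = os.path.join(share_dir, "results")

        # Remove broken symlinks from queue/new and queue/run
        for q in (queue_new, queue_run):
            for name in os.listdir(q):
                link = os.path.join(q, name)
                if not os.path.exists(link):
                    os.unlink(link)

        # Anything still linked in queue/run was interrupted: FAIL it and drop the link
        for name in os.listdir(queue_run):
            link = os.path.join(queue_run, name)
            with open(os.path.join(os.path.realpath(link), "STATUS"), "w") as f:
                f.write("FAILED\n")
            os.unlink(link)

        # Fix results with missing/stale STATUS and requeue the ones still WAITING
        for build_id in os.listdir(results):
            status_path = os.path.join(results, build_id, "STATUS")
            status = None
            if os.path.exists(status_path):
                with open(status_path) as f:
                    status = f.read().strip()
            if status is None or status == "RUNNING":
                with open(status_path, "w") as f:
                    f.write("FAILED\n")
            elif status == "WAITING":
                link = os.path.join(queue_new, build_id)
                if not os.path.islink(link):
                    os.symlink(os.path.join(results, build_id), link)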
Resolves: rhbz#1647985
(cherry picked from commit 4dd9004d13)
Anaconda requires the root password to be set or locked, so if there
isn't anything setting it we write out 'rootpw --lock'.
Also adds tests for this.
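For illustration only (the function name and kickstart handling are assumptions, not the actual lorax-composer code), the check amounts to:

    def ensure_root_locked(ks_lines):
        """Append 'rootpw --lock' unless the kickstart already sets or locks root."""
        if not any(line.strip().startswith("rootpw") for line in ks_lines):
            ks_lines.append("rootpw --lock")
        return ks_lines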
Resolves: rhbz#1626122
Also kill the lorax-composer process and remove /run/weldr/api.socket
so that when this is run with podman you don't get an error about
attempting to tar up the socket.
Related: rhbz#1613058
- need to specify --sharedir so lorax-composer can find its
kickstart files
- each test script writes results into a separate directory to
avoid a passing test overwriting the results from a failing one.
To avoid reporting failures left over from previously failing tests
(e.g. during development), remove the temporary directories holding
test results before execution.
These are built on top of beakerlib and we use its internal
protocol to figure out the result without relying on the full
test runner that is typically used inside a RHEL environment.
Includes a disabled test snippet for Issue #460
This is similar to the AMI type, but also adds open-vm-tools and does not do
anything special to the partitioning.
(cherry picked from commit 1056bfc25b)
Resolves: rhbz#1628646
This does pretty much the same things as the AMI compose type, but also
replaces NetworkManager with the Azure linux agent.
(cherry picked from commit e0c236ff36)
Resolves: rhbz#1628648
This differs from lmc's --make-ami in that it creates a full disk image instead of
an fsimage. Create a raw disk image with / and /boot partitions, and enable
sshd, chronyd, and cockpit by default.
(cherry picked from commit 18188bf6cf)
Resolves: rhbz#1628647
This tests to make sure that the metadata timer is working (by setting
it to 10s and adding a new package to the repo), and that
DNFLock.lock_check immediately picks up a new package.
This depends on rpmfluff, which is available from the Fedora or EPEL repos.
Related: rhbz#1631561
Passing ?limit=0 to the blueprints/list, blueprints/changes,
projects/list, and modules/list routes should always return the full set
of results, not 0.
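As a hedged sketch of the idea, using a Flask-style route with stand-in data (the real routes read from the git store), limit=0 is rewritten to the total before slicing:

    from flask import Flask, request, jsonify

    app = Flask(__name__)
    ALL_BLUEPRINTS = ["development", "glusterfs", "http-server"]   # stand-in data

    @app.route("/api/v0/blueprints/list")
    def v0_blueprints_list():
        offset = int(request.args.get("offset", 0))
        limit = int(request.args.get("limit", 20))
        if limit == 0:
            # ?limit=0 means "return everything", not an empty page
            limit = len(ALL_BLUEPRINTS)
        return jsonify(blueprints=ALL_BLUEPRINTS[offset:offset + limit],
                       offset=offset, limit=limit, total=len(ALL_BLUEPRINTS))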
Also move the composer-cli test_diff to the end so that it will work
consistently. Do this by naming it test_z_diff.
(cherry picked from commit 972b5c4142)
Note the exception string checking around compose_type. I didn't really
want to introduce a new exception type just for this, but also didn't
want to duplicate strings. I'd be open to other suggestions for how to
do this.
(cherry picked from commit b3bb438254)
This adds some fairly redundant code to the beginning of all the
blueprint routes to attempt reading a commit from git for the
blueprint's recipe. If it succeeds, the blueprint exists and the route
can continue. Otherwise, return an error. Hopefully this doesn't slow
things down too much.
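Roughly, each blueprint route now begins with a check like the one below; read_commit here stands in for whatever helper actually reads the blueprint's recipe from git:

    def blueprint_exists(read_commit, repo, branch, blueprint_name):
        """Return True if the blueprint's recipe can be read from git."""
        try:
            read_commit(repo, branch, blueprint_name)
            return True
        except Exception:
            return False

    # Illustrative use at the top of a route:
    #     if not blueprint_exists(read_commit, repo, branch, blueprint_name):
    #         return an "unknown blueprint" error instead of continuing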
(cherry picked from commit a925cc7ddb)
Note that this also changes the return type of uuid_info to return None
when an unknown ID is given. The other uuid_* functions are fine
because they are checked ahead of time.
(cherry picked from commit 6497b4fb65)
Make sure UTF8 characters are not accepted, and return an error if they
are passed in.
Also includes tests to make sure the correct error is returned.
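A minimal sketch of the kind of check (the helper name is made up), rejecting any argument that is not plain ASCII before the route does anything else:

    def bad_characters(params):
        """Return error strings for any parameter containing non-ASCII characters."""
        errors = []
        for name, value in params:
            try:
                value.encode("ascii")
            except UnicodeEncodeError:
                errors.append("%s must contain only ASCII characters" % name)
        return errors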
(cherry picked from commit 86d79cd8a6)
Currently the code is not UTF8 safe, so we need to return a clear error
when invalid characters are passed in.
This also adds tests for the routes to confirm that an error is
correctly returned.
(cherry picked from commit 74f5def3d4)
These depend on there being a freshly installed lorax-composer API
server running; if there is no /run/weldr/api.socket they will be
skipped.
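For example, a test module can guard itself along these lines (unittest-style, as an illustration):

    import os
    import unittest

    @unittest.skipUnless(os.path.exists("/run/weldr/api.socket"),
                         "lorax-composer API socket is not available")
    class ComposerAPITestCase(unittest.TestCase):
        def test_status(self):
            # talk to the API over /run/weldr/api.socket here
            self.assertTrue(os.path.exists("/run/weldr/api.socket"))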
(cherry picked from commit 7700ae3135)
This adds an empty __init__.py to tests so that a lib.py library of helper
functions can be imported from the tests.
Add a captured_output helper for the composer-cli tests to capture
stdout/stderr output from the functions.
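captured_output is essentially the usual redirect-and-restore contextmanager; a sketch (the details of the helper in lib.py may differ):

    import sys
    from contextlib import contextmanager
    from io import StringIO

    @contextmanager
    def captured_output():
        """Temporarily swap stdout/stderr so a test can inspect what was printed."""
        new_out, new_err = StringIO(), StringIO()
        old_out, old_err = sys.stdout, sys.stderr
        try:
            sys.stdout, sys.stderr = new_out, new_err
            yield new_out, new_err
        finally:
            sys.stdout, sys.stderr = old_out, old_err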
(cherry picked from commit eeae331ba0)
Some results have errors and no status, others have status and errors.
Update the function to return the final rc to exit with, and a bool
indicating whether or not to continue processing the other fields.
Add a bunch of tests for the new function to make sure I have the logic
correct.
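A simplified sketch of what returning (rc, continue) can look like; the exact rules for combining status and errors may differ from the real function:

    def handle_api_result(result):
        """Return (rc, continue): the exit code and whether to keep processing fields."""
        errors = result.get("errors", [])
        for err in errors:
            print("ERROR: %s" % err)

        if "status" in result and not result["status"]:
            return 1, False        # explicit failure, nothing else worth showing
        if errors:
            return 1, True         # errors, but the other fields may still be useful
        return 0, True             # success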
(cherry picked from commit 35fa067219)
This adds a new argument to projects_depsolve and
projects_depsolve_with_size that contains the group list, unfortunately.
I would have preferred adding a function that just returns a list of all
the contents of a group and then add that to what was being passed into
projects_depsolve. However, there does not appear to be any good way to
do that in yum aside from a lot of grubbing around in the comps object,
which I am unwilling to do.
(cherry picked from commit 0259f3564d)
Depsolve the packages included in the templates and report any errors
using the /api/status 'msgs' field. This should help narrow down
problems with package sources not being set up correctly.
(cherry picked from commit d92f2f5b04)
It is perfectly valid to have more than one build of a package, e.g. one
in the release repo and one in the updates repo.
(cherry picked from commit 86c4ef5f45)
The DNF Repo.dump() output cannot be used as a .repo file for dnf because
it writes baseurl and gpgkey as lists instead of strings. Add a new
function to write this in the correct format, limited to the fields
we use.
Add a test for the new function.
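A hedged sketch of writing such a snippet by hand, flattening list values into strings and limiting the output to a few fields (the real function's field list and separators may differ):

    def repo_to_ini(repo_id, repo):
        """Return a .repo formatted string with baseurl/gpgkey written as strings."""
        lines = ["[%s]" % repo_id, "name = %s" % repo.get("name", repo_id)]
        for field in ("baseurl", "gpgkey"):
            value = repo.get(field)
            if value:
                # dnf's Repo.dump() emits these as python lists; a .repo file
                # needs plain strings (multiple URLs are whitespace separated)
                lines.append("%s = %s" % (field, " ".join(value) if isinstance(value, list) else value))
        lines.append("gpgcheck = %d" % int(repo.get("gpgcheck", True)))
        return "\n".join(lines) + "\n"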
Fix /projects/source/info to return an error 400 if a nonexistent TOML
source is requested. If JSON is used the error is part of the standard
response.
Update test_server.py to check for the correct error code.
(cherry picked from commit afa89ea657)
When adding a source failed, it wasn't being removed from the dnf object.
This fixes that, and returns an error when setting up the source fails.
Also adds a test for it.
This also includes detecting rawhide vs. non-rawhide releases and
adjusting the tests accordingly (some of the source names change).
(cherry picked from commit dd8e4d9e99)