Compare commits

..

9 Commits

Author SHA1 Message Date

c6b0548ebd Merge pull request 'Update Vendors patch against upstream version 0.23.0-2 (eabab8c496a7d6a76ff1aa0d7e34b0345530e30a)' (#27) from ykohut/leapp-repository:a8-elevate-0230 into a8-elevate-0230
Reviewed-on: #27
2025-12-01 14:32:21 +00:00

Yuriy Kohut
33566e267f Update Vendors patch against upstream eabab8c496a7d6a76ff1aa0d7e34b0345530e30a (0.23.0-2)
The package version 0.23.0-2.elevate.1
2025-12-01 15:07:43 +02:00

f53b53a68e Import from CS git
(cherry picked from commit 316902b8f7)
2025-12-01 15:05:53 +02:00

Yuriy Kohut
ca41224a54 Update Vendors patch against upstream eabab8c496a7d6a76ff1aa0d7e34b0345530e30a (0.23.0-1)
The package version 0.23.0-1.elevate.5
2025-11-28 14:34:46 +02:00

Yuriy Kohut
1d1276410e Update Vendors patch against upstream 249cd3b203d05937a4d4a02b484444291f4aed85 (0.23.0-1)
The package version 0.23.0-1.elevate.4
2025-11-14 16:20:43 +02:00

Yuriy Kohut
3c063417e2 Update Vendors patch against upstream b7f862249e2227d2c5f3f6e33d74f8d2a2367a11 (0.23.0-1)
The package version 0.23.0-1.elevate.3
2025-11-13 12:06:23 +02:00

Yuriy Kohut
2f853fc90e Update Vendors patch against upstream 47fce173e75408d9a7a26225d389161caf72e244 (0.23.0-1)
The package version 0.23.0-1.elevate.2
2025-09-30 10:33:44 +03:00

15fd026e53 Merge pull request 'Add Vendors patch created against upstream 0.23.0-1' (#26) from ykohut/leapp-repository:a8-elevate-0230 into a8-elevate-0230
Reviewed-on: #26
2025-09-12 13:44:41 +00:00

Yuriy Kohut
476ebfeb49 Add Vendors patch created against upstream 0.23.0-1 (dcf53c28ea9c3fdd03277abcdeb1d124660f7f8e)
The package version 0.23.0-1.elevate.1
2025-09-12 15:05:17 +03:00
44 changed files with 6132 additions and 7263 deletions


@@ -1,26 +0,0 @@
From 1340bd735f731087aad53c8159a3616298fe0f57 Mon Sep 17 00:00:00 2001
From: "renovate[bot]" <29139614+renovate[bot]@users.noreply.github.com>
Date: Mon, 10 Nov 2025 18:43:05 +0000
Subject: [PATCH 070/111] chore(deps): update
peter-evans/create-or-update-comment action to v5
---
.github/workflows/pr-welcome-msg.yml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/.github/workflows/pr-welcome-msg.yml b/.github/workflows/pr-welcome-msg.yml
index f056fb79..4c12ab2a 100644
--- a/.github/workflows/pr-welcome-msg.yml
+++ b/.github/workflows/pr-welcome-msg.yml
@@ -14,7 +14,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Create comment
- uses: peter-evans/create-or-update-comment@v4
+ uses: peter-evans/create-or-update-comment@v5
with:
issue-number: ${{ github.event.pull_request.number }}
body: |
--
2.52.0


@@ -1,28 +0,0 @@
From 4feb11f7d4d0a265611e5d2f80b91c05116885b7 Mon Sep 17 00:00:00 2001
From: Peter Mocary <pmocary@redhat.com>
Date: Fri, 21 Nov 2025 15:16:31 +0100
Subject: [PATCH 071/111] fix cgroups-v1 inhibitor remediation
The inhibitor that checks whether cgroups-v1 are enabled generated a
remediation command in an incorrect form. This patch fixes the separator
used for the kernel arguments that need to be removed.
---
.../actors/inhibitcgroupsv1/libraries/inhibitcgroupsv1.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/repos/system_upgrade/el9toel10/actors/inhibitcgroupsv1/libraries/inhibitcgroupsv1.py b/repos/system_upgrade/el9toel10/actors/inhibitcgroupsv1/libraries/inhibitcgroupsv1.py
index 6c891f22..0a38ace3 100644
--- a/repos/system_upgrade/el9toel10/actors/inhibitcgroupsv1/libraries/inhibitcgroupsv1.py
+++ b/repos/system_upgrade/el9toel10/actors/inhibitcgroupsv1/libraries/inhibitcgroupsv1.py
@@ -48,7 +48,7 @@ def process():
[
"grubby",
"--update-kernel=ALL",
- '--remove-args="{}"'.format(",".join(remediation_cmd_args)),
+ '--remove-args="{}"'.format(" ".join(remediation_cmd_args)),
],
],
),
--
2.52.0
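
The fix above only swaps the separator passed to grubby's --remove-args. A minimal sketch of the difference (the kernel argument names are illustrative placeholders, not taken verbatim from the actor):

```python
# Sketch: grubby expects the value of --remove-args to be a space-separated
# list of kernel arguments, so joining them with "," produced a single bogus
# argument. The argument names below are hypothetical examples.
remediation_cmd_args = [
    "systemd.unified_cgroup_hierarchy=0",
    "systemd.legacy_systemd_cgroup_controller",
]

broken = '--remove-args="{}"'.format(",".join(remediation_cmd_args))
fixed = '--remove-args="{}"'.format(" ".join(remediation_cmd_args))

print(["grubby", "--update-kernel=ALL", broken])
print(["grubby", "--update-kernel=ALL", fixed])
```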


@@ -1,41 +0,0 @@
From 3f5bb62e03a33e43893234d5912570b3dad15f82 Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Fri, 19 Sep 2025 16:37:14 +0200
Subject: [PATCH 072/111] Update rhel gpg-signatures map
Add auxiliary key 2, auxiliary key 3 and the old signing key to
"keys". These were handled in obsoleted-keys, but not used here.
Add beta key to keys obsoleted in rhel 8.
---
.../common/files/distro/rhel/gpg-signatures.json | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/repos/system_upgrade/common/files/distro/rhel/gpg-signatures.json b/repos/system_upgrade/common/files/distro/rhel/gpg-signatures.json
index 3cc67f82..5b27e197 100644
--- a/repos/system_upgrade/common/files/distro/rhel/gpg-signatures.json
+++ b/repos/system_upgrade/common/files/distro/rhel/gpg-signatures.json
@@ -4,14 +4,18 @@
"5326810137017186",
"938a80caf21541eb",
"fd372689897da07a",
- "45689c882fa658e0"
+ "45689c882fa658e0",
+ "f76f66c3d4082792",
+ "5054e4a45a6340b3",
+ "219180cddb42a60e"
],
"obsoleted-keys": {
"7": [],
"8": [
"gpg-pubkey-2fa658e0-45700c69",
"gpg-pubkey-37017186-45761324",
- "gpg-pubkey-db42a60e-37ea5438"
+ "gpg-pubkey-db42a60e-37ea5438",
+ "gpg-pubkey-897da07a-3c979a7f"
],
"9": ["gpg-pubkey-d4082792-5b32db75"],
"10": ["gpg-pubkey-fd431d51-4ae0493b"]
--
2.52.0


@@ -1,28 +0,0 @@
From 7e345c872b073022c459f40ae404a8a38a90038b Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Fri, 19 Sep 2025 16:41:22 +0200
Subject: [PATCH 073/111] Update centos gpg-signatures map
Add SIG Extras key to "keys".
---
.../common/files/distro/centos/gpg-signatures.json | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/repos/system_upgrade/common/files/distro/centos/gpg-signatures.json b/repos/system_upgrade/common/files/distro/centos/gpg-signatures.json
index fe85e03c..1be56176 100644
--- a/repos/system_upgrade/common/files/distro/centos/gpg-signatures.json
+++ b/repos/system_upgrade/common/files/distro/centos/gpg-signatures.json
@@ -1,8 +1,9 @@
{
"keys": [
"24c6a8a7f4a80eb5",
+ "4eb84e71f2ee9d55",
"05b555b38483c65d",
- "4eb84e71f2ee9d55"
+ "1ff6a2171d997668"
],
"obsoleted-keys": {
"10": ["gpg-pubkey-8483c65d-5ccc5b19"]
--
2.52.0


@@ -1,32 +0,0 @@
From 4dff2a8d33fafc65e6c76687cc62006f82f7360a Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Mon, 24 Nov 2025 00:23:37 +0100
Subject: [PATCH 074/111] gpg/almalinux: Remove Eurolinux Tuxcare GPG key
Only RPM GPG keys provided by the distribution should be included in the
gpg-signatures.json files as they are used to detect which packages are
1st party.
The Eurolinux Tuxcare GPG key is 3rd party, therefore it should not be
included.
---
.../common/files/distro/almalinux/gpg-signatures.json | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/repos/system_upgrade/common/files/distro/almalinux/gpg-signatures.json b/repos/system_upgrade/common/files/distro/almalinux/gpg-signatures.json
index 24bc93ba..18b6c516 100644
--- a/repos/system_upgrade/common/files/distro/almalinux/gpg-signatures.json
+++ b/repos/system_upgrade/common/files/distro/almalinux/gpg-signatures.json
@@ -3,8 +3,7 @@
"51d6647ec21ad6ea",
"d36cb86cb86b3716",
"2ae81e8aced7258b",
- "429785e181b961a5",
- "d07bf2a08d50eb66"
+ "429785e181b961a5"
],
"obsoleted-keys": {
"7": [],
--
2.52.0


@@ -1,532 +0,0 @@
From 37a071df9242e10821b8d6ab7a0e727ffb7e871d Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Fri, 19 Sep 2025 17:18:48 +0200
Subject: [PATCH 075/111] removeobsoletegpgkeys: Adjust for converting
When doing upgrade + conversion, removing obsolete GPG keys of the current
(or target for that matter) distro doesn't make sense, because we are
moving to a different distro.
Instead, all distro provided keys from the source distro need to be
removed, as the target distro uses its own keys. Those are imported
elsewhere later during the upgrade.
A new list is added to the gpg-signatures.json maps, which contains the
names of the fake RPMs "generated" upon importing a GPG key into the RPM
DB.
These are in the order the key IDs ("keys" in the map) are in, however
the mapping is not always 1:1 (e.g. the CentOS SIG Extras keys).
The key ID could be mapped to the RPM names, however since the RPM NVR
format is:
gpg-pubkey-<last 8 chars from key ID>-<creation time of the signature packet>
there could be a collision between the key IDs.
Some key RPMs are missing in the Alma Linux map as I couldn't find out
which keys some of the fingerprints correspond to.
Jira: RHEL-110190
I added annotations to the keys at:
https://github.com/oamg/leapp-repository/wiki/gpg%E2%80%90signatures.json-key-annotations.
---
.../actors/removeobsoletegpgkeys/actor.py | 11 +-
.../libraries/removeobsoleterpmgpgkeys.py | 50 +++-
.../tests/test_removeobsoleterpmgpgkeys.py | 244 ++++++++++++------
.../distro/almalinux/gpg-signatures.json | 12 +-
.../files/distro/centos/gpg-signatures.json | 12 +-
.../files/distro/rhel/gpg-signatures.json | 20 +-
6 files changed, 229 insertions(+), 120 deletions(-)
diff --git a/repos/system_upgrade/common/actors/removeobsoletegpgkeys/actor.py b/repos/system_upgrade/common/actors/removeobsoletegpgkeys/actor.py
index 5674ee3f..58b15a84 100644
--- a/repos/system_upgrade/common/actors/removeobsoletegpgkeys/actor.py
+++ b/repos/system_upgrade/common/actors/removeobsoletegpgkeys/actor.py
@@ -8,9 +8,14 @@ class RemoveObsoleteGpgKeys(Actor):
"""
Remove obsoleted RPM GPG keys.
- New version might make existing RPM GPG keys obsolete. This might be caused
- for example by the hashing algorithm becoming deprecated or by the key
- getting replaced.
+ The definition of what keys are considered obsolete depends on whether the
+ upgrade also does a conversion:
+ - If not converting, the obsolete keys are those that are no longer valid
+ on the target version. This might be caused for example by the hashing
+ algorithm becoming deprecated or by the key getting replaced. Note that
+ only keys provided by the vendor of the OS are handled.
+ - If converting, the obsolete keys are all of the keys provided by the
+ vendor of the source distribution.
A DNFWorkaround is registered to actually remove the keys.
"""
diff --git a/repos/system_upgrade/common/actors/removeobsoletegpgkeys/libraries/removeobsoleterpmgpgkeys.py b/repos/system_upgrade/common/actors/removeobsoletegpgkeys/libraries/removeobsoleterpmgpgkeys.py
index df08e6fa..7d047395 100644
--- a/repos/system_upgrade/common/actors/removeobsoletegpgkeys/libraries/removeobsoleterpmgpgkeys.py
+++ b/repos/system_upgrade/common/actors/removeobsoletegpgkeys/libraries/removeobsoleterpmgpgkeys.py
@@ -1,3 +1,5 @@
+import itertools
+
from leapp.libraries.common.config import get_source_distro_id, get_target_distro_id
from leapp.libraries.common.config.version import get_target_major_version
from leapp.libraries.common.distro import get_distribution_data
@@ -6,18 +8,25 @@ from leapp.libraries.stdlib import api
from leapp.models import DNFWorkaround, InstalledRPM
+def _is_key_installed(key):
+ """
+ :param key: The NVR of the gpg key RPM (e.g. gpg-pubkey-1d997668-61bae63b)
+ """
+ name, version, release = key.rsplit("-", 2)
+ return has_package(InstalledRPM, name, version=version, release=release)
+
+
def _get_obsolete_keys():
"""
- Return keys obsoleted in target and previous versions
+ Get keys obsoleted in target and previous major versions
"""
distribution = get_target_distro_id()
- obsoleted_keys_map = get_distribution_data(distribution).get('obsoleted-keys', {})
+ obsoleted_keys_map = get_distribution_data(distribution).get("obsoleted-keys", {})
keys = []
for version in range(7, int(get_target_major_version()) + 1):
try:
for key in obsoleted_keys_map[str(version)]:
- name, version, release = key.rsplit("-", 2)
- if has_package(InstalledRPM, name, version=version, release=release):
+ if _is_key_installed(key):
keys.append(key)
except KeyError:
pass
@@ -25,6 +34,22 @@ def _get_obsolete_keys():
return keys
+def _get_source_distro_keys():
+ """
+ Get all known keys of the source distro
+
+ This includes keys from all relevant previous OS versions as all of those
+ might be present on the system.
+ """
+ distribution = get_source_distro_id()
+ keys = get_distribution_data(distribution).get("keys", {})
+ return [
+ key
+ for key in itertools.chain.from_iterable(keys.values())
+ if _is_key_installed(key)
+ ]
+
+
def register_dnfworkaround(keys):
api.produce(
DNFWorkaround(
@@ -36,13 +61,12 @@ def register_dnfworkaround(keys):
def process():
- if get_source_distro_id() != get_target_distro_id():
- # TODO adjust for conversions, in the current state it would not have
- # any effect, just skip it
- return
-
- keys = _get_obsolete_keys()
- if not keys:
- return
+ if get_source_distro_id() == get_target_distro_id():
+ # only upgrading - remove keys obsoleted in previous versions
+ keys = _get_obsolete_keys()
+ else:
+ # also converting - we need to remove all keys from the source distro
+ keys = _get_source_distro_keys()
- register_dnfworkaround(keys)
+ if keys:
+ register_dnfworkaround(keys)
diff --git a/repos/system_upgrade/common/actors/removeobsoletegpgkeys/tests/test_removeobsoleterpmgpgkeys.py b/repos/system_upgrade/common/actors/removeobsoletegpgkeys/tests/test_removeobsoleterpmgpgkeys.py
index b78174cc..8b9b842b 100644
--- a/repos/system_upgrade/common/actors/removeobsoletegpgkeys/tests/test_removeobsoleterpmgpgkeys.py
+++ b/repos/system_upgrade/common/actors/removeobsoletegpgkeys/tests/test_removeobsoleterpmgpgkeys.py
@@ -1,77 +1,79 @@
import os
+import unittest.mock as mock
import pytest
from leapp.libraries.actor import removeobsoleterpmgpgkeys
-from leapp.libraries.common.config.version import get_target_major_version
-from leapp.libraries.common.rpms import has_package
from leapp.libraries.common.testutils import CurrentActorMocked, produce_mocked
from leapp.libraries.stdlib import api
-from leapp.models import DNFWorkaround, InstalledRPM, RPM
+from leapp.models import InstalledRPM, RPM
+_CUR_DIR = os.path.dirname(os.path.abspath(__file__))
-def _get_test_installedrpm():
- return InstalledRPM(
+
+def common_folder_path_mocked(folder):
+ return os.path.join(_CUR_DIR, "../../../files/", folder)
+
+
+def test_is_key_installed(monkeypatch):
+ installed_rpms = InstalledRPM(
items=[
RPM(
- name='gpg-pubkey',
- version='d4082792',
- release='5b32db75',
- epoch='0',
- packager='Red Hat, Inc. (auxiliary key 2) <security@redhat.com>',
- arch='noarch',
- pgpsig=''
+ name="gpg-pubkey",
+ version="d4082792",
+ release="5b32db75",
+ epoch="0",
+ packager="Red Hat, Inc. (auxiliary key 2) <security@redhat.com>",
+ arch="noarch",
+ pgpsig="",
),
RPM(
- name='gpg-pubkey',
- version='2fa658e0',
- release='45700c69',
- epoch='0',
- packager='Red Hat, Inc. (auxiliary key) <security@redhat.com>',
- arch='noarch',
- pgpsig=''
+ name="gpg-pubkey",
+ version="2fa658e0",
+ release="45700c69",
+ epoch="0",
+ packager="Red Hat, Inc. (auxiliary key) <security@redhat.com>",
+ arch="noarch",
+ pgpsig="",
),
RPM(
- name='gpg-pubkey',
- version='12345678',
- release='abcdefgh',
- epoch='0',
- packager='made up',
- arch='noarch',
- pgpsig=''
+ name="gpg-pubkey",
+ version="12345678",
+ release="abcdefgh",
+ epoch="0",
+ packager="made up",
+ arch="noarch",
+ pgpsig="",
),
]
)
+ monkeypatch.setattr(
+ api, "current_actor", CurrentActorMocked(msgs=[installed_rpms])
+ )
+
+ assert removeobsoleterpmgpgkeys._is_key_installed("gpg-pubkey-d4082792-5b32db75")
+ assert removeobsoleterpmgpgkeys._is_key_installed("gpg-pubkey-2fa658e0-45700c69")
+ assert removeobsoleterpmgpgkeys._is_key_installed("gpg-pubkey-12345678-abcdefgh")
+ assert not removeobsoleterpmgpgkeys._is_key_installed(
+ "gpg-pubkey-db42a60e-37ea5438"
+ )
+
@pytest.mark.parametrize(
"version, expected",
[
- (9, ["gpg-pubkey-d4082792-5b32db75", "gpg-pubkey-2fa658e0-45700c69"]),
- (8, ["gpg-pubkey-2fa658e0-45700c69"])
+ ("9", ["gpg-pubkey-d4082792-5b32db75", "gpg-pubkey-2fa658e0-45700c69"]),
+ ("8", ["gpg-pubkey-2fa658e0-45700c69"])
]
)
def test_get_obsolete_keys(monkeypatch, version, expected):
- def get_target_major_version_mocked():
- return version
-
- monkeypatch.setattr(
- removeobsoleterpmgpgkeys,
- "get_target_major_version",
- get_target_major_version_mocked,
- )
-
+ monkeypatch.setattr(api, "current_actor", CurrentActorMocked(dst_ver=version))
+ monkeypatch.setattr(api, "get_common_folder_path", common_folder_path_mocked)
monkeypatch.setattr(
- api,
- "current_actor",
- CurrentActorMocked(
- msgs=[_get_test_installedrpm()]
- ),
+ removeobsoleterpmgpgkeys, "_is_key_installed", lambda key: key in expected
)
- cur_dir = os.path.dirname(os.path.abspath(__file__))
- monkeypatch.setattr(api, 'get_common_folder_path', lambda folder: os.path.join(cur_dir, '../../../files/', folder))
-
keys = removeobsoleterpmgpgkeys._get_obsolete_keys()
assert set(keys) == set(expected)
@@ -79,50 +81,83 @@ def test_get_obsolete_keys(monkeypatch, version, expected):
@pytest.mark.parametrize(
"version, obsoleted_keys, expected",
[
- (10, None, []),
- (10, {}, []),
- (10, {"8": ["gpg-pubkey-888-abc"], "10": ["gpg-pubkey-10-10"]}, ["gpg-pubkey-888-abc", "gpg-pubkey-10-10"]),
- (9, {"8": ["gpg-pubkey-888-abc"], "9": ["gpg-pubkey-999-def"]}, ["gpg-pubkey-999-def", "gpg-pubkey-888-abc"]),
- (8, {"8": ["gpg-pubkey-888-abc"], "9": ["gpg-pubkey-999-def"]}, ["gpg-pubkey-888-abc"])
- ]
+ ("10", None, []),
+ ("10", {}, []),
+ (
+ "10",
+ {"8": ["gpg-pubkey-888-abc"], "10": ["gpg-pubkey-10-10"]},
+ ["gpg-pubkey-888-abc", "gpg-pubkey-10-10"],
+ ),
+ (
+ "9",
+ {"8": ["gpg-pubkey-888-abc"], "9": ["gpg-pubkey-999-def"]},
+ ["gpg-pubkey-999-def", "gpg-pubkey-888-abc"],
+ ),
+ (
+ "8",
+ {"8": ["gpg-pubkey-888-abc"], "9": ["gpg-pubkey-999-def"]},
+ ["gpg-pubkey-888-abc"],
+ ),
+ ],
)
-def test_get_obsolete_keys_incomplete_data(monkeypatch, version, obsoleted_keys, expected):
- def get_target_major_version_mocked():
- return version
+def test_get_obsolete_keys_incomplete_data(
+ monkeypatch, version, obsoleted_keys, expected
+):
+ monkeypatch.setattr(api, "current_actor", CurrentActorMocked(dst_ver=version))
+ monkeypatch.setattr(
+ removeobsoleterpmgpgkeys, "_is_key_installed", lambda key: key in expected
+ )
def get_distribution_data_mocked(_distro):
if obsoleted_keys is None:
return {}
- return {'obsoleted-keys': obsoleted_keys}
-
- def has_package_mocked(*args, **kwargs):
- return True
+ return {"obsoleted-keys": obsoleted_keys}
monkeypatch.setattr(
- removeobsoleterpmgpgkeys,
- "get_target_major_version",
- get_target_major_version_mocked,
+ removeobsoleterpmgpgkeys, "get_distribution_data", get_distribution_data_mocked
)
- monkeypatch.setattr(
- removeobsoleterpmgpgkeys,
- "get_distribution_data",
- get_distribution_data_mocked,
- )
+ keys = removeobsoleterpmgpgkeys._get_obsolete_keys()
+ assert set(keys) == set(expected)
- monkeypatch.setattr(
- removeobsoleterpmgpgkeys,
- "has_package",
- has_package_mocked,
- )
+@pytest.mark.parametrize(
+ "distro, expected",
+ [
+ (
+ "centos",
+ [
+ "gpg-pubkey-8483c65d-5ccc5b19",
+ "gpg-pubkey-1d997668-621e3cac",
+ "gpg-pubkey-1d997668-61bae63b",
+ ],
+ ),
+ (
+ "rhel",
+ [
+ "gpg-pubkey-fd431d51-4ae0493b",
+ "gpg-pubkey-37017186-45761324",
+ "gpg-pubkey-f21541eb-4a5233e8",
+ "gpg-pubkey-897da07a-3c979a7f",
+ "gpg-pubkey-2fa658e0-45700c69",
+ "gpg-pubkey-d4082792-5b32db75",
+ "gpg-pubkey-5a6340b3-6229229e",
+ "gpg-pubkey-db42a60e-37ea5438",
+ ],
+ ),
+ ],
+)
+def test_get_source_distro_keys(monkeypatch, distro, expected):
+ """
+ Test that the correct keys are returned for each distro.
+ """
+ monkeypatch.setattr(api, "current_actor", CurrentActorMocked(src_distro=distro))
+ monkeypatch.setattr(api, "get_common_folder_path", common_folder_path_mocked)
monkeypatch.setattr(
- api,
- "current_actor",
- CurrentActorMocked(),
+ removeobsoleterpmgpgkeys, "_is_key_installed", lambda _key: True
)
- keys = removeobsoleterpmgpgkeys._get_obsolete_keys()
+ keys = removeobsoleterpmgpgkeys._get_source_distro_keys()
assert set(keys) == set(expected)
@@ -134,16 +169,61 @@ def test_get_obsolete_keys_incomplete_data(monkeypatch, version, obsoleted_keys,
]
)
def test_workaround_should_register(monkeypatch, keys, should_register):
- def get_obsolete_keys_mocked():
- return keys
-
monkeypatch.setattr(
- removeobsoleterpmgpgkeys,
- '_get_obsolete_keys',
- get_obsolete_keys_mocked
+ removeobsoleterpmgpgkeys, "_get_obsolete_keys", lambda: keys
)
- monkeypatch.setattr(api, 'produce', produce_mocked())
+ monkeypatch.setattr(api, "produce", produce_mocked())
monkeypatch.setattr(api, "current_actor", CurrentActorMocked())
removeobsoleterpmgpgkeys.process()
assert api.produce.called == should_register
+
+
+def test_process(monkeypatch):
+ """
+ Test that the correct path is taken depending on whether also converting
+ """
+ obsolete = ["gpg-pubkey-12345678-abcdefgh"]
+ source_distro = ["gpg-pubkey-87654321-hgfedcba"]
+
+ monkeypatch.setattr(
+ removeobsoleterpmgpgkeys, "_get_obsolete_keys", lambda: obsolete
+ )
+ monkeypatch.setattr(
+ removeobsoleterpmgpgkeys, "_get_source_distro_keys", lambda: source_distro,
+ )
+
+ # upgrade only path
+ monkeypatch.setattr(
+ api, "current_actor", CurrentActorMocked(src_distro="rhel", dst_distro="rhel")
+ )
+ with mock.patch(
+ "leapp.libraries.actor.removeobsoleterpmgpgkeys.register_dnfworkaround"
+ ):
+ removeobsoleterpmgpgkeys.process()
+ removeobsoleterpmgpgkeys.register_dnfworkaround.assert_called_once_with(
+ obsolete
+ )
+
+ # upgrade + conversion paths
+ monkeypatch.setattr(
+ api, "current_actor", CurrentActorMocked(src_distro="rhel", dst_distro="centos")
+ )
+ with mock.patch(
+ "leapp.libraries.actor.removeobsoleterpmgpgkeys.register_dnfworkaround"
+ ):
+ removeobsoleterpmgpgkeys.process()
+ removeobsoleterpmgpgkeys.register_dnfworkaround.assert_called_once_with(
+ source_distro
+ )
+
+ monkeypatch.setattr(
+ api, "current_actor", CurrentActorMocked(src_distro="centos", dst_distro="rhel")
+ )
+ with mock.patch(
+ "leapp.libraries.actor.removeobsoleterpmgpgkeys.register_dnfworkaround"
+ ):
+ removeobsoleterpmgpgkeys.process()
+ removeobsoleterpmgpgkeys.register_dnfworkaround.assert_called_once_with(
+ source_distro
+ )
diff --git a/repos/system_upgrade/common/files/distro/almalinux/gpg-signatures.json b/repos/system_upgrade/common/files/distro/almalinux/gpg-signatures.json
index 18b6c516..b17e8a66 100644
--- a/repos/system_upgrade/common/files/distro/almalinux/gpg-signatures.json
+++ b/repos/system_upgrade/common/files/distro/almalinux/gpg-signatures.json
@@ -1,10 +1,10 @@
{
- "keys": [
- "51d6647ec21ad6ea",
- "d36cb86cb86b3716",
- "2ae81e8aced7258b",
- "429785e181b961a5"
- ],
+ "keys": {
+ "51d6647ec21ad6ea": ["gpg-pubkey-3abb34f8-5ffd890e"],
+ "d36cb86cb86b3716": ["gpg-pubkey-ced7258b-6525146f"],
+ "2ae81e8aced7258b": ["gpg-pubkey-b86b3716-61e69f29"],
+ "429785e181b961a5": ["gpg-pubkey-81b961a5-64106f70"]
+ },
"obsoleted-keys": {
"7": [],
"8": [],
diff --git a/repos/system_upgrade/common/files/distro/centos/gpg-signatures.json b/repos/system_upgrade/common/files/distro/centos/gpg-signatures.json
index 1be56176..1092ff58 100644
--- a/repos/system_upgrade/common/files/distro/centos/gpg-signatures.json
+++ b/repos/system_upgrade/common/files/distro/centos/gpg-signatures.json
@@ -1,10 +1,10 @@
{
- "keys": [
- "24c6a8a7f4a80eb5",
- "4eb84e71f2ee9d55",
- "05b555b38483c65d",
- "1ff6a2171d997668"
- ],
+ "keys": {
+ "24c6a8a7f4a80eb5": [],
+ "4eb84e71f2ee9d55": [],
+ "05b555b38483c65d": ["gpg-pubkey-8483c65d-5ccc5b19"],
+ "1ff6a2171d997668": ["gpg-pubkey-1d997668-621e3cac", "gpg-pubkey-1d997668-61bae63b"]
+ },
"obsoleted-keys": {
"10": ["gpg-pubkey-8483c65d-5ccc5b19"]
}
diff --git a/repos/system_upgrade/common/files/distro/rhel/gpg-signatures.json b/repos/system_upgrade/common/files/distro/rhel/gpg-signatures.json
index 5b27e197..d6c2328d 100644
--- a/repos/system_upgrade/common/files/distro/rhel/gpg-signatures.json
+++ b/repos/system_upgrade/common/files/distro/rhel/gpg-signatures.json
@@ -1,14 +1,14 @@
{
- "keys": [
- "199e2f91fd431d51",
- "5326810137017186",
- "938a80caf21541eb",
- "fd372689897da07a",
- "45689c882fa658e0",
- "f76f66c3d4082792",
- "5054e4a45a6340b3",
- "219180cddb42a60e"
- ],
+ "keys": {
+ "199e2f91fd431d51": ["gpg-pubkey-fd431d51-4ae0493b"],
+ "5326810137017186": ["gpg-pubkey-37017186-45761324"],
+ "938a80caf21541eb": ["gpg-pubkey-f21541eb-4a5233e8"],
+ "fd372689897da07a": ["gpg-pubkey-897da07a-3c979a7f"],
+ "45689c882fa658e0": ["gpg-pubkey-2fa658e0-45700c69"],
+ "f76f66c3d4082792": ["gpg-pubkey-d4082792-5b32db75"],
+ "5054e4a45a6340b3": ["gpg-pubkey-5a6340b3-6229229e"],
+ "219180cddb42a60e": ["gpg-pubkey-db42a60e-37ea5438"]
+ },
"obsoleted-keys": {
"7": [],
"8": [
--
2.52.0
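
As the commit message above notes, the gpg-pubkey RPM version field only carries the last 8 characters of the full key ID, which is why the RPM names are now stored explicitly in the map instead of being derived. A small sketch of the NVR split used by `_is_key_installed` (the example key is one of the entries already present in the map):

```python
# Sketch of splitting a gpg-pubkey NVR into name/version/release, mirroring
# _is_key_installed() above; the example key comes from the centos map.
key = "gpg-pubkey-1d997668-61bae63b"
name, version, release = key.rsplit("-", 2)

print(name)     # gpg-pubkey
print(version)  # 1d997668 -- only the last 8 characters of the full key ID
print(release)  # 61bae63b -- creation time of the signature packet

# Because the version keeps just 8 characters, two distinct full key IDs could
# collide on the same RPM name, which is why the map lists the RPM names
# explicitly rather than deriving them from the key IDs.
```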


@@ -1,228 +0,0 @@
From eabab8c496a7d6a76ff1aa0d7e34b0345530e30a Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Mon, 24 Nov 2025 16:44:53 +0100
Subject: [PATCH 076/111] docs: Add doc about community upgrades
The existing coding and PR workflow guidelines are split into separate
pages under "Contributing" and the new doc is added there as well.
Jira: RHEL-110563
---
.../coding-guidelines.md} | 53 +------------------
.../source/contributing/community-upgrades.md | 39 ++++++++++++++
docs/source/contributing/index.rst | 18 +++++++
docs/source/contributing/pr-guidelines.md | 48 +++++++++++++++++
docs/source/index.rst | 2 +-
5 files changed, 107 insertions(+), 53 deletions(-)
rename docs/source/{contrib-and-devel-guidelines.md => contributing/coding-guidelines.md} (68%)
create mode 100644 docs/source/contributing/community-upgrades.md
create mode 100644 docs/source/contributing/index.rst
create mode 100644 docs/source/contributing/pr-guidelines.md
diff --git a/docs/source/contrib-and-devel-guidelines.md b/docs/source/contributing/coding-guidelines.md
similarity index 68%
rename from docs/source/contrib-and-devel-guidelines.md
rename to docs/source/contributing/coding-guidelines.md
index 3229c8a4..d06d0200 100644
--- a/docs/source/contrib-and-devel-guidelines.md
+++ b/docs/source/contributing/coding-guidelines.md
@@ -1,5 +1,4 @@
-# Contribution and development guidelines
-## Code guidelines
+# Coding guidelines
Your code should follow the [Python Coding Guidelines](https://leapp.readthedocs.io/en/latest/contributing.html#follow-python-coding-guidelines) used for the leapp project. On top of these rules follow instructions
below.
@@ -84,53 +83,3 @@ guaranteed to exist and executable.
The use of the {py:mod}`subprocess` library is forbidden in leapp repositories.
Use of the library would require very good reasoning, why the
{py:func}`~leapp.libraries.stdlib.run` function cannot be used.
-
-## Commits and pull requests (PRs)
-### PR description
-The description should contain information about all introduced changes:
-* What has been changed
-* How it has been changed
-* The reason for the change
-* How could people try/test the PR
-* Reference to a Jira ticket, Github issue, ... if applicable
-
-Good description provides all information for readers without the need to
-read the code. Note that reviewers can decline to review the PR with a poor
-description.
-
-### Commit messages
-When your pull-request is ready to be reviewed, every commit needs to include
-a title and a body continuing a description of the change --- what problem is
-being solved and how. The end of the commit body should contain Jira issue
-number (if applicable), GitHub issue that is being fixed, etc.:
-```
- Commit title
-
- Commit message body on multiple lines
-
- Jira-ref: <ticket-number>
-```
-
-Note that good commit message should provide information in similar way like
-the PR description. Poorly written commit messages can block the merge of PR
-or proper review.
-
-### Granularity of commits
-The granularity of commits depends strongly on the problem being solved. However,
-a large number of small commits is typically undesired. If possible, aim a
-Git history such that commits can be reverted individually, without requiring reverting
-numerous other dependent commits in order to get the `main` branch into a working state.
-
-Note that commits fixing problems of other commits in the PR are expected to be
-squashed before the final review and merge of the PR. Using of `git commit --fixup ...`
-and `git commit --squash ...` commands can help you to prepare such commits
-properly in advance and make the rebase later easier using `git rebase -i --autosquash`.
-We suggest you to get familiar with these commands as it can make your work really
-easier. Note that when you are starting to get higher number of such fixing commits
-in your PR, it's good practice to use the rebase more often. High numbers of such
-commits could make the final rebase more tricky in the end. So your PR should not
-have more than 15 commits at any time.
-
-### Create a separate git branch for your changes
-TBD
-
diff --git a/docs/source/contributing/community-upgrades.md b/docs/source/contributing/community-upgrades.md
new file mode 100644
index 00000000..cbec0a24
--- /dev/null
+++ b/docs/source/contributing/community-upgrades.md
@@ -0,0 +1,39 @@
+# Community upgrades for Centos-like distros
+
+In the past, this project was solely focused on Red Hat Enterprise Linux upgrades. Recently, we've been extending and refactoring the `leapp-repository` codebase to allow upgrades of other distributions, such as CentOS Stream and also upgrades + conversions between different distributions in one step.
+
+This document outlines the state of support for upgrades of distributions other than RHEL. Note that support in this case doesn't mean what the codebase allows, but what the core leapp team supports in terms of issues, bugfixes, feature requests, testing, etc.
+
+RHEL upgrades and upgrades + conversions *to* RHEL are the only officially supported upgrade paths and are the primary focus of leapp developers. However, we are open to and welcome contributions from the community, allowing other upgrade (and conversion) paths in the codebase. For example, we've already integrated a contribution introducing upgrade paths for Alma Linux upgrades.
+
+This does not mean that we won't offer help outside of the outlined scope, but it is primarily up to the contributors contributing a particular upgrade path to maintain and test it. Also, it can take us some time to get to such PRs, so be patient please.
+
+Upon agreement we can also update the upgrade paths (in `upgrade_paths.json`) when there is a new release of the particular distribution. However note that we might include some upgrade paths required for conversions *to* RHEL on top of that.
+
+Contributions improving the overall upgrade experience are also welcome, as they always have been.
+
+```{note}
+By default, upgrade + conversion paths are automatically derived from upgrade paths. If this is not desired or other paths are required, feel free to open a pull request or open a [discussion](https://github.com/oamg/leapp-repository/discussions) on that topic.
+```
+
+## How to contribute
+
+Currently, the process for enabling upgrades and conversions for other distributions is not fully documented. In the meantime you can use the [pull request introducing Alma Linux upgrades](https://github.com/oamg/leapp-repository/pull/1391/) as reference. However, note that the leapp upgrade data files have special rules for updates, described below.
+
+### Leapp data files
+
+#### repomap.json
+
+To use correct target repositories during the upgrade automatically, the `repomap.json` data file needs to be updated to cover repositories of the newly added distribution. However, the file cannot be updated manually as its content is generated, hence any manual changes would be overwritten with the next update. Currently there is no straightforward way for the community to update our generators, but you can
+
+- submit a separate PR of how the resulting `repomap.json` file should look like, for an example you can take a look at [this PR](https://github.com/oamg/leapp-repository/pull/1395)
+- or provide the list of repositories (possibly also architectures) present on the distribution
+
+and we will update the generators accordingly, asking you to review the result then. We are discussing an improvement to make this more community friendly.
+
+#### pes-events.json and device_driver_deprecation_data.json
+
+Both PES events and device driver deprecation data only contain data for RHEL in the upstream `leapp-repository` and we will not include any data unrelated to RHEL. If you find a bug in the data, you can open a bug in the [RHEL Jira](https://issues.redhat.com/) for the `leapp-repository` component.
+
+Before contributing, make sure your PR conforms to our {doc}`Coding guidelines<coding-guidelines>`
+ and {doc}`PR guidelines<pr-guidelines>`.
diff --git a/docs/source/contributing/index.rst b/docs/source/contributing/index.rst
new file mode 100644
index 00000000..ebdc9151
--- /dev/null
+++ b/docs/source/contributing/index.rst
@@ -0,0 +1,18 @@
+Contributing
+========================================================
+
+.. toctree::
+ :maxdepth: 4
+ :caption: Contents:
+ :glob:
+
+ coding-guidelines
+ pr-guidelines
+ community-upgrades
+
+.. Indices and tables
+.. ==================
+..
+.. * :ref:`genindex`
+.. * :ref:`modindex`
+.. * :ref:`search`
diff --git a/docs/source/contributing/pr-guidelines.md b/docs/source/contributing/pr-guidelines.md
new file mode 100644
index 00000000..4f6ee4fe
--- /dev/null
+++ b/docs/source/contributing/pr-guidelines.md
@@ -0,0 +1,48 @@
+# Commits and pull requests (PRs)
+## PR description
+The description should contain information about all introduced changes:
+* What has been changed
+* How it has been changed
+* The reason for the change
+* How could people try/test the PR
+* Reference to a Jira ticket, Github issue, ... if applicable
+
+Good description provides all information for readers without the need to
+read the code. Note that reviewers can decline to review the PR with a poor
+description.
+
+## Commit messages
+When your pull-request is ready to be reviewed, every commit needs to include
+a title and a body continuing a description of the change --- what problem is
+being solved and how. The end of the commit body should contain Jira issue
+number (if applicable), GitHub issue that is being fixed, etc.:
+```
+ Commit title
+
+ Commit message body on multiple lines
+
+ Jira-ref: <ticket-number>
+```
+
+Note that good commit message should provide information in similar way like
+the PR description. Poorly written commit messages can block the merge of PR
+or proper review.
+
+## Granularity of commits
+The granularity of commits depends strongly on the problem being solved. However,
+a large number of small commits is typically undesired. If possible, aim a
+Git history such that commits can be reverted individually, without requiring reverting
+numerous other dependent commits in order to get the `main` branch into a working state.
+
+Note that commits fixing problems of other commits in the PR are expected to be
+squashed before the final review and merge of the PR. Using of `git commit --fixup ...`
+and `git commit --squash ...` commands can help you to prepare such commits
+properly in advance and make the rebase later easier using `git rebase -i --autosquash`.
+We suggest you to get familiar with these commands as it can make your work really
+easier. Note that when you are starting to get higher number of such fixing commits
+in your PR, it's good practice to use the rebase more often. High numbers of such
+commits could make the final rebase more tricky in the end. So your PR should not
+have more than 15 commits at any time.
+
+## Create a separate git branch for your changes
+TBD
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 27537ca4..ed68f751 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -21,7 +21,7 @@ providing Red Hat Enterprise Linux in-place upgrade functionality.
upgrade-architecture-and-workflow/index
configuring-ipu/index
libraries-and-api/index
- contrib-and-devel-guidelines
+ contributing/index
faq
.. Indices and tables
--
2.52.0


@@ -1,34 +0,0 @@
From 3fe0ed289460919b4f08e1ceb1f1be17dd99b302 Mon Sep 17 00:00:00 2001
From: Daniel Diblik <8378124+danmyway@users.noreply.github.com>
Date: Fri, 5 Dec 2025 16:09:06 +0100
Subject: [PATCH 077/111] Point the test pipelines to the next branch (#1461)
---
.packit.yaml | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/.packit.yaml b/.packit.yaml
index 0c3f682a..83b7ce6a 100644
--- a/.packit.yaml
+++ b/.packit.yaml
@@ -110,7 +110,7 @@ jobs:
job: tests
trigger: ignore
fmf_url: "https://gitlab.cee.redhat.com/oamg/leapp-tests"
- fmf_ref: "main"
+ fmf_ref: "next"
use_internal_tf: True
labels:
- sanity
@@ -447,7 +447,7 @@ jobs:
job: tests
trigger: ignore
fmf_url: "https://gitlab.cee.redhat.com/oamg/leapp-tests"
- fmf_ref: "main"
+ fmf_ref: "next"
use_internal_tf: True
labels:
- sanity
--
2.52.0


@@ -1,34 +0,0 @@
From 2fb6beaec3e2f9badf5bf2956e4523c1b588b657 Mon Sep 17 00:00:00 2001
From: Michal Hecko <mhecko@redhat.com>
Date: Thu, 27 Nov 2025 22:30:31 +0100
Subject: [PATCH 078/111] boot: fix deps when bindmounting /boot to
/sysroot/boot
When /boot is a separate partition, we bindmount what=/sysroot/boot to
/boot, so that we can perform necessary checks when booting with FIPS
enabled. However, the current solution contains incorrect unit
dependencies: it requires sysroot-boot.target. There is no such target,
and the correct value should be sysroot-boot.mount. This patch corrects
the dependencies.
---
.../mount_units_generator/files/bundled_units/boot.mount | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/repos/system_upgrade/common/actors/initramfs/mount_units_generator/files/bundled_units/boot.mount b/repos/system_upgrade/common/actors/initramfs/mount_units_generator/files/bundled_units/boot.mount
index 869c5e4c..531f6c75 100644
--- a/repos/system_upgrade/common/actors/initramfs/mount_units_generator/files/bundled_units/boot.mount
+++ b/repos/system_upgrade/common/actors/initramfs/mount_units_generator/files/bundled_units/boot.mount
@@ -1,8 +1,8 @@
[Unit]
DefaultDependencies=no
Before=local-fs.target
-After=sysroot-boot.target
-Requires=sysroot-boot.target
+After=sysroot-boot.mount
+Requires=sysroot-boot.mount
[Mount]
What=/sysroot/boot
--
2.52.0
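
For context, systemd derives mount unit names from their mount points, so the unit backing /sysroot/boot is sysroot-boot.mount; a .target with that name never existed, which is what the dependency fix above reflects. A minimal sketch of the naming rule (simplified: it ignores the escaping systemd applies to special characters):

```python
# Simplified sketch of how a mount point maps to a systemd mount unit name;
# real unit-name escaping handles more cases (see systemd-escape).
def path_to_mount_unit(path):
    return path.strip("/").replace("/", "-") + ".mount"

print(path_to_mount_unit("/sysroot/boot"))  # sysroot-boot.mount
```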


@@ -1,30 +0,0 @@
From 07e2ec22c2f1aae09a318b48562712f1477799b9 Mon Sep 17 00:00:00 2001
From: karolinku <kkula@redhat.com>
Date: Mon, 1 Dec 2025 16:51:48 +0100
Subject: [PATCH 080/111] Fix remediation command to wrap it with quotes
In the checkrootsymlinks actor, the remediation command did not include
the necessary double quotes, which made the command syntactically
incorrect and impossible to apply.
Jira: RHEL-30447
---
repos/system_upgrade/common/actors/checkrootsymlinks/actor.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/repos/system_upgrade/common/actors/checkrootsymlinks/actor.py b/repos/system_upgrade/common/actors/checkrootsymlinks/actor.py
index c35272b2..7b89bf7a 100644
--- a/repos/system_upgrade/common/actors/checkrootsymlinks/actor.py
+++ b/repos/system_upgrade/common/actors/checkrootsymlinks/actor.py
@@ -55,7 +55,7 @@ class CheckRootSymlinks(Actor):
os.path.relpath(item.target, '/'),
os.path.join('/', item.name)])
commands.append(command)
- rem_commands = [['sh', '-c', ' && '.join(commands)]]
+ rem_commands = [['sh', '-c', '"{}"'.format(' && '.join(commands))]]
# Generate reports about non-utf8 absolute links presence
nonutf_count = len(absolute_links_nonutf)
if nonutf_count > 0:
--
2.52.0
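
The change above only wraps the joined command chain in double quotes. A minimal sketch of why that presumably matters when the remediation is rendered as one flat shell line (the cp commands are hypothetical placeholders, not the actor's actual output):

```python
# Sketch: without the quotes, a flat rendering of the remediation splits at
# "&&" and only the first command ends up as the argument of `sh -c`.
commands = ["/bin/cp -a usr/local /usr/local", "/bin/cp -a var/opt /opt"]

before = ["sh", "-c", " && ".join(commands)]
after = ["sh", "-c", '"{}"'.format(" && ".join(commands))]

print(" ".join(before))  # sh -c /bin/cp ... && /bin/cp ...   (chain escapes the -c argument)
print(" ".join(after))   # sh -c "/bin/cp ... && /bin/cp ..." (chain stays inside the -c argument)
```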


@@ -1,188 +0,0 @@
From 2c1ecc24b1b6bbba074a7b6cd2dab994ab26a6cb Mon Sep 17 00:00:00 2001
From: karolinku <kkula@redhat.com>
Date: Wed, 23 Apr 2025 11:46:07 +0200
Subject: [PATCH 081/111] Add upstream doc about running single actor
JIRA: RHELMISC-11596
---
.../tutorials/howto-single-actor-run.md | 155 ++++++++++++++++++
docs/source/tutorials/index.rst | 1 +
2 files changed, 156 insertions(+)
create mode 100644 docs/source/tutorials/howto-single-actor-run.md
diff --git a/docs/source/tutorials/howto-single-actor-run.md b/docs/source/tutorials/howto-single-actor-run.md
new file mode 100644
index 00000000..728ca083
--- /dev/null
+++ b/docs/source/tutorials/howto-single-actor-run.md
@@ -0,0 +1,155 @@
+# Running a single Actor
+
+During development or debugging of actors, there may be a need to run a single actor instead of the entire workflow. The advantages of this approach include:
+- **Time and resource efficiency** - Running the entire workflow takes time and resources. The source system is scanned, information is collected and stored, and the in-place upgrade process goes through several phases. All these actions take time, and actors are run multiple times during debugging or development, so preparing a single-actor execution saves time.
+- **Isolation of the problem** - When the debugged issue is related to a single actor, this approach allows isolating it without interference from other actors.
+
+
+```{hint}
+In practice, running a single actor for debugging does not have to be the best way to start when you do not have much experience with Leapp and IPU yet. However, in some cases it's still very valuable and helpful.
+```
+
+The execution of an actor using the `snactor` tool seems simple. In the case of the system upgrade leapp repositories it's not so straightforward and
+it can be quite complicated. In this guide we share our experience of how to use `snactor` correctly, describing typical problems that developers hit.
+
+There are two main approaches:
+- **Running an actor with an empty or non-existent leapp database** -- applicable when crafted data (or no data at all) is needed, usually during development.
+- **Running an actor with a leapp database filled by a previous leapp execution** -- useful for debugging when the leapp.db file is available and you want to run the actor in the same context in which it was previously executed when an error occurred.
+
+```{note}
+The leapp database refers to the `leapp.db` file. In case of using snactor, it's by default present in the `.leapp` directory of the used leapp repository
+scope.
+```
+
+````{tip}
+Cleaning the database can be managed with `snactor` tool command:
+```shell
+snactor messages clear
+```
+Alternatively, the database file can also simply be removed instead of using snactor.
+````
+
+
+Although an actor seems to be an independent piece of code, there is a dependency chain to resolve inside a workflow, especially around consumed messages and configuration. When running the entire in-place upgrade process, the dependencies needed by each actor are satisfied by assigning each actor to a specific phase, where actors emit and consume messages in the desired sequence. A single actor usually needs a specific list of such requirements, which can be fulfilled by manually preparing this dependency chain. This very limited amount of resources needed for a single actor can be called a minimal context.
+
+
+## Running a single actor with minimal context
+
+It is possible to run a single actor without proceeding with `leapp preupgrade` machinery.
+This solution is based on the snactor tool. However, this solution requires minimal context to run.
+
+As mentioned before and described in [article](https://leapp.readthedocs.io/en/stable/building-blocks-and-architecture.html#architecture-overview)
+about workflow architecture, most of the actors are part of the produce/consume chain of messages. An important step in this procedure is to recreate the sequence of actors to run in order to fulfill the chain of dependencies and provide the necessary variables.
+
+Let's explain these steps based on a real case. The following example will be based on the `scan_fips` actor.
+
+
+### Initial configuration
+
+All actors (even those which do not depend on any message emitted by other actors) depend on some initial configuration which is provided by the `ipu_workflow_config` [actor](https://github.com/oamg/leapp-repository/blob/main/repos/system_upgrade/common/actors/ipuworkflowconfig/libraries/ipuworkflowconfig.py). No matter what actor you would like to run, the first step is always to run the `ipu_workflow_config` actor.
+
+Some initial variables, which are usually set by the framework, are missing and need to be exported manually. Note that the following values are examples; adjust them to your needs depending on your system configuration:
+```shell
+
+export LEAPP_UPGRADE_PATH_FLAVOUR=default
+export LEAPP_UPGRADE_PATH_TARGET_RELEASE=9.8
+export LEAPP_TARGET_OS=9.8
+```
+
+The `ipu_workflow_config` actor produces `IPUWorkflow` message, which contains all required initial config, so at the beginning execute:
+
+```shell
+snactor run ipu_workflow_config --print-output --save-output
+```
+
+```{note}
+The `--save-output` option is necessary to preserve the output of this command in the Leapp database. Without saving the message, it will not be available to other actors. The *--print-output* option is optional.
+```
+
+### Resolving actor's message dependencies
+
+All basic information about what an actor consumes and produces can be found in each actor's `actor.py` [code](https://github.com/oamg/leapp-repository/blob/main/repos/system_upgrade/common/actors/scanfips/actor.py#L13-L14). In the case of the `scan_fips` actor it is:
+
+```shell
+ consumes = (KernelCmdline,)
+ produces = (FIPSInfo,)
+```
+
+This actor consumes one message and produces another. Now we need to track the consumed message, which is `KernelCmdline`. Grep the cloned repository to find that the actor which produces such [message](https://github.com/oamg/leapp-repository/blob/main/repos/system_upgrade/common/actors/scankernelcmdline/actor.py#L14) is `scan_kernel_cmdline`.
+
+```shell
+snactor run scan_kernel_cmdline --print-output --save-output --actor-config IPUConfig
+```
+
+```{note}
+An important step here is to specify which actor config needs to be used, `IPUConfig` in this case.
+This parameter needs to be specified every time you want to run an actor, pointing to the proper configuration.
+```
+
+This [scan_kernel_cmdline](https://github.com/oamg/leapp-repository/blob/main/repos/system_upgrade/common/actors/scankernelcmdline/actor.py#L13) doesn't consume anything: `consumes = ()`. So finally the desired actor can be run:
+
+```shell
+snactor run scan_fips --print-output --save-output --actor-config IPUConfig
+```
+
+### Limitations
+Note that not all cases will be as simple as the presented one; sometimes actors depend on multiple messages originating from other actors, requiring a longer session of environment recreation.
+
+Also actors designed to run on other architectures will not be able to run.
+
+## Run single actor with existing database
+
+In contrast to the previous section, where we operated only on a self-created minimal context, the tutorial below explains how to work with an existing or provided context.
+Sometimes - especially for debugging and reproducing a bug - it is very convenient to use a provided Leapp database *leapp.db*. This is a file containing all information needed to run the Leapp framework on a system, including messages and configurations. Usually all the environment necessary for actors is set up by
+the first run of the `leapp preupgrade` command when starting from scratch. In this case, we already have the `leapp.db` database file (e.g. transferred from another system).
+
+Every new run of the `leapp` command creates another entry in the database. It creates
+another row in the execution table with a specific ID, so each context can be easily tracked and
+reproduced.
+
+See the list of executions using the [leapp-inspector](https://leapp-repository.readthedocs.io/latest/tutorials/troubleshooting-debugging.html#troubleshooting-with-leapp-inspector) tool.
+
+```shell
+leapp-inspector --db path/to/leapp.db executions
+```
+Example output:
+```shell
+##################################################################
+ Executions of Leapp
+##################################################################
+Execution | Timestamp
+------------------------------------ | ---------------------------
+d146e105-fafd-43a2-a791-54e141eeab9c | 2025-11-26T19:39:20.563594Z
+b7fd5dca-a49f-4af7-b70c-8bbcc28a4338 | 2025-11-26T19:39:38.034070Z
+50b5289f-be4d-4206-a6e0-73e3caa1f9ed | 2025-11-26T19:41:40.401273Z
+
+```
+
+
+To determine which context (execution) `leapp` will run, there are two variables: `LEAPP_DEBUG_PRESERVE_CONTEXT`
+and `LEAPP_EXECUTION_ID`. When the `LEAPP_DEBUG_PRESERVE_CONTEXT` is set to 1 and the environment has
+`LEAPP_EXECUTION_ID` set, the `LEAPP_EXECUTION_ID` is not overwritten with snactor's execution ID.
+This allows the developer to run actors in the same way as if the actor was run during leapp's last
+execution, thus avoiding rerunning the entire upgrade process.
+
+
+Set variables:
+```shell
+
+export LEAPP_DEBUG_PRESERVE_CONTEXT=1
+export LEAPP_EXECUTION_ID=50b5289f-be4d-4206-a6e0-73e3caa1f9ed
+```
+
+Run the desired actors or the entire upgrade process safely now. The output will not be preserved as another context entry.
+```shell
+
+snactor run --config /etc/leapp/leapp.conf --actor-config IPUConfig <ActorName> --print-output
+```
+
+```{note}
+Point to the `leapp.conf` file with the *--config* option. By default this file is located in `/etc/leapp/` and, among other things, it contains the Leapp database (`leapp.db`) location. When working with a given database, either adjust the configuration file or place the database file in the default location.
+```
+
+### Limitations
+
+Even though the context was provided, it is not possible to run actors which are designed for a different architecture than the source system.
diff --git a/docs/source/tutorials/index.rst b/docs/source/tutorials/index.rst
index a04fc183..6059e76a 100644
--- a/docs/source/tutorials/index.rst
+++ b/docs/source/tutorials/index.rst
@@ -19,6 +19,7 @@ write leapp actors for **In-Place Upgrades (IPU)** with the leapp framework.
setup-devel-env
howto-first-actor-upgrade
+ howto-single-actor-run
custom-content
configurable-actors
templates/index
--
2.52.0


@@ -1,615 +0,0 @@
From 80169c215d6c59cfe86b3ac2fe9553fc3cf61836 Mon Sep 17 00:00:00 2001
From: Peter Mocary <pmocary@redhat.com>
Date: Thu, 4 Dec 2025 14:52:41 +0100
Subject: [PATCH 082/111] add handling for LVM configuration
The relevant user LVM configuration is now copied into the target userspace
container along with enabling LVM dracut module for upgrade initramfs creation.
The LVM configuration is copied into the target userspace container when
the lvm2 package is installed. Based on the configuration, the devices file
is also copied in when present and enabled. The --nolvmconf option used when
executing dracut is changed to --lvmconf if the files are
copied into the target userspace container.
Jira: RHEL-14712
---
.../common/actors/checklvm/actor.py | 24 +++
.../actors/checklvm/libraries/checklvm.py | 74 ++++++++
.../actors/checklvm/tests/test_checklvm.py | 92 +++++++++
.../upgradeinitramfsgenerator/actor.py | 2 +
.../libraries/upgradeinitramfsgenerator.py | 36 ++--
.../common/actors/scanlvmconfig/actor.py | 18 ++
.../scanlvmconfig/libraries/scanlvmconfig.py | 52 ++++++
.../scanlvmconfig/tests/test_scanlvmconfig.py | 176 ++++++++++++++++++
.../system_upgrade/common/models/lvmconfig.py | 26 +++
9 files changed, 487 insertions(+), 13 deletions(-)
create mode 100644 repos/system_upgrade/common/actors/checklvm/actor.py
create mode 100644 repos/system_upgrade/common/actors/checklvm/libraries/checklvm.py
create mode 100644 repos/system_upgrade/common/actors/checklvm/tests/test_checklvm.py
create mode 100644 repos/system_upgrade/common/actors/scanlvmconfig/actor.py
create mode 100644 repos/system_upgrade/common/actors/scanlvmconfig/libraries/scanlvmconfig.py
create mode 100644 repos/system_upgrade/common/actors/scanlvmconfig/tests/test_scanlvmconfig.py
create mode 100644 repos/system_upgrade/common/models/lvmconfig.py
diff --git a/repos/system_upgrade/common/actors/checklvm/actor.py b/repos/system_upgrade/common/actors/checklvm/actor.py
new file mode 100644
index 00000000..167698db
--- /dev/null
+++ b/repos/system_upgrade/common/actors/checklvm/actor.py
@@ -0,0 +1,24 @@
+from leapp.actors import Actor
+from leapp.libraries.actor.checklvm import check_lvm
+from leapp.models import DistributionSignedRPM, LVMConfig, TargetUserSpaceUpgradeTasks, UpgradeInitramfsTasks
+from leapp.reporting import Report
+from leapp.tags import ChecksPhaseTag, IPUWorkflowTag
+
+
+class CheckLVM(Actor):
+ """
+ Check if the LVM is installed and ensure the target userspace container
+ and initramfs are prepared to support it.
+
+ The LVM configuration files are copied into the target userspace container
+ so that the dracut is able to use them while creating the initramfs.
+ The dracut LVM module is enabled by this actor as well.
+ """
+
+ name = 'check_lvm'
+ consumes = (DistributionSignedRPM, LVMConfig)
+ produces = (Report, TargetUserSpaceUpgradeTasks, UpgradeInitramfsTasks)
+ tags = (ChecksPhaseTag, IPUWorkflowTag)
+
+ def process(self):
+ check_lvm()
diff --git a/repos/system_upgrade/common/actors/checklvm/libraries/checklvm.py b/repos/system_upgrade/common/actors/checklvm/libraries/checklvm.py
new file mode 100644
index 00000000..073bfbf4
--- /dev/null
+++ b/repos/system_upgrade/common/actors/checklvm/libraries/checklvm.py
@@ -0,0 +1,74 @@
+import os
+
+from leapp import reporting
+from leapp.libraries.common.rpms import has_package
+from leapp.libraries.stdlib import api
+from leapp.models import (
+ CopyFile,
+ DistributionSignedRPM,
+ DracutModule,
+ LVMConfig,
+ TargetUserSpaceUpgradeTasks,
+ UpgradeInitramfsTasks
+)
+
+LVM_CONFIG_PATH = '/etc/lvm/lvm.conf'
+LVM_DEVICES_FILE_PATH_PREFIX = '/etc/lvm/devices'
+
+
+def _report_filter_detection():
+ title = 'LVM filter definition detected.'
+ summary = (
+ 'Beginning with RHEL 9, LVM devices file is used by default to select devices used by '
+ f'LVM. Since leapp detected the use of LVM filter in the {LVM_CONFIG_PATH} configuration '
+ 'file, the configuration won\'t be modified to use devices file during the upgrade and '
+ 'the LVM filter will remain in use after the upgrade.'
+ )
+
+ remediation_hint = (
+ 'While not required, switching to the LVM devices file from the LVM filter is possible '
+ 'using the following command. The command uses the existing LVM filter to create the system.devices '
+ 'file which is then used instead of the LVM filter. Before running the command, '
+ f'make sure that \'use_devicesfile=1\' is set in {LVM_CONFIG_PATH}.'
+ )
+ remediation_command = ['vgimportdevices']
+
+ reporting.create_report([
+ reporting.Title(title),
+ reporting.Summary(summary),
+ reporting.Remediation(hint=remediation_hint, commands=[remediation_command]),
+ reporting.ExternalLink(
+ title='Limiting LVM device visibility and usage',
+ url='https://red.ht/limiting-lvm-devices-visibility-and-usage',
+ ),
+ reporting.Severity(reporting.Severity.INFO),
+ ])
+
+
+def check_lvm():
+ if not has_package(DistributionSignedRPM, 'lvm2'):
+ return
+
+ lvm_config = next(api.consume(LVMConfig), None)
+ if not lvm_config:
+ return
+
+ lvm_devices_file_path = os.path.join(LVM_DEVICES_FILE_PATH_PREFIX, lvm_config.devices.devicesfile)
+ lvm_devices_file_exists = os.path.isfile(lvm_devices_file_path)
+
+ filters_used = not lvm_config.devices.use_devicesfile or not lvm_devices_file_exists
+ if filters_used:
+ _report_filter_detection()
+
+ api.current_logger().debug('Including lvm dracut module.')
+ api.produce(UpgradeInitramfsTasks(include_dracut_modules=[DracutModule(name='lvm')]))
+
+ copy_files = []
+ api.current_logger().debug('Copying "{}" to the target userspace.'.format(LVM_CONFIG_PATH))
+ copy_files.append(CopyFile(src=LVM_CONFIG_PATH))
+
+ if lvm_devices_file_exists and lvm_config.devices.use_devicesfile:
+ api.current_logger().debug('Copying "{}" to the target userspace.'.format(lvm_devices_file_path))
+ copy_files.append(CopyFile(src=lvm_devices_file_path))
+
+ api.produce(TargetUserSpaceUpgradeTasks(copy_files=copy_files))
diff --git a/repos/system_upgrade/common/actors/checklvm/tests/test_checklvm.py b/repos/system_upgrade/common/actors/checklvm/tests/test_checklvm.py
new file mode 100644
index 00000000..a7da8050
--- /dev/null
+++ b/repos/system_upgrade/common/actors/checklvm/tests/test_checklvm.py
@@ -0,0 +1,92 @@
+import os
+
+import pytest
+
+from leapp.libraries.actor import checklvm
+from leapp.libraries.common.testutils import produce_mocked
+from leapp.libraries.stdlib import api
+from leapp.models import (
+ DistributionSignedRPM,
+ LVMConfig,
+ LVMConfigDevicesSection,
+ RPM,
+ TargetUserSpaceUpgradeTasks,
+ UpgradeInitramfsTasks
+)
+
+
+def test_check_lvm_when_lvm_not_installed(monkeypatch):
+ def consume_mocked(model):
+ if model == LVMConfig:
+ assert False
+ if model == DistributionSignedRPM:
+ yield DistributionSignedRPM(items=[])
+
+ monkeypatch.setattr(api, 'produce', produce_mocked())
+ monkeypatch.setattr(api, 'consume', consume_mocked)
+
+ checklvm.check_lvm()
+
+ assert not api.produce.called
+
+
+@pytest.mark.parametrize(
+ ('config', 'create_report', 'devices_file_exists'),
+ [
+ (LVMConfig(devices=LVMConfigDevicesSection(use_devicesfile=False)), True, False),
+ (LVMConfig(devices=LVMConfigDevicesSection(use_devicesfile=True)), False, True),
+ (LVMConfig(devices=LVMConfigDevicesSection(use_devicesfile=True)), True, False),
+ (LVMConfig(devices=LVMConfigDevicesSection(use_devicesfile=False, devicesfile="test.devices")), True, False),
+ (LVMConfig(devices=LVMConfigDevicesSection(use_devicesfile=True, devicesfile="test.devices")), False, True),
+ (LVMConfig(devices=LVMConfigDevicesSection(use_devicesfile=True, devicesfile="test.devices")), True, False),
+ ]
+)
+def test_scan_when_lvm_installed(monkeypatch, config, create_report, devices_file_exists):
+ lvm_package = RPM(
+ name='lvm2',
+ version='2',
+ release='1',
+ epoch='1',
+ packager='',
+ arch='x86_64',
+ pgpsig='RSA/SHA256, Mon 01 Jan 1970 00:00:00 AM -03, Key ID 199e2f91fd431d51'
+ )
+
+ def isfile_mocked(_):
+ return devices_file_exists
+
+ def consume_mocked(model):
+ if model == LVMConfig:
+ yield config
+ if model == DistributionSignedRPM:
+ yield DistributionSignedRPM(items=[lvm_package])
+
+ def report_filter_detection_mocked():
+ assert create_report
+
+ monkeypatch.setattr(api, 'produce', produce_mocked())
+ monkeypatch.setattr(api, 'consume', consume_mocked)
+ monkeypatch.setattr(os.path, 'isfile', isfile_mocked)
+ monkeypatch.setattr(checklvm, '_report_filter_detection', report_filter_detection_mocked)
+
+ checklvm.check_lvm()
+
+    # LVM is installed, thus the dracut module is enabled and at least lvm.conf is copied
+ assert api.produce.called == 2
+ assert len(api.produce.model_instances) == 2
+
+ expected_copied_files = [checklvm.LVM_CONFIG_PATH]
+ if devices_file_exists and config.devices.use_devicesfile:
+ devices_file_path = os.path.join(checklvm.LVM_DEVICES_FILE_PATH_PREFIX, config.devices.devicesfile)
+ expected_copied_files.append(devices_file_path)
+
+ for produced_model in api.produce.model_instances:
+ assert isinstance(produced_model, (UpgradeInitramfsTasks, TargetUserSpaceUpgradeTasks))
+
+ if isinstance(produced_model, UpgradeInitramfsTasks):
+ assert len(produced_model.include_dracut_modules) == 1
+ assert produced_model.include_dracut_modules[0].name == 'lvm'
+ else:
+ assert len(produced_model.copy_files) == len(expected_copied_files)
+ for file in produced_model.copy_files:
+ assert file.src in expected_copied_files
diff --git a/repos/system_upgrade/common/actors/initramfs/upgradeinitramfsgenerator/actor.py b/repos/system_upgrade/common/actors/initramfs/upgradeinitramfsgenerator/actor.py
index d99bab48..c0c93036 100644
--- a/repos/system_upgrade/common/actors/initramfs/upgradeinitramfsgenerator/actor.py
+++ b/repos/system_upgrade/common/actors/initramfs/upgradeinitramfsgenerator/actor.py
@@ -6,6 +6,7 @@ from leapp.models import (
BootContent,
FIPSInfo,
LiveModeConfig,
+ LVMConfig,
TargetOSInstallationImage,
TargetUserSpaceInfo,
TargetUserSpaceUpgradeTasks,
@@ -31,6 +32,7 @@ class UpgradeInitramfsGenerator(Actor):
consumes = (
FIPSInfo,
LiveModeConfig,
+ LVMConfig,
RequiredUpgradeInitramPackages, # deprecated
TargetOSInstallationImage,
TargetUserSpaceInfo,
diff --git a/repos/system_upgrade/common/actors/initramfs/upgradeinitramfsgenerator/libraries/upgradeinitramfsgenerator.py b/repos/system_upgrade/common/actors/initramfs/upgradeinitramfsgenerator/libraries/upgradeinitramfsgenerator.py
index eefdb41a..03447b7c 100644
--- a/repos/system_upgrade/common/actors/initramfs/upgradeinitramfsgenerator/libraries/upgradeinitramfsgenerator.py
+++ b/repos/system_upgrade/common/actors/initramfs/upgradeinitramfsgenerator/libraries/upgradeinitramfsgenerator.py
@@ -12,6 +12,7 @@ from leapp.models import UpgradeDracutModule # deprecated
from leapp.models import (
BootContent,
LiveModeConfig,
+ LVMConfig,
TargetOSInstallationImage,
TargetUserSpaceInfo,
TargetUserSpaceUpgradeTasks,
@@ -364,20 +365,29 @@ def generate_initram_disk(context):
def fmt_module_list(module_list):
return ','.join(mod.name for mod in module_list)
+ env_variables = [
+ 'LEAPP_KERNEL_VERSION={kernel_version}',
+ 'LEAPP_ADD_DRACUT_MODULES="{dracut_modules}"',
+ 'LEAPP_KERNEL_ARCH={arch}',
+ 'LEAPP_ADD_KERNEL_MODULES="{kernel_modules}"',
+ 'LEAPP_DRACUT_INSTALL_FILES="{files}"'
+ ]
+
+ if next(api.consume(LVMConfig), None):
+ env_variables.append('LEAPP_DRACUT_LVMCONF="1"')
+
+ env_variables = ' '.join(env_variables)
+ env_variables = env_variables.format(
+ kernel_version=_get_target_kernel_version(context),
+ dracut_modules=fmt_module_list(initramfs_includes.dracut_modules),
+ kernel_modules=fmt_module_list(initramfs_includes.kernel_modules),
+ arch=api.current_actor().configuration.architecture,
+ files=' '.join(initramfs_includes.files)
+ )
+ cmd = os.path.join('/', INITRAM_GEN_SCRIPT_NAME)
+
# FIXME: issue #376
- context.call([
- '/bin/sh', '-c',
- 'LEAPP_KERNEL_VERSION={kernel_version} '
- 'LEAPP_ADD_DRACUT_MODULES="{dracut_modules}" LEAPP_KERNEL_ARCH={arch} '
- 'LEAPP_ADD_KERNEL_MODULES="{kernel_modules}" '
- 'LEAPP_DRACUT_INSTALL_FILES="{files}" {cmd}'.format(
- kernel_version=_get_target_kernel_version(context),
- dracut_modules=fmt_module_list(initramfs_includes.dracut_modules),
- kernel_modules=fmt_module_list(initramfs_includes.kernel_modules),
- arch=api.current_actor().configuration.architecture,
- files=' '.join(initramfs_includes.files),
- cmd=os.path.join('/', INITRAM_GEN_SCRIPT_NAME))
- ], env=env)
+ context.call(['/bin/sh', '-c', f'{env_variables} {cmd}'], env=env)
boot_files_info = copy_boot_files(context)
return boot_files_info
diff --git a/repos/system_upgrade/common/actors/scanlvmconfig/actor.py b/repos/system_upgrade/common/actors/scanlvmconfig/actor.py
new file mode 100644
index 00000000..23ed032d
--- /dev/null
+++ b/repos/system_upgrade/common/actors/scanlvmconfig/actor.py
@@ -0,0 +1,18 @@
+from leapp.actors import Actor
+from leapp.libraries.actor import scanlvmconfig
+from leapp.models import DistributionSignedRPM, LVMConfig
+from leapp.tags import FactsPhaseTag, IPUWorkflowTag
+
+
+class ScanLVMConfig(Actor):
+ """
+ Scan LVM configuration.
+ """
+
+ name = 'scan_lvm_config'
+ consumes = (DistributionSignedRPM,)
+ produces = (LVMConfig,)
+ tags = (FactsPhaseTag, IPUWorkflowTag)
+
+ def process(self):
+ scanlvmconfig.scan()
diff --git a/repos/system_upgrade/common/actors/scanlvmconfig/libraries/scanlvmconfig.py b/repos/system_upgrade/common/actors/scanlvmconfig/libraries/scanlvmconfig.py
new file mode 100644
index 00000000..37755e7c
--- /dev/null
+++ b/repos/system_upgrade/common/actors/scanlvmconfig/libraries/scanlvmconfig.py
@@ -0,0 +1,52 @@
+import os
+
+from leapp.libraries.common.config import version
+from leapp.libraries.common.rpms import has_package
+from leapp.libraries.stdlib import api
+from leapp.models import DistributionSignedRPM, LVMConfig, LVMConfigDevicesSection
+
+LVM_CONFIG_PATH = '/etc/lvm/lvm.conf'
+
+
+def _lvm_config_devices_parser(lvm_config_lines):
+ in_section = False
+ config = {}
+ for line in lvm_config_lines:
+ line = line.split("#", 1)[0].strip()
+ if not line:
+ continue
+ if "devices {" in line:
+ in_section = True
+ continue
+ if in_section and "}" in line:
+ in_section = False
+ if in_section:
+ value = line.split("=", 1)
+ config[value[0].strip()] = value[1].strip().strip('"')
+ return config
+
+
+def _read_config_lines(path):
+ with open(path) as lvm_conf_file:
+ return lvm_conf_file.readlines()
+
+
+def scan():
+ if not has_package(DistributionSignedRPM, 'lvm2'):
+ return
+
+ if not os.path.isfile(LVM_CONFIG_PATH):
+ api.current_logger().debug('The "{}" is not present on the system.'.format(LVM_CONFIG_PATH))
+ return
+
+ lvm_config_lines = _read_config_lines(LVM_CONFIG_PATH)
+ devices_section = _lvm_config_devices_parser(lvm_config_lines)
+
+ lvm_config_devices = LVMConfigDevicesSection(use_devicesfile=int(version.get_source_major_version()) > 8)
+ if 'devicesfile' in devices_section:
+ lvm_config_devices.devicesfile = devices_section['devicesfile']
+
+ if 'use_devicesfile' in devices_section and devices_section['use_devicesfile'] in ['0', '1']:
+ lvm_config_devices.use_devicesfile = devices_section['use_devicesfile'] == '1'
+
+ api.produce(LVMConfig(devices=lvm_config_devices))
diff --git a/repos/system_upgrade/common/actors/scanlvmconfig/tests/test_scanlvmconfig.py b/repos/system_upgrade/common/actors/scanlvmconfig/tests/test_scanlvmconfig.py
new file mode 100644
index 00000000..26728fd8
--- /dev/null
+++ b/repos/system_upgrade/common/actors/scanlvmconfig/tests/test_scanlvmconfig.py
@@ -0,0 +1,176 @@
+import os
+
+import pytest
+
+from leapp.libraries.actor import scanlvmconfig
+from leapp.libraries.common.config import version
+from leapp.libraries.common.testutils import CurrentActorMocked, produce_mocked
+from leapp.libraries.stdlib import api
+from leapp.models import DistributionSignedRPM, LVMConfig, LVMConfigDevicesSection, RPM
+
+
+@pytest.mark.parametrize(
+ ("config_as_lines", "config_as_dict"),
+ [
+ ([], {}),
+ (
+ ['devices {\n',
+ '\t# comment\n'
+ '}\n'],
+ {}
+ ),
+ (
+ ['global {\n',
+ 'use_lvmetad = 1\n',
+ '}\n'],
+ {}
+ ),
+ (
+ ['devices {\n',
+ 'filter = [ "r|/dev/cdrom|", "a|.*|" ]\n',
+ 'use_devicesfile=0\n',
+ 'devicesfile="file-name.devices"\n',
+ '}'],
+ {'filter': '[ "r|/dev/cdrom|", "a|.*|" ]',
+ 'use_devicesfile': '0',
+ 'devicesfile': 'file-name.devices'}
+ ),
+ (
+ ['devices {\n',
+ 'use_devicesfile = 1\n',
+ 'devicesfile = "file-name.devices"\n',
+ ' }\n'],
+ {'use_devicesfile': '1',
+ 'devicesfile': 'file-name.devices'}
+ ),
+ (
+ ['devices {\n',
+ ' # comment\n',
+ 'use_devicesfile = 1 # comment\n',
+ '#devicesfile = "file-name.devices"\n',
+ ' }\n'],
+ {'use_devicesfile': '1'}
+ ),
+ (
+ ['config {\n',
+ '# configuration section\n',
+ '\tabort_on_errors = 1\n',
+ '\tprofile_dir = "/etc/lvm/prifile\n',
+ '}\n',
+ 'devices {\n',
+ ' \n',
+ '\tfilter = ["a|.*|"] \n',
+ '\tuse_devicesfile=0\n',
+ '}\n',
+ 'allocation {\n',
+ '\tcling_tag_list = [ "@site1", "@site2" ]\n',
+ '\tcache_settings {\n',
+ '\t}\n',
+ '}\n'
+ ],
+ {'filter': '["a|.*|"]', 'use_devicesfile': '0'}
+ ),
+ ]
+
+)
+def test_lvm_config_devices_parser(config_as_lines, config_as_dict):
+ lvm_config = scanlvmconfig._lvm_config_devices_parser(config_as_lines)
+ assert lvm_config == config_as_dict
+
+
+def test_scan_when_lvm_not_installed(monkeypatch):
+ def isfile_mocked(_):
+ assert False
+
+ def read_config_lines_mocked(_):
+ assert False
+
+ msgs = [
+ DistributionSignedRPM(items=[])
+ ]
+
+ monkeypatch.setattr(api, 'current_actor', CurrentActorMocked(msgs=msgs))
+ monkeypatch.setattr(api, 'produce', produce_mocked())
+ monkeypatch.setattr(os.path, 'isfile', isfile_mocked)
+ monkeypatch.setattr(scanlvmconfig, '_read_config_lines', read_config_lines_mocked)
+
+ scanlvmconfig.scan()
+
+ assert not api.produce.called
+
+
+@pytest.mark.parametrize(
+ ('source_major_version', 'devices_section_dict', 'produced_devices_section'),
+ [
+ ('8', {}, LVMConfigDevicesSection(use_devicesfile=False)),
+ ('9', {}, LVMConfigDevicesSection(use_devicesfile=True)),
+ ('8', {
+ 'use_devicesfile': '0',
+ }, LVMConfigDevicesSection(use_devicesfile=False,
+ devicesfile='system.devices')
+ ),
+ ('9', {
+ 'use_devicesfile': '0',
+ 'devicesfile': 'file-name.devices'
+ }, LVMConfigDevicesSection(use_devicesfile=False,
+ devicesfile='file-name.devices')
+ ),
+
+ ('8', {
+ 'use_devicesfile': '1',
+ 'devicesfile': 'file-name.devices'
+ }, LVMConfigDevicesSection(use_devicesfile=True,
+ devicesfile='file-name.devices')
+ ),
+ ('9', {
+ 'use_devicesfile': '1',
+ }, LVMConfigDevicesSection(use_devicesfile=True,
+ devicesfile='system.devices')
+ ),
+
+ ]
+
+)
+def test_scan_when_lvm_installed(monkeypatch, source_major_version, devices_section_dict, produced_devices_section):
+
+ def isfile_mocked(file):
+ assert file == scanlvmconfig.LVM_CONFIG_PATH
+ return True
+
+ def read_config_lines_mocked(file):
+ assert file == scanlvmconfig.LVM_CONFIG_PATH
+ return ["test_line"]
+
+ def lvm_config_devices_parser_mocked(lines):
+ assert lines == ["test_line"]
+ return devices_section_dict
+
+ lvm_package = RPM(
+ name='lvm2',
+ version='2',
+ release='1',
+ epoch='1',
+ packager='',
+ arch='x86_64',
+ pgpsig='RSA/SHA256, Mon 01 Jan 1970 00:00:00 AM -03, Key ID 199e2f91fd431d51'
+ )
+
+ msgs = [
+ DistributionSignedRPM(items=[lvm_package])
+ ]
+
+ monkeypatch.setattr(api, 'current_actor', CurrentActorMocked(msgs=msgs))
+ monkeypatch.setattr(api, 'produce', produce_mocked())
+ monkeypatch.setattr(version, 'get_source_major_version', lambda: source_major_version)
+ monkeypatch.setattr(os.path, 'isfile', isfile_mocked)
+ monkeypatch.setattr(scanlvmconfig, '_read_config_lines', read_config_lines_mocked)
+ monkeypatch.setattr(scanlvmconfig, '_lvm_config_devices_parser', lvm_config_devices_parser_mocked)
+
+ scanlvmconfig.scan()
+
+ assert api.produce.called == 1
+ assert len(api.produce.model_instances) == 1
+
+ produced_model = api.produce.model_instances[0]
+ assert isinstance(produced_model, LVMConfig)
+ assert produced_model.devices == produced_devices_section
diff --git a/repos/system_upgrade/common/models/lvmconfig.py b/repos/system_upgrade/common/models/lvmconfig.py
new file mode 100644
index 00000000..ab5e7815
--- /dev/null
+++ b/repos/system_upgrade/common/models/lvmconfig.py
@@ -0,0 +1,26 @@
+from leapp.models import fields, Model
+from leapp.topics import SystemInfoTopic
+
+
+class LVMConfigDevicesSection(Model):
+ """The devices section from the LVM configuration."""
+ topic = SystemInfoTopic
+
+ use_devicesfile = fields.Boolean()
+ """
+ Determines whether only the devices in the devices file are used by LVM. Note
+    that the default value changed to True in RHEL 9.
+ """
+
+ devicesfile = fields.String(default="system.devices")
+ """
+ Defines the name of the devices file that should be used. The default devices
+ file is located in '/etc/lvm/devices/system.devices'.
+ """
+
+
+class LVMConfig(Model):
+ """LVM configuration split into sections."""
+ topic = SystemInfoTopic
+
+ devices = fields.Model(LVMConfigDevicesSection)
--
2.52.0

View File

@ -1,152 +0,0 @@
From c960a70efc2d9fbbd9819b0276bcd9fbac2416e9 Mon Sep 17 00:00:00 2001
From: Michal Hecko <mhecko@redhat.com>
Date: Mon, 8 Dec 2025 15:49:09 +0100
Subject: [PATCH 083/111] multipath: do not crash when there is no
multipath.conf
Our newly introduced handling of multipath in the upgrade initramfs has
a bug when it tries to check whether multipath_info.config_dir exists.
However, when no multipath config exists, a default message with
multipath_info.config_dir=None is produced, causing an unhandled
exception. This patch fixes the issue. Moreover, an additional issue
was discovered: updated configs were not *guaranteed* to be placed into
the target uspace.
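A minimal sketch of the guard described above, assuming only a multipath_info
object with is_configured and config_dir attributes (illustrative, not the
actor code itself):

    def should_request_configs(multipath_info):
        # No MultipathInfo message was received at all.
        if multipath_info is None:
            return False
        # Default message: multipath is not configured and config_dir may be
        # None, so it must not be inspected.
        if not multipath_info.is_configured:
            return False
        # Only now is it safe to work with multipath_info.config_dir.
        return True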
---
.../target_uspace_multipath_configs.py | 16 +++-
.../tests/test_target_uspace_configs.py | 86 +++++++++++++++++++
2 files changed, 101 insertions(+), 1 deletion(-)
create mode 100644 repos/system_upgrade/common/actors/multipath/target_uspace_configs/tests/test_target_uspace_configs.py
diff --git a/repos/system_upgrade/common/actors/multipath/target_uspace_configs/libraries/target_uspace_multipath_configs.py b/repos/system_upgrade/common/actors/multipath/target_uspace_configs/libraries/target_uspace_multipath_configs.py
index 0deda56b..72afc477 100644
--- a/repos/system_upgrade/common/actors/multipath/target_uspace_configs/libraries/target_uspace_multipath_configs.py
+++ b/repos/system_upgrade/common/actors/multipath/target_uspace_configs/libraries/target_uspace_multipath_configs.py
@@ -34,6 +34,11 @@ def request_mpath_confs(multipath_info):
for config_updates in api.consume(MultipathConfigUpdatesInfo):
for update in config_updates.updates:
+        # Detect an '/etc/multipath.conf > /etc/multipath.conf' entry and replace it with the
+        # patched version, i.e. 'PATCHED > /etc/multipath.conf'
+ if update.target_path in files_to_put_into_uspace:
+ del files_to_put_into_uspace[update.target_path]
+
files_to_put_into_uspace[update.updated_config_location] = update.target_path
# Note: original implementation would copy the /etc/multipath directory, which contains
@@ -56,11 +61,20 @@ def request_mpath_confs(multipath_info):
def process():
multipath_info = next(api.consume(MultipathInfo), None)
+
if not multipath_info:
api.current_logger().debug(
- 'Received no MultipathInfo message. No configfiles will '
+ 'Received no MultipathInfo message. No config files will '
+ 'be requested to be placed into target userspace.'
+ )
+ return
+
+ if not multipath_info.is_configured:
+ api.current_logger().debug(
+ 'Multipath is not configured. No config files will '
'be requested to be placed into target userspace.'
)
return
+
request_mpath_confs(multipath_info)
request_mpath_dracut_module_for_upgrade_initramfs()
diff --git a/repos/system_upgrade/common/actors/multipath/target_uspace_configs/tests/test_target_uspace_configs.py b/repos/system_upgrade/common/actors/multipath/target_uspace_configs/tests/test_target_uspace_configs.py
new file mode 100644
index 00000000..ffb63322
--- /dev/null
+++ b/repos/system_upgrade/common/actors/multipath/target_uspace_configs/tests/test_target_uspace_configs.py
@@ -0,0 +1,86 @@
+import os
+import shutil
+
+import pytest
+
+from leapp.libraries.actor import target_uspace_multipath_configs as actor_lib
+from leapp.libraries.common.testutils import CurrentActorMocked, produce_mocked
+from leapp.libraries.stdlib import api
+from leapp.models import (
+ MultipathConfigUpdatesInfo,
+ MultipathInfo,
+ TargetUserSpaceUpgradeTasks,
+ UpdatedMultipathConfig,
+ UpgradeInitramfsTasks
+)
+
+
+@pytest.mark.parametrize(
+ ('multipath_info', 'should_produce'),
+ [
+ (None, False), # No multipath info message
+ (MultipathInfo(is_configured=False), False), # Multipath is not configured
+ (MultipathInfo(is_configured=True, config_dir='/etc/multipath/conf.d'), True)
+ ]
+)
+def test_production_conditions(monkeypatch, multipath_info, should_produce):
+ """ Test whether messages are produced under right conditions. """
+ produce_mock = produce_mocked()
+ monkeypatch.setattr(api, 'produce', produce_mock)
+
+ msgs = [multipath_info] if multipath_info else []
+ if multipath_info and multipath_info.is_configured:
+ update = UpdatedMultipathConfig(
+ updated_config_location='/var/lib/leapp/proposed_changes/etc/multipath/conf.d/config.conf',
+ target_path='/etc/multipath/conf.d/config.conf'
+ )
+ msgs.append(MultipathConfigUpdatesInfo(updates=[update]))
+
+ actor_mock = CurrentActorMocked(msgs=msgs)
+ monkeypatch.setattr(api, 'current_actor', actor_mock)
+
+ def listdir_mock(path):
+ assert path == '/etc/multipath/conf.d'
+ return ['config.conf', 'config-not-to-be-touched.conf']
+
+ def exists_mock(path):
+ return path == '/etc/multipath/conf.d'
+
+ monkeypatch.setattr(os.path, 'exists', exists_mock)
+ monkeypatch.setattr(os, 'listdir', listdir_mock)
+
+ actor_lib.process()
+
+ if should_produce:
+ _target_uspace_tasks = [
+ msg for msg in produce_mock.model_instances if isinstance(msg, TargetUserSpaceUpgradeTasks)
+ ]
+ assert len(_target_uspace_tasks) == 1
+
+ target_uspace_tasks = _target_uspace_tasks[0]
+
+ copies = sorted((copy.src, copy.dst) for copy in target_uspace_tasks.copy_files)
+ expected_copies = [
+ (
+ '/etc/multipath.conf',
+ '/etc/multipath.conf'
+ ),
+ (
+ '/var/lib/leapp/proposed_changes/etc/multipath/conf.d/config.conf',
+ '/etc/multipath/conf.d/config.conf'
+ ),
+ (
+ '/etc/multipath/conf.d/config-not-to-be-touched.conf',
+ '/etc/multipath/conf.d/config-not-to-be-touched.conf'
+ )
+ ]
+ assert copies == sorted(expected_copies)
+
+ _upgrade_initramfs_tasks = [m for m in produce_mock.model_instances if isinstance(m, UpgradeInitramfsTasks)]
+ assert len(_upgrade_initramfs_tasks) == 1
+ upgrade_initramfs_tasks = _upgrade_initramfs_tasks[0]
+
+ dracut_modules = [dracut_mod.name for dracut_mod in upgrade_initramfs_tasks.include_dracut_modules]
+ assert dracut_modules == ['multipath']
+ else:
+ assert not produce_mock.called
--
2.52.0

View File

@ -1,470 +0,0 @@
From 5b3ccd99ece89f880acf42162e456710ea13b1d4 Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Tue, 18 Nov 2025 17:46:11 +0100
Subject: [PATCH 084/111] Replace distro specific packages during conversion
There are certain packages that are distribution specific and need to be
replaced in the DNF upgrade transaction with their target distro
counterpart when converting during the upgrade. For example the release
and logos packages. Some packages, such as packages containing
repository definitions or GPG keys, need to be removed without any
replacement.
This patch introduces a new convert/swapdistropackages actor to
accomplish this. Currently only packages that need to be handled during
CS->RHEL and AL->RHEL conversion are handled, however the actor contains
a config dict to easily add more paths.
Jira: RHEL-110568
---
.../convert/swapdistropackages/actor.py | 20 ++
.../libraries/swapdistropackages.py | 111 +++++++
.../tests/test_swapdistropackages.py | 291 ++++++++++++++++++
3 files changed, 422 insertions(+)
create mode 100644 repos/system_upgrade/common/actors/convert/swapdistropackages/actor.py
create mode 100644 repos/system_upgrade/common/actors/convert/swapdistropackages/libraries/swapdistropackages.py
create mode 100644 repos/system_upgrade/common/actors/convert/swapdistropackages/tests/test_swapdistropackages.py
diff --git a/repos/system_upgrade/common/actors/convert/swapdistropackages/actor.py b/repos/system_upgrade/common/actors/convert/swapdistropackages/actor.py
new file mode 100644
index 00000000..f8d9c446
--- /dev/null
+++ b/repos/system_upgrade/common/actors/convert/swapdistropackages/actor.py
@@ -0,0 +1,20 @@
+from leapp.actors import Actor
+from leapp.libraries.actor import swapdistropackages
+from leapp.models import DistributionSignedRPM, RpmTransactionTasks
+from leapp.tags import ChecksPhaseTag, IPUWorkflowTag
+
+
+class SwapDistroPackages(Actor):
+ """
+ Swap distribution specific packages.
+
+ Does nothing if not converting.
+ """
+
+ name = 'swap_distro_packages'
+ consumes = (DistributionSignedRPM,)
+ produces = (RpmTransactionTasks,)
+ tags = (IPUWorkflowTag, ChecksPhaseTag)
+
+ def process(self):
+ swapdistropackages.process()
diff --git a/repos/system_upgrade/common/actors/convert/swapdistropackages/libraries/swapdistropackages.py b/repos/system_upgrade/common/actors/convert/swapdistropackages/libraries/swapdistropackages.py
new file mode 100644
index 00000000..f7e2ce68
--- /dev/null
+++ b/repos/system_upgrade/common/actors/convert/swapdistropackages/libraries/swapdistropackages.py
@@ -0,0 +1,111 @@
+import fnmatch
+
+from leapp.exceptions import StopActorExecutionError
+from leapp.libraries.common.config import get_source_distro_id, get_target_distro_id
+from leapp.libraries.common.config.version import get_target_major_version
+from leapp.libraries.stdlib import api
+from leapp.models import DistributionSignedRPM, RpmTransactionTasks
+
+# Config for swapping distribution-specific RPMs
+# The keys can be in 2 "formats":
+# (<source_distro_id>, <target_distro_id>)
+# (<source_distro_id>, <target_distro_id>, <target_major_version as int>)
+# The "swap" dict maps packages on the source distro to their replacements on
+# the target distro
+# The "remove" set lists packages or glob pattern for matching packages from
+# the source distro to remove without any replacement.
+_CONFIG = {
+ ("centos", "rhel"): {
+ "swap": {
+ "centos-logos": "redhat-logos",
+ "centos-logos-httpd": "redhat-logos-httpd",
+ "centos-logos-ipa": "redhat-logos-ipa",
+ "centos-indexhtml": "redhat-indexhtml",
+ "centos-backgrounds": "redhat-backgrounds",
+ "centos-stream-release": "redhat-release",
+ },
+ "remove": {
+ "centos-gpg-keys",
+ "centos-stream-repos",
+            # various release packages, typically containing repofiles
+ "centos-release-*",
+            # present on CentOS (not Stream) 8, include them in case they are left over
+ "centos-linux-release",
+ "centos-linux-repos",
+ "centos-obsolete-packages",
+ },
+ },
+ ("almalinux", "rhel"): {
+ "swap": {
+ "almalinux-logos": "redhat-logos",
+ "almalinux-logos-httpd": "redhat-logos-httpd",
+ "almalinux-logos-ipa": "redhat-logos-ipa",
+ "almalinux-indexhtml": "redhat-indexhtml",
+ "almalinux-backgrounds": "redhat-backgrounds",
+ "almalinux-release": "redhat-release",
+ },
+ "remove": {
+ "almalinux-repos",
+ "almalinux-gpg-keys",
+
+ "almalinux-release-*",
+ "centos-release-*",
+ "elrepo-release",
+ "epel-release",
+ },
+ },
+}
+
+
+def _get_config(source_distro, target_distro, target_major):
+ key = (source_distro, target_distro, target_major)
+ config = _CONFIG.get(key)
+ if config:
+ return config
+
+ key = (source_distro, target_distro)
+ return _CONFIG.get(key)
+
+
+def _glob_match_rpms(rpms, pattern):
+ return [rpm for rpm in rpms if fnmatch.fnmatch(rpm, pattern)]
+
+
+def _make_transaction_tasks(config, rpms):
+ to_install = set()
+ to_remove = set()
+ for source_pkg, target_pkg in config.get("swap", {}).items():
+ if source_pkg in rpms:
+ to_remove.add(source_pkg)
+ to_install.add(target_pkg)
+
+ for pkg in config.get("remove", {}):
+ matches = _glob_match_rpms(rpms, pkg)
+ to_remove.update(matches)
+
+ return RpmTransactionTasks(to_install=list(to_install), to_remove=list(to_remove))
+
+
+def process():
+ rpms_msg = next(api.consume(DistributionSignedRPM), None)
+ if not rpms_msg:
+ raise StopActorExecutionError("Did not receive DistributionSignedRPM message")
+
+ source_distro = get_source_distro_id()
+ target_distro = get_target_distro_id()
+
+ if source_distro == target_distro:
+ return
+
+ config = _get_config(source_distro, target_distro, get_target_major_version())
+ if not config:
+ api.current_logger().warning(
+ "Could not find config for handling distro specific packages for {}->{} upgrade.".format(
+ source_distro, target_distro
+ )
+ )
+ return
+
+ rpms = {rpm.name for rpm in rpms_msg.items}
+ task = _make_transaction_tasks(config, rpms)
+ api.produce(task)
diff --git a/repos/system_upgrade/common/actors/convert/swapdistropackages/tests/test_swapdistropackages.py b/repos/system_upgrade/common/actors/convert/swapdistropackages/tests/test_swapdistropackages.py
new file mode 100644
index 00000000..99bb9c20
--- /dev/null
+++ b/repos/system_upgrade/common/actors/convert/swapdistropackages/tests/test_swapdistropackages.py
@@ -0,0 +1,291 @@
+from unittest import mock
+
+import pytest
+
+from leapp.exceptions import StopActorExecutionError
+from leapp.libraries.actor import swapdistropackages
+from leapp.libraries.common.testutils import CurrentActorMocked, logger_mocked, produce_mocked
+from leapp.libraries.stdlib import api
+from leapp.models import DistributionSignedRPM, RPM, RpmTransactionTasks
+
+
+def test_get_config(monkeypatch):
+ test_config = {
+ ("centos", "rhel"): {
+ "swap": {"pkgA": "pkgB"},
+ "remove": {
+ "pkgC",
+ },
+ },
+ ("centos", "rhel", 10): {"swap": {"pkg1": "pkg2"}},
+ }
+ monkeypatch.setattr(swapdistropackages, "_CONFIG", test_config)
+
+ expect = {
+ "swap": {"pkgA": "pkgB"},
+ "remove": {
+ "pkgC",
+ },
+ }
+ # fallback to (centos, rhel) when there is no target version specific config
+ cfg = swapdistropackages._get_config("centos", "rhel", 9)
+ assert cfg == expect
+
+    # has its own target version specific config
+ cfg = swapdistropackages._get_config("centos", "rhel", 10)
+ assert cfg == {"swap": {"pkg1": "pkg2"}}
+
+ # not mapped
+ cfg = swapdistropackages._get_config("almalinux", "rhel", 9)
+ assert not cfg
+
+
+@pytest.mark.parametrize(
+ "rpms,config,expected",
+ [
+ (
+ ["pkgA", "pkgB", "pkgC"],
+ {
+ "swap": {"pkgA": "pkgB"},
+ "remove": {
+ "pkgC",
+ },
+ },
+ RpmTransactionTasks(to_install=["pkgB"], to_remove=["pkgA", "pkgC"]),
+ ),
+ # only some pkgs present
+ (
+ ["pkg1", "pkgA", "pkg-other"],
+ {
+ "swap": {"pkgX": "pkgB", "pkg1": "pkg2"},
+ "remove": {"pkg*"},
+ },
+ RpmTransactionTasks(
+ to_install=["pkg2"], to_remove=["pkgA", "pkg1", "pkg-other"]
+ ),
+ ),
+ (
+ ["pkgA", "pkgB"],
+ {},
+ RpmTransactionTasks(to_install=[], to_remove=[]),
+ ),
+ ],
+)
+def test__make_transaction_tasks(rpms, config, expected):
+ tasks = swapdistropackages._make_transaction_tasks(config, rpms)
+ assert set(tasks.to_install) == set(expected.to_install)
+ assert set(tasks.to_remove) == set(expected.to_remove)
+
+
+def test_process_ok(monkeypatch):
+ def _msg_pkgs(pkgnames):
+ rpms = []
+ for name in pkgnames:
+ rpms.append(RPM(
+ name=name,
+ epoch="0",
+ packager="packager",
+ version="1.2",
+ release="el9",
+ arch="noarch",
+ pgpsig="",
+ ))
+ return DistributionSignedRPM(items=rpms)
+
+ rpms = [
+ "centos-logos",
+ "centos-logos-httpd",
+ "centos-logos-ipa",
+ "centos-indexhtml",
+ "centos-backgrounds",
+ "centos-stream-release",
+ "centos-gpg-keys",
+ "centos-stream-repos",
+ "centos-linux-release",
+ "centos-linux-repos",
+ "centos-obsolete-packages",
+ "centos-release-automotive",
+ "centos-release-automotive-experimental",
+ "centos-release-autosd",
+ "centos-release-ceph-pacific",
+ "centos-release-ceph-quincy",
+ "centos-release-ceph-reef",
+ "centos-release-ceph-squid",
+ "centos-release-ceph-tentacle",
+ "centos-release-cloud",
+ "centos-release-gluster10",
+ "centos-release-gluster11",
+ "centos-release-gluster9",
+ "centos-release-hyperscale",
+ "centos-release-hyperscale-experimental",
+ "centos-release-hyperscale-experimental-testing",
+ "centos-release-hyperscale-spin",
+ "centos-release-hyperscale-spin-testing",
+ "centos-release-hyperscale-testing",
+ "centos-release-isa-override",
+ "centos-release-kmods",
+ "centos-release-kmods-kernel",
+ "centos-release-kmods-kernel-6",
+ "centos-release-messaging",
+ "centos-release-nfs-ganesha4",
+ "centos-release-nfs-ganesha5",
+ "centos-release-nfs-ganesha6",
+ "centos-release-nfs-ganesha7",
+ "centos-release-nfs-ganesha8",
+ "centos-release-nfv-common",
+ "centos-release-nfv-openvswitch",
+ "centos-release-okd-4",
+ "centos-release-openstack-antelope",
+ "centos-release-openstack-bobcat",
+ "centos-release-openstack-caracal",
+ "centos-release-openstack-dalmatian",
+ "centos-release-openstack-epoxy",
+ "centos-release-openstack-yoga",
+ "centos-release-openstack-zed",
+ "centos-release-openstackclient-xena",
+ "centos-release-opstools",
+ "centos-release-ovirt45",
+ "centos-release-ovirt45-testing",
+ "centos-release-proposed_updates",
+ "centos-release-rabbitmq-38",
+ "centos-release-samba414",
+ "centos-release-samba415",
+ "centos-release-samba416",
+ "centos-release-samba417",
+ "centos-release-samba418",
+ "centos-release-samba419",
+ "centos-release-samba420",
+ "centos-release-samba421",
+ "centos-release-samba422",
+ "centos-release-samba423",
+ "centos-release-storage-common",
+ "centos-release-virt-common",
+ ]
+ curr_actor_mocked = CurrentActorMocked(
+ src_distro="centos",
+ dst_distro="rhel",
+ msgs=[_msg_pkgs(rpms)],
+ )
+ monkeypatch.setattr(api, 'current_actor', curr_actor_mocked)
+ produce_mock = produce_mocked()
+ monkeypatch.setattr(api, 'produce', produce_mock)
+
+ swapdistropackages.process()
+
+ expected = RpmTransactionTasks(
+ to_install=[
+ "redhat-logos",
+ "redhat-logos-httpd",
+ "redhat-logos-ipa",
+ "redhat-indexhtml",
+ "redhat-backgrounds",
+ "redhat-release",
+ ],
+ to_remove=rpms,
+ )
+
+ assert produce_mock.called == 1
+ produced = produce_mock.model_instances[0]
+ assert set(produced.to_install) == set(expected.to_install)
+ assert set(produced.to_remove) == set(expected.to_remove)
+
+
+def test_process_no_config_skip(monkeypatch):
+ curr_actor_mocked = CurrentActorMocked(
+ src_distro="distroA", dst_distro="distroB", msgs=[DistributionSignedRPM()]
+ )
+ monkeypatch.setattr(api, "current_actor", curr_actor_mocked)
+ monkeypatch.setattr(swapdistropackages, "_get_config", lambda *args: None)
+ monkeypatch.setattr(api, "current_logger", logger_mocked())
+ produce_mock = produce_mocked()
+ monkeypatch.setattr(api, "produce", produce_mock)
+
+ swapdistropackages.process()
+
+ assert produce_mock.called == 0
+ assert (
+ "Could not find config for handling distro specific packages for distroA->distroB upgrade"
+ ) in api.current_logger.warnmsg[0]
+
+
+@pytest.mark.parametrize("distro", ["rhel", "centos"])
+def test_process_not_converting_skip(monkeypatch, distro):
+ curr_actor_mocked = CurrentActorMocked(
+ src_distro=distro, dst_distro=distro, msgs=[DistributionSignedRPM()]
+ )
+ monkeypatch.setattr(api, "current_actor", curr_actor_mocked)
+ monkeypatch.setattr(api, "current_logger", logger_mocked())
+ produce_mock = produce_mocked()
+ monkeypatch.setattr(api, "produce", produce_mock)
+
+ with mock.patch(
+ "leapp.libraries.actor.swapdistropackages._get_config"
+ ) as _get_config_mocked:
+ swapdistropackages.process()
+ _get_config_mocked.assert_not_called()
+ assert produce_mock.called == 0
+
+
+def test_process_no_rpms_mgs(monkeypatch):
+ curr_actor_mocked = CurrentActorMocked(src_distro='centos', dst_distro='rhel')
+ monkeypatch.setattr(api, "current_actor", curr_actor_mocked)
+ produce_mock = produce_mocked()
+ monkeypatch.setattr(api, "produce", produce_mock)
+
+ with pytest.raises(
+ StopActorExecutionError,
+ match="Did not receive DistributionSignedRPM message"
+ ):
+ swapdistropackages.process()
+
+ assert produce_mock.called == 0
+
+
+@pytest.mark.parametrize(
+ "pattern, expect",
+ [
+ (
+ "centos-release-*",
+ [
+ "centos-release-samba420",
+ "centos-release-okd-4",
+ "centos-release-opstools",
+ ],
+ ),
+ (
+ "almalinux-release-*",
+ [
+ "almalinux-release-testing",
+ "almalinux-release-devel",
+ ],
+ ),
+ (
+ "epel-release",
+ ["epel-release"],
+ ),
+ ],
+)
+def test_glob_match_rpms(pattern, expect):
+ """
+ A simple test making sure the fnmatch works correctly for RPM names
+ since it was originally meant for filepaths.
+ """
+
+ TEST_GLOB_RPMS = [
+ "centos-release-samba420",
+ "centos-stream-repos",
+ "centos-release-okd-4",
+ "centos-release",
+ "centos-release-opstools",
+ "release-centos",
+ "almalinux-release-devel",
+ "almalinux-release",
+ "almalinux-repos",
+ "release-almalinux",
+ "vim",
+ "epel-release",
+ "almalinux-release-testing",
+ "gcc-devel"
+ ]
+ actual = swapdistropackages._glob_match_rpms(TEST_GLOB_RPMS, pattern)
+ assert set(actual) == set(expect)
--
2.52.0

View File

@ -1,145 +0,0 @@
From 78e226508a201c16354a8acfd5238787872505a8 Mon Sep 17 00:00:00 2001
From: Daniel Diblik <ddiblik@redhat.com>
Date: Mon, 10 Nov 2025 16:04:19 +0100
Subject: [PATCH 085/111] Enable CentOS Stream test pipelines
Signed-off-by: Daniel Diblik <ddiblik@redhat.com>
---
.packit.yaml | 103 +++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 103 insertions(+)
diff --git a/.packit.yaml b/.packit.yaml
index 83b7ce6a..e158c7e4 100644
--- a/.packit.yaml
+++ b/.packit.yaml
@@ -460,6 +460,15 @@ jobs:
tmt:
plan_filter: 'tag:9to10'
environments:
+ - &tmt-env-settings-centos9to10
+ tmt:
+ context: &tmt-context-centos9to10
+ distro: "centos-9"
+ distro_target: "centos-10"
+ settings:
+ provisioning:
+ tags:
+ BusinessUnit: sst_upgrades@leapp_upstream_test
- &tmt-env-settings-96to100
tmt:
context: &tmt-context-96to100
@@ -478,6 +487,15 @@ jobs:
provisioning:
tags:
BusinessUnit: sst_upgrades@leapp_upstream_test
+ - &tmt-env-settings-centos9torhel101
+ tmt:
+ context: &tmt-context-centos9torhel101
+ distro: "centos-9"
+ distro_target: "rhel-10.1"
+ settings:
+ provisioning:
+ tags:
+ BusinessUnit: sst_upgrades@leapp_upstream_test
- &tmt-env-settings-98to102
tmt:
context: &tmt-context-98to102
@@ -487,6 +505,15 @@ jobs:
provisioning:
tags:
BusinessUnit: sst_upgrades@leapp_upstream_test
+ - &tmt-env-settings-centos9torhel102
+ tmt:
+ context: &tmt-context-centos9torhel102
+ distro: "centos-9"
+ distro_target: "rhel-10.2"
+ settings:
+ provisioning:
+ tags:
+ BusinessUnit: sst_upgrades@leapp_upstream_test
- &sanity-abstract-9to10-aws
<<: *sanity-abstract-9to10
@@ -705,3 +732,79 @@ jobs:
env:
<<: *env-98to102
+# ###################################################################### #
+# ########################## CentOS Stream ############################# #
+# ###################################################################### #
+
+# ###################################################################### #
+# ###################### CentOS Stream > RHEL ########################## #
+# ###################################################################### #
+
+# ###################################################################### #
+# ############################ 9 > 10.1 ################################ #
+# ###################################################################### #
+
+- &sanity-centos9torhel101
+ <<: *sanity-abstract-9to10
+ trigger: pull_request
+ identifier: sanity-CentOS9toRHEL10.1
+ targets:
+ epel-9-x86_64:
+ distros: [CentOS-Stream-9]
+ tf_extra_params:
+ test:
+ tmt:
+ plan_filter: 'tag:9to10 & tag:tier0 & enabled:true & tag:-rhsm'
+ environments:
+ - *tmt-env-settings-centos9torhel101
+ env: &env-centos9to101
+ SOURCE_RELEASE: "9"
+ TARGET_RELEASE: "10.1"
+
+# ###################################################################### #
+# ############################ 9 > 10.2 ################################ #
+# ###################################################################### #
+
+- &sanity-centos9torhel102
+ <<: *sanity-abstract-9to10
+ trigger: pull_request
+ identifier: sanity-CentOS9toRHEL10.2
+ targets:
+ epel-9-x86_64:
+ distros: [CentOS-Stream-9]
+ tf_extra_params:
+ test:
+ tmt:
+ plan_filter: 'tag:9to10 & tag:tier0 & enabled:true & tag:-rhsm'
+ name:
+ environments:
+ - *tmt-env-settings-centos9torhel102
+ env: &env-centos9torhel102
+ SOURCE_RELEASE: "9"
+ TARGET_RELEASE: "10.2"
+
+# ###################################################################### #
+# ################## CentOS Stream > CentOS Stream ##################### #
+# ###################################################################### #
+
+# ###################################################################### #
+# ############################## 9 > 10 ################################ #
+# ###################################################################### #
+
+- &sanity-centos-9to10
+ <<: *sanity-abstract-9to10
+ trigger: pull_request
+ identifier: sanity-CentOS9to10
+ targets:
+ epel-9-x86_64:
+ distros: [CentOS-Stream-9]
+ tf_extra_params:
+ test:
+ tmt:
+ plan_filter: 'tag:9to10 & tag:tier0 & enabled:true & tag:-rhsm'
+ environments:
+ - *tmt-env-settings-centos9to10
+ env: &env-centos9to10
+ SOURCE_RELEASE: "9"
+ TARGET_RELEASE: "10"
+ TARGET_OS: "centos"
--
2.52.0

View File

@ -1,26 +0,0 @@
From 4ddf53061291db9b9bbd921a320ba2f306a2ffc8 Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Mon, 8 Dec 2025 14:03:30 +0100
Subject: [PATCH 086/111] docs: Fix search not working
The jquery.js file was not getting properly put into the
build/html/_static/ directory. Removing this line seems to fix that.
---
docs/source/conf.py | 1 -
1 file changed, 1 deletion(-)
diff --git a/docs/source/conf.py b/docs/source/conf.py
index a0e6a1de..dd39d3fa 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -40,7 +40,6 @@ exclude_patterns = []
html_static_path = ['_static']
html_theme = 'sphinx_rtd_theme'
-html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
pygments_style = 'sphinx'
--
2.52.0

View File

@ -1,95 +0,0 @@
From 4105452bc89b36359124f5a20d17b73b7512a928 Mon Sep 17 00:00:00 2001
From: karolinku <kkula@redhat.com>
Date: Mon, 15 Dec 2025 12:16:03 +0100
Subject: [PATCH 087/111] Handle invalid values for case-sensitive SSH options
Catch ModelViolationError when parsing sshd configuration files that
contain invalid values for case-sensitive options like PermitRootLogin
and UsePrivilegeSeparation.
This change provides a clear error message
explaining that arguments are case-sensitive and lists the valid values
based on the model definition.
Jira: RHEL-19247
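A minimal sketch of the validation pattern described above, using a plain
tuple of valid values instead of the leapp model definition (the
PermitRootLogin arguments listed here are assumptions based on sshd_config(5)):

    VALID_PERMIT_ROOT_LOGIN = ('yes', 'prohibit-password', 'forced-commands-only', 'no')

    def validate_permit_root_login(value):
        # sshd_config arguments are case-sensitive, so e.g. 'NO' must be rejected.
        if value not in VALID_PERMIT_ROOT_LOGIN:
            raise ValueError(
                'Invalid value "{}" for PermitRootLogin. Arguments are case-sensitive; '
                'valid values are: {}.'.format(value, ', '.join(VALID_PERMIT_ROOT_LOGIN))
            )
        return value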
---
.../libraries/readopensshconfig.py | 26 ++++++++++++++++++-
..._readopensshconfig_opensshconfigscanner.py | 13 ++++++++++
2 files changed, 38 insertions(+), 1 deletion(-)
diff --git a/repos/system_upgrade/common/actors/opensshconfigscanner/libraries/readopensshconfig.py b/repos/system_upgrade/common/actors/opensshconfigscanner/libraries/readopensshconfig.py
index 50e37092..f467676b 100644
--- a/repos/system_upgrade/common/actors/opensshconfigscanner/libraries/readopensshconfig.py
+++ b/repos/system_upgrade/common/actors/opensshconfigscanner/libraries/readopensshconfig.py
@@ -7,6 +7,7 @@ from leapp.exceptions import StopActorExecutionError
from leapp.libraries.common.rpms import check_file_modification
from leapp.libraries.stdlib import api
from leapp.models import OpenSshConfig, OpenSshPermitRootLogin
+from leapp.models.fields import ModelViolationError
CONFIG = '/etc/ssh/sshd_config'
DEPRECATED_DIRECTIVES = ['showpatchlevel']
@@ -60,12 +61,35 @@ def parse_config(config, base_config=None, current_cfg_depth=0):
# convert deprecated alias
if value == "without-password":
value = "prohibit-password"
- v = OpenSshPermitRootLogin(value=value, in_match=in_match)
+ try:
+ v = OpenSshPermitRootLogin(value=value, in_match=in_match)
+ except ModelViolationError:
+ valid_values = OpenSshPermitRootLogin.value.serialize()['choices']
+ raise StopActorExecutionError(
+ 'Invalid SSH configuration: Invalid value for PermitRootLogin',
+ details={
+ 'details': 'Invalid value "{}" for PermitRootLogin in {}. '
+ 'Arguments for SSH configuration options are case-sensitive. '
+ 'Valid values are: {}.'
+ .format(value, CONFIG, ', '.join(valid_values))
+ }
+ )
ret.permit_root_login.append(v)
elif el[0].lower() == 'useprivilegeseparation':
# Record only first occurrence, which is effective
if not ret.use_privilege_separation:
+ valid_values = OpenSshConfig.use_privilege_separation.serialize()['choices']
+ if value not in valid_values:
+ raise StopActorExecutionError(
+ 'Invalid SSH configuration: Invalid value for UsePrivilegeSeparation',
+ details={
+ 'details': 'Invalid value "{}" for UsePrivilegeSeparation in {}. '
+ 'Arguments for SSH configuration options are case-sensitive. '
+ 'Valid values are: {}.'
+ .format(value, CONFIG, ', '.join(valid_values))
+ }
+ )
ret.use_privilege_separation = value
elif el[0].lower() == 'protocol':
diff --git a/repos/system_upgrade/common/actors/opensshconfigscanner/tests/test_readopensshconfig_opensshconfigscanner.py b/repos/system_upgrade/common/actors/opensshconfigscanner/tests/test_readopensshconfig_opensshconfigscanner.py
index 64c16f7f..1a6a1c9f 100644
--- a/repos/system_upgrade/common/actors/opensshconfigscanner/tests/test_readopensshconfig_opensshconfigscanner.py
+++ b/repos/system_upgrade/common/actors/opensshconfigscanner/tests/test_readopensshconfig_opensshconfigscanner.py
@@ -351,6 +351,19 @@ def test_produce_config():
assert cfg.subsystem_sftp == 'internal-sftp'
+@pytest.mark.parametrize('config_line,option_name,invalid_value', [
+ ('PermitRootLogin NO', 'PermitRootLogin', 'NO'),
+ ('UsePrivilegeSeparation YES', 'UsePrivilegeSeparation', 'YES'),
+])
+def test_parse_config_invalid_option_case(config_line, option_name, invalid_value):
+ config = [config_line]
+
+ with pytest.raises(StopActorExecutionError) as err:
+ parse_config(config)
+
+ assert str(err.value).startswith('Invalid SSH configuration')
+
+
def test_actor_execution(current_actor_context):
current_actor_context.run()
assert current_actor_context.consume(OpenSshConfig)
--
2.52.0

View File

@ -1,38 +0,0 @@
From f66867ab6dfcc998bf8df39753639936d5552048 Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Thu, 21 Aug 2025 18:28:54 +0200
Subject: [PATCH 088/111] pes_events_scanner: Also remove RHEL 9 events in
remove_leapp_related_events()
---
.../peseventsscanner/libraries/pes_events_scanner.py | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/repos/system_upgrade/common/actors/peseventsscanner/libraries/pes_events_scanner.py b/repos/system_upgrade/common/actors/peseventsscanner/libraries/pes_events_scanner.py
index 67e517d1..02107314 100644
--- a/repos/system_upgrade/common/actors/peseventsscanner/libraries/pes_events_scanner.py
+++ b/repos/system_upgrade/common/actors/peseventsscanner/libraries/pes_events_scanner.py
@@ -492,15 +492,16 @@ def apply_transaction_configuration(source_pkgs, transaction_configuration):
def remove_leapp_related_events(events):
- # NOTE(ivasilev) Need to revisit this once rhel9->rhel10 upgrades become a thing
- leapp_pkgs = rpms.get_leapp_dep_packages(
- major_version=['7', '8']) + rpms.get_leapp_packages(major_version=['7', '8'])
+ major_vers = ['7', '8', '9']
+ leapp_pkgs = rpms.get_leapp_dep_packages(major_vers) + rpms.get_leapp_packages(major_vers)
res = []
for event in events:
if not any(pkg.name in leapp_pkgs for pkg in event.in_pkgs):
res.append(event)
else:
- api.current_logger().debug('Filtered out leapp related event, event id: {}'.format(event.id))
+ api.current_logger().debug(
+ 'Filtered out leapp related event, event id: {}'.format(event.id)
+ )
return res
--
2.52.0

View File

@ -1,26 +0,0 @@
From 0dce9ea14e28804746e10c40e659fbe525f6787a Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Thu, 21 Aug 2025 19:18:39 +0200
Subject: [PATCH 089/111] lib/overlaygen: Fix possibly unbound var
---
repos/system_upgrade/common/libraries/overlaygen.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/repos/system_upgrade/common/libraries/overlaygen.py b/repos/system_upgrade/common/libraries/overlaygen.py
index 83dc33b8..81342557 100644
--- a/repos/system_upgrade/common/libraries/overlaygen.py
+++ b/repos/system_upgrade/common/libraries/overlaygen.py
@@ -670,8 +670,8 @@ def _overlay_disk_size_old():
"""
Convenient function to retrieve the overlay disk size
"""
+ env_size = get_env('LEAPP_OVL_SIZE', '2048')
try:
- env_size = get_env('LEAPP_OVL_SIZE', '2048')
disk_size = int(env_size)
except ValueError:
disk_size = 2048
--
2.52.0

View File

@ -1,124 +0,0 @@
From bdcd9440b1ca3130e40d98233d60b76bdd674b3b Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Thu, 21 Aug 2025 17:36:19 +0200
Subject: [PATCH 090/111] lib/rhui: Remove RHEL 7 RHUI setups
---
.../checkrhui/tests/component_test_checkrhui.py | 2 +-
repos/system_upgrade/common/libraries/rhui.py | 17 +----------------
2 files changed, 2 insertions(+), 17 deletions(-)
diff --git a/repos/system_upgrade/common/actors/cloud/checkrhui/tests/component_test_checkrhui.py b/repos/system_upgrade/common/actors/cloud/checkrhui/tests/component_test_checkrhui.py
index 2e6f279e..7fa2112f 100644
--- a/repos/system_upgrade/common/actors/cloud/checkrhui/tests/component_test_checkrhui.py
+++ b/repos/system_upgrade/common/actors/cloud/checkrhui/tests/component_test_checkrhui.py
@@ -108,7 +108,7 @@ def mk_cloud_map(variants):
]
)
def test_determine_rhui_src_variant(monkeypatch, extra_pkgs, rhui_setups, expected_result):
- actor = CurrentActorMocked(src_ver='7.9', config=_make_default_config(all_rhui_cfg))
+ actor = CurrentActorMocked(src_ver='8.10', config=_make_default_config(all_rhui_cfg))
monkeypatch.setattr(api, 'current_actor', actor)
installed_pkgs = {'zip', 'zsh', 'bash', 'grubby'}.union(set(extra_pkgs))
diff --git a/repos/system_upgrade/common/libraries/rhui.py b/repos/system_upgrade/common/libraries/rhui.py
index c90c8c14..e200075f 100644
--- a/repos/system_upgrade/common/libraries/rhui.py
+++ b/repos/system_upgrade/common/libraries/rhui.py
@@ -8,9 +8,6 @@ from leapp.libraries.common.config.version import get_source_major_version, get_
from leapp.libraries.stdlib import api
from leapp.utils.deprecation import deprecated
-# when on AWS and upgrading from RHEL 7, we need also Python2 version of "Amazon-id" dnf
-# plugin which is served by "leapp-rhui-aws" rpm package (please note this package is not
-# in any RH official repository but only in "rhui-client-config-*" repo)
DNF_PLUGIN_PATH_PY2 = '/usr/lib/python2.7/site-packages/dnf-plugins/'
YUM_REPOS_PATH = '/etc/yum.repos.d'
@@ -101,7 +98,7 @@ class RHUIFamily:
def mk_rhui_setup(clients=None, leapp_pkg='', mandatory_files=None, optional_files=None,
- extra_info=None, os_version='7.0', arch=arch.ARCH_X86_64, content_channel=ContentChannel.GA,
+ extra_info=None, os_version='8.0', arch=arch.ARCH_X86_64, content_channel=ContentChannel.GA,
files_supporting_client_operation=None):
os_version_fragments = os_version.split('.')
@@ -131,7 +128,6 @@ def mk_rhui_setup(clients=None, leapp_pkg='', mandatory_files=None, optional_fil
# the search for target equivalent to setups sharing the same family, and thus reducing a chance of error.
RHUI_SETUPS = {
RHUIFamily(RHUIProvider.AWS, client_files_folder='aws'): [
- mk_rhui_setup(clients={'rh-amazon-rhui-client'}, optional_files=[], os_version='7'),
mk_rhui_setup(clients={'rh-amazon-rhui-client'}, leapp_pkg='leapp-rhui-aws',
mandatory_files=[
('rhui-client-config-server-8.crt', RHUI_PKI_PRODUCT_DIR),
@@ -171,7 +167,6 @@ RHUI_SETUPS = {
], os_version='10'),
],
RHUIFamily(RHUIProvider.AWS, arch=arch.ARCH_ARM64, client_files_folder='aws'): [
- mk_rhui_setup(clients={'rh-amazon-rhui-client-arm'}, optional_files=[], os_version='7', arch=arch.ARCH_ARM64),
mk_rhui_setup(clients={'rh-amazon-rhui-client'}, leapp_pkg='leapp-rhui-aws',
mandatory_files=[
('rhui-client-config-server-8.crt', RHUI_PKI_PRODUCT_DIR),
@@ -209,8 +204,6 @@ RHUI_SETUPS = {
], os_version='10'),
],
RHUIFamily(RHUIProvider.AWS, variant=RHUIVariant.SAP, client_files_folder='aws-sap-e4s'): [
- mk_rhui_setup(clients={'rh-amazon-rhui-client-sap-bundle'}, optional_files=[], os_version='7',
- content_channel=ContentChannel.E4S),
mk_rhui_setup(clients={'rh-amazon-rhui-client-sap-bundle-e4s'}, leapp_pkg='leapp-rhui-aws-sap-e4s',
mandatory_files=[
('rhui-client-config-server-8-sap-bundle.crt', RHUI_PKI_PRODUCT_DIR),
@@ -265,8 +258,6 @@ RHUI_SETUPS = {
], os_version='10', content_channel=ContentChannel.E4S),
],
RHUIFamily(RHUIProvider.AZURE, client_files_folder='azure'): [
- mk_rhui_setup(clients={'rhui-azure-rhel7'}, os_version='7',
- extra_info={'agent_pkg': 'WALinuxAgent'}),
mk_rhui_setup(clients={'rhui-azure-rhel8'}, leapp_pkg='leapp-rhui-azure',
mandatory_files=[('leapp-azure.repo', YUM_REPOS_PATH)],
optional_files=[
@@ -298,7 +289,6 @@ RHUI_SETUPS = {
os_version='10'),
],
RHUIFamily(RHUIProvider.AZURE, variant=RHUIVariant.SAP_APPS, client_files_folder='azure-sap-apps'): [
- mk_rhui_setup(clients={'rhui-azure-rhel7-base-sap-apps'}, os_version='7', content_channel=ContentChannel.EUS),
mk_rhui_setup(clients={'rhui-azure-rhel8-sapapps'}, leapp_pkg='leapp-rhui-azure-sap',
mandatory_files=[('leapp-azure-sap-apps.repo', YUM_REPOS_PATH)],
optional_files=[
@@ -336,7 +326,6 @@ RHUI_SETUPS = {
os_version='10', content_channel=ContentChannel.EUS),
],
RHUIFamily(RHUIProvider.AZURE, variant=RHUIVariant.SAP_HA, client_files_folder='azure-sap-ha'): [
- mk_rhui_setup(clients={'rhui-azure-rhel7-base-sap-ha'}, os_version='7', content_channel=ContentChannel.E4S),
mk_rhui_setup(clients={'rhui-azure-rhel8-sap-ha'}, leapp_pkg='leapp-rhui-azure-sap',
mandatory_files=[('leapp-azure-sap-ha.repo', YUM_REPOS_PATH)],
optional_files=[
@@ -374,8 +363,6 @@ RHUI_SETUPS = {
os_version='10', content_channel=ContentChannel.E4S),
],
RHUIFamily(RHUIProvider.GOOGLE, client_files_folder='google'): [
- mk_rhui_setup(clients={'google-rhui-client-rhel7'}, os_version='7'),
- mk_rhui_setup(clients={'google-rhui-client-rhel7-els'}, os_version='7'),
mk_rhui_setup(clients={'google-rhui-client-rhel8'}, leapp_pkg='leapp-rhui-google',
mandatory_files=[('leapp-google.repo', YUM_REPOS_PATH)],
files_supporting_client_operation=['leapp-google.repo'],
@@ -386,7 +373,6 @@ RHUI_SETUPS = {
os_version='9'),
],
RHUIFamily(RHUIProvider.GOOGLE, variant=RHUIVariant.SAP, client_files_folder='google-sap'): [
- mk_rhui_setup(clients={'google-rhui-client-rhel79-sap'}, os_version='7', content_channel=ContentChannel.E4S),
mk_rhui_setup(clients={'google-rhui-client-rhel8-sap'}, leapp_pkg='leapp-rhui-google-sap',
mandatory_files=[('leapp-google-sap.repo', YUM_REPOS_PATH)],
files_supporting_client_operation=['leapp-google-sap.repo'],
@@ -401,7 +387,6 @@ RHUI_SETUPS = {
os_version='9', content_channel=ContentChannel.E4S),
],
RHUIFamily(RHUIProvider.ALIBABA, client_files_folder='alibaba'): [
- mk_rhui_setup(clients={'client-rhel7'}, os_version='7'),
mk_rhui_setup(clients={'aliyun_rhui_rhel8'}, leapp_pkg='leapp-rhui-alibaba',
mandatory_files=[('leapp-alibaba.repo', YUM_REPOS_PATH)],
optional_files=[
--
2.52.0

View File

@ -1,311 +0,0 @@
From 6ab7f341c706ca32f8344c214e421e43fe657bae Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Thu, 21 Aug 2025 12:13:14 +0200
Subject: [PATCH 091/111] lib/rhui: Remove deprecated code and setups map
---
.../tests/component_test_checkrhui.py | 10 -
repos/system_upgrade/common/libraries/rhui.py | 250 ------------------
2 files changed, 260 deletions(-)
diff --git a/repos/system_upgrade/common/actors/cloud/checkrhui/tests/component_test_checkrhui.py b/repos/system_upgrade/common/actors/cloud/checkrhui/tests/component_test_checkrhui.py
index 7fa2112f..f0820c86 100644
--- a/repos/system_upgrade/common/actors/cloud/checkrhui/tests/component_test_checkrhui.py
+++ b/repos/system_upgrade/common/actors/cloud/checkrhui/tests/component_test_checkrhui.py
@@ -53,16 +53,6 @@ def mk_setup_info():
return TargetRHUISetupInfo(preinstall_tasks=pre_tasks, postinstall_tasks=post_tasks)
-def iter_known_rhui_setups():
- for upgrade_path, providers in rhui.RHUI_CLOUD_MAP.items():
- for provider_variant, variant_description in providers.items():
- src_clients = variant_description['src_pkg']
- if isinstance(src_clients, str):
- src_clients = {src_clients, }
-
- yield provider_variant, upgrade_path, src_clients
-
-
def mk_cloud_map(variants):
upg_path = {}
for variant_desc in variants:
diff --git a/repos/system_upgrade/common/libraries/rhui.py b/repos/system_upgrade/common/libraries/rhui.py
index e200075f..7639a64f 100644
--- a/repos/system_upgrade/common/libraries/rhui.py
+++ b/repos/system_upgrade/common/libraries/rhui.py
@@ -1,12 +1,8 @@
import os
from collections import namedtuple
-import six
-
from leapp.libraries.common.config import architecture as arch
from leapp.libraries.common.config.version import get_source_major_version, get_target_major_version
-from leapp.libraries.stdlib import api
-from leapp.utils.deprecation import deprecated
DNF_PLUGIN_PATH_PY2 = '/usr/lib/python2.7/site-packages/dnf-plugins/'
YUM_REPOS_PATH = '/etc/yum.repos.d'
@@ -435,220 +431,6 @@ RHUI_SETUPS = {
}
-# DEPRECATED, use RHUI_SETUPS instead
-RHUI_CLOUD_MAP = {
- '7to8': {
- 'aws': {
- 'src_pkg': 'rh-amazon-rhui-client',
- 'target_pkg': 'rh-amazon-rhui-client',
- 'leapp_pkg': 'leapp-rhui-aws',
- 'leapp_pkg_repo': 'leapp-aws.repo',
- 'files_map': [
- ('rhui-client-config-server-8.crt', RHUI_PKI_PRODUCT_DIR),
- ('rhui-client-config-server-8.key', RHUI_PKI_DIR),
- ('cdn.redhat.com-chain.crt', RHUI_PKI_DIR),
- (AWS_DNF_PLUGIN_NAME, DNF_PLUGIN_PATH_PY2),
- ('leapp-aws.repo', YUM_REPOS_PATH)
- ],
- },
- 'aws-sap-e4s': {
- 'src_pkg': 'rh-amazon-rhui-client-sap-bundle',
- 'target_pkg': 'rh-amazon-rhui-client-sap-bundle-e4s',
- 'leapp_pkg': 'leapp-rhui-aws-sap-e4s',
- 'leapp_pkg_repo': 'leapp-aws-sap-e4s.repo',
- 'files_map': [
- ('rhui-client-config-server-8-sap-bundle.crt', RHUI_PKI_PRODUCT_DIR),
- ('rhui-client-config-server-8-sap-bundle.key', RHUI_PKI_DIR),
- ('cdn.redhat.com-chain.crt', RHUI_PKI_DIR),
- (AWS_DNF_PLUGIN_NAME, DNF_PLUGIN_PATH_PY2),
- ('leapp-aws-sap-e4s.repo', YUM_REPOS_PATH)
- ],
- },
- 'azure': {
- 'src_pkg': 'rhui-azure-rhel7',
- 'target_pkg': 'rhui-azure-rhel8',
- 'agent_pkg': 'WALinuxAgent',
- 'leapp_pkg': 'leapp-rhui-azure',
- 'leapp_pkg_repo': 'leapp-azure.repo',
- 'files_map': [
- ('leapp-azure.repo', YUM_REPOS_PATH)
- ],
- },
- 'azure-sap-apps': {
- 'src_pkg': 'rhui-azure-rhel7-base-sap-apps',
- 'target_pkg': 'rhui-azure-rhel8-sapapps',
- 'agent_pkg': 'WALinuxAgent',
- 'leapp_pkg': 'leapp-rhui-azure-sap',
- 'leapp_pkg_repo': 'leapp-azure-sap-apps.repo',
- 'files_map': [
- ('leapp-azure-sap-apps.repo', YUM_REPOS_PATH),
- ],
- },
- 'azure-sap-ha': {
- 'src_pkg': 'rhui-azure-rhel7-base-sap-ha',
- 'target_pkg': 'rhui-azure-rhel8-sap-ha',
- 'agent_pkg': 'WALinuxAgent',
- 'leapp_pkg': 'leapp-rhui-azure-sap',
- 'leapp_pkg_repo': 'leapp-azure-sap-ha.repo',
- 'files_map': [
- ('leapp-azure-sap-ha.repo', YUM_REPOS_PATH)
- ],
- },
- 'google': {
- 'src_pkg': 'google-rhui-client-rhel7',
- 'target_pkg': 'google-rhui-client-rhel8',
- 'leapp_pkg': 'leapp-rhui-google',
- 'leapp_pkg_repo': 'leapp-google.repo',
- 'files_map': [
- ('content.crt', RHUI_PKI_PRODUCT_DIR),
- ('key.pem', RHUI_PKI_DIR),
- ('leapp-google.repo', YUM_REPOS_PATH)
- ],
- },
- 'google-sap': {
- 'src_pkg': 'google-rhui-client-rhel79-sap',
- 'target_pkg': 'google-rhui-client-rhel8-sap',
- 'leapp_pkg': 'leapp-rhui-google-sap',
- 'leapp_pkg_repo': 'leapp-google-sap.repo',
- 'files_map': [
- ('content.crt', RHUI_PKI_PRODUCT_DIR),
- ('key.pem', RHUI_PKI_DIR),
- ('leapp-google-sap.repo', YUM_REPOS_PATH)
- ],
- },
- 'alibaba': {
- 'src_pkg': 'client-rhel7',
- 'target_pkg': 'aliyun_rhui_rhel8',
- 'leapp_pkg': 'leapp-rhui-alibaba',
- 'leapp_pkg_repo': 'leapp-alibaba.repo',
- 'files_map': [
- ('content.crt', RHUI_PKI_PRODUCT_DIR),
- ('key.pem', RHUI_PKI_DIR),
- ('leapp-alibaba.repo', YUM_REPOS_PATH)
- ],
- }
- },
- '8to9': {
- 'aws': {
- 'src_pkg': 'rh-amazon-rhui-client',
- 'target_pkg': 'rh-amazon-rhui-client',
- 'leapp_pkg': 'leapp-rhui-aws',
- 'leapp_pkg_repo': 'leapp-aws.repo',
- 'files_map': [
- ('rhui-client-config-server-9.crt', RHUI_PKI_PRODUCT_DIR),
- ('rhui-client-config-server-9.key', RHUI_PKI_DIR),
- ('cdn.redhat.com-chain.crt', RHUI_PKI_DIR),
- ('leapp-aws.repo', YUM_REPOS_PATH)
- ],
- },
- 'aws-sap-e4s': {
- 'src_pkg': 'rh-amazon-rhui-client-sap-bundle-e4s',
- 'target_pkg': 'rh-amazon-rhui-client-sap-bundle-e4s',
- 'leapp_pkg': 'leapp-rhui-aws-sap-e4s',
- 'leapp_pkg_repo': 'leapp-aws-sap-e4s.repo',
- 'files_map': [
- ('rhui-client-config-server-9-sap-bundle.crt', RHUI_PKI_PRODUCT_DIR),
- ('rhui-client-config-server-9-sap-bundle.key', RHUI_PKI_DIR),
- ('cdn.redhat.com-chain.crt', RHUI_PKI_DIR),
- ('leapp-aws-sap-e4s.repo', YUM_REPOS_PATH)
- ],
- },
- 'azure': {
- 'src_pkg': 'rhui-azure-rhel8',
- 'target_pkg': 'rhui-azure-rhel9',
- 'agent_pkg': 'WALinuxAgent',
- 'leapp_pkg': 'leapp-rhui-azure',
- 'leapp_pkg_repo': 'leapp-azure.repo',
- 'files_map': [
- ('leapp-azure.repo', YUM_REPOS_PATH)
- ],
- },
- # FIXME(mhecko): This entry is identical to the azure one, since we have no EUS content yet, therefore, it
- # # serves only the purpose of containing the name of rhui client package to correctly detect
- # # cloud provider. Trying to work around this entry by specifying --channel, will result in
- # # failures - there is no repomapping for EUS content, and the name of target pkg differs on EUS.
- # # If the EUS image is available sooner than the 'azure-eus' entry gets modified, the user can
- # # still upgrade to non-EUS, and switch the newly upgraded system to EUS manually.
- 'azure-eus': {
- 'src_pkg': 'rhui-azure-rhel8-eus',
- 'target_pkg': 'rhui-azure-rhel9',
- 'agent_pkg': 'WALinuxAgent',
- 'leapp_pkg': 'leapp-rhui-azure-eus',
- 'leapp_pkg_repo': 'leapp-azure.repo',
- 'files_map': [
- ('leapp-azure.repo', YUM_REPOS_PATH)
- ],
- },
- 'azure-sap-ha': {
- 'src_pkg': 'rhui-azure-rhel8-sap-ha',
- 'target_pkg': 'rhui-azure-rhel9-sap-ha',
- 'agent_pkg': 'WALinuxAgent',
- 'leapp_pkg': 'leapp-rhui-azure-sap',
- 'leapp_pkg_repo': 'leapp-azure-sap-ha.repo',
- 'files_map': [
- ('leapp-azure-sap-ha.repo', YUM_REPOS_PATH)
- ],
- },
- 'azure-sap-apps': {
- 'src_pkg': 'rhui-azure-rhel8-sapapps',
- 'target_pkg': 'rhui-azure-rhel9-sapapps',
- 'agent_pkg': 'WALinuxAgent',
- 'leapp_pkg': 'leapp-rhui-azure-sap',
- 'leapp_pkg_repo': 'leapp-azure-sap-apps.repo',
- 'files_map': [
- ('leapp-azure-sap-apps.repo', YUM_REPOS_PATH)
- ],
- },
- 'google': {
- 'src_pkg': 'google-rhui-client-rhel8',
- 'target_pkg': 'google-rhui-client-rhel9',
- 'leapp_pkg': 'leapp-rhui-google',
- 'leapp_pkg_repo': 'leapp-google.repo',
- 'files_map': [
- ('content.crt', RHUI_PKI_PRODUCT_DIR),
- ('key.pem', RHUI_PKI_DIR),
- ('leapp-google.repo', YUM_REPOS_PATH)
- ],
- },
- 'google-sap': {
- 'src_pkg': 'google-rhui-client-rhel8-sap',
- 'target_pkg': 'google-rhui-client-rhel9-sap',
- 'leapp_pkg': 'leapp-rhui-google-sap',
- 'leapp_pkg_repo': 'leapp-google-sap.repo',
- 'files_map': [
- ('content.crt', RHUI_PKI_PRODUCT_DIR),
- ('key.pem', RHUI_PKI_DIR),
- ('leapp-google-sap.repo', YUM_REPOS_PATH)
- ],
- },
- 'alibaba': {
- 'src_pkg': 'aliyun_rhui_rhel8',
- 'target_pkg': 'aliyun_rhui_rhel9',
- 'leapp_pkg': 'leapp-rhui-alibaba',
- 'leapp_pkg_repo': 'leapp-alibaba.repo',
- 'files_map': [
- ('content.crt', RHUI_PKI_PRODUCT_DIR),
- ('key.pem', RHUI_PKI_DIR),
- ('leapp-alibaba.repo', YUM_REPOS_PATH)
- ],
- },
- },
- '9to10': {
- 'alibaba': {
- 'src_pkg': 'aliyun_rhui_rhel9',
- 'target_pkg': 'aliyun_rhui_rhel10',
- 'leapp_pkg': 'leapp-rhui-alibaba',
- 'leapp_pkg_repo': 'leapp-alibaba.repo',
- 'files_map': [
- ('content.crt', RHUI_PKI_PRODUCT_DIR),
- ('key.pem', RHUI_PKI_DIR),
- ('leapp-alibaba.repo', YUM_REPOS_PATH)
- ],
- },
- }
-}
-
-
def get_upg_path():
"""
Get upgrade path in specific string format
@@ -658,38 +440,6 @@ def get_upg_path():
return '{0}to{1}'.format(source_major_version, target_major_version)
-@deprecated(since='2023-07-27', message='This functionality has been replaced with the RHUIInfo message.')
-def gen_rhui_files_map():
- """
- Generate RHUI files map based on architecture and upgrade path
- """
- arch = api.current_actor().configuration.architecture
- upg_path = get_upg_path()
-
- cloud_map = RHUI_CLOUD_MAP
- # for the moment the only arch related difference in RHUI package naming is on ARM
- if arch == 'aarch64':
- cloud_map[get_upg_path()]['aws']['src_pkg'] = 'rh-amazon-rhui-client-arm'
-
- files_map = dict((k, v['files_map']) for k, v in six.iteritems(cloud_map[upg_path]))
- return files_map
-
-
-@deprecated(since='2023-07-27', message='This functionality has been integrated into target_userspace_creator.')
-def copy_rhui_data(context, provider):
- """
- Copy relevant RHUI certificates and key into the target userspace container
- """
- rhui_dir = api.get_common_folder_path('rhui')
- data_dir = os.path.join(rhui_dir, provider)
-
- context.call(['mkdir', '-p', RHUI_PKI_PRODUCT_DIR])
- context.call(['mkdir', '-p', RHUI_PKI_PRIVATE_DIR])
-
- for path_ in gen_rhui_files_map().get(provider, ()):
- context.copy_to(os.path.join(data_dir, path_[0]), path_[1])
-
-
def get_all_known_rhui_pkgs_for_current_upg():
upg_major_versions = (get_source_major_version(), get_target_major_version())
--
2.52.0

View File

@ -1,139 +0,0 @@
From 8e0265729f92741665a0465d6d9ad0e7fafbc4ef Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Thu, 21 Aug 2025 12:14:33 +0200
Subject: [PATCH 092/111] lib/gpg: Remove RHEL 7 "workarounds"
---
repos/system_upgrade/common/libraries/gpg.py | 15 ++----
.../common/libraries/tests/test_gpg.py | 48 +++++--------------
2 files changed, 16 insertions(+), 47 deletions(-)
diff --git a/repos/system_upgrade/common/libraries/gpg.py b/repos/system_upgrade/common/libraries/gpg.py
index 9990cdcf..4c5133e7 100644
--- a/repos/system_upgrade/common/libraries/gpg.py
+++ b/repos/system_upgrade/common/libraries/gpg.py
@@ -1,7 +1,7 @@
import os
from leapp.libraries.common import config
-from leapp.libraries.common.config.version import get_source_major_version, get_target_major_version
+from leapp.libraries.common.config.version import get_target_major_version
from leapp.libraries.stdlib import api, run
from leapp.models import GpgKey
@@ -28,18 +28,11 @@ def _gpg_show_keys(key_path):
"""
Show keys in given file in version-agnostic manner
- This runs gpg --show-keys (EL8) or gpg --with-fingerprints (EL7)
- to verify the given file exists, is readable and contains valid
- OpenPGP key data, which is printed in parsable format (--with-colons).
+ This runs gpg --show-keys to verify the given file exists, is readable and
+ contains valid OpenPGP key data, which is printed in parsable format (--with-colons).
"""
try:
- cmd = ['gpg2']
- # RHEL7 gnupg requires different switches to get the same output
- if get_source_major_version() == '7':
- cmd.append('--with-fingerprint')
- else:
- cmd.append('--show-keys')
- cmd += ['--with-colons', key_path]
+ cmd = ['gpg2', '--show-keys', '--with-colons', key_path]
# TODO: discussed, most likely the checked=False will be dropped
# and error will be handled in other functions
return run(cmd, split=True, checked=False)
diff --git a/repos/system_upgrade/common/libraries/tests/test_gpg.py b/repos/system_upgrade/common/libraries/tests/test_gpg.py
index 1394e60d..ec44f921 100644
--- a/repos/system_upgrade/common/libraries/tests/test_gpg.py
+++ b/repos/system_upgrade/common/libraries/tests/test_gpg.py
@@ -12,8 +12,6 @@ from leapp.models import GpgKey, InstalledRPM, RPM
@pytest.mark.parametrize('target, product_type, distro, exp', [
- ('8.6', 'beta', 'rhel', '../../files/distro/rhel/rpm-gpg/8beta'),
- ('8.8', 'htb', 'rhel', '../../files/distro/rhel/rpm-gpg/8'),
('9.0', 'beta', 'rhel', '../../files/distro/rhel/rpm-gpg/9beta'),
('9.2', 'ga', 'rhel', '../../files/distro/rhel/rpm-gpg/9'),
('10.0', 'ga', 'rhel', '../../files/distro/rhel/rpm-gpg/10'),
@@ -30,14 +28,9 @@ def test_get_path_to_gpg_certs(monkeypatch, target, product_type, distro, exp):
assert p == exp
-def is_rhel7():
- return int(distro.major_version()) < 8
-
-
@pytest.mark.skipif(distro.id() not in ("rhel", "centos"), reason="Requires RHEL or CentOS for valid results.")
def test_gpg_show_keys(loaded_leapp_repository, monkeypatch):
- src = '7.9' if is_rhel7() else '8.6'
- current_actor = CurrentActorMocked(src_ver=src, release_id='rhel')
+ current_actor = CurrentActorMocked(src_ver='8.10', release_id='rhel')
monkeypatch.setattr(api, 'current_actor', current_actor)
# python2 compatibility :/
@@ -50,11 +43,8 @@ def test_gpg_show_keys(loaded_leapp_repository, monkeypatch):
# non-existing file
non_existent_path = os.path.join(dirpath, 'nonexistent')
res = gpg._gpg_show_keys(non_existent_path)
- if is_rhel7():
- err_msg = "gpg: can't open `{}'".format(non_existent_path)
- else:
- err_msg = "gpg: can't open '{}': No such file or directory\n".format(non_existent_path)
assert not res['stdout']
+ err_msg = "gpg: can't open '{}': No such file or directory\n".format(non_existent_path)
assert err_msg in res['stderr']
assert res['exit_code'] == 2
@@ -67,13 +57,8 @@ def test_gpg_show_keys(loaded_leapp_repository, monkeypatch):
f.write('test')
res = gpg._gpg_show_keys(no_key_path)
- if is_rhel7():
- err_msg = ('gpg: no valid OpenPGP data found.\n'
- 'gpg: processing message failed: Unknown system error\n')
- else:
- err_msg = 'gpg: no valid OpenPGP data found.\n'
assert not res['stdout']
- assert res['stderr'] == err_msg
+ assert res['stderr'] == 'gpg: no valid OpenPGP data found.\n'
assert res['exit_code'] == 2
fp = gpg._parse_fp_from_gpg(res)
@@ -89,24 +74,15 @@ def test_gpg_show_keys(loaded_leapp_repository, monkeypatch):
finally:
shutil.rmtree(dirpath)
- if is_rhel7():
- assert len(res['stdout']) == 4
- assert res['stdout'][0] == ('pub:-:4096:1:199E2F91FD431D51:1256212795:::-:'
- 'Red Hat, Inc. (release key 2) <security@redhat.com>:')
- assert res['stdout'][1] == 'fpr:::::::::567E347AD0044ADE55BA8A5F199E2F91FD431D51:'
- assert res['stdout'][2] == ('pub:-:4096:1:5054E4A45A6340B3:1646863006:::-:'
- 'Red Hat, Inc. (auxiliary key 3) <security@redhat.com>:')
- assert res['stdout'][3] == 'fpr:::::::::7E4624258C406535D56D6F135054E4A45A6340B3:'
- else:
- assert len(res['stdout']) == 6
- assert res['stdout'][0] == 'pub:-:4096:1:199E2F91FD431D51:1256212795:::-:::scSC::::::23::0:'
- assert res['stdout'][1] == 'fpr:::::::::567E347AD0044ADE55BA8A5F199E2F91FD431D51:'
- assert res['stdout'][2] == ('uid:-::::1256212795::DC1CAEC7997B3575101BB0FCAAC6191792660D8F::'
- 'Red Hat, Inc. (release key 2) <security@redhat.com>::::::::::0:')
- assert res['stdout'][3] == 'pub:-:4096:1:5054E4A45A6340B3:1646863006:::-:::scSC::::::23::0:'
- assert res['stdout'][4] == 'fpr:::::::::7E4624258C406535D56D6F135054E4A45A6340B3:'
- assert res['stdout'][5] == ('uid:-::::1646863006::DA7F68E3872D6E7BDCE05225E7EB5F3ACDD9699F::'
- 'Red Hat, Inc. (auxiliary key 3) <security@redhat.com>::::::::::0:')
+ assert len(res['stdout']) == 6
+ assert res['stdout'][0] == 'pub:-:4096:1:199E2F91FD431D51:1256212795:::-:::scSC::::::23::0:'
+ assert res['stdout'][1] == 'fpr:::::::::567E347AD0044ADE55BA8A5F199E2F91FD431D51:'
+ assert res['stdout'][2] == ('uid:-::::1256212795::DC1CAEC7997B3575101BB0FCAAC6191792660D8F::'
+ 'Red Hat, Inc. (release key 2) <security@redhat.com>::::::::::0:')
+ assert res['stdout'][3] == 'pub:-:4096:1:5054E4A45A6340B3:1646863006:::-:::scSC::::::23::0:'
+ assert res['stdout'][4] == 'fpr:::::::::7E4624258C406535D56D6F135054E4A45A6340B3:'
+ assert res['stdout'][5] == ('uid:-::::1646863006::DA7F68E3872D6E7BDCE05225E7EB5F3ACDD9699F::'
+ 'Red Hat, Inc. (auxiliary key 3) <security@redhat.com>::::::::::0:')
err = '{}/trustdb.gpg: trustdb created'.format(dirpath)
assert err in res['stderr']
--
2.52.0
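
For reference, a minimal standalone sketch of the single code path this patch keeps: gpg2 is invoked in machine-readable colon mode and fingerprints are read from the 'fpr' records shown in the updated test expectations. This is illustrative code using plain subprocess, not the leapp helper itself (which goes through leapp's run()).

import subprocess

def show_key_fingerprints(key_path):
    # Same invocation the patched _gpg_show_keys() settles on; the command may
    # fail for unreadable or non-OpenPGP input, so the exit code is not checked.
    proc = subprocess.run(
        ['gpg2', '--show-keys', '--with-colons', key_path],
        capture_output=True, text=True, check=False,
    )
    # Fingerprints are the 10th field of 'fpr' records, e.g.
    # fpr:::::::::567E347AD0044ADE55BA8A5F199E2F91FD431D51:
    return [line.split(':')[9] for line in proc.stdout.splitlines()
            if line.startswith('fpr:')]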

View File

@ -1,158 +0,0 @@
From 9557e64f84af0097ce45b0187381d4d5a097679d Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Thu, 21 Aug 2025 18:27:37 +0200
Subject: [PATCH 093/111] lib/rpms: Update tests for 9->10
---
repos/system_upgrade/common/libraries/rpms.py | 55 ++++++++-------
.../common/libraries/tests/test_rpms.py | 67 +++++++++++++++++--
2 files changed, 92 insertions(+), 30 deletions(-)
diff --git a/repos/system_upgrade/common/libraries/rpms.py b/repos/system_upgrade/common/libraries/rpms.py
index 8f98c1a4..11a31882 100644
--- a/repos/system_upgrade/common/libraries/rpms.py
+++ b/repos/system_upgrade/common/libraries/rpms.py
@@ -18,30 +18,39 @@ class LeappComponents:
TOOLS = 'tools'
+# NOTE: need to keep packages for dropped upgrade paths so peseventsscanner can drop
+# related PES events
_LEAPP_PACKAGES_MAP = {
- LeappComponents.FRAMEWORK: {'7': {'pkgs': ['leapp', 'python2-leapp'],
- 'deps': ['leapp-deps']},
- '8': {'pkgs': ['leapp', 'python3-leapp'],
- 'deps': ['leapp-deps']},
- '9': {'pkgs': ['leapp', 'python3-leapp'],
- 'deps': ['leapp-deps']}
- },
- LeappComponents.REPOSITORY: {'7': {'pkgs': ['leapp-upgrade-el7toel8'],
- 'deps': ['leapp-upgrade-el7toel8-deps']},
- '8': {'pkgs': ['leapp-upgrade-el8toel9', 'leapp-upgrade-el8toel9-fapolicyd'],
- 'deps': ['leapp-upgrade-el8toel9-deps']},
- '9': {'pkgs': ['leapp-upgrade-el9toel10', 'leapp-upgrade-el9toel10-fapolicyd'],
- 'deps': ['leapp-upgrade-el9toel10-deps']}
- },
- LeappComponents.COCKPIT: {'7': {'pkgs': ['cockpit-leapp']},
- '8': {'pkgs': ['cockpit-leapp']},
- '9': {'pkgs': ['cockpit-leapp']},
- },
- LeappComponents.TOOLS: {'7': {'pkgs': ['snactor']},
- '8': {'pkgs': ['snactor']},
- '9': {'pkgs': ['snactor']}
- }
- }
+ LeappComponents.FRAMEWORK: {
+ '7': {'pkgs': ['leapp', 'python2-leapp'], 'deps': ['leapp-deps']},
+ '8': {'pkgs': ['leapp', 'python3-leapp'], 'deps': ['leapp-deps']},
+ '9': {'pkgs': ['leapp', 'python3-leapp'], 'deps': ['leapp-deps']},
+ },
+ LeappComponents.REPOSITORY: {
+ '7': {
+ 'pkgs': ['leapp-upgrade-el7toel8'],
+ 'deps': ['leapp-upgrade-el7toel8-deps'],
+ },
+ '8': {
+ 'pkgs': ['leapp-upgrade-el8toel9', 'leapp-upgrade-el8toel9-fapolicyd'],
+ 'deps': ['leapp-upgrade-el8toel9-deps'],
+ },
+ '9': {
+ 'pkgs': ['leapp-upgrade-el9toel10', 'leapp-upgrade-el9toel10-fapolicyd'],
+ 'deps': ['leapp-upgrade-el9toel10-deps'],
+ },
+ },
+ LeappComponents.COCKPIT: {
+ '7': {'pkgs': ['cockpit-leapp']},
+ '8': {'pkgs': ['cockpit-leapp']},
+ '9': {'pkgs': ['cockpit-leapp']},
+ },
+ LeappComponents.TOOLS: {
+ '7': {'pkgs': ['snactor']},
+ '8': {'pkgs': ['snactor']},
+ '9': {'pkgs': ['snactor']},
+ },
+}
GET_LEAPP_PACKAGES_DEFAULT_COMPONENTS = frozenset((LeappComponents.FRAMEWORK,
LeappComponents.REPOSITORY,
diff --git a/repos/system_upgrade/common/libraries/tests/test_rpms.py b/repos/system_upgrade/common/libraries/tests/test_rpms.py
index 13f87651..c9d7f420 100644
--- a/repos/system_upgrade/common/libraries/tests/test_rpms.py
+++ b/repos/system_upgrade/common/libraries/tests/test_rpms.py
@@ -36,13 +36,66 @@ def test_parse_config_modification():
assert _parse_config_modification(data, "/etc/ssh/sshd_config")
-@pytest.mark.parametrize('major_version,component,result', [
- (None, None, ['leapp', 'python3-leapp', 'leapp-upgrade-el8toel9', 'leapp-upgrade-el8toel9-fapolicyd', 'snactor']),
- ('7', None, ['leapp', 'python2-leapp', 'leapp-upgrade-el7toel8', 'snactor']),
- (['7', '8'], None, ['leapp', 'python2-leapp', 'leapp-upgrade-el7toel8',
- 'python3-leapp', 'leapp-upgrade-el8toel9', 'leapp-upgrade-el8toel9-fapolicyd', 'snactor']),
- ('8', 'framework', ['leapp', 'python3-leapp']),
- ])
+@pytest.mark.parametrize(
+ "major_version,component,result",
+ [
+ (
+ None,
+ None,
+ [
+ "leapp",
+ "python3-leapp",
+ "leapp-upgrade-el8toel9",
+ "leapp-upgrade-el8toel9-fapolicyd",
+ "snactor",
+ ],
+ ),
+ ("7", None, ["leapp", "python2-leapp", "leapp-upgrade-el7toel8", "snactor"]),
+ (
+ "8",
+ None,
+ [
+ "leapp",
+ "python3-leapp",
+ "leapp-upgrade-el8toel9",
+ "leapp-upgrade-el8toel9-fapolicyd",
+ "snactor",
+ ],
+ ),
+ (
+ ["7", "8"],
+ None,
+ [
+ "leapp",
+ "python2-leapp",
+ "leapp-upgrade-el7toel8",
+ "python3-leapp",
+ "leapp-upgrade-el8toel9",
+ "leapp-upgrade-el8toel9-fapolicyd",
+ "snactor",
+ ],
+ ),
+ (
+ ["8", "9"],
+ None,
+ [
+ "leapp",
+ "python3-leapp",
+ "leapp-upgrade-el8toel9",
+ "leapp-upgrade-el8toel9-fapolicyd",
+ "leapp-upgrade-el9toel10",
+ "leapp-upgrade-el9toel10-fapolicyd",
+ "snactor",
+ ],
+ ),
+ ("8", "framework", ["leapp", "python3-leapp"]),
+ (
+ "9",
+ "repository",
+ ["leapp-upgrade-el9toel10", "leapp-upgrade-el9toel10-fapolicyd"],
+ ),
+ ],
+)
def test_get_leapp_packages(major_version, component, result, monkeypatch):
monkeypatch.setattr(api, 'current_actor', CurrentActorMocked(arch='x86_64', src_ver='8.9', dst_ver='9.3'))
--
2.52.0
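
The updated test parametrization above encodes the expected behaviour; a trimmed-down, hypothetical lookup over the same map illustrates it. The real get_leapp_packages() additionally resolves default components and major versions and the corresponding -deps packages; the lowercase component keys mirror the LeappComponents string constants.

LEAPP_PACKAGES_MAP = {
    'framework': {'8': ['leapp', 'python3-leapp'], '9': ['leapp', 'python3-leapp']},
    'repository': {
        '8': ['leapp-upgrade-el8toel9', 'leapp-upgrade-el8toel9-fapolicyd'],
        '9': ['leapp-upgrade-el9toel10', 'leapp-upgrade-el9toel10-fapolicyd'],
    },
    'tools': {'8': ['snactor'], '9': ['snactor']},
}

def leapp_packages(major_versions, components):
    # Collect the packages for every requested (component, major version) pair.
    pkgs = set()
    for component in components:
        for version in major_versions:
            pkgs.update(LEAPP_PACKAGES_MAP.get(component, {}).get(version, []))
    return sorted(pkgs)

print(leapp_packages(['9'], ['repository']))
# -> ['leapp-upgrade-el9toel10', 'leapp-upgrade-el9toel10-fapolicyd']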

View File

@ -1,31 +0,0 @@
From 5547c926b0c1bf5c2c8d943a178b878a8df50120 Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Thu, 21 Aug 2025 19:21:13 +0200
Subject: [PATCH 094/111] lib/module: Remove 7->8 releasever workaround
---
repos/system_upgrade/common/libraries/module.py | 8 --------
1 file changed, 8 deletions(-)
diff --git a/repos/system_upgrade/common/libraries/module.py b/repos/system_upgrade/common/libraries/module.py
index db725e71..ba7ecba9 100644
--- a/repos/system_upgrade/common/libraries/module.py
+++ b/repos/system_upgrade/common/libraries/module.py
@@ -26,14 +26,6 @@ def _create_or_get_dnf_base(base=None):
# preload releasever from what we know, this will be our fallback
conf.substitutions['releasever'] = get_source_major_version()
- # dnf on EL7 doesn't load vars from /etc/yum, so we need to help it a bit
- if get_source_major_version() == '7':
- try:
- with open('/etc/yum/vars/releasever') as releasever_file:
- conf.substitutions['releasever'] = releasever_file.read().strip()
- except IOError:
- pass
-
# load all substitutions from etc
conf.substitutions.update_from_etc('/')
--
2.52.0

View File

@ -1,41 +0,0 @@
From 1ec6ea8f8081c6895ed42696df9de51343e6c8ba Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Thu, 21 Aug 2025 19:28:06 +0200
Subject: [PATCH 095/111] lib/dnfplugin: Remove RHEL 7 bind mount code path
---
repos/system_upgrade/common/libraries/dnfplugin.py | 11 +++--------
1 file changed, 3 insertions(+), 8 deletions(-)
diff --git a/repos/system_upgrade/common/libraries/dnfplugin.py b/repos/system_upgrade/common/libraries/dnfplugin.py
index 1af52dc5..7e1fd497 100644
--- a/repos/system_upgrade/common/libraries/dnfplugin.py
+++ b/repos/system_upgrade/common/libraries/dnfplugin.py
@@ -19,7 +19,6 @@ _DEDICATED_URL = 'https://access.redhat.com/solutions/7011704'
class _DnfPluginPathStr(str):
_PATHS = {
- "8": os.path.join('/lib/python3.6/site-packages/dnf-plugins', DNF_PLUGIN_NAME),
"9": os.path.join('/lib/python3.9/site-packages/dnf-plugins', DNF_PLUGIN_NAME),
"10": os.path.join('/lib/python3.12/site-packages/dnf-plugins', DNF_PLUGIN_NAME),
}
@@ -405,13 +404,9 @@ def perform_transaction_install(target_userspace_info, storage_info, used_repos,
'/run/udev:/installroot/run/udev',
]
- if get_target_major_version() == '8':
- bind_mounts.append('/sys:/installroot/sys')
- else:
- # the target major version is RHEL 9+
- # we are bindmounting host's "/sys" to the intermediate "/hostsys"
- # in the upgrade initramdisk to avoid cgroups tree layout clash
- bind_mounts.append('/hostsys:/installroot/sys')
+ # we are bindmounting host's "/sys" to the intermediate "/hostsys"
+ # in the upgrade initramdisk to avoid cgroups tree layout clash
+ bind_mounts.append('/hostsys:/installroot/sys')
already_mounted = {entry.split(':')[0] for entry in bind_mounts}
for entry in storage_info.fstab:
--
2.52.0

View File

@ -1,45 +0,0 @@
From e2be1ed71d8985e836a3a0df2fc2d1a9b47c1b99 Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Thu, 21 Aug 2025 19:47:47 +0200
Subject: [PATCH 096/111] lib/mounting: Remove RHEL 7 nspawn options
---
repos/system_upgrade/common/libraries/mounting.py | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/repos/system_upgrade/common/libraries/mounting.py b/repos/system_upgrade/common/libraries/mounting.py
index 279d31dc..ea59164c 100644
--- a/repos/system_upgrade/common/libraries/mounting.py
+++ b/repos/system_upgrade/common/libraries/mounting.py
@@ -5,7 +5,7 @@ import shutil
from collections import namedtuple
from leapp.libraries.common.config import get_all_envs
-from leapp.libraries.common.config.version import get_source_major_version, matches_source_version
+from leapp.libraries.common.config.version import matches_source_version
from leapp.libraries.stdlib import api, CalledProcessError, run
# Using ALWAYS_BIND will crash the upgrade process if the file does not exist.
@@ -83,12 +83,13 @@ class IsolationType:
""" Transform the command to be executed with systemd-nspawn """
binds = ['--bind={}'.format(bind) for bind in self.binds]
setenvs = ['--setenv={}={}'.format(env.name, env.value) for env in self.env_vars]
- final_cmd = ['systemd-nspawn', '--register=no', '--quiet']
- if get_source_major_version() != '7':
- # TODO: check whether we could use the --keep unit on el7 too.
- # in such a case, just add line into the previous solution..
- # TODO: the same about --capability=all
- final_cmd += ['--keep-unit', '--capability=all']
+ final_cmd = [
+ 'systemd-nspawn',
+ '--register=no',
+ '--quiet',
+ '--keep-unit',
+ '--capability=all',
+ ]
if matches_source_version('>= 9.0'):
# Disable pseudo-TTY in container
final_cmd += ['--pipe']
--
2.52.0

View File

@ -1,44 +0,0 @@
From 82fd5e9844ef0d7910959c601a9e5c25252e53cf Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Thu, 21 Aug 2025 19:48:45 +0200
Subject: [PATCH 097/111] lib/version: Remove RHEL 7 from supported version
---
.../common/actors/checkosrelease/tests/test_checkosrelease.py | 4 ++--
repos/system_upgrade/common/libraries/config/version.py | 4 ----
2 files changed, 2 insertions(+), 6 deletions(-)
diff --git a/repos/system_upgrade/common/actors/checkosrelease/tests/test_checkosrelease.py b/repos/system_upgrade/common/actors/checkosrelease/tests/test_checkosrelease.py
index aa0fd636..1ca8a1d7 100644
--- a/repos/system_upgrade/common/actors/checkosrelease/tests/test_checkosrelease.py
+++ b/repos/system_upgrade/common/actors/checkosrelease/tests/test_checkosrelease.py
@@ -27,8 +27,8 @@ def test_no_skip_check(monkeypatch):
def test_not_supported_release(monkeypatch):
monkeypatch.setattr(version, "is_supported_version", lambda: False)
- monkeypatch.setattr(version, "get_source_major_version", lambda: '7')
- monkeypatch.setattr(version, "current_version", lambda: ('bad', '7'))
+ monkeypatch.setattr(version, "get_source_major_version", lambda: '8')
+ monkeypatch.setattr(version, "current_version", lambda: ('bad', '8'))
monkeypatch.setattr(reporting, "create_report", create_report_mocked())
checkosrelease.check_os_version()
diff --git a/repos/system_upgrade/common/libraries/config/version.py b/repos/system_upgrade/common/libraries/config/version.py
index 84cbd753..c9bc3fb2 100644
--- a/repos/system_upgrade/common/libraries/config/version.py
+++ b/repos/system_upgrade/common/libraries/config/version.py
@@ -14,11 +14,7 @@ OP_MAP = {
'<=': operator.le
}
-# TODO(pstodulk): drop 9.4 & 9.5 before May 2025 release
-# These will not be supported fo IPU 9 -> 10
_SUPPORTED_VERSIONS = {
- # Note: 'rhel-alt' is detected when on 'rhel' with kernel 4.x
- '7': {'rhel': ['7.9'], 'rhel-alt': [], 'rhel-saphana': ['7.9']},
'8': {'rhel': ['8.10'], 'rhel-saphana': ['8.10']},
'9': {'rhel': ['9.6'], 'rhel-saphana': ['9.6']},
}
--
2.52.0
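
With RHEL 7 dropped, the supported-version check reduces to a lookup in the remaining table. A simplified sketch follows; it is not the library's actual is_supported_version(), which reads the detected system version and flavour from the actor configuration.

SUPPORTED_VERSIONS = {
    '8': {'rhel': ['8.10'], 'rhel-saphana': ['8.10']},
    '9': {'rhel': ['9.6'], 'rhel-saphana': ['9.6']},
}

def is_supported(version, flavour='rhel'):
    # '8.10' -> major '8'; unknown majors or flavours are unsupported
    major = version.split('.')[0]
    return version in SUPPORTED_VERSIONS.get(major, {}).get(flavour, [])

assert is_supported('8.10')
assert not is_supported('7.9')      # RHEL 7 is dropped by this patch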

View File

@ -1,137 +0,0 @@
From d327f568a9ecb5de67e219c9174f547dadfbc8bd Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Thu, 21 Aug 2025 18:00:53 +0200
Subject: [PATCH 098/111] checkfips: Drop RHEL 7 inhibitor and update tests
The tests never covered the part where UpgradeInitramfsTasks has to be
produced by the actor on the 8->9 path.
---
.../common/actors/checkfips/actor.py | 46 ++++++++-----------
.../actors/checkfips/tests/test_checkfips.py | 37 +++++++++++----
2 files changed, 45 insertions(+), 38 deletions(-)
diff --git a/repos/system_upgrade/common/actors/checkfips/actor.py b/repos/system_upgrade/common/actors/checkfips/actor.py
index 73408655..8c379bfd 100644
--- a/repos/system_upgrade/common/actors/checkfips/actor.py
+++ b/repos/system_upgrade/common/actors/checkfips/actor.py
@@ -1,4 +1,3 @@
-from leapp import reporting
from leapp.actors import Actor
from leapp.exceptions import StopActorExecutionError
from leapp.libraries.common.config import version
@@ -20,39 +19,30 @@ class CheckFips(Actor):
fips_info = next(self.consume(FIPSInfo), None)
if not fips_info:
- raise StopActorExecutionError('Cannot check FIPS state due to not receiving necessary FIPSInfo message',
- details={'Problem': 'Did not receive a message with information about FIPS '
- 'usage'})
-
- if version.get_target_major_version() == '8':
- if fips_info.is_enabled:
- title = 'Automated upgrades from RHEL 7 to RHEL 8 in FIPS mode are not supported'
- summary = ('Leapp has detected that FIPS is enabled on this system. '
- 'Automated in-place upgrade of RHEL 7 systems in FIPS mode is currently unsupported '
- 'and manual intervention is required.')
-
- fips_7to8_steps_docs_url = 'https://red.ht/planning-upgrade-to-rhel8'
-
- reporting.create_report([
- reporting.Title(title),
- reporting.Summary(summary),
- reporting.Severity(reporting.Severity.HIGH),
- reporting.Groups([reporting.Groups.SECURITY, reporting.Groups.INHIBITOR]),
- reporting.ExternalLink(url=fips_7to8_steps_docs_url,
- title='Planning an upgrade from RHEL 7 to RHEL 8')
- ])
- elif version.get_target_major_version() == '9':
- # FIXME(mhecko): We include these files manually as they are not included automatically when the fips
- # module is used due to a bug in dracut. This code should be removed, once the dracut bug is resolved.
- # See https://bugzilla.redhat.com/show_bug.cgi?id=2176560
+ raise StopActorExecutionError(
+ 'Cannot check FIPS state due to not receiving necessary FIPSInfo message',
+ details={
+ 'Problem': 'Did not receive a message with information about FIPS usage'
+ },
+ )
+
+ if version.get_target_major_version() == '9':
+ # FIXME(mhecko): We include these files manually as they are not
+ # included automatically when the fips module is used due to a bug
+ # in dracut. This code should be removed, once the dracut bug is
+ # resolved. See https://bugzilla.redhat.com/show_bug.cgi?id=2176560
if fips_info.is_enabled:
fips_required_initramfs_files = [
'/etc/crypto-policies/back-ends/opensslcnf.config',
'/etc/pki/tls/openssl.cnf',
'/usr/lib64/ossl-modules/fips.so',
]
- self.produce(UpgradeInitramfsTasks(include_files=fips_required_initramfs_files,
- include_dracut_modules=[DracutModule(name='fips')]))
+ self.produce(
+ UpgradeInitramfsTasks(
+ include_files=fips_required_initramfs_files,
+ include_dracut_modules=[DracutModule(name='fips')],
+ )
+ )
elif version.get_target_major_version() == '10':
# TODO(mmatuska): What to do with FIPS on 9to10? OAMG-11431
pass
diff --git a/repos/system_upgrade/common/actors/checkfips/tests/test_checkfips.py b/repos/system_upgrade/common/actors/checkfips/tests/test_checkfips.py
index 5498bf23..8057bc0d 100644
--- a/repos/system_upgrade/common/actors/checkfips/tests/test_checkfips.py
+++ b/repos/system_upgrade/common/actors/checkfips/tests/test_checkfips.py
@@ -1,23 +1,40 @@
import pytest
from leapp.libraries.common.config import version
-from leapp.models import FIPSInfo, Report
+from leapp.models import DracutModule, FIPSInfo, Report, UpgradeInitramfsTasks
from leapp.utils.report import is_inhibitor
-@pytest.mark.parametrize(('fips_info', 'target_major_version', 'should_inhibit'), [
- (FIPSInfo(is_enabled=True), '8', True),
- (FIPSInfo(is_enabled=True), '9', False),
- (FIPSInfo(is_enabled=False), '8', False),
+@pytest.mark.parametrize(('fips_info', 'target_major_version', 'should_produce'), [
(FIPSInfo(is_enabled=False), '9', False),
+ (FIPSInfo(is_enabled=True), '9', True),
+ (FIPSInfo(is_enabled=False), '10', False),
+ (FIPSInfo(is_enabled=True), '10', False),
])
-def test_check_fips(monkeypatch, current_actor_context, fips_info, target_major_version, should_inhibit):
+def test_check_fips(monkeypatch, current_actor_context, fips_info, target_major_version, should_produce):
monkeypatch.setattr(version, 'get_target_major_version', lambda: target_major_version)
+
current_actor_context.feed(fips_info)
current_actor_context.run()
- if should_inhibit:
- output = current_actor_context.consume(Report)
+
+ # no inhibitor in any case
+ assert not any(is_inhibitor(msg.report) for msg in current_actor_context.consume(Report))
+
+ output = current_actor_context.consume(UpgradeInitramfsTasks)
+ if should_produce:
assert len(output) == 1
- assert is_inhibitor(output[0].report)
+
+ expected_initramfs_files = [
+ '/etc/crypto-policies/back-ends/opensslcnf.config',
+ '/etc/pki/tls/openssl.cnf',
+ '/usr/lib64/ossl-modules/fips.so',
+ ]
+
+ assert output[0].include_files == expected_initramfs_files
+
+ assert len(output[0].include_dracut_modules) == 1
+ mod = output[0].include_dracut_modules[0]
+ assert isinstance(mod, DracutModule)
+ assert mod.name == "fips"
else:
- assert not any(is_inhibitor(msg.report) for msg in current_actor_context.consume(Report))
+ assert not output
--
2.52.0

View File

@ -1,86 +0,0 @@
From 418773c5ea5b6c47468d33f273ef0777fbbd0cef Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Thu, 21 Aug 2025 19:52:40 +0200
Subject: [PATCH 099/111] scangrubconfig: Comment out RHEL 7 config error
detection
The comment left there by ivasilev suggests that this could possibly be
used on newer RHEL versions too. Let's leave it commented out until it's
confirmed that it's not needed.
---
.../actors/scangrubconfig/libraries/scanner.py | 16 ++++++++--------
.../scangrubconfig/tests/test_scangrubconfig.py | 10 ++++------
2 files changed, 12 insertions(+), 14 deletions(-)
diff --git a/repos/system_upgrade/common/actors/scangrubconfig/libraries/scanner.py b/repos/system_upgrade/common/actors/scangrubconfig/libraries/scanner.py
index 86bba22b..ac4591ed 100644
--- a/repos/system_upgrade/common/actors/scangrubconfig/libraries/scanner.py
+++ b/repos/system_upgrade/common/actors/scangrubconfig/libraries/scanner.py
@@ -1,7 +1,7 @@
import os
import re
-from leapp.libraries.common.config import architecture, version
+from leapp.libraries.common.config import architecture
from leapp.models import GrubConfigError
@@ -57,13 +57,13 @@ def scan():
config = '/etc/default/grub'
# Check for GRUB_CMDLINE_LINUX syntax errors
# XXX FIXME(ivasilev) Can we make this check a common one? For now let's limit it to rhel7->rhel8 only
- if version.get_source_major_version() == '7':
- if not architecture.matches_architecture(architecture.ARCH_S390X):
- # For now, skip just s390x, that's only one that is failing now
- # because ZIPL is used there
- if detect_config_error(config):
- errors.append(GrubConfigError(error_detected=True, files=[config],
- error_type=GrubConfigError.ERROR_GRUB_CMDLINE_LINUX_SYNTAX))
+ # if version.get_source_major_version() == '7':
+ # if not architecture.matches_architecture(architecture.ARCH_S390X):
+ # # For now, skip just s390x, that's only one that is failing now
+ # # because ZIPL is used there
+ # if detect_config_error(config):
+ # errors.append(GrubConfigError(error_detected=True, files=[config],
+ # error_type=GrubConfigError.ERROR_GRUB_CMDLINE_LINUX_SYNTAX))
# Check for missing newline errors
if is_grub_config_missing_final_newline(config):
diff --git a/repos/system_upgrade/common/actors/scangrubconfig/tests/test_scangrubconfig.py b/repos/system_upgrade/common/actors/scangrubconfig/tests/test_scangrubconfig.py
index 926f0f27..be1b2cc6 100644
--- a/repos/system_upgrade/common/actors/scangrubconfig/tests/test_scangrubconfig.py
+++ b/repos/system_upgrade/common/actors/scangrubconfig/tests/test_scangrubconfig.py
@@ -4,7 +4,7 @@ import pytest
from leapp.libraries.actor import scanner
from leapp.libraries.common.config import architecture, version
-from leapp.models import GrubConfigError, Report
+from leapp.models import GrubConfigError
CUR_DIR = os.path.dirname(os.path.abspath(__file__))
@@ -24,18 +24,16 @@ def test_wrong_config_error_detection():
def test_all_errors_produced(current_actor_context, monkeypatch):
# Tell the actor we are not running on s390x
monkeypatch.setattr(architecture, 'matches_architecture', lambda _: False)
- monkeypatch.setattr(version, 'get_source_version', lambda: '7.9')
# Set that all checks failed
monkeypatch.setattr(scanner, 'is_grub_config_missing_final_newline', lambda _: True)
monkeypatch.setattr(scanner, 'is_grubenv_corrupted', lambda _: True)
monkeypatch.setattr(scanner, 'detect_config_error', lambda _: True)
# Run the actor
current_actor_context.run()
- # Check that exactly 3 messages of different types are produced
+ # Check that exactly 2 messages of different types are produced
errors = current_actor_context.consume(GrubConfigError)
- assert len(errors) == 3
- for err_type in [GrubConfigError.ERROR_MISSING_NEWLINE, GrubConfigError.ERROR_CORRUPTED_GRUBENV,
- GrubConfigError.ERROR_GRUB_CMDLINE_LINUX_SYNTAX]:
+ assert len(errors) == 2
+ for err_type in [GrubConfigError.ERROR_MISSING_NEWLINE, GrubConfigError.ERROR_CORRUPTED_GRUBENV]:
distinct_error = next((e for e in errors if e.error_type == err_type), None)
assert distinct_error
assert distinct_error.files
--
2.52.0

View File

@ -1,30 +0,0 @@
From 1ded7bc6b5107852e44db078ad0d27a75d1acb20 Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Thu, 21 Aug 2025 19:54:22 +0200
Subject: [PATCH 100/111] checkipaserver: Remove RHEL 7 article link
---
.../common/actors/checkipaserver/libraries/checkipaserver.py | 3 ---
1 file changed, 3 deletions(-)
diff --git a/repos/system_upgrade/common/actors/checkipaserver/libraries/checkipaserver.py b/repos/system_upgrade/common/actors/checkipaserver/libraries/checkipaserver.py
index 6a1c887c..60d4db86 100644
--- a/repos/system_upgrade/common/actors/checkipaserver/libraries/checkipaserver.py
+++ b/repos/system_upgrade/common/actors/checkipaserver/libraries/checkipaserver.py
@@ -1,13 +1,10 @@
from leapp import reporting
from leapp.libraries.common.config.version import get_source_major_version
-MIGRATION_GUIDE_7 = "https://red.ht/IdM-upgrading-RHEL-7-to-RHEL-8"
-
# TBD: update the doc url when migration guide 8->9 becomes available
MIGRATION_GUIDE_8 = "https://red.ht/IdM-upgrading-RHEL-8-to-RHEL-9"
MIGRATION_GUIDE_9 = "https://red.ht/IdM-upgrading-RHEL-9-to-RHEL-10"
MIGRATION_GUIDES = {
- '7': MIGRATION_GUIDE_7,
'8': MIGRATION_GUIDE_8,
'9': MIGRATION_GUIDE_9
}
--
2.52.0

View File

@ -1,126 +0,0 @@
From 082863d904b4e0c3cc5f160a28f41a02758fab63 Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Thu, 21 Aug 2025 19:06:16 +0200
Subject: [PATCH 101/111] checkluks: Remove RHEL 7 inhibitor and related code
---
.../common/actors/checkluks/actor.py | 3 +-
.../actors/checkluks/libraries/checkluks.py | 21 ------------
.../actors/checkluks/tests/test_checkluks.py | 32 -------------------
3 files changed, 1 insertion(+), 55 deletions(-)
diff --git a/repos/system_upgrade/common/actors/checkluks/actor.py b/repos/system_upgrade/common/actors/checkluks/actor.py
index 607fd040..2ea16985 100644
--- a/repos/system_upgrade/common/actors/checkluks/actor.py
+++ b/repos/system_upgrade/common/actors/checkluks/actor.py
@@ -9,9 +9,8 @@ class CheckLuks(Actor):
"""
Check if any encrypted partitions are in use and whether they are supported for the upgrade.
- Upgrading EL7 system with encrypted partition is not supported (but ceph OSDs).
For EL8+ it's ok if the discovered used encrypted storage has LUKS2 format
- and it's bounded to clevis-tpm2 token (so it can be automatically unlocked
+ and it's bound to clevis-tpm2 token (so it can be automatically unlocked
during the process).
"""
diff --git a/repos/system_upgrade/common/actors/checkluks/libraries/checkluks.py b/repos/system_upgrade/common/actors/checkluks/libraries/checkluks.py
index 84e8e61f..4626cf63 100644
--- a/repos/system_upgrade/common/actors/checkluks/libraries/checkluks.py
+++ b/repos/system_upgrade/common/actors/checkluks/libraries/checkluks.py
@@ -6,7 +6,6 @@ from leapp.models import (
CopyFile,
DracutModule,
LuksDumps,
- StorageInfo,
TargetUserSpaceUpgradeTasks,
UpgradeInitramfsTasks
)
@@ -35,21 +34,6 @@ def _get_ceph_volumes():
return ceph_info.encrypted_volumes[:] if ceph_info else []
-def apply_obsoleted_check_ipu_7_8():
- ceph_vol = _get_ceph_volumes()
- for storage_info in api.consume(StorageInfo):
- for blk in storage_info.lsblk:
- if blk.tp == 'crypt' and blk.name not in ceph_vol:
- create_report([
- reporting.Title('LUKS encrypted partition detected'),
- reporting.Summary('Upgrading system with encrypted partitions is not supported'),
- reporting.Severity(reporting.Severity.HIGH),
- reporting.Groups([reporting.Groups.BOOT, reporting.Groups.ENCRYPTION]),
- reporting.Groups([reporting.Groups.INHIBITOR]),
- ])
- break
-
-
def report_inhibitor(luks1_partitions, no_tpm2_partitions):
source_major_version = get_source_major_version()
clevis_doc_url = CLEVIS_DOC_URL_FMT.format(source_major_version)
@@ -119,11 +103,6 @@ def report_inhibitor(luks1_partitions, no_tpm2_partitions):
def check_invalid_luks_devices():
- if get_source_major_version() == '7':
- # NOTE: keeping unchanged behaviour for IPU 7 -> 8
- apply_obsoleted_check_ipu_7_8()
- return
-
luks_dumps = next(api.consume(LuksDumps), None)
if not luks_dumps:
api.current_logger().debug('No LUKS volumes detected. Skipping.')
diff --git a/repos/system_upgrade/common/actors/checkluks/tests/test_checkluks.py b/repos/system_upgrade/common/actors/checkluks/tests/test_checkluks.py
index d559b54c..13b8bc55 100644
--- a/repos/system_upgrade/common/actors/checkluks/tests/test_checkluks.py
+++ b/repos/system_upgrade/common/actors/checkluks/tests/test_checkluks.py
@@ -1,11 +1,3 @@
-"""
-Unit tests for inhibitwhenluks actor
-
-Skip isort as it's kind of broken when mixing grid import and one line imports
-
-isort:skip_file
-"""
-
from leapp.libraries.common.config import version
from leapp.models import (
CephInfo,
@@ -13,7 +5,6 @@ from leapp.models import (
LuksDump,
LuksDumps,
LuksToken,
- StorageInfo,
TargetUserSpaceUpgradeTasks,
UpgradeInitramfsTasks
)
@@ -148,26 +139,3 @@ LSBLK_ENTRY = LsblkEntry(
parent_name="",
parent_path=""
)
-
-
-def test_inhibitor_on_el7(monkeypatch, current_actor_context):
- # NOTE(pstodulk): consider it good enough as el7 stuff is going to be removed
- # soon.
- monkeypatch.setattr(version, 'get_source_major_version', lambda: '7')
-
- luks_dump = LuksDump(
- version=2,
- uuid='83050bd9-61c6-4ff0-846f-bfd3ac9bfc67',
- device_path='/dev/sda',
- device_name='sda',
- tokens=[LuksToken(token_id=0, keyslot=1, token_type='clevis-tpm2')])
- current_actor_context.feed(LuksDumps(dumps=[luks_dump]))
- current_actor_context.feed(CephInfo(encrypted_volumes=[]))
-
- current_actor_context.feed(StorageInfo(lsblk=[LSBLK_ENTRY]))
- current_actor_context.run()
- assert current_actor_context.consume(Report)
-
- report_fields = current_actor_context.consume(Report)[0].report
- assert is_inhibitor(report_fields)
- assert report_fields['title'] == 'LUKS encrypted partition detected'
--
2.52.0

View File

@ -1,37 +0,0 @@
From ece256fcc5f5e9e952a2a17377c1ea386135ae5b Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Thu, 21 Aug 2025 20:00:01 +0200
Subject: [PATCH 102/111] scankernel: Remove RHEL 7 kernel names
---
.../libraries/scankernel.py | 14 ++++----------
1 file changed, 4 insertions(+), 10 deletions(-)
diff --git a/repos/system_upgrade/common/actors/scaninstalledtargetkernelversion/libraries/scankernel.py b/repos/system_upgrade/common/actors/scaninstalledtargetkernelversion/libraries/scankernel.py
index 35683cca..76f13caf 100644
--- a/repos/system_upgrade/common/actors/scaninstalledtargetkernelversion/libraries/scankernel.py
+++ b/repos/system_upgrade/common/actors/scaninstalledtargetkernelversion/libraries/scankernel.py
@@ -20,16 +20,10 @@ def get_kernel_pkg_name(rhel_major_version, kernel_type):
:returns: Kernel package name
:rtype: str
"""
- if rhel_major_version == '7':
- kernel_pkg_name_table = {
- kernel_lib.KernelType.ORDINARY: 'kernel',
- kernel_lib.KernelType.REALTIME: 'kernel-rt'
- }
- else:
- kernel_pkg_name_table = {
- kernel_lib.KernelType.ORDINARY: 'kernel-core',
- kernel_lib.KernelType.REALTIME: 'kernel-rt-core'
- }
+ kernel_pkg_name_table = {
+ kernel_lib.KernelType.ORDINARY: 'kernel-core',
+ kernel_lib.KernelType.REALTIME: 'kernel-rt-core'
+ }
return kernel_pkg_name_table[kernel_type]
--
2.52.0

View File

@ -1,175 +0,0 @@
From a83309de2b633ccb35a16b2c0ccca6d1f317c6ae Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Thu, 21 Aug 2025 19:59:07 +0200
Subject: [PATCH 103/111] repomap lib: Drop RHEL 7 default PESID
---
.../libraries/peseventsscanner_repomap.py | 1 -
.../tests/test_pes_event_scanner.py | 44 +++++++++++--------
.../libraries/setuptargetrepos_repomap.py | 1 -
.../tests/test_repomapping.py | 20 ++++-----
4 files changed, 35 insertions(+), 31 deletions(-)
diff --git a/repos/system_upgrade/common/actors/peseventsscanner/libraries/peseventsscanner_repomap.py b/repos/system_upgrade/common/actors/peseventsscanner/libraries/peseventsscanner_repomap.py
index abd35e0b..b1e46903 100644
--- a/repos/system_upgrade/common/actors/peseventsscanner/libraries/peseventsscanner_repomap.py
+++ b/repos/system_upgrade/common/actors/peseventsscanner/libraries/peseventsscanner_repomap.py
@@ -3,7 +3,6 @@ from leapp.libraries.common.config.version import get_source_major_version, get_
from leapp.libraries.stdlib import api
DEFAULT_PESID = {
- '7': 'rhel7-base',
'8': 'rhel8-BaseOS',
'9': 'rhel9-BaseOS',
'10': 'rhel10-BaseOS'
diff --git a/repos/system_upgrade/common/actors/peseventsscanner/tests/test_pes_event_scanner.py b/repos/system_upgrade/common/actors/peseventsscanner/tests/test_pes_event_scanner.py
index f67f3840..c8c14528 100644
--- a/repos/system_upgrade/common/actors/peseventsscanner/tests/test_pes_event_scanner.py
+++ b/repos/system_upgrade/common/actors/peseventsscanner/tests/test_pes_event_scanner.py
@@ -214,22 +214,22 @@ def test_actor_performs(monkeypatch):
events = [
Event(1, Action.SPLIT,
- {Pkg('split-in', 'rhel7-base')},
- {Pkg('split-out0', 'rhel8-BaseOS'), Pkg('split-out1', 'rhel8-BaseOS')},
- (7, 9), (8, 0), []),
+ {Pkg('split-in', 'rhel8-BaseOS')},
+ {Pkg('split-out0', 'rhel9-Baseos'), Pkg('split-out1', 'rhel9-Baseos')},
+ (8, 10), (9, 0), []),
Event(2, Action.MERGED,
- {Pkg('split-out0', 'rhel8-BaseOS'), Pkg('split-out1', 'rhel8-BaseOS')},
- {Pkg('merged-out', 'rhel8-BaseOS')},
- (8, 0), (8, 1), []),
+ {Pkg('split-out0', 'rhel9-Baseos'), Pkg('split-out1', 'rhel9-Baseos')},
+ {Pkg('merged-out', 'rhel9-Baseos')},
+ (9, 0), (9, 1), []),
Event(3, Action.MOVED,
- {Pkg('moved-in', 'rhel7-base')}, {Pkg('moved-out', 'rhel8-BaseOS')},
- (7, 9), (8, 0), []),
+ {Pkg('moved-in', 'rhel8-BaseOS')}, {Pkg('moved-out', 'rhel9-Baseos')},
+ (8, 10), (9, 0), []),
Event(4, Action.REMOVED,
- {Pkg('removed', 'rhel7-base')}, set(),
- (8, 0), (8, 1), []),
+ {Pkg('removed', 'rhel8-BaseOS')}, set(),
+ (9, 0), (9, 1), []),
Event(5, Action.DEPRECATED,
- {Pkg('irrelevant', 'rhel7-base')}, set(),
- (8, 0), (8, 1), []),
+ {Pkg('irrelevant', 'rhel8-BaseOS')}, set(),
+ (9, 0), (9, 1), []),
]
monkeypatch.setattr(pes_events_scanner, 'get_pes_events', lambda data_folder, json_filename: events)
@@ -242,23 +242,29 @@ def test_actor_performs(monkeypatch):
repositories_mapping = RepositoriesMapping(
mapping=[
- RepoMapEntry(source='rhel7-base', target=['rhel8-BaseOS'], ),
+ RepoMapEntry(source='rhel8-BaseOS', target=['rhel9-Baseos'], ),
],
repositories=[
- PESIDRepositoryEntry(pesid='rhel7-base', major_version='7', repoid='rhel7-repo', arch='x86_64',
- repo_type='rpm', channel='ga', rhui='', distro='rhel'),
PESIDRepositoryEntry(pesid='rhel8-BaseOS', major_version='8', repoid='rhel8-repo', arch='x86_64',
+ repo_type='rpm', channel='ga', rhui='', distro='rhel'),
+ PESIDRepositoryEntry(pesid='rhel9-Baseos', major_version='9', repoid='rhel9-repo', arch='x86_64',
repo_type='rpm', channel='ga', rhui='', distro='rhel')]
)
enabled_modules = EnabledModules(modules=[])
repo_facts = RepositoriesFacts(
- repositories=[RepositoryFile(file='', data=[RepositoryData(repoid='rhel7-repo', name='RHEL7 repo')])]
+ repositories=[RepositoryFile(file='', data=[RepositoryData(repoid='rhel8-repo', name='RHEL8 repo')])]
)
- monkeypatch.setattr(api, 'current_actor',
- CurrentActorMocked(msgs=[installed_pkgs, repositories_mapping, enabled_modules, repo_facts],
- src_ver='7.9', dst_ver='8.1'))
+ monkeypatch.setattr(
+ api,
+ "current_actor",
+ CurrentActorMocked(
+ msgs=[installed_pkgs, repositories_mapping, enabled_modules, repo_facts],
+ src_ver="8.10",
+ dst_ver="9.1",
+ ),
+ )
produced_messages = produce_mocked()
created_report = create_report_mocked()
diff --git a/repos/system_upgrade/common/actors/setuptargetrepos/libraries/setuptargetrepos_repomap.py b/repos/system_upgrade/common/actors/setuptargetrepos/libraries/setuptargetrepos_repomap.py
index 3286609d..763eddc6 100644
--- a/repos/system_upgrade/common/actors/setuptargetrepos/libraries/setuptargetrepos_repomap.py
+++ b/repos/system_upgrade/common/actors/setuptargetrepos/libraries/setuptargetrepos_repomap.py
@@ -3,7 +3,6 @@ from leapp.libraries.common.config.version import get_source_major_version, get_
from leapp.libraries.stdlib import api
DEFAULT_PESID = {
- '7': 'rhel7-base',
'8': 'rhel8-BaseOS',
'9': 'rhel9-BaseOS',
'10': 'rhel10-BaseOS'
diff --git a/repos/system_upgrade/common/actors/setuptargetrepos/tests/test_repomapping.py b/repos/system_upgrade/common/actors/setuptargetrepos/tests/test_repomapping.py
index 30c415c0..32af8609 100644
--- a/repos/system_upgrade/common/actors/setuptargetrepos/tests/test_repomapping.py
+++ b/repos/system_upgrade/common/actors/setuptargetrepos/tests/test_repomapping.py
@@ -689,14 +689,14 @@ def test_get_default_repository_channels_simple(monkeypatch):
where there is only one repository enabled from the pesid family in which are
the default repositories searched in.
"""
- monkeypatch.setattr(api, 'current_actor', CurrentActorMocked(arch='x86_64', src_ver='7.9', dst_ver='8.4'))
+ monkeypatch.setattr(api, 'current_actor', CurrentActorMocked(arch='x86_64', src_ver='8.10', dst_ver='9.6'))
repository_mapping = RepositoriesMapping(
mapping=[],
- repositories=[make_pesid_repo('rhel7-base', '7', 'rhel7-repoid-ga', channel='ga')]
+ repositories=[make_pesid_repo('rhel8-BaseOS', '8', 'rhel8-repoid-ga', channel='ga')]
)
handler = RepoMapDataHandler(repository_mapping)
- assert ['ga'] == get_default_repository_channels(handler, ['rhel7-repoid-ga'])
+ assert ['ga'] == get_default_repository_channels(handler, ['rhel8-repoid-ga'])
def test_get_default_repository_channels_highest_priority_channel(monkeypatch):
@@ -706,17 +706,17 @@ def test_get_default_repository_channels_highest_priority_channel(monkeypatch):
Verifies that the returned list contains the highest priority channel if there is a repository
with the channel enabled on the source system.
"""
- monkeypatch.setattr(api, 'current_actor', CurrentActorMocked(arch='x86_64', src_ver='7.9', dst_ver='8.4'))
+ monkeypatch.setattr(api, 'current_actor', CurrentActorMocked(arch='x86_64', src_ver='8.10', dst_ver='9.6'))
repository_mapping = RepositoriesMapping(
mapping=[],
repositories=[
- make_pesid_repo('rhel7-base', '7', 'rhel7-repoid-ga', channel='ga'),
- make_pesid_repo('rhel7-base', '7', 'rhel7-repoid-eus', channel='eus'),
+ make_pesid_repo('rhel8-BaseOS', '8', 'rhel8-repoid-ga', channel='ga'),
+ make_pesid_repo('rhel8-BaseOS', '8', 'rhel8-repoid-eus', channel='eus'),
]
)
handler = RepoMapDataHandler(repository_mapping)
- assert ['eus', 'ga'] == get_default_repository_channels(handler, ['rhel7-repoid-ga', 'rhel7-repoid-eus'])
+ assert ['eus', 'ga'] == get_default_repository_channels(handler, ['rhel8-repoid-ga', 'rhel8-repoid-eus'])
def test_get_default_repository_channels_no_default_pesid_repo(monkeypatch):
@@ -726,12 +726,12 @@ def test_get_default_repository_channels_no_default_pesid_repo(monkeypatch):
Verifies that the returned list contains some fallback channel even if no repository from the default
pesid family in which are the channels searched is enabled.
"""
- monkeypatch.setattr(api, 'current_actor', CurrentActorMocked(arch='x86_64', src_ver='7.9', dst_ver='8.4'))
+ monkeypatch.setattr(api, 'current_actor', CurrentActorMocked(arch='x86_64', src_ver='8.10', dst_ver='9.6'))
repository_mapping = RepositoriesMapping(
mapping=[],
repositories=[
- make_pesid_repo('rhel7-base', '7', 'rhel7-repoid-ga', channel='ga'),
- make_pesid_repo('rhel7-base', '7', 'rhel7-repoid-eus', channel='eus'),
+ make_pesid_repo('rhel8-BaseOS', '8', 'rhel8-repoid-ga', channel='ga'),
+ make_pesid_repo('rhel8-BaseOS', '8', 'rhel8-repoid-eus', channel='eus'),
]
)
handler = RepoMapDataHandler(repository_mapping)
--
2.52.0

View File

@ -1,186 +0,0 @@
From 656e301fe7749a77f7f4a5c652610ac27105dd33 Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Thu, 21 Aug 2025 19:50:32 +0200
Subject: [PATCH 104/111] scanpkgmanager: Remove yum-related code
---
.../scanpkgmanager/libraries/pluginscanner.py | 21 ++++++-------------
.../libraries/scanpkgmanager.py | 13 ++++--------
.../tests/test_pluginscanner.py | 16 ++------------
.../tests/test_scanpkgmanager.py | 11 +++-------
4 files changed, 15 insertions(+), 46 deletions(-)
diff --git a/repos/system_upgrade/common/actors/scanpkgmanager/libraries/pluginscanner.py b/repos/system_upgrade/common/actors/scanpkgmanager/libraries/pluginscanner.py
index 7bb03996..f83050ee 100644
--- a/repos/system_upgrade/common/actors/scanpkgmanager/libraries/pluginscanner.py
+++ b/repos/system_upgrade/common/actors/scanpkgmanager/libraries/pluginscanner.py
@@ -1,6 +1,5 @@
import re
-from leapp.libraries.common.config.version import get_source_major_version
from leapp.libraries.stdlib import run
# When the output spans multiple lines, each of the lines after the first one
@@ -50,7 +49,7 @@ def _parse_loaded_plugins(package_manager_output):
def scan_enabled_package_manager_plugins():
"""
- Runs package manager (yum/dnf) command and parses its output for enabled/loaded plugins.
+ Runs package manager (dnf) command and parses its output for enabled/loaded plugins.
:return: A list of enabled plugins.
:rtype: List
@@ -60,16 +59,8 @@ def scan_enabled_package_manager_plugins():
# An alternative approach would be to check the install path for package manager plugins
# and parse corresponding plugin configuration files.
- if get_source_major_version() == '7':
- # in case of yum, set debuglevel=2 to be sure the output is always
- # same. The format of data is different for various debuglevels
- cmd = ['yum', '--setopt=debuglevel=2']
- else:
- # the verbose mode in dnf always set particular debuglevel, so the
- # output is not affected by the default debug level set on the
- # system
- cmd = ['dnf', '-v'] # On RHEL8 we need to supply an extra switch
-
- pkg_manager_output = run(cmd, split=True, checked=False) # The command will certainly fail (does not matter).
-
- return _parse_loaded_plugins(pkg_manager_output)
+ # the verbose mode in dnf always set particular debuglevel, so the
+ # output is not affected by the default debug level set on the
+ # system
+ output = run(['dnf', '-v'], split=True, checked=False) # The command will certainly fail (does not matter).
+ return _parse_loaded_plugins(output)
diff --git a/repos/system_upgrade/common/actors/scanpkgmanager/libraries/scanpkgmanager.py b/repos/system_upgrade/common/actors/scanpkgmanager/libraries/scanpkgmanager.py
index bf7ec0be..2fcac423 100644
--- a/repos/system_upgrade/common/actors/scanpkgmanager/libraries/scanpkgmanager.py
+++ b/repos/system_upgrade/common/actors/scanpkgmanager/libraries/scanpkgmanager.py
@@ -2,17 +2,13 @@ import os
import re
from leapp.libraries.actor import pluginscanner
-from leapp.libraries.common.config.version import get_source_major_version
from leapp.libraries.stdlib import api
from leapp.models import PkgManagerInfo
YUM_CONFIG_PATH = '/etc/yum.conf'
DNF_CONFIG_PATH = '/etc/dnf/dnf.conf'
-
-def _get_releasever_path():
- default_manager = 'yum' if get_source_major_version() == '7' else 'dnf'
- return '/etc/{}/vars/releasever'.format(default_manager)
+RELEASEVER_PATH = '/etc/dnf/vars/releasever'
def _releasever_exists(releasever_path):
@@ -20,13 +16,12 @@ def _releasever_exists(releasever_path):
def get_etc_releasever():
- """ Get release version from "/etc/{yum,dnf}/vars/releasever" file """
+ """ Get release version from "/etc/dnf/vars/releasever" file """
- releasever_path = _get_releasever_path()
- if not _releasever_exists(releasever_path):
+ if not _releasever_exists(RELEASEVER_PATH):
return None
- with open(releasever_path, 'r') as fo:
+ with open(RELEASEVER_PATH, 'r') as fo:
# we care about the first line only
releasever = fo.readline().strip()
diff --git a/repos/system_upgrade/common/actors/scanpkgmanager/tests/test_pluginscanner.py b/repos/system_upgrade/common/actors/scanpkgmanager/tests/test_pluginscanner.py
index f0260e54..0b2bd5b7 100644
--- a/repos/system_upgrade/common/actors/scanpkgmanager/tests/test_pluginscanner.py
+++ b/repos/system_upgrade/common/actors/scanpkgmanager/tests/test_pluginscanner.py
@@ -21,18 +21,11 @@ def assert_plugins_identified_as_enabled(expected_plugins, identified_plugins):
assert expected_enabled_plugin in identified_plugins, fail_description
-@pytest.mark.parametrize(
- ('source_major_version', 'command'),
- [
- ('7', ['yum', '--setopt=debuglevel=2']),
- ('8', ['dnf', '-v']),
- ]
-)
-def test_scan_enabled_plugins(monkeypatch, source_major_version, command):
+def test_scan_enabled_plugins(monkeypatch):
"""Tests whether the enabled plugins are correctly retrieved from the package manager output."""
def run_mocked(cmd, **kwargs):
- if cmd == command:
+ if cmd == ['dnf', '-v']:
return {
'stdout': CMD_YUM_OUTPUT.split('\n'),
'stderr': 'You need to give some command',
@@ -40,13 +33,9 @@ def test_scan_enabled_plugins(monkeypatch, source_major_version, command):
}
raise ValueError('Tried to run an unexpected command.')
- def get_source_major_version_mocked():
- return source_major_version
-
# The library imports `run` all the way into its namespace (from ...stdlib import run),
# we must overwrite it there then:
monkeypatch.setattr(pluginscanner, 'run', run_mocked)
- monkeypatch.setattr(pluginscanner, 'get_source_major_version', get_source_major_version_mocked)
enabled_plugins = pluginscanner.scan_enabled_package_manager_plugins()
assert_plugins_identified_as_enabled(
@@ -72,7 +61,6 @@ def test_yum_loaded_plugins_multiline_output(yum_output, monkeypatch):
}
monkeypatch.setattr(pluginscanner, 'run', run_mocked)
- monkeypatch.setattr(pluginscanner, 'get_source_major_version', lambda: '7')
enabled_plugins = pluginscanner.scan_enabled_package_manager_plugins()
diff --git a/repos/system_upgrade/common/actors/scanpkgmanager/tests/test_scanpkgmanager.py b/repos/system_upgrade/common/actors/scanpkgmanager/tests/test_scanpkgmanager.py
index 75c5c5ba..dc94060a 100644
--- a/repos/system_upgrade/common/actors/scanpkgmanager/tests/test_scanpkgmanager.py
+++ b/repos/system_upgrade/common/actors/scanpkgmanager/tests/test_scanpkgmanager.py
@@ -2,15 +2,12 @@ import os
import pytest
-from leapp.libraries import stdlib
from leapp.libraries.actor import pluginscanner, scanpkgmanager
-from leapp.libraries.common import testutils
from leapp.libraries.common.testutils import CurrentActorMocked, produce_mocked
from leapp.libraries.stdlib import api
CUR_DIR = os.path.dirname(os.path.abspath(__file__))
PROXY_ADDRESS = 'https://192.168.121.123:3128'
-YUM_CONFIG_PATH = '/etc/yum.conf'
DNF_CONFIG_PATH = '/etc/dnf/dnf.conf'
@@ -22,10 +19,6 @@ def mock_releasever_exists(overrides):
return mocked_releasever_exists
-def mocked_get_releasever_path():
- return os.path.join(CUR_DIR, 'files/releasever')
-
-
@pytest.mark.parametrize('etcrelease_exists', [True, False])
def test_get_etcreleasever(monkeypatch, etcrelease_exists):
monkeypatch.setattr(
@@ -38,7 +31,9 @@ def test_get_etcreleasever(monkeypatch, etcrelease_exists):
)
monkeypatch.setattr(scanpkgmanager.api, 'produce', produce_mocked())
monkeypatch.setattr(scanpkgmanager.api, 'current_actor', CurrentActorMocked())
- monkeypatch.setattr(scanpkgmanager, '_get_releasever_path', mocked_get_releasever_path)
+ monkeypatch.setattr(
+ scanpkgmanager, 'RELEASEVER_PATH', os.path.join(CUR_DIR, 'files/releasever')
+ )
monkeypatch.setattr(scanpkgmanager, '_get_proxy_if_set', lambda x: None)
monkeypatch.setattr(pluginscanner, 'scan_enabled_package_manager_plugins', lambda: [])
--
2.52.0
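
The comment in the test above ("The library imports `run` all the way into its namespace ... we must overwrite it there") points at a common monkeypatching pitfall worth spelling out: because pluginscanner does `from ...stdlib import run`, the test has to patch the name inside the scanner module, not in the stdlib module it was imported from. A minimal, self-contained sketch of that pattern under pytest; the module and function names here are stand-ins, not the actual leapp code:

import types

# Stand-in for leapp.libraries.stdlib; the scanner imports `run` from it at import
# time, which binds the name into the scanner's own namespace.
stdlib_demo = types.ModuleType('stdlib_demo')

def _real_run(cmd, **kwargs):
    raise RuntimeError('tests must not execute real package manager commands')

stdlib_demo.run = _real_run

pluginscanner_demo = types.ModuleType('pluginscanner_demo')
pluginscanner_demo.run = stdlib_demo.run  # effect of `from stdlib_demo import run`

def _scan_enabled_plugins():
    out = pluginscanner_demo.run(['dnf', '-v'])
    for line in out['stdout']:
        if line.startswith('Loaded plugins: '):
            return line[len('Loaded plugins: '):].split(', ')
    return []

pluginscanner_demo.scan_enabled_package_manager_plugins = _scan_enabled_plugins

def test_scan_enabled_plugins(monkeypatch):
    def run_mocked(cmd, **kwargs):
        assert cmd == ['dnf', '-v']
        return {'stdout': ['Loaded plugins: product-id, subscription-manager']}
    # Patching stdlib_demo.run would have no effect: the scanner module already holds
    # its own reference, so the patch has to target the scanner's namespace.
    monkeypatch.setattr(pluginscanner_demo, 'run', run_mocked)
    assert pluginscanner_demo.scan_enabled_package_manager_plugins() == [
        'product-id', 'subscription-manager']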

View File

@ -1,185 +0,0 @@
From c746806784c06fccac28a3e92578fa9abf9e9a1a Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Thu, 21 Aug 2025 18:59:01 +0200
Subject: [PATCH 105/111] checkyumpluginsenabled: Drop yum-related code
The name of the actor is kept for backwards compatibility.
---
.../actors/checkyumpluginsenabled/actor.py | 7 +--
.../libraries/checkyumpluginsenabled.py | 51 ++++++++-----------
.../tests/test_checkyumpluginsenabled.py | 14 +++--
3 files changed, 32 insertions(+), 40 deletions(-)
diff --git a/repos/system_upgrade/common/actors/checkyumpluginsenabled/actor.py b/repos/system_upgrade/common/actors/checkyumpluginsenabled/actor.py
index fbc2f8bc..c5a4853a 100644
--- a/repos/system_upgrade/common/actors/checkyumpluginsenabled/actor.py
+++ b/repos/system_upgrade/common/actors/checkyumpluginsenabled/actor.py
@@ -1,13 +1,14 @@
from leapp.actors import Actor
-from leapp.libraries.actor.checkyumpluginsenabled import check_required_yum_plugins_enabled
+from leapp.libraries.actor.checkyumpluginsenabled import check_required_dnf_plugins_enabled
from leapp.models import PkgManagerInfo
from leapp.reporting import Report
from leapp.tags import ChecksPhaseTag, IPUWorkflowTag
+# NOTE: the name is kept for backwards compatibility, even though this scans only DNF now
class CheckYumPluginsEnabled(Actor):
"""
- Checks that the required yum plugins are enabled.
+ Checks that the required DNF plugins are enabled.
"""
name = 'check_yum_plugins_enabled'
@@ -17,4 +18,4 @@ class CheckYumPluginsEnabled(Actor):
def process(self):
pkg_manager_info = next(self.consume(PkgManagerInfo))
- check_required_yum_plugins_enabled(pkg_manager_info)
+ check_required_dnf_plugins_enabled(pkg_manager_info)
diff --git a/repos/system_upgrade/common/actors/checkyumpluginsenabled/libraries/checkyumpluginsenabled.py b/repos/system_upgrade/common/actors/checkyumpluginsenabled/libraries/checkyumpluginsenabled.py
index 5522af9c..87ff6511 100644
--- a/repos/system_upgrade/common/actors/checkyumpluginsenabled/libraries/checkyumpluginsenabled.py
+++ b/repos/system_upgrade/common/actors/checkyumpluginsenabled/libraries/checkyumpluginsenabled.py
@@ -1,25 +1,24 @@
import os
from leapp import reporting
-from leapp.libraries.common.config.version import get_source_major_version
from leapp.libraries.common.rhsm import skip_rhsm
# If LEAPP_NO_RHSM is set, subscription-manager and product-id will not be
# considered as required when checking whether the required plugins are enabled.
-REQUIRED_YUM_PLUGINS = {'subscription-manager', 'product-id'}
+REQUIRED_DNF_PLUGINS = {'subscription-manager', 'product-id'}
FMT_LIST_SEPARATOR = '\n - '
-def check_required_yum_plugins_enabled(pkg_manager_info):
+def check_required_dnf_plugins_enabled(pkg_manager_info):
"""
- Checks whether the yum plugins required by the IPU are enabled.
+ Checks whether the DNF plugins required by the IPU are enabled.
If they are not enabled, a report is produced informing the user about it.
:param pkg_manager_info: PkgManagerInfo
"""
- missing_required_plugins = REQUIRED_YUM_PLUGINS - set(pkg_manager_info.enabled_plugins)
+ missing_required_plugins = REQUIRED_DNF_PLUGINS - set(pkg_manager_info.enabled_plugins)
if skip_rhsm():
missing_required_plugins -= {'subscription-manager', 'product-id'}
@@ -29,37 +28,30 @@ def check_required_yum_plugins_enabled(pkg_manager_info):
for missing_plugin in missing_required_plugins:
missing_required_plugins_text += '{0}{1}'.format(FMT_LIST_SEPARATOR, missing_plugin)
- if get_source_major_version() == '7':
- pkg_manager = 'YUM'
- pkg_manager_config_path = '/etc/yum.conf'
- plugin_configs_dir = '/etc/yum/pluginconf.d'
- else:
- # On RHEL8+ the yum package might not be installed
- pkg_manager = 'DNF'
- pkg_manager_config_path = '/etc/dnf/dnf.conf'
- plugin_configs_dir = '/etc/dnf/plugins'
-
- # pkg_manager_config_path - enable/disable plugins globally
- # subscription_manager_plugin_conf, product_id_plugin_conf - plugins can be disabled individually
- subscription_manager_plugin_conf = os.path.join(plugin_configs_dir, 'subscription-manager.conf')
+ # dnf_conf_path - enable/disable plugins globally
+ # rhsm_plugin_conf, product_id_plugin_conf - plugins can be disabled individually
+ dnf_conf_path = '/etc/dnf/dnf.conf'
+ plugin_configs_dir = '/etc/dnf/plugins'
+ rhsm_plugin_conf = os.path.join(plugin_configs_dir, 'subscription-manager.conf')
product_id_plugin_conf = os.path.join(plugin_configs_dir, 'product-id.conf')
remediation_commands = [
- 'sed -i \'s/^plugins=0/plugins=1/\' \'{0}\''.format(pkg_manager_config_path),
- 'sed -i \'s/^enabled=0/enabled=1/\' \'{0}\''.format(subscription_manager_plugin_conf),
- 'sed -i \'s/^enabled=0/enabled=1/\' \'{0}\''.format(product_id_plugin_conf)
+ f"sed -i 's/^plugins=0/plugins=1/' '{dnf_conf_path}'"
+ f"sed -i 's/^enabled=0/enabled=1/' '{rhsm_plugin_conf}'"
+ f"sed -i 's/^enabled=0/enabled=1/' '{product_id_plugin_conf}'"
]
reporting.create_report([
- reporting.Title('Required {0} plugins are not being loaded.'.format(pkg_manager)),
+ reporting.Title('Required DNF plugins are not being loaded.'),
reporting.Summary(
- 'The following {0} plugins are not being loaded: {1}'.format(pkg_manager,
- missing_required_plugins_text)
+ 'The following DNF plugins are not being loaded: {}'.format(missing_required_plugins_text)
),
reporting.Remediation(
- hint='If you have yum plugins globally disabled, please enable them by editing the {0}. '
- 'Individually, the {1} plugins can be enabled in their corresponding configurations found at: {2}'
- .format(pkg_manager_config_path, pkg_manager, plugin_configs_dir),
+ hint=(
+ 'If you have DNF plugins globally disabled, please enable them by editing the {0}. '
+ 'Individually, the DNF plugins can be enabled in their corresponding configurations found at: {1}'
+ .format(dnf_conf_path, plugin_configs_dir)
+ ),
# Provide all commands as one due to problems with satellites
commands=[['bash', '-c', '"{0}"'.format('; '.join(remediation_commands))]]
),
@@ -67,10 +59,11 @@ def check_required_yum_plugins_enabled(pkg_manager_info):
url='https://access.redhat.com/solutions/7028063',
title='Why is Leapp preupgrade generating "Inhibitor: Required YUM plugins are not being loaded."'
),
- reporting.RelatedResource('file', pkg_manager_config_path),
- reporting.RelatedResource('file', subscription_manager_plugin_conf),
+ reporting.RelatedResource('file', dnf_conf_path),
+ reporting.RelatedResource('file', rhsm_plugin_conf),
reporting.RelatedResource('file', product_id_plugin_conf),
reporting.Severity(reporting.Severity.HIGH),
reporting.Groups([reporting.Groups.INHIBITOR]),
reporting.Groups([reporting.Groups.REPOSITORY]),
+ reporting.Key("2a0ff91bea885cfe9d763cf3a379789848a501b9"),
])
diff --git a/repos/system_upgrade/common/actors/checkyumpluginsenabled/tests/test_checkyumpluginsenabled.py b/repos/system_upgrade/common/actors/checkyumpluginsenabled/tests/test_checkyumpluginsenabled.py
index 9bf9a3ba..1f7e916c 100644
--- a/repos/system_upgrade/common/actors/checkyumpluginsenabled/tests/test_checkyumpluginsenabled.py
+++ b/repos/system_upgrade/common/actors/checkyumpluginsenabled/tests/test_checkyumpluginsenabled.py
@@ -1,7 +1,5 @@
-import pytest
-
from leapp import reporting
-from leapp.libraries.actor.checkyumpluginsenabled import check_required_yum_plugins_enabled
+from leapp.libraries.actor.checkyumpluginsenabled import check_required_dnf_plugins_enabled
from leapp.libraries.common.testutils import create_report_mocked, CurrentActorMocked
from leapp.libraries.stdlib import api
from leapp.models import PkgManagerInfo
@@ -37,15 +35,15 @@ def test__create_report_mocked(monkeypatch):
def test_report_when_missing_required_plugins(monkeypatch):
- """Test whether a report entry is created when any of the required YUM plugins are missing."""
- yum_config = PkgManagerInfo(enabled_plugins=['product-id', 'some-user-plugin'])
+ """Test whether a report entry is created when any of the required DNF plugins are missing."""
+ dnf_config = PkgManagerInfo(enabled_plugins=['product-id', 'some-user-plugin'])
actor_reports = create_report_mocked()
monkeypatch.setattr(api, 'current_actor', CurrentActorMocked())
monkeypatch.setattr(reporting, 'create_report', actor_reports)
- check_required_yum_plugins_enabled(yum_config)
+ check_required_dnf_plugins_enabled(dnf_config)
    assert actor_reports.called, "Report wasn't created when a required plugin is missing."
@@ -62,7 +60,7 @@ def test_nothing_is_reported_when_rhsm_disabled(monkeypatch):
monkeypatch.setattr(api, 'current_actor', actor_mocked)
monkeypatch.setattr(reporting, 'create_report', create_report_mocked())
- yum_config = PkgManagerInfo(enabled_plugins=[])
- check_required_yum_plugins_enabled(yum_config)
+ dnf_config = PkgManagerInfo(enabled_plugins=[])
+ check_required_dnf_plugins_enabled(dnf_config)
assert not reporting.create_report.called, 'Report was created even if LEAPP_NO_RHSM was set'
--
2.52.0
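
As context for the remediation block above (and why the list items must stay separate strings), a rough sketch of how the three sed commands end up as a single bash invocation; the paths mirror the library above, but this is an illustration, not the patch itself:

# hypothetical standalone reproduction of the remediation assembly above
dnf_conf_path = '/etc/dnf/dnf.conf'
plugin_configs_dir = '/etc/dnf/plugins'
rhsm_plugin_conf = plugin_configs_dir + '/subscription-manager.conf'
product_id_plugin_conf = plugin_configs_dir + '/product-id.conf'

remediation_commands = [
    f"sed -i 's/^plugins=0/plugins=1/' '{dnf_conf_path}'",
    f"sed -i 's/^enabled=0/enabled=1/' '{rhsm_plugin_conf}'",
    f"sed -i 's/^enabled=0/enabled=1/' '{product_id_plugin_conf}'",
]

# All commands are reported as one shell invocation ("due to problems with satellites").
# The separating commas matter: without them Python would silently concatenate the
# adjacent f-strings into a single malformed sed command.
command = ['bash', '-c', '"{0}"'.format('; '.join(remediation_commands))]
print(command)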

View File

@ -1,82 +0,0 @@
From f543432d31e3c84ea98fff348f23641a855acbc3 Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Thu, 21 Aug 2025 19:49:42 +0200
Subject: [PATCH 106/111] scancryptopolicies: Remove RHEL 7 early return
---
.../common/actors/scancryptopolicies/actor.py | 5 ----
.../component_test_scancryptopolicies.py | 25 +++++--------------
2 files changed, 6 insertions(+), 24 deletions(-)
diff --git a/repos/system_upgrade/common/actors/scancryptopolicies/actor.py b/repos/system_upgrade/common/actors/scancryptopolicies/actor.py
index 6f871243..dc695bc3 100644
--- a/repos/system_upgrade/common/actors/scancryptopolicies/actor.py
+++ b/repos/system_upgrade/common/actors/scancryptopolicies/actor.py
@@ -1,6 +1,5 @@
from leapp.actors import Actor
from leapp.libraries.actor import scancryptopolicies
-from leapp.libraries.common.config import version
from leapp.models import CryptoPolicyInfo
from leapp.tags import FactsPhaseTag, IPUWorkflowTag
@@ -24,8 +23,4 @@ class ScanCryptoPolicies(Actor):
tags = (IPUWorkflowTag, FactsPhaseTag)
def process(self):
- if version.get_source_major_version() == '7':
- # there are no crypto policies in EL 7
- return
-
scancryptopolicies.process()
diff --git a/repos/system_upgrade/common/actors/scancryptopolicies/tests/component_test_scancryptopolicies.py b/repos/system_upgrade/common/actors/scancryptopolicies/tests/component_test_scancryptopolicies.py
index 06029734..1f745574 100644
--- a/repos/system_upgrade/common/actors/scancryptopolicies/tests/component_test_scancryptopolicies.py
+++ b/repos/system_upgrade/common/actors/scancryptopolicies/tests/component_test_scancryptopolicies.py
@@ -1,18 +1,10 @@
import os
-import pytest
-
from leapp.libraries.actor import scancryptopolicies
-from leapp.libraries.common.config import version
from leapp.models import CryptoPolicyInfo
-@pytest.mark.parametrize(('source_version', 'should_run'), [
- ('7', False),
- ('8', True),
- ('9', True),
-])
-def test_actor_execution(monkeypatch, current_actor_context, source_version, should_run):
+def test_actor_execution(monkeypatch, current_actor_context):
def read_current_policy_mock(filename):
return "DEFAULT_XXX"
@@ -28,19 +20,14 @@ def test_actor_execution(monkeypatch, current_actor_context, source_version, sho
return _original_listdir(path)
def isfile_mock(filename):
- if filename.endswith('/modules'):
- return False
- return True
+ return not filename.endswith('/modules')
- monkeypatch.setattr(version, 'get_source_major_version', lambda: source_version)
monkeypatch.setattr(scancryptopolicies, 'read_current_policy', read_current_policy_mock)
_original_listdir = os.listdir
monkeypatch.setattr(os, 'listdir', listdir_mock)
monkeypatch.setattr(os.path, 'isfile', isfile_mock)
current_actor_context.run()
- if should_run:
- cpi = current_actor_context.consume(CryptoPolicyInfo)
- assert cpi
- assert cpi[0].current_policy == 'DEFAULT_XXX'
- else:
- assert not current_actor_context.consume(CryptoPolicyInfo)
+
+ cpi = current_actor_context.consume(CryptoPolicyInfo)
+ assert cpi
+ assert cpi[0].current_policy == 'DEFAULT_XXX'
--
2.52.0

View File

@ -1,129 +0,0 @@
From 1886ca234e03a12ad9496de3172a5cfdd5eb16ee Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Thu, 21 Aug 2025 19:51:46 +0200
Subject: [PATCH 107/111] cryptopoliciescheck: Remove RHEL 7 early return and
update tests
---
.../actors/cryptopoliciescheck/actor.py | 5 --
.../component_test_cryptopoliciescheck.py | 58 +++++--------------
2 files changed, 16 insertions(+), 47 deletions(-)
diff --git a/repos/system_upgrade/common/actors/cryptopoliciescheck/actor.py b/repos/system_upgrade/common/actors/cryptopoliciescheck/actor.py
index e5f67644..41a90d5d 100644
--- a/repos/system_upgrade/common/actors/cryptopoliciescheck/actor.py
+++ b/repos/system_upgrade/common/actors/cryptopoliciescheck/actor.py
@@ -1,6 +1,5 @@
from leapp.actors import Actor
from leapp.libraries.actor import cryptopoliciescheck
-from leapp.libraries.common.config import version
from leapp.models import CryptoPolicyInfo, Report, TargetUserSpacePreupgradeTasks
from leapp.tags import ChecksPhaseTag, IPUWorkflowTag
@@ -22,8 +21,4 @@ class CryptoPoliciesCheck(Actor):
tags = (IPUWorkflowTag, ChecksPhaseTag,)
def process(self):
- if version.get_source_major_version() == '7':
- # there are no crypto policies in EL 7
- return
-
cryptopoliciescheck.process(self.consume(CryptoPolicyInfo))
diff --git a/repos/system_upgrade/common/actors/cryptopoliciescheck/tests/component_test_cryptopoliciescheck.py b/repos/system_upgrade/common/actors/cryptopoliciescheck/tests/component_test_cryptopoliciescheck.py
index 0eb58ef6..3ce1e1ae 100644
--- a/repos/system_upgrade/common/actors/cryptopoliciescheck/tests/component_test_cryptopoliciescheck.py
+++ b/repos/system_upgrade/common/actors/cryptopoliciescheck/tests/component_test_cryptopoliciescheck.py
@@ -1,5 +1,3 @@
-import pytest
-
from leapp.libraries.common.config import version
from leapp.models import (
CopyFile,
@@ -11,13 +9,7 @@ from leapp.models import (
)
-@pytest.mark.parametrize(('source_version'), [
- ('7'),
- ('8'),
- ('9'),
-])
-def test_actor_execution_default(monkeypatch, current_actor_context, source_version):
- monkeypatch.setattr(version, 'get_source_major_version', lambda: source_version)
+def test_actor_execution_default(current_actor_context):
current_actor_context.feed(
CryptoPolicyInfo(
current_policy="DEFAULT",
@@ -29,13 +21,7 @@ def test_actor_execution_default(monkeypatch, current_actor_context, source_vers
assert not current_actor_context.consume(TargetUserSpacePreupgradeTasks)
-@pytest.mark.parametrize(('source_version', 'should_run'), [
- ('7', False),
- ('8', True),
- ('9', True),
-])
-def test_actor_execution_legacy(monkeypatch, current_actor_context, source_version, should_run):
- monkeypatch.setattr(version, 'get_source_major_version', lambda: source_version)
+def test_actor_execution_legacy(current_actor_context):
current_actor_context.feed(
CryptoPolicyInfo(
current_policy="LEGACY",
@@ -45,24 +31,15 @@ def test_actor_execution_legacy(monkeypatch, current_actor_context, source_versi
)
current_actor_context.run()
- if should_run:
- assert current_actor_context.consume(TargetUserSpacePreupgradeTasks)
- u = current_actor_context.consume(TargetUserSpacePreupgradeTasks)[0]
- assert u.install_rpms == ['crypto-policies-scripts']
- assert u.copy_files == []
+ assert current_actor_context.consume(TargetUserSpacePreupgradeTasks)
+ u = current_actor_context.consume(TargetUserSpacePreupgradeTasks)[0]
+ assert u.install_rpms == ['crypto-policies-scripts']
+ assert u.copy_files == []
- assert current_actor_context.consume(Report)
- else:
- assert not current_actor_context.consume(TargetUserSpacePreupgradeTasks)
+ assert current_actor_context.consume(Report)
-@pytest.mark.parametrize(('source_version', 'should_run'), [
- ('7', False),
- ('8', True),
- ('9', True),
-])
-def test_actor_execution_custom(monkeypatch, current_actor_context, source_version, should_run):
- monkeypatch.setattr(version, 'get_source_major_version', lambda: source_version)
+def test_actor_execution_custom(current_actor_context):
current_actor_context.feed(
CryptoPolicyInfo(
current_policy="CUSTOM:SHA2",
@@ -76,15 +53,12 @@ def test_actor_execution_custom(monkeypatch, current_actor_context, source_versi
)
current_actor_context.run()
- if should_run:
- assert current_actor_context.consume(TargetUserSpacePreupgradeTasks)
- u = current_actor_context.consume(TargetUserSpacePreupgradeTasks)[0]
- assert u.install_rpms == ['crypto-policies-scripts']
- assert u.copy_files == [
- CopyFile(src='/etc/crypto-policies/policies/CUSTOM.pol'),
- CopyFile(src='/etc/crypto-policies/policies/modules/SHA2.pmod'),
- ]
+ assert current_actor_context.consume(TargetUserSpacePreupgradeTasks)
+ u = current_actor_context.consume(TargetUserSpacePreupgradeTasks)[0]
+ assert u.install_rpms == ['crypto-policies-scripts']
+ assert u.copy_files == [
+ CopyFile(src='/etc/crypto-policies/policies/CUSTOM.pol'),
+ CopyFile(src='/etc/crypto-policies/policies/modules/SHA2.pmod'),
+ ]
- assert current_actor_context.consume(Report)
- else:
- assert not current_actor_context.consume(TargetUserSpacePreupgradeTasks)
+ assert current_actor_context.consume(Report)
--
2.52.0

View File

@ -1,53 +0,0 @@
From e588f34744192d80d01bcda286b1013b7d7afffc Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Thu, 21 Aug 2025 19:55:32 +0200
Subject: [PATCH 108/111] checkopensslconf: Always use IBMCA provider wording
instead of IBMCA engine
On RHEL 7 it is called an engine; drop that wording, as the RHEL 7 upgrades
repo has been dropped.
---
.../checkopensslconf/libraries/checkopensslconf.py | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/repos/system_upgrade/common/actors/openssl/checkopensslconf/libraries/checkopensslconf.py b/repos/system_upgrade/common/actors/openssl/checkopensslconf/libraries/checkopensslconf.py
index 53e803b2..d005e205 100644
--- a/repos/system_upgrade/common/actors/openssl/checkopensslconf/libraries/checkopensslconf.py
+++ b/repos/system_upgrade/common/actors/openssl/checkopensslconf/libraries/checkopensslconf.py
@@ -20,29 +20,27 @@ def check_ibmca():
return
# In RHEL 9 has been introduced new technology: openssl providers. The engine
     # is deprecated, so keep proper terminology to not confuse users.
- dst_tech = 'engine' if version.get_target_major_version() == '8' else 'providers'
summary = (
'The presence of openssl-ibmca package suggests that the system may be configured'
' to use the IBMCA OpenSSL engine.'
' Due to major changes in OpenSSL and libica between RHEL {source} and RHEL {target} it is not'
' possible to migrate OpenSSL configuration files automatically. Therefore,'
- ' it is necessary to enable IBMCA {tech} in the OpenSSL config file manually'
+ ' it is necessary to enable IBMCA providers in the OpenSSL config file manually'
' after the system upgrade.'
.format(
source=version.get_source_major_version(),
target=version.get_target_major_version(),
- tech=dst_tech
)
)
hint = (
- 'Configure the IBMCA {tech} manually after the upgrade.'
+ 'Configure the IBMCA providers manually after the upgrade.'
' Please, be aware that it is not recommended to configure the system default'
' {fpath}. Instead, it is recommended to configure a copy of'
' that file and use this copy only for particular applications that are supposed'
- ' to utilize the IBMCA {tech}. The location of the OpenSSL configuration file'
+ ' to utilize the IBMCA providers. The location of the OpenSSL configuration file'
' can be specified using the OPENSSL_CONF environment variable.'
- .format(tech=dst_tech, fpath=DEFAULT_OPENSSL_CONF)
+ .format(fpath=DEFAULT_OPENSSL_CONF)
)
reporting.create_report([
--
2.52.0

View File

@ -1,58 +0,0 @@
From 2748e9920f9626973b7d1cf38c6a61445a13660a Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Thu, 21 Aug 2025 19:57:33 +0200
Subject: [PATCH 109/111] opensslconfigscanner: Drop early return on el7
---
.../opensslconfigscanner/libraries/readconf.py | 6 ------
.../tests/test_opensslconfigscanner.py | 13 ++-----------
2 files changed, 2 insertions(+), 17 deletions(-)
diff --git a/repos/system_upgrade/common/actors/openssl/opensslconfigscanner/libraries/readconf.py b/repos/system_upgrade/common/actors/openssl/opensslconfigscanner/libraries/readconf.py
index 678cc7aa..e1037033 100644
--- a/repos/system_upgrade/common/actors/openssl/opensslconfigscanner/libraries/readconf.py
+++ b/repos/system_upgrade/common/actors/openssl/opensslconfigscanner/libraries/readconf.py
@@ -1,6 +1,5 @@
import errno
-from leapp.libraries.common.config import version
from leapp.libraries.common.rpms import check_file_modification
from leapp.libraries.stdlib import api
from leapp.models import OpenSslConfig, OpenSslConfigBlock, OpenSslConfigPair
@@ -88,11 +87,6 @@ def scan_config(producer):
Parse openssl.cnf file to create OpenSslConfig message.
"""
- if version.get_source_major_version() == '7':
- # Apply this only for EL 8+ as we are not interested about this
- # on EL 7 anymore (moved from el8toel9)
- return
-
# direct access to configuration file
output = read_config()
config = parse_config(output)
diff --git a/repos/system_upgrade/common/actors/openssl/opensslconfigscanner/tests/test_opensslconfigscanner.py b/repos/system_upgrade/common/actors/openssl/opensslconfigscanner/tests/test_opensslconfigscanner.py
index 8978e133..dedc82f2 100644
--- a/repos/system_upgrade/common/actors/openssl/opensslconfigscanner/tests/test_opensslconfigscanner.py
+++ b/repos/system_upgrade/common/actors/openssl/opensslconfigscanner/tests/test_opensslconfigscanner.py
@@ -142,15 +142,6 @@ def test_produce_config():
assert cfg.blocks[2].pairs[0].value == "/etc/crypto-policies/back-ends/opensslcnf.config"
-@pytest.mark.parametrize(('source_version', 'should_run'), [
- ('7', False),
- ('8', True),
- ('9', True),
-])
-def test_actor_execution(monkeypatch, current_actor_context, source_version, should_run):
- monkeypatch.setattr(version, 'get_source_major_version', lambda: source_version)
+def test_actor_execution(current_actor_context):
current_actor_context.run()
- if should_run:
- assert current_actor_context.consume(OpenSslConfig)
- else:
- assert not current_actor_context.consume(OpenSslConfig)
+ assert current_actor_context.consume(OpenSslConfig)
--
2.52.0

View File

@ -1,163 +0,0 @@
From 93429f7b3ee08ac8763d5ed8b5cf49d30a3ecfb5 Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Tue, 16 Dec 2025 15:35:35 +0100
Subject: [PATCH 110/111] lib/overlaygen: Remove the legacy OVL solution
This solution has been kept primarily for 7->8 upgrades, which are no
longer relevant, and the solution itself is buggy, so remove it.
No docs update - the env var was not documented to begin with.
---
.../common/libraries/overlaygen.py | 125 +-----------------
1 file changed, 2 insertions(+), 123 deletions(-)
diff --git a/repos/system_upgrade/common/libraries/overlaygen.py b/repos/system_upgrade/common/libraries/overlaygen.py
index 81342557..f0d0ba1d 100644
--- a/repos/system_upgrade/common/libraries/overlaygen.py
+++ b/repos/system_upgrade/common/libraries/overlaygen.py
@@ -608,7 +608,7 @@ def create_source_overlay(
:type mounts_dir: str
:param scratch_dir: Absolute path to the directory in which all disk and OVL images are stored.
:type scratch_dir: str
- :param xfs_info: The XFSPresence message.
+ :param xfs_info: The XFSPresence message (this is currently unused, but kept for compatibility).
:type xfs_info: leapp.models.XFSPresence
:param storage_info: The StorageInfo message.
:type storage_info: leapp.models.StorageInfo
@@ -626,11 +626,7 @@ def create_source_overlay(
scratch_dir=scratch_dir, mounts_dir=mounts_dir))
try:
_create_mounts_dir(scratch_dir, mounts_dir)
- if get_env('LEAPP_OVL_LEGACY', '0') != '1':
- mounts = _prepare_required_mounts(scratch_dir, mounts_dir, storage_info, scratch_reserve)
- else:
- # fallback to the deprecated OVL solution
- mounts = _prepare_required_mounts_old(scratch_dir, mounts_dir, _get_mountpoints(storage_info), xfs_info)
+ mounts = _prepare_required_mounts(scratch_dir, mounts_dir, storage_info, scratch_reserve)
with mounts.pop('/') as root_mount:
with mounting.OverlayMount(name='system_overlay', source='/', workdir=root_mount.target) as root_overlay:
if mount_target:
@@ -643,120 +639,3 @@ def create_source_overlay(
yield overlay
finally:
cleanup_scratch(scratch_dir, mounts_dir)
-
-
-# #############################################################################
-# Deprecated OVL solution ...
-# This is going to be removed in future as the whole functionality is going to
-# be replaced by new one. The problem is that the new solution can potentially
-# negatively affect systems with many loop mountpoints, so let's keep this
-# as a workaround for now. I am separating the old and new code in this way
-# to make the future removal easy.
-# The code below is triggered when LEAPP_OVL_LEGACY=1 envar is set.
-# IMPORTANT: Before an update of functions above, ensure the functionality of
-# the code below is not affected, otherwise copy the function below with the
-# "_old" suffix.
-# #############################################################################
-def _ensure_enough_diskimage_space_old(space_needed, directory):
- stat = os.statvfs(directory)
- if (stat.f_frsize * stat.f_bavail) < (space_needed * 1024 * 1024):
- message = ('Not enough space available for creating required disk images in {directory}. ' +
- 'Needed: {space_needed} MiB').format(space_needed=space_needed, directory=directory)
- api.current_logger().error(message)
- raise StopActorExecutionError(message)
-
-
-def _overlay_disk_size_old():
- """
- Convenient function to retrieve the overlay disk size
- """
- env_size = get_env('LEAPP_OVL_SIZE', '2048')
- try:
- disk_size = int(env_size)
- except ValueError:
- disk_size = 2048
- api.current_logger().warning(
- 'Invalid "LEAPP_OVL_SIZE" environment variable "%s". Setting default "%d" value', env_size, disk_size
- )
- return disk_size
-
-
-def _create_diskimages_dir_old(scratch_dir, diskimages_dir):
- """
- Prepares directories for disk images
- """
- api.current_logger().debug('Creating disk images directory.')
- try:
- utils.makedirs(diskimages_dir)
- api.current_logger().debug('Done creating disk images directory.')
- except OSError:
- api.current_logger().error('Failed to create disk images directory %s', diskimages_dir, exc_info=True)
-
- # This is an attempt for giving the user a chance to resolve it on their own
- raise StopActorExecutionError(
- message='Failed to prepare environment for package download while creating directories.',
- details={
- 'hint': 'Please ensure that {scratch_dir} is empty and modifiable.'.format(scratch_dir=scratch_dir)
- }
- )
-
-
-def _create_mount_disk_image_old(disk_images_directory, path):
- """
- Creates the mount disk image, for cases when we hit XFS with ftype=0
- """
- diskimage_path = os.path.join(disk_images_directory, _mount_name(path))
- disk_size = _overlay_disk_size_old()
-
- api.current_logger().debug('Attempting to create disk image with size %d MiB at %s', disk_size, diskimage_path)
- utils.call_with_failure_hint(
- cmd=['/bin/dd', 'if=/dev/zero', 'of={}'.format(diskimage_path), 'bs=1M', 'count={}'.format(disk_size)],
- hint='Please ensure that there is enough diskspace in {} at least {} MiB are needed'.format(
- diskimage_path, disk_size)
- )
-
- api.current_logger().debug('Creating ext4 filesystem in disk image at %s', diskimage_path)
- try:
- utils.call_with_oserror_handled(cmd=['/sbin/mkfs.ext4', '-F', diskimage_path])
- except CalledProcessError as e:
- api.current_logger().error('Failed to create ext4 filesystem in %s', diskimage_path, exc_info=True)
- raise StopActorExecutionError(
- message=str(e)
- )
-
- return diskimage_path
-
-
-def _prepare_required_mounts_old(scratch_dir, mounts_dir, mount_points, xfs_info):
- result = {
- mount_point.fs_file: mounting.NullMount(
- _mount_dir(mounts_dir, mount_point.fs_file)) for mount_point in mount_points
- }
-
- if not xfs_info.mountpoints_without_ftype:
- return result
-
- space_needed = _overlay_disk_size_old() * len(xfs_info.mountpoints_without_ftype)
- disk_images_directory = os.path.join(scratch_dir, 'diskimages')
-
- # Ensure we cleanup old disk images before we check for space constraints.
- run(['rm', '-rf', disk_images_directory])
- _create_diskimages_dir_old(scratch_dir, disk_images_directory)
- _ensure_enough_diskimage_space_old(space_needed, scratch_dir)
-
- mount_names = [mount_point.fs_file for mount_point in mount_points]
-
- # TODO(pstodulk): this (adding rootfs into the set always) is hotfix for
- # bz #1911802 (not ideal one..). The problem occurs one rootfs is ext4 fs,
- # but /var/lib/leapp/... is under XFS without ftype; In such a case we can
- # see still the very same problems as before. But letting you know that
- # probably this is not the final solution, as we could possibly see the
- # same problems on another partitions too (needs to be tested...). However,
- # it could fit for now until we provide the complete solution around XFS
- # workarounds (including management of required spaces for virtual FSs per
- # mountpoints - without that, we cannot fix this properly)
- for mountpoint in set(xfs_info.mountpoints_without_ftype + ['/']):
- if mountpoint in mount_names:
- image = _create_mount_disk_image_old(disk_images_directory, mountpoint)
- result[mountpoint] = mounting.LoopMount(source=image, target=_mount_dir(mounts_dir, mountpoint))
- return result
--
2.52.0

View File

@ -1,548 +0,0 @@
From 284494d7cb03f0aae71ec0065b494805fe33ea7b Mon Sep 17 00:00:00 2001
From: Leapp BOT <37839841+leapp-bot@users.noreply.github.com>
Date: Wed, 17 Dec 2025 14:09:32 +0100
Subject: [PATCH 111/111] data: update data files - pes_evets.json (#1451)
Regular update of PES data, containing some fixes of the original data as well as new events.
---
etc/leapp/files/pes-events.json | 491 +++++++++++++++++++++++++++++---
1 file changed, 451 insertions(+), 40 deletions(-)
diff --git a/etc/leapp/files/pes-events.json b/etc/leapp/files/pes-events.json
index fec9a900..964b7117 100644
--- a/etc/leapp/files/pes-events.json
+++ b/etc/leapp/files/pes-events.json
@@ -1,5 +1,5 @@
{
-"timestamp": "202511121106Z",
+"timestamp": "202512021706Z",
"provided_data_streams": [
"4.1"
],
@@ -250419,9 +250419,6 @@ null
{
"action": 6,
"architectures": [
-"aarch64",
-"ppc64le",
-"s390x",
"x86_64"
],
"id": 6802,
@@ -315101,7 +315098,7 @@ null
},
"initial_release": {
"major_version": 8,
-"minor_version": 5,
+"minor_version": 10,
"os_name": "RHEL"
},
"modulestream_maps": [
@@ -315124,7 +315121,7 @@ null
null
],
"name": "openexr-devel",
-"repository": "rhel9-AppStream"
+"repository": "rhel9-CRB"
}
],
"set_id": 12502
@@ -583248,40 +583245,6 @@ null
"s390x",
"x86_64"
],
-"id": 16179,
-"in_packageset": {
-"package": [
-{
-"modulestreams": [
-null
-],
-"name": "openexr-devel",
-"repository": "rhel9-CRB"
-}
-],
-"set_id": 22334
-},
-"initial_release": {
-"major_version": 8,
-"minor_version": 10,
-"os_name": "RHEL"
-},
-"modulestream_maps": [],
-"out_packageset": null,
-"release": {
-"major_version": 9,
-"minor_version": 0,
-"os_name": "RHEL"
-}
-},
-{
-"action": 0,
-"architectures": [
-"aarch64",
-"ppc64le",
-"s390x",
-"x86_64"
-],
"id": 16180,
"in_packageset": {
"package": [
@@ -708770,6 +708733,454 @@ null
"minor_version": 2,
"os_name": "RHEL"
}
+},
+{
+"action": 0,
+"architectures": [
+"aarch64",
+"ppc64le",
+"s390x",
+"x86_64"
+],
+"id": 19909,
+"in_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "rhc-playbook-verifier",
+"repository": "rhel10-AppStream"
+}
+],
+"set_id": 26586
+},
+"initial_release": {
+"major_version": 10,
+"minor_version": 1,
+"os_name": "RHEL"
+},
+"modulestream_maps": [],
+"out_packageset": null,
+"release": {
+"major_version": 10,
+"minor_version": 2,
+"os_name": "RHEL"
+}
+},
+{
+"action": 0,
+"architectures": [
+"aarch64",
+"ppc64le",
+"s390x",
+"x86_64"
+],
+"id": 19910,
+"in_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "rhc-playbook-verifier",
+"repository": "rhel9-AppStream"
+}
+],
+"set_id": 26587
+},
+"initial_release": {
+"major_version": 9,
+"minor_version": 7,
+"os_name": "RHEL"
+},
+"modulestream_maps": [],
+"out_packageset": null,
+"release": {
+"major_version": 9,
+"minor_version": 8,
+"os_name": "RHEL"
+}
+},
+{
+"action": 0,
+"architectures": [
+"aarch64",
+"ppc64le",
+"s390x",
+"x86_64"
+],
+"id": 19911,
+"in_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "python3-zstandard",
+"repository": "rhel10-AppStream"
+}
+],
+"set_id": 26588
+},
+"initial_release": {
+"major_version": 10,
+"minor_version": 1,
+"os_name": "RHEL"
+},
+"modulestream_maps": [],
+"out_packageset": null,
+"release": {
+"major_version": 10,
+"minor_version": 2,
+"os_name": "RHEL"
+}
+},
+{
+"action": 0,
+"architectures": [
+"aarch64",
+"ppc64le",
+"s390x",
+"x86_64"
+],
+"id": 19912,
+"in_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "unbound-utils",
+"repository": "rhel10-BaseOS"
+}
+],
+"set_id": 26589
+},
+"initial_release": {
+"major_version": 10,
+"minor_version": 1,
+"os_name": "RHEL"
+},
+"modulestream_maps": [],
+"out_packageset": null,
+"release": {
+"major_version": 10,
+"minor_version": 2,
+"os_name": "RHEL"
+}
+},
+{
+"action": 5,
+"architectures": [
+"x86_64"
+],
+"id": 19913,
+"in_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "kernel-rt",
+"repository": "rhel10-NFV"
+},
+{
+"modulestreams": [
+null
+],
+"name": "kernel-rt-kvm",
+"repository": "rhel10-NFV"
+}
+],
+"set_id": 26593
+},
+"initial_release": {
+"major_version": 10,
+"minor_version": 0,
+"os_name": "RHEL"
+},
+"modulestream_maps": [
+{
+"in_modulestream": null,
+"out_modulestream": null
+}
+],
+"out_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "kernel-rt",
+"repository": "rhel10-NFV"
+}
+],
+"set_id": 26600
+},
+"release": {
+"major_version": 10,
+"minor_version": 1,
+"os_name": "RHEL"
+}
+},
+{
+"action": 5,
+"architectures": [
+"x86_64"
+],
+"id": 19915,
+"in_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "kernel-rt",
+"repository": "rhel9-NFV"
+},
+{
+"modulestreams": [
+null
+],
+"name": "kernel-rt-kvm",
+"repository": "rhel9-NFV"
+}
+],
+"set_id": 26595
+},
+"initial_release": {
+"major_version": 9,
+"minor_version": 6,
+"os_name": "RHEL"
+},
+"modulestream_maps": [
+{
+"in_modulestream": null,
+"out_modulestream": null
+}
+],
+"out_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "kernel-rt",
+"repository": "rhel9-NFV"
+}
+],
+"set_id": 26601
+},
+"release": {
+"major_version": 9,
+"minor_version": 7,
+"os_name": "RHEL"
+}
+},
+{
+"action": 0,
+"architectures": [
+"aarch64",
+"ppc64le",
+"s390x",
+"x86_64"
+],
+"id": 19916,
+"in_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "gnome-autoar-devel",
+"repository": "rhel8-CRB"
+}
+],
+"set_id": 26596
+},
+"initial_release": {
+"major_version": 8,
+"minor_version": 9,
+"os_name": "RHEL"
+},
+"modulestream_maps": [],
+"out_packageset": null,
+"release": {
+"major_version": 8,
+"minor_version": 10,
+"os_name": "RHEL"
+}
+},
+{
+"action": 0,
+"architectures": [
+"aarch64",
+"ppc64le",
+"s390x",
+"x86_64"
+],
+"id": 19917,
+"in_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "rest-devel",
+"repository": "rhel8-CRB"
+}
+],
+"set_id": 26597
+},
+"initial_release": {
+"major_version": 8,
+"minor_version": 9,
+"os_name": "RHEL"
+},
+"modulestream_maps": [],
+"out_packageset": null,
+"release": {
+"major_version": 8,
+"minor_version": 10,
+"os_name": "RHEL"
+}
+},
+{
+"action": 0,
+"architectures": [
+"aarch64",
+"ppc64le",
+"s390x",
+"x86_64"
+],
+"id": 19918,
+"in_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "plymouth-devel",
+"repository": "rhel9-AppStream"
+}
+],
+"set_id": 26598
+},
+"initial_release": {
+"major_version": 9,
+"minor_version": 7,
+"os_name": "RHEL"
+},
+"modulestream_maps": [],
+"out_packageset": null,
+"release": {
+"major_version": 9,
+"minor_version": 8,
+"os_name": "RHEL"
+}
+},
+{
+"action": 0,
+"architectures": [
+"aarch64",
+"ppc64le",
+"s390x",
+"x86_64"
+],
+"id": 19919,
+"in_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "plymouth-devel",
+"repository": "rhel8-AppStream"
+}
+],
+"set_id": 26599
+},
+"initial_release": {
+"major_version": 8,
+"minor_version": 9,
+"os_name": "RHEL"
+},
+"modulestream_maps": [],
+"out_packageset": null,
+"release": {
+"major_version": 8,
+"minor_version": 10,
+"os_name": "RHEL"
+}
+},
+{
+"action": 0,
+"architectures": [
+"aarch64",
+"ppc64le",
+"s390x",
+"x86_64"
+],
+"id": 19920,
+"in_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "libdmx-devel",
+"repository": "rhel8-CRB"
+}
+],
+"set_id": 26602
+},
+"initial_release": {
+"major_version": 8,
+"minor_version": 9,
+"os_name": "RHEL"
+},
+"modulestream_maps": [],
+"out_packageset": null,
+"release": {
+"major_version": 8,
+"minor_version": 10,
+"os_name": "RHEL"
+}
+},
+{
+"action": 0,
+"architectures": [
+"aarch64",
+"ppc64le",
+"s390x",
+"x86_64"
+],
+"id": 19921,
+"in_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "libdmx-devel",
+"repository": "rhel9-CRB"
+}
+],
+"set_id": 26603
+},
+"initial_release": {
+"major_version": 9,
+"minor_version": 7,
+"os_name": "RHEL"
+},
+"modulestream_maps": [],
+"out_packageset": null,
+"release": {
+"major_version": 9,
+"minor_version": 8,
+"os_name": "RHEL"
+}
}
]
}
--
2.52.0
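
The hunk above is raw JSON, but the shape of a PES event is visible in it: an action code, an architecture list, an in_packageset (and optionally an out_packageset) of package/repository pairs, plus the initial_release and release the event belongs to. A small illustrative reader for that structure, using one event copied from the diff; this is a sketch for orientation, not leapp's own PES parsing, and the meaning of the numeric action codes is not restated here:

import json

def summarize_events(events):
    """Print a one-line summary per event; `events` is a list shaped like the hunk above."""
    for ev in events:
        pkgs_in = [p['name'] for p in ev['in_packageset']['package']]
        pkgs_out = ([p['name'] for p in ev['out_packageset']['package']]
                    if ev.get('out_packageset') else [])
        rel = ev['release']
        print('action={0}: {1} -> {2} (RHEL {3}.{4})'.format(
            ev['action'], pkgs_in, pkgs_out, rel['major_version'], rel['minor_version']))

# One event copied from the diff: plymouth-devel in rhel9-AppStream, action 0,
# no out_packageset, added with the RHEL 9.8 release data.
sample = json.loads('''
{
  "action": 0,
  "architectures": ["aarch64", "ppc64le", "s390x", "x86_64"],
  "id": 19918,
  "in_packageset": {"package": [{"modulestreams": [null],
                                 "name": "plymouth-devel",
                                 "repository": "rhel9-AppStream"}],
                    "set_id": 26598},
  "initial_release": {"major_version": 9, "minor_version": 7, "os_name": "RHEL"},
  "modulestream_maps": [],
  "out_packageset": null,
  "release": {"major_version": 9, "minor_version": 8, "os_name": "RHEL"}
}
''')
summarize_events([sample])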

File diff suppressed because it is too large

View File

@ -50,9 +50,10 @@ py2_byte_compile "%1" "%2"}
# to create such an rpm. Instead, we are going to introduce new naming for
# RHEL 8+ packages to be consistent with other leapp projects in future.
Epoch: 1
Name: leapp-repository
Version: 0.23.0
Release: 3%{?dist}
Release: 2%{?dist}.elevate.1
Summary: Repositories for leapp
License: ASL 2.0
@ -134,50 +135,7 @@ Patch0066: 0066-docs-Document-LEAPP_DEVEL_TARGET_OS-envar.patch
Patch0067: 0067-fix-unsupported-source-version-crash.patch
Patch0068: 0068-define-fallback-entries-in-the-upgrade-paths-map-for.patch
Patch0069: 0069-Add-basic-CS8-repomapping.patch
# CTC 2 - 1
Patch0070: 0070-chore-deps-update-peter-evans-create-or-update-comme.patch
Patch0071: 0071-fix-cgroups-v1-inhibitor-remediation.patch
Patch0072: 0072-Update-rhel-gpg-signatures-map.patch
Patch0073: 0073-Update-centos-gpg-signatures-map.patch
Patch0074: 0074-gpg-almalinux-Remove-Eurolinux-Tuxcare-GPG-key.patch
Patch0075: 0075-removeobsoletegpgkeys-Adjust-for-converting.patch
Patch0076: 0076-docs-Add-doc-about-community-upgrades.patch
Patch0077: 0077-Point-the-test-pipelines-to-the-next-branch-1461.patch
Patch0078: 0078-boot-fix-deps-when-bindmounting-boot-to-sysroot-boot.patch
Patch0079: 0079-handle-multipath-devices-in-upgrade-initramfs.patch
Patch0080: 0080-Fix-remediation-command-to-wrap-it-with-quotes.patch
Patch0081: 0081-Add-upstream-doc-about-running-single-actor.patch
Patch0082: 0082-add-handling-for-LVM-configuration.patch
Patch0083: 0083-multipath-do-not-crash-when-there-is-no-multipath.co.patch
Patch0084: 0084-Replace-distro-specific-packages-during-conversion.patch
Patch0085: 0085-Enable-CentOS-Stream-test-pipelines.patch
Patch0086: 0086-docs-Fix-search-not-working.patch
Patch0087: 0087-Handle-invalid-values-for-case-sensitive-SSH-options.patch
Patch0088: 0088-pes_events_scanner-Also-remove-RHEL-9-events-in-remo.patch
Patch0089: 0089-lib-overlaygen-Fix-possibly-unbound-var.patch
Patch0090: 0090-lib-rhui-Remove-RHEL-7-RHUI-setups.patch
Patch0091: 0091-lib-rhui-Remove-deprecated-code-and-setups-map.patch
Patch0092: 0092-lib-gpg-Remove-RHEL-7-workarounds.patch
Patch0093: 0093-lib-rpms-Update-tests-for-9-10.patch
Patch0094: 0094-lib-module-Remove-7-8-releasever-workaround.patch
Patch0095: 0095-lib-dnfplugin-Remove-RHEL-7-bind-mount-code-path.patch
Patch0096: 0096-lib-mounting-Remove-RHEL-7-nspawn-options.patch
Patch0097: 0097-lib-version-Remove-RHEL-7-from-supported-version.patch
Patch0098: 0098-checkfips-Drop-RHEL-7-inhibitor-and-update-tests.patch
Patch0099: 0099-scangrubconfig-Comment-out-RHEL-7-config-error-detec.patch
Patch0100: 0100-checkipaserver-Remove-RHEL-7-article-link.patch
Patch0101: 0101-checkluks-Remove-RHEL-7-inhibitor-and-related-code.patch
Patch0102: 0102-scankernel-Remove-RHEL-7-kernel-names.patch
Patch0103: 0103-repomap-lib-Drop-RHEL-7-default-PESID.patch
Patch0104: 0104-scanpkgmanager-Remove-yum-related-code.patch
Patch0105: 0105-checkyumpluginsenabled-Drop-yum-related-code.patch
Patch0106: 0106-scancryptopolicies-Remove-RHEL-7-early-return.patch
Patch0107: 0107-cryptopoliciescheck-Remove-RHEL-7-early-return-and-u.patch
Patch0108: 0108-checkopensslconf-Always-use-IBMCA-provider-wording-i.patch
Patch0109: 0109-opensslconfigscanner-Drop-early-return-on-el7.patch
Patch0110: 0110-lib-overlaygen-Remove-the-legacy-OVL-solution.patch
Patch0111: 0111-data-update-data-files-pes_evets.json-1451.patch
Patch0100: leapp-repository-0.23.0-elevate.patch
%description
%{summary}
@ -427,48 +385,7 @@ Requires: fapolicyd
%patch -P 0067 -p1
%patch -P 0068 -p1
%patch -P 0069 -p1
%patch -P 0070 -p1
%patch -P 0071 -p1
%patch -P 0072 -p1
%patch -P 0073 -p1
%patch -P 0074 -p1
%patch -P 0075 -p1
%patch -P 0076 -p1
%patch -P 0077 -p1
%patch -P 0078 -p1
%patch -P 0079 -p1
%patch -P 0080 -p1
%patch -P 0081 -p1
%patch -P 0082 -p1
%patch -P 0083 -p1
%patch -P 0084 -p1
%patch -P 0085 -p1
%patch -P 0086 -p1
%patch -P 0087 -p1
%patch -P 0088 -p1
%patch -P 0089 -p1
%patch -P 0090 -p1
%patch -P 0091 -p1
%patch -P 0092 -p1
%patch -P 0093 -p1
%patch -P 0094 -p1
%patch -P 0095 -p1
%patch -P 0096 -p1
%patch -P 0097 -p1
%patch -P 0098 -p1
%patch -P 0099 -p1
%patch -P 0100 -p1
%patch -P 0101 -p1
%patch -P 0102 -p1
%patch -P 0103 -p1
%patch -P 0104 -p1
%patch -P 0105 -p1
%patch -P 0106 -p1
%patch -P 0107 -p1
%patch -P 0108 -p1
%patch -P 0109 -p1
%patch -P 0110 -p1
%patch -P 0111 -p1
%build
cp -a leapp*deps*el%{next_major_ver}.noarch.rpm repos/system_upgrade/%{repo_shortname}/files/bundled-rpms/
@ -549,9 +466,13 @@ fi
%config %{_sysconfdir}/leapp/files/*
# uncomment to package installed configs
#%%config %%{_sysconfdir}/leapp/actor_conf.d/*
%exclude %{_sysconfdir}/leapp/files/device_driver_deprecation_data.json
%exclude %{_sysconfdir}/leapp/files/pes-events.json
%exclude %{_sysconfdir}/leapp/files/repomap.json
%{_sysconfdir}/leapp/repos.d/*
%{_sysconfdir}/leapp/transaction/*
%{repositorydir}/*
%exclude %{repositorydir}/system_upgrade/common/files/distro/centos/rpm-gpg/*
%{leapp_python_sitelib}/leapp/cli/commands/*
@ -564,15 +485,14 @@ fi
%changelog
* Wed Dec 17 2025 Matej Matuska <mmatuska@redhat.com> - 0.23.0-3
- Fix handling of LVM and Multipath during the upgrade
- Fix remediation command for making symlinks in root directory relative
- Remove RPM GPG keys of the source distribution when upgrading and converting the system
- Replace distro specific packages during upgrade and conversion
- Improve error message when scanning invalid SSHD configuration
- Update the leapp data files
- Minor changes in logs and reports
- Resolves: RHEL-14712, RHEL-30447, RHEL-19247, RHEL-127066, RHEL-124923, RHEL-119516
* Mon Dec 01 2025 Yuriy Kohut <ykohut@almalinux.org> - 0.23.0-2.elevate.1
- ELevate vendors support for upstream 0.23.0-2 version (eabab8c496a7d6a76ff1aa0d7e34b0345530e30a)
* Fri Nov 28 2025 Yuriy Kohut <ykohut@almalinux.org> - 0.23.0-1.elevate.5
- ELevate vendors support for upstream 0.23.0-1 version (eabab8c496a7d6a76ff1aa0d7e34b0345530e30a)
* Thu Nov 14 2025 Yuriy Kohut <ykohut@almalinux.org> - 0.23.0-1.elevate.4
- ELevate vendors support for upstream 0.23.0-1 version (249cd3b203d05937a4d4a02b484444291f4aed85)
* Thu Nov 13 2025 Karolina Kula <kkula@redhat.com> - 0.23.0-2
- Requires leapp-framework 6.2+
@ -592,6 +512,15 @@ fi
- Update the leapp data files
- Resolves: RHEL-25838, RHEL-35446, RHEL-69601, RHEL-71882, RHEL-90098, RHEL-120328, RHEL-123886
* Thu Nov 13 2025 Yuriy Kohut <ykohut@almalinux.org> - 0.23.0-1.elevate.3
- ELevate vendors support for upstream 0.23.0-1 version (b7f862249e2227d2c5f3f6e33d74f8d2a2367a11)
* Tue Sep 30 2025 Yuriy Kohut <ykohut@almalinux.org> - 0.23.0-1.elevate.2
- ELevate vendors support for upstream 0.23.0-1 version (47fce173e75408d9a7a26225d389161caf72e244)
* Fri Sep 12 2025 Yuriy Kohut <ykohut@almalinux.org> - 0.23.0-1.elevate.1
- ELevate vendors support for upstream 0.23.0-1 version (dcf53c28ea9c3fdd03277abcdeb1d124660f7f8e)
* Thu Aug 14 2025 Karolina Kula <kkula@redhat.com> - 0.23.0-1
- Rebase to new upstream 0.23.0
- Enable in-place upgrades on CentOS Stream systems
@ -1427,4 +1356,3 @@ fi
* Wed Nov 07 2018 Petr Stodulka <pstodulk@redhat.com> - 0.3-1
- Initial RPM
Resolves: #1636481