Rebase to new upstream v0.20.0.

Fix semanage import issue
Fix handling of libvirt's systemd services
Add a dracut breakpoint for the pre-upgrade step.
Drop obsoleted upgrade paths (obsoleted releases: 8.6, 8.9, 9.0, 9.3)
Resolves: RHEL-16729
Toshio Kuratomi 2024-02-13 09:55:17 -08:00
parent 14fa763399
commit d64f54a92c
72 changed files with 12 additions and 292106 deletions

.gitignore vendored

@@ -1,6 +1,5 @@
SOURCES/deps-pkgs-9.tar.gz
SOURCES/leapp-repository-0.18.0.tar.gz
/deps-pkgs-9.tar.gz
/leapp-repository-0.18.0.tar.gz
/leapp-repository-0.19.0.tar.gz
/deps-pkgs-10.tar.gz
/leapp-repository-0.20.0.tar.gz


@@ -1,57 +0,0 @@
From 0f4212f989ad5907091651c6c1c179240c21f4cb Mon Sep 17 00:00:00 2001
From: Inessa Vasilevskaya <ivasilev@redhat.com>
Date: Thu, 10 Aug 2023 14:01:32 +0200
Subject: [PATCH 01/38] Further narrow down packit tests
- Get rid of the sad uefi_upgrade test for now;
- Reduce the time-consuming partitioning tests to 3.
On demand, /rerun command-scheduled tests will still be running
the full destructive test set (no max_sst though).
---
.packit.yaml | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/.packit.yaml b/.packit.yaml
index b7b4c0be..50a50747 100644
--- a/.packit.yaml
+++ b/.packit.yaml
@@ -94,7 +94,7 @@ jobs:
epel-7-x86_64:
distros: [RHEL-7.9-ZStream]
identifier: tests-7.9to8.6
- tmt_plan: "^(?!.*max_sst)(.*tier1)"
+ tmt_plan: "((?!.*uefi_upgrade)(?!.*max_sst)(?!.*partitioning)(.*tier1)|(?!.*uefi_upgrade)(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog))"
tf_extra_params:
environments:
- tmt:
@@ -120,7 +120,7 @@ jobs:
epel-7-x86_64:
distros: [RHEL-7.9-ZStream]
identifier: tests-7.9to8.8
- tmt_plan: "^(?!.*max_sst)(.*tier1)"
+ tmt_plan: "((?!.*uefi_upgrade)(?!.*max_sst)(?!.*partitioning)(.*tier1)|(?!.*uefi_upgrade)(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog))"
tf_extra_params:
environments:
- tmt:
@@ -193,7 +193,7 @@ jobs:
epel-8-x86_64:
distros: [RHEL-8.6.0-Nightly]
identifier: tests-8.6to9.0
- tmt_plan: "^(?!.*max_sst)(.*tier1)"
+ tmt_plan: "((?!.*uefi_upgrade)(?!.*max_sst)(?!.*partitioning)(.*tier1)|(?!.*uefi_upgrade)(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog))"
tf_extra_params:
environments:
- tmt:
@@ -220,7 +220,7 @@ jobs:
epel-8-x86_64:
distros: [RHEL-8.8.0-Nightly]
identifier: tests-8.8to9.2
- tmt_plan: "^(?!.*max_sst)(.*tier1)"
+ tmt_plan: "((?!.*uefi_upgrade)(?!.*max_sst)(?!.*partitioning)(.*tier1)|(?!.*uefi_upgrade)(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog))"
tf_extra_params:
environments:
- tmt:
--
2.41.0
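
The tmt_plan value changed by the patch above is a single regular expression matched against tmt plan names (exactly how Testing Farm anchors the match is up to it). As an illustration only, not part of the packaging sources, the following minimal Python sketch with made-up plan names shows which names such a filter keeps: the negative lookaheads drop uefi_upgrade, max_sst and generic partitioning plans, while the second alternative re-admits the two specific partitioning plans that should still run.

import re

# The tier1 filter introduced by the patch above, split only for readability.
PLAN_FILTER = re.compile(
    r"((?!.*uefi_upgrade)(?!.*max_sst)(?!.*partitioning)(.*tier1)"
    r"|(?!.*uefi_upgrade)(?!.*max_sst)(.*tier1)"
    r"(.*partitioning_monolithic|.*separate_var_usr_varlog))"
)

# Hypothetical plan names, used only to illustrate the filter's behaviour.
plans = [
    "/plans/upgrade_tier1",                          # kept: plain tier1
    "/plans/upgrade_tier1_partitioning_monolithic",  # kept: explicitly re-admitted
    "/plans/upgrade_tier1_partitioning_lvm",         # dropped: generic partitioning
    "/plans/upgrade_tier1_uefi_upgrade",             # dropped: uefi_upgrade
    "/plans/upgrade_tier1_max_sst",                  # dropped: max_sst
]

for plan in plans:
    print(plan, "->", "selected" if PLAN_FILTER.match(plan) else "skipped")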


@@ -1,54 +0,0 @@
From 9890df46356bb28a941bc5659b16f890918c8b4f Mon Sep 17 00:00:00 2001
From: Inessa Vasilevskaya <ivasilev@redhat.com>
Date: Fri, 11 Aug 2023 10:49:41 +0200
Subject: [PATCH 02/38] Bring back uefi_test
A fix for the issue that has been causing this test to fail
was merged, so let's bring back that test.
---
.packit.yaml | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/.packit.yaml b/.packit.yaml
index 50a50747..820d2151 100644
--- a/.packit.yaml
+++ b/.packit.yaml
@@ -94,7 +94,7 @@ jobs:
epel-7-x86_64:
distros: [RHEL-7.9-ZStream]
identifier: tests-7.9to8.6
- tmt_plan: "((?!.*uefi_upgrade)(?!.*max_sst)(?!.*partitioning)(.*tier1)|(?!.*uefi_upgrade)(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog))"
+ tmt_plan: "((?!.*max_sst)(?!.*partitioning)(.*tier1)|(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog))"
tf_extra_params:
environments:
- tmt:
@@ -120,7 +120,7 @@ jobs:
epel-7-x86_64:
distros: [RHEL-7.9-ZStream]
identifier: tests-7.9to8.8
- tmt_plan: "((?!.*uefi_upgrade)(?!.*max_sst)(?!.*partitioning)(.*tier1)|(?!.*uefi_upgrade)(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog))"
+ tmt_plan: "((?!.*max_sst)(?!.*partitioning)(.*tier1)|(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog))"
tf_extra_params:
environments:
- tmt:
@@ -193,7 +193,7 @@ jobs:
epel-8-x86_64:
distros: [RHEL-8.6.0-Nightly]
identifier: tests-8.6to9.0
- tmt_plan: "((?!.*uefi_upgrade)(?!.*max_sst)(?!.*partitioning)(.*tier1)|(?!.*uefi_upgrade)(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog))"
+ tmt_plan: "((?!.*max_sst)(?!.*partitioning)(.*tier1)|(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog))"
tf_extra_params:
environments:
- tmt:
@@ -220,7 +220,7 @@ jobs:
epel-8-x86_64:
distros: [RHEL-8.8.0-Nightly]
identifier: tests-8.8to9.2
- tmt_plan: "((?!.*uefi_upgrade)(?!.*max_sst)(?!.*partitioning)(.*tier1)|(?!.*uefi_upgrade)(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog))"
+ tmt_plan: "((?!.*max_sst)(?!.*partitioning)(.*tier1)|(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog))"
tf_extra_params:
environments:
- tmt:
--
2.41.0


@@ -1,125 +0,0 @@
From ecffc19fd75ea3caa9d36b8ce311bcf5a36aa998 Mon Sep 17 00:00:00 2001
From: Inessa Vasilevskaya <ivasilev@redhat.com>
Date: Fri, 11 Aug 2023 12:38:33 +0200
Subject: [PATCH 03/38] Add 7.9->8.9 and 8.9->9.3 upgrade paths
Also, let's get rid of the commented-out max_sst tests; there
is no way packit lets us support customized runs in the
near future, and keeping dead commented-out code is not cool.
---
.packit.yaml | 92 +++++++++++++++++++++++++++++-----------------------
1 file changed, 52 insertions(+), 40 deletions(-)
diff --git a/.packit.yaml b/.packit.yaml
index 820d2151..9c30e0c8 100644
--- a/.packit.yaml
+++ b/.packit.yaml
@@ -137,25 +137,31 @@ jobs:
TARGET_RELEASE: "8.8"
LEAPPDATA_BRANCH: "upstream"
-# - job: tests
-# fmf_url: "https://gitlab.cee.redhat.com/oamg/leapp-tests"
-# fmf_ref: "master"
-# use_internal_tf: True
-# trigger: pull_request
-# targets:
-# epel-7-x86_64:
-# distros: [RHEL-7.9-ZStream]
-# identifier: tests-7.9to8.8-sst
-# tmt_plan: "^(?!.*tier[2-3].*)(.*max_sst.*)"
-# tf_post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
-# tf_extra_params:
-# environments:
-# - tmt:
-# context:
-# distro: "rhel-7.9"
-# env:
-# SOURCE_RELEASE: "7.9"
-# TARGET_RELEASE: "8.8"
+- job: tests
+ fmf_url: "https://gitlab.cee.redhat.com/oamg/leapp-tests"
+ fmf_ref: "master"
+ use_internal_tf: True
+ trigger: pull_request
+ targets:
+ epel-7-x86_64:
+ distros: [RHEL-7.9-ZStream]
+ identifier: tests-7.9to8.9
+ tmt_plan: "((?!.*max_sst)(?!.*partitioning)(.*tier1)|(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog))"
+ tf_extra_params:
+ environments:
+ - tmt:
+ context:
+ distro: "rhel-7.9"
+ # tag resources as sst_upgrades to enable cost metrics collection
+ settings:
+ provisioning:
+ post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
+ tags:
+ BusinessUnit: sst_upgrades
+ env:
+ SOURCE_RELEASE: "7.9"
+ TARGET_RELEASE: "8.9"
+ LEAPPDATA_BRANCH: "upstream"
- job: tests
fmf_url: "https://gitlab.cee.redhat.com/oamg/leapp-tests"
@@ -239,27 +245,33 @@ jobs:
LEAPPDATA_BRANCH: "upstream"
LEAPP_DEVEL_TARGET_RELEASE: "9.2"
-# - job: tests
-# fmf_url: "https://gitlab.cee.redhat.com/oamg/leapp-tests"
-# fmf_ref: "master"
-# use_internal_tf: True
-# trigger: pull_request
-# targets:
-# epel-8-x86_64:
-# distros: [RHEL-8.6.0-Nightly]
-# identifier: tests-8.6to9.0-sst
-# tmt_plan: "^(?!.*tier[2-3].*)(.*max_sst.*)"
-# tf_post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
-# tf_extra_params:
-# environments:
-# - tmt:
-# context:
-# distro: "rhel-8.6"
-# env:
-# SOURCE_RELEASE: "8.6"
-# TARGET_RELEASE: "9.0"
-# RHSM_REPOS: "rhel-8-for-x86_64-appstream-eus-rpms,rhel-8-for-x86_64-baseos-eus-rpms"
-# LEAPPDATA_BRANCH: "upstream"
+- job: tests
+ fmf_url: "https://gitlab.cee.redhat.com/oamg/leapp-tests"
+ fmf_ref: "master"
+ use_internal_tf: True
+ trigger: pull_request
+ targets:
+ epel-8-x86_64:
+ distros: [RHEL-8.9.0-Nightly]
+ identifier: tests-8.9to9.3
+ tmt_plan: "((?!.*max_sst)(?!.*partitioning)(.*tier1)|(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog))"
+ tf_extra_params:
+ environments:
+ - tmt:
+ context:
+ distro: "rhel-8.9"
+ # tag resources as sst_upgrades to enable cost metrics collection
+ settings:
+ provisioning:
+ post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
+ tags:
+ BusinessUnit: sst_upgrades
+ env:
+ SOURCE_RELEASE: "8.9"
+ TARGET_RELEASE: "9.3"
+ RHSM_REPOS: "rhel-8-for-x86_64-appstream-beta-rpms,rhel-8-for-x86_64-baseos-beta-rpms"
+ LEAPPDATA_BRANCH: "upstream"
+ LEAPP_DEVEL_TARGET_RELEASE: "9.3"
- job: tests
fmf_url: "https://gitlab.cee.redhat.com/oamg/leapp-tests"
--
2.41.0


@@ -1,273 +0,0 @@
From 63963200e5fdc02d4ad2a0abb1632c26774af8bb Mon Sep 17 00:00:00 2001
From: Inessa Vasilevskaya <ivasilev@redhat.com>
Date: Tue, 22 Aug 2023 14:50:15 +0200
Subject: [PATCH 04/38] Split tier1 tests into default-on-push and on-demand
The default test set will have the fastest tests, which run on cloud only.
On-demand tests will contain the minimal beaker test set and will
be triggered via /packit test --labels minimal-beaker
Later we could add labels-based triggering that will remove the
need to manually leave a comment.
https://packit.dev/posts/manual-triggering#manual-only-triggering-of-jobs
OAMG-9458
---
.packit.yaml | 198 +++++++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 192 insertions(+), 6 deletions(-)
diff --git a/.packit.yaml b/.packit.yaml
index 9c30e0c8..32d2b02e 100644
--- a/.packit.yaml
+++ b/.packit.yaml
@@ -85,6 +85,7 @@ jobs:
# builds from master branch should start with 100 release, to have high priority
- bash -c "sed -i \"s/1%{?dist}/100%{?dist}/g\" packaging/leapp-repository.spec"
+
- job: tests
fmf_url: "https://gitlab.cee.redhat.com/oamg/tmt-plans"
fmf_ref: "master"
@@ -94,7 +95,37 @@ jobs:
epel-7-x86_64:
distros: [RHEL-7.9-ZStream]
identifier: tests-7.9to8.6
- tmt_plan: "((?!.*max_sst)(?!.*partitioning)(.*tier1)|(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog))"
+ tmt_plan: "(?!.*uefi)(?!.*max_sst)(?!.*partitioning)(.*tier1)"
+ tf_extra_params:
+ environments:
+ - tmt:
+ context:
+ distro: "rhel-7.9"
+ # tag resources as sst_upgrades to enable cost metrics collection
+ settings:
+ provisioning:
+ post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
+ tags:
+ BusinessUnit: sst_upgrades
+ env:
+ SOURCE_RELEASE: "7.9"
+ TARGET_RELEASE: "8.6"
+ LEAPPDATA_BRANCH: "upstream"
+
+# On-demand minimal beaker tests
+- job: tests
+ fmf_url: "https://gitlab.cee.redhat.com/oamg/tmt-plans"
+ fmf_ref: "master"
+ use_internal_tf: True
+ trigger: pull_request
+ manual_trigger: True
+ labels:
+ - minimal-beaker
+ targets:
+ epel-7-x86_64:
+ distros: [RHEL-7.9-ZStream]
+ identifier: tests-7.9to8.6-minimal-beaker
+ tmt_plan: "(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog|.*uefi)"
tf_extra_params:
environments:
- tmt:
@@ -120,7 +151,37 @@ jobs:
epel-7-x86_64:
distros: [RHEL-7.9-ZStream]
identifier: tests-7.9to8.8
- tmt_plan: "((?!.*max_sst)(?!.*partitioning)(.*tier1)|(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog))"
+ tmt_plan: "(?!.*uefi)(?!.*max_sst)(?!.*partitioning)(.*tier1)"
+ tf_extra_params:
+ environments:
+ - tmt:
+ context:
+ distro: "rhel-7.9"
+ # tag resources as sst_upgrades to enable cost metrics collection
+ settings:
+ provisioning:
+ post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
+ tags:
+ BusinessUnit: sst_upgrades
+ env:
+ SOURCE_RELEASE: "7.9"
+ TARGET_RELEASE: "8.8"
+ LEAPPDATA_BRANCH: "upstream"
+
+# On-demand minimal beaker tests
+- job: tests
+ fmf_url: "https://gitlab.cee.redhat.com/oamg/leapp-tests"
+ fmf_ref: "master"
+ use_internal_tf: True
+ trigger: pull_request
+ manual_trigger: True
+ labels:
+ - minimal-beaker
+ targets:
+ epel-7-x86_64:
+ distros: [RHEL-7.9-ZStream]
+ identifier: tests-7.9to8.8-minimal-beaker
+ tmt_plan: "(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog|.*uefi)"
tf_extra_params:
environments:
- tmt:
@@ -146,7 +207,37 @@ jobs:
epel-7-x86_64:
distros: [RHEL-7.9-ZStream]
identifier: tests-7.9to8.9
- tmt_plan: "((?!.*max_sst)(?!.*partitioning)(.*tier1)|(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog))"
+ tmt_plan: "(?!.*uefi)(?!.*max_sst)(?!.*partitioning)(.*tier1)"
+ tf_extra_params:
+ environments:
+ - tmt:
+ context:
+ distro: "rhel-7.9"
+ # tag resources as sst_upgrades to enable cost metrics collection
+ settings:
+ provisioning:
+ post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
+ tags:
+ BusinessUnit: sst_upgrades
+ env:
+ SOURCE_RELEASE: "7.9"
+ TARGET_RELEASE: "8.9"
+ LEAPPDATA_BRANCH: "upstream"
+
+# On-demand minimal beaker tests
+- job: tests
+ fmf_url: "https://gitlab.cee.redhat.com/oamg/leapp-tests"
+ fmf_ref: "master"
+ use_internal_tf: True
+ trigger: pull_request
+ manual_trigger: True
+ labels:
+ - minimal-beaker
+ targets:
+ epel-7-x86_64:
+ distros: [RHEL-7.9-ZStream]
+ identifier: tests-7.9to8.9-minimal-beaker
+ tmt_plan: "(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog|.*uefi)"
tf_extra_params:
environments:
- tmt:
@@ -199,7 +290,38 @@ jobs:
epel-8-x86_64:
distros: [RHEL-8.6.0-Nightly]
identifier: tests-8.6to9.0
- tmt_plan: "((?!.*max_sst)(?!.*partitioning)(.*tier1)|(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog))"
+ tmt_plan: "(?!.*uefi)(?!.*max_sst)(?!.*partitioning)(.*tier1)"
+ tf_extra_params:
+ environments:
+ - tmt:
+ context:
+ distro: "rhel-8.6"
+ # tag resources as sst_upgrades to enable cost metrics collection
+ settings:
+ provisioning:
+ post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
+ tags:
+ BusinessUnit: sst_upgrades
+ env:
+ SOURCE_RELEASE: "8.6"
+ TARGET_RELEASE: "9.0"
+ RHSM_REPOS: "rhel-8-for-x86_64-appstream-eus-rpms,rhel-8-for-x86_64-baseos-eus-rpms"
+ LEAPPDATA_BRANCH: "upstream"
+
+# On-demand minimal beaker tests
+- job: tests
+ fmf_url: "https://gitlab.cee.redhat.com/oamg/leapp-tests"
+ fmf_ref: "master"
+ use_internal_tf: True
+ trigger: pull_request
+ manual_trigger: True
+ labels:
+ - minimal-beaker
+ targets:
+ epel-8-x86_64:
+ distros: [RHEL-8.6.0-Nightly]
+ identifier: tests-8.6to9.0-minimal-beaker
+ tmt_plan: "(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog|.*uefi)"
tf_extra_params:
environments:
- tmt:
@@ -226,7 +348,39 @@ jobs:
epel-8-x86_64:
distros: [RHEL-8.8.0-Nightly]
identifier: tests-8.8to9.2
- tmt_plan: "((?!.*max_sst)(?!.*partitioning)(.*tier1)|(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog))"
+ tmt_plan: "(?!.*uefi)(?!.*max_sst)(?!.*partitioning)(.*tier1)"
+ tf_extra_params:
+ environments:
+ - tmt:
+ context:
+ distro: "rhel-8.8"
+ # tag resources as sst_upgrades to enable cost metrics collection
+ settings:
+ provisioning:
+ post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
+ tags:
+ BusinessUnit: sst_upgrades
+ env:
+ SOURCE_RELEASE: "8.8"
+ TARGET_RELEASE: "9.2"
+ RHSM_REPOS: "rhel-8-for-x86_64-appstream-beta-rpms,rhel-8-for-x86_64-baseos-beta-rpms"
+ LEAPPDATA_BRANCH: "upstream"
+ LEAPP_DEVEL_TARGET_RELEASE: "9.2"
+
+# On-demand minimal beaker tests
+- job: tests
+ fmf_url: "https://gitlab.cee.redhat.com/oamg/leapp-tests"
+ fmf_ref: "master"
+ use_internal_tf: True
+ trigger: pull_request
+ manual_trigger: True
+ labels:
+ - minimal-beaker
+ targets:
+ epel-8-x86_64:
+ distros: [RHEL-8.8.0-Nightly]
+ identifier: tests-8.8to9.2-minimal-beaker
+ tmt_plan: "(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog|.*uefi)"
tf_extra_params:
environments:
- tmt:
@@ -254,7 +408,39 @@ jobs:
epel-8-x86_64:
distros: [RHEL-8.9.0-Nightly]
identifier: tests-8.9to9.3
- tmt_plan: "((?!.*max_sst)(?!.*partitioning)(.*tier1)|(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog))"
+ tmt_plan: "(?!.*uefi)(?!.*max_sst)(?!.*partitioning)(.*tier1)"
+ tf_extra_params:
+ environments:
+ - tmt:
+ context:
+ distro: "rhel-8.9"
+ # tag resources as sst_upgrades to enable cost metrics collection
+ settings:
+ provisioning:
+ post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
+ tags:
+ BusinessUnit: sst_upgrades
+ env:
+ SOURCE_RELEASE: "8.9"
+ TARGET_RELEASE: "9.3"
+ RHSM_REPOS: "rhel-8-for-x86_64-appstream-beta-rpms,rhel-8-for-x86_64-baseos-beta-rpms"
+ LEAPPDATA_BRANCH: "upstream"
+ LEAPP_DEVEL_TARGET_RELEASE: "9.3"
+
+# On-demand minimal beaker tests
+- job: tests
+ fmf_url: "https://gitlab.cee.redhat.com/oamg/leapp-tests"
+ fmf_ref: "master"
+ use_internal_tf: True
+ trigger: pull_request
+ manual_trigger: True
+ labels:
+ - minimal-beaker
+ targets:
+ epel-8-x86_64:
+ distros: [RHEL-8.9.0-Nightly]
+ identifier: tests-8.9to9.3-minimal-beaker
+ tmt_plan: "(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog|.*uefi)"
tf_extra_params:
environments:
- tmt:
--
2.41.0


@@ -1,145 +0,0 @@
From 78542a7a58c3ee1a719cdbbd139409319402de0f Mon Sep 17 00:00:00 2001
From: Inessa Vasilevskaya <ivasilev@redhat.com>
Date: Tue, 22 Aug 2023 15:39:48 +0200
Subject: [PATCH 05/38] Add labels to all tests
- On-demand minimal beaker tests will have a generic
minimal-beaker label and a minimal-beaker-XtoY label to
enable fine-grained control over test scheduling
- AWS tests will have the aws label
- Tests triggered automatically will have the default label.
OAMG-9458
---
.packit.yaml | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
diff --git a/.packit.yaml b/.packit.yaml
index 32d2b02e..9a697838 100644
--- a/.packit.yaml
+++ b/.packit.yaml
@@ -91,6 +91,8 @@ jobs:
fmf_ref: "master"
use_internal_tf: True
trigger: pull_request
+ labels:
+ - default
targets:
epel-7-x86_64:
distros: [RHEL-7.9-ZStream]
@@ -121,6 +123,7 @@ jobs:
manual_trigger: True
labels:
- minimal-beaker
+ - minimal-beaker-7.9to8.6
targets:
epel-7-x86_64:
distros: [RHEL-7.9-ZStream]
@@ -147,6 +150,8 @@ jobs:
fmf_ref: "master"
use_internal_tf: True
trigger: pull_request
+ labels:
+ - default
targets:
epel-7-x86_64:
distros: [RHEL-7.9-ZStream]
@@ -177,6 +182,7 @@ jobs:
manual_trigger: True
labels:
- minimal-beaker
+ - minimal-beaker-7.9to8.8
targets:
epel-7-x86_64:
distros: [RHEL-7.9-ZStream]
@@ -203,6 +209,8 @@ jobs:
fmf_ref: "master"
use_internal_tf: True
trigger: pull_request
+ labels:
+ - default
targets:
epel-7-x86_64:
distros: [RHEL-7.9-ZStream]
@@ -233,6 +241,7 @@ jobs:
manual_trigger: True
labels:
- minimal-beaker
+ - minimal-beaker-7.9to8.9
targets:
epel-7-x86_64:
distros: [RHEL-7.9-ZStream]
@@ -259,6 +268,9 @@ jobs:
fmf_ref: "master"
use_internal_tf: True
trigger: pull_request
+ labels:
+ - default
+ - aws
targets:
epel-7-x86_64:
distros: [RHEL-7.9-rhui]
@@ -286,6 +298,8 @@ jobs:
fmf_ref: "master"
use_internal_tf: True
trigger: pull_request
+ labels:
+ - default
targets:
epel-8-x86_64:
distros: [RHEL-8.6.0-Nightly]
@@ -317,6 +331,7 @@ jobs:
manual_trigger: True
labels:
- minimal-beaker
+ - minimal-beaker-8.6to9.0
targets:
epel-8-x86_64:
distros: [RHEL-8.6.0-Nightly]
@@ -344,6 +359,8 @@ jobs:
fmf_ref: "master"
use_internal_tf: True
trigger: pull_request
+ labels:
+ - default
targets:
epel-8-x86_64:
distros: [RHEL-8.8.0-Nightly]
@@ -376,6 +393,7 @@ jobs:
manual_trigger: True
labels:
- minimal-beaker
+ - minimal-beaker-8.8to9.2
targets:
epel-8-x86_64:
distros: [RHEL-8.8.0-Nightly]
@@ -404,6 +422,8 @@ jobs:
fmf_ref: "master"
use_internal_tf: True
trigger: pull_request
+ labels:
+ - default
targets:
epel-8-x86_64:
distros: [RHEL-8.9.0-Nightly]
@@ -436,6 +456,7 @@ jobs:
manual_trigger: True
labels:
- minimal-beaker
+ - minimal-beaker-8.9to9.3
targets:
epel-8-x86_64:
distros: [RHEL-8.9.0-Nightly]
@@ -464,6 +485,9 @@ jobs:
fmf_ref: "master"
use_internal_tf: True
trigger: pull_request
+ labels:
+ - default
+ - aws
targets:
epel-8-x86_64:
distros: [RHEL-8.6-rhui]
--
2.41.0


@@ -1,396 +0,0 @@
From 6bb005605732e18b1921bf207898fa8499ceedc6 Mon Sep 17 00:00:00 2001
From: Inessa Vasilevskaya <ivasilev@redhat.com>
Date: Wed, 23 Aug 2023 14:49:27 +0200
Subject: [PATCH 06/38] Refactor using YAML anchors
Let's remove duplication in the definitions of similar test jobs
by using YAML anchors.
---
.packit.yaml | 228 ++++++++++-----------------------------------------
1 file changed, 43 insertions(+), 185 deletions(-)
diff --git a/.packit.yaml b/.packit.yaml
index 9a697838..06c681b3 100644
--- a/.packit.yaml
+++ b/.packit.yaml
@@ -85,8 +85,8 @@ jobs:
# builds from master branch should start with 100 release, to have high priority
- bash -c "sed -i \"s/1%{?dist}/100%{?dist}/g\" packaging/leapp-repository.spec"
-
-- job: tests
+- &default-79to86
+ job: tests
fmf_url: "https://gitlab.cee.redhat.com/oamg/tmt-plans"
fmf_ref: "master"
use_internal_tf: True
@@ -97,7 +97,7 @@ jobs:
epel-7-x86_64:
distros: [RHEL-7.9-ZStream]
identifier: tests-7.9to8.6
- tmt_plan: "(?!.*uefi)(?!.*max_sst)(?!.*partitioning)(.*tier1)"
+ tmt_plan: "(?!.*uefi)(?!.*max_sst)(?!.*partitioning)(?!.*oamg4250_lvm_var_xfs_ftype0)(?!.*kernel-rt)(.*tier1)"
tf_extra_params:
environments:
- tmt:
@@ -114,21 +114,16 @@ jobs:
TARGET_RELEASE: "8.6"
LEAPPDATA_BRANCH: "upstream"
-# On-demand minimal beaker tests
-- job: tests
- fmf_url: "https://gitlab.cee.redhat.com/oamg/tmt-plans"
- fmf_ref: "master"
- use_internal_tf: True
- trigger: pull_request
- manual_trigger: True
+- &default-79to86-aws
+ <<: *default-79to86
labels:
- - minimal-beaker
- - minimal-beaker-7.9to8.6
+ - default
+ - aws
targets:
epel-7-x86_64:
- distros: [RHEL-7.9-ZStream]
- identifier: tests-7.9to8.6-minimal-beaker
- tmt_plan: "(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog|.*uefi)"
+ distros: [RHEL-7.9-rhui]
+ identifier: tests-7to8-aws-e2e
+ tmt_plan: "(?!.*sap)(.*e2e)"
tf_extra_params:
environments:
- tmt:
@@ -137,174 +132,71 @@ jobs:
# tag resources as sst_upgrades to enable cost metrics collection
settings:
provisioning:
- post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
+ post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys; yum-config-manager --enable rhel-7-server-rhui-optional-rpms"
tags:
BusinessUnit: sst_upgrades
env:
SOURCE_RELEASE: "7.9"
TARGET_RELEASE: "8.6"
+ RHUI: "aws"
LEAPPDATA_BRANCH: "upstream"
-- job: tests
- fmf_url: "https://gitlab.cee.redhat.com/oamg/leapp-tests"
- fmf_ref: "master"
- use_internal_tf: True
- trigger: pull_request
+# On-demand minimal beaker tests
+- &beaker-minimal-79to86
+ <<: *default-79to86
+ manual_trigger: True
labels:
- - default
- targets:
- epel-7-x86_64:
- distros: [RHEL-7.9-ZStream]
+ - minimal-beaker
+ - minimal-beaker-7.9to8.6
+ identifier: tests-7.9to8.6-minimal-beaker
+ tmt_plan: "(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog|.*uefi|.*oamg4250_lvm_var_xfs_ftype0)"
+
+- &default-79to88
+ <<: *default-79to86
identifier: tests-7.9to8.8
- tmt_plan: "(?!.*uefi)(?!.*max_sst)(?!.*partitioning)(.*tier1)"
- tf_extra_params:
- environments:
- - tmt:
- context:
- distro: "rhel-7.9"
- # tag resources as sst_upgrades to enable cost metrics collection
- settings:
- provisioning:
- post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
- tags:
- BusinessUnit: sst_upgrades
env:
SOURCE_RELEASE: "7.9"
TARGET_RELEASE: "8.8"
LEAPPDATA_BRANCH: "upstream"
# On-demand minimal beaker tests
-- job: tests
- fmf_url: "https://gitlab.cee.redhat.com/oamg/leapp-tests"
- fmf_ref: "master"
- use_internal_tf: True
- trigger: pull_request
- manual_trigger: True
+- &beaker-minimal-79to88
+ <<: *beaker-minimal-79to86
labels:
- minimal-beaker
- minimal-beaker-7.9to8.8
- targets:
- epel-7-x86_64:
- distros: [RHEL-7.9-ZStream]
identifier: tests-7.9to8.8-minimal-beaker
- tmt_plan: "(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog|.*uefi)"
- tf_extra_params:
- environments:
- - tmt:
- context:
- distro: "rhel-7.9"
- # tag resources as sst_upgrades to enable cost metrics collection
- settings:
- provisioning:
- post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
- tags:
- BusinessUnit: sst_upgrades
env:
SOURCE_RELEASE: "7.9"
TARGET_RELEASE: "8.8"
LEAPPDATA_BRANCH: "upstream"
-- job: tests
- fmf_url: "https://gitlab.cee.redhat.com/oamg/leapp-tests"
- fmf_ref: "master"
- use_internal_tf: True
- trigger: pull_request
- labels:
- - default
- targets:
- epel-7-x86_64:
- distros: [RHEL-7.9-ZStream]
+- &default-79to89
+ <<: *default-79to86
identifier: tests-7.9to8.9
- tmt_plan: "(?!.*uefi)(?!.*max_sst)(?!.*partitioning)(.*tier1)"
- tf_extra_params:
- environments:
- - tmt:
- context:
- distro: "rhel-7.9"
- # tag resources as sst_upgrades to enable cost metrics collection
- settings:
- provisioning:
- post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
- tags:
- BusinessUnit: sst_upgrades
env:
SOURCE_RELEASE: "7.9"
TARGET_RELEASE: "8.9"
LEAPPDATA_BRANCH: "upstream"
# On-demand minimal beaker tests
-- job: tests
- fmf_url: "https://gitlab.cee.redhat.com/oamg/leapp-tests"
- fmf_ref: "master"
- use_internal_tf: True
- trigger: pull_request
- manual_trigger: True
+- &beaker-minimal-79to89
+ <<: *beaker-minimal-79to86
labels:
- minimal-beaker
- minimal-beaker-7.9to8.9
- targets:
- epel-7-x86_64:
- distros: [RHEL-7.9-ZStream]
identifier: tests-7.9to8.9-minimal-beaker
- tmt_plan: "(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog|.*uefi)"
- tf_extra_params:
- environments:
- - tmt:
- context:
- distro: "rhel-7.9"
- # tag resources as sst_upgrades to enable cost metrics collection
- settings:
- provisioning:
- post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
- tags:
- BusinessUnit: sst_upgrades
env:
SOURCE_RELEASE: "7.9"
TARGET_RELEASE: "8.9"
LEAPPDATA_BRANCH: "upstream"
-- job: tests
- fmf_url: "https://gitlab.cee.redhat.com/oamg/leapp-tests"
- fmf_ref: "master"
- use_internal_tf: True
- trigger: pull_request
- labels:
- - default
- - aws
- targets:
- epel-7-x86_64:
- distros: [RHEL-7.9-rhui]
- identifier: tests-7to8-aws-e2e
- tmt_plan: "^(?!.*upgrade_plugin)(?!.*tier[2-3].*)(?!.*rhsm)(?!.*c2r)(?!.*sap)(?!.*8to9)(.*e2e)"
- tf_extra_params:
- environments:
- - tmt:
- context:
- distro: "rhel-7.9"
- # tag resources as sst_upgrades to enable cost metrics collection
- settings:
- provisioning:
- post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys; yum-config-manager --enable rhel-7-server-rhui-optional-rpms"
- tags:
- BusinessUnit: sst_upgrades
- env:
- SOURCE_RELEASE: "7.9"
- TARGET_RELEASE: "8.6"
- RHUI: "aws"
- LEAPPDATA_BRANCH: "upstream"
-
-- job: tests
- fmf_url: "https://gitlab.cee.redhat.com/oamg/leapp-tests"
- fmf_ref: "master"
- use_internal_tf: True
- trigger: pull_request
- labels:
- - default
+- &default-86to90
+ <<: *default-79to86
targets:
epel-8-x86_64:
distros: [RHEL-8.6.0-Nightly]
identifier: tests-8.6to9.0
- tmt_plan: "(?!.*uefi)(?!.*max_sst)(?!.*partitioning)(.*tier1)"
tf_extra_params:
environments:
- tmt:
@@ -323,12 +215,8 @@ jobs:
LEAPPDATA_BRANCH: "upstream"
# On-demand minimal beaker tests
-- job: tests
- fmf_url: "https://gitlab.cee.redhat.com/oamg/leapp-tests"
- fmf_ref: "master"
- use_internal_tf: True
- trigger: pull_request
- manual_trigger: True
+- &beaker-minimal-86to90
+ <<: *beaker-minimal-79to86
labels:
- minimal-beaker
- minimal-beaker-8.6to9.0
@@ -336,7 +224,6 @@ jobs:
epel-8-x86_64:
distros: [RHEL-8.6.0-Nightly]
identifier: tests-8.6to9.0-minimal-beaker
- tmt_plan: "(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog|.*uefi)"
tf_extra_params:
environments:
- tmt:
@@ -354,18 +241,12 @@ jobs:
RHSM_REPOS: "rhel-8-for-x86_64-appstream-eus-rpms,rhel-8-for-x86_64-baseos-eus-rpms"
LEAPPDATA_BRANCH: "upstream"
-- job: tests
- fmf_url: "https://gitlab.cee.redhat.com/oamg/leapp-tests"
- fmf_ref: "master"
- use_internal_tf: True
- trigger: pull_request
- labels:
- - default
+- &default-88to92
+ <<: *default-86to90
targets:
epel-8-x86_64:
distros: [RHEL-8.8.0-Nightly]
identifier: tests-8.8to9.2
- tmt_plan: "(?!.*uefi)(?!.*max_sst)(?!.*partitioning)(.*tier1)"
tf_extra_params:
environments:
- tmt:
@@ -385,12 +266,8 @@ jobs:
LEAPP_DEVEL_TARGET_RELEASE: "9.2"
# On-demand minimal beaker tests
-- job: tests
- fmf_url: "https://gitlab.cee.redhat.com/oamg/leapp-tests"
- fmf_ref: "master"
- use_internal_tf: True
- trigger: pull_request
- manual_trigger: True
+- &beaker-minimal-88to92
+ <<: *beaker-minimal-86to90
labels:
- minimal-beaker
- minimal-beaker-8.8to9.2
@@ -398,7 +275,6 @@ jobs:
epel-8-x86_64:
distros: [RHEL-8.8.0-Nightly]
identifier: tests-8.8to9.2-minimal-beaker
- tmt_plan: "(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog|.*uefi)"
tf_extra_params:
environments:
- tmt:
@@ -417,18 +293,12 @@ jobs:
LEAPPDATA_BRANCH: "upstream"
LEAPP_DEVEL_TARGET_RELEASE: "9.2"
-- job: tests
- fmf_url: "https://gitlab.cee.redhat.com/oamg/leapp-tests"
- fmf_ref: "master"
- use_internal_tf: True
- trigger: pull_request
- labels:
- - default
+- &default-89to93
+ <<: *default-88to92
targets:
epel-8-x86_64:
distros: [RHEL-8.9.0-Nightly]
identifier: tests-8.9to9.3
- tmt_plan: "(?!.*uefi)(?!.*max_sst)(?!.*partitioning)(.*tier1)"
tf_extra_params:
environments:
- tmt:
@@ -448,12 +318,8 @@ jobs:
LEAPP_DEVEL_TARGET_RELEASE: "9.3"
# On-demand minimal beaker tests
-- job: tests
- fmf_url: "https://gitlab.cee.redhat.com/oamg/leapp-tests"
- fmf_ref: "master"
- use_internal_tf: True
- trigger: pull_request
- manual_trigger: True
+- &beaker-minimal-89to93
+ <<: *beaker-minimal-88to92
labels:
- minimal-beaker
- minimal-beaker-8.9to9.3
@@ -461,7 +327,6 @@ jobs:
epel-8-x86_64:
distros: [RHEL-8.9.0-Nightly]
identifier: tests-8.9to9.3-minimal-beaker
- tmt_plan: "(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog|.*uefi)"
tf_extra_params:
environments:
- tmt:
@@ -480,19 +345,12 @@ jobs:
LEAPPDATA_BRANCH: "upstream"
LEAPP_DEVEL_TARGET_RELEASE: "9.3"
-- job: tests
- fmf_url: "https://gitlab.cee.redhat.com/oamg/leapp-tests"
- fmf_ref: "master"
- use_internal_tf: True
- trigger: pull_request
- labels:
- - default
- - aws
+- &default-86to90-aws
+ <<: *default-79to86-aws
targets:
epel-8-x86_64:
distros: [RHEL-8.6-rhui]
identifier: tests-8to9-aws-e2e
- tmt_plan: "^(?!.*upgrade_plugin)(?!.*tier[2-3].*)(?!.*rhsm)(?!.*c2r)(?!.*sap)(?!.*7to8)(.*e2e)"
tf_extra_params:
environments:
- tmt:
--
2.41.0
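
For readers unfamiliar with YAML anchors, here is a minimal editorial sketch (not taken from the repository) of the same pattern the patch applies to .packit.yaml, using Python and PyYAML (assumed to be installed): a base job is anchored once, and derived jobs merge it with `<<`, overriding only the keys that differ.

import yaml  # assumes PyYAML, whose safe loader resolves the YAML merge key

DOC = """
jobs:
- &default-79to86
  job: tests
  trigger: pull_request
  identifier: tests-7.9to8.6
  env: {SOURCE_RELEASE: "7.9", TARGET_RELEASE: "8.6"}
- &default-79to88
  <<: *default-79to86           # inherit everything from the anchored job ...
  identifier: tests-7.9to8.8    # ... and override only what differs
  env: {SOURCE_RELEASE: "7.9", TARGET_RELEASE: "8.8"}
"""

jobs = yaml.safe_load(DOC)["jobs"]
print(jobs[1]["job"])         # -> tests (inherited via <<)
print(jobs[1]["identifier"])  # -> tests-7.9to8.8 (overridden)

Keys written out explicitly win over merged ones, which is why each derived job in the patch only spells out its identifier, env and similar differences.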


@@ -1,118 +0,0 @@
From 622fa64abe818294ade9d533f2bffdf320849b0f Mon Sep 17 00:00:00 2001
From: Inessa Vasilevskaya <ivasilev@redhat.com>
Date: Wed, 23 Aug 2023 15:24:57 +0200
Subject: [PATCH 07/38] Add kernel-rt tests and switch to sanity for default
Instead of a bulky regex, a sanity test plan will be used.
Also, kernel-rt tests have been specified as a separate
test set with a kernel-rt label.
---
.packit.yaml | 50 +++++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 49 insertions(+), 1 deletion(-)
diff --git a/.packit.yaml b/.packit.yaml
index 06c681b3..eb08c9f5 100644
--- a/.packit.yaml
+++ b/.packit.yaml
@@ -97,7 +97,7 @@ jobs:
epel-7-x86_64:
distros: [RHEL-7.9-ZStream]
identifier: tests-7.9to8.6
- tmt_plan: "(?!.*uefi)(?!.*max_sst)(?!.*partitioning)(?!.*oamg4250_lvm_var_xfs_ftype0)(?!.*kernel-rt)(.*tier1)"
+ tmt_plan: "sanity_plan"
tf_extra_params:
environments:
- tmt:
@@ -151,6 +151,14 @@ jobs:
identifier: tests-7.9to8.6-minimal-beaker
tmt_plan: "(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog|.*uefi|.*oamg4250_lvm_var_xfs_ftype0)"
+# On-demand kernel-rt tests
+- &kernel-rt-79to86
+ <<: *beaker-minimal-79to86
+ labels:
+ - kernel-rt
+ identifier: tests-7.9to8.6-kernel-rt
+ tmt_plan: "(?!.*max_sst)(.*tier1)(.*kernel-rt)"
+
- &default-79to88
<<: *default-79to86
identifier: tests-7.9to8.8
@@ -171,6 +179,14 @@ jobs:
TARGET_RELEASE: "8.8"
LEAPPDATA_BRANCH: "upstream"
+# On-demand kernel-rt tests
+- &kernel-rt-79to88
+ <<: *beaker-minimal-79to88
+ labels:
+ - kernel-rt
+ identifier: tests-7.9to8.8-kernel-rt
+ tmt_plan: "(?!.*max_sst)(.*tier1)(.*kernel-rt)"
+
- &default-79to89
<<: *default-79to86
identifier: tests-7.9to8.9
@@ -191,6 +207,14 @@ jobs:
TARGET_RELEASE: "8.9"
LEAPPDATA_BRANCH: "upstream"
+# On-demand kernel-rt tests
+- &kernel-rt-79to89
+ <<: *beaker-minimal-79to89
+ labels:
+ - kernel-rt
+ identifier: tests-7.9to8.9-kernel-rt
+ tmt_plan: "(?!.*max_sst)(.*tier1)(.*kernel-rt)"
+
- &default-86to90
<<: *default-79to86
targets:
@@ -241,6 +265,14 @@ jobs:
RHSM_REPOS: "rhel-8-for-x86_64-appstream-eus-rpms,rhel-8-for-x86_64-baseos-eus-rpms"
LEAPPDATA_BRANCH: "upstream"
+# On-demand kernel-rt tests
+- &kernel-rt-86to90
+ <<: *beaker-minimal-86to90
+ labels:
+ - kernel-rt
+ identifier: tests-8.6to9.0-kernel-rt
+ tmt_plan: "(?!.*max_sst)(.*tier1)(.*kernel-rt)"
+
- &default-88to92
<<: *default-86to90
targets:
@@ -293,6 +325,14 @@ jobs:
LEAPPDATA_BRANCH: "upstream"
LEAPP_DEVEL_TARGET_RELEASE: "9.2"
+# On-demand kernel-rt tests
+- &kernel-rt-88to92
+ <<: *beaker-minimal-88to92
+ labels:
+ - kernel-rt
+ identifier: tests-8.8to9.2-kernel-rt
+ tmt_plan: "(?!.*max_sst)(.*tier1)(.*kernel-rt)"
+
- &default-89to93
<<: *default-88to92
targets:
@@ -345,6 +385,14 @@ jobs:
LEAPPDATA_BRANCH: "upstream"
LEAPP_DEVEL_TARGET_RELEASE: "9.3"
+# On-demand kernel-rt tests
+- &kernel-rt-89to93
+ <<: *beaker-minimal-89to93
+ labels:
+ - kernel-rt
+ identifier: tests-8.9to9.3-kernel-rt
+ tmt_plan: "(?!.*max_sst)(.*tier1)(.*kernel-rt)"
+
- &default-86to90-aws
<<: *default-79to86-aws
targets:
--
2.41.0


@@ -1,155 +0,0 @@
From 4932e5ad0baac10db5efae9d57f8b57d2072b976 Mon Sep 17 00:00:00 2001
From: Inessa Vasilevskaya <ivasilev@redhat.com>
Date: Thu, 24 Aug 2023 11:34:15 +0200
Subject: [PATCH 08/38] Minor label enhancements
- The minimal-beaker label has been renamed to beaker-minimal to match
the test job names;
- kernel-rt-XtoY labels have been added to each test to allow for
a separate test launch.
---
.packit.yaml | 42 ++++++++++++++++++++++++------------------
1 file changed, 24 insertions(+), 18 deletions(-)
diff --git a/.packit.yaml b/.packit.yaml
index eb08c9f5..a183674c 100644
--- a/.packit.yaml
+++ b/.packit.yaml
@@ -146,9 +146,9 @@ jobs:
<<: *default-79to86
manual_trigger: True
labels:
- - minimal-beaker
- - minimal-beaker-7.9to8.6
- identifier: tests-7.9to8.6-minimal-beaker
+ - beaker-minimal
+ - beaker-minimal-7.9to8.6
+ identifier: tests-7.9to8.6-beaker-minimal
tmt_plan: "(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog|.*uefi|.*oamg4250_lvm_var_xfs_ftype0)"
# On-demand kernel-rt tests
@@ -156,6 +156,7 @@ jobs:
<<: *beaker-minimal-79to86
labels:
- kernel-rt
+ - kernel-rt-7.9to8.6
identifier: tests-7.9to8.6-kernel-rt
tmt_plan: "(?!.*max_sst)(.*tier1)(.*kernel-rt)"
@@ -171,9 +172,9 @@ jobs:
- &beaker-minimal-79to88
<<: *beaker-minimal-79to86
labels:
- - minimal-beaker
- - minimal-beaker-7.9to8.8
- identifier: tests-7.9to8.8-minimal-beaker
+ - beaker-minimal
+ - beaker-minimal-7.9to8.8
+ identifier: tests-7.9to8.8-beaker-minimal
env:
SOURCE_RELEASE: "7.9"
TARGET_RELEASE: "8.8"
@@ -184,6 +185,7 @@ jobs:
<<: *beaker-minimal-79to88
labels:
- kernel-rt
+ - kernel-rt-7.9to8.8
identifier: tests-7.9to8.8-kernel-rt
tmt_plan: "(?!.*max_sst)(.*tier1)(.*kernel-rt)"
@@ -199,9 +201,9 @@ jobs:
- &beaker-minimal-79to89
<<: *beaker-minimal-79to86
labels:
- - minimal-beaker
- - minimal-beaker-7.9to8.9
- identifier: tests-7.9to8.9-minimal-beaker
+ - beaker-minimal
+ - beaker-minimal-7.9to8.9
+ identifier: tests-7.9to8.9-beaker-minimal
env:
SOURCE_RELEASE: "7.9"
TARGET_RELEASE: "8.9"
@@ -212,6 +214,7 @@ jobs:
<<: *beaker-minimal-79to89
labels:
- kernel-rt
+ - kernel-rt-7.9to8.9
identifier: tests-7.9to8.9-kernel-rt
tmt_plan: "(?!.*max_sst)(.*tier1)(.*kernel-rt)"
@@ -242,12 +245,12 @@ jobs:
- &beaker-minimal-86to90
<<: *beaker-minimal-79to86
labels:
- - minimal-beaker
- - minimal-beaker-8.6to9.0
+ - beaker-minimal
+ - beaker-minimal-8.6to9.0
targets:
epel-8-x86_64:
distros: [RHEL-8.6.0-Nightly]
- identifier: tests-8.6to9.0-minimal-beaker
+ identifier: tests-8.6to9.0-beaker-minimal
tf_extra_params:
environments:
- tmt:
@@ -270,6 +273,7 @@ jobs:
<<: *beaker-minimal-86to90
labels:
- kernel-rt
+ - kernel-rt-8.6to9.0
identifier: tests-8.6to9.0-kernel-rt
tmt_plan: "(?!.*max_sst)(.*tier1)(.*kernel-rt)"
@@ -301,12 +305,12 @@ jobs:
- &beaker-minimal-88to92
<<: *beaker-minimal-86to90
labels:
- - minimal-beaker
- - minimal-beaker-8.8to9.2
+ - beaker-minimal
+ - beaker-minimal-8.8to9.2
targets:
epel-8-x86_64:
distros: [RHEL-8.8.0-Nightly]
- identifier: tests-8.8to9.2-minimal-beaker
+ identifier: tests-8.8to9.2-beaker-minimal
tf_extra_params:
environments:
- tmt:
@@ -330,6 +334,7 @@ jobs:
<<: *beaker-minimal-88to92
labels:
- kernel-rt
+ - kernel-rt-8.8to9.2
identifier: tests-8.8to9.2-kernel-rt
tmt_plan: "(?!.*max_sst)(.*tier1)(.*kernel-rt)"
@@ -361,12 +366,12 @@ jobs:
- &beaker-minimal-89to93
<<: *beaker-minimal-88to92
labels:
- - minimal-beaker
- - minimal-beaker-8.9to9.3
+ - beaker-minimal
+ - beaker-minimal-8.9to9.3
targets:
epel-8-x86_64:
distros: [RHEL-8.9.0-Nightly]
- identifier: tests-8.9to9.3-minimal-beaker
+ identifier: tests-8.9to9.3-beaker-minimal
tf_extra_params:
environments:
- tmt:
@@ -390,6 +395,7 @@ jobs:
<<: *beaker-minimal-89to93
labels:
- kernel-rt
+ - kernel-rt-8.9to9.3
identifier: tests-8.9to9.3-kernel-rt
tmt_plan: "(?!.*max_sst)(.*tier1)(.*kernel-rt)"
--
2.41.0


@@ -1,31 +0,0 @@
From 0b6d2df149754f26829734240f1b05be2e9d16a4 Mon Sep 17 00:00:00 2001
From: Inessa Vasilevskaya <ivasilev@redhat.com>
Date: Thu, 24 Aug 2023 14:00:35 +0200
Subject: [PATCH 09/38] Update pr-welcome message
List on-demand packit test launch possibilities.
---
.github/workflows/pr-welcome-msg.yml | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/.github/workflows/pr-welcome-msg.yml b/.github/workflows/pr-welcome-msg.yml
index cec7c778..e791340e 100644
--- a/.github/workflows/pr-welcome-msg.yml
+++ b/.github/workflows/pr-welcome-msg.yml
@@ -26,7 +26,12 @@ jobs:
Packit will automatically schedule regression tests for this PR's build and latest upstream leapp build. If you need a different version of leapp from PR#42, use `/packit test oamg/leapp#42`
- To launch regression testing public members of oamg organization can leave the following comment:
+ It is possible to schedule specific on-demand tests as well. Currently 2 test sets are supported, `beaker-minimal` and `kernel-rt`, both can be used to be run on all upgrade paths or just a couple of specific ones.
+ To launch on-demand tests with packit:
+ - **/packit test --labels kernel-rt** to schedule `kernel-rt` tests set for all upgrade paths
+ - **/packit test --labels beaker-minimal-8.9to9.3,kernel-rt-8.9to9.3** to schedule `kernel-rt` and `beaker-minimal` test sets for 8.9->9.3 upgrade path
+
+ [Deprecated] To launch on-demand regression testing public members of oamg organization can leave the following comment:
- **/rerun** to schedule basic regression tests using this pr build and latest upstream leapp build as artifacts
- **/rerun 42** to schedule basic regression tests using this pr build and leapp\*PR42\* as artifacts
- **/rerun-sst** to schedule sst tests using this pr build and latest upstream leapp build as artifacts
--
2.41.0


@@ -1,256 +0,0 @@
From ab94d25f067afa0b974dc6b850687023d982f52f Mon Sep 17 00:00:00 2001
From: Inessa Vasilevskaya <ivasilev@redhat.com>
Date: Mon, 28 Aug 2023 15:12:38 +0200
Subject: [PATCH 10/38] Address ddiblik's review comments
- Rename default tests to sanity
- Add XtoY label to on-demand test sets for specific upgrade
paths
---
.packit.yaml | 88 +++++++++++++++++++++++++++++-----------------------
1 file changed, 50 insertions(+), 38 deletions(-)
diff --git a/.packit.yaml b/.packit.yaml
index a183674c..3085ec0a 100644
--- a/.packit.yaml
+++ b/.packit.yaml
@@ -85,18 +85,18 @@ jobs:
# builds from master branch should start with 100 release, to have high priority
- bash -c "sed -i \"s/1%{?dist}/100%{?dist}/g\" packaging/leapp-repository.spec"
-- &default-79to86
+- &sanity-79to86
job: tests
fmf_url: "https://gitlab.cee.redhat.com/oamg/tmt-plans"
fmf_ref: "master"
use_internal_tf: True
trigger: pull_request
labels:
- - default
+ - sanity
targets:
epel-7-x86_64:
distros: [RHEL-7.9-ZStream]
- identifier: tests-7.9to8.6
+ identifier: sanity-7.9to8.6
tmt_plan: "sanity_plan"
tf_extra_params:
environments:
@@ -114,15 +114,15 @@ jobs:
TARGET_RELEASE: "8.6"
LEAPPDATA_BRANCH: "upstream"
-- &default-79to86-aws
- <<: *default-79to86
+- &sanity-79to86-aws
+ <<: *sanity-79to86
labels:
- - default
+ - sanity
- aws
targets:
epel-7-x86_64:
distros: [RHEL-7.9-rhui]
- identifier: tests-7to8-aws-e2e
+ identifier: sanity-7to8-aws-e2e
tmt_plan: "(?!.*sap)(.*e2e)"
tf_extra_params:
environments:
@@ -143,12 +143,13 @@ jobs:
# On-demand minimal beaker tests
- &beaker-minimal-79to86
- <<: *default-79to86
+ <<: *sanity-79to86
manual_trigger: True
labels:
- beaker-minimal
- beaker-minimal-7.9to8.6
- identifier: tests-7.9to8.6-beaker-minimal
+ - 7.9to8.6
+ identifier: sanity-7.9to8.6-beaker-minimal
tmt_plan: "(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog|.*uefi|.*oamg4250_lvm_var_xfs_ftype0)"
# On-demand kernel-rt tests
@@ -157,12 +158,13 @@ jobs:
labels:
- kernel-rt
- kernel-rt-7.9to8.6
- identifier: tests-7.9to8.6-kernel-rt
+ - 7.9to8.6
+ identifier: sanity-7.9to8.6-kernel-rt
tmt_plan: "(?!.*max_sst)(.*tier1)(.*kernel-rt)"
-- &default-79to88
- <<: *default-79to86
- identifier: tests-7.9to8.8
+- &sanity-79to88
+ <<: *sanity-79to86
+ identifier: sanity-7.9to8.8
env:
SOURCE_RELEASE: "7.9"
TARGET_RELEASE: "8.8"
@@ -174,7 +176,8 @@ jobs:
labels:
- beaker-minimal
- beaker-minimal-7.9to8.8
- identifier: tests-7.9to8.8-beaker-minimal
+ - 7.9to8.8
+ identifier: sanity-7.9to8.8-beaker-minimal
env:
SOURCE_RELEASE: "7.9"
TARGET_RELEASE: "8.8"
@@ -186,12 +189,13 @@ jobs:
labels:
- kernel-rt
- kernel-rt-7.9to8.8
- identifier: tests-7.9to8.8-kernel-rt
+ - 7.9to8.8
+ identifier: sanity-7.9to8.8-kernel-rt
tmt_plan: "(?!.*max_sst)(.*tier1)(.*kernel-rt)"
-- &default-79to89
- <<: *default-79to86
- identifier: tests-7.9to8.9
+- &sanity-79to89
+ <<: *sanity-79to86
+ identifier: sanity-7.9to8.9
env:
SOURCE_RELEASE: "7.9"
TARGET_RELEASE: "8.9"
@@ -203,7 +207,8 @@ jobs:
labels:
- beaker-minimal
- beaker-minimal-7.9to8.9
- identifier: tests-7.9to8.9-beaker-minimal
+ - 7.9to8.9
+ identifier: sanity-7.9to8.9-beaker-minimal
env:
SOURCE_RELEASE: "7.9"
TARGET_RELEASE: "8.9"
@@ -215,15 +220,16 @@ jobs:
labels:
- kernel-rt
- kernel-rt-7.9to8.9
- identifier: tests-7.9to8.9-kernel-rt
+ - 7.9to8.9
+ identifier: sanity-7.9to8.9-kernel-rt
tmt_plan: "(?!.*max_sst)(.*tier1)(.*kernel-rt)"
-- &default-86to90
- <<: *default-79to86
+- &sanity-86to90
+ <<: *sanity-79to86
targets:
epel-8-x86_64:
distros: [RHEL-8.6.0-Nightly]
- identifier: tests-8.6to9.0
+ identifier: sanity-8.6to9.0
tf_extra_params:
environments:
- tmt:
@@ -247,10 +253,11 @@ jobs:
labels:
- beaker-minimal
- beaker-minimal-8.6to9.0
+ - 8.6to9.0
targets:
epel-8-x86_64:
distros: [RHEL-8.6.0-Nightly]
- identifier: tests-8.6to9.0-beaker-minimal
+ identifier: sanity-8.6to9.0-beaker-minimal
tf_extra_params:
environments:
- tmt:
@@ -274,15 +281,16 @@ jobs:
labels:
- kernel-rt
- kernel-rt-8.6to9.0
- identifier: tests-8.6to9.0-kernel-rt
+ - 8.6to9.0
+ identifier: sanity-8.6to9.0-kernel-rt
tmt_plan: "(?!.*max_sst)(.*tier1)(.*kernel-rt)"
-- &default-88to92
- <<: *default-86to90
+- &sanity-88to92
+ <<: *sanity-86to90
targets:
epel-8-x86_64:
distros: [RHEL-8.8.0-Nightly]
- identifier: tests-8.8to9.2
+ identifier: sanity-8.8to9.2
tf_extra_params:
environments:
- tmt:
@@ -307,10 +315,11 @@ jobs:
labels:
- beaker-minimal
- beaker-minimal-8.8to9.2
+ - 8.6to9.2
targets:
epel-8-x86_64:
distros: [RHEL-8.8.0-Nightly]
- identifier: tests-8.8to9.2-beaker-minimal
+ identifier: sanity-8.8to9.2-beaker-minimal
tf_extra_params:
environments:
- tmt:
@@ -335,15 +344,16 @@ jobs:
labels:
- kernel-rt
- kernel-rt-8.8to9.2
- identifier: tests-8.8to9.2-kernel-rt
+ - 8.8to9.2
+ identifier: sanity-8.8to9.2-kernel-rt
tmt_plan: "(?!.*max_sst)(.*tier1)(.*kernel-rt)"
-- &default-89to93
- <<: *default-88to92
+- &sanity-89to93
+ <<: *sanity-88to92
targets:
epel-8-x86_64:
distros: [RHEL-8.9.0-Nightly]
- identifier: tests-8.9to9.3
+ identifier: sanity-8.9to9.3
tf_extra_params:
environments:
- tmt:
@@ -368,10 +378,11 @@ jobs:
labels:
- beaker-minimal
- beaker-minimal-8.9to9.3
+ - 8.9to9.3
targets:
epel-8-x86_64:
distros: [RHEL-8.9.0-Nightly]
- identifier: tests-8.9to9.3-beaker-minimal
+ identifier: sanity-8.9to9.3-beaker-minimal
tf_extra_params:
environments:
- tmt:
@@ -396,15 +407,16 @@ jobs:
labels:
- kernel-rt
- kernel-rt-8.9to9.3
- identifier: tests-8.9to9.3-kernel-rt
+ - 8.9to9.3
+ identifier: sanity-8.9to9.3-kernel-rt
tmt_plan: "(?!.*max_sst)(.*tier1)(.*kernel-rt)"
-- &default-86to90-aws
- <<: *default-79to86-aws
+- &sanity-86to90-aws
+ <<: *sanity-79to86-aws
targets:
epel-8-x86_64:
distros: [RHEL-8.6-rhui]
- identifier: tests-8to9-aws-e2e
+ identifier: sanity-8to9-aws-e2e
tf_extra_params:
environments:
- tmt:
--
2.41.0


@@ -1,173 +0,0 @@
From 93c6fd4f150229a01ba43ce74214043cffaf7dce Mon Sep 17 00:00:00 2001
From: Inessa Vasilevskaya <ivasilev@redhat.com>
Date: Tue, 29 Aug 2023 18:18:01 +0200
Subject: [PATCH 11/38] Address mmoran's review comments
- Use RHSM_REPOS_EUS='eus' instead of RHSM_REPOS for 8.6->9.0
- Remove beta repos from 8.8->9.2
- Change BusinessUnit tag value to sst_upgrades@leapp_upstream_test
---
.packit.yaml | 43 +++++++++++++++++++++----------------------
1 file changed, 21 insertions(+), 22 deletions(-)
diff --git a/.packit.yaml b/.packit.yaml
index 3085ec0a..cd6dd7d1 100644
--- a/.packit.yaml
+++ b/.packit.yaml
@@ -103,12 +103,12 @@ jobs:
- tmt:
context:
distro: "rhel-7.9"
- # tag resources as sst_upgrades to enable cost metrics collection
+ # tag resources as sst_upgrades@leapp_upstream_test to enable cost metrics collection
settings:
provisioning:
post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
tags:
- BusinessUnit: sst_upgrades
+ BusinessUnit: sst_upgrades@leapp_upstream_test
env:
SOURCE_RELEASE: "7.9"
TARGET_RELEASE: "8.6"
@@ -129,12 +129,12 @@ jobs:
- tmt:
context:
distro: "rhel-7.9"
- # tag resources as sst_upgrades to enable cost metrics collection
+ # tag resources as sst_upgrades@leapp_upstream_test to enable cost metrics collection
settings:
provisioning:
post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys; yum-config-manager --enable rhel-7-server-rhui-optional-rpms"
tags:
- BusinessUnit: sst_upgrades
+ BusinessUnit: sst_upgrades@leapp_upstream_test
env:
SOURCE_RELEASE: "7.9"
TARGET_RELEASE: "8.6"
@@ -235,16 +235,16 @@ jobs:
- tmt:
context:
distro: "rhel-8.6"
- # tag resources as sst_upgrades to enable cost metrics collection
+ # tag resources as sst_upgrades@leapp_upstream_test to enable cost metrics collection
settings:
provisioning:
post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
tags:
- BusinessUnit: sst_upgrades
+ BusinessUnit: sst_upgrades@leapp_upstream_test
env:
SOURCE_RELEASE: "8.6"
TARGET_RELEASE: "9.0"
- RHSM_REPOS: "rhel-8-for-x86_64-appstream-eus-rpms,rhel-8-for-x86_64-baseos-eus-rpms"
+ RHSM_REPOS_EUS: "eus"
LEAPPDATA_BRANCH: "upstream"
# On-demand minimal beaker tests
@@ -263,16 +263,16 @@ jobs:
- tmt:
context:
distro: "rhel-8.6"
- # tag resources as sst_upgrades to enable cost metrics collection
+ # tag resources as sst_upgrades@leapp_upstream_test to enable cost metrics collection
settings:
provisioning:
post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
tags:
- BusinessUnit: sst_upgrades
+ BusinessUnit: sst_upgrades@leapp_upstream_test
env:
SOURCE_RELEASE: "8.6"
TARGET_RELEASE: "9.0"
- RHSM_REPOS: "rhel-8-for-x86_64-appstream-eus-rpms,rhel-8-for-x86_64-baseos-eus-rpms"
+ RHSM_REPOS_EUS: "eus"
LEAPPDATA_BRANCH: "upstream"
# On-demand kernel-rt tests
@@ -296,16 +296,16 @@ jobs:
- tmt:
context:
distro: "rhel-8.8"
- # tag resources as sst_upgrades to enable cost metrics collection
+ # tag resources as sst_upgrades@leapp_upstream_test to enable cost metrics collection
settings:
provisioning:
post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
tags:
- BusinessUnit: sst_upgrades
+ BusinessUnit: sst_upgrades@leapp_upstream_test
env:
SOURCE_RELEASE: "8.8"
TARGET_RELEASE: "9.2"
- RHSM_REPOS: "rhel-8-for-x86_64-appstream-beta-rpms,rhel-8-for-x86_64-baseos-beta-rpms"
+ RHSM_REPOS_EUS: "eus"
LEAPPDATA_BRANCH: "upstream"
LEAPP_DEVEL_TARGET_RELEASE: "9.2"
@@ -325,16 +325,15 @@ jobs:
- tmt:
context:
distro: "rhel-8.8"
- # tag resources as sst_upgrades to enable cost metrics collection
+ # tag resources as sst_upgrades@leapp_upstream_test to enable cost metrics collection
settings:
provisioning:
post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
tags:
- BusinessUnit: sst_upgrades
+ BusinessUnit: sst_upgrades@leapp_upstream_test
env:
SOURCE_RELEASE: "8.8"
TARGET_RELEASE: "9.2"
- RHSM_REPOS: "rhel-8-for-x86_64-appstream-beta-rpms,rhel-8-for-x86_64-baseos-beta-rpms"
LEAPPDATA_BRANCH: "upstream"
LEAPP_DEVEL_TARGET_RELEASE: "9.2"
@@ -359,12 +358,12 @@ jobs:
- tmt:
context:
distro: "rhel-8.9"
- # tag resources as sst_upgrades to enable cost metrics collection
+ # tag resources as sst_upgrades@leapp_upstream_test to enable cost metrics collection
settings:
provisioning:
post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
tags:
- BusinessUnit: sst_upgrades
+ BusinessUnit: sst_upgrades@leapp_upstream_test
env:
SOURCE_RELEASE: "8.9"
TARGET_RELEASE: "9.3"
@@ -388,12 +387,12 @@ jobs:
- tmt:
context:
distro: "rhel-8.9"
- # tag resources as sst_upgrades to enable cost metrics collection
+ # tag resources as sst_upgrades@leapp_upstream_test to enable cost metrics collection
settings:
provisioning:
post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
tags:
- BusinessUnit: sst_upgrades
+ BusinessUnit: sst_upgrades@leapp_upstream_test
env:
SOURCE_RELEASE: "8.9"
TARGET_RELEASE: "9.3"
@@ -422,12 +421,12 @@ jobs:
- tmt:
context:
distro: "rhel-8.6"
- # tag resources as sst_upgrades to enable cost metrics collection
+ # tag resources as sst_upgrades@leapp_upstream_test to enable cost metrics collection
settings:
provisioning:
post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
tags:
- BusinessUnit: sst_upgrades
+ BusinessUnit: sst_upgrades@leapp_upstream_test
env:
SOURCE_RELEASE: "8.6"
TARGET_RELEASE: "9.0"
--
2.41.0


@@ -1,50 +0,0 @@
From f83702c6e78b535a9511e0842c478773a1271cad Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Petr=20Men=C5=A1=C3=ADk?= <pemensik@redhat.com>
Date: Wed, 30 Aug 2023 16:58:45 +0200
Subject: [PATCH 12/38] Add isccfg library manual running mode
For simplified manual testing, add a walking mode to the parser script. It allows
a direct test run displaying just the chosen statements or blocks.
---
.../el7toel8/libraries/isccfg.py | 28 +++++++++++++++++++
1 file changed, 28 insertions(+)
diff --git a/repos/system_upgrade/el7toel8/libraries/isccfg.py b/repos/system_upgrade/el7toel8/libraries/isccfg.py
index dff9bf24..1d29ff21 100644
--- a/repos/system_upgrade/el7toel8/libraries/isccfg.py
+++ b/repos/system_upgrade/el7toel8/libraries/isccfg.py
@@ -948,3 +948,31 @@ class IscConfigParser(object):
self.load_main_config()
self.load_included_files()
pass
+
+
+if __name__ == '__main__':
+ """Run parser to default path or path in the first argument.
+
+ Additional parameters are statements or blocks to print.
+ Defaults to options and zone.
+ """
+
+ from sys import argv
+
+ def print_cb(section, state):
+ print(section)
+
+ cfgpath = IscConfigParser.CONFIG_FILE
+ if len(argv) > 1:
+ cfgpath = argv[1]
+ if len(argv) > 2:
+ cb = {}
+ for key in argv[2:]:
+ cb[key] = print_cb
+ else:
+ cb = {'options': print_cb, 'zone': print_cb}
+
+ parser = IscConfigParser(cfgpath)
+ for section in parser.FILES_TO_CHECK:
+ print("# Walking file '{}'".format(section.path))
+ parser.walk(section.root_section(), cb)
--
2.41.0


@@ -1,26 +0,0 @@
From fa0773ddd5d27762d10ad769c119ef87b1684e5e Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Petr=20Men=C5=A1=C3=ADk?= <pemensik@redhat.com>
Date: Thu, 31 Aug 2023 13:04:34 +0200
Subject: [PATCH 13/38] Avoid warnings on python2
Use python3 compatible print function
---
repos/system_upgrade/el7toel8/libraries/isccfg.py | 2 ++
1 file changed, 2 insertions(+)
diff --git a/repos/system_upgrade/el7toel8/libraries/isccfg.py b/repos/system_upgrade/el7toel8/libraries/isccfg.py
index 1d29ff21..45baba0b 100644
--- a/repos/system_upgrade/el7toel8/libraries/isccfg.py
+++ b/repos/system_upgrade/el7toel8/libraries/isccfg.py
@@ -2,6 +2,8 @@
#
# Simplified parsing of bind configuration, with include support and nested sections.
+from __future__ import print_function
+
import re
import string
--
2.41.0


@@ -1,172 +0,0 @@
From 6ae2d5aadbf6a626cf27ca4594a3945e2c249122 Mon Sep 17 00:00:00 2001
From: mhecko <mhecko@redhat.com>
Date: Tue, 1 Aug 2023 12:44:47 +0200
Subject: [PATCH 14/38] makefile: add dev_test_no_lint target
Add a target for testing individual actors with almost-instant
execution time. Testing individual actors currently involves
a process in which every actor is instantiated in a separate
process, the created instance reports actor information such as actor's
name and then exits. As many processes are created, this process is
time consuming (cca 7s) which disrupts developer's workflow and causes
attention shift.
A newly added target `dev_test_no_lint` uses an introduced script
`find_actors`. To achieve the similar level of framework protection
as spawning actors in a separate process, the `find_actors` script
does not execute actors at all, and instead works on their ASTs.
Specifically, the script looks for all files named `actor.py`, finds
all classes that (explicitely) subclass Actor, and reads its `name`
attribute.
Usage example:
ACTOR=check_target_iso make dev_test_no_lint
---
Makefile | 15 +++++---
utils/find_actors.py | 81 ++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 91 insertions(+), 5 deletions(-)
create mode 100644 utils/find_actors.py
diff --git a/Makefile b/Makefile
index b63192e3..e3c40e01 100644
--- a/Makefile
+++ b/Makefile
@@ -16,9 +16,12 @@ REPOSITORIES ?= $(shell ls $(_SYSUPG_REPOS) | xargs echo | tr " " ",")
SYSUPG_TEST_PATHS=$(shell echo $(REPOSITORIES) | sed -r "s|(,\\|^)| $(_SYSUPG_REPOS)/|g")
TEST_PATHS:=commands repos/common $(SYSUPG_TEST_PATHS)
+# python version to run test with
+_PYTHON_VENV=$${PYTHON_VENV:-python2.7}
ifdef ACTOR
- TEST_PATHS=`python utils/actor_path.py $(ACTOR)`
+ TEST_PATHS=`$(_PYTHON_VENV) utils/actor_path.py $(ACTOR)`
+ APPROX_TEST_PATHS=$(shell $(_PYTHON_VENV) utils/find_actors.py -C repos $(ACTOR)) # Dev only
endif
ifeq ($(TEST_LIBS),y)
@@ -32,9 +35,6 @@ endif
# needed only in case the Python2 should be used
_USE_PYTHON_INTERPRETER=$${_PYTHON_INTERPRETER}
-# python version to run test with
-_PYTHON_VENV=$${PYTHON_VENV:-python2.7}
-
# by default use values you can see below, but in case the COPR_* var is defined
# use it instead of the default
_COPR_REPO=$${COPR_REPO:-leapp}
@@ -127,6 +127,7 @@ help:
@echo " - can be changed by setting TEST_CONTAINER env"
@echo " test_container_all run lint and tests in all available containers"
@echo " test_container_no_lint run tests without linting in container, see test_container"
+ @echo " dev_test_no_lint (advanced users) run only tests of a single actor specified by the ACTOR variable"
@echo " test_container_all_no_lint run tests without linting in all available containers"
@echo " clean_containers clean all testing and building container images (to force a rebuild for example)"
@echo ""
@@ -486,6 +487,10 @@ fast_lint:
echo "No files to lint."; \
fi
+dev_test_no_lint:
+ . $(VENVNAME)/bin/activate; \
+ $(_PYTHON_VENV) -m pytest $(REPORT_ARG) $(APPROX_TEST_PATHS) $(LIBRARY_PATH)
+
dashboard_data:
. $(VENVNAME)/bin/activate; \
snactor repo find --path repos/; \
@@ -494,4 +499,4 @@ dashboard_data:
popd
.PHONY: help build clean prepare source srpm copr_build _build_local build_container print_release register install-deps install-deps-fedora lint test_no_lint test dashboard_data fast_lint
-.PHONY: test_container test_container_no_lint test_container_all test_container_all_no_lint clean_containers _build_container_image _test_container_ipu
+.PHONY: test_container test_container_no_lint test_container_all test_container_all_no_lint clean_containers _build_container_image _test_container_ipu dev_test_no_lint
diff --git a/utils/find_actors.py b/utils/find_actors.py
new file mode 100644
index 00000000..25cc2217
--- /dev/null
+++ b/utils/find_actors.py
@@ -0,0 +1,81 @@
+import argparse
+import ast
+import os
+import sys
+
+
+def is_direct_actor_def(ast_node):
+ if not isinstance(ast_node, ast.ClassDef):
+ return False
+
+ direcly_named_bases = (base for base in ast_node.bases if isinstance(base, ast.Name))
+ for class_base in direcly_named_bases:
+ # We are looking for direct name 'Actor'
+ if class_base.id == 'Actor':
+ return True
+
+ return False
+
+
+def extract_actor_name_from_def(actor_class_def):
+ assignment_value_class = ast.Str if sys.version_info < (3,8) else ast.Constant
+ assignment_value_attrib = 's' if sys.version_info < (3,8) else 'value'
+
+ actor_name = None
+ class_level_assignments = (child for child in actor_class_def.body if isinstance(child, ast.Assign))
+ # Search for class-level assignment specifying actor's name: `name = 'name'`
+ for child in class_level_assignments:
+ assignment = child
+ for target in assignment.targets:
+ assignment_adds_name_attrib = isinstance(target, ast.Name) and target.id == 'name'
+ assignment_uses_a_constant_string = isinstance(assignment.value, assignment_value_class)
+ if assignment_adds_name_attrib and assignment_uses_a_constant_string:
+ rhs = assignment.value # <lhs> = <rhs>
+ actor_name = getattr(rhs, assignment_value_attrib)
+ break
+ if actor_name is not None:
+ break
+ return actor_name
+
+
+def get_actor_names(actor_path):
+ with open(actor_path) as actor_file:
+ try:
+ actor_def = ast.parse(actor_file.read())
+ except SyntaxError:
+ error = ('Failed to parse {0}. The actor might contain syntax errors, or perhaps it '
+ 'is written with Python3-specific syntax?\n')
+ sys.stderr.write(error.format(actor_path))
+ return []
+ actor_defs = [ast_node for ast_node in actor_def.body if is_direct_actor_def(ast_node)]
+ actors = [extract_actor_name_from_def(actor_def) for actor_def in actor_defs]
+ return actors
+
+
+def make_parser():
+ parser = argparse.ArgumentParser()
+ parser.add_argument('actor_names', nargs='+',
+ help='Actor names (the name attribute of the actor class) to look for.')
+ parser.add_argument('-C', '--change-dir', dest='cwd',
+ help='Path in which the actors will be looked for.', default='.')
+ return parser
+
+
+if __name__ == '__main__':
+ parser = make_parser()
+ args = parser.parse_args()
+ cwd = os.path.abspath(args.cwd)
+ actor_names_to_search_for = set(args.actor_names)
+
+ actor_paths = []
+ for directory, dummy_subdirs, dir_files in os.walk(cwd):
+ for actor_path in dir_files:
+ actor_path = os.path.join(directory, actor_path)
+ if os.path.basename(actor_path) != 'actor.py':
+ continue
+
+ defined_actor_names = set(get_actor_names(actor_path))
+ if defined_actor_names.intersection(actor_names_to_search_for):
+ actor_module_path = directory
+ actor_paths.append(actor_module_path)
+ print('\n'.join(actor_paths))
--
2.41.0
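
As an illustration of the AST-based discovery the patch above introduces, a
minimal standalone sketch of the same idea follows (standard library only,
Python 3.8+; the embedded actor source is invented for the example):

import ast

EXAMPLE_ACTOR_SOURCE = """
from leapp.actors import Actor

class CheckSomething(Actor):
    name = 'check_something'
"""

def actor_names(source):
    # Parse the module text without importing or executing it.
    tree = ast.parse(source)
    names = []
    for node in tree.body:
        # Only top-level classes that directly subclass Actor count.
        if not isinstance(node, ast.ClassDef):
            continue
        if not any(isinstance(b, ast.Name) and b.id == 'Actor' for b in node.bases):
            continue
        # Read the class-level `name = '...'` assignment, if present.
        for child in node.body:
            if isinstance(child, ast.Assign) and isinstance(child.value, ast.Constant):
                if any(isinstance(t, ast.Name) and t.id == 'name' for t in child.targets):
                    names.append(child.value.value)
    return names

print(actor_names(EXAMPLE_ACTOR_SOURCE))  # prints ['check_something']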


@ -1,82 +0,0 @@
From 4d8ad1c0363fc21f5d8a557f3319a6efacac9f2a Mon Sep 17 00:00:00 2001
From: SandakovMM <G0odvinSun@gmail.com>
Date: Thu, 24 Aug 2023 16:01:39 +0300
Subject: [PATCH 15/38] Fix the issue of going out of bounds in the isccfg
parser.
This problem can occur when attempting to parse an empty file.
---
.../el7toel8/libraries/isccfg.py | 5 ++-
.../el7toel8/libraries/tests/test_isccfg.py | 32 +++++++++++++++++++
2 files changed, 36 insertions(+), 1 deletion(-)
diff --git a/repos/system_upgrade/el7toel8/libraries/isccfg.py b/repos/system_upgrade/el7toel8/libraries/isccfg.py
index 45baba0b..6cebb289 100644
--- a/repos/system_upgrade/el7toel8/libraries/isccfg.py
+++ b/repos/system_upgrade/el7toel8/libraries/isccfg.py
@@ -688,9 +688,12 @@ class IscConfigParser(object):
while index != -1:
keystart = index
- while istr[index] in self.CHAR_KEYWORD and index < end_index:
+ while index < end_index and istr[index] in self.CHAR_KEYWORD:
index += 1
+ if index >= end_index:
+ break
+
if keystart < index <= end_index and istr[index] not in self.CHAR_KEYWORD:
# key has been found
return ConfigSection(cfg, istr[keystart:index], keystart, index-1)
diff --git a/repos/system_upgrade/el7toel8/libraries/tests/test_isccfg.py b/repos/system_upgrade/el7toel8/libraries/tests/test_isccfg.py
index 7438fa37..00753681 100644
--- a/repos/system_upgrade/el7toel8/libraries/tests/test_isccfg.py
+++ b/repos/system_upgrade/el7toel8/libraries/tests/test_isccfg.py
@@ -116,6 +116,10 @@ view "v2" {
};
""")
+config_empty = isccfg.MockConfig('')
+
+config_empty_include = isccfg.MockConfig('options { include "/dev/null"; };')
+
def check_in_section(parser, section, key, value):
""" Helper to check some section was found
@@ -343,5 +347,33 @@ def test_walk():
assert 'dnssec-validation' not in state
+def test_empty_config():
+ """ Test empty configuration """
+
+ callbacks = {}
+
+ parser = isccfg.IscConfigParser(config_empty)
+ assert len(parser.FILES_TO_CHECK) == 1
+ cfg = parser.FILES_TO_CHECK[0]
+ parser.walk(cfg.root_section(), callbacks)
+ assert cfg.buffer == ''
+
+
+def test_empty_include_config():
+ """ Test empty configuration """
+
+ callbacks = {}
+
+ parser = isccfg.IscConfigParser(config_empty_include)
+ assert len(parser.FILES_TO_CHECK) == 2
+ cfg = parser.FILES_TO_CHECK[0]
+ parser.walk(cfg.root_section(), callbacks)
+ assert cfg.buffer == 'options { include "/dev/null"; };'
+
+ null_cfg = parser.FILES_TO_CHECK[1]
+ parser.walk(null_cfg.root_section(), callbacks)
+ assert null_cfg.buffer == ''
+
+
if __name__ == '__main__':
test_key_views_lookaside()
--
2.41.0
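
The reordered condition in the hunk above is the usual short-circuit guard for
scanning a buffer; a small self-contained sketch of the same pattern (the
CHAR_KEYWORD set and the helper below are illustrative, not the parser's real API):

import string

CHAR_KEYWORD = set(string.ascii_letters + string.digits + '-_.')

def read_keyword(istr, index, end_index):
    # Checking the bound first short-circuits the expression, so istr[index]
    # is never evaluated past the end of the buffer (empty input included).
    keystart = index
    while index < end_index and istr[index] in CHAR_KEYWORD:
        index += 1
    return istr[keystart:index], index

print(read_keyword('', 0, 0))           # ('', 0)
print(read_keyword('options {', 0, 9))  # ('options', 7)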


@ -1,209 +0,0 @@
From d74ff90e46c1acc2a16dc387a863f2aaf86f85d1 Mon Sep 17 00:00:00 2001
From: PeterMocary <petermocary@gmail.com>
Date: Mon, 9 Oct 2023 23:35:30 +0200
Subject: [PATCH 16/38] make pylint and spellcheck happy again
---
.pylintrc | 4 +++-
.../common/actors/checksaphana/libraries/checksaphana.py | 4 ++--
.../actors/checktargetiso/libraries/check_target_iso.py | 2 +-
.../files/dracut/85sys-upgrade-redhat/do-upgrade.sh | 2 +-
.../actors/createisorepofile/libraries/create_iso_repofile.py | 2 +-
.../repositoriesmapping/libraries/repositoriesmapping.py | 2 +-
.../system_upgrade/common/actors/scancpu/libraries/scancpu.py | 2 +-
.../common/actors/scansaphana/libraries/scansaphana.py | 4 ++--
.../actors/scantargetiso/libraries/scan_target_os_iso.py | 4 ++--
.../actors/targetuserspacecreator/libraries/userspacegen.py | 4 ++--
repos/system_upgrade/common/libraries/rhui.py | 2 +-
repos/system_upgrade/common/libraries/tests/test_rhsm.py | 2 +-
12 files changed, 18 insertions(+), 16 deletions(-)
diff --git a/.pylintrc b/.pylintrc
index 2ef31167..0adb7dcc 100644
--- a/.pylintrc
+++ b/.pylintrc
@@ -54,7 +54,9 @@ disable=
duplicate-string-formatting-argument, # TMP: will be fixed in close future
consider-using-f-string, # sorry, not gonna happen, still have to support py2
use-dict-literal,
- redundant-u-string-prefix # still have py2 to support
+ redundant-u-string-prefix, # still have py2 to support
+ logging-format-interpolation,
+ logging-not-lazy
[FORMAT]
# Maximum number of characters on a single line.
diff --git a/repos/system_upgrade/common/actors/checksaphana/libraries/checksaphana.py b/repos/system_upgrade/common/actors/checksaphana/libraries/checksaphana.py
index 1b08f3d2..7cd83de8 100644
--- a/repos/system_upgrade/common/actors/checksaphana/libraries/checksaphana.py
+++ b/repos/system_upgrade/common/actors/checksaphana/libraries/checksaphana.py
@@ -132,7 +132,7 @@ def _major_version_check(instance):
return False
return True
except (ValueError, IndexError):
- api.current_logger().warn(
+ api.current_logger().warning(
'Failed to parse manifest release field for instance {}'.format(instance.name), exc_info=True)
return False
@@ -164,7 +164,7 @@ def _sp_rev_patchlevel_check(instance, patchlevels):
return True
return False
# if not 'len(number) > 2 and number.isdigit()'
- api.current_logger().warn(
+ api.current_logger().warning(
'Invalid rev-number field value `{}` in manifest for instance {}'.format(number, instance.name))
return False
diff --git a/repos/system_upgrade/common/actors/checktargetiso/libraries/check_target_iso.py b/repos/system_upgrade/common/actors/checktargetiso/libraries/check_target_iso.py
index b5b66901..fcb23028 100644
--- a/repos/system_upgrade/common/actors/checktargetiso/libraries/check_target_iso.py
+++ b/repos/system_upgrade/common/actors/checktargetiso/libraries/check_target_iso.py
@@ -170,7 +170,7 @@ def perform_target_iso_checks():
return
if next(requested_target_iso_msg_iter, None):
- api.current_logger().warn('Received multiple msgs with target ISO to use.')
+ api.current_logger().warning('Received multiple msgs with target ISO to use.')
# Cascade the inhibiting conditions so that we do not spam the user with inhibitors
is_iso_invalid = inhibit_if_not_valid_iso_file(target_iso)
diff --git a/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/85sys-upgrade-redhat/do-upgrade.sh b/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/85sys-upgrade-redhat/do-upgrade.sh
index 491b85ec..c181c5cf 100755
--- a/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/85sys-upgrade-redhat/do-upgrade.sh
+++ b/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/85sys-upgrade-redhat/do-upgrade.sh
@@ -130,7 +130,7 @@ ibdmp() {
#
# 1. encode tarball using base64
#
- # 2. pre-pend line `chunks=CHUNKS,md5=MD5` where
+ # 2. prepend line `chunks=CHUNKS,md5=MD5` where
# MD5 is the MD5 digest of original tarball and
# CHUNKS is number of upcoming Base64 chunks
#
diff --git a/repos/system_upgrade/common/actors/createisorepofile/libraries/create_iso_repofile.py b/repos/system_upgrade/common/actors/createisorepofile/libraries/create_iso_repofile.py
index b4470b68..3f4f75e0 100644
--- a/repos/system_upgrade/common/actors/createisorepofile/libraries/create_iso_repofile.py
+++ b/repos/system_upgrade/common/actors/createisorepofile/libraries/create_iso_repofile.py
@@ -13,7 +13,7 @@ def produce_repofile_if_iso_used():
return
if next(target_iso_msgs_iter, None):
- api.current_logger().warn('Received multiple TargetISInstallationImage messages, using the first one')
+ api.current_logger().warning('Received multiple TargetISInstallationImage messages, using the first one')
# Mounting was successful, create a repofile to copy into target userspace
repofile_entry_template = ('[{repoid}]\n'
diff --git a/repos/system_upgrade/common/actors/repositoriesmapping/libraries/repositoriesmapping.py b/repos/system_upgrade/common/actors/repositoriesmapping/libraries/repositoriesmapping.py
index 416034ac..6f2b2e0f 100644
--- a/repos/system_upgrade/common/actors/repositoriesmapping/libraries/repositoriesmapping.py
+++ b/repos/system_upgrade/common/actors/repositoriesmapping/libraries/repositoriesmapping.py
@@ -145,7 +145,7 @@ def _inhibit_upgrade(msg):
def _read_repofile(repofile):
# NOTE: what about catch StopActorExecution error when the file cannot be
# obtained -> then check whether old_repomap file exists and in such a case
- # inform user they have to provde the new repomap.json file (we have the
+ # inform user they have to provide the new repomap.json file (we have the
# warning now only which could be potentially overlooked)
repofile_data = load_data_asset(api.current_actor(),
repofile,
diff --git a/repos/system_upgrade/common/actors/scancpu/libraries/scancpu.py b/repos/system_upgrade/common/actors/scancpu/libraries/scancpu.py
index e5555f99..9de50fae 100644
--- a/repos/system_upgrade/common/actors/scancpu/libraries/scancpu.py
+++ b/repos/system_upgrade/common/actors/scancpu/libraries/scancpu.py
@@ -133,7 +133,7 @@ def _find_deprecation_data_entries(lscpu):
if is_detected(lscpu, entry)
]
- api.current_logger().warn('Unsupported platform could not detect relevant CPU information')
+ api.current_logger().warning('Unsupported platform could not detect relevant CPU information')
return []
diff --git a/repos/system_upgrade/common/actors/scansaphana/libraries/scansaphana.py b/repos/system_upgrade/common/actors/scansaphana/libraries/scansaphana.py
index 04195b57..99490477 100644
--- a/repos/system_upgrade/common/actors/scansaphana/libraries/scansaphana.py
+++ b/repos/system_upgrade/common/actors/scansaphana/libraries/scansaphana.py
@@ -37,7 +37,7 @@ def parse_manifest(path):
# Most likely an empty line, but we're being permissive here and ignore failures.
# In the end it's all about having the right values available.
if line:
- api.current_logger().warn(
+ api.current_logger().warning(
'Failed to parse line in manifest: {file}. Line was: `{line}`'.format(file=path,
line=line),
exc_info=True)
@@ -128,6 +128,6 @@ def get_instance_status(instance_number, sapcontrol_path, admin_name):
# In that case there are always more than 7 lines.
return len(output['stdout'].split('\n')) > 7
except CalledProcessError:
- api.current_logger().warn(
+ api.current_logger().warning(
'Failed to retrieve SAP HANA instance status from sapcontrol - Considering it as not running.')
return False
diff --git a/repos/system_upgrade/common/actors/scantargetiso/libraries/scan_target_os_iso.py b/repos/system_upgrade/common/actors/scantargetiso/libraries/scan_target_os_iso.py
index 281389cf..a5f0750a 100644
--- a/repos/system_upgrade/common/actors/scantargetiso/libraries/scan_target_os_iso.py
+++ b/repos/system_upgrade/common/actors/scantargetiso/libraries/scan_target_os_iso.py
@@ -18,8 +18,8 @@ def determine_rhel_version_from_iso_mountpoint(iso_mountpoint):
return '' # We did not determine anything
if len(redhat_release_pkgs) > 1:
- api.current_logger().warn('Multiple packages with name redhat-release* found when '
- 'determining RHEL version of the supplied installation ISO.')
+ api.current_logger().warning('Multiple packages with name redhat-release* found when '
+ 'determining RHEL version of the supplied installation ISO.')
redhat_release_pkg = redhat_release_pkgs[0]
diff --git a/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py b/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py
index 9dfa0f14..0982a796 100644
--- a/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py
+++ b/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py
@@ -347,7 +347,7 @@ def _get_files_owned_by_rpms(context, dirpath, pkgs=None, recursive=False):
def _copy_certificates(context, target_userspace):
"""
- Copy the needed cetificates into the container, but preserve original ones
+ Copy the needed certificates into the container, but preserve original ones
Some certificates are already installed in the container and those are
default certificates for the target OS, so we preserve these.
@@ -378,7 +378,7 @@ def _copy_certificates(context, target_userspace):
# The path original path of the broken symlink in the container
report_path = os.path.join(target_pki, os.path.relpath(src_path, backup_pki))
- api.current_logger().warn('File {} is a broken symlink!'.format(report_path))
+ api.current_logger().warning('File {} is a broken symlink!'.format(report_path))
break
src_path = next_path
diff --git a/repos/system_upgrade/common/libraries/rhui.py b/repos/system_upgrade/common/libraries/rhui.py
index 4578ecd2..14a91c42 100644
--- a/repos/system_upgrade/common/libraries/rhui.py
+++ b/repos/system_upgrade/common/libraries/rhui.py
@@ -258,7 +258,7 @@ def gen_rhui_files_map():
def copy_rhui_data(context, provider):
"""
- Copy relevant RHUI cerificates and key into the target userspace container
+ Copy relevant RHUI certificates and key into the target userspace container
"""
rhui_dir = api.get_common_folder_path('rhui')
data_dir = os.path.join(rhui_dir, provider)
diff --git a/repos/system_upgrade/common/libraries/tests/test_rhsm.py b/repos/system_upgrade/common/libraries/tests/test_rhsm.py
index a6dbea96..957616f4 100644
--- a/repos/system_upgrade/common/libraries/tests/test_rhsm.py
+++ b/repos/system_upgrade/common/libraries/tests/test_rhsm.py
@@ -249,7 +249,7 @@ def test_get_release_with_release_not_set(monkeypatch, actor_mocked, context_moc
release = rhsm.get_release(context_mocked)
- fail_description = 'The release information was obtained, even if "No release set" was repored by rhsm.'
+ fail_description = 'The release information was obtained, even if "No release set" was reported by rhsm.'
assert not release, fail_description
--
2.41.0


@ -1,93 +0,0 @@
From 84d6ce3073e646e8740b72a5e7edda056c1b324a Mon Sep 17 00:00:00 2001
From: Martin Kluson <mkluson@redhat.com>
Date: Tue, 10 Oct 2023 14:57:02 +0200
Subject: [PATCH 17/38] Remove TUV from supported target channels
TUS (misspelled as TUV) is not a supported channel for the in-place
upgrade, so it is removed from the code.
Jira: OAMG-7288
---
commands/preupgrade/__init__.py | 2 +-
commands/upgrade/__init__.py | 2 +-
.../common/actors/setuptargetrepos/tests/test_repomapping.py | 4 ++--
repos/system_upgrade/common/libraries/config/__init__.py | 2 +-
repos/system_upgrade/common/models/repositoriesmap.py | 2 +-
5 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/commands/preupgrade/__init__.py b/commands/preupgrade/__init__.py
index 03209419..5a89069f 100644
--- a/commands/preupgrade/__init__.py
+++ b/commands/preupgrade/__init__.py
@@ -25,7 +25,7 @@ from leapp.utils.output import beautify_actor_exception, report_errors, report_i
help='Enable specified repository. Can be used multiple times.')
@command_opt('channel',
help='Set preferred channel for the IPU target.',
- choices=['ga', 'tuv', 'e4s', 'eus', 'aus'],
+ choices=['ga', 'e4s', 'eus', 'aus'],
value_type=str.lower) # This allows the choices to be case insensitive
@command_opt('iso', help='Use provided target RHEL installation image to perform the in-place upgrade.')
@command_opt('target', choices=command_utils.get_supported_target_versions(),
diff --git a/commands/upgrade/__init__.py b/commands/upgrade/__init__.py
index 18edcb9b..c42b7cba 100644
--- a/commands/upgrade/__init__.py
+++ b/commands/upgrade/__init__.py
@@ -31,7 +31,7 @@ from leapp.utils.output import beautify_actor_exception, report_errors, report_i
help='Enable specified repository. Can be used multiple times.')
@command_opt('channel',
help='Set preferred channel for the IPU target.',
- choices=['ga', 'tuv', 'e4s', 'eus', 'aus'],
+ choices=['ga', 'e4s', 'eus', 'aus'],
value_type=str.lower) # This allows the choices to be case insensitive
@command_opt('iso', help='Use provided target RHEL installation image to perform the in-place upgrade.')
@command_opt('target', choices=command_utils.get_supported_target_versions(),
diff --git a/repos/system_upgrade/common/actors/setuptargetrepos/tests/test_repomapping.py b/repos/system_upgrade/common/actors/setuptargetrepos/tests/test_repomapping.py
index 53897614..ba5906f4 100644
--- a/repos/system_upgrade/common/actors/setuptargetrepos/tests/test_repomapping.py
+++ b/repos/system_upgrade/common/actors/setuptargetrepos/tests/test_repomapping.py
@@ -614,14 +614,14 @@ def test_get_expected_target_pesid_repos_with_priority_channel_set(monkeypatch):
make_pesid_repo('pesid1', '7', 'pesid1-repoid-ga'),
make_pesid_repo('pesid2', '8', 'pesid2-repoid-ga'),
make_pesid_repo('pesid2', '8', 'pesid2-repoid-eus', channel='eus'),
- make_pesid_repo('pesid2', '8', 'pesid2-repoid-tuv', channel='tuv'),
+ make_pesid_repo('pesid2', '8', 'pesid2-repoid-aus', channel='aus'),
make_pesid_repo('pesid3', '8', 'pesid3-repoid-ga')
]
)
handler = RepoMapDataHandler(repositories_mapping)
# Set defaults to verify that the priority channel is not overwritten by defaults
- handler.set_default_channels(['tuv', 'ga'])
+ handler.set_default_channels(['aus', 'ga'])
target_repoids = handler.get_expected_target_pesid_repos(['pesid1-repoid-ga'])
fail_description = 'get_expected_target_peid_repos does not correctly respect preferred channel.'
diff --git a/repos/system_upgrade/common/libraries/config/__init__.py b/repos/system_upgrade/common/libraries/config/__init__.py
index c37a35cf..b3697a4d 100644
--- a/repos/system_upgrade/common/libraries/config/__init__.py
+++ b/repos/system_upgrade/common/libraries/config/__init__.py
@@ -2,7 +2,7 @@ from leapp.exceptions import StopActorExecutionError
from leapp.libraries.stdlib import api
# The devel variable for target product channel can also contain 'beta'
-SUPPORTED_TARGET_CHANNELS = {'ga', 'tuv', 'e4s', 'eus', 'aus'}
+SUPPORTED_TARGET_CHANNELS = {'ga', 'e4s', 'eus', 'aus'}
CONSUMED_DATA_STREAM_ID = '2.0'
diff --git a/repos/system_upgrade/common/models/repositoriesmap.py b/repos/system_upgrade/common/models/repositoriesmap.py
index 12639e19..7ef0bdb4 100644
--- a/repos/system_upgrade/common/models/repositoriesmap.py
+++ b/repos/system_upgrade/common/models/repositoriesmap.py
@@ -61,7 +61,7 @@ class PESIDRepositoryEntry(Model):
too.
"""
- channel = fields.StringEnum(['ga', 'tuv', 'e4s', 'eus', 'aus', 'beta'])
+ channel = fields.StringEnum(['ga', 'e4s', 'eus', 'aus', 'beta'])
"""
The 'channel' of the repository.
--
2.41.0


@ -1,531 +0,0 @@
From f50de2d3f541ca64934b4488dd1a403c8783a5da Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Tue, 14 Mar 2023 23:26:30 +0100
Subject: [PATCH 18/38] Transition systemd service states during upgrade
Sometimes after the upgrade some services end up disabled even though
they were enabled on the source system.
There are already two separate actors that fix this for
`device_cio_free.service` and `rsyncd.service`.
A new actor `transition-systemd-services-states` handles this generically
for all services. A "desired" state is determined from the state and
vendor preset on both the source and target systems, and a
SystemdServicesTasks message is produced for each service that isn't
already in the "desired" state.
Jira ref.: OAMG-1745
---
.../transitionsystemdservicesstates/actor.py | 53 +++++
.../transitionsystemdservicesstates.py | 211 +++++++++++++++++
.../test_transitionsystemdservicesstates.py | 219 ++++++++++++++++++
3 files changed, 483 insertions(+)
create mode 100644 repos/system_upgrade/common/actors/systemd/transitionsystemdservicesstates/actor.py
create mode 100644 repos/system_upgrade/common/actors/systemd/transitionsystemdservicesstates/libraries/transitionsystemdservicesstates.py
create mode 100644 repos/system_upgrade/common/actors/systemd/transitionsystemdservicesstates/tests/test_transitionsystemdservicesstates.py
diff --git a/repos/system_upgrade/common/actors/systemd/transitionsystemdservicesstates/actor.py b/repos/system_upgrade/common/actors/systemd/transitionsystemdservicesstates/actor.py
new file mode 100644
index 00000000..139f9f6b
--- /dev/null
+++ b/repos/system_upgrade/common/actors/systemd/transitionsystemdservicesstates/actor.py
@@ -0,0 +1,53 @@
+from leapp.actors import Actor
+from leapp.libraries.actor import transitionsystemdservicesstates
+from leapp.models import (
+ SystemdServicesInfoSource,
+ SystemdServicesInfoTarget,
+ SystemdServicesPresetInfoSource,
+ SystemdServicesPresetInfoTarget,
+ SystemdServicesTasks
+)
+from leapp.tags import ApplicationsPhaseTag, IPUWorkflowTag
+
+
+class TransitionSystemdServicesStates(Actor):
+ """
+ Transition states of systemd services between source and target systems
+
+ Services on the target system might end up in incorrect/unexpected state
+ after an upgrade. This actor puts such services into correct/expected
+ state.
+
+ A SystemdServicesTasks message is produced containing all tasks that need
+ to be executed to put all services into the correct states.
+
+ The correct states are determined according to following rules:
+ - All enabled services remain enabled
+ - All masked services remain masked
+ - Disabled services will be enabled if they are disabled by default on
+ the source system (by preset files), but enabled by default on target
+ system, otherwise they will remain disabled
+ - Runtime-enabled services (state == enabled-runtime) are treated
+ the same as disabled services
+ - Services in other states are not handled as they can't be
+ enabled/disabled
+
+ Two reports are generated:
+ - Report with services that were corrected from disabled to enabled on
+ the upgraded system
+ - Report with services that were newly enabled on the upgraded system
+ by a preset
+ """
+
+ name = 'transition_systemd_services_states'
+ consumes = (
+ SystemdServicesInfoSource,
+ SystemdServicesInfoTarget,
+ SystemdServicesPresetInfoSource,
+ SystemdServicesPresetInfoTarget
+ )
+ produces = (SystemdServicesTasks,)
+ tags = (ApplicationsPhaseTag, IPUWorkflowTag)
+
+ def process(self):
+ transitionsystemdservicesstates.process()
diff --git a/repos/system_upgrade/common/actors/systemd/transitionsystemdservicesstates/libraries/transitionsystemdservicesstates.py b/repos/system_upgrade/common/actors/systemd/transitionsystemdservicesstates/libraries/transitionsystemdservicesstates.py
new file mode 100644
index 00000000..494271ae
--- /dev/null
+++ b/repos/system_upgrade/common/actors/systemd/transitionsystemdservicesstates/libraries/transitionsystemdservicesstates.py
@@ -0,0 +1,211 @@
+from leapp import reporting
+from leapp.exceptions import StopActorExecutionError
+from leapp.libraries.stdlib import api
+from leapp.models import (
+ SystemdServicesInfoSource,
+ SystemdServicesInfoTarget,
+ SystemdServicesPresetInfoSource,
+ SystemdServicesPresetInfoTarget,
+ SystemdServicesTasks
+)
+
+FMT_LIST_SEPARATOR = "\n - "
+
+
+def _get_desired_service_state(state_source, preset_source, preset_target):
+ """
+ Get the desired service state on the target system
+
+ :param state_source: State on the source system
+ :param preset_source: Preset on the source system
+ :param preset_target: Preset on the target system
+ :return: The desired state on the target system
+ """
+
+ if state_source in ("disabled", "enabled-runtime"):
+ if preset_source == "disable":
+ return preset_target + "d" # use the default from target
+
+ return state_source
+
+
+def _get_desired_states(
+ services_source, presets_source, services_target, presets_target
+):
+ "Get the states that services should be in on the target system"
+ desired_states = {}
+
+ for service in services_target:
+ state_source = services_source.get(service.name)
+ preset_target = _get_service_preset(service.name, presets_target)
+ preset_source = _get_service_preset(service.name, presets_source)
+
+ desired_state = _get_desired_service_state(
+ state_source, preset_source, preset_target
+ )
+ desired_states[service.name] = desired_state
+
+ return desired_states
+
+
+def _get_service_task(service_name, desired_state, state_target, tasks):
+ """
+ Get the task to set the desired state of the service on the target system
+
+ :param service_name: The name of the service
+ :param desired_state: The state the service should be set to
+ :param state_target: State on the target system
+ :param tasks: The tasks to append the task to
+ """
+ if desired_state == state_target:
+ return
+
+ if desired_state == "enabled":
+ tasks.to_enable.append(service_name)
+ if desired_state == "disabled":
+ tasks.to_disable.append(service_name)
+
+
+def _get_service_preset(service_name, presets):
+ preset = presets.get(service_name)
+ if not preset:
+ # shouldn't really happen as there is usually a `disable *` glob as
+ # the last statement in the presets
+ api.current_logger().debug(
+ 'No presets found for service "{}", assuming "disable"'.format(service_name)
+ )
+ return "disable"
+ return preset
+
+
+def _filter_services(services_source, services_target):
+ """
+ Filter out irrelevant services
+ """
+ filtered = []
+ for service in services_target:
+ if service.state not in ("enabled", "disabled", "enabled-runtime"):
+ # Enabling/disabling of services is only relevant to these states
+ continue
+
+ state_source = services_source.get(service.name)
+ if not state_source:
+ # The service doesn't exist on the source system
+ continue
+
+ if state_source == "masked-runtime":
+ # TODO(mmatuska): It's not possible to get the persistent
+ # (non-runtime) state of a service with `systemctl`. One solution
+ # might be to check symlinks
+ api.current_logger().debug(
+ 'Skipping service in "masked-runtime" state: {}'.format(service.name)
+ )
+ continue
+
+ filtered.append(service)
+
+ return filtered
+
+
+def _get_required_tasks(services_target, desired_states):
+ """
+ Get the required tasks to set the services on the target system to their desired state
+
+ :return: The tasks required to be executed
+ :rtype: SystemdServicesTasks
+ """
+ tasks = SystemdServicesTasks()
+
+ for service in services_target:
+ desired_state = desired_states[service.name]
+ _get_service_task(service.name, desired_state, service.state, tasks)
+
+ return tasks
+
+
+def _report_kept_enabled(tasks):
+ summary = (
+ "Systemd services which were enabled on the system before the upgrade"
+ " were kept enabled after the upgrade. "
+ )
+ if tasks:
+ summary += (
+ "The following services were originally disabled on the upgraded system"
+ " and Leapp attempted to enable them:{}{}"
+ ).format(FMT_LIST_SEPARATOR, FMT_LIST_SEPARATOR.join(sorted(tasks.to_enable)))
+ # TODO(mmatuska): When post-upgrade reports are implemented in
+ # `setsystemdservicesstates actor, add a note here to check the reports
+ # if the enabling failed
+
+ reporting.create_report(
+ [
+ reporting.Title("Previously enabled systemd services were kept enabled"),
+ reporting.Summary(summary),
+ reporting.Severity(reporting.Severity.INFO),
+ reporting.Groups([reporting.Groups.POST]),
+ ]
+ )
+
+
+def _get_newly_enabled(services_source, desired_states):
+ newly_enabled = []
+ for service, state in desired_states.items():
+ state_source = services_source[service]
+ if state_source == "disabled" and state == "enabled":
+ newly_enabled.append(service)
+
+ return newly_enabled
+
+
+def _report_newly_enabled(newly_enabled):
+ summary = (
+ "The following services were disabled before the upgrade and were set"
+ "to enabled by a systemd preset after the upgrade:{}{}.".format(
+ FMT_LIST_SEPARATOR, FMT_LIST_SEPARATOR.join(sorted(newly_enabled))
+ )
+ )
+
+ reporting.create_report(
+ [
+ reporting.Title("Some systemd services were newly enabled"),
+ reporting.Summary(summary),
+ reporting.Severity(reporting.Severity.INFO),
+ reporting.Groups([reporting.Groups.POST]),
+ ]
+ )
+
+
+def _expect_message(model):
+ """
+ Get the expected message or throw an error
+ """
+ message = next(api.consume(model), None)
+ if not message:
+ raise StopActorExecutionError(
+ "Expected {} message, but didn't get any".format(model.__name__)
+ )
+ return message
+
+
+def process():
+ services_source = _expect_message(SystemdServicesInfoSource).service_files
+ services_target = _expect_message(SystemdServicesInfoTarget).service_files
+ presets_source = _expect_message(SystemdServicesPresetInfoSource).presets
+ presets_target = _expect_message(SystemdServicesPresetInfoTarget).presets
+
+ services_source = dict((p.name, p.state) for p in services_source)
+ presets_source = dict((p.service, p.state) for p in presets_source)
+ presets_target = dict((p.service, p.state) for p in presets_target)
+
+ services_target = _filter_services(services_source, services_target)
+
+ desired_states = _get_desired_states(
+ services_source, presets_source, services_target, presets_target
+ )
+ tasks = _get_required_tasks(services_target, desired_states)
+
+ api.produce(tasks)
+ _report_kept_enabled(tasks)
+
+ newly_enabled = _get_newly_enabled(services_source, desired_states)
+ _report_newly_enabled(newly_enabled)
diff --git a/repos/system_upgrade/common/actors/systemd/transitionsystemdservicesstates/tests/test_transitionsystemdservicesstates.py b/repos/system_upgrade/common/actors/systemd/transitionsystemdservicesstates/tests/test_transitionsystemdservicesstates.py
new file mode 100644
index 00000000..a19afc7f
--- /dev/null
+++ b/repos/system_upgrade/common/actors/systemd/transitionsystemdservicesstates/tests/test_transitionsystemdservicesstates.py
@@ -0,0 +1,219 @@
+import pytest
+
+from leapp import reporting
+from leapp.libraries.actor import transitionsystemdservicesstates
+from leapp.libraries.common.testutils import create_report_mocked, CurrentActorMocked, produce_mocked
+from leapp.libraries.stdlib import api
+from leapp.models import (
+ SystemdServiceFile,
+ SystemdServicePreset,
+ SystemdServicesInfoSource,
+ SystemdServicesInfoTarget,
+ SystemdServicesPresetInfoSource,
+ SystemdServicesPresetInfoTarget,
+ SystemdServicesTasks
+)
+
+
+@pytest.mark.parametrize(
+ "state_source, preset_source, preset_target, expected",
+ (
+ ["enabled", "disable", "enable", "enabled"],
+ ["enabled", "disable", "disable", "enabled"],
+ ["disabled", "disable", "disable", "disabled"],
+ ["disabled", "disable", "enable", "enabled"],
+ ["masked", "disable", "enable", "masked"],
+ ["masked", "disable", "disable", "masked"],
+ ["enabled", "enable", "enable", "enabled"],
+ ["enabled", "enable", "disable", "enabled"],
+ ["masked", "enable", "enable", "masked"],
+ ["masked", "enable", "disable", "masked"],
+ ["disabled", "enable", "enable", "disabled"],
+ ["disabled", "enable", "disable", "disabled"],
+ ),
+)
+def test_get_desired_service_state(
+ state_source, preset_source, preset_target, expected
+):
+ target_state = transitionsystemdservicesstates._get_desired_service_state(
+ state_source, preset_source, preset_target
+ )
+
+ assert target_state == expected
+
+
+@pytest.mark.parametrize(
+ "desired_state, state_target, expected",
+ (
+ ("enabled", "enabled", SystemdServicesTasks()),
+ ("enabled", "disabled", SystemdServicesTasks(to_enable=["test.service"])),
+ ("disabled", "enabled", SystemdServicesTasks(to_disable=["test.service"])),
+ ("disabled", "disabled", SystemdServicesTasks()),
+ ),
+)
+def test_get_service_task(monkeypatch, desired_state, state_target, expected):
+ def _get_desired_service_state_mocked(*args):
+ return desired_state
+
+ monkeypatch.setattr(
+ transitionsystemdservicesstates,
+ "_get_desired_service_state",
+ _get_desired_service_state_mocked,
+ )
+
+ tasks = SystemdServicesTasks()
+ transitionsystemdservicesstates._get_service_task(
+ "test.service", desired_state, state_target, tasks
+ )
+ assert tasks == expected
+
+
+def test_filter_services_services_filtered():
+ services_source = {
+ "test2.service": "static",
+ "test3.service": "masked",
+ "test4.service": "indirect",
+ "test5.service": "indirect",
+ "test6.service": "indirect",
+ }
+ services_target = [
+ SystemdServiceFile(name="test1.service", state="enabled"),
+ SystemdServiceFile(name="test2.service", state="masked"),
+ SystemdServiceFile(name="test3.service", state="indirect"),
+ SystemdServiceFile(name="test4.service", state="static"),
+ SystemdServiceFile(name="test5.service", state="generated"),
+ SystemdServiceFile(name="test6.service", state="masked-runtime"),
+ ]
+
+ filtered = transitionsystemdservicesstates._filter_services(
+ services_source, services_target
+ )
+
+ assert not filtered
+
+
+def test_filter_services_services_not_filtered():
+ services_source = {
+ "test1.service": "enabled",
+ "test2.service": "disabled",
+ "test3.service": "static",
+ "test4.service": "indirect",
+ }
+ services_target = [
+ SystemdServiceFile(name="test1.service", state="enabled"),
+ SystemdServiceFile(name="test2.service", state="disabled"),
+ SystemdServiceFile(name="test3.service", state="enabled-runtime"),
+ SystemdServiceFile(name="test4.service", state="enabled"),
+ ]
+
+ filtered = transitionsystemdservicesstates._filter_services(
+ services_source, services_target
+ )
+
+ assert len(filtered) == len(services_target)
+
+
+@pytest.mark.parametrize(
+ "presets",
+ [
+ dict(),
+ {"other.service": "enable"},
+ ],
+)
+def test_service_preset_missing_presets(presets):
+ preset = transitionsystemdservicesstates._get_service_preset(
+ "test.service", presets
+ )
+ assert preset == "disable"
+
+
+def test_tasks_produced_reports_created(monkeypatch):
+ services_source = [
+ SystemdServiceFile(name="rsyncd.service", state="enabled"),
+ SystemdServiceFile(name="test.service", state="enabled"),
+ ]
+ service_info_source = SystemdServicesInfoSource(service_files=services_source)
+
+ presets_source = [
+ SystemdServicePreset(service="rsyncd.service", state="enable"),
+ SystemdServicePreset(service="test.service", state="enable"),
+ ]
+ preset_info_source = SystemdServicesPresetInfoSource(presets=presets_source)
+
+ services_target = [
+ SystemdServiceFile(name="rsyncd.service", state="disabled"),
+ SystemdServiceFile(name="test.service", state="enabled"),
+ ]
+ service_info_target = SystemdServicesInfoTarget(service_files=services_target)
+
+ presets_target = [
+ SystemdServicePreset(service="rsyncd.service", state="enable"),
+ SystemdServicePreset(service="test.service", state="enable"),
+ ]
+ preset_info_target = SystemdServicesPresetInfoTarget(presets=presets_target)
+
+ monkeypatch.setattr(
+ api,
+ "current_actor",
+ CurrentActorMocked(
+ msgs=[
+ service_info_source,
+ service_info_target,
+ preset_info_source,
+ preset_info_target,
+ ]
+ ),
+ )
+ monkeypatch.setattr(api, "produce", produce_mocked())
+ created_reports = create_report_mocked()
+ monkeypatch.setattr(reporting, "create_report", created_reports)
+
+ expected_tasks = SystemdServicesTasks(to_enable=["rsyncd.service"], to_disable=[])
+ transitionsystemdservicesstates.process()
+
+ assert created_reports.called == 2
+ assert api.produce.called
+ assert api.produce.model_instances[0].to_enable == expected_tasks.to_enable
+ assert api.produce.model_instances[0].to_disable == expected_tasks.to_disable
+
+
+def test_report_kept_enabled(monkeypatch):
+ created_reports = create_report_mocked()
+ monkeypatch.setattr(reporting, "create_report", created_reports)
+
+ tasks = SystemdServicesTasks(
+ to_enable=["test.service", "other.service"], to_disable=["another.service"]
+ )
+ transitionsystemdservicesstates._report_kept_enabled(tasks)
+
+ assert created_reports.called
+ assert all([s in created_reports.report_fields["summary"] for s in tasks.to_enable])
+
+
+def test_get_newly_enabled():
+ services_source = {
+ "test.service": "disabled",
+ "other.service": "enabled",
+ "another.service": "enabled",
+ }
+ desired_states = {
+ "test.service": "enabled",
+ "other.service": "enabled",
+ "another.service": "disabled",
+ }
+
+ newly_enabled = transitionsystemdservicesstates._get_newly_enabled(
+ services_source, desired_states
+ )
+ assert newly_enabled == ['test.service']
+
+
+def test_report_newly_enabled(monkeypatch):
+ created_reports = create_report_mocked()
+ monkeypatch.setattr(reporting, "create_report", created_reports)
+
+ newly_enabled = ["test.service", "other.service"]
+ transitionsystemdservicesstates._report_newly_enabled(newly_enabled)
+
+ assert created_reports.called
+ assert all([s in created_reports.report_fields["summary"] for s in newly_enabled])
--
2.41.0
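
To make the transition rules above concrete, here is a tiny standalone sketch
mirroring the decision implemented by _get_desired_service_state in the patch
(the calls at the bottom use invented preset values purely for illustration):

def desired_state(state_source, preset_source, preset_target):
    # Enabled and masked services keep their state; disabled and
    # enabled-runtime services follow the target preset, but only when the
    # source preset also disabled them.
    if state_source in ('disabled', 'enabled-runtime'):
        if preset_source == 'disable':
            return preset_target + 'd'  # 'enable' -> 'enabled', 'disable' -> 'disabled'
    return state_source

print(desired_state('disabled', 'disable', 'enable'))  # enabled
print(desired_state('disabled', 'enable', 'enable'))   # disabled
print(desired_state('masked', 'disable', 'enable'))    # masked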


@ -1,190 +0,0 @@
From bea0f89bd858736418a535de37ddcfeef0ec4d31 Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Wed, 15 Mar 2023 16:35:35 +0100
Subject: [PATCH 19/38] Remove obsoleted enablersyncdservice actor
The `transitionsystemdservicesstates` actor now handles all such
services generically, which makes this actor obsolete.
---
.../transitionsystemdservicesstates.py | 10 +++---
.../test_transitionsystemdservicesstates.py | 33 +++++++++++++++----
.../actors/enablersyncdservice/actor.py | 21 ------------
.../libraries/enablersyncdservice.py | 21 ------------
.../tests/test_enablersyncdservice.py | 24 --------------
5 files changed, 32 insertions(+), 77 deletions(-)
delete mode 100644 repos/system_upgrade/el7toel8/actors/enablersyncdservice/actor.py
delete mode 100644 repos/system_upgrade/el7toel8/actors/enablersyncdservice/libraries/enablersyncdservice.py
delete mode 100644 repos/system_upgrade/el7toel8/actors/enablersyncdservice/tests/test_enablersyncdservice.py
diff --git a/repos/system_upgrade/common/actors/systemd/transitionsystemdservicesstates/libraries/transitionsystemdservicesstates.py b/repos/system_upgrade/common/actors/systemd/transitionsystemdservicesstates/libraries/transitionsystemdservicesstates.py
index 494271ae..b487366b 100644
--- a/repos/system_upgrade/common/actors/systemd/transitionsystemdservicesstates/libraries/transitionsystemdservicesstates.py
+++ b/repos/system_upgrade/common/actors/systemd/transitionsystemdservicesstates/libraries/transitionsystemdservicesstates.py
@@ -130,8 +130,8 @@ def _report_kept_enabled(tasks):
)
if tasks:
summary += (
- "The following services were originally disabled on the upgraded system"
- " and Leapp attempted to enable them:{}{}"
+ "The following services were originally disabled by preset on the"
+ " upgraded system and Leapp attempted to enable them:{}{}"
).format(FMT_LIST_SEPARATOR, FMT_LIST_SEPARATOR.join(sorted(tasks.to_enable)))
# TODO(mmatuska): When post-upgrade reports are implemented in
# `setsystemdservicesstates actor, add a note here to check the reports
@@ -193,9 +193,9 @@ def process():
presets_source = _expect_message(SystemdServicesPresetInfoSource).presets
presets_target = _expect_message(SystemdServicesPresetInfoTarget).presets
- services_source = dict((p.name, p.state) for p in services_source)
- presets_source = dict((p.service, p.state) for p in presets_source)
- presets_target = dict((p.service, p.state) for p in presets_target)
+ services_source = {p.name: p.state for p in services_source}
+ presets_source = {p.service: p.state for p in presets_source}
+ presets_target = {p.service: p.state for p in presets_target}
services_target = _filter_services(services_source, services_target)
diff --git a/repos/system_upgrade/common/actors/systemd/transitionsystemdservicesstates/tests/test_transitionsystemdservicesstates.py b/repos/system_upgrade/common/actors/systemd/transitionsystemdservicesstates/tests/test_transitionsystemdservicesstates.py
index a19afc7f..e0611859 100644
--- a/repos/system_upgrade/common/actors/systemd/transitionsystemdservicesstates/tests/test_transitionsystemdservicesstates.py
+++ b/repos/system_upgrade/common/actors/systemd/transitionsystemdservicesstates/tests/test_transitionsystemdservicesstates.py
@@ -177,17 +177,38 @@ def test_tasks_produced_reports_created(monkeypatch):
assert api.produce.model_instances[0].to_disable == expected_tasks.to_disable
-def test_report_kept_enabled(monkeypatch):
+@pytest.mark.parametrize(
+ "tasks, expect_extended_summary",
+ (
+ (
+ SystemdServicesTasks(
+ to_enable=["test.service", "other.service"],
+ to_disable=["another.service"],
+ ),
+ True,
+ ),
+ (None, False),
+ ),
+)
+def test_report_kept_enabled(monkeypatch, tasks, expect_extended_summary):
created_reports = create_report_mocked()
monkeypatch.setattr(reporting, "create_report", created_reports)
- tasks = SystemdServicesTasks(
- to_enable=["test.service", "other.service"], to_disable=["another.service"]
- )
transitionsystemdservicesstates._report_kept_enabled(tasks)
+ extended_summary_str = (
+ "The following services were originally disabled by preset on the"
+ " upgraded system and Leapp attempted to enable them"
+ )
+
assert created_reports.called
- assert all([s in created_reports.report_fields["summary"] for s in tasks.to_enable])
+ if expect_extended_summary:
+ assert extended_summary_str in created_reports.report_fields["summary"]
+ assert all(
+ [s in created_reports.report_fields["summary"] for s in tasks.to_enable]
+ )
+ else:
+ assert extended_summary_str not in created_reports.report_fields["summary"]
def test_get_newly_enabled():
@@ -205,7 +226,7 @@ def test_get_newly_enabled():
newly_enabled = transitionsystemdservicesstates._get_newly_enabled(
services_source, desired_states
)
- assert newly_enabled == ['test.service']
+ assert newly_enabled == ["test.service"]
def test_report_newly_enabled(monkeypatch):
diff --git a/repos/system_upgrade/el7toel8/actors/enablersyncdservice/actor.py b/repos/system_upgrade/el7toel8/actors/enablersyncdservice/actor.py
deleted file mode 100644
index bdf2e63e..00000000
--- a/repos/system_upgrade/el7toel8/actors/enablersyncdservice/actor.py
+++ /dev/null
@@ -1,21 +0,0 @@
-from leapp.actors import Actor
-from leapp.libraries.actor import enablersyncdservice
-from leapp.models import SystemdServicesInfoSource, SystemdServicesTasks
-from leapp.tags import ChecksPhaseTag, IPUWorkflowTag
-
-
-class EnableDeviceCioFreeService(Actor):
- """
- Enables rsyncd.service systemd service if it is enabled on source system
-
- After an upgrade this service ends up disabled even if it was enabled on
- the source system.
- """
-
- name = 'enable_rsyncd_service'
- consumes = (SystemdServicesInfoSource,)
- produces = (SystemdServicesTasks,)
- tags = (ChecksPhaseTag, IPUWorkflowTag)
-
- def process(self):
- enablersyncdservice.process()
diff --git a/repos/system_upgrade/el7toel8/actors/enablersyncdservice/libraries/enablersyncdservice.py b/repos/system_upgrade/el7toel8/actors/enablersyncdservice/libraries/enablersyncdservice.py
deleted file mode 100644
index 216ebca9..00000000
--- a/repos/system_upgrade/el7toel8/actors/enablersyncdservice/libraries/enablersyncdservice.py
+++ /dev/null
@@ -1,21 +0,0 @@
-from leapp.exceptions import StopActorExecutionError
-from leapp.libraries.stdlib import api
-from leapp.models import SystemdServicesInfoSource, SystemdServicesTasks
-
-SERVICE_NAME = "rsyncd.service"
-
-
-def _service_enabled_source(service_info, name):
- service_file = next((s for s in service_info.service_files if s.name == name), None)
- return service_file and service_file.state == "enabled"
-
-
-def process():
- service_info_source = next(api.consume(SystemdServicesInfoSource), None)
- if not service_info_source:
- raise StopActorExecutionError(
- "Expected SystemdServicesInfoSource message, but didn't get any"
- )
-
- if _service_enabled_source(service_info_source, SERVICE_NAME):
- api.produce(SystemdServicesTasks(to_enable=[SERVICE_NAME]))
diff --git a/repos/system_upgrade/el7toel8/actors/enablersyncdservice/tests/test_enablersyncdservice.py b/repos/system_upgrade/el7toel8/actors/enablersyncdservice/tests/test_enablersyncdservice.py
deleted file mode 100644
index 34a25afe..00000000
--- a/repos/system_upgrade/el7toel8/actors/enablersyncdservice/tests/test_enablersyncdservice.py
+++ /dev/null
@@ -1,24 +0,0 @@
-import pytest
-
-from leapp.libraries.actor import enablersyncdservice
-from leapp.libraries.common.testutils import CurrentActorMocked, produce_mocked
-from leapp.libraries.stdlib import api
-from leapp.models import SystemdServiceFile, SystemdServicesInfoSource, SystemdServicesTasks
-
-
-@pytest.mark.parametrize('service_file, should_produce', [
- (SystemdServiceFile(name='rsyncd.service', state='enabled'), True),
- (SystemdServiceFile(name='rsyncd.service', state='disabled'), False),
- (SystemdServiceFile(name='not-rsyncd.service', state='enabled'), False),
- (SystemdServiceFile(name='not-rsyncd.service', state='disabled'), False),
-])
-def test_task_produced(monkeypatch, service_file, should_produce):
- service_info = SystemdServicesInfoSource(service_files=[service_file])
- monkeypatch.setattr(api, 'current_actor', CurrentActorMocked(msgs=[service_info]))
- monkeypatch.setattr(api, "produce", produce_mocked())
-
- enablersyncdservice.process()
-
- assert api.produce.called == should_produce
- if should_produce:
- assert api.produce.model_instances[0].to_enable == ['rsyncd.service']
--
2.41.0


@ -1,26 +0,0 @@
From 6661e496143c47e92cd1d83ed1e4f1da8d0d617a Mon Sep 17 00:00:00 2001
From: Evgeni Golov <evgeni@golov.de>
Date: Sat, 21 Oct 2023 16:26:17 +0200
Subject: [PATCH 20/38] default to NO_RHSM mode when subscription-manager is
not found
---
commands/upgrade/util.py | 2 ++
1 file changed, 2 insertions(+)
diff --git a/commands/upgrade/util.py b/commands/upgrade/util.py
index b52da25c..b11265ee 100644
--- a/commands/upgrade/util.py
+++ b/commands/upgrade/util.py
@@ -191,6 +191,8 @@ def prepare_configuration(args):
os.environ['LEAPP_UNSUPPORTED'] = '0' if os.getenv('LEAPP_UNSUPPORTED', '0') == '0' else '1'
if args.no_rhsm:
os.environ['LEAPP_NO_RHSM'] = '1'
+ elif not os.path.exists('/usr/sbin/subscription-manager'):
+ os.environ['LEAPP_NO_RHSM'] = '1'
elif os.getenv('LEAPP_NO_RHSM') != '1':
os.environ['LEAPP_NO_RHSM'] = os.getenv('LEAPP_DEVEL_SKIP_RHSM', '0')
--
2.41.0


@ -1,55 +0,0 @@
From 17c88d9451774cd3910f81eaa889d4ff14615e1c Mon Sep 17 00:00:00 2001
From: Evgeni Golov <evgeni@golov.de>
Date: Mon, 30 Oct 2023 17:36:23 +0100
Subject: [PATCH 21/38] call correct mkdir when trying to create
/etc/rhsm/facts (#1132)
os.path has no mkdir, but os does.
traceback without the patch:
Traceback (most recent call last):
File "/bin/leapp", line 11, in <module>
load_entry_point('leapp==0.16.0', 'console_scripts', 'leapp')()
File "/usr/lib/python3.6/site-packages/leapp/cli/__init__.py", line 45, in main
cli.command.execute('leapp version {}'.format(VERSION))
File "/usr/lib/python3.6/site-packages/leapp/utils/clicmd.py", line 111, in execute
args.func(args)
File "/usr/lib/python3.6/site-packages/leapp/utils/clicmd.py", line 133, in called
self.target(args)
File "/usr/lib/python3.6/site-packages/leapp/cli/commands/upgrade/breadcrumbs.py", line 170, in wrapper
breadcrumbs.save()
File "/usr/lib/python3.6/site-packages/leapp/cli/commands/upgrade/breadcrumbs.py", line 116, in save
self._save_rhsm_facts(doc['activities'])
File "/usr/lib/python3.6/site-packages/leapp/cli/commands/upgrade/breadcrumbs.py", line 64, in _save_rhsm_facts
os.path.mkdir('/etc/rhsm/facts')
AttributeError: module 'posixpath' has no attribute 'mkdir'
While at it, also catch OSError with errno 17, to safeguard against race
conditions if anything has created the directory between us checking for
it and us trying to create it.
---
commands/upgrade/breadcrumbs.py | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/commands/upgrade/breadcrumbs.py b/commands/upgrade/breadcrumbs.py
index 16903ee0..3a3dcde3 100644
--- a/commands/upgrade/breadcrumbs.py
+++ b/commands/upgrade/breadcrumbs.py
@@ -61,7 +61,12 @@ class _BreadCrumbs(object):
if not os.path.exists('/etc/rhsm'):
# If there's no /etc/rhsm folder just skip it
return
- os.path.mkdir('/etc/rhsm/facts')
+ try:
+ os.mkdir('/etc/rhsm/facts')
+ except OSError as e:
+ if e.errno == 17:
+ # The directory already exists which is all we need.
+ pass
try:
with open('/etc/rhsm/facts/leapp.facts', 'w') as f:
json.dump(_flattened({
--
2.41.0
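
For reference, a generic sketch of the directory-creation idiom this patch
touches (the path below is only an example; unexpected errors are re-raised,
and on Python 3 os.makedirs(path, exist_ok=True) achieves the same effect):

import errno
import os

def ensure_dir(path):
    # Treat "already exists" as success; re-raise anything else
    # (permissions, read-only filesystem, ...).
    try:
        os.mkdir(path)
    except OSError as exc:
        if exc.errno != errno.EEXIST:
            raise

ensure_dir('/tmp/example-facts-dir')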


@ -1,37 +0,0 @@
From b6e409e1055b5d8b7f27e5df9eae096eb592a9c7 Mon Sep 17 00:00:00 2001
From: Petr Stodulka <pstodulk@redhat.com>
Date: Fri, 27 Oct 2023 13:34:38 +0200
Subject: [PATCH 22/38] RHSM: Adjust the switch to container mode for new RHSM
RHSM in RHEL 8.9+ & RHEL 9.3+ requires newly for the switch to the
container mode existence and content under /etc/pki/entitlement-host,
which in our case should by symlink to /etc/pki/entitlement.
So currently we need for the correct switch 2 symlinks:
* /etc/pki/rhsm-host -> /etc/pki/rhsm
* /etc/pki/entitlement-host -> /etc/pki/entitlement
Technically we need that only for RHEL 8.9+ but discussing it with
RHSM SST, we can do this change unconditionally for any RHEL system
as older versions of RHSM do not check /etc/pki/entitlement-host.
jira: RHEL-14839
---
repos/system_upgrade/common/libraries/rhsm.py | 1 +
1 file changed, 1 insertion(+)
diff --git a/repos/system_upgrade/common/libraries/rhsm.py b/repos/system_upgrade/common/libraries/rhsm.py
index 4a5b0eb0..18842021 100644
--- a/repos/system_upgrade/common/libraries/rhsm.py
+++ b/repos/system_upgrade/common/libraries/rhsm.py
@@ -334,6 +334,7 @@ def set_container_mode(context):
return
try:
context.call(['ln', '-s', '/etc/rhsm', '/etc/rhsm-host'])
+ context.call(['ln', '-s', '/etc/pki/entitlement', '/etc/pki/entitlement-host'])
except CalledProcessError:
raise StopActorExecutionError(
message='Cannot set the container mode for the subscription-manager.')
--
2.41.0


@ -1,61 +0,0 @@
From 5b0c1d9d6bc96e9718949a03dd717bb4cbc04c10 Mon Sep 17 00:00:00 2001
From: Evgeni Golov <evgeni@golov.de>
Date: Sat, 21 Oct 2023 19:36:19 +0200
Subject: [PATCH 23/38] load all substitutions from etc
On some distributions (like CentOS Stream and Oracle Linux), we need
more substitutions to be able to load repositories properly.
DNF has a helper for that: conf.substitutions.update_from_etc.
On pure DNF distributions, calling this should be sufficient.
On EL7, where the primary tool is YUM, DNF does not load vars from
/etc/yum, only from /etc/dnf, so we have to help it a bit and explicitly
try to load releasever from /etc/yum.
(DNF since 4.2.15 *does* also load substitutions from /etc/yum, but EL7
ships with 4.0.x)
---
.../system_upgrade/common/libraries/module.py | 23 +++++++++++--------
1 file changed, 14 insertions(+), 9 deletions(-)
diff --git a/repos/system_upgrade/common/libraries/module.py b/repos/system_upgrade/common/libraries/module.py
index abde69e7..7d4e8aa4 100644
--- a/repos/system_upgrade/common/libraries/module.py
+++ b/repos/system_upgrade/common/libraries/module.py
@@ -1,4 +1,3 @@
-import os
import warnings
from leapp.libraries.common.config.version import get_source_major_version
@@ -23,14 +22,20 @@ def _create_or_get_dnf_base(base=None):
# have repositories only for the exact system version (including the minor number). In a case when
# /etc/yum/vars/releasever is present, read its contents so that we can access repositores on such systems.
conf = dnf.conf.Conf()
- pkg_manager = 'yum' if get_source_major_version() == '7' else 'dnf'
- releasever_path = '/etc/{0}/vars/releasever'.format(pkg_manager)
- if os.path.exists(releasever_path):
- with open(releasever_path) as releasever_file:
- releasever = releasever_file.read().strip()
- conf.substitutions['releasever'] = releasever
- else:
- conf.substitutions['releasever'] = get_source_major_version()
+
+ # preload releasever from what we know, this will be our fallback
+ conf.substitutions['releasever'] = get_source_major_version()
+
+ # dnf on EL7 doesn't load vars from /etc/yum, so we need to help it a bit
+ if get_source_major_version() == '7':
+ try:
+ with open('/etc/yum/vars/releasever') as releasever_file:
+ conf.substitutions['releasever'] = releasever_file.read().strip()
+ except IOError:
+ pass
+
+ # load all substitutions from etc
+ conf.substitutions.update_from_etc('/')
base = dnf.Base(conf=conf)
base.init_plugins()
--
2.41.0
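
A minimal sketch of the substitution handling shown above, assuming the
python3-dnf bindings are available (the printed value depends on the host
configuration):

import dnf

conf = dnf.conf.Conf()

# Seed a fallback value, analogous to get_source_major_version() in the patch.
conf.substitutions['releasever'] = '7'

# Let DNF refresh the vars from the host configuration (e.g. /etc/dnf/vars).
conf.substitutions.update_from_etc('/')

print(conf.substitutions.get('releasever'))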


@ -1,62 +0,0 @@
From d1f28cbd143f2dce85f7f175308437954847aba8 Mon Sep 17 00:00:00 2001
From: Petr Stodulka <pstodulk@redhat.com>
Date: Thu, 2 Nov 2023 14:20:11 +0100
Subject: [PATCH 24/38] Do not create dangling symlinks for containerized RHSM
When setting RHSM into the container mode, we create symlinks to the
/etc/rhsm and /etc/pki/entitlement directories. However, this creates
dangling symlinks if RHSM is not installed or the user manually removes
one of these directories.
If either of these directories is missing, skip the other actions and
log a warning. Usually it means that RHSM is not actually used or
installed at all, so in these cases the skip is safe. The only corner
case in which a system could use RHSM without /etc/pki/entitlement is
when RHSM is configured to put these certificates on a different path,
and we neither support nor cover such a scenario as we do not scan the
RHSM configuration at all.
This also solves the problem on systems that do not have RHSM available
at all.
---
repos/system_upgrade/common/libraries/rhsm.py | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/repos/system_upgrade/common/libraries/rhsm.py b/repos/system_upgrade/common/libraries/rhsm.py
index 18842021..eb388829 100644
--- a/repos/system_upgrade/common/libraries/rhsm.py
+++ b/repos/system_upgrade/common/libraries/rhsm.py
@@ -325,6 +325,11 @@ def set_container_mode(context):
could be affected and the generated repo file in the container could be
affected as well (e.g. when the release is set, using rhsm, on the host).
+ We want to put RHSM into the container mode always when /etc/rhsm and
+ /etc/pki/entitlement directories exist, even when leapp is executed with
+ --no-rhsm option. If any of these directories are missing, skip other
+ actions - most likely RHSM is not installed in such a case.
+
:param context: An instance of a mounting.IsolatedActions class
:type context: mounting.IsolatedActions class
"""
@@ -332,6 +337,17 @@ def set_container_mode(context):
api.current_logger().error('Trying to set RHSM into the container mode'
'on host. Skipping the action.')
return
+ # TODO(pstodulk): check "rhsm identity" whether system is registered
+ # and the container mode should be required
+ if (not os.path.exists(context.full_path('/etc/rhsm'))
+ or not os.path.exists(context.full_path('/etc/pki/entitlement'))):
+ api.current_logger().warning(
+ 'Cannot set the container mode for the subscription-manager as'
+ ' one of required directories is missing. Most likely RHSM is not'
+ ' installed. Skipping other actions.'
+ )
+ return
+
try:
context.call(['ln', '-s', '/etc/rhsm', '/etc/rhsm-host'])
context.call(['ln', '-s', '/etc/pki/entitlement', '/etc/pki/entitlement-host'])
--
2.41.0
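Read in isolation, the guard added above means the container-mode symlinks are never created when either source directory is missing. A simplified sketch, assuming a context object that exposes full_path() and call() as the mounting.IsolatedActions instance in the patch does (illustrative only):

import os

def set_container_mode_if_possible(context, logger):
    # Skip when either directory is absent - most likely RHSM is not installed.
    for path in ('/etc/rhsm', '/etc/pki/entitlement'):
        if not os.path.exists(context.full_path(path)):
            logger.warning('Cannot set the container mode for subscription-manager:'
                           ' %s is missing. Skipping other actions.', path)
            return
    context.call(['ln', '-s', '/etc/rhsm', '/etc/rhsm-host'])
    context.call(['ln', '-s', '/etc/pki/entitlement', '/etc/pki/entitlement-host'])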
View File
@ -1,68 +0,0 @@
From 64ec2ec60eac7abd4910c5b2a1a43794d3df11cf Mon Sep 17 00:00:00 2001
From: Evgeni Golov <evgeni@golov.de>
Date: Sat, 4 Nov 2023 19:54:19 +0100
Subject: [PATCH 25/38] be less strict when figuring out major version in
initrd
We only care about the major part of the version, so it is sufficient to
grep without the dot, which is not present in the CentOS initrd.
CentOS Stream 8:
VERSION="8 dracut-049-224.git20230330.el8"
VERSION_ID=049-224.git20230330.el8
CentOS Stream 9:
VERSION="9 dracut-057-38.git20230725.el9"
VERSION_ID="9"
RHEL 8.8:
VERSION="8.8 (Ootpa) dracut-049-223.git20230119.el8"
VERSION_ID=049-223.git20230119.el8
RHEL 9.2:
VERSION="9.2 (Plow) dracut-057-21.git20230214.el9"
VERSION_ID="9.2"
Ideally, we would just use the major part of VERSION_ID, but it is set
to the underlying OS's VERSION_ID only since dracut 050 [1], and EL8
ships with 049.
[1] https://github.com/dracutdevs/dracut/commit/72ae1c4fe73c5637eb8f6843b9a127a6d69469d6
---
.../files/dracut/85sys-upgrade-redhat/do-upgrade.sh | 2 +-
.../files/dracut/90sys-upgrade/initrd-system-upgrade-generator | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/85sys-upgrade-redhat/do-upgrade.sh b/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/85sys-upgrade-redhat/do-upgrade.sh
index c181c5cf..95be87b5 100755
--- a/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/85sys-upgrade-redhat/do-upgrade.sh
+++ b/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/85sys-upgrade-redhat/do-upgrade.sh
@@ -9,7 +9,7 @@ type getarg >/dev/null 2>&1 || . /lib/dracut-lib.sh
get_rhel_major_release() {
local os_version
- os_version=$(grep -o '^VERSION="[0-9][0-9]*\.' /etc/initrd-release | grep -o '[0-9]*')
+ os_version=$(grep -o '^VERSION="[0-9][0-9]*' /etc/initrd-release | grep -o '[0-9]*')
[ -z "$os_version" ] && {
# This should not happen as /etc/initrd-release is supposed to have API
# stability, but check is better than broken system.
diff --git a/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/90sys-upgrade/initrd-system-upgrade-generator b/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/90sys-upgrade/initrd-system-upgrade-generator
index 5cc6fd92..fe81626f 100755
--- a/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/90sys-upgrade/initrd-system-upgrade-generator
+++ b/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/90sys-upgrade/initrd-system-upgrade-generator
@@ -1,7 +1,7 @@
#!/bin/sh
get_rhel_major_release() {
- _os_version=$(cat /etc/initrd-release | grep -o '^VERSION="[0-9][0-9]*\.' | grep -o '[0-9]*')
+ _os_version=$(cat /etc/initrd-release | grep -o '^VERSION="[0-9][0-9]*' | grep -o '[0-9]*')
[ -z "$_os_version" ] && {
# This should not happen as /etc/initrd-release is supposed to have API
# stability, but check is better than broken system.
--
2.41.0
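The effect of dropping the trailing '\.' from the pattern is that both RHEL-style (VERSION="8.8 ...") and CentOS-Stream-style (VERSION="8 ...") /etc/initrd-release files yield the major version. The patch itself uses grep in shell; the following Python snippet only demonstrates the same matching rule on the example values quoted in the commit message:

import re

def major_from_initrd_release(text):
    # Accept VERSION="<major>..." with or without a dot after the major number.
    match = re.search(r'^VERSION="(\d+)', text, flags=re.MULTILINE)
    return match.group(1) if match else None

assert major_from_initrd_release('VERSION="8 dracut-049-224.git20230330.el8"') == '8'
assert major_from_initrd_release('VERSION="9.2 (Plow) dracut-057-21.git20230214.el9"') == '9'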
File diff suppressed because it is too large

View File
@ -1,167 +0,0 @@
From 594cdb92171ebd66a07c558bfa5c914593569810 Mon Sep 17 00:00:00 2001
From: PeterMocary <petermocary@gmail.com>
Date: Wed, 18 Oct 2023 15:34:22 +0200
Subject: [PATCH 27/38] add backward compatibility for leapp-rhui-(aws|azure)
packages
---
repos/system_upgrade/common/libraries/rhui.py | 76 +++++++++++++++----
1 file changed, 62 insertions(+), 14 deletions(-)
diff --git a/repos/system_upgrade/common/libraries/rhui.py b/repos/system_upgrade/common/libraries/rhui.py
index aa40b597..b31eba0b 100644
--- a/repos/system_upgrade/common/libraries/rhui.py
+++ b/repos/system_upgrade/common/libraries/rhui.py
@@ -127,13 +127,17 @@ RHUI_SETUPS = {
mk_rhui_setup(clients={'rh-amazon-rhui-client'}, optional_files=[], os_version='7'),
mk_rhui_setup(clients={'rh-amazon-rhui-client'}, leapp_pkg='leapp-rhui-aws',
mandatory_files=[
- ('rhui-client-config-server-8.crt', RHUI_PKI_PRODUCT_DIR),
- ('rhui-client-config-server-8.key', RHUI_PKI_DIR),
- (AWS_DNF_PLUGIN_NAME, DNF_PLUGIN_PATH_PY2),
- ('leapp-aws.repo', YUM_REPOS_PATH)
+ ('rhui-client-config-server-8.crt', RHUI_PKI_PRODUCT_DIR),
+ ('rhui-client-config-server-8.key', RHUI_PKI_DIR),
+ (AWS_DNF_PLUGIN_NAME, DNF_PLUGIN_PATH_PY2),
+ ('leapp-aws.repo', YUM_REPOS_PATH)
],
files_supporting_client_operation=[AWS_DNF_PLUGIN_NAME],
- optional_files=[], os_version='8'),
+ optional_files=[
+ ('content-rhel8.key', RHUI_PKI_DIR),
+ ('cdn.redhat.com-chain.crt', RHUI_PKI_DIR),
+ ('content-rhel8.crt', RHUI_PKI_PRODUCT_DIR)
+ ], os_version='8'),
# @Note(mhecko): We don't need to deal with AWS_DNF_PLUGIN_NAME here as on rhel8+ we can use the plugin
# # provided by the target client - there is no Python2 incompatibility issue there.
mk_rhui_setup(clients={'rh-amazon-rhui-client'}, leapp_pkg='leapp-rhui-aws',
@@ -142,26 +146,38 @@ RHUI_SETUPS = {
('rhui-client-config-server-9.key', RHUI_PKI_DIR),
('leapp-aws.repo', YUM_REPOS_PATH)
],
- optional_files=[], os_version='9'),
+ optional_files=[
+ ('content-rhel9.key', RHUI_PKI_DIR),
+ ('cdn.redhat.com-chain.crt', RHUI_PKI_DIR),
+ ('content-rhel9.crt', RHUI_PKI_PRODUCT_DIR)
+ ], os_version='9'),
],
RHUIFamily(RHUIProvider.AWS, arch=arch.ARCH_ARM64, client_files_folder='aws'): [
mk_rhui_setup(clients={'rh-amazon-rhui-client-arm'}, optional_files=[], os_version='7', arch=arch.ARCH_ARM64),
mk_rhui_setup(clients={'rh-amazon-rhui-client-arm'}, leapp_pkg='leapp-rhui-aws',
mandatory_files=[
- ('rhui-client-config-server-8.crt', RHUI_PKI_PRODUCT_DIR),
- ('rhui-client-config-server-8.key', RHUI_PKI_DIR),
- (AWS_DNF_PLUGIN_NAME, DNF_PLUGIN_PATH_PY2),
- ('leapp-aws.repo', YUM_REPOS_PATH)
+ ('rhui-client-config-server-8.crt', RHUI_PKI_PRODUCT_DIR),
+ ('rhui-client-config-server-8.key', RHUI_PKI_DIR),
+ (AWS_DNF_PLUGIN_NAME, DNF_PLUGIN_PATH_PY2),
+ ('leapp-aws.repo', YUM_REPOS_PATH)
],
files_supporting_client_operation=[AWS_DNF_PLUGIN_NAME],
- optional_files=[], os_version='8', arch=arch.ARCH_ARM64),
+ optional_files=[
+ ('content-rhel8.key', RHUI_PKI_DIR),
+ ('cdn.redhat.com-chain.crt', RHUI_PKI_DIR),
+ ('content-rhel8.crt', RHUI_PKI_PRODUCT_DIR)
+ ], os_version='8', arch=arch.ARCH_ARM64),
mk_rhui_setup(clients={'rh-amazon-rhui-client-arm'}, leapp_pkg='leapp-rhui-aws',
mandatory_files=[
('rhui-client-config-server-9.crt', RHUI_PKI_PRODUCT_DIR),
('rhui-client-config-server-9.key', RHUI_PKI_DIR),
('leapp-aws.repo', YUM_REPOS_PATH)
],
- optional_files=[], os_version='9', arch=arch.ARCH_ARM64),
+ optional_files=[
+ ('content-rhel9.key', RHUI_PKI_DIR),
+ ('cdn.redhat.com-chain.crt', RHUI_PKI_DIR),
+ ('content-rhel9.crt', RHUI_PKI_PRODUCT_DIR)
+ ], os_version='9', arch=arch.ARCH_ARM64),
],
RHUIFamily(RHUIProvider.AWS, variant=RHUIVariant.SAP, client_files_folder='aws-sap-e4s'): [
mk_rhui_setup(clients={'rh-amazon-rhui-client-sap-bundle'}, optional_files=[], os_version='7',
@@ -174,24 +190,40 @@ RHUI_SETUPS = {
('leapp-aws-sap-e4s.repo', YUM_REPOS_PATH)
],
files_supporting_client_operation=[AWS_DNF_PLUGIN_NAME],
- optional_files=[], os_version='8', content_channel=ContentChannel.E4S),
+ optional_files=[
+ ('content-rhel8-sap.key', RHUI_PKI_DIR),
+ ('cdn.redhat.com-chain.crt', RHUI_PKI_DIR),
+ ('content-rhel8-sap.crt', RHUI_PKI_PRODUCT_DIR)
+ ], os_version='8', content_channel=ContentChannel.E4S),
mk_rhui_setup(clients={'rh-amazon-rhui-client-sap-bundle-e4s'}, leapp_pkg='leapp-rhui-aws-sap-e4s',
mandatory_files=[
('rhui-client-config-server-9-sap-bundle.crt', RHUI_PKI_PRODUCT_DIR),
('rhui-client-config-server-9-sap-bundle.key', RHUI_PKI_DIR),
('leapp-aws-sap-e4s.repo', YUM_REPOS_PATH)
],
- optional_files=[], os_version='9', content_channel=ContentChannel.E4S),
+ optional_files=[
+ ('content-rhel9-sap-bundle-e4s.key', RHUI_PKI_DIR),
+ ('cdn.redhat.com-chain.crt', RHUI_PKI_DIR),
+ ('content-rhel9-sap-bundle-e4s.crt', RHUI_PKI_PRODUCT_DIR)
+ ], os_version='9', content_channel=ContentChannel.E4S),
],
RHUIFamily(RHUIProvider.AZURE, client_files_folder='azure'): [
mk_rhui_setup(clients={'rhui-azure-rhel7'}, os_version='7',
extra_info={'agent_pkg': 'WALinuxAgent'}),
mk_rhui_setup(clients={'rhui-azure-rhel8'}, leapp_pkg='leapp-rhui-azure',
mandatory_files=[('leapp-azure.repo', YUM_REPOS_PATH)],
+ optional_files=[
+ ('key.pem', RHUI_PKI_DIR),
+ ('content.crt', RHUI_PKI_PRODUCT_DIR)
+ ],
extra_info={'agent_pkg': 'WALinuxAgent'},
os_version='8'),
mk_rhui_setup(clients={'rhui-azure-rhel9'}, leapp_pkg='leapp-rhui-azure',
mandatory_files=[('leapp-azure.repo', YUM_REPOS_PATH)],
+ optional_files=[
+ ('key.pem', RHUI_PKI_DIR),
+ ('content.crt', RHUI_PKI_PRODUCT_DIR)
+ ],
extra_info={'agent_pkg': 'WALinuxAgent'},
os_version='9'),
],
@@ -199,10 +231,18 @@ RHUI_SETUPS = {
mk_rhui_setup(clients={'rhui-azure-rhel7-base-sap-apps'}, os_version='7', content_channel=ContentChannel.EUS),
mk_rhui_setup(clients={'rhui-azure-rhel8-sapapps'}, leapp_pkg='leapp-rhui-azure-sap',
mandatory_files=[('leapp-azure-sap-apps.repo', YUM_REPOS_PATH)],
+ optional_files=[
+ ('key-sapapps.pem', RHUI_PKI_DIR),
+ ('content-sapapps.crt', RHUI_PKI_PRODUCT_DIR)
+ ],
extra_info={'agent_pkg': 'WALinuxAgent'},
os_version='8', content_channel=ContentChannel.EUS),
mk_rhui_setup(clients={'rhui-azure-rhel9-sapapps'}, leapp_pkg='leapp-rhui-azure-sap',
mandatory_files=[('leapp-azure-sap-apps.repo', YUM_REPOS_PATH)],
+ optional_files=[
+ ('key-sapapps.pem', RHUI_PKI_DIR),
+ ('content-sapapps.crt', RHUI_PKI_PRODUCT_DIR)
+ ],
extra_info={'agent_pkg': 'WALinuxAgent'},
os_version='9', content_channel=ContentChannel.EUS),
],
@@ -210,10 +250,18 @@ RHUI_SETUPS = {
mk_rhui_setup(clients={'rhui-azure-rhel7-base-sap-ha'}, os_version='7', content_channel=ContentChannel.E4S),
mk_rhui_setup(clients={'rhui-azure-rhel8-sap-ha'}, leapp_pkg='leapp-rhui-azure-sap',
mandatory_files=[('leapp-azure-sap-ha.repo', YUM_REPOS_PATH)],
+ optional_files=[
+ ('key-sap-ha.pem', RHUI_PKI_DIR),
+ ('content-sap-ha.crt', RHUI_PKI_PRODUCT_DIR)
+ ],
extra_info={'agent_pkg': 'WALinuxAgent'},
os_version='8', content_channel=ContentChannel.E4S),
mk_rhui_setup(clients={'rhui-azure-rhel9-sap-ha'}, leapp_pkg='leapp-rhui-azure-sap',
mandatory_files=[('leapp-azure-sap-ha.repo', YUM_REPOS_PATH)],
+ optional_files=[
+ ('key-sap-ha.pem', RHUI_PKI_DIR),
+ ('content-sap-ha.crt', RHUI_PKI_PRODUCT_DIR)
+ ],
extra_info={'agent_pkg': 'WALinuxAgent'},
os_version='9', content_channel=ContentChannel.E4S),
],
--
2.41.0
View File
@ -1,134 +0,0 @@
From bf866cb33d9aefb2d6d79fc6ea0e326c6c2a0cf3 Mon Sep 17 00:00:00 2001
From: mhecko <mhecko@redhat.com>
Date: Thu, 14 Sep 2023 13:43:37 +0200
Subject: [PATCH 28/38] checknfs: do not check systemd mounts
Systemd mounts contain only *block* devices. Therefore, the list can
never contain NFS shares at all and the check is redundant. This is
apparent if one reads storagescanner/libraries/storagescanner.py:L251.
This patch therefore removes the systemd-mount check altogether.
---
.../common/actors/checknfs/actor.py | 15 +-------
.../actors/checknfs/tests/test_checknfs.py | 37 ++-----------------
2 files changed, 5 insertions(+), 47 deletions(-)
diff --git a/repos/system_upgrade/common/actors/checknfs/actor.py b/repos/system_upgrade/common/actors/checknfs/actor.py
index 40ca834e..208c5dd9 100644
--- a/repos/system_upgrade/common/actors/checknfs/actor.py
+++ b/repos/system_upgrade/common/actors/checknfs/actor.py
@@ -10,7 +10,7 @@ class CheckNfs(Actor):
"""
Check if NFS filesystem is in use. If yes, inhibit the upgrade process.
- Actor looks for NFS in the following sources: /ets/fstab, mount and systemd-mount.
+ Actor looks for NFS in the following sources: /ets/fstab and mount.
If there is NFS in any of the mentioned sources, actors inhibits the upgrade.
"""
name = "check_nfs"
@@ -41,14 +41,7 @@ class CheckNfs(Actor):
if _is_nfs(mount.tp):
nfs_mounts.append(" - {} {}\n".format(mount.name, mount.mount))
- # Check systemd-mount
- systemd_nfs_mounts = []
- for systemdmount in storage.systemdmount:
- if _is_nfs(systemdmount.fs_type):
- # mountpoint is not available in the model
- systemd_nfs_mounts.append(" - {}\n".format(systemdmount.node))
-
- if any((fstab_nfs_mounts, nfs_mounts, systemd_nfs_mounts)):
+ if any((fstab_nfs_mounts, nfs_mounts)):
if fstab_nfs_mounts:
details += "- NFS shares found in /etc/fstab:\n"
details += ''.join(fstab_nfs_mounts)
@@ -57,10 +50,6 @@ class CheckNfs(Actor):
details += "- NFS shares currently mounted:\n"
details += ''.join(nfs_mounts)
- if systemd_nfs_mounts:
- details += "- NFS mounts configured with systemd-mount:\n"
- details += ''.join(systemd_nfs_mounts)
-
fstab_related_resource = [reporting.RelatedResource('file', '/etc/fstab')] if fstab_nfs_mounts else []
create_report([
diff --git a/repos/system_upgrade/common/actors/checknfs/tests/test_checknfs.py b/repos/system_upgrade/common/actors/checknfs/tests/test_checknfs.py
index 907dca40..739b3a83 100644
--- a/repos/system_upgrade/common/actors/checknfs/tests/test_checknfs.py
+++ b/repos/system_upgrade/common/actors/checknfs/tests/test_checknfs.py
@@ -1,37 +1,12 @@
import pytest
from leapp.libraries.common import config
-from leapp.models import FstabEntry, MountEntry, StorageInfo, SystemdMountEntry
+from leapp.models import FstabEntry, MountEntry, StorageInfo
from leapp.reporting import Report
from leapp.snactor.fixture import current_actor_context
from leapp.utils.report import is_inhibitor
-@pytest.mark.parametrize('nfs_fstype', ('nfs', 'nfs4'))
-def test_actor_with_systemdmount_entry(current_actor_context, nfs_fstype, monkeypatch):
- monkeypatch.setattr(config, 'get_env', lambda x, y: y)
- with_systemdmount_entry = [SystemdMountEntry(node="nfs", path="n/a", model="n/a",
- wwn="n/a", fs_type=nfs_fstype, label="n/a",
- uuid="n/a")]
- current_actor_context.feed(StorageInfo(systemdmount=with_systemdmount_entry))
- current_actor_context.run()
- report_fields = current_actor_context.consume(Report)[0].report
- assert is_inhibitor(report_fields)
-
-
-def test_actor_without_systemdmount_entry(current_actor_context, monkeypatch):
- monkeypatch.setattr(config, 'get_env', lambda x, y: y)
- without_systemdmount_entry = [SystemdMountEntry(node="/dev/sda1",
- path="pci-0000:00:17.0-ata-2",
- model="TOSHIBA_THNSNJ512GDNU_A",
- wwn="0x500080d9108e8753",
- fs_type="ext4", label="n/a",
- uuid="5675d309-eff7-4eb1-9c27-58bc5880ec72")]
- current_actor_context.feed(StorageInfo(systemdmount=without_systemdmount_entry))
- current_actor_context.run()
- assert not current_actor_context.consume(Report)
-
-
@pytest.mark.parametrize('nfs_fstype', ('nfs', 'nfs4'))
def test_actor_with_fstab_entry(current_actor_context, nfs_fstype, monkeypatch):
monkeypatch.setattr(config, 'get_env', lambda x, y: y)
@@ -89,15 +64,12 @@ def test_actor_skipped_if_initram_network_enabled(current_actor_context, monkeyp
monkeypatch.setattr(config, 'get_env', lambda x, y: 'network-manager' if x == 'LEAPP_DEVEL_INITRAM_NETWORK' else y)
with_mount_share = [MountEntry(name="nfs", mount="/mnt/data", tp='nfs',
options="rw,nosuid,nodev,relatime,user_id=1000,group_id=1000")]
- with_systemdmount_entry = [SystemdMountEntry(node="nfs", path="n/a", model="n/a",
- wwn="n/a", fs_type='nfs', label="n/a",
- uuid="n/a")]
with_fstab_entry = [FstabEntry(fs_spec="lithium:/mnt/data", fs_file="/mnt/data",
fs_vfstype='nfs',
fs_mntops="noauto,noatime,rsize=32768,wsize=32768",
fs_freq="0", fs_passno="0")]
current_actor_context.feed(StorageInfo(mount=with_mount_share,
- systemdmount=with_systemdmount_entry,
+ systemdmount=[],
fstab=with_fstab_entry))
current_actor_context.run()
assert not current_actor_context.consume(Report)
@@ -108,15 +80,12 @@ def test_actor_not_skipped_if_initram_network_empty(current_actor_context, monke
monkeypatch.setattr(config, 'get_env', lambda x, y: '' if x == 'LEAPP_DEVEL_INITRAM_NETWORK' else y)
with_mount_share = [MountEntry(name="nfs", mount="/mnt/data", tp='nfs',
options="rw,nosuid,nodev,relatime,user_id=1000,group_id=1000")]
- with_systemdmount_entry = [SystemdMountEntry(node="nfs", path="n/a", model="n/a",
- wwn="n/a", fs_type='nfs', label="n/a",
- uuid="n/a")]
with_fstab_entry = [FstabEntry(fs_spec="lithium:/mnt/data", fs_file="/mnt/data",
fs_vfstype='nfs',
fs_mntops="noauto,noatime,rsize=32768,wsize=32768",
fs_freq="0", fs_passno="0")]
current_actor_context.feed(StorageInfo(mount=with_mount_share,
- systemdmount=with_systemdmount_entry,
+ systemdmount=[],
fstab=with_fstab_entry))
current_actor_context.run()
report_fields = current_actor_context.consume(Report)[0].report
--
2.41.0
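With the systemd-mount branch removed, the actor above only inspects /etc/fstab entries and currently mounted filesystems. A reduced sketch of that flow; the fstype test is an assumption standing in for the actor's _is_nfs helper, whose body is not shown in the hunk:

def _is_nfs(fstype):
    # Assumption: NFS shares report an fstype starting with 'nfs' (e.g. 'nfs', 'nfs4').
    return bool(fstype) and fstype.startswith('nfs')

def collect_nfs_usage(storage):
    # storage is a StorageInfo-like object carrying 'fstab' and 'mount' lists.
    fstab_nfs = [entry.fs_file for entry in storage.fstab if _is_nfs(entry.fs_vfstype)]
    mounted_nfs = ['{} {}'.format(m.name, m.mount) for m in storage.mount if _is_nfs(m.tp)]
    return fstab_nfs, mounted_nfs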
View File
@ -1,327 +0,0 @@
From 88e1e14090bd32acf5635959010c8e9b515fd9c5 Mon Sep 17 00:00:00 2001
From: Inessa Vasilevskaya <ivasilev@redhat.com>
Date: Fri, 10 Nov 2023 13:39:39 +0100
Subject: [PATCH 29/38] Switch from plan name regex to filter by tags
Necessary work to adapt the upstream tests to the big refactoring
changes brought by MR303.
---
.packit.yaml | 130 ++++++++++++++++++++++++++++++++++++++++-----------
1 file changed, 102 insertions(+), 28 deletions(-)
diff --git a/.packit.yaml b/.packit.yaml
index cd6dd7d1..02cc6d52 100644
--- a/.packit.yaml
+++ b/.packit.yaml
@@ -87,8 +87,8 @@ jobs:
- &sanity-79to86
job: tests
- fmf_url: "https://gitlab.cee.redhat.com/oamg/tmt-plans"
- fmf_ref: "master"
+ fmf_url: "https://gitlab.cee.redhat.com/ivasilev/tmt-plans"
+ fmf_ref: "pocgenerator"
use_internal_tf: True
trigger: pull_request
labels:
@@ -97,16 +97,17 @@ jobs:
epel-7-x86_64:
distros: [RHEL-7.9-ZStream]
identifier: sanity-7.9to8.6
- tmt_plan: "sanity_plan"
+ tmt_plan: ""
tf_extra_params:
+ test:
+ tmt:
+ plan_filter: 'tag:sanity'
environments:
- tmt:
context:
distro: "rhel-7.9"
- # tag resources as sst_upgrades@leapp_upstream_test to enable cost metrics collection
settings:
provisioning:
- post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
tags:
BusinessUnit: sst_upgrades@leapp_upstream_test
env:
@@ -123,13 +124,16 @@ jobs:
epel-7-x86_64:
distros: [RHEL-7.9-rhui]
identifier: sanity-7to8-aws-e2e
- tmt_plan: "(?!.*sap)(.*e2e)"
+ # NOTE(ivasilev) Unfortunately to use yaml templates we need to rewrite the whole tf_extra_params dict
+ # to use plan_filter (can't just specify one section test.tmt.plan_filter, need to specify environments.* as well)
tf_extra_params:
+ test:
+ tmt:
+ plan_filter: 'tag:e2e'
environments:
- tmt:
context:
distro: "rhel-7.9"
- # tag resources as sst_upgrades@leapp_upstream_test to enable cost metrics collection
settings:
provisioning:
post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys; yum-config-manager --enable rhel-7-server-rhui-optional-rpms"
@@ -150,7 +154,18 @@ jobs:
- beaker-minimal-7.9to8.6
- 7.9to8.6
identifier: sanity-7.9to8.6-beaker-minimal
- tmt_plan: "(?!.*max_sst)(.*tier1)(.*partitioning_monolithic|.*separate_var_usr_varlog|.*uefi|.*oamg4250_lvm_var_xfs_ftype0)"
+ tf_extra_params:
+ test:
+ tmt:
+ plan_filter: 'tag:partitioning & tag:7to8'
+ environments:
+ - tmt:
+ context:
+ distro: "rhel-7.9"
+ settings:
+ provisioning:
+ tags:
+ BusinessUnit: sst_upgrades@leapp_upstream_test
# On-demand kernel-rt tests
- &kernel-rt-79to86
@@ -160,7 +175,18 @@ jobs:
- kernel-rt-7.9to8.6
- 7.9to8.6
identifier: sanity-7.9to8.6-kernel-rt
- tmt_plan: "(?!.*max_sst)(.*tier1)(.*kernel-rt)"
+ tf_extra_params:
+ test:
+ tmt:
+ plan_filter: 'tag:kernel-rt & tag:7to8'
+ environments:
+ - tmt:
+ context:
+ distro: "rhel-7.9"
+ settings:
+ provisioning:
+ tags:
+ BusinessUnit: sst_upgrades@leapp_upstream_test
- &sanity-79to88
<<: *sanity-79to86
@@ -185,13 +211,16 @@ jobs:
# On-demand kernel-rt tests
- &kernel-rt-79to88
- <<: *beaker-minimal-79to88
+ <<: *kernel-rt-79to86
labels:
- kernel-rt
- kernel-rt-7.9to8.8
- 7.9to8.8
identifier: sanity-7.9to8.8-kernel-rt
- tmt_plan: "(?!.*max_sst)(.*tier1)(.*kernel-rt)"
+ env:
+ SOURCE_RELEASE: "7.9"
+ TARGET_RELEASE: "8.8"
+ LEAPPDATA_BRANCH: "upstream"
- &sanity-79to89
<<: *sanity-79to86
@@ -216,13 +245,16 @@ jobs:
# On-demand kernel-rt tests
- &kernel-rt-79to89
- <<: *beaker-minimal-79to89
+ <<: *kernel-rt-79to88
labels:
- kernel-rt
- kernel-rt-7.9to8.9
- 7.9to8.9
identifier: sanity-7.9to8.9-kernel-rt
- tmt_plan: "(?!.*max_sst)(.*tier1)(.*kernel-rt)"
+ env:
+ SOURCE_RELEASE: "7.9"
+ TARGET_RELEASE: "8.9"
+ LEAPPDATA_BRANCH: "upstream"
- &sanity-86to90
<<: *sanity-79to86
@@ -231,14 +263,15 @@ jobs:
distros: [RHEL-8.6.0-Nightly]
identifier: sanity-8.6to9.0
tf_extra_params:
+ test:
+ tmt:
+ plan_filter: 'tag:sanity & tag:8to9'
environments:
- tmt:
context:
distro: "rhel-8.6"
- # tag resources as sst_upgrades@leapp_upstream_test to enable cost metrics collection
settings:
provisioning:
- post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
tags:
BusinessUnit: sst_upgrades@leapp_upstream_test
env:
@@ -259,14 +292,15 @@ jobs:
distros: [RHEL-8.6.0-Nightly]
identifier: sanity-8.6to9.0-beaker-minimal
tf_extra_params:
+ test:
+ tmt:
+ plan_filter: 'tag:partitioning & tag:8to9'
environments:
- tmt:
context:
distro: "rhel-8.6"
- # tag resources as sst_upgrades@leapp_upstream_test to enable cost metrics collection
settings:
provisioning:
- post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
tags:
BusinessUnit: sst_upgrades@leapp_upstream_test
env:
@@ -283,7 +317,18 @@ jobs:
- kernel-rt-8.6to9.0
- 8.6to9.0
identifier: sanity-8.6to9.0-kernel-rt
- tmt_plan: "(?!.*max_sst)(.*tier1)(.*kernel-rt)"
+ tf_extra_params:
+ test:
+ tmt:
+ plan_filter: 'tag:kernel-rt & tag:8to9'
+ environments:
+ - tmt:
+ context:
+ distro: "rhel-8.6"
+ settings:
+ provisioning:
+ tags:
+ BusinessUnit: sst_upgrades@leapp_upstream_test
- &sanity-88to92
<<: *sanity-86to90
@@ -292,14 +337,15 @@ jobs:
distros: [RHEL-8.8.0-Nightly]
identifier: sanity-8.8to9.2
tf_extra_params:
+ test:
+ tmt:
+ plan_filter: 'tag:sanity & tag:8to9'
environments:
- tmt:
context:
distro: "rhel-8.8"
- # tag resources as sst_upgrades@leapp_upstream_test to enable cost metrics collection
settings:
provisioning:
- post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
tags:
BusinessUnit: sst_upgrades@leapp_upstream_test
env:
@@ -321,11 +367,13 @@ jobs:
distros: [RHEL-8.8.0-Nightly]
identifier: sanity-8.8to9.2-beaker-minimal
tf_extra_params:
+ test:
+ tmt:
+ plan_filter: 'tag:partitioning & tag:8to9'
environments:
- tmt:
context:
distro: "rhel-8.8"
- # tag resources as sst_upgrades@leapp_upstream_test to enable cost metrics collection
settings:
provisioning:
post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
@@ -345,7 +393,18 @@ jobs:
- kernel-rt-8.8to9.2
- 8.8to9.2
identifier: sanity-8.8to9.2-kernel-rt
- tmt_plan: "(?!.*max_sst)(.*tier1)(.*kernel-rt)"
+ tf_extra_params:
+ test:
+ tmt:
+ plan_filter: 'tag:kernel-rt & tag:8to9'
+ environments:
+ - tmt:
+ context:
+ distro: "rhel-8.8"
+ settings:
+ provisioning:
+ tags:
+ BusinessUnit: sst_upgrades@leapp_upstream_test
- &sanity-89to93
<<: *sanity-88to92
@@ -354,14 +413,15 @@ jobs:
distros: [RHEL-8.9.0-Nightly]
identifier: sanity-8.9to9.3
tf_extra_params:
+ test:
+ tmt:
+ plan_filter: 'tag:sanity & tag:8to9'
environments:
- tmt:
context:
distro: "rhel-8.9"
- # tag resources as sst_upgrades@leapp_upstream_test to enable cost metrics collection
settings:
provisioning:
- post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
tags:
BusinessUnit: sst_upgrades@leapp_upstream_test
env:
@@ -383,14 +443,15 @@ jobs:
distros: [RHEL-8.9.0-Nightly]
identifier: sanity-8.9to9.3-beaker-minimal
tf_extra_params:
+ test:
+ tmt:
+ plan_filter: 'tag:partitioning & tag:8to9'
environments:
- tmt:
context:
distro: "rhel-8.9"
- # tag resources as sst_upgrades@leapp_upstream_test to enable cost metrics collection
settings:
provisioning:
- post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
tags:
BusinessUnit: sst_upgrades@leapp_upstream_test
env:
@@ -408,7 +469,18 @@ jobs:
- kernel-rt-8.9to9.3
- 8.9to9.3
identifier: sanity-8.9to9.3-kernel-rt
- tmt_plan: "(?!.*max_sst)(.*tier1)(.*kernel-rt)"
+ tf_extra_params:
+ test:
+ tmt:
+ plan_filter: 'tag:kernel-rt & tag:8to9'
+ environments:
+ - tmt:
+ context:
+ distro: "rhel-8.9"
+ settings:
+ provisioning:
+ tags:
+ BusinessUnit: sst_upgrades@leapp_upstream_test
- &sanity-86to90-aws
<<: *sanity-79to86-aws
@@ -417,11 +489,13 @@ jobs:
distros: [RHEL-8.6-rhui]
identifier: sanity-8to9-aws-e2e
tf_extra_params:
+ test:
+ tmt:
+ plan_filter: 'tag:e2e'
environments:
- tmt:
context:
distro: "rhel-8.6"
- # tag resources as sst_upgrades@leapp_upstream_test to enable cost metrics collection
settings:
provisioning:
post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
--
2.41.0
View File
@ -1,29 +0,0 @@
From 60190ff19cc8c1f840ee2d0e20f6b63fdd6e8947 Mon Sep 17 00:00:00 2001
From: Inessa Vasilevskaya <ivasilev@redhat.com>
Date: Mon, 13 Nov 2023 14:26:07 +0100
Subject: [PATCH 30/38] Bring back reference to oamg/leapp-tests repo
After MR303 is merged to master there is no need
to point to my fork anymore.
---
.packit.yaml | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/.packit.yaml b/.packit.yaml
index 02cc6d52..2e606a40 100644
--- a/.packit.yaml
+++ b/.packit.yaml
@@ -87,8 +87,8 @@ jobs:
- &sanity-79to86
job: tests
- fmf_url: "https://gitlab.cee.redhat.com/ivasilev/tmt-plans"
- fmf_ref: "pocgenerator"
+ fmf_url: "https://gitlab.cee.redhat.com/oamg/leapp-tests"
+ fmf_ref: "master"
use_internal_tf: True
trigger: pull_request
labels:
--
2.41.0
View File
@ -1,543 +0,0 @@
From e9f899c27688007d2e87144ccfd038b8b0a655d1 Mon Sep 17 00:00:00 2001
From: PeterMocary <petermocary@gmail.com>
Date: Wed, 12 Jul 2023 22:24:48 +0200
Subject: [PATCH 31/38] add the possibility to upgrade with a local repository
Upgrading with a local repository requires the repository to be hosted
locally so that it is visible from the target user-space container
during the upgrade. The added actor ensures that the local repository
will be visible from the container by adjusting its path, simply by
prefixing the host root mount bind '/installroot' to it. The
local_repos_inhibit actor is no longer needed and was therefore removed.
---
.../common/actors/adjustlocalrepos/actor.py | 48 ++++++
.../libraries/adjustlocalrepos.py | 100 ++++++++++++
.../tests/test_adjustlocalrepos.py | 151 ++++++++++++++++++
.../common/actors/localreposinhibit/actor.py | 89 -----------
.../tests/test_unit_localreposinhibit.py | 81 ----------
.../common/libraries/dnfplugin.py | 5 +-
6 files changed, 302 insertions(+), 172 deletions(-)
create mode 100644 repos/system_upgrade/common/actors/adjustlocalrepos/actor.py
create mode 100644 repos/system_upgrade/common/actors/adjustlocalrepos/libraries/adjustlocalrepos.py
create mode 100644 repos/system_upgrade/common/actors/adjustlocalrepos/tests/test_adjustlocalrepos.py
delete mode 100644 repos/system_upgrade/common/actors/localreposinhibit/actor.py
delete mode 100644 repos/system_upgrade/common/actors/localreposinhibit/tests/test_unit_localreposinhibit.py
diff --git a/repos/system_upgrade/common/actors/adjustlocalrepos/actor.py b/repos/system_upgrade/common/actors/adjustlocalrepos/actor.py
new file mode 100644
index 00000000..064e7f3e
--- /dev/null
+++ b/repos/system_upgrade/common/actors/adjustlocalrepos/actor.py
@@ -0,0 +1,48 @@
+from leapp.actors import Actor
+from leapp.libraries.actor import adjustlocalrepos
+from leapp.libraries.common import mounting
+from leapp.libraries.stdlib import api
+from leapp.models import (
+ TargetOSInstallationImage,
+ TargetUserSpaceInfo,
+ TMPTargetRepositoriesFacts,
+ UsedTargetRepositories
+)
+from leapp.tags import IPUWorkflowTag, TargetTransactionChecksPhaseTag
+
+
+class AdjustLocalRepos(Actor):
+ """
+ Adjust local repositories to the target user-space container.
+
+ Changes the path of local file urls (starting with 'file://') for 'baseurl' and
+ 'mirrorlist' fields to the container space for the used repositories. This is
+ done by prefixing host root mount bind ('/installroot') to the path. It ensures
+ that the files will be accessible from the container and thus proper functionality
+ of the local repository.
+ """
+
+ name = 'adjust_local_repos'
+ consumes = (TargetOSInstallationImage,
+ TargetUserSpaceInfo,
+ TMPTargetRepositoriesFacts,
+ UsedTargetRepositories)
+ produces = ()
+ tags = (IPUWorkflowTag, TargetTransactionChecksPhaseTag)
+
+ def process(self):
+ target_userspace_info = next(self.consume(TargetUserSpaceInfo), None)
+ used_target_repos = next(self.consume(UsedTargetRepositories), None)
+ target_repos_facts = next(self.consume(TMPTargetRepositoriesFacts), None)
+ target_iso = next(self.consume(TargetOSInstallationImage), None)
+
+ if not all([target_userspace_info, used_target_repos, target_repos_facts]):
+ api.current_logger().error("Missing required information to proceed!")
+ return
+
+ target_repos_facts = target_repos_facts.repositories
+ iso_repoids = set(repo.repoid for repo in target_iso.repositories) if target_iso else set()
+ used_target_repoids = set(repo.repoid for repo in used_target_repos.repos)
+
+ with mounting.NspawnActions(base_dir=target_userspace_info.path) as context:
+ adjustlocalrepos.process(context, target_repos_facts, iso_repoids, used_target_repoids)
diff --git a/repos/system_upgrade/common/actors/adjustlocalrepos/libraries/adjustlocalrepos.py b/repos/system_upgrade/common/actors/adjustlocalrepos/libraries/adjustlocalrepos.py
new file mode 100644
index 00000000..55a0d075
--- /dev/null
+++ b/repos/system_upgrade/common/actors/adjustlocalrepos/libraries/adjustlocalrepos.py
@@ -0,0 +1,100 @@
+import os
+
+from leapp.libraries.stdlib import api
+
+HOST_ROOT_MOUNT_BIND_PATH = '/installroot'
+LOCAL_FILE_URL_PREFIX = 'file://'
+
+
+def _adjust_local_file_url(repo_file_line):
+ """
+ Adjusts a local file url to the target user-space container in a provided
+ repo file line by prefixing host root mount bind '/installroot' to it
+ when needed.
+
+ :param str repo_file_line: a line from a repo file
+ :returns str: adjusted line or the provided line if no changes are needed
+ """
+ adjust_fields = ['baseurl', 'mirrorlist']
+
+ if LOCAL_FILE_URL_PREFIX in repo_file_line and not repo_file_line.startswith('#'):
+ entry_field, entry_value = repo_file_line.strip().split('=', 1)
+ if not any(entry_field.startswith(field) for field in adjust_fields):
+ return repo_file_line
+
+ entry_value = entry_value.strip('\'\"')
+ path = entry_value[len(LOCAL_FILE_URL_PREFIX):]
+ new_entry_value = LOCAL_FILE_URL_PREFIX + os.path.join(HOST_ROOT_MOUNT_BIND_PATH, path.lstrip('/'))
+ new_repo_file_line = entry_field + '=' + new_entry_value
+ return new_repo_file_line
+ return repo_file_line
+
+
+def _extract_repos_from_repofile(context, repo_file):
+ """
+ Generator function that extracts repositories from a repo file in the given context
+ and yields them as list of lines that belong to the repository.
+
+ :param context: target user-space context
+ :param str repo_file: path to repository file (inside the provided context)
+ """
+ with context.open(repo_file, 'r') as rf:
+ repo_file_lines = rf.readlines()
+
+ # Detect repo and remove lines before first repoid
+ repo_found = False
+ for idx, line in enumerate(repo_file_lines):
+ if line.startswith('['):
+ repo_file_lines = repo_file_lines[idx:]
+ repo_found = True
+ break
+
+ if not repo_found:
+ return
+
+ current_repo = []
+ for line in repo_file_lines:
+ line = line.strip()
+
+ if line.startswith('[') and current_repo:
+ yield current_repo
+ current_repo = []
+
+ current_repo.append(line)
+ yield current_repo
+
+
+def _adjust_local_repos_to_container(context, repo_file, local_repoids):
+ new_repo_file = []
+ for repo in _extract_repos_from_repofile(context, repo_file):
+ repoid = repo[0].strip('[]')
+ adjusted_repo = repo
+ if repoid in local_repoids:
+ adjusted_repo = [_adjust_local_file_url(line) for line in repo]
+ new_repo_file.append(adjusted_repo)
+
+ # Combine the repo file contents into a string and write it back to the file
+ new_repo_file = ['\n'.join(repo) for repo in new_repo_file]
+ new_repo_file = '\n'.join(new_repo_file)
+ with context.open(repo_file, 'w') as rf:
+ rf.write(new_repo_file)
+
+
+def process(context, target_repos_facts, iso_repoids, used_target_repoids):
+ for repo_file_facts in target_repos_facts:
+ repo_file_path = repo_file_facts.file
+ local_repoids = set()
+ for repo in repo_file_facts.data:
+ # Skip repositories that aren't used or are provided by ISO
+ if repo.repoid not in used_target_repoids or repo.repoid in iso_repoids:
+ continue
+ # Note repositories that contain local file url
+ if repo.baseurl and LOCAL_FILE_URL_PREFIX in repo.baseurl or \
+ repo.mirrorlist and LOCAL_FILE_URL_PREFIX in repo.mirrorlist:
+ local_repoids.add(repo.repoid)
+
+ if local_repoids:
+ api.current_logger().debug(
+ 'Adjusting following repos in the repo file - {}: {}'.format(repo_file_path,
+ ', '.join(local_repoids)))
+ _adjust_local_repos_to_container(context, repo_file_path, local_repoids)
diff --git a/repos/system_upgrade/common/actors/adjustlocalrepos/tests/test_adjustlocalrepos.py b/repos/system_upgrade/common/actors/adjustlocalrepos/tests/test_adjustlocalrepos.py
new file mode 100644
index 00000000..41cff200
--- /dev/null
+++ b/repos/system_upgrade/common/actors/adjustlocalrepos/tests/test_adjustlocalrepos.py
@@ -0,0 +1,151 @@
+import pytest
+
+from leapp.libraries.actor import adjustlocalrepos
+
+REPO_FILE_1_LOCAL_REPOIDS = ['myrepo1']
+REPO_FILE_1 = [['[myrepo1]',
+ 'name=mylocalrepo',
+ 'baseurl=file:///home/user/.local/myrepos/repo1'
+ ]]
+REPO_FILE_1_ADJUSTED = [['[myrepo1]',
+ 'name=mylocalrepo',
+ 'baseurl=file:///installroot/home/user/.local/myrepos/repo1'
+ ]]
+
+REPO_FILE_2_LOCAL_REPOIDS = ['myrepo3']
+REPO_FILE_2 = [['[myrepo2]',
+ 'name=mynotlocalrepo',
+ 'baseurl=https://www.notlocal.com/packages'
+ ],
+ ['[myrepo3]',
+ 'name=mylocalrepo',
+ 'baseurl=file:///home/user/.local/myrepos/repo3',
+ 'mirrorlist=file:///home/user/.local/mymirrors/repo3.txt'
+ ]]
+REPO_FILE_2_ADJUSTED = [['[myrepo2]',
+ 'name=mynotlocalrepo',
+ 'baseurl=https://www.notlocal.com/packages'
+ ],
+ ['[myrepo3]',
+ 'name=mylocalrepo',
+ 'baseurl=file:///installroot/home/user/.local/myrepos/repo3',
+ 'mirrorlist=file:///installroot/home/user/.local/mymirrors/repo3.txt'
+ ]]
+
+REPO_FILE_3_LOCAL_REPOIDS = ['myrepo4', 'myrepo5']
+REPO_FILE_3 = [['[myrepo4]',
+ 'name=myrepowithlocalgpgkey',
+ 'baseurl="file:///home/user/.local/myrepos/repo4"',
+ 'gpgkey=file:///home/user/.local/pki/gpgkey',
+ 'gpgcheck=1'
+ ],
+ ['[myrepo5]',
+ 'name=myrepowithcomment',
+ 'baseurl=file:///home/user/.local/myrepos/repo5',
+ '#baseurl=file:///home/user/.local/myotherrepos/repo5',
+ 'enabled=1',
+ 'exclude=sed']]
+REPO_FILE_3_ADJUSTED = [['[myrepo4]',
+ 'name=myrepowithlocalgpgkey',
+ 'baseurl=file:///installroot/home/user/.local/myrepos/repo4',
+ 'gpgkey=file:///home/user/.local/pki/gpgkey',
+ 'gpgcheck=1'
+ ],
+ ['[myrepo5]',
+ 'name=myrepowithcomment',
+ 'baseurl=file:///installroot/home/user/.local/myrepos/repo5',
+ '#baseurl=file:///home/user/.local/myotherrepos/repo5',
+ 'enabled=1',
+ 'exclude=sed']]
+REPO_FILE_EMPTY = []
+
+
+@pytest.mark.parametrize('repo_file_line, expected_adjusted_repo_file_line',
+ [('baseurl=file:///home/user/.local/repositories/repository',
+ 'baseurl=file:///installroot/home/user/.local/repositories/repository'),
+ ('baseurl="file:///home/user/my-repo"',
+ 'baseurl=file:///installroot/home/user/my-repo'),
+ ('baseurl=https://notlocal.com/packages',
+ 'baseurl=https://notlocal.com/packages'),
+ ('mirrorlist=file:///some_mirror_list.txt',
+ 'mirrorlist=file:///installroot/some_mirror_list.txt'),
+ ('gpgkey=file:///etc/pki/some.key',
+ 'gpgkey=file:///etc/pki/some.key'),
+ ('#baseurl=file:///home/user/my-repo',
+ '#baseurl=file:///home/user/my-repo'),
+ ('', ''),
+ ('[repoid]', '[repoid]')])
+def test_adjust_local_file_url(repo_file_line, expected_adjusted_repo_file_line):
+ adjusted_repo_file_line = adjustlocalrepos._adjust_local_file_url(repo_file_line)
+ if 'file://' not in repo_file_line:
+ assert adjusted_repo_file_line == repo_file_line
+ return
+ assert adjusted_repo_file_line == expected_adjusted_repo_file_line
+
+
+class MockedFileDescriptor(object):
+
+ def __init__(self, repo_file, expected_new_repo_file):
+ self.repo_file = repo_file
+ self.expected_new_repo_file = expected_new_repo_file
+
+ @staticmethod
+ def _create_repo_file_lines(repo_file):
+ repo_file_lines = []
+ for repo in repo_file:
+ repo = [line+'\n' for line in repo]
+ repo_file_lines += repo
+ return repo_file_lines
+
+ def __enter__(self):
+ return self
+
+ def __exit__(self, *args, **kwargs):
+ return
+
+ def readlines(self):
+ return self._create_repo_file_lines(self.repo_file)
+
+ def write(self, new_contents):
+ assert self.expected_new_repo_file
+ repo_file_lines = self._create_repo_file_lines(self.expected_new_repo_file)
+ expected_repo_file_contents = ''.join(repo_file_lines).rstrip('\n')
+ assert expected_repo_file_contents == new_contents
+
+
+class MockedContext(object):
+
+ def __init__(self, repo_contents, expected_repo_contents):
+ self.repo_contents = repo_contents
+ self.expected_repo_contents = expected_repo_contents
+
+ def open(self, path, mode):
+ return MockedFileDescriptor(self.repo_contents, self.expected_repo_contents)
+
+
+@pytest.mark.parametrize('repo_file, local_repoids, expected_repo_file',
+ [(REPO_FILE_1, REPO_FILE_1_LOCAL_REPOIDS, REPO_FILE_1_ADJUSTED),
+ (REPO_FILE_2, REPO_FILE_2_LOCAL_REPOIDS, REPO_FILE_2_ADJUSTED),
+ (REPO_FILE_3, REPO_FILE_3_LOCAL_REPOIDS, REPO_FILE_3_ADJUSTED)])
+def test_adjust_local_repos_to_container(repo_file, local_repoids, expected_repo_file):
+ # The checks for expected_repo_file comparison to a adjusted form of the
+ # repo_file can be found in the MockedFileDescriptor.write().
+ context = MockedContext(repo_file, expected_repo_file)
+ adjustlocalrepos._adjust_local_repos_to_container(context, '<some_repo_file_path>', local_repoids)
+
+
+@pytest.mark.parametrize('expected_repo_file, add_empty_lines', [(REPO_FILE_EMPTY, False),
+ (REPO_FILE_1, False),
+ (REPO_FILE_2, True)])
+def test_extract_repos_from_repofile(expected_repo_file, add_empty_lines):
+ repo_file = expected_repo_file[:]
+ if add_empty_lines: # add empty lines before the first repo
+ repo_file[0] = ['', ''] + repo_file[0]
+
+ context = MockedContext(repo_file, None)
+ repo_gen = adjustlocalrepos._extract_repos_from_repofile(context, '<some_repo_file_path>')
+
+ for repo in expected_repo_file:
+ assert repo == next(repo_gen, None)
+
+ assert next(repo_gen, None) is None
diff --git a/repos/system_upgrade/common/actors/localreposinhibit/actor.py b/repos/system_upgrade/common/actors/localreposinhibit/actor.py
deleted file mode 100644
index 2bde7f15..00000000
--- a/repos/system_upgrade/common/actors/localreposinhibit/actor.py
+++ /dev/null
@@ -1,89 +0,0 @@
-from leapp import reporting
-from leapp.actors import Actor
-from leapp.models import TargetOSInstallationImage, TMPTargetRepositoriesFacts, UsedTargetRepositories
-from leapp.reporting import Report
-from leapp.tags import IPUWorkflowTag, TargetTransactionChecksPhaseTag
-from leapp.utils.deprecation import suppress_deprecation
-
-
-@suppress_deprecation(TMPTargetRepositoriesFacts)
-class LocalReposInhibit(Actor):
- """Inhibits the upgrade if local repositories were found."""
-
- name = "local_repos_inhibit"
- consumes = (
- UsedTargetRepositories,
- TargetOSInstallationImage,
- TMPTargetRepositoriesFacts,
- )
- produces = (Report,)
- tags = (IPUWorkflowTag, TargetTransactionChecksPhaseTag)
-
- def collect_target_repoids_with_local_url(self, used_target_repos, target_repos_facts, target_iso):
- """Collects all repoids that have a local (file://) URL.
-
- UsedTargetRepositories doesn't contain baseurl attribute. So gathering
- them from model TMPTargetRepositoriesFacts.
- """
- used_target_repoids = set(repo.repoid for repo in used_target_repos.repos)
- iso_repoids = set(iso_repo.repoid for iso_repo in target_iso.repositories) if target_iso else set()
-
- target_repofile_data = (repofile.data for repofile in target_repos_facts.repositories)
-
- local_repoids = []
- for repo_data in target_repofile_data:
- for target_repo in repo_data:
- # Check only in repositories that are used and are not provided by the upgrade ISO, if any
- if target_repo.repoid not in used_target_repoids or target_repo.repoid in iso_repoids:
- continue
-
- # Repo fields potentially containing local URLs have different importance, check based on their prio
- url_field_to_check = target_repo.mirrorlist or target_repo.metalink or target_repo.baseurl or ''
-
- if url_field_to_check.startswith("file://"):
- local_repoids.append(target_repo.repoid)
- return local_repoids
-
- def process(self):
- used_target_repos = next(self.consume(UsedTargetRepositories), None)
- target_repos_facts = next(self.consume(TMPTargetRepositoriesFacts), None)
- target_iso = next(self.consume(TargetOSInstallationImage), None)
-
- if not used_target_repos or not target_repos_facts:
- return
-
- local_repoids = self.collect_target_repoids_with_local_url(used_target_repos, target_repos_facts, target_iso)
- if local_repoids:
- suffix, verb = ("y", "has") if len(local_repoids) == 1 else ("ies", "have")
- local_repoids_str = ", ".join(local_repoids)
-
- warn_msg = ("The following local repositor{suffix} {verb} been found: {local_repoids} "
- "(their baseurl starts with file:///). Currently leapp does not support this option.")
- warn_msg = warn_msg.format(suffix=suffix, verb=verb, local_repoids=local_repoids_str)
- self.log.warning(warn_msg)
-
- reporting.create_report(
- [
- reporting.Title("Local repositor{suffix} detected".format(suffix=suffix)),
- reporting.Summary(warn_msg),
- reporting.Severity(reporting.Severity.HIGH),
- reporting.Groups([reporting.Groups.REPOSITORY]),
- reporting.Groups([reporting.Groups.INHIBITOR]),
- reporting.Remediation(
- hint=(
- "By using Apache HTTP Server you can expose "
- "your local repository via http. See the linked "
- "article for details. "
- )
- ),
- reporting.ExternalLink(
- title=(
- "Customizing your Red Hat Enterprise Linux "
- "in-place upgrade"
- ),
- url=(
- "https://red.ht/ipu-customisation-repos-known-issues"
- ),
- ),
- ]
- )
diff --git a/repos/system_upgrade/common/actors/localreposinhibit/tests/test_unit_localreposinhibit.py b/repos/system_upgrade/common/actors/localreposinhibit/tests/test_unit_localreposinhibit.py
deleted file mode 100644
index 64a79e80..00000000
--- a/repos/system_upgrade/common/actors/localreposinhibit/tests/test_unit_localreposinhibit.py
+++ /dev/null
@@ -1,81 +0,0 @@
-import pytest
-
-from leapp.models import (
- RepositoryData,
- RepositoryFile,
- TargetOSInstallationImage,
- TMPTargetRepositoriesFacts,
- UsedTargetRepositories,
- UsedTargetRepository
-)
-from leapp.snactor.fixture import ActorContext
-
-
-@pytest.mark.parametrize(
- ("baseurl", "mirrorlist", "metalink", "exp_msgs_len"),
- [
- ("file:///root/crb", None, None, 1),
- ("http://localhost/crb", None, None, 0),
- (None, "file:///root/crb", None, 1),
- (None, "http://localhost/crb", None, 0),
- (None, None, "file:///root/crb", 1),
- (None, None, "http://localhost/crb", 0),
- ("http://localhost/crb", "file:///root/crb", None, 1),
- ("file:///root/crb", "http://localhost/crb", None, 0),
- ("http://localhost/crb", None, "file:///root/crb", 1),
- ("file:///root/crb", None, "http://localhost/crb", 0),
- ],
-)
-def test_unit_localreposinhibit(current_actor_context, baseurl, mirrorlist, metalink, exp_msgs_len):
- """Ensure the Report is generated when local path is used as a baseurl.
-
- :type current_actor_context: ActorContext
- """
- with pytest.deprecated_call():
- current_actor_context.feed(
- TMPTargetRepositoriesFacts(
- repositories=[
- RepositoryFile(
- file="the/path/to/some/file",
- data=[
- RepositoryData(
- name="BASEOS",
- baseurl=(
- "http://example.com/path/to/repo/BaseOS/x86_64/os/"
- ),
- repoid="BASEOS",
- ),
- RepositoryData(
- name="APPSTREAM",
- baseurl=(
- "http://example.com/path/to/repo/AppStream/x86_64/os/"
- ),
- repoid="APPSTREAM",
- ),
- RepositoryData(
- name="CRB", repoid="CRB", baseurl=baseurl,
- mirrorlist=mirrorlist, metalink=metalink
- ),
- ],
- )
- ]
- )
- )
- current_actor_context.feed(
- UsedTargetRepositories(
- repos=[
- UsedTargetRepository(repoid="BASEOS"),
- UsedTargetRepository(repoid="CRB"),
- ]
- )
- )
- current_actor_context.run()
- assert len(current_actor_context.messages()) == exp_msgs_len
-
-
-def test_upgrade_not_inhibited_if_iso_used(current_actor_context):
- repofile = RepositoryFile(file="path/to/some/file",
- data=[RepositoryData(name="BASEOS", baseurl="file:///path", repoid="BASEOS")])
- current_actor_context.feed(TMPTargetRepositoriesFacts(repositories=[repofile]))
- current_actor_context.feed(UsedTargetRepositories(repos=[UsedTargetRepository(repoid="BASEOS")]))
- current_actor_context.feed(TargetOSInstallationImage(path='', mountpoint='', repositories=[]))
diff --git a/repos/system_upgrade/common/libraries/dnfplugin.py b/repos/system_upgrade/common/libraries/dnfplugin.py
index ffde211f..26810e94 100644
--- a/repos/system_upgrade/common/libraries/dnfplugin.py
+++ b/repos/system_upgrade/common/libraries/dnfplugin.py
@@ -334,8 +334,9 @@ def install_initramdisk_requirements(packages, target_userspace_info, used_repos
"""
Performs the installation of packages into the initram disk
"""
- with _prepare_transaction(used_repos=used_repos,
- target_userspace_info=target_userspace_info) as (context, target_repoids, _unused):
+ mount_binds = ['/:/installroot']
+ with _prepare_transaction(used_repos=used_repos, target_userspace_info=target_userspace_info,
+ binds=mount_binds) as (context, target_repoids, _unused):
if get_target_major_version() == '9':
_rebuild_rpm_db(context)
repos_opt = [['--enablerepo', repo] for repo in target_repoids]
--
2.41.0
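The core of the adjustment performed by _adjust_local_file_url() is a plain string rewrite: a file:// URL in a baseurl= or mirrorlist= line gets the host root bind mount '/installroot' prefixed to its path. A condensed sketch of that rewrite (prefix_installroot is an illustrative stand-in, not the patch's function):

HOST_ROOT_MOUNT_BIND_PATH = '/installroot'

def prefix_installroot(line):
    # Leave commented-out lines untouched.
    if line.startswith('#'):
        return line
    field, _, value = line.partition('=')
    value = value.strip('\'"')
    if field in ('baseurl', 'mirrorlist') and value.startswith('file://'):
        path = value[len('file://'):]
        return '{}=file://{}{}'.format(field, HOST_ROOT_MOUNT_BIND_PATH, path)
    return line

assert prefix_installroot('baseurl=file:///home/user/.local/myrepos/repo1') == \
    'baseurl=file:///installroot/home/user/.local/myrepos/repo1'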
View File
@ -1,455 +0,0 @@
From 5202c9b126c06057e9145b4b7e02afe50c1f879d Mon Sep 17 00:00:00 2001
From: David Kubek <dkubek@redhat.com>
Date: Tue, 24 Oct 2023 11:49:16 +0200
Subject: [PATCH 32/38] Fix certificate symlink handling
In response to the flaws identified in the originally delivered fix for
the feature enabling HTTP repositories, this commit addresses the
following issues:
1. Previously, files installed via RPMs that were originally symlinks
were being switched to standard files. This issue has been resolved
by preserving symlinks within the /etc/pki directory. Any symlink
pointing to a file within the /etc/pki directory (whether present in
the source system or installed by a package in the container) will be
present in the container, ensuring changes to certificates are
properly propagated.
2. Lists of trusted CAs were not being updated, as the update-ca-trust
call was missing inside the container. This commit now includes the
necessary update-ca-trust call.
The solution specification has been modified as follows:
- Certificate _files_ in /etc/pki (excluding symlinks) are copied to
the container as in the original solution.
- Files installed by packages within the container are preserved and
given higher priority.
- Handling of symlinks is enhanced, ensuring that symlinks within
the /etc/pki directory are preserved, while any symlink pointing
outside the /etc/pki directory will be copied as a file.
- Certificates are updated using `update-ca-trust`.
---
.../libraries/userspacegen.py | 124 ++++++++--
.../tests/unit_test_targetuserspacecreator.py | 224 ++++++++++++++++++
2 files changed, 332 insertions(+), 16 deletions(-)
diff --git a/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py b/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py
index 039b99a5..050ad7fe 100644
--- a/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py
+++ b/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py
@@ -331,12 +331,80 @@ def _get_files_owned_by_rpms(context, dirpath, pkgs=None, recursive=False):
return files_owned_by_rpms
+def _copy_decouple(srcdir, dstdir):
+ """
+ Copy `srcdir` to `dstdir` while decoupling symlinks.
+
+ What we mean by decoupling the `srcdir` is that any symlinks pointing
+ outside the directory will be copied as regular files. This means that the
+ directory will become independent from its surroundings with respect to
+ symlinks. Any symlink (or symlink chains) within the directory will be
+ preserved.
+
+ """
+
+ for root, dummy_dirs, files in os.walk(srcdir):
+ for filename in files:
+ relpath = os.path.relpath(root, srcdir)
+ source_filepath = os.path.join(root, filename)
+ target_filepath = os.path.join(dstdir, relpath, filename)
+
+ # Skip and report broken symlinks
+ if not os.path.exists(source_filepath):
+ api.current_logger().warning(
+ 'File {} is a broken symlink! Will not copy the file.'.format(source_filepath))
+ continue
+
+ # Copy symlinks to the target userspace
+ source_is_symlink = os.path.islink(source_filepath)
+ pointee = None
+ if source_is_symlink:
+ pointee = os.readlink(source_filepath)
+
+ # If source file is a symlink within `srcdir` then preserve it,
+ # otherwise resolve and copy it as a file it points to
+ if pointee is not None and not pointee.startswith(srcdir):
+ # Follow the path until we hit a file or get back to /etc/pki
+ while not pointee.startswith(srcdir) and os.path.islink(pointee):
+ pointee = os.readlink(pointee)
+
+ # Pointee points to a _regular file_ outside /etc/pki so we
+ # copy it instead
+ if not pointee.startswith(srcdir) and not os.path.islink(pointee):
+ source_is_symlink = False
+ source_filepath = pointee
+ else:
+ # pointee points back to /etc/pki
+ pass
+
+ # Ensure parent directory exists
+ parent_dir = os.path.dirname(target_filepath)
+ # Note: This is secure because we know that parent_dir is located
+ # inside of `$target_userspace/etc/pki` which is a directory that
+ # is not writable by unprivileged users. If this function is used
+ # elsewhere we may need to be more careful before running `mkdir -p`.
+ run(['mkdir', '-p', parent_dir])
+
+ if source_is_symlink:
+ # Preserve the owner and permissions of the original symlink
+ run(['ln', '-s', pointee, target_filepath])
+ run(['chmod', '--reference={}'.format(source_filepath), target_filepath])
+ continue
+
+ run(['cp', '-a', source_filepath, target_filepath])
+
+
def _copy_certificates(context, target_userspace):
"""
- Copy the needed certificates into the container, but preserve original ones
+ Copy certificates from source system into the container, but preserve
+ original ones
Some certificates are already installed in the container and those are
default certificates for the target OS, so we preserve these.
+
+ We respect the symlink hierarchy of the source system within the /etc/pki
+ folder. Dangling symlinks will be ignored.
+
"""
target_pki = os.path.join(target_userspace, 'etc', 'pki')
@@ -346,36 +414,56 @@ def _copy_certificates(context, target_userspace):
files_owned_by_rpms = _get_files_owned_by_rpms(target_context, '/etc/pki', recursive=True)
api.current_logger().debug('Files owned by rpms: {}'.format(' '.join(files_owned_by_rpms)))
+ # Backup container /etc/pki
run(['mv', target_pki, backup_pki])
- context.copytree_from('/etc/pki', target_pki)
+ # Copy source /etc/pki to the container
+ _copy_decouple('/etc/pki', target_pki)
+
+ # Assertion: after running _copy_decouple(), no broken symlinks exist in /etc/pki in the container
+ # So any broken symlinks created will be by the installed packages.
+
+ # Recover installed packages as they always get precedence
for filepath in files_owned_by_rpms:
src_path = os.path.join(backup_pki, filepath)
dst_path = os.path.join(target_pki, filepath)
# Resolve and skip any broken symlinks
is_broken_symlink = False
- while os.path.islink(src_path):
- # The symlink points to a path relative to the target userspace so
- # we need to readjust it
- next_path = os.path.join(target_userspace, os.readlink(src_path)[1:])
- if not os.path.exists(next_path):
- is_broken_symlink = True
-
- # The path original path of the broken symlink in the container
- report_path = os.path.join(target_pki, os.path.relpath(src_path, backup_pki))
- api.current_logger().warning('File {} is a broken symlink!'.format(report_path))
- break
-
- src_path = next_path
+ pointee = None
+ if os.path.islink(src_path):
+ pointee = os.path.join(target_userspace, os.readlink(src_path)[1:])
+
+ seen = set()
+ while os.path.islink(pointee):
+ # The symlink points to a path relative to the target userspace so
+ # we need to readjust it
+ pointee = os.path.join(target_userspace, os.readlink(src_path)[1:])
+ if not os.path.exists(pointee) or pointee in seen:
+ is_broken_symlink = True
+
+ # The path original path of the broken symlink in the container
+ report_path = os.path.join(target_pki, os.path.relpath(src_path, backup_pki))
+ api.current_logger().warning(
+ 'File {} is a broken symlink! Will not copy!'.format(report_path))
+ break
+
+ seen.add(pointee)
if is_broken_symlink:
continue
+ # Cleanup conflicting files
run(['rm', '-rf', dst_path])
+
+ # Ensure destination exists
parent_dir = os.path.dirname(dst_path)
run(['mkdir', '-p', parent_dir])
- run(['cp', '-a', src_path, dst_path])
+
+ # Copy the new file
+ run(['cp', '-R', '--preserve=all', src_path, dst_path])
+
+ run(['rm', '-rf', backup_pki])
def _prep_repository_access(context, target_userspace):
@@ -387,6 +475,10 @@ def _prep_repository_access(context, target_userspace):
backup_yum_repos_d = os.path.join(target_etc, 'yum.repos.d.backup')
_copy_certificates(context, target_userspace)
+ # NOTE(dkubek): context.call(['update-ca-trust']) seems to not be working.
+ # I am not really sure why. The changes to files are not
+ # being written to disk.
+ run(["chroot", target_userspace, "/bin/bash", "-c", "su - -c update-ca-trust"])
if not rhsm.skip_rhsm():
run(['rm', '-rf', os.path.join(target_etc, 'rhsm')])
diff --git a/repos/system_upgrade/common/actors/targetuserspacecreator/tests/unit_test_targetuserspacecreator.py b/repos/system_upgrade/common/actors/targetuserspacecreator/tests/unit_test_targetuserspacecreator.py
index cc684c7d..1a1ee56e 100644
--- a/repos/system_upgrade/common/actors/targetuserspacecreator/tests/unit_test_targetuserspacecreator.py
+++ b/repos/system_upgrade/common/actors/targetuserspacecreator/tests/unit_test_targetuserspacecreator.py
@@ -1,4 +1,8 @@
+from __future__ import division
+
import os
+import subprocess
+import sys
from collections import namedtuple
import pytest
@@ -11,6 +15,12 @@ from leapp.libraries.common.config import architecture
from leapp.libraries.common.testutils import CurrentActorMocked, logger_mocked, produce_mocked
from leapp.utils.deprecation import suppress_deprecation
+if sys.version_info < (2, 8):
+ from pathlib2 import Path
+else:
+ from pathlib import Path
+
+
CUR_DIR = os.path.dirname(os.path.abspath(__file__))
_CERTS_PATH = os.path.join(CUR_DIR, '../../../files', userspacegen.PROD_CERTS_FOLDER)
_DEFAULT_CERT_PATH = os.path.join(_CERTS_PATH, '8.1', '479.pem')
@@ -48,6 +58,220 @@ class MockedMountingBase(object):
pass
+def traverse_structure(structure, root=Path('/')):
+ for filename, links_to in structure.items():
+ filepath = root / filename
+
+ if isinstance(links_to, dict):
+ for pair in traverse_structure(links_to, filepath):
+ yield pair
+ else:
+ yield (filepath, links_to)
+
+
+def assert_directory_structure_matches(root, initial, expected):
+ # Assert every file that is supposed to be present is present
+ for filepath, links_to in traverse_structure(expected, root=root / 'expected'):
+ assert filepath.exists()
+
+ if links_to is None:
+ assert filepath.is_file()
+ continue
+
+ assert filepath.is_symlink()
+ assert os.readlink(str(filepath)) == str(root / 'initial' / links_to.lstrip('/'))
+
+ # Assert there are no extra files
+ result_dir = str(root / 'expected')
+ for fileroot, dummy_dirs, files in os.walk(result_dir):
+ for filename in files:
+ dir_path = os.path.relpath(fileroot, result_dir).split('/')
+
+ cwd = expected
+ for directory in dir_path:
+ cwd = cwd[directory]
+
+ assert filename in cwd
+
+ filepath = os.path.join(fileroot, filename)
+ if os.path.islink(filepath):
+ links_to = '/' + os.path.relpath(os.readlink(filepath), str(root / 'initial'))
+ assert cwd[filename] == links_to
+
+
+@pytest.fixture
+def temp_directory_layout(tmp_path, initial_structure):
+ for filepath, links_to in traverse_structure(initial_structure, root=tmp_path / 'initial'):
+ file_path = tmp_path / filepath
+ file_path.parent.mkdir(parents=True, exist_ok=True)
+
+ if links_to is None:
+ file_path.touch()
+ continue
+
+ file_path.symlink_to(tmp_path / 'initial' / links_to.lstrip('/'))
+
+ (tmp_path / 'expected').mkdir()
+ assert (tmp_path / 'expected').exists()
+
+ return tmp_path
+
+
+# The semantics of initial_structure and expected_structure are as follows:
+#
+# 1. The outermost dictionary encodes the root of a directory structure
+#
+# 2. Depending on the value for a key in a dict, each key in the dictionary
+# denotes the name of either a:
+# a) directory -- if value is dict
+# b) regular file -- if value is None
+# c) symlink -- if a value is str
+#
+# 3. The value of a symlink entry is an absolute path to a file in the context of
+# the structure.
+#
+@pytest.mark.parametrize('initial_structure,expected_structure', [
+ ({ # Copy a regular file
+ 'dir': {
+ 'fileA': None
+ }
+ }, {
+ 'dir': {
+ 'fileA': None
+ }
+ }),
+ ({ # Do not copy a broken symlink
+ 'dir': {
+ 'fileA': 'nonexistent'
+ }
+ }, {
+ 'dir': {}
+ }),
+ ({ # Copy a regular symlink
+ 'dir': {
+ 'fileA': '/dir/fileB',
+ 'fileB': None
+ }
+ }, {
+ 'dir': {
+ 'fileA': '/dir/fileB',
+ 'fileB': None
+ }
+ }),
+ ({ # Do not copy a chain of broken symlinks
+ 'dir': {
+ 'fileA': '/dir/fileB',
+ 'fileB': 'nonexistent'
+ }
+ }, {
+ 'dir': {}
+ }),
+ ({ # Copy a chain of symlinks
+ 'dir': {
+ 'fileA': '/dir/fileB',
+ 'fileB': '/dir/fileC',
+ 'fileC': None
+ }
+ }, {
+ 'dir': {
+ 'fileA': '/dir/fileB',
+ 'fileB': '/dir/fileC',
+ 'fileC': None
+ }
+ }),
+ ({ # Circular symlinks
+ 'dir': {
+ 'fileA': '/dir/fileB',
+ 'fileB': '/dir/fileC',
+ 'fileC': '/dir/fileC',
+ }
+ }, {
+ 'dir': {}
+ }),
+ ({ # Copy a link to a file outside the considered directory as file
+ 'dir': {
+ 'fileA': '/dir/fileB',
+ 'fileB': '/dir/fileC',
+ 'fileC': '/outside/fileOut',
+ 'fileE': None
+ },
+ 'outside': {
+ 'fileOut': '/outside/fileD',
+ 'fileD': '/dir/fileE'
+ }
+ }, {
+ 'dir': {
+ 'fileA': '/dir/fileB',
+ 'fileB': '/dir/fileC',
+ 'fileC': '/dir/fileE',
+ 'fileE': None,
+ }
+ }),
+ ({ # Same test with a nested structure within the source dir
+ 'dir': {
+ 'nested': {
+ 'fileA': '/dir/nested/fileB',
+ 'fileB': '/dir/nested/fileC',
+ 'fileC': '/outside/fileOut',
+ 'fileE': None
+ }
+ },
+ 'outside': {
+ 'fileOut': '/outside/fileD',
+ 'fileD': '/dir/nested/fileE'
+ }
+ }, {
+ 'dir': {
+ 'nested': {
+ 'fileA': '/dir/nested/fileB',
+ 'fileB': '/dir/nested/fileC',
+ 'fileC': '/dir/nested/fileE',
+ 'fileE': None
+ }
+ }
+ }),
+ ({ # Same test with a nested structure in the outside dir
+ 'dir': {
+ 'fileA': '/dir/fileB',
+ 'fileB': '/dir/fileC',
+ 'fileC': '/outside/nested/fileOut',
+ 'fileE': None
+ },
+ 'outside': {
+ 'nested': {
+ 'fileOut': '/outside/nested/fileD',
+ 'fileD': '/dir/fileE'
+ }
+ }
+ }, {
+ 'dir': {
+ 'fileA': '/dir/fileB',
+ 'fileB': '/dir/fileC',
+ 'fileC': '/dir/fileE',
+ 'fileE': None,
+ }
+ }),
+]
+)
+def test_copy_decouple(monkeypatch, temp_directory_layout, initial_structure, expected_structure):
+
+ def run_mocked(command):
+ subprocess.Popen(
+ ' '.join(command),
+ shell=True,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.STDOUT,
+ ).wait()
+
+ monkeypatch.setattr(userspacegen, 'run', run_mocked)
+ userspacegen._copy_decouple(
+ str(temp_directory_layout / 'initial' / 'dir'),
+ str(temp_directory_layout / 'expected' / 'dir'),
+ )
+
+ assert_directory_structure_matches(temp_directory_layout, initial_structure, expected_structure)
+
+
@pytest.mark.parametrize('result,dst_ver,arch,prod_type', [
(os.path.join(_CERTS_PATH, '8.1', '479.pem'), '8.1', architecture.ARCH_X86_64, 'ga'),
(os.path.join(_CERTS_PATH, '8.1', '419.pem'), '8.1', architecture.ARCH_ARM64, 'ga'),
--
2.41.0
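
For reference, the dictionary fixtures used by the test above encode a directory tree: a dict value is a subdirectory, None is a regular file, and a string is an absolute symlink target within the fixture. Below is an illustrative, standalone sketch (not part of the patch) of how such a structure flattens into (path, target) pairs; the demo root path is made up:

from pathlib import Path

def traverse_structure(structure, root=Path('/')):
    # dicts are directories, None marks a regular file,
    # a string is the absolute target of a symlink
    for filename, links_to in structure.items():
        filepath = root / filename
        if isinstance(links_to, dict):
            for pair in traverse_structure(links_to, filepath):
                yield pair
        else:
            yield (filepath, links_to)

structure = {'dir': {'fileA': '/dir/fileB', 'fileB': None}}
for path, target in traverse_structure(structure, root=Path('/tmp/demo')):
    print(path, '->', target)
# /tmp/demo/dir/fileA -> /dir/fileB
# /tmp/demo/dir/fileB -> None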

View File

@ -1,701 +0,0 @@
From b099660b5a11ca09b3bc80bab105ba89322a331f Mon Sep 17 00:00:00 2001
From: Petr Stodulka <pstodulk@redhat.com>
Date: Wed, 15 Nov 2023 15:10:15 +0100
Subject: [PATCH 33/38] Add prod certs and upgrade paths for 8.10 & 9.4
---
.../common/files/prod-certs/8.10/279.pem | 35 ++++++++++++++++++
.../common/files/prod-certs/8.10/362.pem | 36 +++++++++++++++++++
.../common/files/prod-certs/8.10/363.pem | 35 ++++++++++++++++++
.../common/files/prod-certs/8.10/419.pem | 35 ++++++++++++++++++
.../common/files/prod-certs/8.10/433.pem | 35 ++++++++++++++++++
.../common/files/prod-certs/8.10/479.pem | 35 ++++++++++++++++++
.../common/files/prod-certs/8.10/486.pem | 35 ++++++++++++++++++
.../common/files/prod-certs/8.10/72.pem | 35 ++++++++++++++++++
.../common/files/prod-certs/9.4/279.pem | 35 ++++++++++++++++++
.../common/files/prod-certs/9.4/362.pem | 36 +++++++++++++++++++
.../common/files/prod-certs/9.4/363.pem | 35 ++++++++++++++++++
.../common/files/prod-certs/9.4/419.pem | 35 ++++++++++++++++++
.../common/files/prod-certs/9.4/433.pem | 35 ++++++++++++++++++
.../common/files/prod-certs/9.4/479.pem | 35 ++++++++++++++++++
.../common/files/prod-certs/9.4/486.pem | 35 ++++++++++++++++++
.../common/files/prod-certs/9.4/72.pem | 35 ++++++++++++++++++
16 files changed, 562 insertions(+)
create mode 100644 repos/system_upgrade/common/files/prod-certs/8.10/279.pem
create mode 100644 repos/system_upgrade/common/files/prod-certs/8.10/362.pem
create mode 100644 repos/system_upgrade/common/files/prod-certs/8.10/363.pem
create mode 100644 repos/system_upgrade/common/files/prod-certs/8.10/419.pem
create mode 100644 repos/system_upgrade/common/files/prod-certs/8.10/433.pem
create mode 100644 repos/system_upgrade/common/files/prod-certs/8.10/479.pem
create mode 100644 repos/system_upgrade/common/files/prod-certs/8.10/486.pem
create mode 100644 repos/system_upgrade/common/files/prod-certs/8.10/72.pem
create mode 100644 repos/system_upgrade/common/files/prod-certs/9.4/279.pem
create mode 100644 repos/system_upgrade/common/files/prod-certs/9.4/362.pem
create mode 100644 repos/system_upgrade/common/files/prod-certs/9.4/363.pem
create mode 100644 repos/system_upgrade/common/files/prod-certs/9.4/419.pem
create mode 100644 repos/system_upgrade/common/files/prod-certs/9.4/433.pem
create mode 100644 repos/system_upgrade/common/files/prod-certs/9.4/479.pem
create mode 100644 repos/system_upgrade/common/files/prod-certs/9.4/486.pem
create mode 100644 repos/system_upgrade/common/files/prod-certs/9.4/72.pem
diff --git a/repos/system_upgrade/common/files/prod-certs/8.10/279.pem b/repos/system_upgrade/common/files/prod-certs/8.10/279.pem
new file mode 100644
index 00000000..e5cd4895
--- /dev/null
+++ b/repos/system_upgrade/common/files/prod-certs/8.10/279.pem
@@ -0,0 +1,35 @@
+-----BEGIN CERTIFICATE-----
+MIIGJjCCBA6gAwIBAgIJALDxRLt/tVC4MA0GCSqGSIb3DQEBCwUAMIGuMQswCQYD
+VQQGEwJVUzEXMBUGA1UECAwOTm9ydGggQ2Fyb2xpbmExFjAUBgNVBAoMDVJlZCBI
+YXQsIEluYy4xGDAWBgNVBAsMD1JlZCBIYXQgTmV0d29yazEuMCwGA1UEAwwlUmVk
+IEhhdCBFbnRpdGxlbWVudCBQcm9kdWN0IEF1dGhvcml0eTEkMCIGCSqGSIb3DQEJ
+ARYVY2Etc3VwcG9ydEByZWRoYXQuY29tMB4XDTIzMDcxMjIxMjMzOFoXDTQzMDcx
+MjIxMjMzOFowRDFCMEAGA1UEAww5UmVkIEhhdCBQcm9kdWN0IElEIFtkOTE4MGJk
+ZS1jZjdiLTRlMzktODY3Yy01YjlhZjQwYTczM2ZdMIICIjANBgkqhkiG9w0BAQEF
+AAOCAg8AMIICCgKCAgEAxj9J04z+Ezdyx1U33kFftLv0ntNS1BSeuhoZLDhs18yk
+sepG7hXXtHh2CMFfLZmTjAyL9i1XsxykQpVQdXTGpUF33C2qBQHB5glYs9+d781x
+8p8m8zFxbPcW82TIJXbgW3ErVh8vk5qCbG1cCAAHb+DWMq0EAyy1bl/JgAghYNGB
+RvKJObTdCrdpYh02KUqBLkSPZHvo6DUJFN37MXDpVeQq9VtqRjpKLLwuEfXb0Y7I
+5xEOrR3kYbOaBAWVt3mYZ1t0L/KfY2jVOdU5WFyyB9PhbMdLi1xE801j+GJrwcLa
+xmqvj4UaICRzcPATP86zVM1BBQa+lilkRQes5HyjZzZDiGYudnXhbqmLo/n0cuXo
+QBVVjhzRTMx71Eiiahmiw+U1vGqkHhQNxb13HtN1lcAhUCDrxxeMvrAjYdWpYlpI
+yW3NssPWt1YUHidMBSAJ4KctIf91dyE93aStlxwC/QnyFsZOmcEsBzVCnz9GmWMl
+1/6XzBS1yDUqByklx0TLH+z/sK9A+O2rZAy1mByCYwVxvbOZhnqGxAuToIS+A81v
+5hCjsCiOScVB+cil30YBu0cH85RZ0ILNkHdKdrLLWW4wjphK2nBn2g2i3+ztf+nQ
+ED2pQqZ/rhuW79jcyCZl9kXqe1wOdF0Cwah4N6/3LzIXEEKyEJxNqQwtNc2IVE8C
+AwEAAaOBrzCBrDAJBgNVHRMEAjAAMEMGDCsGAQQBkggJAYIXAQQzDDFSZWQgSGF0
+IEVudGVycHJpc2UgTGludXggZm9yIFBvd2VyLCBsaXR0bGUgZW5kaWFuMBYGDCsG
+AQQBkggJAYIXAgQGDAQ4LjEwMBkGDCsGAQQBkggJAYIXAwQJDAdwcGM2NGxlMCcG
+DCsGAQQBkggJAYIXBAQXDBVyaGVsLTgscmhlbC04LXBwYzY0bGUwDQYJKoZIhvcN
+AQELBQADggIBAIekB01efwoAed6nHz/siMJ+F4M/AiuaVxl6BoPDxTEC2nLf0pJH
+qaA1wWUltdP7W6YDNuq3KjdigeOG0MYcL6jphWEyC2s94XGdIMpU1lXCIKrjlt/D
+HD2MqYNwMsLOTt7CCayVwkZN0tLpLMybrhPjdMq6hOu3Fg1qyf8KQAjkKRF98n6Y
+dQuEW2rpwaSPAyucgIAKy8w7vwL/ABSNlHO7vL3yNarKSN0cNjS3b/pjBnC1LClL
+zQJY89GzYV2vgctjBqKkpJMccHDwVXkzZIcD5tFOOnq4GwGcKHucQJs7uMY8xvKB
+/7S917v2ryVveHYKm6bUD1AwnXGFd1timpKHxvRqIJqGi0tzTITD2joiLdyF0iPf
+bbet4WWgpwudwLc6Q6lI7SSXMWPOp3eZTtYAQhOcM7BymbST5jum5Rs+lzvY3lHn
+SIJsZnx4Q+31c0D412BH4hLHVrDgzQBIlbDwToVJFays/8dX8nixEZkUlHBZTSHk
+XSYFml/GgKMJ6C3aytK8B84mIzZlc3YMwVEmlqVWwylSufTnK678jBNHjVE/Nm1V
+VgwhNZXacSf5Q0/WBN5GqmkqQqktNlKdIDenr/f1psh9Tvz3j5aJQPV6UOYm6m5A
+FrdJMf4Gc4Snn1WAa/bElspZBc3pXnJkZBkxsk5UvvKMlEvCWqFYtQfY
+-----END CERTIFICATE-----
diff --git a/repos/system_upgrade/common/files/prod-certs/8.10/362.pem b/repos/system_upgrade/common/files/prod-certs/8.10/362.pem
new file mode 100644
index 00000000..51ce132a
--- /dev/null
+++ b/repos/system_upgrade/common/files/prod-certs/8.10/362.pem
@@ -0,0 +1,36 @@
+-----BEGIN CERTIFICATE-----
+MIIGNTCCBB2gAwIBAgIJALDxRLt/tVCiMA0GCSqGSIb3DQEBCwUAMIGuMQswCQYD
+VQQGEwJVUzEXMBUGA1UECAwOTm9ydGggQ2Fyb2xpbmExFjAUBgNVBAoMDVJlZCBI
+YXQsIEluYy4xGDAWBgNVBAsMD1JlZCBIYXQgTmV0d29yazEuMCwGA1UEAwwlUmVk
+IEhhdCBFbnRpdGxlbWVudCBQcm9kdWN0IEF1dGhvcml0eTEkMCIGCSqGSIb3DQEJ
+ARYVY2Etc3VwcG9ydEByZWRoYXQuY29tMB4XDTIzMDcxMjIxMjMyMFoXDTQzMDcx
+MjIxMjMyMFowRDFCMEAGA1UEAww5UmVkIEhhdCBQcm9kdWN0IElEIFthOWU3ZmM1
+Mi05MDgyLTRiYWUtODJiMi0yYTEyZDkxYmNiMzZdMIICIjANBgkqhkiG9w0BAQEF
+AAOCAg8AMIICCgKCAgEAxj9J04z+Ezdyx1U33kFftLv0ntNS1BSeuhoZLDhs18yk
+sepG7hXXtHh2CMFfLZmTjAyL9i1XsxykQpVQdXTGpUF33C2qBQHB5glYs9+d781x
+8p8m8zFxbPcW82TIJXbgW3ErVh8vk5qCbG1cCAAHb+DWMq0EAyy1bl/JgAghYNGB
+RvKJObTdCrdpYh02KUqBLkSPZHvo6DUJFN37MXDpVeQq9VtqRjpKLLwuEfXb0Y7I
+5xEOrR3kYbOaBAWVt3mYZ1t0L/KfY2jVOdU5WFyyB9PhbMdLi1xE801j+GJrwcLa
+xmqvj4UaICRzcPATP86zVM1BBQa+lilkRQes5HyjZzZDiGYudnXhbqmLo/n0cuXo
+QBVVjhzRTMx71Eiiahmiw+U1vGqkHhQNxb13HtN1lcAhUCDrxxeMvrAjYdWpYlpI
+yW3NssPWt1YUHidMBSAJ4KctIf91dyE93aStlxwC/QnyFsZOmcEsBzVCnz9GmWMl
+1/6XzBS1yDUqByklx0TLH+z/sK9A+O2rZAy1mByCYwVxvbOZhnqGxAuToIS+A81v
+5hCjsCiOScVB+cil30YBu0cH85RZ0ILNkHdKdrLLWW4wjphK2nBn2g2i3+ztf+nQ
+ED2pQqZ/rhuW79jcyCZl9kXqe1wOdF0Cwah4N6/3LzIXEEKyEJxNqQwtNc2IVE8C
+AwEAAaOBvjCBuzAJBgNVHRMEAjAAMEgGDCsGAQQBkggJAYJqAQQ4DDZSZWQgSGF0
+IEVudGVycHJpc2UgTGludXggZm9yIFBvd2VyLCBsaXR0bGUgZW5kaWFuIEJldGEw
+GwYMKwYBBAGSCAkBgmoCBAsMCTguMTAgQmV0YTAZBgwrBgEEAZIICQGCagMECQwH
+cHBjNjRsZTAsBgwrBgEEAZIICQGCagQEHAwacmhlbC04LHJoZWwtOC1iZXRhLXBw
+YzY0bGUwDQYJKoZIhvcNAQELBQADggIBAB/7qr5HDUDdX2MBQSGStHTvT2Bepy1L
+ZWWrjFmoGOobW+Lee8/J7hPU5PNka7zqOjFFwi3oWOiPTMnJj3AkqWPaUnPemS/Q
+Jy9YDd14GZGefUAiczJYw5ZeY4HbOBEvPBnU/gSn3qbNiKZzWRR+cpD2SLF1pgIL
+05LU0+EKlIT8SNvTui3pFOqjuOeXPHeCF7sGG8r0ZEFtkyrqFReNT8iXy8wadG7k
+NcwMFttl0XR5qUWJbhkhMasMsyy2JZmdTzmqodxYvlhfpe+4naPOVH8brKkwM+iH
+sDZ2fFL+KOOUmybeV5bsOjGtcfbkKJ5g+h2JyyyO2O2p5hXsnpf7cSjwF2c07QaT
+SihdvNPA5V2UUPCScF9eAXveJeMFS+JOJDDyohxpr8uzg8Pz4dlMFe9YX4YUBP6I
+Kx3BWh5yagrGCyMAlw27IUeoVELWQXRaZnXngDO+2y/RDj2wVJi3gcajsrcHsjSn
+s5yQfNOb2hu6W13QbjXqFj8NZoszG120F3G09oC/wzYf5PCD+7PeVMKKefZfeWSw
+NEWrrBBZI6mJyVVeH1MLLdehI8Qt5ymBNELjNy5l8ITBFWFVqHYoRvY0kyDF1d8X
+o7Vk8hgiqShporkHWvW/sz/rFjvW6VRUu5Qx3KiXWnGIIM/Vq4FF9CjogvIvKWTN
+Oi1mTwT3Uq5c
+-----END CERTIFICATE-----
diff --git a/repos/system_upgrade/common/files/prod-certs/8.10/363.pem b/repos/system_upgrade/common/files/prod-certs/8.10/363.pem
new file mode 100644
index 00000000..7e7377f5
--- /dev/null
+++ b/repos/system_upgrade/common/files/prod-certs/8.10/363.pem
@@ -0,0 +1,35 @@
+-----BEGIN CERTIFICATE-----
+MIIGJzCCBA+gAwIBAgIJALDxRLt/tVChMA0GCSqGSIb3DQEBCwUAMIGuMQswCQYD
+VQQGEwJVUzEXMBUGA1UECAwOTm9ydGggQ2Fyb2xpbmExFjAUBgNVBAoMDVJlZCBI
+YXQsIEluYy4xGDAWBgNVBAsMD1JlZCBIYXQgTmV0d29yazEuMCwGA1UEAwwlUmVk
+IEhhdCBFbnRpdGxlbWVudCBQcm9kdWN0IEF1dGhvcml0eTEkMCIGCSqGSIb3DQEJ
+ARYVY2Etc3VwcG9ydEByZWRoYXQuY29tMB4XDTIzMDcxMjIxMjMyMFoXDTQzMDcx
+MjIxMjMyMFowRDFCMEAGA1UEAww5UmVkIEhhdCBQcm9kdWN0IElEIFs2NDA1ZWIw
+My04OTQzLTQ1ZTAtYjFiMC1mOTBiZmEzZDk2YjNdMIICIjANBgkqhkiG9w0BAQEF
+AAOCAg8AMIICCgKCAgEAxj9J04z+Ezdyx1U33kFftLv0ntNS1BSeuhoZLDhs18yk
+sepG7hXXtHh2CMFfLZmTjAyL9i1XsxykQpVQdXTGpUF33C2qBQHB5glYs9+d781x
+8p8m8zFxbPcW82TIJXbgW3ErVh8vk5qCbG1cCAAHb+DWMq0EAyy1bl/JgAghYNGB
+RvKJObTdCrdpYh02KUqBLkSPZHvo6DUJFN37MXDpVeQq9VtqRjpKLLwuEfXb0Y7I
+5xEOrR3kYbOaBAWVt3mYZ1t0L/KfY2jVOdU5WFyyB9PhbMdLi1xE801j+GJrwcLa
+xmqvj4UaICRzcPATP86zVM1BBQa+lilkRQes5HyjZzZDiGYudnXhbqmLo/n0cuXo
+QBVVjhzRTMx71Eiiahmiw+U1vGqkHhQNxb13HtN1lcAhUCDrxxeMvrAjYdWpYlpI
+yW3NssPWt1YUHidMBSAJ4KctIf91dyE93aStlxwC/QnyFsZOmcEsBzVCnz9GmWMl
+1/6XzBS1yDUqByklx0TLH+z/sK9A+O2rZAy1mByCYwVxvbOZhnqGxAuToIS+A81v
+5hCjsCiOScVB+cil30YBu0cH85RZ0ILNkHdKdrLLWW4wjphK2nBn2g2i3+ztf+nQ
+ED2pQqZ/rhuW79jcyCZl9kXqe1wOdF0Cwah4N6/3LzIXEEKyEJxNqQwtNc2IVE8C
+AwEAAaOBsDCBrTAJBgNVHRMEAjAAMDoGDCsGAQQBkggJAYJrAQQqDChSZWQgSGF0
+IEVudGVycHJpc2UgTGludXggZm9yIEFSTSA2NCBCZXRhMBsGDCsGAQQBkggJAYJr
+AgQLDAk4LjEwIEJldGEwGQYMKwYBBAGSCAkBgmsDBAkMB2FhcmNoNjQwLAYMKwYB
+BAGSCAkBgmsEBBwMGnJoZWwtOCxyaGVsLTgtYmV0YS1hYXJjaDY0MA0GCSqGSIb3
+DQEBCwUAA4ICAQA6dNrnTJSRXP0e4YM0m+0YRtI7Zh5mI67FA0HcYpnI6VDSl6KC
+9jQwb8AXT/FN35UJ68V/0TdR7K2FFbk4iZ/zTJACQBV+SnjkJRa/a46FA3qgftiW
+Lo74gTdYTqgo3pOgCYDrY8YnEtrTLdTdzVc95MLV5DdQuqyI1whawzW5b/DSildc
+f0rwI7kaSEl4NSc4ZZEiT9Qq3S/QGd2pIYGpDA+4WYXA2Nnlt/W31Khm7G+r7suj
+j9NNYs8Ddc63o86NBSLyKrCwry9lrn/1Vt8j5LQsiuHhjmxu5YMemvUPGR9o87r5
+1dEMAN4fwY4RULy072UjLoyWLHlRx8N9lCcHtQjbakmq9Ic+le2onvlq9yJ3nsWS
+kd1SUHtl/Ag/t6Qe5a+tWxZpUY2sG/nrrtdEK3zlMK665qlWoHuCRPcjQFa2UltR
+8qtO4AehozcKjR8HSS2BeDsR9IyBxDUYLkwY7sS33CbJAJcFfsV2h7usM9gEogp4
+xuzxgEQEEwi/z3dXYvDuw9RPKE7jEYG+7xrYuG5KGz2bD1NEo2pMs5T9ZkklmRGQ
+JOrDe2uI9X1x0Rz+DbFvR6vUYrZ9aYtPOQ5u3VU0pGszwXNZDNILc9W8Qakci4y3
+BBHqh7EVE4MN1PEDoT0NnvXsYBXoEwxBg4KihqgKqPT9titqeFWzUOWtRw==
+-----END CERTIFICATE-----
diff --git a/repos/system_upgrade/common/files/prod-certs/8.10/419.pem b/repos/system_upgrade/common/files/prod-certs/8.10/419.pem
new file mode 100644
index 00000000..7f3e91af
--- /dev/null
+++ b/repos/system_upgrade/common/files/prod-certs/8.10/419.pem
@@ -0,0 +1,35 @@
+-----BEGIN CERTIFICATE-----
+MIIGGDCCBACgAwIBAgIJALDxRLt/tVC3MA0GCSqGSIb3DQEBCwUAMIGuMQswCQYD
+VQQGEwJVUzEXMBUGA1UECAwOTm9ydGggQ2Fyb2xpbmExFjAUBgNVBAoMDVJlZCBI
+YXQsIEluYy4xGDAWBgNVBAsMD1JlZCBIYXQgTmV0d29yazEuMCwGA1UEAwwlUmVk
+IEhhdCBFbnRpdGxlbWVudCBQcm9kdWN0IEF1dGhvcml0eTEkMCIGCSqGSIb3DQEJ
+ARYVY2Etc3VwcG9ydEByZWRoYXQuY29tMB4XDTIzMDcxMjIxMjMzOFoXDTQzMDcx
+MjIxMjMzOFowRDFCMEAGA1UEAww5UmVkIEhhdCBQcm9kdWN0IElEIFtiNGFlN2Yx
+OS04OGU5LTRhNmItYTU3Ni0xYjllMjU1YWNkZTZdMIICIjANBgkqhkiG9w0BAQEF
+AAOCAg8AMIICCgKCAgEAxj9J04z+Ezdyx1U33kFftLv0ntNS1BSeuhoZLDhs18yk
+sepG7hXXtHh2CMFfLZmTjAyL9i1XsxykQpVQdXTGpUF33C2qBQHB5glYs9+d781x
+8p8m8zFxbPcW82TIJXbgW3ErVh8vk5qCbG1cCAAHb+DWMq0EAyy1bl/JgAghYNGB
+RvKJObTdCrdpYh02KUqBLkSPZHvo6DUJFN37MXDpVeQq9VtqRjpKLLwuEfXb0Y7I
+5xEOrR3kYbOaBAWVt3mYZ1t0L/KfY2jVOdU5WFyyB9PhbMdLi1xE801j+GJrwcLa
+xmqvj4UaICRzcPATP86zVM1BBQa+lilkRQes5HyjZzZDiGYudnXhbqmLo/n0cuXo
+QBVVjhzRTMx71Eiiahmiw+U1vGqkHhQNxb13HtN1lcAhUCDrxxeMvrAjYdWpYlpI
+yW3NssPWt1YUHidMBSAJ4KctIf91dyE93aStlxwC/QnyFsZOmcEsBzVCnz9GmWMl
+1/6XzBS1yDUqByklx0TLH+z/sK9A+O2rZAy1mByCYwVxvbOZhnqGxAuToIS+A81v
+5hCjsCiOScVB+cil30YBu0cH85RZ0ILNkHdKdrLLWW4wjphK2nBn2g2i3+ztf+nQ
+ED2pQqZ/rhuW79jcyCZl9kXqe1wOdF0Cwah4N6/3LzIXEEKyEJxNqQwtNc2IVE8C
+AwEAAaOBoTCBnjAJBgNVHRMEAjAAMDUGDCsGAQQBkggJAYMjAQQlDCNSZWQgSGF0
+IEVudGVycHJpc2UgTGludXggZm9yIEFSTSA2NDAWBgwrBgEEAZIICQGDIwIEBgwE
+OC4xMDAZBgwrBgEEAZIICQGDIwMECQwHYWFyY2g2NDAnBgwrBgEEAZIICQGDIwQE
+FwwVcmhlbC04LHJoZWwtOC1hYXJjaDY0MA0GCSqGSIb3DQEBCwUAA4ICAQBsYUdo
+XoheLJKoMUt8cnq0eLfsXORLtYbFsFpemGx0i/Bc1Gy/ZO99Z5X50fn7jjGI1jFg
+GkRdz0q+inZilds3xD4hIhMHrX5nxupC6Ao5n1jDLQNYFFpLlKODStQHjv8KUMzY
+iFY4kCnC1AmfClEx+oM32gEb5O9okyNDAZhuQYUT6YMhpbcm2tVNtw08OvcJfXqP
+lQWzzB21jlqW79cBm3u/5mrHWBFSkbqOys6WjznMVBo77y32W4y3/TYebN64IfRA
+QouQasPXJ+PPP34rXZmTMhSEbU712fYmby913w+17M6u6FWQjLpGA3pancWLrXqo
+Fu1THyO0eyZDRf6IoMFlNZTqJs4Sd96zhNQOcetDnebR9n9oDSjs8zO8AmDtAUox
+Ni6hR2SF4JAgViARPC9kqEWNKg957mySz0JifPVCKW+uWhLAej2AaJMWaPsrtQfj
+k4EiDPrgXFw6C6s5ilf1653QT1PN3d4PLVh8K4iTwfanPHIQ5lJX8tYXWBDCwJ6n
+aY5SX340p542uMuP0/LkGu2Q0I8gH2Qv4v12zkQ8lAp1PND79xwbP9QK0Swuc8TP
+ob9tipL9hhp2SJqHjiD5lbP8r3NpZ+NEEKfnv1mH0iMVCRg6Nz4MJyV/u4Zk3bvw
+2vYet0eK5Dy9amxFK+uun5IyPi2xTm29T8E5Nw==
+-----END CERTIFICATE-----
diff --git a/repos/system_upgrade/common/files/prod-certs/8.10/433.pem b/repos/system_upgrade/common/files/prod-certs/8.10/433.pem
new file mode 100644
index 00000000..d2374e61
--- /dev/null
+++ b/repos/system_upgrade/common/files/prod-certs/8.10/433.pem
@@ -0,0 +1,35 @@
+-----BEGIN CERTIFICATE-----
+MIIGKjCCBBKgAwIBAgIJALDxRLt/tVCjMA0GCSqGSIb3DQEBCwUAMIGuMQswCQYD
+VQQGEwJVUzEXMBUGA1UECAwOTm9ydGggQ2Fyb2xpbmExFjAUBgNVBAoMDVJlZCBI
+YXQsIEluYy4xGDAWBgNVBAsMD1JlZCBIYXQgTmV0d29yazEuMCwGA1UEAwwlUmVk
+IEhhdCBFbnRpdGxlbWVudCBQcm9kdWN0IEF1dGhvcml0eTEkMCIGCSqGSIb3DQEJ
+ARYVY2Etc3VwcG9ydEByZWRoYXQuY29tMB4XDTIzMDcxMjIxMjMyMVoXDTQzMDcx
+MjIxMjMyMVowRDFCMEAGA1UEAww5UmVkIEhhdCBQcm9kdWN0IElEIFs0Y2M0ZWZk
+Ni03NzlhLTQwNmQtOTNhMy0zZTI5YzM0NThkNGNdMIICIjANBgkqhkiG9w0BAQEF
+AAOCAg8AMIICCgKCAgEAxj9J04z+Ezdyx1U33kFftLv0ntNS1BSeuhoZLDhs18yk
+sepG7hXXtHh2CMFfLZmTjAyL9i1XsxykQpVQdXTGpUF33C2qBQHB5glYs9+d781x
+8p8m8zFxbPcW82TIJXbgW3ErVh8vk5qCbG1cCAAHb+DWMq0EAyy1bl/JgAghYNGB
+RvKJObTdCrdpYh02KUqBLkSPZHvo6DUJFN37MXDpVeQq9VtqRjpKLLwuEfXb0Y7I
+5xEOrR3kYbOaBAWVt3mYZ1t0L/KfY2jVOdU5WFyyB9PhbMdLi1xE801j+GJrwcLa
+xmqvj4UaICRzcPATP86zVM1BBQa+lilkRQes5HyjZzZDiGYudnXhbqmLo/n0cuXo
+QBVVjhzRTMx71Eiiahmiw+U1vGqkHhQNxb13HtN1lcAhUCDrxxeMvrAjYdWpYlpI
+yW3NssPWt1YUHidMBSAJ4KctIf91dyE93aStlxwC/QnyFsZOmcEsBzVCnz9GmWMl
+1/6XzBS1yDUqByklx0TLH+z/sK9A+O2rZAy1mByCYwVxvbOZhnqGxAuToIS+A81v
+5hCjsCiOScVB+cil30YBu0cH85RZ0ILNkHdKdrLLWW4wjphK2nBn2g2i3+ztf+nQ
+ED2pQqZ/rhuW79jcyCZl9kXqe1wOdF0Cwah4N6/3LzIXEEKyEJxNqQwtNc2IVE8C
+AwEAAaOBszCBsDAJBgNVHRMEAjAAMEEGDCsGAQQBkggJAYMxAQQxDC9SZWQgSGF0
+IEVudGVycHJpc2UgTGludXggZm9yIElCTSB6IFN5c3RlbXMgQmV0YTAbBgwrBgEE
+AZIICQGDMQIECwwJOC4xMCBCZXRhMBcGDCsGAQQBkggJAYMxAwQHDAVzMzkweDAq
+BgwrBgEEAZIICQGDMQQEGgwYcmhlbC04LHJoZWwtOC1iZXRhLXMzOTB4MA0GCSqG
+SIb3DQEBCwUAA4ICAQBwAhNSGFtdCSq4Q9NnnUXaH96asoGlwkW1KqcDUOJjw8nP
+j6svp9BB/ieqNpNL4Xa5Yf5oFDb2i0pYVUE1rTsVzsakqg0Ny87ARCZ/Ofwh4J9C
+9as722+VquxVWhvGL+Wx2LNrFseRJsD91dD2kUbKGSPDyW3dwpdTsfKF22LVVcwn
+oWc92VyoPm949wt8nytW2H4Rd4mCGLPpd2xoLemf6fgbDgqdbZEs8EUC0vlRon97
+ZEtNBFYEWNJCi/VMGPasele2rdn1/uYghVlLgQGwk0C0aj0a4P/DIyC9gmL+Wcmo
+ZOslsdAl5wl/7hQ/myRMsjCtd9CTFiXACNmHT+16jjvw09xae3vivd4XaDrUpVPn
+TelOfBM9GDd1yqFDa6t6SdS/SNCw2XV0S41gFvDeeskJjvfvpuJ63otjbc/RATMD
+oIlU7YaL5l0Wx/3IOHX8bo08xxILlBywVOxLYjdjJA0jwWW1rUSXvsZqHHPVObYW
+9eLybvkZ+8Ob72QzgNZA6yCuYrVLQV53pAfliVljB+fQVM6Qh/G8OO9CpiY8fnBr
+z+XbIJb+WlSuHmuCVayTG4/VDlYOMpUvOWw6x3fq8qxj8eX2C8r5v3qa0L2joF+Z
+wlVQOuIsrS5i8lmqBO5+Qg07zmCM7xWEfwxOCVbMMoXmjMlLQDMS2slXRwtKaQ==
+-----END CERTIFICATE-----
diff --git a/repos/system_upgrade/common/files/prod-certs/8.10/479.pem b/repos/system_upgrade/common/files/prod-certs/8.10/479.pem
new file mode 100644
index 00000000..9e98c9c5
--- /dev/null
+++ b/repos/system_upgrade/common/files/prod-certs/8.10/479.pem
@@ -0,0 +1,35 @@
+-----BEGIN CERTIFICATE-----
+MIIGFjCCA/6gAwIBAgIJALDxRLt/tVC6MA0GCSqGSIb3DQEBCwUAMIGuMQswCQYD
+VQQGEwJVUzEXMBUGA1UECAwOTm9ydGggQ2Fyb2xpbmExFjAUBgNVBAoMDVJlZCBI
+YXQsIEluYy4xGDAWBgNVBAsMD1JlZCBIYXQgTmV0d29yazEuMCwGA1UEAwwlUmVk
+IEhhdCBFbnRpdGxlbWVudCBQcm9kdWN0IEF1dGhvcml0eTEkMCIGCSqGSIb3DQEJ
+ARYVY2Etc3VwcG9ydEByZWRoYXQuY29tMB4XDTIzMDcxMjIxMjMzOVoXDTQzMDcx
+MjIxMjMzOVowRDFCMEAGA1UEAww5UmVkIEhhdCBQcm9kdWN0IElEIFtlMmY2ZTE4
+ZS05OGE3LTRiZjktYWNkYS0yZGVjZjk3Yzg1NzddMIICIjANBgkqhkiG9w0BAQEF
+AAOCAg8AMIICCgKCAgEAxj9J04z+Ezdyx1U33kFftLv0ntNS1BSeuhoZLDhs18yk
+sepG7hXXtHh2CMFfLZmTjAyL9i1XsxykQpVQdXTGpUF33C2qBQHB5glYs9+d781x
+8p8m8zFxbPcW82TIJXbgW3ErVh8vk5qCbG1cCAAHb+DWMq0EAyy1bl/JgAghYNGB
+RvKJObTdCrdpYh02KUqBLkSPZHvo6DUJFN37MXDpVeQq9VtqRjpKLLwuEfXb0Y7I
+5xEOrR3kYbOaBAWVt3mYZ1t0L/KfY2jVOdU5WFyyB9PhbMdLi1xE801j+GJrwcLa
+xmqvj4UaICRzcPATP86zVM1BBQa+lilkRQes5HyjZzZDiGYudnXhbqmLo/n0cuXo
+QBVVjhzRTMx71Eiiahmiw+U1vGqkHhQNxb13HtN1lcAhUCDrxxeMvrAjYdWpYlpI
+yW3NssPWt1YUHidMBSAJ4KctIf91dyE93aStlxwC/QnyFsZOmcEsBzVCnz9GmWMl
+1/6XzBS1yDUqByklx0TLH+z/sK9A+O2rZAy1mByCYwVxvbOZhnqGxAuToIS+A81v
+5hCjsCiOScVB+cil30YBu0cH85RZ0ILNkHdKdrLLWW4wjphK2nBn2g2i3+ztf+nQ
+ED2pQqZ/rhuW79jcyCZl9kXqe1wOdF0Cwah4N6/3LzIXEEKyEJxNqQwtNc2IVE8C
+AwEAAaOBnzCBnDAJBgNVHRMEAjAAMDUGDCsGAQQBkggJAYNfAQQlDCNSZWQgSGF0
+IEVudGVycHJpc2UgTGludXggZm9yIHg4Nl82NDAWBgwrBgEEAZIICQGDXwIEBgwE
+OC4xMDAYBgwrBgEEAZIICQGDXwMECAwGeDg2XzY0MCYGDCsGAQQBkggJAYNfBAQW
+DBRyaGVsLTgscmhlbC04LXg4Nl82NDANBgkqhkiG9w0BAQsFAAOCAgEAWP58UYdf
+h7NZhrgxvfDqQBP8tcJaoG96PQUXhOCfNOOwExxncFM7EDn/OvRgNJFQ3Hm+T25m
+xPg6FoCtuVdEtWyKLpuKLdc7NaFVNqqid6ULJyHvsZ8ju3lMaCAGZIV6qwiftHTb
+JhIzbpEak2UeNbLHNJ6WtAQ1pssJYrmq6eK8ScewQ2DtUCnyVF6gJS86bzy+tbst
+8KBImeasSXMjc+XGx22aNBHV+G2LSpi/bSHstqjPHmfFOJvIYGG7grKDVTka/TmX
+yJDl5yydHIPkWlBTu/VLb9m5V4Ke7Zu1nnMkaXoXdtx8DGcfEv8Eqqp5jAiFRUP0
+KfvF4yRcFdsVGeHXiWt3fN8EbwXiNHWO69/9fQgzJXXhkfMHbHAWbGcAgYl7A2r9
+w4SfACOvJAXSgaGr2KAKzNuWiLDDl2UJTLsF5IeGudc/lOlaDUM8RWKmWIOh+jup
+T/g/KuYTtNukyqiwPuaWkwwM6kyuqsm/3z2d76ZbiCkcqTfqfHvOA2fzgxWocUPi
+pg0PQ0NoxJRss1fZ3qu97d0e5p21M92UI1dn+uo/dyw7Xg3Ka2+AWfIs5HP0Fh2e
+lal4LKNjRx+bpApcPSQ2y7exTr1Jni4yHVBC8CQeomoQqmgKLnJ4RB9gsxx4lvf/
+GryScFMDmJk5elrgja1hA5cuV5Rqb3gvyy4=
+-----END CERTIFICATE-----
diff --git a/repos/system_upgrade/common/files/prod-certs/8.10/486.pem b/repos/system_upgrade/common/files/prod-certs/8.10/486.pem
new file mode 100644
index 00000000..491f9f2d
--- /dev/null
+++ b/repos/system_upgrade/common/files/prod-certs/8.10/486.pem
@@ -0,0 +1,35 @@
+-----BEGIN CERTIFICATE-----
+MIIGJTCCBA2gAwIBAgIJALDxRLt/tVCkMA0GCSqGSIb3DQEBCwUAMIGuMQswCQYD
+VQQGEwJVUzEXMBUGA1UECAwOTm9ydGggQ2Fyb2xpbmExFjAUBgNVBAoMDVJlZCBI
+YXQsIEluYy4xGDAWBgNVBAsMD1JlZCBIYXQgTmV0d29yazEuMCwGA1UEAwwlUmVk
+IEhhdCBFbnRpdGxlbWVudCBQcm9kdWN0IEF1dGhvcml0eTEkMCIGCSqGSIb3DQEJ
+ARYVY2Etc3VwcG9ydEByZWRoYXQuY29tMB4XDTIzMDcxMjIxMjMyMVoXDTQzMDcx
+MjIxMjMyMVowRDFCMEAGA1UEAww5UmVkIEhhdCBQcm9kdWN0IElEIFtmOTk3ZmE5
+NC0xMDRlLTQ0YjMtYjA4Yy04MjQzNjA0MjhlZjBdMIICIjANBgkqhkiG9w0BAQEF
+AAOCAg8AMIICCgKCAgEAxj9J04z+Ezdyx1U33kFftLv0ntNS1BSeuhoZLDhs18yk
+sepG7hXXtHh2CMFfLZmTjAyL9i1XsxykQpVQdXTGpUF33C2qBQHB5glYs9+d781x
+8p8m8zFxbPcW82TIJXbgW3ErVh8vk5qCbG1cCAAHb+DWMq0EAyy1bl/JgAghYNGB
+RvKJObTdCrdpYh02KUqBLkSPZHvo6DUJFN37MXDpVeQq9VtqRjpKLLwuEfXb0Y7I
+5xEOrR3kYbOaBAWVt3mYZ1t0L/KfY2jVOdU5WFyyB9PhbMdLi1xE801j+GJrwcLa
+xmqvj4UaICRzcPATP86zVM1BBQa+lilkRQes5HyjZzZDiGYudnXhbqmLo/n0cuXo
+QBVVjhzRTMx71Eiiahmiw+U1vGqkHhQNxb13HtN1lcAhUCDrxxeMvrAjYdWpYlpI
+yW3NssPWt1YUHidMBSAJ4KctIf91dyE93aStlxwC/QnyFsZOmcEsBzVCnz9GmWMl
+1/6XzBS1yDUqByklx0TLH+z/sK9A+O2rZAy1mByCYwVxvbOZhnqGxAuToIS+A81v
+5hCjsCiOScVB+cil30YBu0cH85RZ0ILNkHdKdrLLWW4wjphK2nBn2g2i3+ztf+nQ
+ED2pQqZ/rhuW79jcyCZl9kXqe1wOdF0Cwah4N6/3LzIXEEKyEJxNqQwtNc2IVE8C
+AwEAAaOBrjCBqzAJBgNVHRMEAjAAMDoGDCsGAQQBkggJAYNmAQQqDChSZWQgSGF0
+IEVudGVycHJpc2UgTGludXggZm9yIHg4Nl82NCBCZXRhMBsGDCsGAQQBkggJAYNm
+AgQLDAk4LjEwIEJldGEwGAYMKwYBBAGSCAkBg2YDBAgMBng4Nl82NDArBgwrBgEE
+AZIICQGDZgQEGwwZcmhlbC04LHJoZWwtOC1iZXRhLXg4Nl82NDANBgkqhkiG9w0B
+AQsFAAOCAgEAvlpN7okXL9T74DNAdmb6YBSCI2fl1By5kSsDkvA6HlWY2nSHJILN
+FCCCl9ygvUXIrxP6ObKV/ylZ5m+p20aa1fvmsmCsgh2QHAxWHlW17lRouQrtd1b6
+EzNnoL0eVkPa2RF1h5Jwr1m5rLLeQH6A46Xgf3cSj4qD22Ru/b6pBGgJxqHMCIaX
+cyC1biwLT3JTJCTe3Y/gi326jPDaIMsKa28y/Tu5swg+7VhhbUNqqC3pMaKzhtF+
+yT33d3X3An8iJ+i8cv6SdqovLV/C8DVM7ZWzFXDWlj1/wmSZ7IBeu6beUhUUkz0x
+VdN1Ud2DFaALFK09LK+JL5SV+thk5q6VmSTzfaIVnIqsbHVcLGjol/ePlm9kGVtr
+shyBYVpbNfSTqXnDsRyK6i7QRGix17b+nwPsVtRW1dBhy2pQ4vnJ53bZ3OnRm9ZW
+9qWu4N7uFtxRqtcEHKOYH7S88RWpjlyaNNAD+NYpnwBq3hSukQx/II619fm5zkR3
+63WyoSQThBxM7D9ZNEVD0ibtNd3Q+8SJB0BFKXCrrWziMD9B7KGVyhK7GbdsBDzU
+fUlvxqCST2bd/beTIuPHanYAGFao4CyIlH7rSgpyR3ikSVrIzVYiR4KpkXzGfaBU
+CJ1v9WRDjALqjx5YABSD0AoP88darao26o6UsxxV4NMjWUc+WLdPpak=
+-----END CERTIFICATE-----
diff --git a/repos/system_upgrade/common/files/prod-certs/8.10/72.pem b/repos/system_upgrade/common/files/prod-certs/8.10/72.pem
new file mode 100644
index 00000000..eff0add4
--- /dev/null
+++ b/repos/system_upgrade/common/files/prod-certs/8.10/72.pem
@@ -0,0 +1,35 @@
+-----BEGIN CERTIFICATE-----
+MIIGFzCCA/+gAwIBAgIJALDxRLt/tVC5MA0GCSqGSIb3DQEBCwUAMIGuMQswCQYD
+VQQGEwJVUzEXMBUGA1UECAwOTm9ydGggQ2Fyb2xpbmExFjAUBgNVBAoMDVJlZCBI
+YXQsIEluYy4xGDAWBgNVBAsMD1JlZCBIYXQgTmV0d29yazEuMCwGA1UEAwwlUmVk
+IEhhdCBFbnRpdGxlbWVudCBQcm9kdWN0IEF1dGhvcml0eTEkMCIGCSqGSIb3DQEJ
+ARYVY2Etc3VwcG9ydEByZWRoYXQuY29tMB4XDTIzMDcxMjIxMjMzOVoXDTQzMDcx
+MjIxMjMzOVowRDFCMEAGA1UEAww5UmVkIEhhdCBQcm9kdWN0IElEIFthNTEwOTUx
+NS04ZGUzLTQwYzItOTM4Yy0yZjhlNDgxMDA1NzFdMIICIjANBgkqhkiG9w0BAQEF
+AAOCAg8AMIICCgKCAgEAxj9J04z+Ezdyx1U33kFftLv0ntNS1BSeuhoZLDhs18yk
+sepG7hXXtHh2CMFfLZmTjAyL9i1XsxykQpVQdXTGpUF33C2qBQHB5glYs9+d781x
+8p8m8zFxbPcW82TIJXbgW3ErVh8vk5qCbG1cCAAHb+DWMq0EAyy1bl/JgAghYNGB
+RvKJObTdCrdpYh02KUqBLkSPZHvo6DUJFN37MXDpVeQq9VtqRjpKLLwuEfXb0Y7I
+5xEOrR3kYbOaBAWVt3mYZ1t0L/KfY2jVOdU5WFyyB9PhbMdLi1xE801j+GJrwcLa
+xmqvj4UaICRzcPATP86zVM1BBQa+lilkRQes5HyjZzZDiGYudnXhbqmLo/n0cuXo
+QBVVjhzRTMx71Eiiahmiw+U1vGqkHhQNxb13HtN1lcAhUCDrxxeMvrAjYdWpYlpI
+yW3NssPWt1YUHidMBSAJ4KctIf91dyE93aStlxwC/QnyFsZOmcEsBzVCnz9GmWMl
+1/6XzBS1yDUqByklx0TLH+z/sK9A+O2rZAy1mByCYwVxvbOZhnqGxAuToIS+A81v
+5hCjsCiOScVB+cil30YBu0cH85RZ0ILNkHdKdrLLWW4wjphK2nBn2g2i3+ztf+nQ
+ED2pQqZ/rhuW79jcyCZl9kXqe1wOdF0Cwah4N6/3LzIXEEKyEJxNqQwtNc2IVE8C
+AwEAAaOBoDCBnTAJBgNVHRMEAjAAMDsGCysGAQQBkggJAUgBBCwMKlJlZCBIYXQg
+RW50ZXJwcmlzZSBMaW51eCBmb3IgSUJNIHogU3lzdGVtczAVBgsrBgEEAZIICQFI
+AgQGDAQ4LjEwMBYGCysGAQQBkggJAUgDBAcMBXMzOTB4MCQGCysGAQQBkggJAUgE
+BBUME3JoZWwtOCxyaGVsLTgtczM5MHgwDQYJKoZIhvcNAQELBQADggIBAITSTmUd
+W7DTBY77dm05tIRnAiDn4LqMLj2r1jt/6UBGH8e3WDNHmWlZa1kctewwvdbp4wEd
+RJAOpCoNucoy9TDg5atgyTKaqAYZXBu9fCi4faCl7mmTlvbOmIK3+WKOtRXG1pot
+ijq+RRQrS6h8hwOB2Ye/QXfY0C9fHz3OuI26rJ+n24MM9K3AYYOGZ+Xp/neBTLho
+fC0lwkyfZc45N+X/sAgaERz44Zd4JcW98XykFGyUJ0R0tHk2TvWbR7MyVKNaqEVW
+OwZxnlltpe15Dbz8SY5km0gRWfeXpEtmSjBST3cPREcOapL7sL4iJifKYaIJNg+I
+JED+K8BEfKbUH4OHqDS6QYRS+G7B++wkpmyBnlg7/It/dotZM82BIch32jifRj8S
+L2DkxScapLVc/QjyP6yHzUYMvdHHLAmaHZqf3X0TCDuBZ5VOyy2vYaWzroDbuJds
+S0ECnNG20P+IS5kWBXaw8cQ/iQP2HXylraHlXnsQ3xCBAISTbXKI0tHbcfITsb0I
+W+EKJnRyKGUvenffsTHetZ/NqekmNMCNweavg27jmikrFIoZaEGyMd5fterUbHoi
+hejh8bgzh95+r3tiO8lV/ZfGDB6kjlzqGJDFYoVsNIEwVxZ/OqWFbWsiwMpLax+9
+T7gkLBXtuu/5Prp7fT0wy8PqJFaxUCVj27oh
+-----END CERTIFICATE-----
diff --git a/repos/system_upgrade/common/files/prod-certs/9.4/279.pem b/repos/system_upgrade/common/files/prod-certs/9.4/279.pem
new file mode 100644
index 00000000..da9b52bf
--- /dev/null
+++ b/repos/system_upgrade/common/files/prod-certs/9.4/279.pem
@@ -0,0 +1,35 @@
+-----BEGIN CERTIFICATE-----
+MIIGJTCCBA2gAwIBAgIJALDxRLt/tVDOMA0GCSqGSIb3DQEBCwUAMIGuMQswCQYD
+VQQGEwJVUzEXMBUGA1UECAwOTm9ydGggQ2Fyb2xpbmExFjAUBgNVBAoMDVJlZCBI
+YXQsIEluYy4xGDAWBgNVBAsMD1JlZCBIYXQgTmV0d29yazEuMCwGA1UEAwwlUmVk
+IEhhdCBFbnRpdGxlbWVudCBQcm9kdWN0IEF1dGhvcml0eTEkMCIGCSqGSIb3DQEJ
+ARYVY2Etc3VwcG9ydEByZWRoYXQuY29tMB4XDTIzMDcxOTE2MzQwOFoXDTQzMDcx
+OTE2MzQwOFowRDFCMEAGA1UEAww5UmVkIEhhdCBQcm9kdWN0IElEIFtmZTk3MWE4
+Mi1iNzViLTRlMTgtYjM5YS0yY2E3ODlhNmVlNTZdMIICIjANBgkqhkiG9w0BAQEF
+AAOCAg8AMIICCgKCAgEAxj9J04z+Ezdyx1U33kFftLv0ntNS1BSeuhoZLDhs18yk
+sepG7hXXtHh2CMFfLZmTjAyL9i1XsxykQpVQdXTGpUF33C2qBQHB5glYs9+d781x
+8p8m8zFxbPcW82TIJXbgW3ErVh8vk5qCbG1cCAAHb+DWMq0EAyy1bl/JgAghYNGB
+RvKJObTdCrdpYh02KUqBLkSPZHvo6DUJFN37MXDpVeQq9VtqRjpKLLwuEfXb0Y7I
+5xEOrR3kYbOaBAWVt3mYZ1t0L/KfY2jVOdU5WFyyB9PhbMdLi1xE801j+GJrwcLa
+xmqvj4UaICRzcPATP86zVM1BBQa+lilkRQes5HyjZzZDiGYudnXhbqmLo/n0cuXo
+QBVVjhzRTMx71Eiiahmiw+U1vGqkHhQNxb13HtN1lcAhUCDrxxeMvrAjYdWpYlpI
+yW3NssPWt1YUHidMBSAJ4KctIf91dyE93aStlxwC/QnyFsZOmcEsBzVCnz9GmWMl
+1/6XzBS1yDUqByklx0TLH+z/sK9A+O2rZAy1mByCYwVxvbOZhnqGxAuToIS+A81v
+5hCjsCiOScVB+cil30YBu0cH85RZ0ILNkHdKdrLLWW4wjphK2nBn2g2i3+ztf+nQ
+ED2pQqZ/rhuW79jcyCZl9kXqe1wOdF0Cwah4N6/3LzIXEEKyEJxNqQwtNc2IVE8C
+AwEAAaOBrjCBqzAJBgNVHRMEAjAAMEMGDCsGAQQBkggJAYIXAQQzDDFSZWQgSGF0
+IEVudGVycHJpc2UgTGludXggZm9yIFBvd2VyLCBsaXR0bGUgZW5kaWFuMBUGDCsG
+AQQBkggJAYIXAgQFDAM5LjQwGQYMKwYBBAGSCAkBghcDBAkMB3BwYzY0bGUwJwYM
+KwYBBAGSCAkBghcEBBcMFXJoZWwtOSxyaGVsLTktcHBjNjRsZTANBgkqhkiG9w0B
+AQsFAAOCAgEAT/uBV7Mh+0zatuSO6rTBpTa0kFeVsbhpqc7cMDD7+lRxrKtdtkp5
+WzU/0hw46I11jI29lkd0lLruX9EUxU2AtADK7HonQwpCBPK/3jLduOjx0IRjl8i5
+YbMeKRHWTRiPrb/Avi7dA0ZkacBp9vCWVE1t6s972KgiQEKb85SS+5NvMpVcRaCo
+t5NNmi2+qZU/r/N47EUb9tJtFUPSV30GV97x/xlQgoVy8QAdomVo2wH1fuwgDZRy
+1ylniX/D/638wgYVJQV/H3Fr7CFxcXGTX1gIB9/uyYIjY5fOqVKqQwYYqG3AlNQd
+bIrztMR1b8FjsmX3nmCKYfJTvCOGhwgil9AYQR0g6poEquLYGI95cYxLml1kWTXN
+y4KPxosPwZVSgJ7G+xQLS61Pzk0mdk4+upTrnetqR64VQ/dyja8tSZw8bCga0R6K
+nLOEn55pkJPmDUgRFyyZT016+X8kFYaJqaNT2A2u4fA6hGf1vTqGqluNad2K9DSs
+TTzGiY0RD1aacOCIM2MtVNyIw15TTt9p9RCmwOLnJOn/KhqG51coIKfLgtDXvOoI
+6YTKqIM8Tb06ik12LnyHRj0fn8quqPwSmARMPP4JSLAVPv3Xf7s7CsWEBg89GTs+
+gJln+L+kJPqT9GwUizz2v++ZYe9ZrGJ2Lguyvd+YGJs7HEreU+5uxxM=
+-----END CERTIFICATE-----
diff --git a/repos/system_upgrade/common/files/prod-certs/9.4/362.pem b/repos/system_upgrade/common/files/prod-certs/9.4/362.pem
new file mode 100644
index 00000000..f86ad9c8
--- /dev/null
+++ b/repos/system_upgrade/common/files/prod-certs/9.4/362.pem
@@ -0,0 +1,36 @@
+-----BEGIN CERTIFICATE-----
+MIIGNDCCBBygAwIBAgIJALDxRLt/tVDkMA0GCSqGSIb3DQEBCwUAMIGuMQswCQYD
+VQQGEwJVUzEXMBUGA1UECAwOTm9ydGggQ2Fyb2xpbmExFjAUBgNVBAoMDVJlZCBI
+YXQsIEluYy4xGDAWBgNVBAsMD1JlZCBIYXQgTmV0d29yazEuMCwGA1UEAwwlUmVk
+IEhhdCBFbnRpdGxlbWVudCBQcm9kdWN0IEF1dGhvcml0eTEkMCIGCSqGSIb3DQEJ
+ARYVY2Etc3VwcG9ydEByZWRoYXQuY29tMB4XDTIzMDcxOTE2MzQ1M1oXDTQzMDcx
+OTE2MzQ1M1owRDFCMEAGA1UEAww5UmVkIEhhdCBQcm9kdWN0IElEIFszMjE1Nzg2
+NS01MDZiLTRjZmYtYmU1My01MWViOGY3ZGM2OWNdMIICIjANBgkqhkiG9w0BAQEF
+AAOCAg8AMIICCgKCAgEAxj9J04z+Ezdyx1U33kFftLv0ntNS1BSeuhoZLDhs18yk
+sepG7hXXtHh2CMFfLZmTjAyL9i1XsxykQpVQdXTGpUF33C2qBQHB5glYs9+d781x
+8p8m8zFxbPcW82TIJXbgW3ErVh8vk5qCbG1cCAAHb+DWMq0EAyy1bl/JgAghYNGB
+RvKJObTdCrdpYh02KUqBLkSPZHvo6DUJFN37MXDpVeQq9VtqRjpKLLwuEfXb0Y7I
+5xEOrR3kYbOaBAWVt3mYZ1t0L/KfY2jVOdU5WFyyB9PhbMdLi1xE801j+GJrwcLa
+xmqvj4UaICRzcPATP86zVM1BBQa+lilkRQes5HyjZzZDiGYudnXhbqmLo/n0cuXo
+QBVVjhzRTMx71Eiiahmiw+U1vGqkHhQNxb13HtN1lcAhUCDrxxeMvrAjYdWpYlpI
+yW3NssPWt1YUHidMBSAJ4KctIf91dyE93aStlxwC/QnyFsZOmcEsBzVCnz9GmWMl
+1/6XzBS1yDUqByklx0TLH+z/sK9A+O2rZAy1mByCYwVxvbOZhnqGxAuToIS+A81v
+5hCjsCiOScVB+cil30YBu0cH85RZ0ILNkHdKdrLLWW4wjphK2nBn2g2i3+ztf+nQ
+ED2pQqZ/rhuW79jcyCZl9kXqe1wOdF0Cwah4N6/3LzIXEEKyEJxNqQwtNc2IVE8C
+AwEAAaOBvTCBujAJBgNVHRMEAjAAMEgGDCsGAQQBkggJAYJqAQQ4DDZSZWQgSGF0
+IEVudGVycHJpc2UgTGludXggZm9yIFBvd2VyLCBsaXR0bGUgZW5kaWFuIEJldGEw
+GgYMKwYBBAGSCAkBgmoCBAoMCDkuNCBCZXRhMBkGDCsGAQQBkggJAYJqAwQJDAdw
+cGM2NGxlMCwGDCsGAQQBkggJAYJqBAQcDBpyaGVsLTkscmhlbC05LWJldGEtcHBj
+NjRsZTANBgkqhkiG9w0BAQsFAAOCAgEAz10M4vhpDqQ0Jq80QGPZZCBpcERMTlbo
+hzqPX21O6pqeCIqS0AN/nuOQ+nfABvixsLZFjJ2cbXDRNQBCIfs3Yhhai/FLOKl1
+zJ4CSfFoVBjp5zOJePS0EI6w7OVZJySzEWIWDWA1aPm+/wivKBQ/jYmGzahvtgLi
+hBdIawe6Mgfb4CMbbhpX9fxjYEohiUxXmxmfVxkXfqthgt6UXodykgk/UkT+Ir4N
+FTBFCm0/3ptaUAISV9/B7Pes7DBrbaYfSlamyRFtnDKBIc4tHJW0Iu6LZDRJzEDL
+yemaYFWRDuM3AodRDPez+leMoyXJOzLfYy9LhriFdZyOMzZCWTUCdIRJVWO7i2Lt
+OSrm7VzpWEno5EBd1tuo6KW7ZW2fJo3VV1Z54elNiItIxvFC9ZI38f1LMcueVpzC
+qZuXT9sICi+CMWXaFGb+3INU5tDqXrX5DyccFmIUJeGMuifLrAJmakT9S0f5AF8z
+QhGQm0pY2CO9IChKxxX1w+Yb4iNQ/GV0vTmFhC4+s7bFsQ/1yazrI91XTKrK125Q
+80KWUuQad8MYw6bs5K04OTdeUn5dEHqcVZLTmNHgpi6+8x3LShIZqqgrNNkzBIZD
+FmbrWIU2YilmX1hRTFn6OaVPmo5OWBcwgwQ/q4LDcxEvWO3C70A/cBn8QOuU8lUm
+bnNddM3PSgc=
+-----END CERTIFICATE-----
diff --git a/repos/system_upgrade/common/files/prod-certs/9.4/363.pem b/repos/system_upgrade/common/files/prod-certs/9.4/363.pem
new file mode 100644
index 00000000..c381a298
--- /dev/null
+++ b/repos/system_upgrade/common/files/prod-certs/9.4/363.pem
@@ -0,0 +1,35 @@
+-----BEGIN CERTIFICATE-----
+MIIGJjCCBA6gAwIBAgIJALDxRLt/tVDjMA0GCSqGSIb3DQEBCwUAMIGuMQswCQYD
+VQQGEwJVUzEXMBUGA1UECAwOTm9ydGggQ2Fyb2xpbmExFjAUBgNVBAoMDVJlZCBI
+YXQsIEluYy4xGDAWBgNVBAsMD1JlZCBIYXQgTmV0d29yazEuMCwGA1UEAwwlUmVk
+IEhhdCBFbnRpdGxlbWVudCBQcm9kdWN0IEF1dGhvcml0eTEkMCIGCSqGSIb3DQEJ
+ARYVY2Etc3VwcG9ydEByZWRoYXQuY29tMB4XDTIzMDcxOTE2MzQ1M1oXDTQzMDcx
+OTE2MzQ1M1owRDFCMEAGA1UEAww5UmVkIEhhdCBQcm9kdWN0IElEIFtlYzI0NDY1
+ZC1iODRkLTRkZDctYjA0Mi05MzFjZDkxNmQzOTRdMIICIjANBgkqhkiG9w0BAQEF
+AAOCAg8AMIICCgKCAgEAxj9J04z+Ezdyx1U33kFftLv0ntNS1BSeuhoZLDhs18yk
+sepG7hXXtHh2CMFfLZmTjAyL9i1XsxykQpVQdXTGpUF33C2qBQHB5glYs9+d781x
+8p8m8zFxbPcW82TIJXbgW3ErVh8vk5qCbG1cCAAHb+DWMq0EAyy1bl/JgAghYNGB
+RvKJObTdCrdpYh02KUqBLkSPZHvo6DUJFN37MXDpVeQq9VtqRjpKLLwuEfXb0Y7I
+5xEOrR3kYbOaBAWVt3mYZ1t0L/KfY2jVOdU5WFyyB9PhbMdLi1xE801j+GJrwcLa
+xmqvj4UaICRzcPATP86zVM1BBQa+lilkRQes5HyjZzZDiGYudnXhbqmLo/n0cuXo
+QBVVjhzRTMx71Eiiahmiw+U1vGqkHhQNxb13HtN1lcAhUCDrxxeMvrAjYdWpYlpI
+yW3NssPWt1YUHidMBSAJ4KctIf91dyE93aStlxwC/QnyFsZOmcEsBzVCnz9GmWMl
+1/6XzBS1yDUqByklx0TLH+z/sK9A+O2rZAy1mByCYwVxvbOZhnqGxAuToIS+A81v
+5hCjsCiOScVB+cil30YBu0cH85RZ0ILNkHdKdrLLWW4wjphK2nBn2g2i3+ztf+nQ
+ED2pQqZ/rhuW79jcyCZl9kXqe1wOdF0Cwah4N6/3LzIXEEKyEJxNqQwtNc2IVE8C
+AwEAAaOBrzCBrDAJBgNVHRMEAjAAMDoGDCsGAQQBkggJAYJrAQQqDChSZWQgSGF0
+IEVudGVycHJpc2UgTGludXggZm9yIEFSTSA2NCBCZXRhMBoGDCsGAQQBkggJAYJr
+AgQKDAg5LjQgQmV0YTAZBgwrBgEEAZIICQGCawMECQwHYWFyY2g2NDAsBgwrBgEE
+AZIICQGCawQEHAwacmhlbC05LHJoZWwtOS1iZXRhLWFhcmNoNjQwDQYJKoZIhvcN
+AQELBQADggIBAE4lU1YTA5lGbC1uO2JQv7mSkI6FbhOjw2TE67exhzZfTuzNB0eq
+amuJrMJEN4n3yteBb2zmzYXxHeEkpvN2cpXW61fhC39X2LA51DQTelfXNGLH/kw0
+lpXW47uG9o3qOyU25i1qZdapLUJvGwS6fMwPJrEeIwltbCGgpOen1aIs29KOfNzF
+JRmx1aNV0SA6nhwxPwPCnbHBnSsWYBKWhWxutUdN7SFwCQrJ72LbfkOwBBlf0P8A
+miWTVqJ1ZM051goF0m/5hgjMAW/UN4QsP8k2o+3YLjVho9Zd25d5U1PEqVwjBcxt
+Yjz74LpcZwrvx9MNPSijUZTXSHBD7ATkD+Tj32Wsxcoyce2PlyWpQlMAZdWZh8ve
+osOxNFjt8+sVB9i3gvO5aQibIvRTPIayuMCTla0A776BMv27AKETOclvHBCyEAa+
+BQk4Th51gLnMPrFZEdt75AuZ9Hq3SgNzFnL7cw7KP1KjwicBkHnhNP5+vRTo3JWT
+lNtSeNGxzgtI1HlBnbOalirOBdi3GruEtVIdGkqgJo4bi7t6wj2KscRKwL/193q6
+oJeFxo9To2Kc7V9+jEfYDmToGS6QezjO1wlLT63wpJXstpNdPRnMcHnGQ7iYV1dD
+hY2PTPWCHcKdjOa/Lff2K7MUNTmkhKsPivv4hO1MIbKKzyVoO12jo7Q2
+-----END CERTIFICATE-----
diff --git a/repos/system_upgrade/common/files/prod-certs/9.4/419.pem b/repos/system_upgrade/common/files/prod-certs/9.4/419.pem
new file mode 100644
index 00000000..be9677f7
--- /dev/null
+++ b/repos/system_upgrade/common/files/prod-certs/9.4/419.pem
@@ -0,0 +1,35 @@
+-----BEGIN CERTIFICATE-----
+MIIGFzCCA/+gAwIBAgIJALDxRLt/tVDNMA0GCSqGSIb3DQEBCwUAMIGuMQswCQYD
+VQQGEwJVUzEXMBUGA1UECAwOTm9ydGggQ2Fyb2xpbmExFjAUBgNVBAoMDVJlZCBI
+YXQsIEluYy4xGDAWBgNVBAsMD1JlZCBIYXQgTmV0d29yazEuMCwGA1UEAwwlUmVk
+IEhhdCBFbnRpdGxlbWVudCBQcm9kdWN0IEF1dGhvcml0eTEkMCIGCSqGSIb3DQEJ
+ARYVY2Etc3VwcG9ydEByZWRoYXQuY29tMB4XDTIzMDcxOTE2MzQwN1oXDTQzMDcx
+OTE2MzQwN1owRDFCMEAGA1UEAww5UmVkIEhhdCBQcm9kdWN0IElEIFtlZjU2MDdh
+Ni1mOThjLTRkYTUtYTQ5MC1jNGRjYTVlODkyNjJdMIICIjANBgkqhkiG9w0BAQEF
+AAOCAg8AMIICCgKCAgEAxj9J04z+Ezdyx1U33kFftLv0ntNS1BSeuhoZLDhs18yk
+sepG7hXXtHh2CMFfLZmTjAyL9i1XsxykQpVQdXTGpUF33C2qBQHB5glYs9+d781x
+8p8m8zFxbPcW82TIJXbgW3ErVh8vk5qCbG1cCAAHb+DWMq0EAyy1bl/JgAghYNGB
+RvKJObTdCrdpYh02KUqBLkSPZHvo6DUJFN37MXDpVeQq9VtqRjpKLLwuEfXb0Y7I
+5xEOrR3kYbOaBAWVt3mYZ1t0L/KfY2jVOdU5WFyyB9PhbMdLi1xE801j+GJrwcLa
+xmqvj4UaICRzcPATP86zVM1BBQa+lilkRQes5HyjZzZDiGYudnXhbqmLo/n0cuXo
+QBVVjhzRTMx71Eiiahmiw+U1vGqkHhQNxb13HtN1lcAhUCDrxxeMvrAjYdWpYlpI
+yW3NssPWt1YUHidMBSAJ4KctIf91dyE93aStlxwC/QnyFsZOmcEsBzVCnz9GmWMl
+1/6XzBS1yDUqByklx0TLH+z/sK9A+O2rZAy1mByCYwVxvbOZhnqGxAuToIS+A81v
+5hCjsCiOScVB+cil30YBu0cH85RZ0ILNkHdKdrLLWW4wjphK2nBn2g2i3+ztf+nQ
+ED2pQqZ/rhuW79jcyCZl9kXqe1wOdF0Cwah4N6/3LzIXEEKyEJxNqQwtNc2IVE8C
+AwEAAaOBoDCBnTAJBgNVHRMEAjAAMDUGDCsGAQQBkggJAYMjAQQlDCNSZWQgSGF0
+IEVudGVycHJpc2UgTGludXggZm9yIEFSTSA2NDAVBgwrBgEEAZIICQGDIwIEBQwD
+OS40MBkGDCsGAQQBkggJAYMjAwQJDAdhYXJjaDY0MCcGDCsGAQQBkggJAYMjBAQX
+DBVyaGVsLTkscmhlbC05LWFhcmNoNjQwDQYJKoZIhvcNAQELBQADggIBAIhATeEW
+1cnsBWCy26c17gOqduTK1QQEjxoJzr0T1jVT2el7CsNm/TLjASdA3JEIupXhDV1y
+ej1VcdLQjaxwBah4dBPPrISD4mD7i4R9AsY73m4bg0UySkipYFgFJOHoPKXkbd/f
+Uy1/nBcLGKtBwWncI5hQ9qovPOndD9PSlNMN5gteG35d3IIPv4ugDF5hw2wabiaK
+TvUNZVFpCNR7lo11aONhJxfoygWjiNR1L1If3Uvgf7UAixTdIMlAb+Ioqa8o9ZMN
+fJclzk+ltVnWfphw+QdCWSJv1+0rJJzTHnm3S4UtGAIgrabo9WXAopLelwBgnP8l
+GhXWOhzU11FFjzp5pQ2VONUTGKXYfUjdclDj4w94fE3GRXXbwaqc3jaNRHb9JjNB
+aNfQ59O3nl7y2PwZkzCVtGwT3GwCOxrUcUVFdjDTs6WHfGSpt2wwsQl03oS55C+s
+xo8m+1LpQ+iWpxfiFqpKpPV+j3U9L2sTAInx3yuxtnRLhFma7qxJN6GVdrIEYXoi
+H5opy2YTZisvmHtd/pwjzB+yVdHcqvHkqt06mag84Pve3FUV2JQ7VfuCCyN9HsyO
+rdHvOCZK2cSkK+020Q40zTtQQDOmnHb6aLy2vLMNdvufylm6cchXRr+2avYzwEV5
+LcgfwpsgtJFW3GgvR1ElBgJlXKEJlyxQzFws
+-----END CERTIFICATE-----
diff --git a/repos/system_upgrade/common/files/prod-certs/9.4/433.pem b/repos/system_upgrade/common/files/prod-certs/9.4/433.pem
new file mode 100644
index 00000000..c381be24
--- /dev/null
+++ b/repos/system_upgrade/common/files/prod-certs/9.4/433.pem
@@ -0,0 +1,35 @@
+-----BEGIN CERTIFICATE-----
+MIIGKTCCBBGgAwIBAgIJALDxRLt/tVDlMA0GCSqGSIb3DQEBCwUAMIGuMQswCQYD
+VQQGEwJVUzEXMBUGA1UECAwOTm9ydGggQ2Fyb2xpbmExFjAUBgNVBAoMDVJlZCBI
+YXQsIEluYy4xGDAWBgNVBAsMD1JlZCBIYXQgTmV0d29yazEuMCwGA1UEAwwlUmVk
+IEhhdCBFbnRpdGxlbWVudCBQcm9kdWN0IEF1dGhvcml0eTEkMCIGCSqGSIb3DQEJ
+ARYVY2Etc3VwcG9ydEByZWRoYXQuY29tMB4XDTIzMDcxOTE2MzQ1M1oXDTQzMDcx
+OTE2MzQ1M1owRDFCMEAGA1UEAww5UmVkIEhhdCBQcm9kdWN0IElEIFtkZGFjYmZm
+NS1hZDViLTQwNmQtYjA1OS1hYTI0Zjg2YmMyOWVdMIICIjANBgkqhkiG9w0BAQEF
+AAOCAg8AMIICCgKCAgEAxj9J04z+Ezdyx1U33kFftLv0ntNS1BSeuhoZLDhs18yk
+sepG7hXXtHh2CMFfLZmTjAyL9i1XsxykQpVQdXTGpUF33C2qBQHB5glYs9+d781x
+8p8m8zFxbPcW82TIJXbgW3ErVh8vk5qCbG1cCAAHb+DWMq0EAyy1bl/JgAghYNGB
+RvKJObTdCrdpYh02KUqBLkSPZHvo6DUJFN37MXDpVeQq9VtqRjpKLLwuEfXb0Y7I
+5xEOrR3kYbOaBAWVt3mYZ1t0L/KfY2jVOdU5WFyyB9PhbMdLi1xE801j+GJrwcLa
+xmqvj4UaICRzcPATP86zVM1BBQa+lilkRQes5HyjZzZDiGYudnXhbqmLo/n0cuXo
+QBVVjhzRTMx71Eiiahmiw+U1vGqkHhQNxb13HtN1lcAhUCDrxxeMvrAjYdWpYlpI
+yW3NssPWt1YUHidMBSAJ4KctIf91dyE93aStlxwC/QnyFsZOmcEsBzVCnz9GmWMl
+1/6XzBS1yDUqByklx0TLH+z/sK9A+O2rZAy1mByCYwVxvbOZhnqGxAuToIS+A81v
+5hCjsCiOScVB+cil30YBu0cH85RZ0ILNkHdKdrLLWW4wjphK2nBn2g2i3+ztf+nQ
+ED2pQqZ/rhuW79jcyCZl9kXqe1wOdF0Cwah4N6/3LzIXEEKyEJxNqQwtNc2IVE8C
+AwEAAaOBsjCBrzAJBgNVHRMEAjAAMEEGDCsGAQQBkggJAYMxAQQxDC9SZWQgSGF0
+IEVudGVycHJpc2UgTGludXggZm9yIElCTSB6IFN5c3RlbXMgQmV0YTAaBgwrBgEE
+AZIICQGDMQIECgwIOS40IEJldGEwFwYMKwYBBAGSCAkBgzEDBAcMBXMzOTB4MCoG
+DCsGAQQBkggJAYMxBAQaDBhyaGVsLTkscmhlbC05LWJldGEtczM5MHgwDQYJKoZI
+hvcNAQELBQADggIBAI4wHOkCmr5rNJA1agaVflNz7KTFOQdX41VV20MM2PuBxIsi
+pj/pS2MzHFhdCcSZ2YMl2ztGVKLpheoFZi1EA62VppasmnkskayGxaeGh+yvv1x/
+frUW26izPWUNeqpi4oMsO2ByKCySYWyMIZfyPV8LpqU5/VSchohYB0FNzXUdHpVg
+FJSnkiHS28UwQ4RDKp+0uKKY3S9Zq6u3YBer0wf2v0uuVz3R2pFNC86lybe/wihm
+XTjlJOT33zpGUm49jp+xgM1FSx+g1CSQKT9SZJiMQzD+yappyRaYbReZ4a3AWaUn
+juAES9tgBfYNrsmj9vNJ94isRTXifhh6pU5gKjdvbddYFNfaSFRmnOQK+SNcgUr6
+/RqC6yivGKGeZ+W+jn6hlSQPQISmsoy3D0/X+yKJShAVXvEZwtME9iKmVSqtLMKJ
+Exu4t6vguy5frm5rBbuB2XfaGX6de8jF5742bBODj5hdQoNQUw/6E4QHj6HXRWTW
+InpfhOA9Uk8+n4+QmJfJjp9O+cTwbDx2+GAPSu/pMhFE1yfWPb0ZLBQHcSlD1uga
+rVeFld3c1p0MZkVZVU/G6I+aGq1fNSKdtAd068z1/AJr7lLJ5vY3ckwR0sGhMccA
+3BiXXyTbciwVX9ShA/bRa3YXNDYCu2zNaX38arTP8JSq5h8a1zJDG+vnsRfr
+-----END CERTIFICATE-----
diff --git a/repos/system_upgrade/common/files/prod-certs/9.4/479.pem b/repos/system_upgrade/common/files/prod-certs/9.4/479.pem
new file mode 100644
index 00000000..1ea1cd3d
--- /dev/null
+++ b/repos/system_upgrade/common/files/prod-certs/9.4/479.pem
@@ -0,0 +1,35 @@
+-----BEGIN CERTIFICATE-----
+MIIGFTCCA/2gAwIBAgIJALDxRLt/tVDQMA0GCSqGSIb3DQEBCwUAMIGuMQswCQYD
+VQQGEwJVUzEXMBUGA1UECAwOTm9ydGggQ2Fyb2xpbmExFjAUBgNVBAoMDVJlZCBI
+YXQsIEluYy4xGDAWBgNVBAsMD1JlZCBIYXQgTmV0d29yazEuMCwGA1UEAwwlUmVk
+IEhhdCBFbnRpdGxlbWVudCBQcm9kdWN0IEF1dGhvcml0eTEkMCIGCSqGSIb3DQEJ
+ARYVY2Etc3VwcG9ydEByZWRoYXQuY29tMB4XDTIzMDcxOTE2MzQwOFoXDTQzMDcx
+OTE2MzQwOFowRDFCMEAGA1UEAww5UmVkIEhhdCBQcm9kdWN0IElEIFsxZDg0ZDQ5
+Ny1jZmNmLTQxNjEtOTM0YS0zNzk2MDU4M2ZmZGZdMIICIjANBgkqhkiG9w0BAQEF
+AAOCAg8AMIICCgKCAgEAxj9J04z+Ezdyx1U33kFftLv0ntNS1BSeuhoZLDhs18yk
+sepG7hXXtHh2CMFfLZmTjAyL9i1XsxykQpVQdXTGpUF33C2qBQHB5glYs9+d781x
+8p8m8zFxbPcW82TIJXbgW3ErVh8vk5qCbG1cCAAHb+DWMq0EAyy1bl/JgAghYNGB
+RvKJObTdCrdpYh02KUqBLkSPZHvo6DUJFN37MXDpVeQq9VtqRjpKLLwuEfXb0Y7I
+5xEOrR3kYbOaBAWVt3mYZ1t0L/KfY2jVOdU5WFyyB9PhbMdLi1xE801j+GJrwcLa
+xmqvj4UaICRzcPATP86zVM1BBQa+lilkRQes5HyjZzZDiGYudnXhbqmLo/n0cuXo
+QBVVjhzRTMx71Eiiahmiw+U1vGqkHhQNxb13HtN1lcAhUCDrxxeMvrAjYdWpYlpI
+yW3NssPWt1YUHidMBSAJ4KctIf91dyE93aStlxwC/QnyFsZOmcEsBzVCnz9GmWMl
+1/6XzBS1yDUqByklx0TLH+z/sK9A+O2rZAy1mByCYwVxvbOZhnqGxAuToIS+A81v
+5hCjsCiOScVB+cil30YBu0cH85RZ0ILNkHdKdrLLWW4wjphK2nBn2g2i3+ztf+nQ
+ED2pQqZ/rhuW79jcyCZl9kXqe1wOdF0Cwah4N6/3LzIXEEKyEJxNqQwtNc2IVE8C
+AwEAAaOBnjCBmzAJBgNVHRMEAjAAMDUGDCsGAQQBkggJAYNfAQQlDCNSZWQgSGF0
+IEVudGVycHJpc2UgTGludXggZm9yIHg4Nl82NDAVBgwrBgEEAZIICQGDXwIEBQwD
+OS40MBgGDCsGAQQBkggJAYNfAwQIDAZ4ODZfNjQwJgYMKwYBBAGSCAkBg18EBBYM
+FHJoZWwtOSxyaGVsLTkteDg2XzY0MA0GCSqGSIb3DQEBCwUAA4ICAQCGUDPFBrLs
+sK/RITJothRhKhKNX3zu9TWRG0WKxszCx/y7c4yEfH1TV/yd7BNB2RubaoayWz8E
+TQjcRW8BnVu9JrlbdpWJm4eN+dOOpcESPilLnkz4Tr0WYDsT1/jk/uiorK4h21S0
+EwMicuSuEmm0OUEX0zj2X/IyveFRtpJpH/JktznCkvexysc1JRzqMCbal8GipRX9
+Xf7Oko6QiaUpu5GDLN2OXhizYHdR2f3l+Sn2cScsbi3fSVv+DLsnaz6J0kZ4U8q3
+lYk/ZYifJjG+/7cv3e+usixpmK/qYlpOvunUDnqOkDfUs4/4bZjH8e8CdqJk4YvU
+RRtLr7muXEJsaqF7lxAViXnKxT/z/+1kOgN/+Oyzjs4QDsk2HQpWHFgNYSSG9Mmz
+PUS8tk2T0j5sN55X7QRRl5c0oqrBU5XaWyL26QcfONYcR8dBaKawjxg8CI9KzsYY
+sb2jjS+fBkB1OI2c6z4OZRd+0N6FQ6gq++KiXOLFvi/QSFNi9Veb56c5tR2l6fBk
+0pSH06Gg2s0aQg20NdMIr+HaYsVdJRsE1FgQ2tlfFx9rGkcqhgwV3Za/abgtRb2o
+YVwps28DLm41DXf5DnXK+BXFHrtR/3YAZtga+R7OL/RvcF0kc2kudlxqd/8Y33uL
+nqnoATy31FTW4J4rEfanJTQgTpatZmbaLQ==
+-----END CERTIFICATE-----
diff --git a/repos/system_upgrade/common/files/prod-certs/9.4/486.pem b/repos/system_upgrade/common/files/prod-certs/9.4/486.pem
new file mode 100644
index 00000000..8c6cc292
--- /dev/null
+++ b/repos/system_upgrade/common/files/prod-certs/9.4/486.pem
@@ -0,0 +1,35 @@
+-----BEGIN CERTIFICATE-----
+MIIGJDCCBAygAwIBAgIJALDxRLt/tVDmMA0GCSqGSIb3DQEBCwUAMIGuMQswCQYD
+VQQGEwJVUzEXMBUGA1UECAwOTm9ydGggQ2Fyb2xpbmExFjAUBgNVBAoMDVJlZCBI
+YXQsIEluYy4xGDAWBgNVBAsMD1JlZCBIYXQgTmV0d29yazEuMCwGA1UEAwwlUmVk
+IEhhdCBFbnRpdGxlbWVudCBQcm9kdWN0IEF1dGhvcml0eTEkMCIGCSqGSIb3DQEJ
+ARYVY2Etc3VwcG9ydEByZWRoYXQuY29tMB4XDTIzMDcxOTE2MzQ1NFoXDTQzMDcx
+OTE2MzQ1NFowRDFCMEAGA1UEAww5UmVkIEhhdCBQcm9kdWN0IElEIFthMThhM2Iz
+MC01MTIxLTQ4YmYtOWFjYS01YWUwMTY5Zjk3MDFdMIICIjANBgkqhkiG9w0BAQEF
+AAOCAg8AMIICCgKCAgEAxj9J04z+Ezdyx1U33kFftLv0ntNS1BSeuhoZLDhs18yk
+sepG7hXXtHh2CMFfLZmTjAyL9i1XsxykQpVQdXTGpUF33C2qBQHB5glYs9+d781x
+8p8m8zFxbPcW82TIJXbgW3ErVh8vk5qCbG1cCAAHb+DWMq0EAyy1bl/JgAghYNGB
+RvKJObTdCrdpYh02KUqBLkSPZHvo6DUJFN37MXDpVeQq9VtqRjpKLLwuEfXb0Y7I
+5xEOrR3kYbOaBAWVt3mYZ1t0L/KfY2jVOdU5WFyyB9PhbMdLi1xE801j+GJrwcLa
+xmqvj4UaICRzcPATP86zVM1BBQa+lilkRQes5HyjZzZDiGYudnXhbqmLo/n0cuXo
+QBVVjhzRTMx71Eiiahmiw+U1vGqkHhQNxb13HtN1lcAhUCDrxxeMvrAjYdWpYlpI
+yW3NssPWt1YUHidMBSAJ4KctIf91dyE93aStlxwC/QnyFsZOmcEsBzVCnz9GmWMl
+1/6XzBS1yDUqByklx0TLH+z/sK9A+O2rZAy1mByCYwVxvbOZhnqGxAuToIS+A81v
+5hCjsCiOScVB+cil30YBu0cH85RZ0ILNkHdKdrLLWW4wjphK2nBn2g2i3+ztf+nQ
+ED2pQqZ/rhuW79jcyCZl9kXqe1wOdF0Cwah4N6/3LzIXEEKyEJxNqQwtNc2IVE8C
+AwEAAaOBrTCBqjAJBgNVHRMEAjAAMDoGDCsGAQQBkggJAYNmAQQqDChSZWQgSGF0
+IEVudGVycHJpc2UgTGludXggZm9yIHg4Nl82NCBCZXRhMBoGDCsGAQQBkggJAYNm
+AgQKDAg5LjQgQmV0YTAYBgwrBgEEAZIICQGDZgMECAwGeDg2XzY0MCsGDCsGAQQB
+kggJAYNmBAQbDBlyaGVsLTkscmhlbC05LWJldGEteDg2XzY0MA0GCSqGSIb3DQEB
+CwUAA4ICAQCKLxIlbpPv+pvTx79IsbuZeTgjeTyJ5swT0R6WoAgjjVf3BInjnu5n
+tOqxTFy9f6Vg1sU8/DCNQdY87gQmnDLgx+E/fJRb3DlBqTVMdRQbafdS8H0PK/A8
+wnGuwfiI6IUv/G1nb4Gp9SxzBO6c6iJDfp+UN/v+i0FxpIwq5n5vsGDx9qG7YkC/
+wfgiXB7dvzMjx9GIf0Q0ouTMrB0CN07CBa5qwjLLVAOV4jfXl/PK6DbhmIjCsDEp
+BWmHZKVvn610301W/efrMtzZjH9KgIMmylEPY3QrYXaFjZcKRAl/jEGTSROQmycY
+hF+pmKKqqzRT6ab3aM6zO4LoMj8+VgyJOn1Pep7ETb3uxReYZU0vSKCqa0dYcpsP
+ufmLLYmAThwEoOEEQMn0zOFDLhdBKiP+JaBWVFLyVVquEfWVEsIVGamAdVZUDX1v
+ILhzV4imgboajVPYo/C5yEsuHPkw8idA2L9phZY9kPY2DhYBnfV2ccQSik5wBKpf
+lWajuFMSQFNiUet43YHQGzqmZLA08PgoaQkLRfENTvlhHFOrphnoIu4yNbdzuM3y
+bOjGFem5WwOPwPBs7m0wEpvpUp4UoqbIn6vihtLq7q2mFxwz/iDh7rHDrTkMD7fB
+nSrKb/v4Gnp2k+/fU52rWaV2tjesevGJeWw17YMerzZYhrF+KTt3pQ==
+-----END CERTIFICATE-----
diff --git a/repos/system_upgrade/common/files/prod-certs/9.4/72.pem b/repos/system_upgrade/common/files/prod-certs/9.4/72.pem
new file mode 100644
index 00000000..d5832c16
--- /dev/null
+++ b/repos/system_upgrade/common/files/prod-certs/9.4/72.pem
@@ -0,0 +1,35 @@
+-----BEGIN CERTIFICATE-----
+MIIGFjCCA/6gAwIBAgIJALDxRLt/tVDPMA0GCSqGSIb3DQEBCwUAMIGuMQswCQYD
+VQQGEwJVUzEXMBUGA1UECAwOTm9ydGggQ2Fyb2xpbmExFjAUBgNVBAoMDVJlZCBI
+YXQsIEluYy4xGDAWBgNVBAsMD1JlZCBIYXQgTmV0d29yazEuMCwGA1UEAwwlUmVk
+IEhhdCBFbnRpdGxlbWVudCBQcm9kdWN0IEF1dGhvcml0eTEkMCIGCSqGSIb3DQEJ
+ARYVY2Etc3VwcG9ydEByZWRoYXQuY29tMB4XDTIzMDcxOTE2MzQwOFoXDTQzMDcx
+OTE2MzQwOFowRDFCMEAGA1UEAww5UmVkIEhhdCBQcm9kdWN0IElEIFszYzk0ZTRj
+OS1kYjU5LTQ2ZDktYjBmNS04YmZmNDRkMDFiMjVdMIICIjANBgkqhkiG9w0BAQEF
+AAOCAg8AMIICCgKCAgEAxj9J04z+Ezdyx1U33kFftLv0ntNS1BSeuhoZLDhs18yk
+sepG7hXXtHh2CMFfLZmTjAyL9i1XsxykQpVQdXTGpUF33C2qBQHB5glYs9+d781x
+8p8m8zFxbPcW82TIJXbgW3ErVh8vk5qCbG1cCAAHb+DWMq0EAyy1bl/JgAghYNGB
+RvKJObTdCrdpYh02KUqBLkSPZHvo6DUJFN37MXDpVeQq9VtqRjpKLLwuEfXb0Y7I
+5xEOrR3kYbOaBAWVt3mYZ1t0L/KfY2jVOdU5WFyyB9PhbMdLi1xE801j+GJrwcLa
+xmqvj4UaICRzcPATP86zVM1BBQa+lilkRQes5HyjZzZDiGYudnXhbqmLo/n0cuXo
+QBVVjhzRTMx71Eiiahmiw+U1vGqkHhQNxb13HtN1lcAhUCDrxxeMvrAjYdWpYlpI
+yW3NssPWt1YUHidMBSAJ4KctIf91dyE93aStlxwC/QnyFsZOmcEsBzVCnz9GmWMl
+1/6XzBS1yDUqByklx0TLH+z/sK9A+O2rZAy1mByCYwVxvbOZhnqGxAuToIS+A81v
+5hCjsCiOScVB+cil30YBu0cH85RZ0ILNkHdKdrLLWW4wjphK2nBn2g2i3+ztf+nQ
+ED2pQqZ/rhuW79jcyCZl9kXqe1wOdF0Cwah4N6/3LzIXEEKyEJxNqQwtNc2IVE8C
+AwEAAaOBnzCBnDAJBgNVHRMEAjAAMDsGCysGAQQBkggJAUgBBCwMKlJlZCBIYXQg
+RW50ZXJwcmlzZSBMaW51eCBmb3IgSUJNIHogU3lzdGVtczAUBgsrBgEEAZIICQFI
+AgQFDAM5LjQwFgYLKwYBBAGSCAkBSAMEBwwFczM5MHgwJAYLKwYBBAGSCAkBSAQE
+FQwTcmhlbC05LHJoZWwtOS1zMzkweDANBgkqhkiG9w0BAQsFAAOCAgEAvzalgsaq
+pRPmiEeCjm43KBazl284ua9GBeDVjKAWrlAUmoa6HROrU5x55qH4VMQlDB8q0GIb
+cF5Nde2EhIDeTTomrSD8bA0I/vwAF4mxsxA9Qjm2NqaRN8AwLkhy9Mnl/SDZXarB
+ebOtwSlI7NUFj8+2C6kVCAV37EA2TMkBOjleBVU9y16yFnbgmVoJZQ9DeZreWt/i
+igkpybNE5rdqbnp/cXMgsZgisGt2SyHa6oyuUK/goDN0MAfVrLf7JJWZY7r6Q/Yy
+8NRvIzniWAZEkX6ywoT9f5GsVuiOzGSIvf0uSS9cPrKxSbZeliVSpwZk7GLr5cv/
+rxjEuNNPTv/+KqEfrACAPqx4IuCd+wRD2qbhiWwfG/XBd0qnHbw+TyUdhzVxgVj7
+7curyQUSqJtpAQ868cdGBoqpCR6yV4ZN4ZekqmPdcmGXIBWsvI3Arv7BZO9P4Pt9
+yxBA4hwP6X6+PsVVdOdSV48m6bcFj8QCy1+Q6OyEDtY5NGNISlxa4U4613jKc/rA
+4NAc6sbqaLtRhEC3Bx4jCIP/+ReY+C8RR3569HCz1NU8Bb+xRXsRiV8Zgj8eKSMJ
+6+RrbOCb+MooF1HMPtaSgJJNOkcVFdHAw9xz0iFf2TWm8yVyZtLh0g9pYT+n8UiF
+ILtIL4wWtg67tJLTuXJ2QwLpu/Eow7CXT6M=
+-----END CERTIFICATE-----
--
2.41.0

View File

@ -1,29 +0,0 @@
From 81e85bd5ebadfa90851e22999a851375f7de363e Mon Sep 17 00:00:00 2001
From: Petr Stodulka <pstodulk@redhat.com>
Date: Thu, 16 Nov 2023 09:30:22 +0100
Subject: [PATCH 34/38] pylint: ignore too-many-lines
It limits modules to 1000 lines of code, and the targetuserspacecreator
actor's library is beyond that limit. This is not the type of problem
we want to deal with anyway.
---
.pylintrc | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/.pylintrc b/.pylintrc
index 0adb7dcc..57259bcb 100644
--- a/.pylintrc
+++ b/.pylintrc
@@ -56,7 +56,8 @@ disable=
use-dict-literal,
redundant-u-string-prefix, # still have py2 to support
logging-format-interpolation,
- logging-not-lazy
+ logging-not-lazy,
+ too-many-lines # we do not want to take care about that one
[FORMAT]
# Maximum number of characters on a single line.
--
2.41.0

View File

@ -1,66 +0,0 @@
From 5a3bded4be67e6a99ba739e15c3b9d533134f35b Mon Sep 17 00:00:00 2001
From: Petr Stodulka <pstodulk@redhat.com>
Date: Thu, 16 Nov 2023 08:46:55 +0100
Subject: [PATCH 35/38] Update upgrade paths: Add 8.10/9.4
Adding upgrade paths (RHEL and RHEL with SAP HANA):
7.9 -> 8.10
8.10 -> 9.4
The following upgrade paths will be dropped later in this release.
Consider them deprecated now.
7.9 -> 8.6 (RHEL and RHEL with SAP HANA)
7.9 -> 8.9
8.6 -> 9.0 (RHEL and RHEL with SAP HANA)
8.9 -> 9.3
---
.../system_upgrade/common/files/upgrade_paths.json | 14 ++++++++------
.../common/libraries/config/version.py | 2 +-
2 files changed, 9 insertions(+), 7 deletions(-)
diff --git a/repos/system_upgrade/common/files/upgrade_paths.json b/repos/system_upgrade/common/files/upgrade_paths.json
index 2069e26d..25c6db7c 100644
--- a/repos/system_upgrade/common/files/upgrade_paths.json
+++ b/repos/system_upgrade/common/files/upgrade_paths.json
@@ -1,17 +1,19 @@
{
"default": {
- "7.9": ["8.6", "8.8", "8.9"],
+ "7.9": ["8.6", "8.8", "8.9", "8.10"],
"8.6": ["9.0"],
"8.8": ["9.2"],
"8.9": ["9.3"],
- "7": ["8.6", "8.8", "8.9"],
- "8": ["9.3"]
+ "8.10": ["9.4"],
+ "7": ["8.6", "8.8", "8.9", "8.10"],
+ "8": ["9.0", "9.2", "9.3", "9.4"]
},
"saphana": {
- "7.9": ["8.8", "8.6"],
- "7": ["8.8", "8.6"],
+ "7.9": ["8.6", "8.10", "8.8"],
+ "7": ["8.6", "8.10", "8.8"],
"8.6": ["9.0"],
"8.8": ["9.2"],
- "8": ["9.2", "9.0"]
+ "8.10": ["9.4"],
+ "8": ["9.0", "9.4", "9.2"]
}
}
diff --git a/repos/system_upgrade/common/libraries/config/version.py b/repos/system_upgrade/common/libraries/config/version.py
index 0f1e5874..12598960 100644
--- a/repos/system_upgrade/common/libraries/config/version.py
+++ b/repos/system_upgrade/common/libraries/config/version.py
@@ -16,7 +16,7 @@ OP_MAP = {
_SUPPORTED_VERSIONS = {
# Note: 'rhel-alt' is detected when on 'rhel' with kernel 4.x
'7': {'rhel': ['7.9'], 'rhel-alt': [], 'rhel-saphana': ['7.9']},
- '8': {'rhel': ['8.6', '8.8', '8.9'], 'rhel-saphana': ['8.6', '8.8']},
+ '8': {'rhel': ['8.6', '8.8', '8.9', '8.10'], 'rhel-saphana': ['8.6', '8.8', '8.10']},
}
--
2.41.0
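
As an illustration of how the updated upgrade_paths.json is read, here is a short sketch (not part of the patch): the file path and flavour keys come from the hunk above, while the helper itself is hypothetical. The mapping keys are exact minor releases, with bare major versions as a fallback:

import json

UPGRADE_PATHS = 'repos/system_upgrade/common/files/upgrade_paths.json'

def allowed_targets(source_version, flavour='default'):
    # Exact minor release first (e.g. '8.10'), then the bare major ('8') as fallback.
    with open(UPGRADE_PATHS) as f:
        paths = json.load(f)[flavour]
    return paths.get(source_version) or paths.get(source_version.split('.')[0], [])

print(allowed_targets('8.10'))            # ['9.4']
print(allowed_targets('7.9', 'saphana'))  # ['8.6', '8.10', '8.8']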

View File

@ -1,275 +0,0 @@
From 5bd7bdf5e9c81ec306e567a147dc270adfd27da2 Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Tue, 14 Nov 2023 09:51:41 +0100
Subject: [PATCH 36/38] Copy dnf.conf to target userspace and allow a custom
one
This change allows working around the fact that source and target
`dnf.conf` files might be incompatible. For example some of the proxy
configuration between RHEL7 and RHEL8.
Target system compatible configuration can be specified in
/etc/leapp/files/dnf.conf. If this file is present it is copied into
the target userspace and also applied to the target system. If it
doesn't exist, the `/etc/dnf/dnf.conf` from the source system will be
copied instead.
Errors that could be caused by an incompatible or incorrect proxy
configuration now contain a hint with a remediation describing the steps
mentioned above.
* pstodulk@redhat.com: Updated text in the error msg.
Jira: OAMG-6544
---
.../common/actors/applycustomdnfconf/actor.py | 19 ++++++++++++++
.../libraries/applycustomdnfconf.py | 15 +++++++++++
.../tests/test_applycustomdnfconf.py | 23 ++++++++++++++++
.../copydnfconfintotargetuserspace/actor.py | 24 +++++++++++++++++
.../copydnfconfintotargetuserspace.py | 19 ++++++++++++++
.../tests/test_dnfconfuserspacecopy.py | 26 +++++++++++++++++++
.../libraries/userspacegen.py | 18 ++++++++++---
.../common/libraries/dnfplugin.py | 24 ++++++++++++++++-
8 files changed, 163 insertions(+), 5 deletions(-)
create mode 100644 repos/system_upgrade/common/actors/applycustomdnfconf/actor.py
create mode 100644 repos/system_upgrade/common/actors/applycustomdnfconf/libraries/applycustomdnfconf.py
create mode 100644 repos/system_upgrade/common/actors/applycustomdnfconf/tests/test_applycustomdnfconf.py
create mode 100644 repos/system_upgrade/common/actors/copydnfconfintotargetuserspace/actor.py
create mode 100644 repos/system_upgrade/common/actors/copydnfconfintotargetuserspace/libraries/copydnfconfintotargetuserspace.py
create mode 100644 repos/system_upgrade/common/actors/copydnfconfintotargetuserspace/tests/test_dnfconfuserspacecopy.py
diff --git a/repos/system_upgrade/common/actors/applycustomdnfconf/actor.py b/repos/system_upgrade/common/actors/applycustomdnfconf/actor.py
new file mode 100644
index 00000000..d7c7fe87
--- /dev/null
+++ b/repos/system_upgrade/common/actors/applycustomdnfconf/actor.py
@@ -0,0 +1,19 @@
+from leapp.actors import Actor
+from leapp.libraries.actor import applycustomdnfconf
+from leapp.tags import ApplicationsPhaseTag, IPUWorkflowTag
+
+
+class ApplyCustomDNFConf(Actor):
+ """
+ Move /etc/leapp/files/dnf.conf to /etc/dnf/dnf.conf if it exists
+
+ An actor in FactsPhase copies this file to the target userspace if present.
+ In such case we also want to use the file on the target system.
+ """
+ name = "apply_custom_dnf_conf"
+ consumes = ()
+ produces = ()
+ tags = (ApplicationsPhaseTag, IPUWorkflowTag)
+
+ def process(self):
+ applycustomdnfconf.process()
diff --git a/repos/system_upgrade/common/actors/applycustomdnfconf/libraries/applycustomdnfconf.py b/repos/system_upgrade/common/actors/applycustomdnfconf/libraries/applycustomdnfconf.py
new file mode 100644
index 00000000..2eabd678
--- /dev/null
+++ b/repos/system_upgrade/common/actors/applycustomdnfconf/libraries/applycustomdnfconf.py
@@ -0,0 +1,15 @@
+import os
+
+from leapp.libraries.stdlib import api, CalledProcessError, run
+
+CUSTOM_DNF_CONF_PATH = "/etc/leapp/files/dnf.conf"
+
+
+def process():
+ if os.path.exists(CUSTOM_DNF_CONF_PATH):
+ try:
+ run(["mv", CUSTOM_DNF_CONF_PATH, "/etc/dnf/dnf.conf"])
+ except (CalledProcessError, OSError) as e:
+ api.current_logger().debug(
+ "Failed to move /etc/leapp/files/dnf.conf to /etc/dnf/dnf.conf: {}".format(e)
+ )
diff --git a/repos/system_upgrade/common/actors/applycustomdnfconf/tests/test_applycustomdnfconf.py b/repos/system_upgrade/common/actors/applycustomdnfconf/tests/test_applycustomdnfconf.py
new file mode 100644
index 00000000..6dbc4291
--- /dev/null
+++ b/repos/system_upgrade/common/actors/applycustomdnfconf/tests/test_applycustomdnfconf.py
@@ -0,0 +1,23 @@
+import os
+
+import pytest
+
+from leapp.libraries.actor import applycustomdnfconf
+
+
+@pytest.mark.parametrize(
+ "exists,should_move",
+ [(False, False), (True, True)],
+)
+def test_copy_correct_dnf_conf(monkeypatch, exists, should_move):
+ monkeypatch.setattr(os.path, "exists", lambda _: exists)
+
+ run_called = [False]
+
+ def mocked_run(_):
+ run_called[0] = True
+
+ monkeypatch.setattr(applycustomdnfconf, 'run', mocked_run)
+
+ applycustomdnfconf.process()
+ assert run_called[0] == should_move
diff --git a/repos/system_upgrade/common/actors/copydnfconfintotargetuserspace/actor.py b/repos/system_upgrade/common/actors/copydnfconfintotargetuserspace/actor.py
new file mode 100644
index 00000000..46ce1934
--- /dev/null
+++ b/repos/system_upgrade/common/actors/copydnfconfintotargetuserspace/actor.py
@@ -0,0 +1,24 @@
+from leapp.actors import Actor
+from leapp.libraries.actor import copydnfconfintotargetuserspace
+from leapp.models import TargetUserSpacePreupgradeTasks
+from leapp.tags import FactsPhaseTag, IPUWorkflowTag
+
+
+class CopyDNFConfIntoTargetUserspace(Actor):
+ """
+ Copy dnf.conf into target userspace
+
+ Copies /etc/leapp/files/dnf.conf to target userspace. If it isn't available
+ /etc/dnf/dnf.conf is copied instead. This allows specifying a different
+ config for the target userspace, which might be required if the source
+ system configuration file isn't compatible with the target one. One such
+ example is incompatible proxy configuration between RHEL7 and RHEL8 DNF
+ versions.
+ """
+ name = "copy_dnf_conf_into_target_userspace"
+ consumes = ()
+ produces = (TargetUserSpacePreupgradeTasks,)
+ tags = (FactsPhaseTag, IPUWorkflowTag)
+
+ def process(self):
+ copydnfconfintotargetuserspace.process()
diff --git a/repos/system_upgrade/common/actors/copydnfconfintotargetuserspace/libraries/copydnfconfintotargetuserspace.py b/repos/system_upgrade/common/actors/copydnfconfintotargetuserspace/libraries/copydnfconfintotargetuserspace.py
new file mode 100644
index 00000000..4e74acdb
--- /dev/null
+++ b/repos/system_upgrade/common/actors/copydnfconfintotargetuserspace/libraries/copydnfconfintotargetuserspace.py
@@ -0,0 +1,19 @@
+import os
+
+from leapp.libraries.stdlib import api
+from leapp.models import CopyFile, TargetUserSpacePreupgradeTasks
+
+
+def process():
+ src = "/etc/dnf/dnf.conf"
+ if os.path.exists("/etc/leapp/files/dnf.conf"):
+ src = "/etc/leapp/files/dnf.conf"
+
+ api.current_logger().debug(
+ "Copying dnf.conf at {} to the target userspace".format(src)
+ )
+ api.produce(
+ TargetUserSpacePreupgradeTasks(
+ copy_files=[CopyFile(src=src, dst="/etc/dnf/dnf.conf")]
+ )
+ )
diff --git a/repos/system_upgrade/common/actors/copydnfconfintotargetuserspace/tests/test_dnfconfuserspacecopy.py b/repos/system_upgrade/common/actors/copydnfconfintotargetuserspace/tests/test_dnfconfuserspacecopy.py
new file mode 100644
index 00000000..6c99925e
--- /dev/null
+++ b/repos/system_upgrade/common/actors/copydnfconfintotargetuserspace/tests/test_dnfconfuserspacecopy.py
@@ -0,0 +1,26 @@
+import os
+
+import pytest
+
+from leapp.libraries.actor import copydnfconfintotargetuserspace
+from leapp.libraries.common.testutils import logger_mocked, produce_mocked
+
+
+@pytest.mark.parametrize(
+ "userspace_conf_exists,expected",
+ [(False, "/etc/dnf/dnf.conf"), (True, "/etc/leapp/files/dnf.conf")],
+)
+def test_copy_correct_dnf_conf(monkeypatch, userspace_conf_exists, expected):
+ monkeypatch.setattr(os.path, "exists", lambda _: userspace_conf_exists)
+
+ mocked_produce = produce_mocked()
+ monkeypatch.setattr(copydnfconfintotargetuserspace.api, 'produce', mocked_produce)
+ monkeypatch.setattr(copydnfconfintotargetuserspace.api, 'current_logger', logger_mocked())
+
+ copydnfconfintotargetuserspace.process()
+
+ assert mocked_produce.called == 1
+ assert len(mocked_produce.model_instances) == 1
+ assert len(mocked_produce.model_instances[0].copy_files) == 1
+ assert mocked_produce.model_instances[0].copy_files[0].src == expected
+ assert mocked_produce.model_instances[0].copy_files[0].dst == "/etc/dnf/dnf.conf"
diff --git a/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py b/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py
index 050ad7fe..e015a741 100644
--- a/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py
+++ b/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py
@@ -269,15 +269,25 @@ def prepare_target_userspace(context, userspace_dir, enabled_repos, packages):
# failed since leapp does not support updates behind proxy yet.
for manager_info in api.consume(PkgManagerInfo):
if manager_info.configured_proxies:
- details['details'] = ("DNF failed to install userspace packages, likely due to the proxy "
- "configuration detected in the YUM/DNF configuration file.")
+ details['details'] = (
+ "DNF failed to install userspace packages, likely due to the proxy "
+ "configuration detected in the YUM/DNF configuration file. "
+ "Make sure the proxy is properly configured in /etc/dnf/dnf.conf. "
+ "It's also possible the proxy settings in the DNF configuration file are "
+ "incompatible with the target system. A compatible configuration can be "
+ "placed in /etc/leapp/files/dnf.conf which, if present, will be used during "
+ "the upgrade instead of /etc/dnf/dnf.conf. "
+ "In such case the configuration will also be applied to the target system."
+ )
# Similarly if a proxy was set specifically for one of the repositories.
for repo_facts in api.consume(RepositoriesFacts):
for repo_file in repo_facts.repositories:
if any(repo_data.proxy and repo_data.enabled for repo_data in repo_file.data):
- details['details'] = ("DNF failed to install userspace packages, likely due to the proxy "
- "configuration detected in a repository configuration file.")
+ details['details'] = (
+ "DNF failed to install userspace packages, likely due to the proxy "
+ "configuration detected in a repository configuration file."
+ )
raise StopActorExecutionError(message=message, details=details)
diff --git a/repos/system_upgrade/common/libraries/dnfplugin.py b/repos/system_upgrade/common/libraries/dnfplugin.py
index 26810e94..d3ec5901 100644
--- a/repos/system_upgrade/common/libraries/dnfplugin.py
+++ b/repos/system_upgrade/common/libraries/dnfplugin.py
@@ -178,8 +178,30 @@ def _handle_transaction_err_msg(stage, xfs_info, err, is_container=False):
return # not needed actually as the above function raises error, but for visibility
NO_SPACE_STR = 'more space needed on the'
message = 'DNF execution failed with non zero exit code.'
- details = {'STDOUT': err.stdout, 'STDERR': err.stderr}
if NO_SPACE_STR not in err.stderr:
+ # if there was a problem reaching repos and proxy is configured in DNF/YUM configs, the
+ # proxy is likely the problem.
+ # NOTE(mmatuska): We can't consistently detect there was a problem reaching some repos,
+ # because it isn't clear what are all the possible DNF error messages we can encounter,
+ # such as: "Failed to synchronize cache for repo ..." or "Errors during downloading
+ # metadata for repository" or "No more mirrors to try - All mirrors were already tried
+ # without success"
+ # NOTE(mmatuska): We could check PkgManagerInfo to detect if proxy is indeed configured,
+ # however it would be pretty ugly to pass it all the way down here
+ proxy_hint = (
+ "If there was a problem reaching remote content (see stderr output) and proxy is "
+ "configured in the YUM/DNF configuration file, the proxy configuration is likely "
+ "causing this error. "
+ "Make sure the proxy is properly configured in /etc/dnf/dnf.conf. "
+ "It's also possible the proxy settings in the DNF configuration file are "
+ "incompatible with the target system. A compatible configuration can be "
+ "placed in /etc/leapp/files/dnf.conf which, if present, it will be used during "
+ "some parts of the upgrade instead of original /etc/dnf/dnf.conf. "
+ "In such case the configuration will also be applied to the target system. "
+ "Note that /etc/dnf/dnf.conf needs to be still configured correctly "
+ "for your current system to pass the early phases of the upgrade process."
+ )
+ details = {'STDOUT': err.stdout, 'STDERR': err.stderr, 'hint': proxy_hint}
raise StopActorExecutionError(message=message, details=details)
# Disk Requirements:
--
2.41.0
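
For illustration only (not part of the patch above): the override described in the new hints is a plain dnf.conf INI file placed at /etc/leapp/files/dnf.conf. A minimal sketch, assuming only the Python standard library (the helper name and messages are made up), that checks whether such an override exists and whether it carries a parsable proxy option before the upgrade is started:

    import os
    try:
        import configparser                  # Python 3
    except ImportError:
        import ConfigParser as configparser  # Python 2

    def check_dnf_conf_override(path='/etc/leapp/files/dnf.conf'):
        # Mirrors the selection logic above: the override wins when it exists,
        # otherwise /etc/dnf/dnf.conf is used as-is.
        if not os.path.exists(path):
            print('{} not found; /etc/dnf/dnf.conf would be used'.format(path))
            return
        parser = configparser.ConfigParser()
        parser.read(path)
        if parser.has_option('main', 'proxy'):
            print('override proxy: {}'.format(parser.get('main', 'proxy')))
        else:
            print('no proxy option set in the override file')

    if __name__ == '__main__':
        check_dnf_conf_override()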

View File

@ -1,36 +0,0 @@
From 51fd0cc817aa9efea24d62e735fdc47133b1622b Mon Sep 17 00:00:00 2001
From: Petr Stodulka <pstodulk@redhat.com>
Date: Thu, 16 Nov 2023 18:30:53 +0100
Subject: [PATCH 37/38] adjustlocalrepos: suppress unwanted deprecation report
---
repos/system_upgrade/common/actors/adjustlocalrepos/actor.py | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/repos/system_upgrade/common/actors/adjustlocalrepos/actor.py b/repos/system_upgrade/common/actors/adjustlocalrepos/actor.py
index 064e7f3e..0d0cc1d0 100644
--- a/repos/system_upgrade/common/actors/adjustlocalrepos/actor.py
+++ b/repos/system_upgrade/common/actors/adjustlocalrepos/actor.py
@@ -9,8 +9,10 @@ from leapp.models import (
UsedTargetRepositories
)
from leapp.tags import IPUWorkflowTag, TargetTransactionChecksPhaseTag
+from leapp.utils.deprecation import suppress_deprecation
+@suppress_deprecation(TMPTargetRepositoriesFacts)
class AdjustLocalRepos(Actor):
"""
Adjust local repositories to the target user-space container.
@@ -25,7 +27,7 @@ class AdjustLocalRepos(Actor):
name = 'adjust_local_repos'
consumes = (TargetOSInstallationImage,
TargetUserSpaceInfo,
- TMPTargetRepositoriesFacts,
+ TMPTargetRepositoriesFacts, # deprecated
UsedTargetRepositories)
produces = ()
tags = (IPUWorkflowTag, TargetTransactionChecksPhaseTag)
--
2.41.0

View File

@ -1,616 +0,0 @@
From 7dabc85a0ab5595bd4c7b232c78f14d04eed40fc Mon Sep 17 00:00:00 2001
From: PeterMocary <petermocary@gmail.com>
Date: Tue, 22 Aug 2023 17:03:48 +0200
Subject: [PATCH 38/38] add detection for custom libraries registered by
ld.so.conf
The in-place upgrade process does not support custom libraries
and also does not handle customized configuration of the dynamic linker.
In such a case it can happen (and it happens) that the upgrade could
break in critical phases when linked libraries dissapear or are not
compatible with the new system.
We cannot decide whether or not such a custom configuration affects
the upgrade negatively, so let's detect any customisations
or unexpected configurations related to the dynamic linker and in such
a case generate a high severity report, informing the user about the
possible impact on the upgrade process.
Currently it detects:
* modified default LD configuration: /etc/ld.so.conf
* drop-in configuration files under /etc/ld.so.conf.d/ that are
not owned by any RHEL RPMs
* envars: LD_LIBRARY_PATH, LD_PRELOAD
Jira ref.: OAMG-4460 / RHEL-11958
BZ ref.: BZ 1927700
---
.../checkdynamiclinkerconfiguration/actor.py | 22 +++
.../checkdynamiclinkerconfiguration.py | 79 ++++++++
.../test_checkdynamiclinkerconfiguration.py | 65 +++++++
.../scandynamiclinkerconfiguration/actor.py | 23 +++
.../scandynamiclinkerconfiguration.py | 117 +++++++++++
.../test_scandynamiclinkerconfiguration.py | 181 ++++++++++++++++++
.../common/models/dynamiclinker.py | 41 ++++
7 files changed, 528 insertions(+)
create mode 100644 repos/system_upgrade/common/actors/checkdynamiclinkerconfiguration/actor.py
create mode 100644 repos/system_upgrade/common/actors/checkdynamiclinkerconfiguration/libraries/checkdynamiclinkerconfiguration.py
create mode 100644 repos/system_upgrade/common/actors/checkdynamiclinkerconfiguration/tests/test_checkdynamiclinkerconfiguration.py
create mode 100644 repos/system_upgrade/common/actors/scandynamiclinkerconfiguration/actor.py
create mode 100644 repos/system_upgrade/common/actors/scandynamiclinkerconfiguration/libraries/scandynamiclinkerconfiguration.py
create mode 100644 repos/system_upgrade/common/actors/scandynamiclinkerconfiguration/tests/test_scandynamiclinkerconfiguration.py
create mode 100644 repos/system_upgrade/common/models/dynamiclinker.py
diff --git a/repos/system_upgrade/common/actors/checkdynamiclinkerconfiguration/actor.py b/repos/system_upgrade/common/actors/checkdynamiclinkerconfiguration/actor.py
new file mode 100644
index 00000000..6671eef4
--- /dev/null
+++ b/repos/system_upgrade/common/actors/checkdynamiclinkerconfiguration/actor.py
@@ -0,0 +1,22 @@
+from leapp.actors import Actor
+from leapp.libraries.actor.checkdynamiclinkerconfiguration import check_dynamic_linker_configuration
+from leapp.models import DynamicLinkerConfiguration, Report
+from leapp.tags import ChecksPhaseTag, IPUWorkflowTag
+
+
+class CheckDynamicLinkerConfiguration(Actor):
+ """
+ Check for customization of dynamic linker configuration.
+
+ The in-place upgrade could potentially be impacted in a negative way due
+ to the customization of dynamic linker configuration by the user. This actor creates a high
+ severity report upon detecting such customization.
+ """
+
+ name = 'check_dynamic_linker_configuration'
+ consumes = (DynamicLinkerConfiguration,)
+ produces = (Report,)
+ tags = (ChecksPhaseTag, IPUWorkflowTag)
+
+ def process(self):
+ check_dynamic_linker_configuration()
diff --git a/repos/system_upgrade/common/actors/checkdynamiclinkerconfiguration/libraries/checkdynamiclinkerconfiguration.py b/repos/system_upgrade/common/actors/checkdynamiclinkerconfiguration/libraries/checkdynamiclinkerconfiguration.py
new file mode 100644
index 00000000..9ead892e
--- /dev/null
+++ b/repos/system_upgrade/common/actors/checkdynamiclinkerconfiguration/libraries/checkdynamiclinkerconfiguration.py
@@ -0,0 +1,79 @@
+from leapp import reporting
+from leapp.libraries.stdlib import api
+from leapp.models import DynamicLinkerConfiguration
+
+LD_SO_CONF_DIR = '/etc/ld.so.conf.d'
+LD_SO_CONF_MAIN = '/etc/ld.so.conf'
+LD_LIBRARY_PATH_VAR = 'LD_LIBRARY_PATH'
+LD_PRELOAD_VAR = 'LD_PRELOAD'
+FMT_LIST_SEPARATOR_1 = '\n- '
+FMT_LIST_SEPARATOR_2 = '\n - '
+
+
+def _report_custom_dynamic_linker_configuration(summary):
+ reporting.create_report([
+ reporting.Title(
+ 'Detected customized configuration for dynamic linker.'
+ ),
+ reporting.Summary(summary),
+ reporting.Remediation(hint=('Remove or revert the custom dynamic linker configurations and apply the changes '
+ 'using the ldconfig command. In case of possible active software collections we '
+ 'suggest disabling them persistently.')),
+ reporting.RelatedResource('file', '/etc/ld.so.conf'),
+ reporting.RelatedResource('directory', '/etc/ld.so.conf.d'),
+ reporting.Severity(reporting.Severity.HIGH),
+ reporting.Groups([reporting.Groups.OS_FACTS]),
+ ])
+
+
+def check_dynamic_linker_configuration():
+ configuration = next(api.consume(DynamicLinkerConfiguration), None)
+ if not configuration:
+ return
+
+ custom_configurations = ''
+ if configuration.main_config.modified:
+ custom_configurations += (
+ '{}The {} file has unexpected contents:{}{}'
+ .format(FMT_LIST_SEPARATOR_1, LD_SO_CONF_MAIN,
+ FMT_LIST_SEPARATOR_2, FMT_LIST_SEPARATOR_2.join(configuration.main_config.modified_lines))
+ )
+
+ custom_configs = []
+ for config in configuration.included_configs:
+ if config.modified:
+ custom_configs.append(config.path)
+
+ if custom_configs:
+ custom_configurations += (
+ '{}The following drop in config files were marked as custom:{}{}'
+ .format(FMT_LIST_SEPARATOR_1, FMT_LIST_SEPARATOR_2, FMT_LIST_SEPARATOR_2.join(custom_configs))
+ )
+
+ if configuration.used_variables:
+ custom_configurations += (
+ '{}The following variables contain unexpected dynamic linker configuration:{}{}'
+ .format(FMT_LIST_SEPARATOR_1, FMT_LIST_SEPARATOR_2,
+ FMT_LIST_SEPARATOR_2.join(configuration.used_variables))
+ )
+
+ if custom_configurations:
+ summary = (
+ 'Custom configurations to the dynamic linker could potentially impact '
+ 'the upgrade in a negative way. The custom configuration includes '
+ 'modifications to {main_conf}, custom or modified drop in config '
+ 'files in the {conf_dir} directory and additional entries in the '
+ '{ldlib_envar} or {ldpre_envar} variables. These modifications '
+ 'configure the dynamic linker to use different libraries that might '
+ 'not be provided by Red Hat products or might not be present during '
+ 'the whole upgrade process. The following custom configurations '
+ 'were detected by leapp:{cust_configs}'
+ .format(
+ main_conf=LD_SO_CONF_MAIN,
+ conf_dir=LD_SO_CONF_DIR,
+ ldlib_envar=LD_LIBRARY_PATH_VAR,
+ ldpre_envar=LD_PRELOAD_VAR,
+ cust_configs=custom_configurations
+ )
+ )
+ _report_custom_dynamic_linker_configuration(summary)
diff --git a/repos/system_upgrade/common/actors/checkdynamiclinkerconfiguration/tests/test_checkdynamiclinkerconfiguration.py b/repos/system_upgrade/common/actors/checkdynamiclinkerconfiguration/tests/test_checkdynamiclinkerconfiguration.py
new file mode 100644
index 00000000..d640f0c5
--- /dev/null
+++ b/repos/system_upgrade/common/actors/checkdynamiclinkerconfiguration/tests/test_checkdynamiclinkerconfiguration.py
@@ -0,0 +1,65 @@
+import pytest
+
+from leapp import reporting
+from leapp.libraries.actor.checkdynamiclinkerconfiguration import (
+ check_dynamic_linker_configuration,
+ LD_LIBRARY_PATH_VAR,
+ LD_PRELOAD_VAR
+)
+from leapp.libraries.common.testutils import create_report_mocked, CurrentActorMocked
+from leapp.libraries.stdlib import api
+from leapp.models import DynamicLinkerConfiguration, LDConfigFile, MainLDConfigFile
+
+INCLUDED_CONFIG_PATHS = ['/etc/ld.so.conf.d/dyninst-x86_64.conf',
+ '/etc/ld.so.conf.d/mariadb-x86_64.conf',
+ '/custom/path/custom1.conf']
+
+
+@pytest.mark.parametrize(('included_configs_modifications', 'used_variables', 'modified_lines'),
+ [
+ ([False, False, False], [], []),
+ ([True, True, True], [], []),
+ ([False, False, False], [LD_LIBRARY_PATH_VAR], []),
+ ([False, False, False], [], ['modified line 1', 'midified line 2']),
+ ([True, False, True], [LD_LIBRARY_PATH_VAR, LD_PRELOAD_VAR], ['modified line']),
+ ])
+def test_check_ld_so_configuration(monkeypatch, included_configs_modifications, used_variables, modified_lines):
+ assert len(INCLUDED_CONFIG_PATHS) == len(included_configs_modifications)
+
+ main_config = MainLDConfigFile(path="/etc/ld.so.conf", modified=any(modified_lines), modified_lines=modified_lines)
+ included_configs = []
+ for path, modified in zip(INCLUDED_CONFIG_PATHS, included_configs_modifications):
+ included_configs.append(LDConfigFile(path=path, modified=modified))
+
+ configuration = DynamicLinkerConfiguration(main_config=main_config,
+ included_configs=included_configs,
+ used_variables=used_variables)
+
+ monkeypatch.setattr(api, 'current_actor', CurrentActorMocked(msgs=[configuration]))
+ monkeypatch.setattr(reporting, 'create_report', create_report_mocked())
+
+ check_dynamic_linker_configuration()
+
+ report_expected = any(included_configs_modifications) or modified_lines or used_variables
+ if not report_expected:
+ assert reporting.create_report.called == 0
+ return
+
+ assert reporting.create_report.called == 1
+ assert 'configuration for dynamic linker' in reporting.create_report.reports[0]['title']
+ summary = reporting.create_report.reports[0]['summary']
+
+ if any(included_configs_modifications):
+ assert 'The following drop in config files were marked as custom:' in summary
+ for config, modified in zip(INCLUDED_CONFIG_PATHS, included_configs_modifications):
+ assert modified == (config in summary)
+
+ if modified_lines:
+ assert 'The /etc/ld.so.conf file has unexpected contents' in summary
+ for line in modified_lines:
+ assert line in summary
+
+ if used_variables:
+ assert 'The following variables contain unexpected dynamic linker configuration:' in summary
+ for var in used_variables:
+ assert '- {}'.format(var) in summary
diff --git a/repos/system_upgrade/common/actors/scandynamiclinkerconfiguration/actor.py b/repos/system_upgrade/common/actors/scandynamiclinkerconfiguration/actor.py
new file mode 100644
index 00000000..11283cd0
--- /dev/null
+++ b/repos/system_upgrade/common/actors/scandynamiclinkerconfiguration/actor.py
@@ -0,0 +1,23 @@
+from leapp.actors import Actor
+from leapp.libraries.actor.scandynamiclinkerconfiguration import scan_dynamic_linker_configuration
+from leapp.models import DynamicLinkerConfiguration, InstalledRedHatSignedRPM
+from leapp.tags import FactsPhaseTag, IPUWorkflowTag
+
+
+class ScanDynamicLinkerConfiguration(Actor):
+ """
+ Scan the dynamic linker configuration and find modifications.
+
+ The dynamic linker configuration files can be used to replace standard libraries
+ with different custom libraries. The in-place upgrade does not support customization
+ of this configuration by user. This actor produces information about detected
+ modifications.
+ """
+
+ name = 'scan_dynamic_linker_configuration'
+ consumes = (InstalledRedHatSignedRPM,)
+ produces = (DynamicLinkerConfiguration,)
+ tags = (FactsPhaseTag, IPUWorkflowTag)
+
+ def process(self):
+ scan_dynamic_linker_configuration()
diff --git a/repos/system_upgrade/common/actors/scandynamiclinkerconfiguration/libraries/scandynamiclinkerconfiguration.py b/repos/system_upgrade/common/actors/scandynamiclinkerconfiguration/libraries/scandynamiclinkerconfiguration.py
new file mode 100644
index 00000000..1a6ab6a2
--- /dev/null
+++ b/repos/system_upgrade/common/actors/scandynamiclinkerconfiguration/libraries/scandynamiclinkerconfiguration.py
@@ -0,0 +1,117 @@
+import glob
+import os
+
+from leapp.libraries.common.rpms import has_package
+from leapp.libraries.stdlib import api, CalledProcessError, run
+from leapp.models import DynamicLinkerConfiguration, InstalledRedHatSignedRPM, LDConfigFile, MainLDConfigFile
+
+LD_SO_CONF_DIR = '/etc/ld.so.conf.d'
+LD_SO_CONF_MAIN = '/etc/ld.so.conf'
+LD_SO_CONF_DEFAULT_INCLUDE = 'ld.so.conf.d/*.conf'
+LD_SO_CONF_COMMENT_PREFIX = '#'
+LD_LIBRARY_PATH_VAR = 'LD_LIBRARY_PATH'
+LD_PRELOAD_VAR = 'LD_PRELOAD'
+
+
+def _read_file(file_path):
+ with open(file_path, 'r') as fd:
+ return fd.readlines()
+
+
+def _is_modified(config_path):
+ """ Decide if the configuration file was modified based on the package it belongs to. """
+ result = run(['rpm', '-Vf', config_path], checked=False)
+ if not result['exit_code']:
+ return False
+ modification_flags = result['stdout'].split(' ', 1)[0]
+ # The file is considered modified only when the checksum does not match
+ return '5' in modification_flags
+
+
+def _is_included_config_custom(config_path):
+ if not os.path.isfile(config_path):
+ return False
+
+ # Check if the config file has any lines that have an effect on dynamic linker configuration
+ has_effective_line = False
+ for line in _read_file(config_path):
+ line = line.strip()
+ if line and not line.startswith(LD_SO_CONF_COMMENT_PREFIX):
+ has_effective_line = True
+ break
+
+ if not has_effective_line:
+ return False
+
+ is_custom = False
+ try:
+ package_name = run(['rpm', '-qf', '--queryformat', '%{NAME}', config_path])['stdout']
+ is_custom = not has_package(InstalledRedHatSignedRPM, package_name) or _is_modified(config_path)
+ except CalledProcessError:
+ is_custom = True
+
+ return is_custom
+
+
+def _parse_main_config():
+ """
+ Extracts included configs from the main dynamic linker configuration file (/etc/ld.so.conf)
+ along with lines that are likely custom. The lines considered custom are simply those that are
+ not includes.
+
+ :returns: tuple containing all the included files and lines considered custom
+ :rtype: tuple(list, list)
+ """
+ config = _read_file(LD_SO_CONF_MAIN)
+
+ included_configs = []
+ other_lines = []
+ for line in config:
+ line = line.strip()
+ if line.startswith('include'):
+ cfg_glob = line.split(' ', 1)[1].strip()
+ cfg_glob = os.path.join('/etc', cfg_glob) if not os.path.isabs(cfg_glob) else cfg_glob
+ included_configs.append(cfg_glob)
+ elif line and not line.startswith(LD_SO_CONF_COMMENT_PREFIX):
+ other_lines.append(line)
+
+ return included_configs, other_lines
+
+
+def scan_dynamic_linker_configuration():
+ included_configs, other_lines = _parse_main_config()
+
+ is_default_include_present = '/etc/' + LD_SO_CONF_DEFAULT_INCLUDE in included_configs
+ if not is_default_include_present:
+ api.current_logger().debug('The default include "{}" is not present in '
+ 'the {} file.'.format(LD_SO_CONF_DEFAULT_INCLUDE, LD_SO_CONF_MAIN))
+
+ if is_default_include_present and len(included_configs) != 1:
+ # The additional included configs will most likely be created manually by the user
+ # and therefore will get flagged as custom in the next part of this function
+ api.current_logger().debug('The default include "{}" is not the only include in '
+ 'the {} file.'.format(LD_SO_CONF_DEFAULT_INCLUDE, LD_SO_CONF_MAIN))
+
+ main_config_file = MainLDConfigFile(path=LD_SO_CONF_MAIN, modified=any(other_lines), modified_lines=other_lines)
+
+ # Expand the config paths from globs and ensure uniqueness of resulting paths
+ config_paths = set()
+ for cfg_glob in included_configs:
+ for cfg in glob.glob(cfg_glob):
+ config_paths.add(cfg)
+
+ included_config_files = []
+ for config_path in config_paths:
+ config_file = LDConfigFile(path=config_path, modified=_is_included_config_custom(config_path))
+ included_config_files.append(config_file)
+
+ # Check if dynamic linker variables used for specifying custom libraries are set
+ variables = [LD_LIBRARY_PATH_VAR, LD_PRELOAD_VAR]
+ used_variables = [var for var in variables if os.getenv(var, None)]
+
+ configuration = DynamicLinkerConfiguration(main_config=main_config_file,
+ included_configs=included_config_files,
+ used_variables=used_variables)
+
+ if other_lines or any([config.modified for config in included_config_files]) or used_variables:
+ api.produce(configuration)
diff --git a/repos/system_upgrade/common/actors/scandynamiclinkerconfiguration/tests/test_scandynamiclinkerconfiguration.py b/repos/system_upgrade/common/actors/scandynamiclinkerconfiguration/tests/test_scandynamiclinkerconfiguration.py
new file mode 100644
index 00000000..21144951
--- /dev/null
+++ b/repos/system_upgrade/common/actors/scandynamiclinkerconfiguration/tests/test_scandynamiclinkerconfiguration.py
@@ -0,0 +1,181 @@
+import glob
+import os
+
+import pytest
+
+from leapp import reporting
+from leapp.libraries.actor import scandynamiclinkerconfiguration
+from leapp.libraries.common.testutils import produce_mocked
+from leapp.libraries.stdlib import api, CalledProcessError
+from leapp.models import InstalledRedHatSignedRPM
+
+INCLUDED_CONFIGS_GLOB_DICT_1 = {'/etc/ld.so.conf.d/*.conf': ['/etc/ld.so.conf.d/dyninst-x86_64.conf',
+ '/etc/ld.so.conf.d/mariadb-x86_64.conf',
+ '/etc/ld.so.conf.d/bind-export-x86_64.conf']}
+
+INCLUDED_CONFIGS_GLOB_DICT_2 = {'/etc/ld.so.conf.d/*.conf': ['/etc/ld.so.conf.d/dyninst-x86_64.conf',
+ '/etc/ld.so.conf.d/mariadb-x86_64.conf',
+ '/etc/ld.so.conf.d/bind-export-x86_64.conf',
+ '/etc/ld.so.conf.d/custom1.conf',
+ '/etc/ld.so.conf.d/custom2.conf']}
+
+INCLUDED_CONFIGS_GLOB_DICT_3 = {'/etc/ld.so.conf.d/*.conf': ['/etc/ld.so.conf.d/dyninst-x86_64.conf',
+ '/etc/ld.so.conf.d/custom1.conf',
+ '/etc/ld.so.conf.d/mariadb-x86_64.conf',
+ '/etc/ld.so.conf.d/bind-export-x86_64.conf',
+ '/etc/ld.so.conf.d/custom2.conf'],
+ '/custom/path/*.conf': ['/custom/path/custom1.conf',
+ '/custom/path/custom2.conf']}
+
+
+@pytest.mark.parametrize(('included_configs_glob_dict', 'other_lines', 'custom_configs', 'used_variables'),
+ [
+ (INCLUDED_CONFIGS_GLOB_DICT_1, [], [], []),
+ (INCLUDED_CONFIGS_GLOB_DICT_1, ['/custom/path.lib'], [], []),
+ (INCLUDED_CONFIGS_GLOB_DICT_1, [], [], ['LD_LIBRARY_PATH']),
+ (INCLUDED_CONFIGS_GLOB_DICT_2, [], ['/etc/ld.so.conf.d/custom1.conf',
+ '/etc/ld.so.conf.d/custom2.conf'], []),
+ (INCLUDED_CONFIGS_GLOB_DICT_3, ['/custom/path.lib'], ['/etc/ld.so.conf.d/custom1.conf',
+ '/etc/ld.so.conf.d/custom2.conf',
+ '/custom/path/custom1.conf',
+ '/custom/path/custom2.conf'], []),
+ ])
+def test_scan_dynamic_linker_configuration(monkeypatch, included_configs_glob_dict, other_lines,
+ custom_configs, used_variables):
+ monkeypatch.setattr(scandynamiclinkerconfiguration, '_parse_main_config',
+ lambda: (included_configs_glob_dict.keys(), other_lines))
+ monkeypatch.setattr(glob, 'glob', lambda glob: included_configs_glob_dict[glob])
+ monkeypatch.setattr(scandynamiclinkerconfiguration, '_is_included_config_custom',
+ lambda config: config in custom_configs)
+ monkeypatch.setattr(api, 'produce', produce_mocked())
+
+ for var in used_variables:
+ monkeypatch.setenv(var, '/some/path')
+
+ scandynamiclinkerconfiguration.scan_dynamic_linker_configuration()
+
+ produce_expected = custom_configs or other_lines or used_variables
+ if not produce_expected:
+ assert not api.produce.called
+ return
+
+ assert api.produce.called == 1
+
+ configuration = api.produce.model_instances[0]
+
+ all_configs = []
+ for configs in included_configs_glob_dict.values():
+ all_configs += configs
+
+ assert len(all_configs) == len(configuration.included_configs)
+ for config in configuration.included_configs:
+ if config.path in custom_configs:
+ assert config.modified
+
+ assert configuration.main_config.path == scandynamiclinkerconfiguration.LD_SO_CONF_MAIN
+ if other_lines:
+ assert configuration.main_config.modified
+ assert configuration.main_config.modified_lines == other_lines
+
+ if used_variables:
+ assert configuration.used_variables == used_variables
+
+
+@pytest.mark.parametrize(('config_contents', 'included_config_paths', 'other_lines'),
+ [
+ (['include ld.so.conf.d/*.conf\n'],
+ ['/etc/ld.so.conf.d/*.conf'], []),
+ (['include ld.so.conf.d/*.conf\n', '\n', '/custom/path.lib\n', '#comment'],
+ ['/etc/ld.so.conf.d/*.conf'], ['/custom/path.lib']),
+ (['include ld.so.conf.d/*.conf\n', 'include /custom/path.conf\n'],
+ ['/etc/ld.so.conf.d/*.conf', '/custom/path.conf'], []),
+ (['include ld.so.conf.d/*.conf\n', '#include /custom/path.conf\n', '#/custom/path.conf\n'],
+ ['/etc/ld.so.conf.d/*.conf'], []),
+ ([' \n'],
+ [], [])
+ ])
+def test_parse_main_config(monkeypatch, config_contents, included_config_paths, other_lines):
+ def mocked_read_file(path):
+ assert path == scandynamiclinkerconfiguration.LD_SO_CONF_MAIN
+ return config_contents
+
+ monkeypatch.setattr(scandynamiclinkerconfiguration, '_read_file', mocked_read_file)
+
+ _included_config_paths, _other_lines = scandynamiclinkerconfiguration._parse_main_config()
+
+ assert _included_config_paths == included_config_paths
+ assert _other_lines == other_lines
+
+
+@pytest.mark.parametrize(('config_path', 'run_result', 'is_modified'),
+ [
+ ('/etc/ld.so.conf.d/dyninst-x86_64.conf',
+ '.......T. c /etc/ld.so.conf.d/dyninst-x86_64.conf', False),
+ ('/etc/ld.so.conf.d/dyninst-x86_64.conf',
+ 'S.5....T. c /etc/ld.so.conf.d/dyninst-x86_64.conf', True),
+ ('/etc/ld.so.conf.d/kernel-3.10.0-1160.el7.x86_64.conf',
+ '', False)
+ ])
+def test_is_modified(monkeypatch, config_path, run_result, is_modified):
+ def mocked_run(command, checked):
+ assert config_path in command
+ assert checked is False
+ exit_code = 1 if run_result else 0
+ return {'stdout': run_result, 'exit_code': exit_code}
+
+ monkeypatch.setattr(scandynamiclinkerconfiguration, 'run', mocked_run)
+
+ _is_modified = scandynamiclinkerconfiguration._is_modified(config_path)
+ assert _is_modified == is_modified
+
+
+@pytest.mark.parametrize(('config_path',
+ 'config_contents', 'run_result',
+ 'is_installed_rh_signed_package', 'is_modified', 'has_effective_lines'),
+ [
+ ('/etc/ld.so.conf.d/dyninst-x86_64.conf',
+ ['/usr/lib64/dyninst\n'], 'dyninst',
+ True, False, True), # RH signed package without modification - Not custom
+ ('/etc/ld.so.conf.d/dyninst-x86_64.conf',
+ ['/usr/lib64/my_dyninst\n'], 'dyninst',
+ True, True, True), # Was modified by user - Custom
+ ('/etc/custom/custom.conf',
+ ['/usr/lib64/custom'], 'custom',
+ False, None, True), # Third-party package - Custom
+ ('/etc/custom/custom.conf',
+ ['#/usr/lib64/custom\n'], 'custom',
+ False, None, False), # Third-party package without effective lines - Not custom
+ ('/etc/ld.so.conf.d/somelib.conf',
+ ['/usr/lib64/somelib\n'], CalledProcessError,
+ None, None, True), # User created configuration file - Custom
+ ('/etc/ld.so.conf.d/somelib.conf',
+ ['#/usr/lib64/somelib\n'], CalledProcessError,
+ None, None, False) # User created configuration file without effective lines - Not custom
+ ])
+def test_is_included_config_custom(monkeypatch, config_path, config_contents, run_result,
+ is_installed_rh_signed_package, is_modified, has_effective_lines):
+ def mocked_run(command):
+ assert config_path in command
+ if run_result and not isinstance(run_result, str):
+ raise CalledProcessError("message", command, "result")
+ return {'stdout': run_result}
+
+ def mocked_has_package(model, package_name):
+ assert model is InstalledRedHatSignedRPM
+ assert package_name == run_result
+ return is_installed_rh_signed_package
+
+ def mocked_read_file(path):
+ assert path == config_path
+ return config_contents
+
+ monkeypatch.setattr(scandynamiclinkerconfiguration, 'run', mocked_run)
+ monkeypatch.setattr(scandynamiclinkerconfiguration, 'has_package', mocked_has_package)
+ monkeypatch.setattr(scandynamiclinkerconfiguration, '_read_file', mocked_read_file)
+ monkeypatch.setattr(scandynamiclinkerconfiguration, '_is_modified', lambda *_: is_modified)
+ monkeypatch.setattr(os.path, 'isfile', lambda _: True)
+
+ result = scandynamiclinkerconfiguration._is_included_config_custom(config_path)
+ is_custom = not isinstance(run_result, str) or not is_installed_rh_signed_package or is_modified
+ is_custom &= has_effective_lines
+ assert result == is_custom
diff --git a/repos/system_upgrade/common/models/dynamiclinker.py b/repos/system_upgrade/common/models/dynamiclinker.py
new file mode 100644
index 00000000..4dc107f4
--- /dev/null
+++ b/repos/system_upgrade/common/models/dynamiclinker.py
@@ -0,0 +1,41 @@
+from leapp.models import fields, Model
+from leapp.topics import SystemFactsTopic
+
+
+class LDConfigFile(Model):
+ """
+ Represents a config file related to dynamic linker configuration
+ """
+ topic = SystemFactsTopic
+
+ path = fields.String()
+ """ Absolute path to the configuration file """
+
+ modified = fields.Boolean()
+ """ If True the file is considered custom and will trigger a report """
+
+
+class MainLDConfigFile(LDConfigFile):
+ """
+ Represents the main configuration file of the dynamic linker /etc/ld.so.conf
+ """
+ topic = SystemFactsTopic
+
+ modified_lines = fields.List(fields.String(), default=[])
+ """ Lines that are considered custom, generally those that are not includes of other configs """
+
+
+class DynamicLinkerConfiguration(Model):
+ """
+ Facts about configuration of dynamic linker
+ """
+ topic = SystemFactsTopic
+
+ main_config = fields.Model(MainLDConfigFile)
+ """ The main configuration file of dynamic linker (/etc/ld.so.conf) """
+
+ included_configs = fields.List(fields.Model(LDConfigFile))
+ """ All the configs that are included by the main configuration file """
+
+ used_variables = fields.List(fields.String(), default=[])
+ """ Environment variables that are currently used to modify dynamic linker configuration """
--
2.41.0
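
For illustration only (not part of the patch above): the modification check in _is_modified() relies on the standard rpm verification flags, where a '5' in the first column marks a checksum mismatch. A minimal standalone sketch of the same idea, assuming the rpm binary is available; the function name is made up and, like the actor, it only inspects the first line of the rpm -Vf output:

    import subprocess

    def ld_conf_file_modified(path):
        # 'rpm -Vf' verifies the package owning the file; exit code 0 means
        # no differences were found at all.
        proc = subprocess.Popen(['rpm', '-Vf', path],
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, _ = proc.communicate()
        if proc.returncode == 0:
            return False
        # The first column holds the verification flags; '5' means the file
        # checksum differs from the one recorded in the RPM database.
        flags = out.decode('utf-8', 'replace').split(' ', 1)[0]
        return '5' in flags

    if __name__ == '__main__':
        print(ld_conf_file_modified('/etc/ld.so.conf.d/dyninst-x86_64.conf'))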

View File

@ -1,60 +0,0 @@
From c81731f04c479fd9212458054d9ba21daa8e4780 Mon Sep 17 00:00:00 2001
From: Jakub Jelen <jjelen@redhat.com>
Date: Mon, 26 Jun 2023 16:29:45 +0200
Subject: [PATCH 39/41] Fix several typos and Makefile help
- CheckSystemdServicesTasks: Fix typo in the phase name in comment
- utils: fix typo in comment
- Makefile: Fix example in help to actually work
Squashed by Petr Stodulka <pstodulk@redhat.com>
Signed-off-by: Jakub Jelen <jjelen@redhat.com>
---
Makefile | 2 +-
.../common/actors/systemd/checksystemdservicetasks/actor.py | 2 +-
repos/system_upgrade/common/libraries/utils.py | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/Makefile b/Makefile
index e3c40e01..b504a854 100644
--- a/Makefile
+++ b/Makefile
@@ -155,7 +155,7 @@ help:
@echo " PR=7 SUFFIX='my_additional_suffix' make <target>"
@echo " MR=6 COPR_CONFIG='path/to/the/config/copr/file' make <target>"
@echo " ACTOR=<actor> TEST_LIBS=y make test"
- @echo " BUILD_CONTAINER=el7 make build_container"
+ @echo " BUILD_CONTAINER=rhel7 make build_container"
@echo " TEST_CONTAINER=f34 make test_container"
@echo " CONTAINER_TOOL=docker TEST_CONTAINER=rhel7 make test_container_no_lint"
@echo ""
diff --git a/repos/system_upgrade/common/actors/systemd/checksystemdservicetasks/actor.py b/repos/system_upgrade/common/actors/systemd/checksystemdservicetasks/actor.py
index 547a13df..272ebc1f 100644
--- a/repos/system_upgrade/common/actors/systemd/checksystemdservicetasks/actor.py
+++ b/repos/system_upgrade/common/actors/systemd/checksystemdservicetasks/actor.py
@@ -14,7 +14,7 @@ class CheckSystemdServicesTasks(Actor):
- enabled and disabled. This actor inhibits upgrade in such cases.
Note: We expect that SystemdServicesTasks could be produced even after the
- TargetTransactionChecksPhase (e.g. during the ApplicationPhase). The
+ TargetTransactionChecksPhase (e.g. during the ApplicationsPhase). The
purpose of this actor is to report collisions in case we can already detect
them. In case of conflicts caused by messages produced later we just log
the collisions and the services will end up disabled.
diff --git a/repos/system_upgrade/common/libraries/utils.py b/repos/system_upgrade/common/libraries/utils.py
index cd3ad1a6..38b9bb1a 100644
--- a/repos/system_upgrade/common/libraries/utils.py
+++ b/repos/system_upgrade/common/libraries/utils.py
@@ -14,7 +14,7 @@ def parse_config(cfg=None, strict=True):
"""
Applies a workaround to parse a config file using py3 AND py2
- ConfigParser has a new def to read strings/iles in Py3, making
+ ConfigParser has a new def to read strings/files in Py3, making
the old ones (Py2) obsoletes, these function was created to use the
ConfigParser on Py2 and Py3
--
2.41.0

File diff suppressed because it is too large Load Diff

View File

@ -1,184 +0,0 @@
From 930758e269111190f1e5689e75d552d896adab67 Mon Sep 17 00:00:00 2001
From: Jakub Jelen <jjelen@redhat.com>
Date: Tue, 4 Jul 2023 18:22:49 +0200
Subject: [PATCH 41/41] Check no new unexpected keys were installed during the
upgrade
Petr Stodulka:
* some refactoring
* added added error logging
* replace the hard error stop by post upgrade report
We do not want to interrupt the upgrade process after the
DNF transaction execution
Signed-off-by: Jakub Jelen <jjelen@redhat.com>
---
.../common/actors/gpgpubkeycheck/actor.py | 23 ++++
.../libraries/gpgpubkeycheck.py | 124 ++++++++++++++++++
2 files changed, 147 insertions(+)
create mode 100644 repos/system_upgrade/common/actors/gpgpubkeycheck/actor.py
create mode 100644 repos/system_upgrade/common/actors/gpgpubkeycheck/libraries/gpgpubkeycheck.py
diff --git a/repos/system_upgrade/common/actors/gpgpubkeycheck/actor.py b/repos/system_upgrade/common/actors/gpgpubkeycheck/actor.py
new file mode 100644
index 00000000..3d11de38
--- /dev/null
+++ b/repos/system_upgrade/common/actors/gpgpubkeycheck/actor.py
@@ -0,0 +1,23 @@
+from leapp.actors import Actor
+from leapp.libraries.actor import gpgpubkeycheck
+from leapp.models import TrustedGpgKeys
+from leapp.reporting import Report
+from leapp.tags import ApplicationsPhaseTag, IPUWorkflowTag
+
+
+class GpgPubkeyCheck(Actor):
+ """
+ Checks no unexpected GPG keys were installed during the upgrade.
+
+ This should be mostly a sanity check and this should not happen
+ unless something went very wrong, regardless of whether the gpgcheck was
+ used (default) or not (with --no-gpgcheck option).
+ """
+
+ name = 'gpg_pubkey_check'
+ consumes = (TrustedGpgKeys,)
+ produces = (Report,)
+ tags = (IPUWorkflowTag, ApplicationsPhaseTag,)
+
+ def process(self):
+ gpgpubkeycheck.process()
diff --git a/repos/system_upgrade/common/actors/gpgpubkeycheck/libraries/gpgpubkeycheck.py b/repos/system_upgrade/common/actors/gpgpubkeycheck/libraries/gpgpubkeycheck.py
new file mode 100644
index 00000000..387c6cef
--- /dev/null
+++ b/repos/system_upgrade/common/actors/gpgpubkeycheck/libraries/gpgpubkeycheck.py
@@ -0,0 +1,124 @@
+from leapp import reporting
+from leapp.libraries.common.gpg import is_nogpgcheck_set
+from leapp.libraries.common.rpms import get_installed_rpms
+from leapp.libraries.stdlib import api
+from leapp.models import TrustedGpgKeys
+
+FMT_LIST_SEPARATOR = '\n - '
+
+
+def _get_installed_fps_tuple():
+ """
+ Return list of tuples (fingerprint, packager).
+ """
+ installed_fps_tuple = []
+ rpms = get_installed_rpms()
+ for rpm in rpms:
+ rpm = rpm.strip()
+ if not rpm:
+ continue
+ try:
+ # NOTE: pgpsig is (none) for 'gpg-pubkey' entries
+ name, version, dummy_release, dummy_epoch, packager, dummy_arch, dummy_pgpsig = rpm.split('|')
+ except ValueError as e:
+ # NOTE: it's a seatbelt; if it happens, seeing a long list of errors
+ # will let us know early that we really missed something
+ api.current_logger().error('Cannot perform the check of installed GPG keys after the upgrade.')
+ api.current_logger().error('Cannot parse rpm output: {}'.format(e))
+ continue
+ if name != 'gpg-pubkey':
+ continue
+ installed_fps_tuple.append((version, packager))
+ return installed_fps_tuple
+
+
+def _report_cannot_check_keys(installed_fps):
+ # NOTE: in this case, it's expected there will be always some GPG keys present
+ summary = (
+ 'Cannot perform the check of GPG keys installed in the RPM DB'
+ ' due to missing facts (TrustedGpgKeys) supposed to be generated'
+ ' in the start of the upgrade process on the original system.'
+ ' Unexpectedly installed GPG keys could be e.g. a mark of'
+ ' a malicious attempt to hijack the upgrade process.'
+ ' The list of all GPG keys in RPM DB:{sep}{key_list}'
+ .format(
+ sep=FMT_LIST_SEPARATOR,
+ key_list=FMT_LIST_SEPARATOR.join(installed_fps)
+ )
+ )
+ hint = (
+ 'Verify the installed GPG keys are expected.'
+ )
+ groups = [
+ reporting.Groups.POST,
+ reporting.Groups.REPOSITORY,
+ reporting.Groups.SECURITY
+ ]
+ reporting.create_report([
+ reporting.Title('Cannot perform the check of installed GPG keys after the upgrade.'),
+ reporting.Summary(summary),
+ reporting.Severity(reporting.Severity.HIGH),
+ reporting.Groups(groups),
+ reporting.Remediation(hint=hint),
+ ])
+
+
+def _report_unexpected_keys(unexpected_fps):
+ summary = (
+ 'The system contains unexpected GPG keys after upgrade.'
+ ' This can be caused e.g. by a manual intervention'
+ ' or by malicious attempt to hijack the upgrade process.'
+ ' The unexpected keys are the following:'
+ ' {sep}{key_list}'
+ .format(
+ sep=FMT_LIST_SEPARATOR,
+ key_list=FMT_LIST_SEPARATOR.join(unexpected_fps)
+ )
+ )
+ hint = (
+ 'Verify the installed GPG keys are expected.'
+ )
+ groups = [
+ reporting.Groups.POST,
+ reporting.Groups.REPOSITORY,
+ reporting.Groups.SECURITY
+ ]
+ reporting.create_report([
+ reporting.Title('Detected unexpected GPG keys after the upgrade.'),
+ reporting.Summary(summary),
+ reporting.Severity(reporting.Severity.HIGH),
+ reporting.Groups(groups),
+ reporting.Remediation(hint=hint),
+ ])
+
+
+def process():
+ """
+ Verify the system does not have any unexpected gpg keys installed
+
+ If the --no-gpgcheck option is used, this is skipped as we can not
+ guarantee that what was installed came from a trusted source
+ """
+
+ if is_nogpgcheck_set():
+ api.current_logger().warning('The --nogpgcheck option is used: Skipping the check of installed GPG keys.')
+ return
+
+ installed_fps_tuple = _get_installed_fps_tuple()
+
+ try:
+ trusted_gpg_keys = next(api.consume(TrustedGpgKeys))
+ except StopIteration:
+ # unexpected (bug) situation; keeping as seatbelt for the security aspect
+ installed_fps = ['{fp}: {packager}'.format(fp=fp, packager=packager) for fp, packager in installed_fps_tuple]
+ _report_cannot_check_keys(installed_fps)
+ return
+
+ trusted_fps = [key.fingerprint for key in trusted_gpg_keys.items]
+ unexpected_fps = []
+ for fp, packager in installed_fps_tuple:
+ if fp not in trusted_fps:
+ unexpected_fps.append('{fp}: {packager}'.format(fp=fp, packager=packager))
+
+ if unexpected_fps:
+ _report_unexpected_keys(unexpected_fps)
--
2.41.0
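
For illustration only (not part of the patch above): the fingerprints the actor compares come from the gpg-pubkey entries in the RPM database, where the package VERSION holds the key ID and PACKAGER holds the key owner. A minimal standalone sketch that lists the same data with a plain rpm query; the function name is made up:

    import subprocess

    def installed_gpg_pubkeys():
        cmd = ['rpm', '-q', 'gpg-pubkey', '--qf', '%{VERSION}|%{PACKAGER}\n']
        try:
            out = subprocess.check_output(cmd)
        except subprocess.CalledProcessError:
            # rpm exits non-zero when no gpg-pubkey entry is installed
            return []
        keys = []
        for line in out.decode('utf-8', 'replace').splitlines():
            if '|' in line:
                version, packager = line.split('|', 1)
                keys.append((version, packager))
        return keys

    if __name__ == '__main__':
        for fp, packager in installed_gpg_pubkeys():
            print('{}: {}'.format(fp, packager))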

View File

@ -1,38 +0,0 @@
From 28a5cc0d49451592f5184c25d155f5e7be81f17e Mon Sep 17 00:00:00 2001
From: Evgeni Golov <evgeni@golov.de>
Date: Mon, 20 Nov 2023 14:35:03 +0100
Subject: [PATCH 42/60] BZ#2250254 - force removal of tomcat during the upgrade
We need pki-servlet-engine, which we depend on but which tomcat conflicts
with.
---
.../el7toel8/actors/satellite_upgrade_facts/actor.py | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/repos/system_upgrade/el7toel8/actors/satellite_upgrade_facts/actor.py b/repos/system_upgrade/el7toel8/actors/satellite_upgrade_facts/actor.py
index 01e63465..3cd9d9da 100644
--- a/repos/system_upgrade/el7toel8/actors/satellite_upgrade_facts/actor.py
+++ b/repos/system_upgrade/el7toel8/actors/satellite_upgrade_facts/actor.py
@@ -42,6 +42,7 @@ class SatelliteUpgradeFacts(Actor):
postgresql_contrib = has_package(InstalledRPM, 'rh-postgresql12-postgresql-contrib')
postgresql_evr = has_package(InstalledRPM, 'rh-postgresql12-postgresql-evr')
+ # SCL-related packages
to_remove = ['tfm-runtime', 'tfm-pulpcore-runtime', 'rh-redis5-runtime', 'rh-ruby27-runtime',
'rh-python38-runtime']
to_install = ['rubygem-foreman_maintain']
@@ -54,6 +55,11 @@ class SatelliteUpgradeFacts(Actor):
# enable modules that are needed for Pulpcore
modules_to_enable.append(Module(name='python38', stream='3.8'))
to_install.append('katello')
+ # Force removal of tomcat
+ # PES data indicates tomcat.el7 can be upgraded to tomcat.el8 since EL 8.8,
+ # but we need pki-servlet-engine from the module instead which will be pulled in via normal
+ # package dependencies
+ to_remove.extend(['tomcat', 'tomcat-lib'])
if has_package(InstalledRPM, 'rh-redis5-redis'):
modules_to_enable.append(Module(name='redis', stream='5'))
--
2.43.0

View File

@ -1,64 +0,0 @@
From de4d8cb60e05ffe7d2ce90282b1884a7d345461c Mon Sep 17 00:00:00 2001
From: Inessa Vasilevskaya <ivasilev@redhat.com>
Date: Tue, 14 Nov 2023 11:57:58 +0100
Subject: [PATCH 43/60] Add 79to88 and 79to89 aws upgrade paths
Thanks to the detailed downstream review by mmoran
I have realised that upstream upgrade paths have to
be revised and updated as well.
Also change the identifiers to the dot notation in the upgrade
paths.
---
.packit.yaml | 22 ++++++++++++++++++++--
1 file changed, 20 insertions(+), 2 deletions(-)
diff --git a/.packit.yaml b/.packit.yaml
index 2e606a40..a307cc75 100644
--- a/.packit.yaml
+++ b/.packit.yaml
@@ -123,7 +123,7 @@ jobs:
targets:
epel-7-x86_64:
distros: [RHEL-7.9-rhui]
- identifier: sanity-7to8-aws-e2e
+ identifier: sanity-7.9to8.6-aws-e2e
# NOTE(ivasilev) Unfortunately to use yaml templates we need to rewrite the whole tf_extra_params dict
# to use plan_filter (can't just specify one section test.tmt.plan_filter, need to specify environments.* as well)
tf_extra_params:
@@ -145,6 +145,24 @@ jobs:
RHUI: "aws"
LEAPPDATA_BRANCH: "upstream"
+- &sanity-79to88-aws
+ <<: *sanity-79to86-aws
+ identifier: sanity-7.9to8.8-aws-e2e
+ env:
+ SOURCE_RELEASE: "7.9"
+ TARGET_RELEASE: "8.8"
+ RHUI: "aws"
+ LEAPPDATA_BRANCH: "upstream"
+
+- &sanity-79to89-aws
+ <<: *sanity-79to86-aws
+ identifier: sanity-7.9to8.9-aws-e2e
+ env:
+ SOURCE_RELEASE: "7.9"
+ TARGET_RELEASE: "8.9"
+ RHUI: "aws"
+ LEAPPDATA_BRANCH: "upstream"
+
# On-demand minimal beaker tests
- &beaker-minimal-79to86
<<: *sanity-79to86
@@ -487,7 +505,7 @@ jobs:
targets:
epel-8-x86_64:
distros: [RHEL-8.6-rhui]
- identifier: sanity-8to9-aws-e2e
+ identifier: sanity-8.6to9.0-aws-e2e
tf_extra_params:
test:
tmt:
--
2.43.0

View File

@ -1,169 +0,0 @@
From 2340bd5322d3d083c33be065858148e1b32f3d7b Mon Sep 17 00:00:00 2001
From: Inessa Vasilevskaya <ivasilev@redhat.com>
Date: Mon, 20 Nov 2023 13:03:48 +0100
Subject: [PATCH 44/60] Add 7.9to8.10 and 8.10to9.4 upgrade paths
---
.packit.yaml | 118 ++++++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 116 insertions(+), 2 deletions(-)
diff --git a/.packit.yaml b/.packit.yaml
index a307cc75..acbd2b86 100644
--- a/.packit.yaml
+++ b/.packit.yaml
@@ -163,6 +163,15 @@ jobs:
RHUI: "aws"
LEAPPDATA_BRANCH: "upstream"
+- &sanity-79to810-aws
+ <<: *sanity-79to86-aws
+ identifier: sanity-7.9to8.10-aws-e2e
+ env:
+ SOURCE_RELEASE: "7.9"
+ TARGET_RELEASE: "8.10"
+ RHUI: "aws"
+ LEAPPDATA_BRANCH: "upstream"
+
# On-demand minimal beaker tests
- &beaker-minimal-79to86
<<: *sanity-79to86
@@ -274,6 +283,40 @@ jobs:
TARGET_RELEASE: "8.9"
LEAPPDATA_BRANCH: "upstream"
+- &sanity-79to810
+ <<: *sanity-79to86
+ identifier: sanity-7.9to8.10
+ env:
+ SOURCE_RELEASE: "7.9"
+ TARGET_RELEASE: "8.10"
+ LEAPPDATA_BRANCH: "upstream"
+
+# On-demand minimal beaker tests
+- &beaker-minimal-79to810
+ <<: *beaker-minimal-79to86
+ labels:
+ - beaker-minimal
+ - beaker-minimal-7.9to8.10
+ - 7.9to8.10
+ identifier: sanity-7.9to8.10-beaker-minimal
+ env:
+ SOURCE_RELEASE: "7.9"
+ TARGET_RELEASE: "8.10"
+ LEAPPDATA_BRANCH: "upstream"
+
+# On-demand kernel-rt tests
+- &kernel-rt-79to810
+ <<: *kernel-rt-79to88
+ labels:
+ - kernel-rt
+ - kernel-rt-7.9to8.10
+ - 7.9to8.10
+ identifier: sanity-7.9to8.10-kernel-rt
+ env:
+ SOURCE_RELEASE: "7.9"
+ TARGET_RELEASE: "8.10"
+ LEAPPDATA_BRANCH: "upstream"
+
- &sanity-86to90
<<: *sanity-79to86
targets:
@@ -445,7 +488,6 @@ jobs:
env:
SOURCE_RELEASE: "8.9"
TARGET_RELEASE: "9.3"
- RHSM_REPOS: "rhel-8-for-x86_64-appstream-beta-rpms,rhel-8-for-x86_64-baseos-beta-rpms"
LEAPPDATA_BRANCH: "upstream"
LEAPP_DEVEL_TARGET_RELEASE: "9.3"
@@ -475,7 +517,6 @@ jobs:
env:
SOURCE_RELEASE: "8.9"
TARGET_RELEASE: "9.3"
- RHSM_REPOS: "rhel-8-for-x86_64-appstream-beta-rpms,rhel-8-for-x86_64-baseos-beta-rpms"
LEAPPDATA_BRANCH: "upstream"
LEAPP_DEVEL_TARGET_RELEASE: "9.3"
@@ -500,6 +541,79 @@ jobs:
tags:
BusinessUnit: sst_upgrades@leapp_upstream_test
+- &sanity-810to94
+ <<: *sanity-88to92
+ targets:
+ epel-8-x86_64:
+ distros: [RHEL-8.10.0-Nightly]
+ identifier: sanity-8.10to9.4
+ tf_extra_params:
+ test:
+ tmt:
+ plan_filter: 'tag:sanity & tag:8to9'
+ environments:
+ - tmt:
+ context:
+ distro: "rhel-8.10"
+ settings:
+ provisioning:
+ tags:
+ BusinessUnit: sst_upgrades@leapp_upstream_test
+ env:
+ SOURCE_RELEASE: "8.10"
+ TARGET_RELEASE: "9.4"
+ RHSM_REPOS: "rhel-8-for-x86_64-appstream-beta-rpms,rhel-8-for-x86_64-baseos-beta-rpms"
+ LEAPPDATA_BRANCH: "upstream"
+
+# On-demand minimal beaker tests
+- &beaker-minimal-810to94
+ <<: *beaker-minimal-88to92
+ labels:
+ - beaker-minimal
+ - beaker-minimal-8.10to9.4
+ - 8.10to9.4
+ targets:
+ epel-8-x86_64:
+ distros: [RHEL-8.10.0-Nightly]
+ identifier: sanity-8.10to9.4-beaker-minimal
+ tf_extra_params:
+ test:
+ tmt:
+ plan_filter: 'tag:partitioning & tag:8to9'
+ environments:
+ - tmt:
+ context:
+ distro: "rhel-8.10"
+ settings:
+ provisioning:
+ tags:
+ BusinessUnit: sst_upgrades@leapp_upstream_test
+ env:
+ SOURCE_RELEASE: "8.10"
+ TARGET_RELEASE: "9.4"
+ LEAPPDATA_BRANCH: "upstream"
+
+# On-demand kernel-rt tests
+- &kernel-rt-810to94
+ <<: *beaker-minimal-810to94
+ labels:
+ - kernel-rt
+ - kernel-rt-8.10to9.4
+ - 8.10to9.4
+ identifier: sanity-8.10to9.4-kernel-rt
+ tf_extra_params:
+ test:
+ tmt:
+ plan_filter: 'tag:kernel-rt & tag:8to9'
+ environments:
+ - tmt:
+ context:
+ distro: "rhel-8.10"
+ settings:
+ provisioning:
+ tags:
+ BusinessUnit: sst_upgrades@leapp_upstream_test
+
- &sanity-86to90-aws
<<: *sanity-79to86-aws
targets:
--
2.43.0

View File

@ -1,66 +0,0 @@
From d9af1f2a19ec3352a4eff596bcb13e7ad073d763 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Andrea=20Waltlov=C3=A1?= <awaltlov@redhat.com>
Date: Sun, 26 Nov 2023 19:31:44 +0100
Subject: [PATCH 45/60] Utilize get_target_major_version in no enabled target
repositories report (#1151)
* Utilize get_target_major_version in no enabled target repositories report
so the shortened URL in the report points to the right documentation
based on the target OS major version.
* Add expected docs URLs to comments for easier grep
Signed-off-by: Andrea Waltlova <awaltlov@redhat.com>
---
.../libraries/userspacegen.py | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py b/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py
index d605ba0e..c1d34f18 100644
--- a/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py
+++ b/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py
@@ -678,8 +678,10 @@ def _get_rhsm_available_repoids(context):
).format(target_major_version)),
reporting.ExternalLink(
- # TODO: How to handle different documentation links for each version?
- url='https://red.ht/preparing-for-upgrade-to-rhel8',
+ # https://red.ht/preparing-for-upgrade-to-rhel8
+ # https://red.ht/preparing-for-upgrade-to-rhel9
+ # https://red.ht/preparing-for-upgrade-to-rhel10
+ url='https://red.ht/preparing-for-upgrade-to-rhel{}'.format(target_major_version),
title='Preparing for the upgrade')
])
raise StopActorExecution()
@@ -812,6 +814,7 @@ def gather_target_repositories(context, indata):
missing_custom_repoids.append(custom_repo.repoid)
api.current_logger().debug("Gathered target repositories: {}".format(', '.join(target_repoids)))
if not target_repoids:
+ target_major_version = get_target_major_version()
reporting.create_report([
reporting.Title('There are no enabled target repositories'),
reporting.Summary(
@@ -833,8 +836,10 @@ def gather_target_repositories(context, indata):
' Finally, verify that the "/etc/leapp/files/repomap.json" file is up-to-date.'
).format(version=api.current_actor().configuration.version.target)),
reporting.ExternalLink(
- # TODO: How to handle different documentation links for each version?
- url='https://red.ht/preparing-for-upgrade-to-rhel8',
+ # https://red.ht/preparing-for-upgrade-to-rhel8
+ # https://red.ht/preparing-for-upgrade-to-rhel9
+ # https://red.ht/preparing-for-upgrade-to-rhel10
+ url='https://red.ht/preparing-for-upgrade-to-rhel{}'.format(target_major_version),
title='Preparing for the upgrade'),
reporting.RelatedResource("file", "/etc/leapp/files/repomap.json"),
reporting.RelatedResource("file", "/etc/yum.repos.d/")
@@ -854,7 +859,7 @@ def gather_target_repositories(context, indata):
reporting.Groups([reporting.Groups.INHIBITOR]),
reporting.Severity(reporting.Severity.HIGH),
reporting.ExternalLink(
- # TODO: How to handle different documentation links for each version?
+ # NOTE: Article covers both RHEL 7 to RHEL 8 and RHEL 8 to RHEL 9
url='https://access.redhat.com/articles/4977891',
title='Customizing your Red Hat Enterprise Linux in-place upgrade'),
reporting.Remediation(hint=(
--
2.43.0

View File

@ -1,174 +0,0 @@
From 677e5e63829aecf023b01747848e5e1b712350f8 Mon Sep 17 00:00:00 2001
From: Inessa Vasilevskaya <ivasilev@redhat.com>
Date: Tue, 12 Dec 2023 11:27:19 +0100
Subject: [PATCH 46/60] Workaround tft issue with listing disabled plans
Until TFT-2298 is resolved, a mandatory enabled:true test
filtering won't hurt, as we do have some tests that are disabled
for particular distros.
OAMG-10177
---
.packit.yaml | 34 +++++++++++++++++-----------------
1 file changed, 17 insertions(+), 17 deletions(-)
diff --git a/.packit.yaml b/.packit.yaml
index acbd2b86..1d0b6433 100644
--- a/.packit.yaml
+++ b/.packit.yaml
@@ -101,7 +101,7 @@ jobs:
tf_extra_params:
test:
tmt:
- plan_filter: 'tag:sanity'
+ plan_filter: 'tag:sanity & enabled:true'
environments:
- tmt:
context:
@@ -129,7 +129,7 @@ jobs:
tf_extra_params:
test:
tmt:
- plan_filter: 'tag:e2e'
+ plan_filter: 'tag:e2e & enabled:true'
environments:
- tmt:
context:
@@ -184,7 +184,7 @@ jobs:
tf_extra_params:
test:
tmt:
- plan_filter: 'tag:partitioning & tag:7to8'
+ plan_filter: 'tag:partitioning & tag:7to8 & enabled:true'
environments:
- tmt:
context:
@@ -205,7 +205,7 @@ jobs:
tf_extra_params:
test:
tmt:
- plan_filter: 'tag:kernel-rt & tag:7to8'
+ plan_filter: 'tag:kernel-rt & tag:7to8 & enabled:true'
environments:
- tmt:
context:
@@ -326,7 +326,7 @@ jobs:
tf_extra_params:
test:
tmt:
- plan_filter: 'tag:sanity & tag:8to9'
+ plan_filter: 'tag:sanity & tag:8to9 & enabled:true'
environments:
- tmt:
context:
@@ -355,7 +355,7 @@ jobs:
tf_extra_params:
test:
tmt:
- plan_filter: 'tag:partitioning & tag:8to9'
+ plan_filter: 'tag:partitioning & tag:8to9 & enabled:true'
environments:
- tmt:
context:
@@ -381,7 +381,7 @@ jobs:
tf_extra_params:
test:
tmt:
- plan_filter: 'tag:kernel-rt & tag:8to9'
+ plan_filter: 'tag:kernel-rt & tag:8to9 & enabled:true'
environments:
- tmt:
context:
@@ -400,7 +400,7 @@ jobs:
tf_extra_params:
test:
tmt:
- plan_filter: 'tag:sanity & tag:8to9'
+ plan_filter: 'tag:sanity & tag:8to9 & enabled:true'
environments:
- tmt:
context:
@@ -430,7 +430,7 @@ jobs:
tf_extra_params:
test:
tmt:
- plan_filter: 'tag:partitioning & tag:8to9'
+ plan_filter: 'tag:partitioning & tag:8to9 & enabled:true'
environments:
- tmt:
context:
@@ -457,7 +457,7 @@ jobs:
tf_extra_params:
test:
tmt:
- plan_filter: 'tag:kernel-rt & tag:8to9'
+ plan_filter: 'tag:kernel-rt & tag:8to9 & enabled:true'
environments:
- tmt:
context:
@@ -476,7 +476,7 @@ jobs:
tf_extra_params:
test:
tmt:
- plan_filter: 'tag:sanity & tag:8to9'
+ plan_filter: 'tag:sanity & tag:8to9 & enabled:true'
environments:
- tmt:
context:
@@ -505,7 +505,7 @@ jobs:
tf_extra_params:
test:
tmt:
- plan_filter: 'tag:partitioning & tag:8to9'
+ plan_filter: 'tag:partitioning & tag:8to9 & enabled:true'
environments:
- tmt:
context:
@@ -531,7 +531,7 @@ jobs:
tf_extra_params:
test:
tmt:
- plan_filter: 'tag:kernel-rt & tag:8to9'
+ plan_filter: 'tag:kernel-rt & tag:8to9 & enabled:true'
environments:
- tmt:
context:
@@ -550,7 +550,7 @@ jobs:
tf_extra_params:
test:
tmt:
- plan_filter: 'tag:sanity & tag:8to9'
+ plan_filter: 'tag:sanity & tag:8to9 & enabled:true'
environments:
- tmt:
context:
@@ -579,7 +579,7 @@ jobs:
tf_extra_params:
test:
tmt:
- plan_filter: 'tag:partitioning & tag:8to9'
+ plan_filter: 'tag:partitioning & tag:8to9 & enabled:true'
environments:
- tmt:
context:
@@ -604,7 +604,7 @@ jobs:
tf_extra_params:
test:
tmt:
- plan_filter: 'tag:kernel-rt & tag:8to9'
+ plan_filter: 'tag:kernel-rt & tag:8to9 & enabled:true'
environments:
- tmt:
context:
@@ -623,7 +623,7 @@ jobs:
tf_extra_params:
test:
tmt:
- plan_filter: 'tag:e2e'
+ plan_filter: 'tag:e2e & enabled:true'
environments:
- tmt:
context:
--
2.43.0

View File

@ -1,418 +0,0 @@
From 1778818611efc961eda1e44894132689543cfcbe Mon Sep 17 00:00:00 2001
From: Evgeni Golov <evgeni@golov.de>
Date: Mon, 11 Dec 2023 10:45:22 +0100
Subject: [PATCH 47/60] Distribution agnostic check of signed packages [1/2]
The original detection covered only RHEL systems, requiring rpms
to be signed by Red Hat (hardcoded). Also the model
InstalledRedHatSignedRPM didn't leave much room for the detection
of other distros.
The new solution checks RPM signatures based on the detected
distribution ID (currently: rhel, centos). Fingerprints of GPG keys
and the packager string are stored under
repos/system_upgrade/common/files/distro/<distro>/gpg-signatures.json
where <distro> is the distribution id.
RedHatSignedRPMScanner is deprecated, replaced by the DistributionSignedRPM
message. Until its removal, the original RedHatSignedRPMScanner will contain
just packages signed by RH.
The update of all other actors to consume DistributionSignedRPM is
covered in the next commit for easier reading.
jira: OAMG-9824
Co-authored-by: Petr Stodulka <pstodulk@redhat.com>
---
.../distributionsignedrpmscanner/actor.py | 94 +++++++++++++++++++
.../test_distributionsignedrpmscanner.py} | 73 ++++++++++++++
.../actors/redhatsignedrpmscanner/actor.py | 75 ---------------
.../files/distro/centos/gpg-signatures.json | 8 ++
.../files/distro/rhel/gpg-signatures.json | 10 ++
.../common/models/installedrpm.py | 6 ++
6 files changed, 191 insertions(+), 75 deletions(-)
create mode 100644 repos/system_upgrade/common/actors/distributionsignedrpmscanner/actor.py
rename repos/system_upgrade/common/actors/{redhatsignedrpmscanner/tests/test_redhatsignedrpmscanner.py => distributionsignedrpmscanner/tests/test_distributionsignedrpmscanner.py} (68%)
delete mode 100644 repos/system_upgrade/common/actors/redhatsignedrpmscanner/actor.py
create mode 100644 repos/system_upgrade/common/files/distro/centos/gpg-signatures.json
create mode 100644 repos/system_upgrade/common/files/distro/rhel/gpg-signatures.json
diff --git a/repos/system_upgrade/common/actors/distributionsignedrpmscanner/actor.py b/repos/system_upgrade/common/actors/distributionsignedrpmscanner/actor.py
new file mode 100644
index 00000000..5772cb25
--- /dev/null
+++ b/repos/system_upgrade/common/actors/distributionsignedrpmscanner/actor.py
@@ -0,0 +1,94 @@
+import json
+import os
+
+from leapp.actors import Actor
+from leapp.exceptions import StopActorExecutionError
+from leapp.libraries.common import rhui
+from leapp.libraries.common.config import get_env
+from leapp.libraries.stdlib import api
+from leapp.models import DistributionSignedRPM, InstalledRedHatSignedRPM, InstalledRPM, InstalledUnsignedRPM
+from leapp.tags import FactsPhaseTag, IPUWorkflowTag
+from leapp.utils.deprecation import suppress_deprecation
+
+
+@suppress_deprecation(InstalledRedHatSignedRPM)
+class DistributionSignedRpmScanner(Actor):
+ """Provide data about installed RPM Packages signed by the distribution.
+
+ After filtering the list of installed RPM packages by signature, a message
+ with relevant data will be produced.
+ """
+
+ name = 'distribution_signed_rpm_scanner'
+ consumes = (InstalledRPM,)
+ produces = (DistributionSignedRPM, InstalledRedHatSignedRPM, InstalledUnsignedRPM,)
+ tags = (IPUWorkflowTag, FactsPhaseTag)
+
+ def process(self):
+ # TODO(pstodulk): refactor this function
+ # - move it to the private library
+ # - split it into several functions (so the main function stays small)
+ # FIXME(pstodulk): gpg-pubkey is handled wrong; it's not a real package
+ # and create FP report about unsigned RPMs. Keeping the fix for later.
+ distribution = self.configuration.os_release.release_id
+ distributions_path = api.get_common_folder_path('distro')
+
+ distribution_config = os.path.join(distributions_path, distribution, 'gpg-signatures.json')
+ if os.path.exists(distribution_config):
+ with open(distribution_config) as distro_config_file:
+ distro_config_json = json.load(distro_config_file)
+ distribution_keys = distro_config_json.get('keys', [])
+ distribution_packager = distro_config_json.get('packager', 'not-available')
+ else:
+ raise StopActorExecutionError(
+ 'Cannot find distribution signature configuration.',
+ details={'Problem': 'Distribution {} was not found in {}.'.format(distribution, distributions_path)})
+
+ signed_pkgs = DistributionSignedRPM()
+ rh_signed_pkgs = InstalledRedHatSignedRPM()
+ unsigned_pkgs = InstalledUnsignedRPM()
+
+ all_signed = get_env('LEAPP_DEVEL_RPMS_ALL_SIGNED', '0') == '1'
+
+ def has_distributionsig(pkg):
+ return any(key in pkg.pgpsig for key in distribution_keys)
+
+ def is_gpg_pubkey(pkg):
+ """
+ Check if gpg-pubkey pkg exists or LEAPP_DEVEL_RPMS_ALL_SIGNED=1
+
+ gpg-pubkey is not signed as it would require another package
+ to verify its signature
+ """
+ return ( # pylint: disable-msg=consider-using-ternary
+ pkg.name == 'gpg-pubkey'
+ and pkg.packager.startswith(distribution_packager)
+ or all_signed
+ )
+
+ def has_katello_prefix(pkg):
+ """Whitelist the katello package."""
+ return pkg.name.startswith('katello-ca-consumer')
+
+ whitelisted_cloud_pkgs = rhui.get_all_known_rhui_pkgs_for_current_upg()
+
+ for rpm_pkgs in self.consume(InstalledRPM):
+ for pkg in rpm_pkgs.items:
+ if any(
+ [
+ has_distributionsig(pkg),
+ is_gpg_pubkey(pkg),
+ has_katello_prefix(pkg),
+ pkg.name in whitelisted_cloud_pkgs,
+ ]
+ ):
+ signed_pkgs.items.append(pkg)
+ if distribution == 'rhel':
+ rh_signed_pkgs.items.append(pkg)
+ continue
+
+ unsigned_pkgs.items.append(pkg)
+
+ self.produce(signed_pkgs)
+ self.produce(rh_signed_pkgs)
+ self.produce(unsigned_pkgs)
diff --git a/repos/system_upgrade/common/actors/redhatsignedrpmscanner/tests/test_redhatsignedrpmscanner.py b/repos/system_upgrade/common/actors/distributionsignedrpmscanner/tests/test_distributionsignedrpmscanner.py
similarity index 68%
rename from repos/system_upgrade/common/actors/redhatsignedrpmscanner/tests/test_redhatsignedrpmscanner.py
rename to repos/system_upgrade/common/actors/distributionsignedrpmscanner/tests/test_distributionsignedrpmscanner.py
index 6652142e..a15ae173 100644
--- a/repos/system_upgrade/common/actors/redhatsignedrpmscanner/tests/test_redhatsignedrpmscanner.py
+++ b/repos/system_upgrade/common/actors/distributionsignedrpmscanner/tests/test_distributionsignedrpmscanner.py
@@ -3,12 +3,14 @@ import mock
from leapp.libraries.common import rpms
from leapp.libraries.common.config import mock_configs
from leapp.models import (
+ DistributionSignedRPM,
fields,
InstalledRedHatSignedRPM,
InstalledRPM,
InstalledUnsignedRPM,
IPUConfig,
Model,
+ OSRelease,
RPM
)
@@ -30,6 +32,7 @@ class MockModel(Model):
def test_no_installed_rpms(current_actor_context):
current_actor_context.run(config_model=mock_configs.CONFIG)
+ assert current_actor_context.consume(DistributionSignedRPM)
assert current_actor_context.consume(InstalledRedHatSignedRPM)
assert current_actor_context.consume(InstalledUnsignedRPM)
@@ -57,12 +60,74 @@ def test_actor_execution_with_signed_unsigned_data(current_actor_context):
current_actor_context.feed(InstalledRPM(items=installed_rpm))
current_actor_context.run(config_model=mock_configs.CONFIG)
+ assert current_actor_context.consume(DistributionSignedRPM)
+ assert len(current_actor_context.consume(DistributionSignedRPM)[0].items) == 5
assert current_actor_context.consume(InstalledRedHatSignedRPM)
assert len(current_actor_context.consume(InstalledRedHatSignedRPM)[0].items) == 5
assert current_actor_context.consume(InstalledUnsignedRPM)
assert len(current_actor_context.consume(InstalledUnsignedRPM)[0].items) == 4
+def test_actor_execution_with_signed_unsigned_data_centos(current_actor_context):
+ CENTOS_PACKAGER = 'CentOS BuildSystem <http://bugs.centos.org>'
+ config = mock_configs.CONFIG
+
+ config.os_release = OSRelease(
+ release_id='centos',
+ name='CentOS Linux',
+ pretty_name='CentOS Linux 7 (Core)',
+ version='7 (Core)',
+ version_id='7'
+ )
+
+ installed_rpm = [
+ RPM(name='sample01', version='0.1', release='1.sm01', epoch='1', packager=CENTOS_PACKAGER, arch='noarch',
+ pgpsig='RSA/SHA256, Mon 01 Jan 1970 00:00:00 AM -03, Key ID 24c6a8a7f4a80eb5'),
+ RPM(name='sample02', version='0.1', release='1.sm01', epoch='1', packager=CENTOS_PACKAGER, arch='noarch',
+ pgpsig='SOME_OTHER_SIG_X'),
+ RPM(name='sample03', version='0.1', release='1.sm01', epoch='1', packager=CENTOS_PACKAGER, arch='noarch',
+ pgpsig='RSA/SHA256, Mon 01 Jan 1970 00:00:00 AM -03, Key ID 05b555b38483c65d'),
+ RPM(name='sample04', version='0.1', release='1.sm01', epoch='1', packager=CENTOS_PACKAGER, arch='noarch',
+ pgpsig='SOME_OTHER_SIG_X'),
+ RPM(name='sample05', version='0.1', release='1.sm01', epoch='1', packager=CENTOS_PACKAGER, arch='noarch',
+ pgpsig='RSA/SHA256, Mon 01 Jan 1970 00:00:00 AM -03, Key ID 4eb84e71f2ee9d55'),
+ RPM(name='sample06', version='0.1', release='1.sm01', epoch='1', packager=CENTOS_PACKAGER, arch='noarch',
+ pgpsig='SOME_OTHER_SIG_X'),
+ RPM(name='sample07', version='0.1', release='1.sm01', epoch='1', packager=CENTOS_PACKAGER, arch='noarch',
+ pgpsig='RSA/SHA256, Mon 01 Jan 1970 00:00:00 AM -03, Key ID fd372689897da07a'),
+ RPM(name='sample08', version='0.1', release='1.sm01', epoch='1', packager=CENTOS_PACKAGER, arch='noarch',
+ pgpsig='SOME_OTHER_SIG_X'),
+ RPM(name='sample09', version='0.1', release='1.sm01', epoch='1', packager=CENTOS_PACKAGER, arch='noarch',
+ pgpsig='RSA/SHA256, Mon 01 Jan 1970 00:00:00 AM -03, Key ID 45689c882fa658e0')]
+
+ current_actor_context.feed(InstalledRPM(items=installed_rpm))
+ current_actor_context.run(config_model=config)
+ assert current_actor_context.consume(DistributionSignedRPM)
+ assert len(current_actor_context.consume(DistributionSignedRPM)[0].items) == 3
+ assert current_actor_context.consume(InstalledRedHatSignedRPM)
+ assert not current_actor_context.consume(InstalledRedHatSignedRPM)[0].items
+ assert current_actor_context.consume(InstalledUnsignedRPM)
+ assert len(current_actor_context.consume(InstalledUnsignedRPM)[0].items) == 6
+
+
+def test_actor_execution_with_unknown_distro(current_actor_context):
+ config = mock_configs.CONFIG
+
+ config.os_release = OSRelease(
+ release_id='myos',
+ name='MyOS Linux',
+ pretty_name='MyOS Linux 7 (Core)',
+ version='7 (Core)',
+ version_id='7'
+ )
+
+ current_actor_context.feed(InstalledRPM(items=[]))
+ current_actor_context.run(config_model=config)
+ assert not current_actor_context.consume(DistributionSignedRPM)
+ assert not current_actor_context.consume(InstalledRedHatSignedRPM)
+ assert not current_actor_context.consume(InstalledUnsignedRPM)
+
+
def test_all_rpms_signed(current_actor_context):
installed_rpm = [
RPM(name='sample01', version='0.1', release='1.sm01', epoch='1', packager=RH_PACKAGER, arch='noarch',
@@ -77,6 +142,8 @@ def test_all_rpms_signed(current_actor_context):
current_actor_context.feed(InstalledRPM(items=installed_rpm))
current_actor_context.run(config_model=mock_configs.CONFIG_ALL_SIGNED)
+ assert current_actor_context.consume(DistributionSignedRPM)
+ assert len(current_actor_context.consume(DistributionSignedRPM)[0].items) == 4
assert current_actor_context.consume(InstalledRedHatSignedRPM)
assert len(current_actor_context.consume(InstalledRedHatSignedRPM)[0].items) == 4
assert not current_actor_context.consume(InstalledUnsignedRPM)[0].items
@@ -95,6 +162,8 @@ def test_katello_pkg_goes_to_signed(current_actor_context):
current_actor_context.feed(InstalledRPM(items=installed_rpm))
current_actor_context.run(config_model=mock_configs.CONFIG_ALL_SIGNED)
+ assert current_actor_context.consume(DistributionSignedRPM)
+ assert len(current_actor_context.consume(DistributionSignedRPM)[0].items) == 1
assert current_actor_context.consume(InstalledRedHatSignedRPM)
assert len(current_actor_context.consume(InstalledRedHatSignedRPM)[0].items) == 1
assert not current_actor_context.consume(InstalledUnsignedRPM)[0].items
@@ -110,6 +179,8 @@ def test_gpg_pubkey_pkg(current_actor_context):
current_actor_context.feed(InstalledRPM(items=installed_rpm))
current_actor_context.run(config_model=mock_configs.CONFIG)
+ assert current_actor_context.consume(DistributionSignedRPM)
+ assert len(current_actor_context.consume(DistributionSignedRPM)[0].items) == 1
assert current_actor_context.consume(InstalledRedHatSignedRPM)
assert len(current_actor_context.consume(InstalledRedHatSignedRPM)[0].items) == 1
assert current_actor_context.consume(InstalledUnsignedRPM)
@@ -165,6 +236,8 @@ def test_has_package(current_actor_context):
current_actor_context.feed(InstalledRPM(items=installed_rpm))
current_actor_context.run(config_model=mock_configs.CONFIG)
+ assert rpms.has_package(DistributionSignedRPM, 'sample01', context=current_actor_context)
+ assert not rpms.has_package(DistributionSignedRPM, 'nosuchpackage', context=current_actor_context)
assert rpms.has_package(InstalledRedHatSignedRPM, 'sample01', context=current_actor_context)
assert not rpms.has_package(InstalledRedHatSignedRPM, 'nosuchpackage', context=current_actor_context)
assert rpms.has_package(InstalledUnsignedRPM, 'sample02', context=current_actor_context)
diff --git a/repos/system_upgrade/common/actors/redhatsignedrpmscanner/actor.py b/repos/system_upgrade/common/actors/redhatsignedrpmscanner/actor.py
deleted file mode 100644
index 41f9d343..00000000
--- a/repos/system_upgrade/common/actors/redhatsignedrpmscanner/actor.py
+++ /dev/null
@@ -1,75 +0,0 @@
-from leapp.actors import Actor
-from leapp.libraries.common import rhui
-from leapp.models import InstalledRedHatSignedRPM, InstalledRPM, InstalledUnsignedRPM
-from leapp.tags import FactsPhaseTag, IPUWorkflowTag
-
-
-class RedHatSignedRpmScanner(Actor):
- """Provide data about installed RPM Packages signed by Red Hat.
-
- After filtering the list of installed RPM packages by signature, a message
- with relevant data will be produced.
- """
-
- name = 'red_hat_signed_rpm_scanner'
- consumes = (InstalledRPM,)
- produces = (InstalledRedHatSignedRPM, InstalledUnsignedRPM,)
- tags = (IPUWorkflowTag, FactsPhaseTag)
-
- def process(self):
- RH_SIGS = ['199e2f91fd431d51',
- '5326810137017186',
- '938a80caf21541eb',
- 'fd372689897da07a',
- '45689c882fa658e0']
-
- signed_pkgs = InstalledRedHatSignedRPM()
- unsigned_pkgs = InstalledUnsignedRPM()
-
- env_vars = self.configuration.leapp_env_vars
- # if we start upgrade with LEAPP_DEVEL_RPMS_ALL_SIGNED=1, we consider
- # all packages to be signed
- all_signed = [
- env
- for env in env_vars
- if env.name == 'LEAPP_DEVEL_RPMS_ALL_SIGNED' and env.value == '1'
- ]
-
- def has_rhsig(pkg):
- return any(key in pkg.pgpsig for key in RH_SIGS)
-
- def is_gpg_pubkey(pkg):
- """Check if gpg-pubkey pkg exists or LEAPP_DEVEL_RPMS_ALL_SIGNED=1
-
- gpg-pubkey is not signed as it would require another package
- to verify its signature
- """
- return ( # pylint: disable-msg=consider-using-ternary
- pkg.name == 'gpg-pubkey'
- and pkg.packager.startswith('Red Hat, Inc.')
- or all_signed
- )
-
- def has_katello_prefix(pkg):
- """Whitelist the katello package."""
- return pkg.name.startswith('katello-ca-consumer')
-
- whitelisted_cloud_pkgs = rhui.get_all_known_rhui_pkgs_for_current_upg()
-
- for rpm_pkgs in self.consume(InstalledRPM):
- for pkg in rpm_pkgs.items:
- if any(
- [
- has_rhsig(pkg),
- is_gpg_pubkey(pkg),
- has_katello_prefix(pkg),
- pkg.name in whitelisted_cloud_pkgs,
- ]
- ):
- signed_pkgs.items.append(pkg)
- continue
-
- unsigned_pkgs.items.append(pkg)
-
- self.produce(signed_pkgs)
- self.produce(unsigned_pkgs)
diff --git a/repos/system_upgrade/common/files/distro/centos/gpg-signatures.json b/repos/system_upgrade/common/files/distro/centos/gpg-signatures.json
new file mode 100644
index 00000000..30e329ee
--- /dev/null
+++ b/repos/system_upgrade/common/files/distro/centos/gpg-signatures.json
@@ -0,0 +1,8 @@
+{
+ "keys": [
+ "24c6a8a7f4a80eb5",
+ "05b555b38483c65d",
+ "4eb84e71f2ee9d55"
+ ],
+ "packager": "CentOS"
+}
diff --git a/repos/system_upgrade/common/files/distro/rhel/gpg-signatures.json b/repos/system_upgrade/common/files/distro/rhel/gpg-signatures.json
new file mode 100644
index 00000000..eccf0106
--- /dev/null
+++ b/repos/system_upgrade/common/files/distro/rhel/gpg-signatures.json
@@ -0,0 +1,10 @@
+{
+ "keys": [
+ "199e2f91fd431d51",
+ "5326810137017186",
+ "938a80caf21541eb",
+ "fd372689897da07a",
+ "45689c882fa658e0"
+ ],
+ "packager": "Red Hat, Inc."
+}
diff --git a/repos/system_upgrade/common/models/installedrpm.py b/repos/system_upgrade/common/models/installedrpm.py
index 5a632b03..cc9fd508 100644
--- a/repos/system_upgrade/common/models/installedrpm.py
+++ b/repos/system_upgrade/common/models/installedrpm.py
@@ -1,5 +1,6 @@
from leapp.models import fields, Model
from leapp.topics import SystemInfoTopic
+from leapp.utils.deprecation import deprecated
class RPM(Model):
@@ -21,6 +22,11 @@ class InstalledRPM(Model):
items = fields.List(fields.Model(RPM), default=[])
+class DistributionSignedRPM(InstalledRPM):
+ pass
+
+
+@deprecated(since='2024-01-31', message='Replaced by DistributionSignedRPM')
class InstalledRedHatSignedRPM(InstalledRPM):
pass
--
2.43.0

File diff suppressed because it is too large

View File

@ -1,25 +0,0 @@
From 0776bc34b9b3dc323c98ddb446a5444ea7176970 Mon Sep 17 00:00:00 2001
From: Petr Stodulka <pstodulk@redhat.com>
Date: Wed, 6 Dec 2023 18:25:44 +0100
Subject: [PATCH 49/60] Pylint: fix superfluous-parens in the code
---
.../libraries/checkinstalleddebugkernels.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/repos/system_upgrade/el7toel8/actors/kernel/checkinstalleddebugkernels/checkinstalleddebugkernels/libraries/checkinstalleddebugkernels.py b/repos/system_upgrade/el7toel8/actors/kernel/checkinstalleddebugkernels/checkinstalleddebugkernels/libraries/checkinstalleddebugkernels.py
index 889196ea..15b7b79e 100644
--- a/repos/system_upgrade/el7toel8/actors/kernel/checkinstalleddebugkernels/checkinstalleddebugkernels/libraries/checkinstalleddebugkernels.py
+++ b/repos/system_upgrade/el7toel8/actors/kernel/checkinstalleddebugkernels/checkinstalleddebugkernels/libraries/checkinstalleddebugkernels.py
@@ -26,7 +26,7 @@ def process():
title = 'Multiple debug kernels installed'
summary = ('DNF cannot produce a valid upgrade transaction when'
' multiple kernel-debug packages are installed.')
- hint = ('Remove all but one kernel-debug packages before running Leapp again.')
+ hint = 'Remove all but one kernel-debug packages before running Leapp again.'
all_but_latest_kernel_debug = pkgs[:-1]
packages = ['{n}-{v}-{r}'.format(n=pkg.name, v=pkg.version, r=pkg.release)
for pkg in all_but_latest_kernel_debug]
--
2.43.0

View File

@ -1,246 +0,0 @@
From 4968bec73947fb83aeb2d89fe7e919fba2ca2776 Mon Sep 17 00:00:00 2001
From: Petr Stodulka <pstodulk@redhat.com>
Date: Mon, 11 Dec 2023 18:00:40 +0100
Subject: [PATCH 50/60] distributionsignedrpmscanner: refactoring + gpg-pubkey
fix
We have decided to refactor the code in the actor (something planned
for a long time) to make it more readable.
It also fixes an old issue where gpg-pubkey was detected as an unsigned
RPM. gpg-pubkey is not a real package; it is just an entry in the RPM DB
representing an imported RPM GPG key. Originally we checked whether
the packager was the vendor/authority of the installed distribution and,
if not, marked such a package (key) as unsigned.
This led to false positive reports, repeated multiple times, that we do
not know what will happen with gpg-pubkey packages and that they might
be removed or cause other problems in the upgrade transaction - which
has never been true.
So the packager check is dropped and gpg-pubkey is now always marked
as signed. It is an open question whether we should instead always ignore
this package and not put it into any signed/unsigned list; handling it
this way for now.
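The resulting classification rule can be condensed into the following sketch (illustrative;
the package field names follow the RPM model used by the actor, while 'classify' itself is
just a stand-in for the library functions introduced below):

    from collections import namedtuple

    Pkg = namedtuple('Pkg', ['name', 'pgpsig'])

    def classify(pkg, distro_keys, rhui_allowlist, all_signed=False):
        distro_signed = any(key in pkg.pgpsig for key in distro_keys)
        exceptional = (
            pkg.name == 'gpg-pubkey'                       # RPM DB entry for an imported key
            or pkg.name.startswith('katello-ca-consumer')  # Satellite-generated package
            or pkg.name in rhui_allowlist                  # known cloud/RHUI packages
        )
        return 'signed' if (all_signed or distro_signed or exceptional) else 'unsigned'

    # gpg-pubkey is now always treated as signed, regardless of its packager
    print(classify(Pkg('gpg-pubkey', ''), [], set()))  # signed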
---
.../distributionsignedrpmscanner/actor.py | 94 ++++---------------
.../libraries/distributionsignedrpmscanner.py | 72 ++++++++++++++
.../test_distributionsignedrpmscanner.py | 6 +-
3 files changed, 92 insertions(+), 80 deletions(-)
create mode 100644 repos/system_upgrade/common/actors/distributionsignedrpmscanner/libraries/distributionsignedrpmscanner.py
diff --git a/repos/system_upgrade/common/actors/distributionsignedrpmscanner/actor.py b/repos/system_upgrade/common/actors/distributionsignedrpmscanner/actor.py
index 5772cb25..56016513 100644
--- a/repos/system_upgrade/common/actors/distributionsignedrpmscanner/actor.py
+++ b/repos/system_upgrade/common/actors/distributionsignedrpmscanner/actor.py
@@ -1,11 +1,5 @@
-import json
-import os
-
from leapp.actors import Actor
-from leapp.exceptions import StopActorExecutionError
-from leapp.libraries.common import rhui
-from leapp.libraries.common.config import get_env
-from leapp.libraries.stdlib import api
+from leapp.libraries.actor import distributionsignedrpmscanner
from leapp.models import DistributionSignedRPM, InstalledRedHatSignedRPM, InstalledRPM, InstalledUnsignedRPM
from leapp.tags import FactsPhaseTag, IPUWorkflowTag
from leapp.utils.deprecation import suppress_deprecation
@@ -13,10 +7,22 @@ from leapp.utils.deprecation import suppress_deprecation
@suppress_deprecation(InstalledRedHatSignedRPM)
class DistributionSignedRpmScanner(Actor):
- """Provide data about installed RPM Packages signed by the distribution.
+ """
+ Provide data about distribution signed & unsigned RPM packages.
+
+ For various checks and actions done during the upgrade it's important to
+ know what packages are signed by GPG keys of the installed linux system
+ distribution. RPMs that are not provided in the distribution could have
+ different versions, different behaviour, and also it could be completely
+ different application just with the same RPM name.
+
+ For those reasons, various actors rely on the DistributionSignedRPM message
+ to check whether a particular package is installed, to be sure it provides
+ valid data. Fingerprints of distribution GPG keys are stored under
+ common/files/distro/<distro>/gpg-signatures.json
+ where <distro> is the distribution ID of the installed system (e.g. centos, rhel).
- After filtering the list of installed RPM packages by signature, a message
- with relevant data will be produced.
+ If the file for the installed distribution is not found, end with an error.
"""
name = 'distribution_signed_rpm_scanner'
@@ -25,70 +31,4 @@ class DistributionSignedRpmScanner(Actor):
tags = (IPUWorkflowTag, FactsPhaseTag)
def process(self):
- # TODO(pstodulk): refactor this function
- # - move it to the private library
- # - split it into several functions (so the main function stays small)
- # FIXME(pstodulk): gpg-pubkey is handled wrong; it's not a real package
- # and create FP report about unsigned RPMs. Keeping the fix for later.
- distribution = self.configuration.os_release.release_id
- distributions_path = api.get_common_folder_path('distro')
-
- distribution_config = os.path.join(distributions_path, distribution, 'gpg-signatures.json')
- if os.path.exists(distribution_config):
- with open(distribution_config) as distro_config_file:
- distro_config_json = json.load(distro_config_file)
- distribution_keys = distro_config_json.get('keys', [])
- distribution_packager = distro_config_json.get('packager', 'not-available')
- else:
- raise StopActorExecutionError(
- 'Cannot find distribution signature configuration.',
- details={'Problem': 'Distribution {} was not found in {}.'.format(distribution, distributions_path)})
-
- signed_pkgs = DistributionSignedRPM()
- rh_signed_pkgs = InstalledRedHatSignedRPM()
- unsigned_pkgs = InstalledUnsignedRPM()
-
- all_signed = get_env('LEAPP_DEVEL_RPMS_ALL_SIGNED', '0') == '1'
-
- def has_distributionsig(pkg):
- return any(key in pkg.pgpsig for key in distribution_keys)
-
- def is_gpg_pubkey(pkg):
- """
- Check if gpg-pubkey pkg exists or LEAPP_DEVEL_RPMS_ALL_SIGNED=1
-
- gpg-pubkey is not signed as it would require another package
- to verify its signature
- """
- return ( # pylint: disable-msg=consider-using-ternary
- pkg.name == 'gpg-pubkey'
- and pkg.packager.startswith(distribution_packager)
- or all_signed
- )
-
- def has_katello_prefix(pkg):
- """Whitelist the katello package."""
- return pkg.name.startswith('katello-ca-consumer')
-
- whitelisted_cloud_pkgs = rhui.get_all_known_rhui_pkgs_for_current_upg()
-
- for rpm_pkgs in self.consume(InstalledRPM):
- for pkg in rpm_pkgs.items:
- if any(
- [
- has_distributionsig(pkg),
- is_gpg_pubkey(pkg),
- has_katello_prefix(pkg),
- pkg.name in whitelisted_cloud_pkgs,
- ]
- ):
- signed_pkgs.items.append(pkg)
- if distribution == 'rhel':
- rh_signed_pkgs.items.append(pkg)
- continue
-
- unsigned_pkgs.items.append(pkg)
-
- self.produce(signed_pkgs)
- self.produce(rh_signed_pkgs)
- self.produce(unsigned_pkgs)
+ distributionsignedrpmscanner.process()
diff --git a/repos/system_upgrade/common/actors/distributionsignedrpmscanner/libraries/distributionsignedrpmscanner.py b/repos/system_upgrade/common/actors/distributionsignedrpmscanner/libraries/distributionsignedrpmscanner.py
new file mode 100644
index 00000000..0bc71bfa
--- /dev/null
+++ b/repos/system_upgrade/common/actors/distributionsignedrpmscanner/libraries/distributionsignedrpmscanner.py
@@ -0,0 +1,72 @@
+import json
+import os
+
+from leapp.exceptions import StopActorExecutionError
+from leapp.libraries.common import rhui
+from leapp.libraries.common.config import get_env
+from leapp.libraries.stdlib import api
+from leapp.models import DistributionSignedRPM, InstalledRedHatSignedRPM, InstalledRPM, InstalledUnsignedRPM
+
+
+def get_distribution_data(distribution):
+ distributions_path = api.get_common_folder_path('distro')
+
+ distribution_config = os.path.join(distributions_path, distribution, 'gpg-signatures.json')
+ if os.path.exists(distribution_config):
+ with open(distribution_config) as distro_config_file:
+ distro_config_json = json.load(distro_config_file)
+ distro_keys = distro_config_json.get('keys', [])
+ # distro_packager = distro_config_json.get('packager', 'not-available')
+ else:
+ raise StopActorExecutionError(
+ 'Cannot find distribution signature configuration.',
+ details={'Problem': 'Distribution {} was not found in {}.'.format(distribution, distributions_path)})
+ return distro_keys
+
+
+def is_distro_signed(pkg, distro_keys):
+ return any(key in pkg.pgpsig for key in distro_keys)
+
+
+def is_exceptional(pkg, allowlist):
+ """
+ Some packages should be marked always as signed
+
+ tl;dr; gpg-pubkey, katello packages, and rhui packages
+
+ gpg-pubkey is not real RPM. It's just an entry representing
+ gpg key imported inside the RPM DB. For that same reason, it cannot be
+ signed. Note that it cannot affect the upgrade transaction, so ignore
+ who vendored the key. Total majority of all machines have imported third
+ party gpg keys.
+
+ Katello packages have various names and are created on a Satellite server.
+
+ The allowlist is now used for any other package names that should be marked
+ always as signed for the particular upgrade.
+ """
+ return pkg.name == 'gpg-pubkey' or pkg.name.startswith('katello-ca-consumer') or pkg.name in allowlist
+
+
+def process():
+ distribution = api.current_actor().configuration.os_release.release_id
+ distro_keys = get_distribution_data(distribution)
+ all_signed = get_env('LEAPP_DEVEL_RPMS_ALL_SIGNED', '0') == '1'
+ rhui_pkgs = rhui.get_all_known_rhui_pkgs_for_current_upg()
+
+ signed_pkgs = DistributionSignedRPM()
+ rh_signed_pkgs = InstalledRedHatSignedRPM()
+ unsigned_pkgs = InstalledUnsignedRPM()
+
+ for rpm_pkgs in api.consume(InstalledRPM):
+ for pkg in rpm_pkgs.items:
+ if all_signed or is_distro_signed(pkg, distro_keys) or is_exceptional(pkg, rhui_pkgs):
+ signed_pkgs.items.append(pkg)
+ if distribution == 'rhel':
+ rh_signed_pkgs.items.append(pkg)
+ continue
+ unsigned_pkgs.items.append(pkg)
+
+ api.produce(signed_pkgs)
+ api.produce(rh_signed_pkgs)
+ api.produce(unsigned_pkgs)
diff --git a/repos/system_upgrade/common/actors/distributionsignedrpmscanner/tests/test_distributionsignedrpmscanner.py b/repos/system_upgrade/common/actors/distributionsignedrpmscanner/tests/test_distributionsignedrpmscanner.py
index a15ae173..f138bcb2 100644
--- a/repos/system_upgrade/common/actors/distributionsignedrpmscanner/tests/test_distributionsignedrpmscanner.py
+++ b/repos/system_upgrade/common/actors/distributionsignedrpmscanner/tests/test_distributionsignedrpmscanner.py
@@ -180,11 +180,11 @@ def test_gpg_pubkey_pkg(current_actor_context):
current_actor_context.feed(InstalledRPM(items=installed_rpm))
current_actor_context.run(config_model=mock_configs.CONFIG)
assert current_actor_context.consume(DistributionSignedRPM)
- assert len(current_actor_context.consume(DistributionSignedRPM)[0].items) == 1
+ assert len(current_actor_context.consume(DistributionSignedRPM)[0].items) == 2
assert current_actor_context.consume(InstalledRedHatSignedRPM)
- assert len(current_actor_context.consume(InstalledRedHatSignedRPM)[0].items) == 1
+ assert len(current_actor_context.consume(InstalledRedHatSignedRPM)[0].items) == 2
assert current_actor_context.consume(InstalledUnsignedRPM)
- assert len(current_actor_context.consume(InstalledUnsignedRPM)[0].items) == 1
+ assert not current_actor_context.consume(InstalledUnsignedRPM)[0].items
def test_create_lookup():
--
2.43.0

View File

@ -1,255 +0,0 @@
From 118133a734987e4d2c01ab9775525b0152adc780 Mon Sep 17 00:00:00 2001
From: Inessa Vasilevskaya <ivasilev@redhat.com>
Date: Thu, 14 Dec 2023 14:03:25 +0100
Subject: [PATCH 51/60] Introduce two functions for listing which packages are
leapp packages.
* The rpms.get_leapp_packages() library function returns the leapp packages which ship the
leapp components.
* The rpms.get_leapp_dep_packages() library function returns the leapp deps metapackages which
list the requirements of the associated leapp packages.
These functions can be used as a leapp-installation-packages-getter.
Refactoring of other actors to use them will be done later.
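A usage sketch (illustrative; the returned lists correspond to the unit tests added below and
assume an el8 to el9 upgrade):

    from leapp.libraries.common.rpms import get_leapp_dep_packages, get_leapp_packages

    get_leapp_packages()
    # ['leapp', 'leapp-upgrade-el8toel9', 'python3-leapp', 'snactor']

    get_leapp_packages(major_version=['7', '8'], component='framework')
    # ['leapp', 'python2-leapp', 'python3-leapp']

    get_leapp_dep_packages()
    # ['leapp-deps', 'leapp-upgrade-el8toel9-deps']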
---
repos/system_upgrade/common/libraries/rpms.py | 139 ++++++++++++++++++
.../common/libraries/tests/test_rpms.py | 67 ++++++++-
2 files changed, 205 insertions(+), 1 deletion(-)
diff --git a/repos/system_upgrade/common/libraries/rpms.py b/repos/system_upgrade/common/libraries/rpms.py
index 6a5e8637..2890240f 100644
--- a/repos/system_upgrade/common/libraries/rpms.py
+++ b/repos/system_upgrade/common/libraries/rpms.py
@@ -1,7 +1,47 @@
from leapp.libraries import stdlib
+from leapp.libraries.common.config.version import get_source_major_version
from leapp.models import InstalledRPM
+class LeappComponents(object):
+ """
+ Supported component values to be used with get_packages_function:
+ * FRAMEWORK - the core of the leapp project: the leapp executable and
+ associated leapp libraries
+ * REPOSITORY - the leapp-repository project
+ * COCKPIT - the cockpit-leapp project
+ * TOOLS - miscellaneous tooling like snactor
+ """
+ FRAMEWORK = 'framework'
+ REPOSITORY = 'repository'
+ COCKPIT = 'cockpit'
+ TOOLS = 'tools'
+
+
+_LEAPP_PACKAGES_MAP = {
+ LeappComponents.FRAMEWORK: {'7': {'pkgs': ['leapp', 'python2-leapp'],
+ 'deps': ['leapp-deps']},
+ '8': {'pkgs': ['leapp', 'python3-leapp'],
+ 'deps': ['leapp-deps']}
+ },
+ LeappComponents.REPOSITORY: {'7': {'pkgs': ['leapp-upgrade-el7toel8'],
+ 'deps': ['leapp-upgrade-el7toel8-deps']},
+ '8': {'pkgs': ['leapp-upgrade-el8toel9'],
+ 'deps': ['leapp-upgrade-el8toel9-deps']}
+ },
+ LeappComponents.COCKPIT: {'7': {'pkgs': ['cockpit-leapp']},
+ '8': {'pkgs': ['cockpit-leapp']}
+ },
+ LeappComponents.TOOLS: {'7': {'pkgs': ['snactor']},
+ '8': {'pkgs': ['snactor']}
+ }
+ }
+
+GET_LEAPP_PACKAGES_DEFAULT_COMPONENTS = frozenset((LeappComponents.FRAMEWORK,
+ LeappComponents.REPOSITORY,
+ LeappComponents.TOOLS))
+
+
def get_installed_rpms():
rpm_cmd = [
'/bin/rpm',
@@ -114,3 +154,102 @@ def check_file_modification(config):
"""
output = _read_rpm_modifications(config)
return _parse_config_modification(output, config)
+
+
+def _get_leapp_packages_of_type(major_version, component, type_='pkgs'):
+ """
+ Private implementation of get_leapp_packages() and get_leapp_deps_packages().
+
+ :param major_version: Same as for :func:`get_leapp_packages` and
+ :func:`get_leapp_deps_packages`
+ :param component: Same as for :func:`get_leapp_packages` and :func:`get_leapp_deps_packages`
+ :param type_: Either "pkgs" or "deps". Determines which set of packages we're looking for.
+ Corresponds to the keys in the `_LEAPP_PACKAGES_MAP`.
+
+ Retrieving the set of leapp and leapp-deps packages only differs in which key is used to
+ retrieve the packages from _LEAPP_PACKAGES_MAP. This function abstracts that difference.
+ """
+ res = set()
+
+ major_versions = [major_version] if isinstance(major_version, str) else major_version
+ if not major_versions:
+ # No major_version of interest specified -> treat as if only current source system version
+ # requested
+ major_versions = [get_source_major_version()]
+
+ components = [component] if isinstance(component, str) else component
+ if not components:
+ error_msg = ("At least one component must be specified when calling this"
+ " function, available choices are {choices}".format(
+ choices=sorted(_LEAPP_PACKAGES_MAP.keys()))
+ )
+ raise ValueError(error_msg)
+
+ for comp in components:
+ for a_major_version in major_versions:
+ if comp not in _LEAPP_PACKAGES_MAP:
+ error_msg = "The requested component {comp} is unknown, available choices are {choices}".format(
+ comp=component, choices=sorted(_LEAPP_PACKAGES_MAP.keys()))
+ raise ValueError(error_msg)
+
+ if a_major_version not in _LEAPP_PACKAGES_MAP[comp]:
+ error_msg = "The requested major_version {ver} is unknown, available choices are {choices}".format(
+ ver=a_major_version, choices=sorted(_LEAPP_PACKAGES_MAP[comp].keys()))
+ raise ValueError(error_msg)
+
+ # All went well otherwise, get the data
+ res.update(_LEAPP_PACKAGES_MAP[comp][a_major_version].get(type_, []))
+
+ return sorted(res)
+
+
+def get_leapp_packages(major_version=None, component=GET_LEAPP_PACKAGES_DEFAULT_COMPONENTS):
+ """
+ Get list of leapp packages.
+
+ :param major_version: a list or string specifying major_versions. If not defined then current
+ system_version will be used.
+ :param component: a list or a single enum value specifying leapp components
+ (use enum :class: LeappComponents) If defined then only packages related to the specific
+ component(s) will be returned.
+ The default set of components is in `GET_LEAPP_PACKAGES_DEFAULT_COMPONENTS` and
+ simple modifications of the default can be achieved with code like:
+
+ .. code-block:: python
+ get_leapp_packages(
+ component=GET_LEAPP_PACKAGES_DEFAULT_COMPONENTS.difference(
+ [LeappComponents.TOOLS]
+ ))
+
+ :raises ValueError: if a requested component or major_version doesn't exist.
+
+ .. note::
+ Call :func:`get_leapp_dep_packages` as well if you also need the deps metapackages.
+ Those packages determine which RPMs need to be installed for leapp to function.
+ They aren't just Requires on the base leapp and leapp-repository RPMs because they
+ need to be switched from the old system_version's to the new ones at a different
+ point in the upgrade than the base RPMs.
+ """
+ return _get_leapp_packages_of_type(major_version, component, type_="pkgs")
+
+
+def get_leapp_dep_packages(major_version=None, component=GET_LEAPP_PACKAGES_DEFAULT_COMPONENTS):
+ """
+ Get list of leapp dep metapackages.
+
+ :param major_version: a list or string specifying major_versions. If not defined then current
+ system_version will be used.
+ :param component: a list or a single enum value specifying leapp components
+ (use enum :class: LeappComponents) If defined then only packages related to the specific
+ component(s) will be returned.
+ The default set of components is in `GET_LEAPP_PACKAGES_DEFAULT_COMPONENTS` and
+ simple modifications of the default can be achieved with code like:
+
+ .. code-block:: python
+ get_leapp_packages(
+ component=GET_LEAPP_PACKAGES_DEFAULT_COMPONENTS.difference(
+ [LeappComponents.TOOLS]
+ ))
+ :raises ValueError: if a requested component or major_version doesn't exist.
+ """
+ return _get_leapp_packages_of_type(major_version, component, type_="deps")
diff --git a/repos/system_upgrade/common/libraries/tests/test_rpms.py b/repos/system_upgrade/common/libraries/tests/test_rpms.py
index 39a32dcb..955ab05c 100644
--- a/repos/system_upgrade/common/libraries/tests/test_rpms.py
+++ b/repos/system_upgrade/common/libraries/tests/test_rpms.py
@@ -1,4 +1,8 @@
-from leapp.libraries.common.rpms import _parse_config_modification
+import pytest
+
+from leapp.libraries.common.rpms import _parse_config_modification, get_leapp_dep_packages, get_leapp_packages
+from leapp.libraries.common.testutils import CurrentActorMocked
+from leapp.libraries.stdlib import api
def test_parse_config_modification():
@@ -30,3 +34,64 @@ def test_parse_config_modification():
"S.5....T. c /etc/ssh/sshd_config",
]
assert _parse_config_modification(data, "/etc/ssh/sshd_config")
+
+
+@pytest.mark.parametrize('major_version,component,result', [
+ (None, None, ['leapp', 'python3-leapp', 'leapp-upgrade-el8toel9', 'snactor']),
+ ('7', None, ['leapp', 'python2-leapp', 'leapp-upgrade-el7toel8', 'snactor']),
+ (['7', '8'], None, ['leapp', 'python2-leapp', 'leapp-upgrade-el7toel8',
+ 'python3-leapp', 'leapp-upgrade-el8toel9', 'snactor']),
+ ('8', 'framework', ['leapp', 'python3-leapp']),
+ ])
+def test_get_leapp_packages(major_version, component, result, monkeypatch):
+ monkeypatch.setattr(api, 'current_actor', CurrentActorMocked(arch='x86_64', src_ver='8.9', dst_ver='9.3'))
+
+ kwargs = {}
+ if major_version:
+ kwargs["major_version"] = major_version
+ if component:
+ kwargs["component"] = component
+
+ assert set(get_leapp_packages(** kwargs)) == set(result)
+
+
+@pytest.mark.parametrize('major_version,component,result', [
+ ('8', 'nosuchcomponent',
+ (ValueError,
+ r"component nosuchcomponent is unknown, available choices are \['cockpit', 'framework', 'repository', 'tools']")
+ ),
+ ('nosuchversion', "framework",
+ (ValueError, r"major_version nosuchversion is unknown, available choices are \['7', '8']")),
+ ('nosuchversion', False,
+ (ValueError, r"At least one component must be specified when calling this function,"
+ r" available choices are \['cockpit', 'framework', 'repository', 'tools']")),
+])
+def test_get_leapp_packages_errors(major_version, component, result, monkeypatch):
+ monkeypatch.setattr(api, 'current_actor', CurrentActorMocked(arch='x86_64', src_ver='8.9', dst_ver='9.3'))
+
+ kwargs = {}
+ if major_version:
+ kwargs["major_version"] = major_version
+ if component is not None:
+ kwargs["component"] = component
+
+ exc_type, exc_msg = result
+ with pytest.raises(exc_type, match=exc_msg):
+ get_leapp_packages(**kwargs)
+
+
+@pytest.mark.parametrize('major_version,component,result', [
+ (None, None, ['leapp-deps', 'leapp-upgrade-el8toel9-deps']),
+ ('8', 'framework', ['leapp-deps']),
+ ('7', 'tools', []),
+])
+def test_get_leapp_dep_packages(major_version, component, result, monkeypatch):
+ monkeypatch.setattr(api, 'current_actor', CurrentActorMocked(arch='x86_64', src_ver='8.9', dst_ver='9.3'))
+
+ kwargs = {}
+ if major_version:
+ kwargs["major_version"] = major_version
+ if component:
+ kwargs["component"] = component
+
+ assert frozenset(get_leapp_dep_packages(**kwargs)) == frozenset(result)
--
2.43.0

View File

@ -1,28 +0,0 @@
From c627a0be13bf2170df0089cd5516e7615a97eb34 Mon Sep 17 00:00:00 2001
From: Inessa Vasilevskaya <ivasilev@redhat.com>
Date: Wed, 10 Jan 2024 13:34:44 +0100
Subject: [PATCH 52/60] Switch test repo branch to main
As the default branch in the tmt tests repo has been changed
from master to main, we have to address this in the
packit configuration.
---
.packit.yaml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/.packit.yaml b/.packit.yaml
index 1d0b6433..383f5314 100644
--- a/.packit.yaml
+++ b/.packit.yaml
@@ -88,7 +88,7 @@ jobs:
- &sanity-79to86
job: tests
fmf_url: "https://gitlab.cee.redhat.com/oamg/leapp-tests"
- fmf_ref: "master"
+ fmf_ref: "main"
use_internal_tf: True
trigger: pull_request
labels:
--
2.43.0

View File

@ -1,98 +0,0 @@
From 7a819fb293340b2ed22b6d5e2816dd9c39fefdc9 Mon Sep 17 00:00:00 2001
From: Petr Stodulka <pstodulk@redhat.com>
Date: Tue, 5 Dec 2023 13:53:52 +0100
Subject: [PATCH 53/60] Update dependencies: require xfsprogs and e2fsprogs
To be able to format our OVL disk images with XFS or Ext4, we need
the required tools present on the system. However, on systems using
only XFS file systems, the ext4 tools are not needed (and vice versa),
so on such systems users may have removed these packages manually.
In those cases we run into problems, especially since XFS is the
default FS in our case.
To resolve that, we add dependencies on the xfsprogs and e2fsprogs RPMs
to the spec file, so we are sure these are always present on the
system.
In the case of Ext4 it is a little bit "redundant", as the use of Ext4 is
optional. However, we expect that many people will actually use it
(many == not a small number of people -> not an uncommon use case).
So, keeping this the least effort, let's add the requirement for both,
as the actual installation stack is not big.
Packaging:
* Requires xfsprogs, e2fsprogs
* Bump leapp-repository-dependencies to 10
jira: RHEL-10847
---
packaging/leapp-repository.spec | 12 +++++++++++-
packaging/other_specs/leapp-el7toel8-deps.spec | 15 ++++++++++++++-
2 files changed, 25 insertions(+), 2 deletions(-)
diff --git a/packaging/leapp-repository.spec b/packaging/leapp-repository.spec
index 937a738e..2b0a80d4 100644
--- a/packaging/leapp-repository.spec
+++ b/packaging/leapp-repository.spec
@@ -2,7 +2,7 @@
%global repositorydir %{leapp_datadir}/repositories
%global custom_repositorydir %{leapp_datadir}/custom-repositories
-%define leapp_repo_deps 9
+%define leapp_repo_deps 10
%if 0%{?rhel} == 7
%define leapp_python_sitelib %{python2_sitelib}
@@ -149,6 +149,16 @@ Provides: leapp-repository-dependencies = %{leapp_repo_deps}
##################################################
Requires: dnf >= 4
Requires: pciutils
+
+# required to be able to format disk images with XFS file systems (default)
+Requires: xfsprogs
+
+# required to be able to format disk images with Ext4 file systems
+# NOTE: this is not happening by default, but we can expect that many customers
+# will want to / need to do this - especially on RHEL 7 now. Adding these deps
+# as the best trade-off to resolve this problem.
+Requires: e2fsprogs
+
%if 0%{?rhel} && 0%{?rhel} == 7
# Required to gather system facts about SELinux
Requires: libselinux-python
diff --git a/packaging/other_specs/leapp-el7toel8-deps.spec b/packaging/other_specs/leapp-el7toel8-deps.spec
index 4a181ee1..c4e0dd90 100644
--- a/packaging/other_specs/leapp-el7toel8-deps.spec
+++ b/packaging/other_specs/leapp-el7toel8-deps.spec
@@ -9,7 +9,7 @@
%endif
-%define leapp_repo_deps 9
+%define leapp_repo_deps 10
%define leapp_framework_deps 5
# NOTE: the Version contains the %{rhel} macro just for the convenience to
@@ -68,6 +68,19 @@ Requires: cpio
# just to be sure that /etc/modprobe.d is present
Requires: kmod
+# required to be able to format disk images with XFS file systems (default)
+# NOTE: this is really needed on the source system, but keep it for the target
+# one too
+Requires: xfsprogs
+
+# required to be able to format disk images with Ext4 file systems
+# NOTE: this is not happening by default, but we can expect that many customers
+# will want to / need to do this - especially on RHEL 7 now. Adding these deps
+# as the best trade-off to resolve this problem.
+# NOTE: this is really needed on the source system, but keep it for the target
+# one too
+Requires: e2fsprogs
+
%description -n %{lrdname}
%{summary}
--
2.43.0

View File

@ -1,106 +0,0 @@
From 50b4fc016befd855094bdba4d7187bf690c4b2ad Mon Sep 17 00:00:00 2001
From: Toshio Kuratomi <a.badger@gmail.com>
Date: Thu, 11 Jan 2024 11:00:43 -0800
Subject: [PATCH 54/60] Several enhancements to the Makefile
* Allow arbitrary user supplied arguments for pytest, pylint, and flake8. This can be used, for
instance, to select specific tests in pytest (PYTEST_ARGS="-k 'perform_ok'"), or to "disable" a
single linter: (PYLINT_ARGS='--version').
* Better document how to determine the proper value for ACTOR=<actor>.
---
Makefile | 33 +++++++++++++++++++++------------
1 file changed, 21 insertions(+), 12 deletions(-)
diff --git a/Makefile b/Makefile
index b504a854..0de2a86a 100644
--- a/Makefile
+++ b/Makefile
@@ -16,6 +16,12 @@ REPOSITORIES ?= $(shell ls $(_SYSUPG_REPOS) | xargs echo | tr " " ",")
SYSUPG_TEST_PATHS=$(shell echo $(REPOSITORIES) | sed -r "s|(,\\|^)| $(_SYSUPG_REPOS)/|g")
TEST_PATHS:=commands repos/common $(SYSUPG_TEST_PATHS)
+# Several commands can take arbitrary user supplied arguments from environment
+# variables as well:
+PYTEST_ARGS ?=
+PYLINT_ARGS ?=
+FLAKE8_ARGS ?=
+
# python version to run test with
_PYTHON_VENV=$${PYTHON_VENV:-python2.7}
@@ -131,10 +137,13 @@ help:
@echo " test_container_all_no_lint run tests without linting in all available containers"
@echo " clean_containers clean all testing and building container images (to force a rebuild for example)"
@echo ""
- @echo "Targets test, lint and test_no_lint support environment variables ACTOR and"
- @echo "TEST_LIBS."
- @echo "If ACTOR=<actor> is specified, targets are run against the specified actor."
- @echo "If TEST_LIBS=y is specified, targets are run against shared libraries."
+ @echo "* Targets test, lint and test_no_lint support environment variables ACTOR and"
+ @echo " TEST_LIBS."
+ @echo "* If ACTOR=<actor> is specified, targets are run against the specified actor."
+ @echo " <actor> must be the name attribute defined in actor.py."
+ @echo "* If TEST_LIBS=y is specified, targets are run against shared libraries."
+ @echo "* Command line options can be added to pytest, pylint, and flake8 by setting"
+ @echo " the PYTEST_ARGS, PYLINT_ARGS, and FLAKE8_ARGS environment variables."
@echo ""
@echo "Envars affecting actions with COPR (optional):"
@echo " COPR_REPO specify COPR repository, e,g. @oamg/leapp"
@@ -323,15 +332,15 @@ lint:
SEARCH_PATH="$(TEST_PATHS)" && \
echo "Using search path '$${SEARCH_PATH}'" && \
echo "--- Running pylint ---" && \
- bash -c "[[ ! -z '$${SEARCH_PATH}' ]] && find $${SEARCH_PATH} -name '*.py' | sort -u | xargs pylint -j0" && \
+ bash -c "[[ ! -z '$${SEARCH_PATH}' ]] && find $${SEARCH_PATH} -name '*.py' | sort -u | xargs pylint -j0 $(PYLINT_ARGS)" && \
echo "--- Running flake8 ---" && \
- bash -c "[[ ! -z '$${SEARCH_PATH}' ]] && flake8 $${SEARCH_PATH}"
+ bash -c "[[ ! -z '$${SEARCH_PATH}' ]] && flake8 $${SEARCH_PATH} $(FLAKE8_ARGS)"
if [[ "$(_PYTHON_VENV)" == "python2.7" ]] ; then \
. $(VENVNAME)/bin/activate; \
echo "--- Checking py3 compatibility ---" && \
SEARCH_PATH=$(REPOS_PATH) && \
- bash -c "[[ ! -z '$${SEARCH_PATH}' ]] && find $${SEARCH_PATH} -name '*.py' | sort -u | xargs pylint --py3k" && \
+ bash -c "[[ ! -z '$${SEARCH_PATH}' ]] && find $${SEARCH_PATH} -name '*.py' | sort -u | xargs pylint --py3k $(PYLINT_ARGS)" && \
echo "--- Linting done. ---"; \
fi
@@ -358,7 +367,7 @@ test_no_lint:
cd repos/system_upgrade/el7toel8/; \
snactor workflow sanity-check ipu && \
cd - && \
- $(_PYTHON_VENV) -m pytest $(REPORT_ARG) $(TEST_PATHS) $(LIBRARY_PATH)
+ $(_PYTHON_VENV) -m pytest $(REPORT_ARG) $(TEST_PATHS) $(LIBRARY_PATH) $(PYTEST_ARGS)
test: lint test_no_lint
@@ -474,14 +483,14 @@ fast_lint:
@. $(VENVNAME)/bin/activate; \
FILES_TO_LINT="$$(git diff --name-only $(MASTER_BRANCH) --diff-filter AMR | grep '\.py$$')"; \
if [[ -n "$$FILES_TO_LINT" ]]; then \
- pylint -j 0 $$FILES_TO_LINT && \
- flake8 $$FILES_TO_LINT; \
+ pylint -j 0 $$FILES_TO_LINT $(PYLINT_ARGS) && \
+ flake8 $$FILES_TO_LINT $(FLAKE8_ARG); \
LINT_EXIT_CODE="$$?"; \
if [[ "$$LINT_EXIT_CODE" != "0" ]]; then \
exit $$LINT_EXIT_CODE; \
fi; \
if [[ "$(_PYTHON_VENV)" == "python2.7" ]] ; then \
- pylint --py3k $$FILES_TO_LINT; \
+ pylint --py3k $$FILES_TO_LINT $(PYLINT_ARGS); \
fi; \
else \
echo "No files to lint."; \
@@ -489,7 +498,7 @@ fast_lint:
dev_test_no_lint:
. $(VENVNAME)/bin/activate; \
- $(_PYTHON_VENV) -m pytest $(REPORT_ARG) $(APPROX_TEST_PATHS) $(LIBRARY_PATH)
+ $(_PYTHON_VENV) -m pytest $(REPORT_ARG) $(APPROX_TEST_PATHS) $(LIBRARY_PATH) $(PYTEST_ARGS)
dashboard_data:
. $(VENVNAME)/bin/activate; \
--
2.43.0

View File

@ -1,122 +0,0 @@
From e414f7c6572af4293cacadd810154677892c4028 Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Thu, 23 Nov 2023 17:38:35 +0100
Subject: [PATCH 55/60] pes_events_scanner: Ignore Leapp related PES events
When PES events are added for all the Leapp related packages, we need to
ignore them in `pes_events_scanner` so that they are *not* taken into
account during the RPM upgrade transaction, as we either don't want to
upgrade them or want to handle their upgrade in a different way.
Jira: OAMG-5645
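The filtering rule itself is simple; a self-contained sketch (leapp's real Event and Package
models carry more fields, only what is needed here is modeled):

    from collections import namedtuple

    Package = namedtuple('Package', ['name', 'repository', 'modulestream'])
    Event = namedtuple('Event', ['id', 'in_pkgs'])

    LEAPP_PKGS = {'leapp', 'leapp-deps', 'python2-leapp', 'python3-leapp', 'snactor',
                  'leapp-upgrade-el7toel8', 'leapp-upgrade-el7toel8-deps',
                  'leapp-upgrade-el8toel9', 'leapp-upgrade-el8toel9-deps'}

    def remove_leapp_related_events(events):
        # drop any event whose incoming packages include a leapp component
        return [e for e in events if not any(p.name in LEAPP_PKGS for p in e.in_pkgs)]

    events = [Event(1, {Package('python2-leapp', 'repoid-rhel7', None)}),
              Event(2, {Package('vim', 'repoid-rhel7', None)})]
    print([e.id for e in remove_leapp_related_events(events)])  # [2]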
---
.../libraries/pes_events_scanner.py | 16 +++++
.../tests/test_pes_event_scanner.py | 61 +++++++++++++++++++
2 files changed, 77 insertions(+)
diff --git a/repos/system_upgrade/common/actors/peseventsscanner/libraries/pes_events_scanner.py b/repos/system_upgrade/common/actors/peseventsscanner/libraries/pes_events_scanner.py
index 72dd34ec..75c3ea89 100644
--- a/repos/system_upgrade/common/actors/peseventsscanner/libraries/pes_events_scanner.py
+++ b/repos/system_upgrade/common/actors/peseventsscanner/libraries/pes_events_scanner.py
@@ -480,6 +480,21 @@ def apply_transaction_configuration(source_pkgs):
return source_pkgs_with_conf_applied
+def remove_leapp_related_events(events):
+ leapp_pkgs = [
+ 'leapp', 'leapp-deps', 'leapp-upgrade-el7toel8', 'leapp-upgrade-el8toel9',
+ 'leapp-upgrade-el7toel8-deps', 'leapp-upgrade-el8toel9-deps', 'python2-leapp',
+ 'python3-leapp', 'snactor'
+ ]
+ res = []
+ for event in events:
+ if not any(pkg.name in leapp_pkgs for pkg in event.in_pkgs):
+ res.append(event)
+ else:
+ api.current_logger().debug('Filtered out leapp related event, event id: {}'.format(event.id))
+ return res
+
+
def process():
# Retrieve data - installed_pkgs, transaction configuration, pes events
events = get_pes_events('/etc/leapp/files', 'pes-events.json')
@@ -494,6 +509,7 @@ def process():
# packages of the target system, so we can distinguish what needs to be repomapped
repoids_of_source_pkgs = {pkg.repository for pkg in source_pkgs}
+ events = remove_leapp_related_events(events)
events = remove_undesired_events(events, releases)
# Apply events - compute what packages should the target system have
diff --git a/repos/system_upgrade/common/actors/peseventsscanner/tests/test_pes_event_scanner.py b/repos/system_upgrade/common/actors/peseventsscanner/tests/test_pes_event_scanner.py
index 243f85c4..8150c164 100644
--- a/repos/system_upgrade/common/actors/peseventsscanner/tests/test_pes_event_scanner.py
+++ b/repos/system_upgrade/common/actors/peseventsscanner/tests/test_pes_event_scanner.py
@@ -402,3 +402,64 @@ def test_pkgs_are_demodularized_when_crossing_major_version(monkeypatch):
}
assert demodularized_pkgs == {Package('demodularized', 'repo', ('module-demodularized', 'stream'))}
assert target_pkgs == expected_target_pkgs
+
+
+def test_remove_leapp_related_events():
+ # these are just hypothetical and not necessarily correct
+ package_set_two_leapp = {Package('leapp-upgrade-el7toel8', 'repoid-rhel7', None),
+ Package('leapp-upgrade-el7toel8-deps', 'repoid-rhel7', None)}
+ package_set_one_leapp = {Package('leapp-upgrade-el7toel8', 'repoid-rhel7', None),
+ Package('other', 'repoid-rhel7', None)}
+ in_events = [
+ Event(1, Action.PRESENT, {Package('leapp', 'repoid-rhel7', None)},
+ {Package('leapp', 'repoid-rhel8', None)}, (7, 0), (8, 0), []),
+
+ Event(1, Action.RENAMED, {Package('leapp-deps', 'repoid-rhel7', None)},
+ {Package('leapp-deps', 'repoid-rhel8', None)}, (7, 0), (8, 0), []),
+ Event(1, Action.RENAMED, {Package('leapp-upgrade-el7toel8', 'repoid-rhel7', None)},
+ {Package('leapp-upgrade-el8toel9', 'repoid-rhel8', None)}, (7, 0), (8, 0), []),
+ Event(2, Action.RENAMED, {Package('leapp-upgrade-el7toel8-deps', 'repoid-rhel7', None)},
+ {Package('leapp-upgrade-el8toel9-deps', 'repoid-rhel8', None)}, (7, 0), (8, 0), []),
+ Event(2, Action.PRESENT, {Package('snactor', 'repoid-rhel7', None)},
+ {Package('snactor', 'repoid-rhel8', None)}, (7, 0), (8, 0), []),
+ Event(2, Action.REPLACED, {Package('python2-leapp', 'repoid-rhel7', None)},
+ {Package('python3-leapp', 'repoid-rhel8', None)},
+ (7, 0), (8, 0), []),
+
+ Event(1, Action.DEPRECATED, {Package('leapp-upgrade-el8toel9', 'repoid-rhel8', None)},
+ {Package('leapp-upgrade-el8toel9', 'repoid-rhel9', None)}, (8, 0), (9, 0), []),
+ Event(2, Action.REMOVED, {Package('leapp-upgrade-el8toel9-deps', 'repoid-rhel8', None)},
+ {}, (8, 0), (9, 0), []),
+ Event(1, Action.RENAMED, {Package('leapp-deps', 'repoid-rhel8', None)},
+ {Package('leapp-deps', 'repoid-rhel9', None)}, (8, 0), (9, 0), []),
+ Event(2, Action.PRESENT, {Package('snactor', 'repoid-rhel8', None)},
+ {Package('snactor', 'repoid-rhel9', None)}, (8, 0), (9, 0), []),
+ Event(2, Action.REMOVED, {Package('python3-leapp', 'repoid-rhel8', None)},
+ {Package('snactor', 'repoid-rhel9', None)}, (8, 0), (9, 0), []),
+
+ Event(2, Action.PRESENT, {Package('other-pkg', 'repoid-rhel8', None)},
+ {Package('other-pkg', 'repoid-rhel9', None)}, (7, 0), (8, 0), []),
+ Event(2, Action.PRESENT, {Package('other-pkg-with-leapp-in-the-name', 'repoid-rhel7', None)},
+ {Package('other-pkg-with-leapp-in-the-name', 'repoid-rhel8', None)}, (7, 0), (8, 0), []),
+
+ # multiple leapp packages in in_pkgs
+ Event(1, Action.MERGED, package_set_two_leapp, {Package('leapp-upgrade-el7toel8', 'repoid-rhel8', None)},
+ (7, 0), (8, 0), []),
+
+ # multiple leapp packages in out_pkgs
+ Event(1, Action.SPLIT, {Package('leapp-upgrade-el7toel8', 'repoid-rhel7', None)},
+ package_set_two_leapp, (7, 0), (8, 0), []),
+
+ # leapp and other pkg in in_pkgs
+ Event(1, Action.MERGED, package_set_one_leapp, {Package('leapp', 'repoid-rhel8', None)},
+ (7, 0), (8, 0), []),
+ ]
+ expected_out_events = [
+ Event(2, Action.PRESENT, {Package('other-pkg', 'repoid-rhel8', None)},
+ {Package('other-pkg', 'repoid-rhel9', None)}, (7, 0), (8, 0), []),
+ Event(2, Action.PRESENT, {Package('other-pkg-with-leapp-in-the-name', 'repoid-rhel7', None)},
+ {Package('other-pkg-with-leapp-in-the-name', 'repoid-rhel8', None)}, (7, 0), (8, 0), []),
+ ]
+
+ out_events = pes_events_scanner.remove_leapp_related_events(in_events)
+ assert out_events == expected_out_events
--
2.43.0

View File

@ -1,61 +0,0 @@
From 14667eef1fbec335780f995af89e0c0fb8dc25ba Mon Sep 17 00:00:00 2001
From: Inessa Vasilevskaya <ivasilev@redhat.com>
Date: Thu, 11 Jan 2024 14:27:56 +0100
Subject: [PATCH 56/60] Use library functions for getting leapp packages
Instead of a hardcoded list of leapp packages, let's rely on the
native leapp library functions that were introduced a few
commits ago.
OAMG-5645
---
.../peseventsscanner/libraries/pes_events_scanner.py | 9 ++++-----
.../peseventsscanner/tests/test_pes_event_scanner.py | 5 ++++-
2 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/repos/system_upgrade/common/actors/peseventsscanner/libraries/pes_events_scanner.py b/repos/system_upgrade/common/actors/peseventsscanner/libraries/pes_events_scanner.py
index 75c3ea89..f9411dfe 100644
--- a/repos/system_upgrade/common/actors/peseventsscanner/libraries/pes_events_scanner.py
+++ b/repos/system_upgrade/common/actors/peseventsscanner/libraries/pes_events_scanner.py
@@ -5,6 +5,7 @@ from leapp import reporting
from leapp.exceptions import StopActorExecutionError
from leapp.libraries.actor import peseventsscanner_repomap
from leapp.libraries.actor.pes_event_parsing import Action, get_pes_events, Package
+from leapp.libraries.common import rpms
from leapp.libraries.common.config import version
from leapp.libraries.stdlib import api
from leapp.libraries.stdlib.config import is_verbose
@@ -481,11 +482,9 @@ def apply_transaction_configuration(source_pkgs):
def remove_leapp_related_events(events):
- leapp_pkgs = [
- 'leapp', 'leapp-deps', 'leapp-upgrade-el7toel8', 'leapp-upgrade-el8toel9',
- 'leapp-upgrade-el7toel8-deps', 'leapp-upgrade-el8toel9-deps', 'python2-leapp',
- 'python3-leapp', 'snactor'
- ]
+ # NOTE(ivasilev) Need to revisit this once rhel9->rhel10 upgrades become a thing
+ leapp_pkgs = rpms.get_leapp_dep_packages(
+ major_version=['7', '8']) + rpms.get_leapp_packages(major_version=['7', '8'])
res = []
for event in events:
if not any(pkg.name in leapp_pkgs for pkg in event.in_pkgs):
diff --git a/repos/system_upgrade/common/actors/peseventsscanner/tests/test_pes_event_scanner.py b/repos/system_upgrade/common/actors/peseventsscanner/tests/test_pes_event_scanner.py
index 8150c164..7cdcf820 100644
--- a/repos/system_upgrade/common/actors/peseventsscanner/tests/test_pes_event_scanner.py
+++ b/repos/system_upgrade/common/actors/peseventsscanner/tests/test_pes_event_scanner.py
@@ -404,7 +404,10 @@ def test_pkgs_are_demodularized_when_crossing_major_version(monkeypatch):
assert target_pkgs == expected_target_pkgs
-def test_remove_leapp_related_events():
+def test_remove_leapp_related_events(monkeypatch):
+ # NOTE(ivasilev) That's required to use leapp library functions that rely on calls to
+ # get_source/target_system_version functions
+ monkeypatch.setattr(api, 'current_actor', CurrentActorMocked(arch='x86_64', src_ver='7.9', dst_ver='8.8'))
# these are just hypothetical and not necessarily correct
package_set_two_leapp = {Package('leapp-upgrade-el7toel8', 'repoid-rhel7', None),
Package('leapp-upgrade-el7toel8-deps', 'repoid-rhel7', None)}
--
2.43.0

View File

@ -1,233 +0,0 @@
From 1afd0fb1a0ed7354e7ed525bf0de3b883eddff8e Mon Sep 17 00:00:00 2001
From: Petr Stodulka <pstodulk@redhat.com>
Date: Thu, 19 Oct 2023 18:44:06 +0200
Subject: [PATCH 57/60] Introduce TrackedFilesInfoSource message and new actor
We have already hit, several times, a situation where an actor needed
information about a specific file (whether it exists, has been changed, ...),
and for that purpose an extra scanner actor had to be created, with
an associated message and model.
To cover such cases, we are introducing the new TrackedFilesInfoSource
model and the scansourcefiles actor. In the future, when an actor needs
such a piece of information and wants to act on it, the developer can
just update the lists in the introduced actor's library, and the
information about the particular file will be provided.
Another benefit is saving time on writing new unit tests and scanning
code, as updating the list of tracked files does not affect the
algorithm.
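A hypothetical consumer sketch of how this is meant to be used: an actor that wants to know
whether a file was modified first adds it to the TRACKED_FILES lists in the new scanner
library and then only consumes the message (the 'files' field name and the example path are
assumptions for illustration):

    from leapp.libraries.stdlib import api
    from leapp.models import TrackedFilesInfoSource

    def is_tracked_file_modified(path):
        # FileInfo entries carry: path, exists, rpm_name, is_modified
        for msg in api.consume(TrackedFilesInfoSource):
            for finfo in msg.files:
                if finfo.path == path:
                    return finfo.is_modified
        return False

    # e.g. inside an actor's process():
    #     if is_tracked_file_modified('/etc/example.conf'):
    #         ...produce a report...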
---
.../common/actors/scansourcefiles/actor.py | 32 ++++++++
.../libraries/scansourcefiles.py | 79 +++++++++++++++++++
.../tests/unit_test_scansourcefiles.py | 5 ++
.../common/models/trackedfiles.py | 60 ++++++++++++++
4 files changed, 176 insertions(+)
create mode 100644 repos/system_upgrade/common/actors/scansourcefiles/actor.py
create mode 100644 repos/system_upgrade/common/actors/scansourcefiles/libraries/scansourcefiles.py
create mode 100644 repos/system_upgrade/common/actors/scansourcefiles/tests/unit_test_scansourcefiles.py
create mode 100644 repos/system_upgrade/common/models/trackedfiles.py
diff --git a/repos/system_upgrade/common/actors/scansourcefiles/actor.py b/repos/system_upgrade/common/actors/scansourcefiles/actor.py
new file mode 100644
index 00000000..b368fc88
--- /dev/null
+++ b/repos/system_upgrade/common/actors/scansourcefiles/actor.py
@@ -0,0 +1,32 @@
+from leapp.actors import Actor
+from leapp.libraries.actor import scansourcefiles
+from leapp.models import TrackedFilesInfoSource
+from leapp.tags import FactsPhaseTag, IPUWorkflowTag
+
+
+class ScanSourceFiles(Actor):
+ """
+ Scan files (explicitly specified) of the source system.
+
+    If an actor requires information about a file, like whether it's installed,
+    modified, etc., the file can be added to the list of files to be tracked, so
+    no extra actor has to be created to provide just that one piece of information.
+
+    Scanning all changed files tracked by RPMs is very expensive, so we rather
+    provide this possibility to simplify the work for others.
+
+ See lists defined in the private library.
+ """
+    # TODO(pstodulk): in some cases it could be valuable to specify an rpm name
+    # and provide information about all of its changed files instead. Both approaches
+    # have slightly different use-cases and expectations. In the second
+    # case it would be a good solution for tracking changed files of
+    # leapp-repository.
+
+ name = 'scan_source_files'
+ consumes = ()
+ produces = (TrackedFilesInfoSource,)
+ tags = (IPUWorkflowTag, FactsPhaseTag)
+
+ def process(self):
+ scansourcefiles.process()
diff --git a/repos/system_upgrade/common/actors/scansourcefiles/libraries/scansourcefiles.py b/repos/system_upgrade/common/actors/scansourcefiles/libraries/scansourcefiles.py
new file mode 100644
index 00000000..33e0275f
--- /dev/null
+++ b/repos/system_upgrade/common/actors/scansourcefiles/libraries/scansourcefiles.py
@@ -0,0 +1,79 @@
+import os
+
+from leapp.libraries.common.config.version import get_source_major_version
+from leapp.libraries.stdlib import api, CalledProcessError, run
+from leapp.models import FileInfo, TrackedFilesInfoSource
+
+# TODO(pstodulk): make linter happy about this
+# common -> Files supposed to be scanned on all system versions.
+# '8' (etc..) -> files supposed to be scanned when particular major version of OS is used
+TRACKED_FILES = {
+ 'common': [
+ ],
+ '8': [
+ ],
+ '9': [
+ ],
+}
+
+# TODO(pstodulk)?: introduce possibility to discover files under a dir that
+# are not tracked by any rpm or a specified rpm? Currently I have only one
+# use case for that in my head, so possibly it will be better to skip a generic
+# solution and just introduce a new actor and msg for that (check whether
+# actors not owned by our package(s) are present).
+
+
+def _get_rpm_name(input_file):
+ try:
+ rpm_names = run(['rpm', '-qf', '--queryformat', r'%{NAME}\n', input_file], split=True)['stdout']
+ except CalledProcessError:
+ # is not owned by any rpm
+ return ''
+
+ if len(rpm_names) > 1:
+        # this is just a seatbelt; it could happen for directories, but we do
+        # not expect directories to be specified here at all. If so, we should
+        # provide a list instead of a string
+ api.current_logger().warning(
+ 'The {} file is owned by multiple rpms: {}.'
+ .format(input_file, ', '.join(rpm_names))
+ )
+ return rpm_names[0]
+
+
+def is_modified(input_file):
+ """
+ Return True if checksum has been changed (or removed).
+
+ Ignores mode, user, type, ...
+ """
+ result = run(['rpm', '-Vf', '--nomtime', input_file], checked=False)
+ if not result['exit_code']:
+ return False
+ status = result['stdout'].split()[0]
+ return status == 'missing' or '5' in status
+
+
+def scan_file(input_file):
+ data = {
+ 'path': input_file,
+ 'exists': os.path.exists(input_file),
+ 'rpm_name': _get_rpm_name(input_file),
+ }
+
+ if data['rpm_name']:
+ data['is_modified'] = is_modified(input_file)
+ else:
+ # it's not tracked by any rpm at all, so always False
+ data['is_modified'] = False
+
+ return FileInfo(**data)
+
+
+def scan_files(files):
+ return [scan_file(fname) for fname in files]
+
+
+def process():
+ files = scan_files(TRACKED_FILES['common'] + TRACKED_FILES.get(get_source_major_version(), []))
+ api.produce(TrackedFilesInfoSource(files=files))
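Side note (illustration only, not part of the patch): the is_modified() helper above keys off the first column of `rpm -V --nomtime` output. A standalone sketch of that parsing, with a made-up sample line:

# Hypothetical output line of `rpm -V --nomtime /etc/pki/tls/openssl.cnf`
sample_stdout = 'S.5....T.  c /etc/pki/tls/openssl.cnf'

status = sample_stdout.split()[0]               # -> 'S.5....T.'
changed = status == 'missing' or '5' in status  # the '5' flag marks a digest (checksum) change
print(changed)                                  # True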
diff --git a/repos/system_upgrade/common/actors/scansourcefiles/tests/unit_test_scansourcefiles.py b/repos/system_upgrade/common/actors/scansourcefiles/tests/unit_test_scansourcefiles.py
new file mode 100644
index 00000000..6a6b009a
--- /dev/null
+++ b/repos/system_upgrade/common/actors/scansourcefiles/tests/unit_test_scansourcefiles.py
@@ -0,0 +1,5 @@
+def test_scansourcefiles():
+ # TODO(pstodulk): keeping unit tests for later after I check the idea
+ # of this actor with the team.
+ # JIRA: OAMG-10367
+ pass
diff --git a/repos/system_upgrade/common/models/trackedfiles.py b/repos/system_upgrade/common/models/trackedfiles.py
new file mode 100644
index 00000000..f7c2c809
--- /dev/null
+++ b/repos/system_upgrade/common/models/trackedfiles.py
@@ -0,0 +1,60 @@
+from leapp.models import fields, Model
+from leapp.topics import SystemInfoTopic
+
+
+class FileInfo(Model):
+ """
+ Various data about a file.
+
+ This model is not supposed to be used as a message directly.
+    See e.g. :class:`TrackedFilesInfoSource` instead.
+ """
+ topic = SystemInfoTopic
+
+ path = fields.String()
+ """
+ Canonical path to the file.
+ """
+
+ exists = fields.Boolean()
+ """
+ True if the file is present on the system.
+ """
+
+ rpm_name = fields.String(default="")
+ """
+    Name of the rpm that owns the file, or an empty string if the file is not
+    owned by any rpm.
+ """
+
+    # NOTE(pstodulk): I have been thinking about a "state"/"modified" field
+    # instead, which could contain an enum list specifying what has
+    # been changed (checksum, type, owner, ...). But currently we do not have
+    # use cases for that and do not want to implement it now. So starting simply
+    # with this one.
+ is_modified = fields.Boolean()
+ """
+ True if the checksum of the file has been changed (includes the missing state).
+
+    The field is meaningful only for a file tracked by rpm (excluding ghost files);
+    otherwise the value is always False.
+ """
+
+
+class TrackedFilesInfoSource(Model):
+ """
+ Provide information about files on the source system explicitly defined
+ in the actor to be tracked.
+
+    See the actor producing this message to discover the lists to which you
+    can add files to be tracked.
+
+ This particular message is expected to be produced only once by the
+ specific actor. Do not produce multiple messages of this model.
+ """
+ topic = SystemInfoTopic
+
+ files = fields.List(fields.Model(FileInfo), default=[])
+ """
+ List of :class:`FileInfo`.
+ """
--
2.43.0
View File
@ -1,581 +0,0 @@
From c8321a9da33ecfb71d4f6ebd03c4b334f9e91dcc Mon Sep 17 00:00:00 2001
From: Petr Stodulka <pstodulk@redhat.com>
Date: Fri, 20 Oct 2023 16:40:09 +0200
Subject: [PATCH 58/60] Add actors for OpenSSL conf and IBMCA
* The openssl-ibmca package needs to be reconfigured manually after the upgrade.
Report it to the user if the package is installed.
* The openssl configuration file (/etc/pki/tls/openssl.cnf) is not
100% compatible between major versions of RHEL due to different
versions of OpenSSL. Also, the configuration is supposed to be
done via system-wide crypto policies instead, so it's expected
that this file is not modified anymore. If the content of the file has
been modified, report to the user what will happen during the upgrade
and what they should do after it.
* If the openssl config file is modified (rpm -Vf <file>) and an
*.rpmnew file exists, back up the file with the .leappsave suffix
and replace it with the *.rpmnew one.
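In Python terms, the last bullet roughly corresponds to the following sketch (an assumption-laden illustration only; the actual actor added below uses leapp's run() helper with `mv` instead of shutil):

import os
import shutil

CONF = '/etc/pki/tls/openssl.cnf'

# If the config was modified by the user and an .rpmnew file exists,
# keep the user's copy as .leappsave and promote the .rpmnew file.
if os.path.exists(CONF + '.rpmnew'):
    shutil.move(CONF, CONF + '.leappsave')
    shutil.move(CONF + '.rpmnew', CONF)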
---
.../actors/openssl/checkopensslconf/actor.py | 33 ++++
.../libraries/checkopensslconf.py | 135 ++++++++++++++++
.../tests/unit_test_checkopensslconf.py | 102 ++++++++++++
.../openssl/migrateopensslconf/actor.py | 26 ++++
.../libraries/migrateopensslconf.py | 54 +++++++
.../tests/unit_test_migrateopensslconf.py | 145 ++++++++++++++++++
.../libraries/scansourcefiles.py | 1 +
7 files changed, 496 insertions(+)
create mode 100644 repos/system_upgrade/common/actors/openssl/checkopensslconf/actor.py
create mode 100644 repos/system_upgrade/common/actors/openssl/checkopensslconf/libraries/checkopensslconf.py
create mode 100644 repos/system_upgrade/common/actors/openssl/checkopensslconf/tests/unit_test_checkopensslconf.py
create mode 100644 repos/system_upgrade/common/actors/openssl/migrateopensslconf/actor.py
create mode 100644 repos/system_upgrade/common/actors/openssl/migrateopensslconf/libraries/migrateopensslconf.py
create mode 100644 repos/system_upgrade/common/actors/openssl/migrateopensslconf/tests/unit_test_migrateopensslconf.py
diff --git a/repos/system_upgrade/common/actors/openssl/checkopensslconf/actor.py b/repos/system_upgrade/common/actors/openssl/checkopensslconf/actor.py
new file mode 100644
index 00000000..dd05db9c
--- /dev/null
+++ b/repos/system_upgrade/common/actors/openssl/checkopensslconf/actor.py
@@ -0,0 +1,33 @@
+from leapp.actors import Actor
+from leapp.libraries.actor import checkopensslconf
+from leapp.models import DistributionSignedRPM, Report, TrackedFilesInfoSource
+from leapp.tags import ChecksPhaseTag, IPUWorkflowTag
+
+
+class CheckOpenSSLConf(Actor):
+ """
+    Check the openssl configuration and the openssl-ibmca package.
+
+ See the report messages for more details. The summary is that since RHEL 8
+ it's expected to configure OpenSSL via crypto policies. Also, OpenSSL has
+ different versions between major versions of RHEL:
+ * RHEL 7: 1.0,
+ * RHEL 8: 1.1,
+ * RHEL 9: 3.0
+    So the OpenSSL configuration from the older system does not have to be 100%
+    compatible with the new system. In some cases, the old configuration could
+    make the system inaccessible remotely. So the new approach is to ensure the
+    upgraded system will always use the new default /etc/pki/tls/openssl.cnf
+    configuration file (the original one will be backed up if modified by the user).
+
+    The same applies to openssl-ibmca, which is expected to be configured again
+    on each newer system.
+ """
+
+ name = 'check_openssl_conf'
+ consumes = (DistributionSignedRPM, TrackedFilesInfoSource)
+ produces = (Report,)
+ tags = (IPUWorkflowTag, ChecksPhaseTag)
+
+ def process(self):
+ checkopensslconf.process()
diff --git a/repos/system_upgrade/common/actors/openssl/checkopensslconf/libraries/checkopensslconf.py b/repos/system_upgrade/common/actors/openssl/checkopensslconf/libraries/checkopensslconf.py
new file mode 100644
index 00000000..06a30fa1
--- /dev/null
+++ b/repos/system_upgrade/common/actors/openssl/checkopensslconf/libraries/checkopensslconf.py
@@ -0,0 +1,135 @@
+from leapp import reporting
+from leapp.libraries.common.config import architecture, version
+from leapp.libraries.common.rpms import has_package
+from leapp.libraries.stdlib import api
+from leapp.models import DistributionSignedRPM, TrackedFilesInfoSource
+
+DEFAULT_OPENSSL_CONF = '/etc/pki/tls/openssl.cnf'
+URL_8_CRYPTOPOLICIES = 'https://red.ht/rhel-8-system-wide-crypto-policies'
+URL_9_CRYPTOPOLICIES = 'https://red.ht/rhel-9-system-wide-crypto-policies'
+
+
+def check_ibmca():
+ if not architecture.matches_architecture(architecture.ARCH_S390X):
+        # the check is not really needed, but keeping it to make things clear
+ return
+ if not has_package(DistributionSignedRPM, 'openssl-ibmca'):
+ return
+    # RHEL 9 introduced a new technology: OpenSSL providers. The engine
+    # is deprecated, so keep the proper terminology to not confuse users.
+ dst_tech = 'engine' if version.get_target_major_version() == '8' else 'providers'
+ summary = (
+        'The presence of the openssl-ibmca package suggests that the system may be configured'
+ ' to use the IBMCA OpenSSL engine.'
+ ' Due to major changes in OpenSSL and libica between RHEL {source} and RHEL {target} it is not'
+ ' possible to migrate OpenSSL configuration files automatically. Therefore,'
+ ' it is necessary to enable IBMCA {tech} in the OpenSSL config file manually'
+ ' after the system upgrade.'
+ .format(
+ source=version.get_source_major_version(),
+ target=version.get_target_major_version(),
+ tech=dst_tech
+ )
+ )
+
+ hint = (
+ 'Configure the IBMCA {tech} manually after the upgrade.'
+ ' Please, be aware that it is not recommended to configure the system default'
+ ' {fpath}. Instead, it is recommended to configure a copy of'
+ ' that file and use this copy only for particular applications that are supposed'
+ ' to utilize the IBMCA {tech}. The location of the OpenSSL configuration file'
+ ' can be specified using the OPENSSL_CONF environment variable.'
+ .format(tech=dst_tech, fpath=DEFAULT_OPENSSL_CONF)
+ )
+
+ reporting.create_report([
+ reporting.Title('Detected possible use of IBMCA in OpenSSL'),
+ reporting.Summary(summary),
+ reporting.Remediation(hint=hint),
+ reporting.Severity(reporting.Severity.MEDIUM),
+ reporting.Groups([
+ reporting.Groups.POST,
+ reporting.Groups.ENCRYPTION
+ ]),
+ ])
+
+
+def _is_openssl_modified():
+ tracked_files = next(api.consume(TrackedFilesInfoSource), None)
+ if not tracked_files:
+        # not expected at all; skip the check, but keep the log just in case
+        api.current_logger().warning('The TrackedFilesInfoSource message is missing! Skipping check of openssl config.')
+ return False
+ for finfo in tracked_files.files:
+ if finfo.path == DEFAULT_OPENSSL_CONF:
+ return finfo.is_modified
+ return False
+
+
+def check_default_openssl():
+ if not _is_openssl_modified():
+ return
+
+    crypto_url = URL_8_CRYPTOPOLICIES if version.get_target_major_version() == '8' else URL_9_CRYPTOPOLICIES
+
+    # TODO(pstodulk): This will need some rewording in the future, as OpenSSL engines
+    # are deprecated since RHEL 8 and people should use OpenSSL providers instead.
+    # (IIRC, OpenSSL providers are required since RHEL 9.) The
+    # current wording could be inaccurate.
+ summary = (
+ 'The OpenSSL configuration file ({fpath}) has been'
+ ' modified on the system. RHEL 8 (and newer) systems provide a crypto-policies'
+ ' mechanism ensuring usage of system-wide secure cryptography algorithms.'
+        ' Also the target system uses a newer version of OpenSSL that is not fully'
+ ' compatible with the current one.'
+ ' To ensure the upgraded system uses crypto-policies as expected,'
+ ' the new version of the openssl configuration file must be installed'
+ ' during the upgrade. This will be done automatically.'
+ ' The original configuration file will be saved'
+ ' as "{fpath}.leappsave".'
+ '\n\nNote this can affect the ability to connect to the system after'
+ ' the upgrade if it depends on the current OpenSSL configuration.'
+ ' Such a problem may be caused by using a particular OpenSSL engine, as'
+ ' OpenSSL engines built for the'
+ ' RHEL {source} system are not compatible with RHEL {target}.'
+ .format(
+ fpath=DEFAULT_OPENSSL_CONF,
+ source=version.get_source_major_version(),
+ target=version.get_target_major_version()
+ )
+ )
+ if version.get_target_major_version() == '9':
+        # NOTE(pstodulk): this is an attempt to make the engine/providers wording a
+        # little bit better (see my TODO note above)
+ summary += (
+ '\n\nNote the legacy ENGINE API is deprecated since RHEL 8 and'
+ ' it is required to use the new OpenSSL providers API instead on'
+ ' RHEL 9 systems.'
+ )
+ hint = (
+        'Check that your ability to log in to the system does not depend on'
+ ' the OpenSSL configuration. After the upgrade, review the system configuration'
+ ' and configure the system as needed.'
+ ' Please, be aware that it is not recommended to configure the system default'
+ ' {fpath}. Instead, it is recommended to copy the file and use this copy'
+ ' to configure particular applications.'
+ ' The default OpenSSL configuration file should be modified only'
+ ' when it is really necessary.'
+    ).format(fpath=DEFAULT_OPENSSL_CONF)
+ reporting.create_report([
+ reporting.Title('The /etc/pki/tls/openssl.cnf file is modified and will be replaced during the upgrade.'),
+ reporting.Summary(summary),
+ reporting.Remediation(hint=hint),
+ reporting.Severity(reporting.Severity.HIGH),
+ reporting.Groups([reporting.Groups.POST, reporting.Groups.SECURITY]),
+ reporting.RelatedResource('file', DEFAULT_OPENSSL_CONF),
+ reporting.ExternalLink(
+ title='Using system-wide cryptographic policies.',
+ url=crypto_url
+ )
+ ])
+
+
+def process():
+ check_ibmca()
+ check_default_openssl()
diff --git a/repos/system_upgrade/common/actors/openssl/checkopensslconf/tests/unit_test_checkopensslconf.py b/repos/system_upgrade/common/actors/openssl/checkopensslconf/tests/unit_test_checkopensslconf.py
new file mode 100644
index 00000000..541ff75d
--- /dev/null
+++ b/repos/system_upgrade/common/actors/openssl/checkopensslconf/tests/unit_test_checkopensslconf.py
@@ -0,0 +1,102 @@
+import pytest
+
+from leapp import reporting
+from leapp.libraries.actor import checkopensslconf
+from leapp.libraries.common.config import architecture
+from leapp.libraries.common.testutils import create_report_mocked, CurrentActorMocked, logger_mocked
+from leapp.libraries.stdlib import api
+from leapp.models import DistributionSignedRPM, FileInfo, RPM, TrackedFilesInfoSource
+
+_DUMP_PKG_NAMES = ['random', 'pkgs', 'openssl-ibmca-nope', 'ibmca', 'nope-openssl-ibmca']
+_SSL_CONF = checkopensslconf.DEFAULT_OPENSSL_CONF
+
+
+def _msg_pkgs(pkgnames):
+ rpms = []
+ for pname in pkgnames:
+ rpms.append(RPM(
+ name=pname,
+ epoch='0',
+ version='1.0',
+ release='1',
+ arch='noarch',
+ pgpsig='RSA/SHA256, Mon 01 Jan 1970 00:00:00 AM -03, Key ID 199e2f91fd431d51',
+ packager='Red Hat, Inc. (auxiliary key 2) <security@redhat.com>'
+
+ ))
+ return DistributionSignedRPM(items=rpms)
+
+
+@pytest.mark.parametrize('arch,pkgnames,ibmca_report', (
+ (architecture.ARCH_S390X, [], False),
+ (architecture.ARCH_S390X, _DUMP_PKG_NAMES, False),
+ (architecture.ARCH_S390X, ['openssl-ibmca'], True),
+ (architecture.ARCH_S390X, _DUMP_PKG_NAMES + ['openssl-ibmca'], True),
+ (architecture.ARCH_S390X, ['openssl-ibmca'] + _DUMP_PKG_NAMES, True),
+
+ # stay false for non-IBM-z arch - invalid scenario basically
+ (architecture.ARCH_X86_64, ['openssl-ibmca'], False),
+ (architecture.ARCH_PPC64LE, ['openssl-ibmca'], False),
+ (architecture.ARCH_ARM64, ['openssl-ibmca'], False),
+
+))
+@pytest.mark.parametrize('src_maj_ver', ('7', '8', '9'))
+def test_check_ibmca(monkeypatch, src_maj_ver, arch, pkgnames, ibmca_report):
+ monkeypatch.setattr(reporting, "create_report", create_report_mocked())
+ monkeypatch.setattr(api, 'current_actor', CurrentActorMocked(
+ arch=arch,
+ msgs=[_msg_pkgs(pkgnames)],
+ src_ver='{}.6'.format(src_maj_ver),
+ dst_ver='{}.0'.format(int(src_maj_ver) + 1)
+ ))
+ checkopensslconf.check_ibmca()
+
+ if not ibmca_report:
+ assert not reporting.create_report.called, 'IBMCA report created when it should not.'
+ else:
+ assert reporting.create_report.called, 'IBMCA report has not been created.'
+
+
+def _msg_files(fnames_changed, fnames_untouched):
+ res = []
+ for fname in fnames_changed:
+ res.append(FileInfo(
+ path=fname,
+ exists=True,
+ is_modified=True
+ ))
+
+ for fname in fnames_untouched:
+ res.append(FileInfo(
+ path=fname,
+ exists=True,
+ is_modified=False
+ ))
+
+ return TrackedFilesInfoSource(files=res)
+
+
+# NOTE(pstodulk): Ignoring the situation when _SSL_CONF is missing (modified, does not exist).
+# It's not a valid scenario actually, as this file simply must exist on the system for it
+# to be considered in a supported state.
+@pytest.mark.parametrize('msg,openssl_report', (
+ # matrix focused on openssl reports only (positive)
+ (_msg_files([], []), False),
+ (_msg_files([_SSL_CONF], []), True),
+ (_msg_files(['what/ever', _SSL_CONF, 'something'], []), True),
+ (_msg_files(['what/ever'], [_SSL_CONF]), False),
+))
+@pytest.mark.parametrize('src_maj_ver', ('7', '8', '9'))
+def test_check_openssl(monkeypatch, src_maj_ver, msg, openssl_report):
+ monkeypatch.setattr(reporting, "create_report", create_report_mocked())
+ monkeypatch.setattr(api, 'current_actor', CurrentActorMocked(
+ msgs=[msg],
+ src_ver='{}.6'.format(src_maj_ver),
+ dst_ver='{}.0'.format(int(src_maj_ver) + 1)
+ ))
+ checkopensslconf.process()
+
+ if not openssl_report:
+ assert not reporting.create_report.called, 'OpenSSL report created when it should not.'
+ else:
+ assert reporting.create_report.called, 'OpenSSL report has not been created.'
diff --git a/repos/system_upgrade/common/actors/openssl/migrateopensslconf/actor.py b/repos/system_upgrade/common/actors/openssl/migrateopensslconf/actor.py
new file mode 100644
index 00000000..f373b5c4
--- /dev/null
+++ b/repos/system_upgrade/common/actors/openssl/migrateopensslconf/actor.py
@@ -0,0 +1,26 @@
+from leapp.actors import Actor
+from leapp.libraries.actor import migrateopensslconf
+from leapp.tags import ApplicationsPhaseTag, IPUWorkflowTag
+
+
+class MigrateOpenSslConf(Actor):
+ """
+ Enforce the target default configuration file to be used.
+
+    If /etc/pki/tls/openssl.cnf has been modified and an openssl.cnf.rpmnew
+    file has been created, back up the original one and replace it with the new default.
+
+ tl;dr: (simplified)
+ if the file is modified; then
+ mv /etc/pki/tls/openssl.cnf{,.leappsave}
+ mv /etc/pki/tls/openssl.cnf{.rpmnew,}
+ fi
+ """
+
+ name = 'migrate_openssl_conf'
+ consumes = ()
+ produces = ()
+ tags = (IPUWorkflowTag, ApplicationsPhaseTag)
+
+ def process(self):
+ migrateopensslconf.process()
diff --git a/repos/system_upgrade/common/actors/openssl/migrateopensslconf/libraries/migrateopensslconf.py b/repos/system_upgrade/common/actors/openssl/migrateopensslconf/libraries/migrateopensslconf.py
new file mode 100644
index 00000000..140c5718
--- /dev/null
+++ b/repos/system_upgrade/common/actors/openssl/migrateopensslconf/libraries/migrateopensslconf.py
@@ -0,0 +1,54 @@
+import os
+
+from leapp.libraries.stdlib import api, CalledProcessError, run
+
+DEFAULT_OPENSSL_CONF = '/etc/pki/tls/openssl.cnf'
+OPENSSL_CONF_RPMNEW = '{}.rpmnew'.format(DEFAULT_OPENSSL_CONF)
+OPENSSL_CONF_BACKUP = '{}.leappsave'.format(DEFAULT_OPENSSL_CONF)
+
+
+def _is_openssl_modified():
+ """
+ Return True if modified in any way
+ """
+    # NOTE(pstodulk): this is different from the approach in scansourcefiles,
+    # where we are interested in modified content. In this case, if the
+    # file is modified in any way, let's do something about it.
+ try:
+ run(['rpm', '-Vf', DEFAULT_OPENSSL_CONF])
+ except CalledProcessError:
+ return True
+ return False
+
+
+def _safe_mv_file(src, dst):
+ """
+ Move the file from src to dst. Return True on success, otherwise False.
+ """
+ try:
+ run(['mv', src, dst])
+ except CalledProcessError:
+ return False
+ return True
+
+
+def process():
+ if not _is_openssl_modified():
+ return
+ if not os.path.exists(OPENSSL_CONF_RPMNEW):
+        api.current_logger().debug('The {} file is modified, but *.rpmnew not found. Cannot do anything.'.format(DEFAULT_OPENSSL_CONF))
+ return
+ if not _safe_mv_file(DEFAULT_OPENSSL_CONF, OPENSSL_CONF_BACKUP):
+        # NOTE(pstodulk): One of the reasons could be that the file is missing; however,
+        # that's not expected to happen at all. If the file is missing before
+        # the upgrade, it will be installed by the new openssl* package
+ api.current_logger().error(
+ 'Could not back up the {} file. Skipping other actions.'
+ .format(DEFAULT_OPENSSL_CONF)
+ )
+ return
+ if not _safe_mv_file(OPENSSL_CONF_RPMNEW, DEFAULT_OPENSSL_CONF):
+        # unexpected; this is a double seatbelt
+ api.current_logger().error('Cannot apply the new openssl configuration file! Restore it from the backup.')
+ if not _safe_mv_file(OPENSSL_CONF_BACKUP, DEFAULT_OPENSSL_CONF):
+ api.current_logger().error('Cannot restore the openssl configuration file!')
diff --git a/repos/system_upgrade/common/actors/openssl/migrateopensslconf/tests/unit_test_migrateopensslconf.py b/repos/system_upgrade/common/actors/openssl/migrateopensslconf/tests/unit_test_migrateopensslconf.py
new file mode 100644
index 00000000..e9200312
--- /dev/null
+++ b/repos/system_upgrade/common/actors/openssl/migrateopensslconf/tests/unit_test_migrateopensslconf.py
@@ -0,0 +1,145 @@
+import os
+
+import pytest
+
+from leapp.libraries.actor import migrateopensslconf
+from leapp.libraries.common.testutils import CurrentActorMocked, logger_mocked
+from leapp.libraries.stdlib import CalledProcessError
+
+
+class PathExistsMocked(object):
+ def __init__(self, existing_files=None):
+ self.called = 0
+ self._existing_files = existing_files if existing_files else []
+
+ def __call__(self, fpath):
+ self.called += 1
+ return fpath in self._existing_files
+
+
+class IsOpensslModifiedMocked(object):
+ def __init__(self, ret_values):
+ self._ret_values = ret_values
+        # ret_values is a list of bools returned one per call via ret_values.pop(0);
+        # once the list becomes empty, False is returned
+
+ self.called = 0
+
+ def __call__(self):
+ self.called += 1
+ if not self._ret_values:
+ return False
+ return self._ret_values.pop(0)
+
+
+class SafeMVFileMocked(object):
+ def __init__(self, ret_values):
+ self._ret_values = ret_values
+        # ret_values is a list of bools returned one per call via ret_values.pop(0);
+        # once the list becomes empty, False is returned
+
+ self.called = 0
+ self.args_list = []
+
+ def __call__(self, src, dst):
+ self.called += 1
+ self.args_list.append((src, dst))
+ if not self._ret_values:
+ return False
+ return self._ret_values.pop(0)
+
+
+def test_migrate_openssl_nothing_to_do(monkeypatch):
+ monkeypatch.setattr(migrateopensslconf.api, 'current_logger', logger_mocked())
+ monkeypatch.setattr(migrateopensslconf, '_is_openssl_modified', IsOpensslModifiedMocked([False]))
+ monkeypatch.setattr(migrateopensslconf, '_safe_mv_file', SafeMVFileMocked([False]))
+ monkeypatch.setattr(os.path, 'exists', PathExistsMocked())
+
+ migrateopensslconf.process()
+ assert not os.path.exists.called
+ assert not migrateopensslconf._safe_mv_file.called
+
+ monkeypatch.setattr(migrateopensslconf, '_is_openssl_modified', IsOpensslModifiedMocked([True]))
+ migrateopensslconf.process()
+ assert os.path.exists.called
+ assert migrateopensslconf.api.current_logger.dbgmsg
+ assert not migrateopensslconf._safe_mv_file.called
+
+
+def test_migrate_openssl_failed_backup(monkeypatch):
+ monkeypatch.setattr(migrateopensslconf.api, 'current_logger', logger_mocked())
+ monkeypatch.setattr(migrateopensslconf, '_is_openssl_modified', IsOpensslModifiedMocked([True]))
+ monkeypatch.setattr(migrateopensslconf, '_safe_mv_file', SafeMVFileMocked([False]))
+ monkeypatch.setattr(os.path, 'exists', PathExistsMocked([migrateopensslconf.OPENSSL_CONF_RPMNEW]))
+
+ migrateopensslconf.process()
+ assert migrateopensslconf._safe_mv_file.called == 1
+ assert migrateopensslconf._safe_mv_file.args_list[0][0] == migrateopensslconf.DEFAULT_OPENSSL_CONF
+ assert migrateopensslconf.api.current_logger.errmsg
+
+
+def test_migrate_openssl_ok(monkeypatch):
+ monkeypatch.setattr(migrateopensslconf.api, 'current_logger', logger_mocked())
+ monkeypatch.setattr(migrateopensslconf, '_is_openssl_modified', IsOpensslModifiedMocked([True]))
+ monkeypatch.setattr(migrateopensslconf, '_safe_mv_file', SafeMVFileMocked([True, True]))
+ monkeypatch.setattr(os.path, 'exists', PathExistsMocked([migrateopensslconf.OPENSSL_CONF_RPMNEW]))
+
+ migrateopensslconf.process()
+ assert migrateopensslconf._safe_mv_file.called == 2
+ assert migrateopensslconf._safe_mv_file.args_list[1][1] == migrateopensslconf.DEFAULT_OPENSSL_CONF
+ assert not migrateopensslconf.api.current_logger.errmsg
+
+
+def test_migrate_openssl_failed_migrate(monkeypatch):
+ monkeypatch.setattr(migrateopensslconf.api, 'current_logger', logger_mocked())
+ monkeypatch.setattr(migrateopensslconf, '_is_openssl_modified', IsOpensslModifiedMocked([True]))
+ monkeypatch.setattr(migrateopensslconf, '_safe_mv_file', SafeMVFileMocked([True, False, True]))
+ monkeypatch.setattr(os.path, 'exists', PathExistsMocked([migrateopensslconf.OPENSSL_CONF_RPMNEW]))
+
+ migrateopensslconf.process()
+ assert migrateopensslconf._safe_mv_file.called == 3
+ assert migrateopensslconf._safe_mv_file.args_list[2][1] == migrateopensslconf.DEFAULT_OPENSSL_CONF
+ assert migrateopensslconf.api.current_logger.errmsg
+
+
+def test_migrate_openssl_failed_restore(monkeypatch):
+ monkeypatch.setattr(migrateopensslconf.api, 'current_logger', logger_mocked())
+ monkeypatch.setattr(migrateopensslconf, '_is_openssl_modified', IsOpensslModifiedMocked([True]))
+ monkeypatch.setattr(migrateopensslconf, '_safe_mv_file', SafeMVFileMocked([True]))
+ monkeypatch.setattr(os.path, 'exists', PathExistsMocked([migrateopensslconf.OPENSSL_CONF_RPMNEW]))
+
+ migrateopensslconf.process()
+ assert migrateopensslconf._safe_mv_file.called == 3
+ assert len(migrateopensslconf.api.current_logger.errmsg) == 2
+
+
+class MockedRun(object):
+ def __init__(self, raise_err):
+ self.called = 0
+ self.args = None
+ self._raise_err = raise_err
+
+ def __call__(self, args):
+ self.called += 1
+ self.args = args
+ if self._raise_err:
+ raise CalledProcessError(
+ message='A Leapp Command Error occurred.',
+ command=args,
+                result={'signal': None, 'exit_code': 1, 'pid': 0, 'stdout': 'fake', 'stderr': 'fake'}
+ )
+ # NOTE(pstodulk) ignore return as the code in the library does not use it
+
+
+@pytest.mark.parametrize('result', (True, False))
+def test_is_openssl_modified(monkeypatch, result):
+ monkeypatch.setattr(migrateopensslconf, 'run', MockedRun(result))
+ assert migrateopensslconf._is_openssl_modified() is result
+ assert migrateopensslconf.run.called == 1
+
+
+@pytest.mark.parametrize('result', (True, False))
+def test_safe_mv_file(monkeypatch, result):
+ monkeypatch.setattr(migrateopensslconf, 'run', MockedRun(not result))
+ assert migrateopensslconf._safe_mv_file('foo', 'bar') is result
+ assert ['mv', 'foo', 'bar'] == migrateopensslconf.run.args
diff --git a/repos/system_upgrade/common/actors/scansourcefiles/libraries/scansourcefiles.py b/repos/system_upgrade/common/actors/scansourcefiles/libraries/scansourcefiles.py
index 33e0275f..16c0e8aa 100644
--- a/repos/system_upgrade/common/actors/scansourcefiles/libraries/scansourcefiles.py
+++ b/repos/system_upgrade/common/actors/scansourcefiles/libraries/scansourcefiles.py
@@ -9,6 +9,7 @@ from leapp.models import FileInfo, TrackedFilesInfoSource
# '8' (etc..) -> files supposed to be scanned when particular major version of OS is used
TRACKED_FILES = {
'common': [
+ '/etc/pki/tls/openssl.cnf',
],
'8': [
],
--
2.43.0
View File
@ -1,547 +0,0 @@
From 7c6e0d8ce1ca550309f2e76e1e57bef147f7a86b Mon Sep 17 00:00:00 2001
From: Inessa Vasilevskaya <ivasilev@redhat.com>
Date: Thu, 16 Nov 2023 13:16:01 +0100
Subject: [PATCH 59/60] Introduce custom modifications tracking
This commit introduces two actors:
* the scanner that scans leapp files and
produces messages with an actor name/filepath mapping in case
any unexpected custom files or modified files are discovered.
* the checker that processes CustomModifications messages and
produces report entries.
* uses the rpms.get_leapp_packages function
* pstodulk: Updated report messages to provide more information to users
The purpose of this change is to help with the investigation
of reported issues, as people harm themselves from time to time;
since this is not usually expected, it prolongs the resolution
of the problem (people investigating such issues do not check
this possibility first, which is understandable).
This should help to identify possible root causes faster, as the
report message should always be visible.
Jira: RHEL-1774
Co-authored-by: Petr Stodulka <pstodulk@redhat.com>
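For orientation (an illustrative sketch, not part of the patch): the scanner produces one CustomModifications message per discovered file and the checker groups them into reports. A minimal consumer could look like this, using only the model fields introduced below:

from leapp.libraries.stdlib import api
from leapp.models import CustomModifications

# Group detected modifications the same way the checker below does.
msgs = list(api.consume(CustomModifications))
custom_files = [m.filename for m in msgs if m.type == 'custom']
modified_code = [m.filename for m in msgs
                 if m.type == 'modified' and m.component in ('framework', 'repository')]
modified_configs = [m.filename for m in msgs
                    if m.type == 'modified' and m.component == 'configuration']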
---
.../actors/checkcustommodifications/actor.py | 19 +++
.../libraries/checkcustommodifications.py | 138 ++++++++++++++++
.../tests/test_checkcustommodifications.py | 35 +++++
.../actors/scancustommodifications/actor.py | 18 +++
.../libraries/scancustommodifications.py | 147 ++++++++++++++++++
.../tests/test_scancustommodifications.py | 89 +++++++++++
.../common/models/custommodifications.py | 13 ++
7 files changed, 459 insertions(+)
create mode 100644 repos/system_upgrade/common/actors/checkcustommodifications/actor.py
create mode 100644 repos/system_upgrade/common/actors/checkcustommodifications/libraries/checkcustommodifications.py
create mode 100644 repos/system_upgrade/common/actors/checkcustommodifications/tests/test_checkcustommodifications.py
create mode 100644 repos/system_upgrade/common/actors/scancustommodifications/actor.py
create mode 100644 repos/system_upgrade/common/actors/scancustommodifications/libraries/scancustommodifications.py
create mode 100644 repos/system_upgrade/common/actors/scancustommodifications/tests/test_scancustommodifications.py
create mode 100644 repos/system_upgrade/common/models/custommodifications.py
diff --git a/repos/system_upgrade/common/actors/checkcustommodifications/actor.py b/repos/system_upgrade/common/actors/checkcustommodifications/actor.py
new file mode 100644
index 00000000..a1a50bad
--- /dev/null
+++ b/repos/system_upgrade/common/actors/checkcustommodifications/actor.py
@@ -0,0 +1,19 @@
+from leapp.actors import Actor
+from leapp.libraries.actor import checkcustommodifications
+from leapp.models import CustomModifications, Report
+from leapp.tags import ChecksPhaseTag, IPUWorkflowTag
+
+
+class CheckCustomModificationsActor(Actor):
+ """
+ Checks CustomModifications messages and produces a report about files in leapp directories that have been
+ modified or newly added.
+ """
+
+ name = 'check_custom_modifications_actor'
+ consumes = (CustomModifications,)
+ produces = (Report,)
+ tags = (IPUWorkflowTag, ChecksPhaseTag)
+
+ def process(self):
+ checkcustommodifications.report_any_modifications()
diff --git a/repos/system_upgrade/common/actors/checkcustommodifications/libraries/checkcustommodifications.py b/repos/system_upgrade/common/actors/checkcustommodifications/libraries/checkcustommodifications.py
new file mode 100644
index 00000000..f1744531
--- /dev/null
+++ b/repos/system_upgrade/common/actors/checkcustommodifications/libraries/checkcustommodifications.py
@@ -0,0 +1,138 @@
+from leapp import reporting
+from leapp.libraries.stdlib import api
+from leapp.models import CustomModifications
+
+FMT_LIST_SEPARATOR = "\n - "
+
+
+def _pretty_files(messages):
+ """
+ Return formatted string of discovered files from obtained CustomModifications messages.
+ """
+ flist = []
+ for msg in messages:
+ actor = ' (Actor: {})'.format(msg.actor_name) if msg.actor_name else ''
+ flist.append(
+ '{sep}{filename}{actor}'.format(
+ sep=FMT_LIST_SEPARATOR,
+ filename=msg.filename,
+ actor=actor
+ )
+ )
+ return ''.join(flist)
+
+
+def _is_modified_config(msg):
+ # NOTE(pstodulk):
+    # We are interested only in modified files for now. Newly created config
+    # files are not so important for us right now, but that could change
+    # in the future.
+ if msg.component and msg.component == 'configuration':
+ return msg.type == 'modified'
+ return False
+
+
+def _create_report(title, summary, hint, links=None):
+ report_parts = [
+ reporting.Title(title),
+ reporting.Summary(summary),
+ reporting.Severity(reporting.Severity.HIGH),
+ reporting.Groups([reporting.Groups.UPGRADE_PROCESS]),
+ reporting.RemediationHint(hint)
+ ]
+ if links:
+ report_parts += links
+ reporting.create_report(report_parts)
+
+
+def check_configuration_files(msgs):
+ filtered_msgs = [m for m in msgs if _is_modified_config(m)]
+ if not filtered_msgs:
+ return
+ title = 'Detected modified configuration files in leapp configuration directories.'
+ summary = (
+ 'We have detected that some configuration files related to leapp or'
+        ' the upgrade process have been modified. Some of these changes could be'
+        ' intended (e.g. a modified repomap.json file in case of private cloud'
+        ' regions or customisations done on the used Satellite server), so it is'
+        ' not always necessary to worry about them. However, they can impact'
+ ' the in-place upgrade and it is good to be aware of potential problems'
+ ' or unexpected results if they are not intended.'
+ '\nThe list of modified configuration files:{files}'
+ .format(files=_pretty_files(filtered_msgs))
+ )
+ hint = (
+ 'If some of changes in listed configuration files have not been intended,'
+        'If some of the changes in the listed configuration files were not intended,'
+        ' you can restore the original files using the following procedure:'
+        '\n1. Remove (or back up) the modified files that you want to restore.'
+        '\n2. Reinstall the packages which own these files.'
+ _create_report(title, summary, hint)
+
+
+def _is_modified_code(msg):
+ if msg.component not in ['framework', 'repository']:
+ return False
+ return msg.type == 'modified'
+
+
+def check_modified_code(msgs):
+ filtered_msgs = [m for m in msgs if _is_modified_code(m)]
+ if not filtered_msgs:
+ return
+ title = 'Detected modified files of the in-place upgrade tooling.'
+ summary = (
+ 'We have detected that some files of the tooling processing the in-place'
+ ' upgrade have been modified. Note that such modifications can be allowed'
+ ' only after consultation with Red Hat - e.g. when support suggests'
+        ' the change to resolve a discovered problem.'
+ ' If these changes have not been approved by Red Hat, the in-place upgrade'
+ ' is unsupported.'
+        '\nThe following files have been modified:{files}'
+ .format(files=_pretty_files(filtered_msgs))
+ )
+    hint = 'To restore the original files, reinstall the related packages.'
+ _create_report(title, summary, hint)
+
+
+def check_custom_actors(msgs):
+ filtered_msgs = [m for m in msgs if m.type == 'custom']
+ if not filtered_msgs:
+ return
+ title = 'Detected custom leapp actors or files.'
+ summary = (
+ 'We have detected installed custom actors or files on the system.'
+ ' These can be provided e.g. by third party vendors, Red Hat consultants,'
+ ' or can be created by users to customize the upgrade (e.g. to migrate'
+ ' custom applications).'
+        ' This is allowed and appreciated. However, Red Hat is not responsible'
+        ' for any issues caused by these custom leapp actors.'
+        ' Note that the upgrade tooling is under agile development, which could'
+        ' require more frequent updates of custom actors.'
+ '\nThe list of custom leapp actors and files:{files}'
+ .format(files=_pretty_files(filtered_msgs))
+ )
+ hint = (
+ 'In case of any issues connected to custom or third party actors,'
+        ' contact the vendor of such actors. We also suggest ensuring that the installed'
+        ' custom leapp actors are up to date and compatible with the installed'
+        ' packages.'
+ )
+ links = [
+ reporting.ExternalLink(
+ url='https://red.ht/customize-rhel-upgrade',
+ title='Customizing your Red Hat Enterprise Linux in-place upgrade'
+ )
+ ]
+
+ _create_report(title, summary, hint, links)
+
+
+def report_any_modifications():
+ modifications = list(api.consume(CustomModifications))
+ if not modifications:
+ # no modification detected
+ return
+ check_custom_actors(modifications)
+ check_configuration_files(modifications)
+ check_modified_code(modifications)
diff --git a/repos/system_upgrade/common/actors/checkcustommodifications/tests/test_checkcustommodifications.py b/repos/system_upgrade/common/actors/checkcustommodifications/tests/test_checkcustommodifications.py
new file mode 100644
index 00000000..6a538065
--- /dev/null
+++ b/repos/system_upgrade/common/actors/checkcustommodifications/tests/test_checkcustommodifications.py
@@ -0,0 +1,35 @@
+from leapp.libraries.actor import checkcustommodifications
+from leapp.models import CustomModifications, Report
+
+
+def test_report_any_modifications(current_actor_context):
+ discovered_msgs = [CustomModifications(filename='some/changed/leapp/actor/file',
+ type='modified',
+ actor_name='an_actor',
+ component='repository'),
+ CustomModifications(filename='some/new/actor/in/leapp/dir',
+ type='custom',
+ actor_name='a_new_actor',
+ component='repository'),
+ CustomModifications(filename='some/new/actor/in/leapp/dir',
+ type='modified',
+ actor_name='a_new_actor',
+ component='configuration'),
+ CustomModifications(filename='some/changed/file/in/framework',
+ type='modified',
+ actor_name='',
+ component='framework')]
+ for msg in discovered_msgs:
+ current_actor_context.feed(msg)
+ current_actor_context.run()
+ reports = current_actor_context.consume(Report)
+ assert len(reports) == 3
+ assert (reports[0].report['title'] ==
+ 'Detected custom leapp actors or files.')
+ assert 'some/new/actor/in/leapp/dir (Actor: a_new_actor)' in reports[0].report['summary']
+ assert (reports[1].report['title'] ==
+ 'Detected modified configuration files in leapp configuration directories.')
+ assert (reports[2].report['title'] ==
+ 'Detected modified files of the in-place upgrade tooling.')
+ assert 'some/changed/file/in/framework' in reports[2].report['summary']
+ assert 'some/changed/leapp/actor/file (Actor: an_actor)' in reports[2].report['summary']
diff --git a/repos/system_upgrade/common/actors/scancustommodifications/actor.py b/repos/system_upgrade/common/actors/scancustommodifications/actor.py
new file mode 100644
index 00000000..5eae33aa
--- /dev/null
+++ b/repos/system_upgrade/common/actors/scancustommodifications/actor.py
@@ -0,0 +1,18 @@
+from leapp.actors import Actor
+from leapp.libraries.actor import scancustommodifications
+from leapp.models import CustomModifications
+from leapp.tags import FactsPhaseTag, IPUWorkflowTag
+
+
+class ScanCustomModificationsActor(Actor):
+ """
+ Collects information about files in leapp directories that have been modified or newly added.
+ """
+
+ name = 'scan_custom_modifications_actor'
+ produces = (CustomModifications,)
+ tags = (IPUWorkflowTag, FactsPhaseTag)
+
+ def process(self):
+ for msg in scancustommodifications.scan():
+ self.produce(msg)
diff --git a/repos/system_upgrade/common/actors/scancustommodifications/libraries/scancustommodifications.py b/repos/system_upgrade/common/actors/scancustommodifications/libraries/scancustommodifications.py
new file mode 100644
index 00000000..80137ef4
--- /dev/null
+++ b/repos/system_upgrade/common/actors/scancustommodifications/libraries/scancustommodifications.py
@@ -0,0 +1,147 @@
+import ast
+import os
+
+from leapp.exceptions import StopActorExecution
+from leapp.libraries.common import rpms
+from leapp.libraries.stdlib import api, CalledProcessError, run
+from leapp.models import CustomModifications
+
+LEAPP_REPO_DIRS = ['/usr/share/leapp-repository']
+LEAPP_PACKAGES_TO_IGNORE = ['snactor']
+
+
+def _get_dirs_to_check(component):
+ if component == 'repository':
+ return LEAPP_REPO_DIRS
+ return []
+
+
+def _get_rpms_to_check(component=None):
+ if component == 'repository':
+ return rpms.get_leapp_packages(component=rpms.LeappComponents.REPOSITORY)
+ if component == 'framework':
+ return rpms.get_leapp_packages(component=rpms.LeappComponents.FRAMEWORK)
+ return rpms.get_leapp_packages(components=[rpms.LeappComponents.REPOSITORY, rpms.LeappComponents.FRAMEWORK])
+
+
+def deduce_actor_name(a_file):
+ """
+    A helper to map an actor/library file to the actor name.
+    If a_file is an actor or an actor library, the name of the actor (the name attribute of the actor class) is returned.
+    An empty string is returned if the file could not be associated with any actor.
+ """
+ if not os.path.exists(a_file):
+ return ''
+    # NOTE(ivasilev) Actors reside only in actor.py files, so AST processing of any other file can be skipped.
+    # In case this function has been called on a non-actor file, let's go straight to the recursive call on the assumed
+    # location of the actor file.
+ if os.path.basename(a_file) == 'actor.py':
+ data = None
+ with open(a_file) as f:
+ try:
+ data = ast.parse(f.read())
+ except TypeError:
+ api.current_logger().warning('An error occurred while parsing %s, can not deduce actor name', a_file)
+ return ''
+ # NOTE(ivasilev) Making proper syntax analysis is not the goal here, so let's get away with the bare minimum.
+ # An actor file will have an Actor ClassDef with a name attribute and a process function defined
+ actor = next((obj for obj in data.body if isinstance(obj, ast.ClassDef) and obj.name and
+ any(isinstance(o, ast.FunctionDef) and o.name == 'process' for o in obj.body)), None)
+            # NOTE(ivasilev) the obj.name attribute refers only to the class name, so to fetch the name attribute
+            # we need to go deeper
+ if actor:
+ try:
+ actor_name = next((expr.value.s for expr in actor.body
+ if isinstance(expr, ast.Assign) and expr.targets[-1].id == 'name'), None)
+ except (AttributeError, IndexError):
+                api.current_logger().warning("Syntax analysis for %s has failed", a_file)
+ actor_name = None
+ if actor_name:
+ return actor_name
+
+    # Assuming here we are dealing with a library or a file, let's discover the actor filename and deduce the actor
+    # name from it. The actor is expected to be found under ../../actor.py
+ def _check_assumed_location(subdir):
+ assumed_actor_file = os.path.join(a_file.split(subdir)[0], 'actor.py')
+ if not os.path.exists(assumed_actor_file):
+ # Nothing more we can do - no actor name mapping, return ''
+ return ''
+ return deduce_actor_name(assumed_actor_file)
+
+ return _check_assumed_location('libraries') or _check_assumed_location('files')
+
+
+def _run_command(cmd, warning_to_log, checked=True):
+ """
+ A helper that executes a command and returns a result or raises StopActorExecution.
+    Upon success the result will contain a list with the line-by-line output returned by the command.
+ """
+ try:
+ res = run(cmd, checked=checked)
+ output = res['stdout'].strip()
+ if not output:
+ return []
+ return output.split('\n')
+ except CalledProcessError:
+ api.current_logger().warning(warning_to_log)
+ raise StopActorExecution()
+
+
+def _modification_model(filename, change_type, component, rpm_checks_str=''):
+ # XXX FIXME(ivasilev) Actively thinking if different model classes inheriting from CustomModifications
+ # are needed or let's get away with one model for everything (as is implemented now).
+ # The only difference atm is that actor_name makes sense only for repository modifications.
+ return CustomModifications(filename=filename, type=change_type, component=component,
+ actor_name=deduce_actor_name(filename), rpm_checks_str=rpm_checks_str)
+
+
+def check_for_modifications(component):
+ """
+ This will return a list of any untypical files or changes to shipped leapp files discovered on the system.
+ An empty list means that no modifications have been found.
+ """
+ rpms = _get_rpms_to_check(component)
+ dirs = _get_dirs_to_check(component)
+ source_of_truth = []
+ leapp_files = []
+ # Let's collect data about what should have been installed from rpm
+ for rpm in rpms:
+ res = _run_command(['rpm', '-ql', rpm], 'Could not get a list of installed files from rpm {}'.format(rpm))
+ source_of_truth.extend(res)
+ # Let's collect data about what's really on the system
+ for directory in dirs:
+ res = _run_command(['find', directory, '-type', 'f'],
+ 'Could not get a list of leapp files from {}'.format(directory))
+ leapp_files.extend(res)
+ # Let's check for unexpected additions
+ custom_files = sorted(set(leapp_files) - set(source_of_truth))
+ # Now let's check for modifications
+ modified_files = []
+ modified_configs = []
+ for rpm in rpms:
+ res = _run_command(
+ ['rpm', '-V', '--nomtime', rpm], 'Could not check authenticity of the files from {}'.format(rpm),
+            # NOTE(ivasilev) checked is False here as, in case of any changes found, the exit code will be 1
+ checked=False)
+ if res:
+ api.current_logger().warning('Modifications to leapp files detected!\n%s', res)
+ for modification_str in res:
+ modification = tuple(modification_str.split())
+ if len(modification) == 3 and modification[1] == 'c':
+ # Dealing with a configuration that will be displayed as ('S.5......', 'c', '/file/path')
+ modified_configs.append(modification)
+ else:
+ # Modification of any other rpm file detected
+ modified_files.append(modification)
+ return ([_modification_model(filename=f[1], component=component, rpm_checks_str=f[0], change_type='modified')
+ # Let's filter out pyc files not to clutter the output as pyc will be present even in case of
+ # a plain open & save-not-changed that we agreed not to react upon.
+ for f in modified_files if not f[1].endswith('.pyc')] +
+ [_modification_model(filename=f, component=component, change_type='custom')
+ for f in custom_files] +
+ [_modification_model(filename=f[2], component='configuration', rpm_checks_str=f[0], change_type='modified')
+ for f in modified_configs])
+
+
+def scan():
+ return check_for_modifications('framework') + check_for_modifications('repository')
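As an aside (not part of the patch), the AST lookup in deduce_actor_name() boils down to finding the `name = '...'` assignment inside the actor class. A self-contained toy example of that idea, with a made-up actor source:

import ast

# Toy actor source used only for this demonstration.
ACTOR_SOURCE = (
    "class MyActor(Actor):\n"
    "    name = 'my_actor'\n"
    "\n"
    "    def process(self):\n"
    "        pass\n"
)

tree = ast.parse(ACTOR_SOURCE)
cls = next(obj for obj in tree.body if isinstance(obj, ast.ClassDef))
actor_name = next(
    ast.literal_eval(node.value) for node in cls.body
    if isinstance(node, ast.Assign) and node.targets[-1].id == 'name'
)
print(actor_name)  # prints: my_actor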
diff --git a/repos/system_upgrade/common/actors/scancustommodifications/tests/test_scancustommodifications.py b/repos/system_upgrade/common/actors/scancustommodifications/tests/test_scancustommodifications.py
new file mode 100644
index 00000000..a48869e4
--- /dev/null
+++ b/repos/system_upgrade/common/actors/scancustommodifications/tests/test_scancustommodifications.py
@@ -0,0 +1,89 @@
+import pytest
+
+from leapp.libraries.actor import scancustommodifications
+from leapp.libraries.common.testutils import CurrentActorMocked, produce_mocked
+from leapp.libraries.stdlib import api
+
+FILES_FROM_RPM = """
+repos/system_upgrade/el8toel9/actors/xorgdrvfact/libraries/xorgdriverlib.py
+repos/system_upgrade/el8toel9/actors/anotheractor/actor.py
+repos/system_upgrade/el8toel9/files
+"""
+
+FILES_ON_SYSTEM = """
+repos/system_upgrade/el8toel9/actors/xorgdrvfact/libraries/xorgdriverlib.py
+repos/system_upgrade/el8toel9/actors/anotheractor/actor.py
+repos/system_upgrade/el8toel9/files
+/some/unrelated/to/leapp/file
+repos/system_upgrade/el8toel9/files/file/that/should/not/be/there
+repos/system_upgrade/el8toel9/actors/actor/that/should/not/be/there
+"""
+
+VERIFIED_FILES = """
+.......T. repos/system_upgrade/el8toel9/actors/xorgdrvfact/libraries/xorgdriverlib.py
+S.5....T. repos/system_upgrade/el8toel9/actors/anotheractor/actor.py
+S.5....T. c etc/leapp/files/pes-events.json
+"""
+
+
+@pytest.mark.parametrize('a_file,name', [
+ ('repos/system_upgrade/el8toel9/actors/checkblacklistca/actor.py', 'checkblacklistca'),
+ ('repos/system_upgrade/el7toel8/actors/checkmemcached/actor.py', 'check_memcached'),
+ # actor library
+ ('repos/system_upgrade/el7toel8/actors/checkmemcached/libraries/checkmemcached.py', 'check_memcached'),
+ # actor file
+ ('repos/system_upgrade/common/actors/createresumeservice/files/leapp_resume.service', 'create_systemd_service'),
+ ('repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/85sys-upgrade-redhat/do-upgrade.sh',
+ 'common_leapp_dracut_modules'),
+ # not a library and not an actor file
+ ('repos/system_upgrade/el7toel8/models/authselect.py', ''),
+ ('repos/system_upgrade/common/files/rhel_upgrade.py', ''),
+ # common library not tied to any actor
+ ('repos/system_upgrade/common/libraries/mounting.py', ''),
+ ('repos/system_upgrade/common/libraries/config/version.py', ''),
+ ('repos/system_upgrade/common/libraries/multipathutil.py', ''),
+ ('repos/system_upgrade/common/libraries/config/version.py', ''),
+ ('repos/system_upgrade/common/libraries/dnfplugin.py', ''),
+ ('repos/system_upgrade/common/libraries/testutils.py', ''),
+ # the rest of false positives discovered by dkubek
+ ('repos/system_upgrade/common/actors/setuptargetrepos/libraries/setuptargetrepos_repomap.py', 'setuptargetrepos'),
+ ('repos/system_upgrade/el8toel9/actors/sssdfacts/libraries/sssdfacts8to9.py', 'sssd_facts_8to9'),
+ ('repos/system_upgrade/el8toel9/actors/nisscanner/libraries/nisscan.py', 'nis_scanner'),
+ ('repos/system_upgrade/common/actors/setuptargetrepos/libraries/setuptargetrepos_repomap.py', 'setuptargetrepos'),
+ ('repos/system_upgrade/common/actors/repositoriesmapping/libraries/repositoriesmapping.py', 'repository_mapping'),
+ ('repos/system_upgrade/common/actors/peseventsscanner/libraries/peseventsscanner_repomap.py',
+ 'pes_events_scanner')
+])
+def test_deduce_actor_name_from_file(a_file, name):
+ assert scancustommodifications.deduce_actor_name(a_file) == name
+
+
+def mocked__run_command(list_of_args, log_message, checked=True):
+ if list_of_args == ['rpm', '-ql', 'leapp-upgrade-el8toel9']:
+ # get source of truth
+ return FILES_FROM_RPM.strip().split('\n')
+ if list_of_args and list_of_args[0] == 'find':
+ # listing files in directory
+ return FILES_ON_SYSTEM.strip().split('\n')
+ if list_of_args == ['rpm', '-V', '--nomtime', 'leapp-upgrade-el8toel9']:
+ # checking authenticity
+ return VERIFIED_FILES.strip().split('\n')
+ return []
+
+
+def test_check_for_modifications(monkeypatch):
+ monkeypatch.setattr(api, 'current_actor', CurrentActorMocked(arch='x86_64', src_ver='8.9', dst_ver='9.3'))
+ monkeypatch.setattr(scancustommodifications, '_run_command', mocked__run_command)
+ modifications = scancustommodifications.check_for_modifications('repository')
+ modified = [m for m in modifications if m.type == 'modified']
+ custom = [m for m in modifications if m.type == 'custom']
+ configurations = [m for m in modifications if m.component == 'configuration']
+ assert len(modified) == 3
+ assert modified[0].filename == 'repos/system_upgrade/el8toel9/actors/xorgdrvfact/libraries/xorgdriverlib.py'
+ assert modified[0].rpm_checks_str == '.......T.'
+ assert len(custom) == 3
+ assert custom[0].filename == '/some/unrelated/to/leapp/file'
+ assert custom[0].rpm_checks_str == ''
+ assert len(configurations) == 1
+ assert configurations[0].filename == 'etc/leapp/files/pes-events.json'
+ assert configurations[0].rpm_checks_str == 'S.5....T.'
diff --git a/repos/system_upgrade/common/models/custommodifications.py b/repos/system_upgrade/common/models/custommodifications.py
new file mode 100644
index 00000000..51709dde
--- /dev/null
+++ b/repos/system_upgrade/common/models/custommodifications.py
@@ -0,0 +1,13 @@
+from leapp.models import fields, Model
+from leapp.topics import SystemFactsTopic
+
+
+class CustomModifications(Model):
+ """Model to store any custom or modified files that are discovered in leapp directories"""
+ topic = SystemFactsTopic
+
+ filename = fields.String()
+ actor_name = fields.String()
+ type = fields.StringEnum(choices=['custom', 'modified'])
+ rpm_checks_str = fields.String(default='')
+ component = fields.String()
--
2.43.0
File diff suppressed because it is too large
File diff suppressed because it is too large
View File
@ -1,107 +0,0 @@
From 946f8c6a36962a4e7ddc5354d21fcd7d70e108f9 Mon Sep 17 00:00:00 2001
From: Martin Kluson <mkluson@redhat.com>
Date: Fri, 12 Jan 2024 13:45:59 +0100
Subject: [PATCH 62/66] Use `happy_path` instead of `e2e` for public clouds
`happy_path` performs similar steps to `e2e` and is used in the rest of the tiers.
---
.packit.yaml | 41 ++++++++++++++++++++++++++---------------
1 file changed, 26 insertions(+), 15 deletions(-)
diff --git a/.packit.yaml b/.packit.yaml
index 383f5314..d87f33c0 100644
--- a/.packit.yaml
+++ b/.packit.yaml
@@ -123,13 +123,13 @@ jobs:
targets:
epel-7-x86_64:
distros: [RHEL-7.9-rhui]
- identifier: sanity-7.9to8.6-aws-e2e
+ identifier: sanity-7.9to8.6-aws
# NOTE(ivasilev) Unfortunately to use yaml templates we need to rewrite the whole tf_extra_params dict
# to use plan_filter (can't just specify one section test.tmt.plan_filter, need to specify environments.* as well)
tf_extra_params:
test:
tmt:
- plan_filter: 'tag:e2e & enabled:true'
+ plan_filter: 'tag:upgrade_happy_path & enabled:true'
environments:
- tmt:
context:
@@ -144,33 +144,42 @@ jobs:
TARGET_RELEASE: "8.6"
RHUI: "aws"
LEAPPDATA_BRANCH: "upstream"
+ LEAPP_NO_RHSM: "1"
+ USE_CUSTOM_REPOS: rhui
- &sanity-79to88-aws
<<: *sanity-79to86-aws
- identifier: sanity-7.9to8.8-aws-e2e
+ identifier: sanity-7.9to8.8-aws
env:
SOURCE_RELEASE: "7.9"
TARGET_RELEASE: "8.8"
RHUI: "aws"
LEAPPDATA_BRANCH: "upstream"
+ LEAPP_NO_RHSM: "1"
+ USE_CUSTOM_REPOS: rhui
- &sanity-79to89-aws
<<: *sanity-79to86-aws
- identifier: sanity-7.9to8.9-aws-e2e
+ identifier: sanity-7.9to8.9-aws
env:
SOURCE_RELEASE: "7.9"
TARGET_RELEASE: "8.9"
RHUI: "aws"
LEAPPDATA_BRANCH: "upstream"
-
-- &sanity-79to810-aws
- <<: *sanity-79to86-aws
- identifier: sanity-7.9to8.10-aws-e2e
- env:
- SOURCE_RELEASE: "7.9"
- TARGET_RELEASE: "8.10"
- RHUI: "aws"
- LEAPPDATA_BRANCH: "upstream"
+ LEAPP_NO_RHSM: "1"
+ USE_CUSTOM_REPOS: rhui
+
+# NOTE(mkluson) RHEL 8.10 content is not publicly available (via RHUI)
+#- &sanity-79to810-aws
+# <<: *sanity-79to86-aws
+# identifier: sanity-7.9to8.10-aws
+# env:
+# SOURCE_RELEASE: "7.9"
+# TARGET_RELEASE: "8.10"
+# RHUI: "aws"
+# LEAPPDATA_BRANCH: "upstream"
+# LEAPP_NO_RHSM: "1"
+# USE_CUSTOM_REPOS: rhui
# On-demand minimal beaker tests
- &beaker-minimal-79to86
@@ -619,11 +628,11 @@ jobs:
targets:
epel-8-x86_64:
distros: [RHEL-8.6-rhui]
- identifier: sanity-8.6to9.0-aws-e2e
+ identifier: sanity-8.6to9.0-aws
tf_extra_params:
test:
tmt:
- plan_filter: 'tag:e2e & enabled:true'
+ plan_filter: 'tag:upgrade_happy_path & enabled:true'
environments:
- tmt:
context:
@@ -639,3 +648,5 @@ jobs:
RHSM_REPOS: "rhel-8-for-x86_64-appstream-eus-rpms,rhel-8-for-x86_64-baseos-eus-rpms"
RHUI: "aws"
LEAPPDATA_BRANCH: "upstream"
+ LEAPP_NO_RHSM: "1"
+ USE_CUSTOM_REPOS: rhui
--
2.43.0
View File
@ -1,256 +0,0 @@
From 8552bbfd7418484b92327ded0d08a1849a693fe7 Mon Sep 17 00:00:00 2001
From: Petr Stodulka <pstodulk@redhat.com>
Date: Thu, 18 Jan 2024 14:51:34 +0100
Subject: [PATCH 63/66] Update upgrade data + bump required data stream to 3.0
* Added RHEL 9 repos for Alibaba RHUI with mapping for IPU 8 -> 9
* Actors require "3.0" in the list of provided_data_streams
* All data files updated to provide only the "3.0" data stream
* Add a newline at the end of the device_driver_deprecation_data.json file
to be POSIX compatible, as expected.
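A small illustrative snippet (not part of the patch) showing the kind of consistency check implied by the data-stream bump; the file path and key come from the diff below:

import json

# Confirm that a bumped data file advertises the required "3.0" stream.
with open('/etc/leapp/files/device_driver_deprecation_data.json') as f:
    data = json.load(f)

print('3.0' in data.get('provided_data_streams', []))  # expected: True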
---
.../files/device_driver_deprecation_data.json | 3 +-
etc/leapp/files/pes-events.json | 1 -
etc/leapp/files/repomap.json | 104 +++++++++++++++++-
.../common/libraries/config/__init__.py | 2 +-
4 files changed, 103 insertions(+), 7 deletions(-)
diff --git a/etc/leapp/files/device_driver_deprecation_data.json b/etc/leapp/files/device_driver_deprecation_data.json
index 7d5f5c74..c570f8d4 100644
--- a/etc/leapp/files/device_driver_deprecation_data.json
+++ b/etc/leapp/files/device_driver_deprecation_data.json
@@ -1,6 +1,5 @@
{
"provided_data_streams": [
- "2.0",
"3.0"
],
"data": [
@@ -5058,4 +5057,4 @@
]
}
]
-}
\ No newline at end of file
+}
diff --git a/etc/leapp/files/pes-events.json b/etc/leapp/files/pes-events.json
index c89a1547..5b4b4f87 100644
--- a/etc/leapp/files/pes-events.json
+++ b/etc/leapp/files/pes-events.json
@@ -500347,7 +500347,6 @@ null
}
],
"provided_data_streams": [
-"2.0",
"3.0"
],
"timestamp": "202401101404Z"
diff --git a/etc/leapp/files/repomap.json b/etc/leapp/files/repomap.json
index 9b73f2d7..1c97b7de 100644
--- a/etc/leapp/files/repomap.json
+++ b/etc/leapp/files/repomap.json
@@ -1,5 +1,5 @@
{
- "datetime": "202307241553Z",
+ "datetime": "202401171742Z",
"version_format": "1.2.0",
"mapping": [
{
@@ -225,6 +225,12 @@
"target": [
"rhel9-rhui-google-compute-engine-leapp"
]
+ },
+ {
+ "source": "rhel8-rhui-custom-client-at-alibaba",
+ "target": [
+ "rhel9-rhui-custom-client-at-alibaba"
+ ]
}
]
}
@@ -2855,6 +2861,14 @@
"channel": "ga",
"repo_type": "rpm"
},
+ {
+ "major_version": "9",
+ "repoid": "rhui-rhel-9-for-aarch64-baseos-rhui-rpms",
+ "arch": "aarch64",
+ "channel": "ga",
+ "repo_type": "rpm",
+ "rhui": "alibaba"
+ },
{
"major_version": "9",
"repoid": "rhui-rhel-9-for-x86_64-baseos-e4s-rhui-rpms",
@@ -2870,6 +2884,14 @@
"channel": "ga",
"repo_type": "rpm",
"rhui": "google"
+ },
+ {
+ "major_version": "9",
+ "repoid": "rhui-rhel-9-for-x86_64-baseos-rhui-rpms",
+ "arch": "x86_64",
+ "channel": "ga",
+ "repo_type": "rpm",
+ "rhui": "alibaba"
}
]
},
@@ -3059,6 +3081,14 @@
"channel": "ga",
"repo_type": "rpm"
},
+ {
+ "major_version": "9",
+ "repoid": "rhui-rhel-9-for-aarch64-appstream-rhui-rpms",
+ "arch": "aarch64",
+ "channel": "ga",
+ "repo_type": "rpm",
+ "rhui": "alibaba"
+ },
{
"major_version": "9",
"repoid": "rhui-rhel-9-for-x86_64-appstream-e4s-rhui-rpms",
@@ -3074,6 +3104,14 @@
"channel": "ga",
"repo_type": "rpm",
"rhui": "google"
+ },
+ {
+ "major_version": "9",
+ "repoid": "rhui-rhel-9-for-x86_64-appstream-rhui-rpms",
+ "arch": "x86_64",
+ "channel": "ga",
+ "repo_type": "rpm",
+ "rhui": "alibaba"
}
]
},
@@ -3188,6 +3226,14 @@
"channel": "ga",
"repo_type": "rpm"
},
+ {
+ "major_version": "9",
+ "repoid": "rhui-codeready-builder-for-rhel-9-aarch64-rhui-rpms",
+ "arch": "aarch64",
+ "channel": "ga",
+ "repo_type": "rpm",
+ "rhui": "alibaba"
+ },
{
"major_version": "9",
"repoid": "rhui-codeready-builder-for-rhel-9-x86_64-rhui-rpms",
@@ -3195,6 +3241,14 @@
"channel": "ga",
"repo_type": "rpm",
"rhui": "google"
+ },
+ {
+ "major_version": "9",
+ "repoid": "rhui-codeready-builder-for-rhel-9-x86_64-rhui-rpms",
+ "arch": "x86_64",
+ "channel": "ga",
+ "repo_type": "rpm",
+ "rhui": "alibaba"
}
]
},
@@ -3301,6 +3355,14 @@
"repo_type": "rpm",
"rhui": "aws"
},
+ {
+ "major_version": "9",
+ "repoid": "rhui-rhel-9-for-aarch64-supplementary-rhui-rpms",
+ "arch": "aarch64",
+ "channel": "ga",
+ "repo_type": "rpm",
+ "rhui": "alibaba"
+ },
{
"major_version": "9",
"repoid": "rhui-rhel-9-for-x86_64-supplementary-rhui-rpms",
@@ -3308,6 +3370,14 @@
"channel": "ga",
"repo_type": "rpm",
"rhui": "google"
+ },
+ {
+ "major_version": "9",
+ "repoid": "rhui-rhel-9-for-x86_64-supplementary-rhui-rpms",
+ "arch": "x86_64",
+ "channel": "ga",
+ "repo_type": "rpm",
+ "rhui": "alibaba"
}
]
},
@@ -3687,6 +3757,14 @@
"channel": "ga",
"repo_type": "rpm",
"rhui": "google"
+ },
+ {
+ "major_version": "9",
+ "repoid": "rhui-rhel-9-for-x86_64-highavailability-rhui-rpms",
+ "arch": "x86_64",
+ "channel": "ga",
+ "repo_type": "rpm",
+ "rhui": "alibaba"
}
]
},
@@ -3788,10 +3866,30 @@
"rhui": "google"
}
]
+ },
+ {
+ "pesid": "rhel9-rhui-custom-client-at-alibaba",
+ "entries": [
+ {
+ "major_version": "9",
+ "repoid": "rhui-custom-rhui_client_at_alibaba-rhel-9",
+ "arch": "aarch64",
+ "channel": "ga",
+ "repo_type": "rpm",
+ "rhui": "alibaba"
+ },
+ {
+ "major_version": "9",
+ "repoid": "rhui-custom-rhui_client_at_alibaba-rhel-9",
+ "arch": "x86_64",
+ "channel": "ga",
+ "repo_type": "rpm",
+ "rhui": "alibaba"
+ }
+ ]
}
],
"provided_data_streams": [
- "2.0",
"3.0"
]
-}
\ No newline at end of file
+}
diff --git a/repos/system_upgrade/common/libraries/config/__init__.py b/repos/system_upgrade/common/libraries/config/__init__.py
index b3697a4d..9757948e 100644
--- a/repos/system_upgrade/common/libraries/config/__init__.py
+++ b/repos/system_upgrade/common/libraries/config/__init__.py
@@ -3,7 +3,7 @@ from leapp.libraries.stdlib import api
# The devel variable for target product channel can also contain 'beta'
SUPPORTED_TARGET_CHANNELS = {'ga', 'e4s', 'eus', 'aus'}
-CONSUMED_DATA_STREAM_ID = '2.0'
+CONSUMED_DATA_STREAM_ID = '3.0'
def get_env(name, default=None):
--
2.43.0
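The patch above drops the "2.0" stream from the data files and bumps CONSUMED_DATA_STREAM_ID to "3.0". As a rough illustration only (not the actual leapp implementation), the following Python sketch shows the kind of major-version comparison between the consumed stream and a file's provided_data_streams that such a bump implies; the function name is hypothetical.

# Minimal sketch, not the real leapp code: a data file is considered usable
# when it provides a stream with the same MAJOR part as the consumed one.
def is_stream_compatible(consumed_stream, provided_streams):
    consumed_major = consumed_stream.split('.', 1)[0]
    return any(s.split('.', 1)[0] == consumed_major for s in provided_streams)

# With this patch the repository consumes '3.0', so a file that still
# provides only '2.0' would be reported as incompatible:
assert is_stream_compatible('3.0', ['3.0'])
assert not is_stream_compatible('3.0', ['2.0'])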

View File

@ -1,38 +0,0 @@
From f0ff88658cfa5fb4c766fb2cc44dc93fbe08958d Mon Sep 17 00:00:00 2001
From: "jinkangkang.jkk" <jinkangkang.jkk@alibaba-inc.com>
Date: Wed, 29 Nov 2023 11:18:11 +0800
Subject: [PATCH 64/66] Cover upgrades RHEL 8 to RHEL 9 using RHUI on Alibaba
cloud
Note the repomap.json file does not yet cover the mapping of repositories
between RHEL 8 and RHEL 9 for aarch64 (ARM). This will be covered
by a further update of the file.
---
repos/system_upgrade/common/libraries/rhui.py | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/repos/system_upgrade/common/libraries/rhui.py b/repos/system_upgrade/common/libraries/rhui.py
index b31eba0b..2dfb209c 100644
--- a/repos/system_upgrade/common/libraries/rhui.py
+++ b/repos/system_upgrade/common/libraries/rhui.py
@@ -481,6 +481,17 @@ RHUI_CLOUD_MAP = {
('leapp-google-sap.repo', YUM_REPOS_PATH)
],
},
+ 'alibaba': {
+ 'src_pkg': 'aliyun_rhui_rhel8',
+ 'target_pkg': 'aliyun_rhui_rhel9',
+ 'leapp_pkg': 'leapp-rhui-alibaba',
+ 'leapp_pkg_repo': 'leapp-alibaba.repo',
+ 'files_map': [
+ ('content.crt', RHUI_PKI_PRODUCT_DIR),
+ ('key.pem', RHUI_PKI_DIR),
+ ('leapp-alibaba.repo', YUM_REPOS_PATH)
+ ],
+ },
},
}
--
2.43.0
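To make the shape of the new RHUI_CLOUD_MAP entry easier to read, here is a small hedged sketch of how such an entry could be consumed; the helper function and the destination paths are illustrative assumptions, not part of the patch.

# Illustrative only: given an entry shaped like the 'alibaba' one above,
# derive which client rpm to swap and which files to copy where.
entry = {
    'src_pkg': 'aliyun_rhui_rhel8',
    'target_pkg': 'aliyun_rhui_rhel9',
    'leapp_pkg': 'leapp-rhui-alibaba',
    'leapp_pkg_repo': 'leapp-alibaba.repo',
    'files_map': [
        ('content.crt', '/etc/pki/product'),        # destinations assumed
        ('key.pem', '/etc/pki/rhui'),               # for illustration only
        ('leapp-alibaba.repo', '/etc/yum.repos.d'),
    ],
}

def describe_rhui_setup(entry):
    lines = ['swap client rpm: {} -> {}'.format(entry['src_pkg'], entry['target_pkg'])]
    lines += ['install {} into {}'.format(name, dest) for name, dest in entry['files_map']]
    return '\n'.join(lines)

print(describe_rhui_setup(entry))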

View File

@ -1,49 +0,0 @@
From a9e48f836e3bcc4e89f26c25a94b35195db2c043 Mon Sep 17 00:00:00 2001
From: Petr Stodulka <pstodulk@redhat.com>
Date: Wed, 23 Aug 2023 12:04:05 +0200
Subject: [PATCH 65/66] load data files: do not try to download data files when
missing
As the leapp upgrade data files are nowadays part of the installed rpm,
there is no need to download them anymore. Also, we do not have any
plans to provide updated data files anywhere outside of the rpm.
Therefore, do not try to download the provided data files anymore.
---
repos/system_upgrade/common/libraries/fetch.py | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/repos/system_upgrade/common/libraries/fetch.py b/repos/system_upgrade/common/libraries/fetch.py
index 1ca26170..42fcb74c 100644
--- a/repos/system_upgrade/common/libraries/fetch.py
+++ b/repos/system_upgrade/common/libraries/fetch.py
@@ -20,6 +20,7 @@ def _get_hint():
rpmname = 'leapp-upgrade-el{}toel{}'.format(get_source_major_version(), get_target_major_version())
hint = (
'All official data files are nowadays part of the installed rpms.'
+ ' That is the only official resource of actual official data files for in-place upgrades.'
' This issue is usually encountered when the data files are incorrectly customized, replaced, or removed'
' (e.g. by custom scripts).'
' In case you want to recover the original file, remove it (if still exists)'
@@ -33,7 +34,9 @@ def _raise_error(local_path, details):
"""
If the file acquisition fails in any way, throw an informative error to stop the actor.
"""
+ rpmname = 'leapp-upgrade-el{}toel{}'.format(get_source_major_version(), get_target_major_version())
summary = 'Data file {lp} is missing or invalid.'.format(lp=local_path)
+
raise StopActorExecutionError(summary, details={'details': details, 'hint': _get_hint()})
@@ -174,7 +177,7 @@ def load_data_asset(actor_requesting_asset,
try:
# The asset family ID has the form (major, minor), include only `major` in the URL
- raw_asset_contents = read_or_fetch(asset_filename, data_stream=data_stream_major)
+ raw_asset_contents = read_or_fetch(asset_filename, data_stream=data_stream_major, allow_download=False)
asset_contents = json.loads(raw_asset_contents)
except ValueError:
msg = 'The {0} file (at {1}) does not contain a valid JSON object.'.format(asset_fulltext_name, asset_filename)
--
2.43.0
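As a hedged sketch of the behaviour the patch above enforces (local files only, no download fallback), the following standalone Python illustrates the idea; it is not the actual fetch.py, and the directory and rpm name are assumptions made for the example.

# Sketch only: read an upgrade data file strictly from the local directory
# and fail with a reinstall hint instead of attempting any download.
import json
import os

LEAPP_FILES_DIR = '/etc/leapp/files'          # assumed location
RPM_HINT = 'leapp-upgrade-el8toel9'           # assumed rpm name

def load_local_data_file(filename):
    path = os.path.join(LEAPP_FILES_DIR, filename)
    try:
        with open(path) as f:
            return json.load(f)
    except (OSError, ValueError) as err:
        raise RuntimeError(
            'Data file {} is missing or invalid ({}). Remove it (if it still '
            'exists) and reinstall the {} rpm.'.format(path, err, RPM_HINT)
        )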

View File

@ -1,319 +0,0 @@
From 353cd03d5339a6f3905f8bc4f067e0758f6e1d78 Mon Sep 17 00:00:00 2001
From: Petr Stodulka <pstodulk@redhat.com>
Date: Wed, 17 Jan 2024 21:46:19 +0100
Subject: [PATCH 66/66] upgrade data files loading: update error msgs and
reports + minor changes
Updated a number of error messages and reports to make sure that users
know which files are actually problematic. The original errors and reports
usually didn't contain the full path to an upgrade data file because of
the possibility to download the file from a server. However, downloading
fresh data files from a remote server is not expected to be supported
in the current state, as the data files are part of the provided
packages.
I've been thinking for quite a long time about whether to actually drop
or deprecate a bigger part of the code to simplify the whole solution,
as there are currently no plans to allow downloading
some data files from servers in the future. However, thinking about
upcoming challenges, I am not totally persuaded that we will not
revive that functionality in the future, or that we will not want to
use it for something a little bit different. From that POV (and the late
phase of development prior to the planned release) I think it is
better to preserve it for now and raise a discussion about it later.
Other changes in this PR:
* drop hardcoded name of the leapp-upgrade-elXtoelY rpm and use
the leapp.libraries.common.rpms.get_leapp_packages() function
* replace the REPOSITORY group with SANITY; it was originally a mixture
of both and SANITY fits better from this point of view
* the check of consumed data sets could produce a report with empty
links, as the original article(s) we referred to have been obsoleted;
so a filter for missing URLs was added
Co-authored-by: Toshio Kuratomi <a.badger@gmail.com>
---
.../libraries/check_consumed_assets.py | 31 ++++++++++++++++---
.../deviceanddriverdeprecationdataload.py | 3 ++
.../libraries/pes_event_parsing.py | 24 +++++++++-----
.../libraries/repositoriesmapping.py | 21 +++++++------
.../system_upgrade/common/libraries/fetch.py | 28 ++++++++++-------
5 files changed, 75 insertions(+), 32 deletions(-)
diff --git a/repos/system_upgrade/common/actors/checkconsumedassets/libraries/check_consumed_assets.py b/repos/system_upgrade/common/actors/checkconsumedassets/libraries/check_consumed_assets.py
index f5998de0..1558c2fc 100644
--- a/repos/system_upgrade/common/actors/checkconsumedassets/libraries/check_consumed_assets.py
+++ b/repos/system_upgrade/common/actors/checkconsumedassets/libraries/check_consumed_assets.py
@@ -4,10 +4,27 @@ from collections import defaultdict, namedtuple
from leapp import reporting
from leapp.libraries.common.config import get_consumed_data_stream_id
from leapp.libraries.common.fetch import ASSET_PROVIDED_DATA_STREAMS_FIELD
+from leapp.libraries.common.rpms import get_leapp_packages, LeappComponents
from leapp.libraries.stdlib import api
from leapp.models import ConsumedDataAsset
+def _get_hint():
+ hint = (
+ 'All official assets (data files) are part of the installed rpms these days.'
+ ' This issue is usually encountered when the data files are incorrectly'
+ ' customized, replaced, or removed. '
+ ' In case you want to recover the original files, remove them (if they still exist)'
+ ' and reinstall the following rpms: {rpms}.\n'
+ 'The listed assets (data files) are usually inside the /etc/leapp/files/'
+ ' directory.'
+ .format(
+ rpms=', '.join(get_leapp_packages(component=LeappComponents.REPOSITORY))
+ )
+ )
+ return hint
+
+
def compose_summary_for_incompatible_assets(assets, incompatibility_reason):
if not assets:
return []
@@ -69,13 +86,16 @@ def report_incompatible_assets(assets):
summary_lines += compose_summary_for_incompatible_assets(incompatible_assets, reason)
for asset in incompatible_assets:
- doc_url_to_title[asset.docs_url].append(asset.docs_title)
+ if asset.docs_url:
+ # Add URLs only when they are specified. docs_url could be empty string
+ doc_url_to_title[asset.docs_url].append(asset.docs_title)
report_parts = [
reporting.Title(title),
reporting.Summary('\n'.join(summary_lines)),
reporting.Severity(reporting.Severity.HIGH),
- reporting.Groups([reporting.Groups.INHIBITOR, reporting.Groups.REPOSITORY]),
+ reporting.Remediation(hint=_get_hint()),
+ reporting.Groups([reporting.Groups.INHIBITOR, reporting.Groups.SANITY]),
]
report_parts += make_report_entries_with_unique_urls(docs_url_to_title_map)
@@ -101,13 +121,16 @@ def report_malformed_assets(malformed_assets):
details = ' - The asset file {filename} contains invalid value in its "{data_streams_field}"'
details = details.format(filename=asset.filename, data_streams_field=ASSET_PROVIDED_DATA_STREAMS_FIELD)
summary_lines.append(details)
- docs_url_to_title_map[asset.docs_url].append(asset.docs_title)
+ if asset.docs_url:
+ # Add URLs only when they are specified. docs_url could be empty string
+ docs_url_to_title_map[asset.docs_url].append(asset.docs_title)
report_parts = [
reporting.Title(title),
reporting.Summary('\n'.join(summary_lines)),
+ reporting.Remediation(hint=_get_hint()),
reporting.Severity(reporting.Severity.HIGH),
- reporting.Groups([reporting.Groups.INHIBITOR, reporting.Groups.REPOSITORY]),
+ reporting.Groups([reporting.Groups.INHIBITOR, reporting.Groups.SANITY]),
]
report_parts += make_report_entries_with_unique_urls(docs_url_to_title_map)
diff --git a/repos/system_upgrade/common/actors/loaddevicedriverdeprecationdata/libraries/deviceanddriverdeprecationdataload.py b/repos/system_upgrade/common/actors/loaddevicedriverdeprecationdata/libraries/deviceanddriverdeprecationdataload.py
index 3caa4e0a..f422c2c3 100644
--- a/repos/system_upgrade/common/actors/loaddevicedriverdeprecationdata/libraries/deviceanddriverdeprecationdataload.py
+++ b/repos/system_upgrade/common/actors/loaddevicedriverdeprecationdata/libraries/deviceanddriverdeprecationdataload.py
@@ -13,6 +13,9 @@ def process():
supported_device_types = set(DeviceDriverDeprecationEntry.device_type.serialize()['choices'])
data_file_name = 'device_driver_deprecation_data.json'
+ # NOTE(pstodulk): load_data_assert raises StopActorExecutionError, see
+ # the code for more info. Keeping the handling on the framework in such
+ # a case as we have no work to do in such a case here.
deprecation_data = fetch.load_data_asset(api.current_actor(),
data_file_name,
asset_fulltext_name='Device driver deprecation data',
diff --git a/repos/system_upgrade/common/actors/peseventsscanner/libraries/pes_event_parsing.py b/repos/system_upgrade/common/actors/peseventsscanner/libraries/pes_event_parsing.py
index 35bcec73..f24dda68 100644
--- a/repos/system_upgrade/common/actors/peseventsscanner/libraries/pes_event_parsing.py
+++ b/repos/system_upgrade/common/actors/peseventsscanner/libraries/pes_event_parsing.py
@@ -8,7 +8,7 @@ from leapp import reporting
from leapp.exceptions import StopActorExecution
from leapp.libraries.common import fetch
from leapp.libraries.common.config import architecture
-from leapp.libraries.common.config.version import get_source_major_version, get_target_major_version
+from leapp.libraries.common.rpms import get_leapp_packages, LeappComponents
from leapp.libraries.stdlib import api
# NOTE(mhecko): The modulestream field contains a set of modulestreams until the very end when we generate a Package
@@ -67,6 +67,9 @@ def get_pes_events(pes_json_directory, pes_json_filename):
:return: List of Event tuples, where each event contains event type and input/output pkgs
"""
try:
+ # NOTE(pstodulk): load_data_assert raises StopActorExecutionError, see
+ # the code for more info. Keeping the handling on the framework in such
+ # a case as we have no work to do in such a case here.
events_data = fetch.load_data_asset(api.current_actor(),
pes_json_filename,
asset_fulltext_name='PES events file',
@@ -83,22 +86,27 @@ def get_pes_events(pes_json_directory, pes_json_filename):
events_matching_arch = [e for e in all_events if not e.architectures or arch in e.architectures]
return events_matching_arch
except (ValueError, KeyError):
- rpmname = 'leapp-upgrade-el{}toel{}'.format(get_source_major_version(), get_target_major_version())
- title = 'Missing/Invalid PES data file ({}/{})'.format(pes_json_directory, pes_json_filename)
+ local_path = os.path.join(pes_json_directory, pes_json_filename)
+ title = 'Missing/Invalid PES data file ({})'.format(local_path)
summary = (
'All official data files are nowadays part of the installed rpms.'
' This issue is usually encountered when the data files are incorrectly customized, replaced, or removed'
' (e.g. by custom scripts).'
- ' In case you want to recover the original file, remove it (if still exists)'
- ' and reinstall the {} rpm.'
- .format(rpmname)
+ )
+ hint = (
+ ' In case you want to recover the original {lp} file, remove it (if it still exists)'
+ ' and reinstall the following rpms: {rpms}.'
+ .format(
+ lp=local_path,
+ rpms=', '.join(get_leapp_packages(component=LeappComponents.REPOSITORY))
+ )
)
reporting.create_report([
reporting.Title(title),
reporting.Summary(summary),
+ reporting.Remediation(hint=hint),
reporting.Severity(reporting.Severity.HIGH),
- reporting.Groups([reporting.Groups.SANITY]),
- reporting.Groups([reporting.Groups.INHIBITOR]),
+ reporting.Groups([reporting.Groups.SANITY, reporting.Groups.INHIBITOR]),
reporting.RelatedResource('file', os.path.join(pes_json_directory, pes_json_filename))
])
raise StopActorExecution()
diff --git a/repos/system_upgrade/common/actors/repositoriesmapping/libraries/repositoriesmapping.py b/repos/system_upgrade/common/actors/repositoriesmapping/libraries/repositoriesmapping.py
index 6f2b2e0f..8045634e 100644
--- a/repos/system_upgrade/common/actors/repositoriesmapping/libraries/repositoriesmapping.py
+++ b/repos/system_upgrade/common/actors/repositoriesmapping/libraries/repositoriesmapping.py
@@ -4,6 +4,7 @@ from collections import defaultdict
from leapp.exceptions import StopActorExecutionError
from leapp.libraries.common.config.version import get_source_major_version, get_target_major_version
from leapp.libraries.common.fetch import load_data_asset
+from leapp.libraries.common.rpms import get_leapp_packages, LeappComponents
from leapp.libraries.stdlib import api
from leapp.models import PESIDRepositoryEntry, RepoMapEntry, RepositoriesMapping
from leapp.models.fields import ModelViolationError
@@ -130,29 +131,31 @@ class RepoMapData(object):
def _inhibit_upgrade(msg):
- rpmname = 'leapp-upgrade-el{}toel{}'.format(get_source_major_version(), get_target_major_version())
+ local_path = os.path.join('/etc/leapp/file', REPOMAP_FILE)
hint = (
'All official data files are nowadays part of the installed rpms.'
' This issue is usually encountered when the data files are incorrectly customized, replaced, or removed'
' (e.g. by custom scripts).'
- ' In case you want to recover the original file, remove it (if still exists)'
- ' and reinstall the {} rpm.'
- .format(rpmname)
+ ' In case you want to recover the original {lp} file, remove the current one (if it still exists)'
+ ' and reinstall the following packages: {rpms}.'
+ .format(
+ lp=local_path,
+ rpms=', '.join(get_leapp_packages(component=LeappComponents.REPOSITORY))
+ )
)
raise StopActorExecutionError(msg, details={'hint': hint})
def _read_repofile(repofile):
- # NOTE: what about catch StopActorExecution error when the file cannot be
- # obtained -> then check whether old_repomap file exists and in such a case
- # inform user they have to provide the new repomap.json file (we have the
- # warning now only which could be potentially overlooked)
+ # NOTE(pstodulk): load_data_assert raises StopActorExecutionError, see
+ # the code for more info. Keeping the handling on the framework in such
+ # a case as we have no work to do in such a case here.
repofile_data = load_data_asset(api.current_actor(),
repofile,
asset_fulltext_name='Repositories mapping',
docs_url='',
docs_title='')
- return repofile_data # If the file does not contain a valid json then load_asset will do a stop actor execution
+ return repofile_data
def scan_repositories(read_repofile_func=_read_repofile):
diff --git a/repos/system_upgrade/common/libraries/fetch.py b/repos/system_upgrade/common/libraries/fetch.py
index 42fcb74c..82bf4ff3 100644
--- a/repos/system_upgrade/common/libraries/fetch.py
+++ b/repos/system_upgrade/common/libraries/fetch.py
@@ -7,7 +7,7 @@ import requests
from leapp import models
from leapp.exceptions import StopActorExecutionError
from leapp.libraries.common.config import get_consumed_data_stream_id, get_env
-from leapp.libraries.common.config.version import get_source_major_version, get_target_major_version
+from leapp.libraries.common.rpms import get_leapp_packages, LeappComponents
from leapp.libraries.stdlib import api
SERVICE_HOST_DEFAULT = "https://cert.cloud.redhat.com"
@@ -16,16 +16,18 @@ MAX_ATTEMPTS = 3
ASSET_PROVIDED_DATA_STREAMS_FIELD = 'provided_data_streams'
-def _get_hint():
- rpmname = 'leapp-upgrade-el{}toel{}'.format(get_source_major_version(), get_target_major_version())
+def _get_hint(local_path):
hint = (
- 'All official data files are nowadays part of the installed rpms.'
- ' That is the only official resource of actual official data files for in-place upgrades.'
+ 'All official data files are part of the installed rpms these days.'
+ ' The rpm is the only official source of the official data files for in-place upgrades.'
' This issue is usually encountered when the data files are incorrectly customized, replaced, or removed'
' (e.g. by custom scripts).'
- ' In case you want to recover the original file, remove it (if still exists)'
- ' and reinstall the {} rpm.'
- .format(rpmname)
+ ' In case you want to recover the original {lp} file, remove the current one (if it still exists)'
+ ' and reinstall the following packages: {rpms}.'
+ .format(
+ lp=local_path,
+ rpms=', '.join(get_leapp_packages(component=LeappComponents.REPOSITORY))
+ )
)
return hint
@@ -34,10 +36,9 @@ def _raise_error(local_path, details):
"""
If the file acquisition fails in any way, throw an informative error to stop the actor.
"""
- rpmname = 'leapp-upgrade-el{}toel{}'.format(get_source_major_version(), get_target_major_version())
summary = 'Data file {lp} is missing or invalid.'.format(lp=local_path)
- raise StopActorExecutionError(summary, details={'details': details, 'hint': _get_hint()})
+ raise StopActorExecutionError(summary, details={'details': details, 'hint': _get_hint(local_path)})
def _request_data(service_path, cert, proxies, timeout=REQUEST_TIMEOUT):
@@ -148,6 +149,7 @@ def load_data_asset(actor_requesting_asset,
docs_title):
"""
Load the content of the data asset with given asset_filename
+ and produce :class:`leapp.model.ConsumedDataAsset` message.
:param Actor actor_requesting_asset: The actor instance requesting the asset file. It is necessary for the actor
to be able to produce ConsumedDataAsset message in order for leapp to be able
@@ -157,6 +159,10 @@ def load_data_asset(actor_requesting_asset,
:param str docs_url: Docs url to provide if an asset is malformed or outdated.
:param str docs_title: Title of the documentation to where `docs_url` points to.
:returns: A dict with asset contents (a parsed JSON), or None if the asset was outdated.
+ :raises StopActorExecutionError: In following cases:
+ * ConsumedDataAsset is not specified in the produces tuple of the actor_requesting_asset actor
+ * The content of the required data file is not valid JSON format
+ * The required data cannot be obtained (e.g. due to missing file)
"""
# Check that the actor that is attempting to obtain the asset meets the contract to call this function
@@ -167,7 +173,7 @@ def load_data_asset(actor_requesting_asset,
error_hint = {'hint': ('Read documentation at the following link for more information about how to retrieve '
'the valid file: {0}'.format(docs_url))}
else:
- error_hint = {'hint': _get_hint()}
+ error_hint = {'hint': _get_hint(os.path.join('/etc/leapp/files', asset_filename))}
data_stream_id = get_consumed_data_stream_id()
data_stream_major = data_stream_id.split('.', 1)[0]
--
2.43.0

View File

@ -1,283 +0,0 @@
From bec6615a9c6fda68153d4d1d76930438a233ae83 Mon Sep 17 00:00:00 2001
From: Toshio Kuratomi <a.badger@gmail.com>
Date: Fri, 19 Jan 2024 11:42:15 -0800
Subject: [PATCH 67/69] Fix another cornercase with symlink handling
* Symlinks to a directory inside of /etc/pki were being created as empty directories.
* Note: unittests are being added for both this problem and a second problem:
two links to the same external file are currently copied as two separate files, but ideally we want one
to be a copy and the other to link to that copy. The unittests for the second problem are
commented out.
Fixes: https://issues.redhat.com/browse/RHEL-3284
---
.../libraries/userspacegen.py | 66 ++++++---
.../tests/unit_test_targetuserspacecreator.py | 128 ++++++++++++++++++
2 files changed, 173 insertions(+), 21 deletions(-)
diff --git a/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py b/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py
index 8d804407..d917bfd5 100644
--- a/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py
+++ b/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py
@@ -350,7 +350,7 @@ def _mkdir_with_copied_mode(path, mode_from):
def _choose_copy_or_link(symlink, srcdir):
"""
- Copy file contents or create a symlink depending on where the pointee resides.
+ Determine whether to copy file contents or create a symlink depending on where the pointee resides.
:param symlink: The source symlink to follow. This must be an absolute path.
:param srcdir: The root directory that every piece of content must be present in.
@@ -415,7 +415,7 @@ def _choose_copy_or_link(symlink, srcdir):
# To make comparisons, we need to resolve all symlinks in the directory
# structure leading up to pointee. However, we can't include pointee
# itself otherwise it will resolve to the file that it points to in the
- # end.
+ # end (which would be wrong if pointee_filename is a symlink).
canonical_pointee_dir, pointee_filename = os.path.split(pointee_as_abspath)
canonical_pointee_dir = os.path.realpath(canonical_pointee_dir)
@@ -454,6 +454,35 @@ def _choose_copy_or_link(symlink, srcdir):
return ('copy', pointee_as_abspath)
+def _copy_symlinks(symlinks_to_process, srcdir):
+ """
+ Copy file contents or create a symlink depending on where the pointee resides.
+
+ :param symlinks_to_process: List of 2-tuples of (src_path, target_path). Each src_path
+ should be an absolute path to the symlink. target_path is the path to where we
+ need to create either a link or a copy.
+ :param srcdir: The root directory that every piece of content must be present in.
+ :raises ValueError: if the arguments are not correct
+ """
+ for source_linkpath, target_linkpath in symlinks_to_process:
+ try:
+ action, source_path = _choose_copy_or_link(source_linkpath, srcdir)
+ except BrokenSymlinkError as e:
+ # Skip and report broken symlinks
+ api.current_logger().warning('{} Will not copy the file!'.format(str(e)))
+ continue
+
+ if action == "copy":
+ # Note: source_path could be a directory, so '-a' or '-r' must be
+ # given to cp.
+ run(['cp', '-a', source_path, target_linkpath])
+ elif action == 'link':
+ run(["ln", "-s", source_path, target_linkpath])
+ else:
+ # This will not happen unless _copy_or_link() has a bug.
+ raise RuntimeError("Programming error: _copy_or_link() returned an unknown action:{}".format(action))
+
+
def _copy_decouple(srcdir, dstdir):
"""
Copy files inside of `srcdir` to `dstdir` while decoupling symlinks.
@@ -467,7 +496,6 @@ def _copy_decouple(srcdir, dstdir):
.. warning::
`dstdir` must already exist.
"""
- symlinks_to_process = []
for root, directories, files in os.walk(srcdir):
# relative path from srcdir because srcdir is replaced with dstdir for
# the copy.
@@ -476,11 +504,24 @@ def _copy_decouple(srcdir, dstdir):
# Create all directories with proper permissions for security
# reasons (Putting private data into directories that haven't had their
# permissions set appropriately may leak the private information.)
+ symlinks_to_process = []
for directory in directories:
source_dirpath = os.path.join(root, directory)
target_dirpath = os.path.join(dstdir, relpath, directory)
+
+ # Defer symlinks until later because we may end up having to copy
+ # the file contents and the directory may not exist yet.
+ if os.path.islink(source_dirpath):
+ symlinks_to_process.append((source_dirpath, target_dirpath))
+ continue
+
_mkdir_with_copied_mode(target_dirpath, source_dirpath)
+ # Link or create all directories that were pointed to by symlinks and
+ # then reset symlinks_to_process for use by files.
+ _copy_symlinks(symlinks_to_process, srcdir)
+ symlinks_to_process = []
+
for filename in files:
source_filepath = os.path.join(root, filename)
target_filepath = os.path.join(dstdir, relpath, filename)
@@ -494,24 +535,7 @@ def _copy_decouple(srcdir, dstdir):
# Not a symlink so we can copy it now too
run(['cp', '-a', source_filepath, target_filepath])
- # Now process all symlinks
- for source_linkpath, target_linkpath in symlinks_to_process:
- try:
- action, source_path = _choose_copy_or_link(source_linkpath, srcdir)
- except BrokenSymlinkError as e:
- # Skip and report broken symlinks
- api.current_logger().warning('{} Will not copy the file!'.format(str(e)))
- continue
-
- if action == "copy":
- # Note: source_path could be a directory, so '-a' or '-r' must be
- # given to cp.
- run(['cp', '-a', source_path, target_linkpath])
- elif action == 'link':
- run(["ln", "-s", source_path, target_linkpath])
- else:
- # This will not happen unless _copy_or_link() has a bug.
- raise RuntimeError("Programming error: _copy_or_link() returned an unknown action:{}".format(action))
+ _copy_symlinks(symlinks_to_process, srcdir)
def _copy_certificates(context, target_userspace):
diff --git a/repos/system_upgrade/common/actors/targetuserspacecreator/tests/unit_test_targetuserspacecreator.py b/repos/system_upgrade/common/actors/targetuserspacecreator/tests/unit_test_targetuserspacecreator.py
index bd49f657..19b760a1 100644
--- a/repos/system_upgrade/common/actors/targetuserspacecreator/tests/unit_test_targetuserspacecreator.py
+++ b/repos/system_upgrade/common/actors/targetuserspacecreator/tests/unit_test_targetuserspacecreator.py
@@ -381,6 +381,70 @@ def temp_directory_layout(tmp_path, initial_structure):
},
id="Absolute_symlink_to_a_file_inside_via_a_symlink_to_the_rootdir"
)),
+ # This should be fixed but not necessarily for this release.
+ # It makes sure that when we have two separate links to the
+ # same file outside of /etc/pki, one of the links is copied
+ # as a real file and the other is made a link to the copy.
+ # (Right now, the real file is copied in place of both links.)
+ # (pytest.param(
+ # {
+ # 'dir': {
+ # 'fileA': '/outside/fileC',
+ # 'fileB': '/outside/fileC',
+ # },
+ # 'outside': {
+ # 'fileC': None,
+ # },
+ # },
+ # {
+ # 'dir': {
+ # 'fileA': None,
+ # 'fileB': '/dir/fileA',
+ # },
+ # },
+ # id="Absolute_two_symlinks_to_the_same_copied_file"
+ # )),
+ (pytest.param(
+ {
+ 'dir': {
+ 'fileA': None,
+ 'link_to_dir': '/dir/inside',
+ 'inside': {
+ 'fileB': None,
+ },
+ },
+ },
+ {
+ 'dir': {
+ 'fileA': None,
+ 'link_to_dir': '/dir/inside',
+ 'inside': {
+ 'fileB': None,
+ },
+ },
+ },
+ id="Absolute_symlink_to_a_dir_inside"
+ )),
+ (pytest.param(
+ {
+ 'dir': {
+ 'fileA': None,
+ 'link_to_dir': '/outside',
+ },
+ 'outside': {
+ 'fileB': None,
+ },
+ },
+ {
+ 'dir': {
+ 'fileA': None,
+ 'link_to_dir': {
+ 'fileB': None,
+ },
+ },
+ },
+ id="Absolute_symlink_to_a_dir_outside"
+ )),
(pytest.param(
# This one is very tricky:
# * The user has made /etc/pki a symlink to some other directory that
@@ -671,6 +735,70 @@ def temp_directory_layout(tmp_path, initial_structure):
},
id="Relative_symlink_to_a_file_inside_via_a_symlink_to_the_rootdir"
)),
+ # This should be fixed but not necessarily for this release.
+ # It makes sure that when we have two separate links to the
+ # same file outside of /etc/pki, one of the links is copied
+ # as a real file and the other is made a link to the copy.
+ # (Right now, the real file is copied in place of both links.)
+ # (pytest.param(
+ # {
+ # 'dir': {
+ # 'fileA': '../outside/fileC',
+ # 'fileB': '../outside/fileC',
+ # },
+ # 'outside': {
+ # 'fileC': None,
+ # },
+ # },
+ # {
+ # 'dir': {
+ # 'fileA': None,
+ # 'fileB': 'fileA',
+ # },
+ # },
+ # id="Relative_two_symlinks_to_the_same_copied_file"
+ # )),
+ (pytest.param(
+ {
+ 'dir': {
+ 'fileA': None,
+ 'link_to_dir': '../outside',
+ },
+ 'outside': {
+ 'fileB': None,
+ },
+ },
+ {
+ 'dir': {
+ 'fileA': None,
+ 'link_to_dir': {
+ 'fileB': None,
+ },
+ },
+ },
+ id="Relative_symlink_to_a_dir_outside"
+ )),
+ (pytest.param(
+ {
+ 'dir': {
+ 'fileA': None,
+ 'link_to_dir': 'inside',
+ 'inside': {
+ 'fileB': None,
+ },
+ },
+ },
+ {
+ 'dir': {
+ 'fileA': None,
+ 'link_to_dir': 'inside',
+ 'inside': {
+ 'fileB': None,
+ },
+ },
+ },
+ id="Relative_symlink_to_a_dir_inside"
+ )),
(pytest.param(
# This one is very tricky:
# * The user has made /etc/pki a symlink to some other directory that
--
2.42.0
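The core of the fix above is the decision whether a symlink can stay a symlink (its pointee stays inside the copied tree) or its target must be copied instead. A simplified, hedged sketch of that decision follows; it is not the actor's actual _choose_copy_or_link implementation and omits the broken-symlink handling.

# Simplified sketch of the copy-vs-link decision (illustrative only).
import os

def choose_copy_or_link(symlink, srcdir):
    pointee = os.readlink(symlink)
    if not os.path.isabs(pointee):
        pointee = os.path.join(os.path.dirname(symlink), pointee)
    # Resolve only the directory part, so a symlink pointing at another
    # symlink is judged by where that link lives, not by what it resolves to.
    pointee_dir, pointee_name = os.path.split(os.path.normpath(pointee))
    canonical_dir = os.path.realpath(pointee_dir)
    canonical_root = os.path.realpath(srcdir)
    pointee_path = os.path.join(canonical_dir, pointee_name)
    inside = canonical_dir == canonical_root or canonical_dir.startswith(canonical_root + os.sep)
    return ('link', pointee_path) if inside else ('copy', pointee_path)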

View File

@ -1,129 +0,0 @@
From 98a1057bb40a53a2200b0cfba9e4ad75b1d8f796 Mon Sep 17 00:00:00 2001
From: Petr Stodulka <pstodulk@redhat.com>
Date: Mon, 22 Jan 2024 18:40:07 +0100
Subject: [PATCH 68/69] device driver deprecation data: print nice error msg on
malformed data
In case of a malformed device_driver_deprecation_data.json, the user could
originally see a raw traceback without much information about
what it actually means or how to fix it. That usually happens only
when the file is manually modified on the machine. So in this case
we inform the user which file is problematic and how to restore the original
file installed by our package.
During upstream development, this msg could also be seen when
new data is provided if:
* the data file is malformed
* the data file has a new format of data (still JSON expected)
* etc.
However, these issues will be discovered prior to the merge, as the
running tests will fail, so such a problematic file should never
become part of the upstream. From that point, we expect that the
user has a malformed / customized data file, so there is no need to handle
all possible errors differently in this case.
Jira: OAMG-7549
Co-authored-by: Toshio Kuratomi <a.badger@gmail.com>
---
.../deviceanddriverdeprecationdataload.py | 36 ++++++++++++++-----
.../tests/test_ddddload.py | 28 +++++++++++++++
2 files changed, 56 insertions(+), 8 deletions(-)
diff --git a/repos/system_upgrade/common/actors/loaddevicedriverdeprecationdata/libraries/deviceanddriverdeprecationdataload.py b/repos/system_upgrade/common/actors/loaddevicedriverdeprecationdata/libraries/deviceanddriverdeprecationdataload.py
index f422c2c3..b12e77c9 100644
--- a/repos/system_upgrade/common/actors/loaddevicedriverdeprecationdata/libraries/deviceanddriverdeprecationdataload.py
+++ b/repos/system_upgrade/common/actors/loaddevicedriverdeprecationdata/libraries/deviceanddriverdeprecationdataload.py
@@ -1,6 +1,9 @@
+from leapp.exceptions import StopActorExecutionError
from leapp.libraries.common import fetch
+from leapp.libraries.common.rpms import get_leapp_packages, LeappComponents
from leapp.libraries.stdlib import api
from leapp.models import DeviceDriverDeprecationData, DeviceDriverDeprecationEntry
+from leapp.models.fields import ModelViolationError
def process():
@@ -22,12 +25,29 @@ def process():
docs_url='',
docs_title='')
- api.produce(
- DeviceDriverDeprecationData(
- entries=[
- DeviceDriverDeprecationEntry(**entry)
- for entry in deprecation_data['data']
- if entry.get('device_type') in supported_device_types
- ]
+ try:
+ api.produce(
+ DeviceDriverDeprecationData(
+ entries=[
+ DeviceDriverDeprecationEntry(**entry)
+ for entry in deprecation_data['data']
+ if entry.get('device_type') in supported_device_types
+ ]
+ )
)
- )
+ except (ModelViolationError, ValueError, KeyError, AttributeError, TypeError) as err:
+ # For the listed errors, we expect this to happen only when data is malformed
+ # or manually updated. Corrupted data in the upstream is discovered
+ # prior the merge thanks to testing. So just suggest the restoration
+ # of the file.
+ msg = 'Invalid device and driver deprecation data: {}'.format(err)
+ hint = (
+ 'This issue is usually caused by manual update of the {lp} file.'
+ ' The data inside is either incorrect or old. To restore the original'
+ ' {lp} file, remove it and reinstall the following packages: {rpms}'
+ .format(
+ lp='/etc/leapp/file/device_driver_deprecation_data.json',
+ rpms=', '.join(get_leapp_packages(component=LeappComponents.REPOSITORY))
+ )
+ )
+ raise StopActorExecutionError(msg, details={'hint': hint})
diff --git a/repos/system_upgrade/common/actors/loaddevicedriverdeprecationdata/tests/test_ddddload.py b/repos/system_upgrade/common/actors/loaddevicedriverdeprecationdata/tests/test_ddddload.py
index 69bcd09c..c3386745 100644
--- a/repos/system_upgrade/common/actors/loaddevicedriverdeprecationdata/tests/test_ddddload.py
+++ b/repos/system_upgrade/common/actors/loaddevicedriverdeprecationdata/tests/test_ddddload.py
@@ -1,5 +1,9 @@
+import pytest
+
+from leapp.exceptions import StopActorExecutionError
from leapp.libraries.actor import deviceanddriverdeprecationdataload as ddddload
from leapp.libraries.common import fetch
+from leapp.libraries.common.testutils import CurrentActorMocked
TEST_DATA = {
'data': [
@@ -57,3 +61,27 @@ def test_filtered_load(monkeypatch):
assert produced
assert len(produced[0].entries) == 3
assert not any([e.device_type == 'unsupported' for e in produced[0].entries])
+
+
+@pytest.mark.parametrize('data', (
+ {},
+ {'foo': 'bar'},
+ {'data': 1, 'foo': 'bar'},
+ {'data': 'string', 'foo': 'bar'},
+ {'data': {'foo': 1}, 'bar': 2},
+ {'data': {'foo': 1, 'device_type': None}},
+ {'data': {'foo': 1, 'device_type': 'cpu'}},
+ {'data': {'driver_name': ['foo'], 'device_type': 'cpu'}},
+))
+def test_invalid_dddd_data(monkeypatch, data):
+ produced = []
+
+ def load_data_asset_mock(*args, **kwargs):
+ return data
+
+ monkeypatch.setattr(fetch, 'load_data_asset', load_data_asset_mock)
+ monkeypatch.setattr(ddddload.api, 'current_actor', CurrentActorMocked())
+ monkeypatch.setattr(ddddload.api, 'produce', lambda *v: produced.extend(v))
+ with pytest.raises(StopActorExecutionError):
+ ddddload.process()
+ assert not produced
--
2.42.0
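As a hedged, standalone sketch of the pattern introduced above (stripped of the leapp framework), the snippet below shows how low-level errors from malformed data can be converted into one actionable error that names the file and the recovery step; the exception class, helper name, and file path are assumptions for the example.

# Sketch only: convert low-level errors from malformed deprecation data
# into a single actionable error with a restore hint.
DATA_FILE = '/etc/leapp/files/device_driver_deprecation_data.json'

class UpgradeDataError(Exception):
    pass

def filter_entries(deprecation_data, supported_device_types):
    try:
        return [
            entry for entry in deprecation_data['data']
            if entry.get('device_type') in supported_device_types
        ]
    except (ValueError, KeyError, AttributeError, TypeError) as err:
        raise UpgradeDataError(
            'Invalid device and driver deprecation data: {}. Remove {} and '
            'reinstall the leapp-upgrade packages to restore it.'.format(err, DATA_FILE)
        )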

View File

@ -1,695 +0,0 @@
From b75dc49bb3d41e89067a8b609eeb35c485fb40a1 Mon Sep 17 00:00:00 2001
From: Petr Stodulka <pstodulk@redhat.com>
Date: Tue, 23 Jan 2024 16:48:44 +0100
Subject: [PATCH 69/69] Update PES data (CTC2-2)
Includes fixed idm-tomcatjss related events for upgrades IPU 8 -> 9.
Jira: RHEL-21779
---
etc/leapp/files/pes-events.json | 666 +++++++++++++++++++++++++++++++-
1 file changed, 665 insertions(+), 1 deletion(-)
diff --git a/etc/leapp/files/pes-events.json b/etc/leapp/files/pes-events.json
index 5b4b4f87..dfc09de5 100644
--- a/etc/leapp/files/pes-events.json
+++ b/etc/leapp/files/pes-events.json
@@ -500344,10 +500344,674 @@ null
"minor_version": 4,
"os_name": "RHEL"
}
+},
+{
+"action": 0,
+"architectures": [
+"aarch64",
+"ppc64le",
+"s390x",
+"x86_64"
+],
+"id": 13841,
+"in_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "libtimezonemap-devel",
+"repository": "rhel9-CRB"
+}
+],
+"set_id": 19633
+},
+"initial_release": {
+"major_version": 9,
+"minor_version": 3,
+"os_name": "RHEL"
+},
+"modulestream_maps": [],
+"out_packageset": null,
+"release": {
+"major_version": 9,
+"minor_version": 4,
+"os_name": "RHEL"
+}
+},
+{
+"action": 0,
+"architectures": [
+"aarch64",
+"ppc64le",
+"s390x",
+"x86_64"
+],
+"id": 13842,
+"in_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "libtimezonemap-devel",
+"repository": "rhel8-CRB"
+}
+],
+"set_id": 19634
+},
+"initial_release": {
+"major_version": 8,
+"minor_version": 9,
+"os_name": "RHEL"
+},
+"modulestream_maps": [],
+"out_packageset": null,
+"release": {
+"major_version": 8,
+"minor_version": 10,
+"os_name": "RHEL"
+}
+},
+{
+"action": 0,
+"architectures": [
+"aarch64",
+"ppc64le",
+"s390x",
+"x86_64"
+],
+"id": 13843,
+"in_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "libadwaita",
+"repository": "rhel9-AppStream"
+}
+],
+"set_id": 19635
+},
+"initial_release": {
+"major_version": 9,
+"minor_version": 3,
+"os_name": "RHEL"
+},
+"modulestream_maps": [],
+"out_packageset": null,
+"release": {
+"major_version": 9,
+"minor_version": 4,
+"os_name": "RHEL"
+}
+},
+{
+"action": 0,
+"architectures": [
+"aarch64",
+"ppc64le",
+"s390x",
+"x86_64"
+],
+"id": 13844,
+"in_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "libadwaita-devel",
+"repository": "rhel9-CRB"
+}
+],
+"set_id": 19636
+},
+"initial_release": {
+"major_version": 9,
+"minor_version": 3,
+"os_name": "RHEL"
+},
+"modulestream_maps": [],
+"out_packageset": null,
+"release": {
+"major_version": 9,
+"minor_version": 4,
+"os_name": "RHEL"
+}
+},
+{
+"action": 1,
+"architectures": [
+"aarch64",
+"ppc64le",
+"s390x",
+"x86_64"
+],
+"id": 13845,
+"in_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "libtimezonemap-devel",
+"repository": "rhel8-CRB"
+}
+],
+"set_id": 19637
+},
+"initial_release": {
+"major_version": 8,
+"minor_version": 10,
+"os_name": "RHEL"
+},
+"modulestream_maps": [],
+"out_packageset": null,
+"release": {
+"major_version": 9,
+"minor_version": 0,
+"os_name": "RHEL"
+}
+},
+{
+"action": 1,
+"architectures": [
+"aarch64",
+"ppc64le",
+"s390x",
+"x86_64"
+],
+"id": 13846,
+"in_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "graphviz-ruby",
+"repository": "rhel8-AppStream"
+}
+],
+"set_id": 19638
+},
+"initial_release": {
+"major_version": 8,
+"minor_version": 10,
+"os_name": "RHEL"
+},
+"modulestream_maps": [],
+"out_packageset": null,
+"release": {
+"major_version": 9,
+"minor_version": 0,
+"os_name": "RHEL"
+}
+},
+{
+"action": 2,
+"architectures": [
+"aarch64",
+"ppc64le",
+"s390x",
+"x86_64"
+],
+"id": 13847,
+"in_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "graphviz-ruby",
+"repository": "rhel8-AppStream"
+}
+],
+"set_id": 19639
+},
+"initial_release": {
+"major_version": 8,
+"minor_version": 9,
+"os_name": "RHEL"
+},
+"modulestream_maps": [],
+"out_packageset": null,
+"release": {
+"major_version": 8,
+"minor_version": 10,
+"os_name": "RHEL"
+}
+},
+{
+"action": 0,
+"architectures": [
+"aarch64",
+"ppc64le",
+"s390x",
+"x86_64"
+],
+"id": 13848,
+"in_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "vulkan-utility-libraries-devel",
+"repository": "rhel9-CRB"
+}
+],
+"set_id": 19640
+},
+"initial_release": {
+"major_version": 9,
+"minor_version": 3,
+"os_name": "RHEL"
+},
+"modulestream_maps": [],
+"out_packageset": null,
+"release": {
+"major_version": 9,
+"minor_version": 4,
+"os_name": "RHEL"
+}
+},
+{
+"action": 0,
+"architectures": [
+"aarch64",
+"ppc64le",
+"s390x",
+"x86_64"
+],
+"id": 13849,
+"in_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "gtk-vnc2-devel",
+"repository": "rhel8-CRB"
+}
+],
+"set_id": 19641
+},
+"initial_release": {
+"major_version": 8,
+"minor_version": 8,
+"os_name": "RHEL"
+},
+"modulestream_maps": [],
+"out_packageset": null,
+"release": {
+"major_version": 8,
+"minor_version": 9,
+"os_name": "RHEL"
+}
+},
+{
+"action": 0,
+"architectures": [
+"aarch64",
+"ppc64le",
+"s390x",
+"x86_64"
+],
+"id": 13850,
+"in_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "gvnc-devel",
+"repository": "rhel8-CRB"
+}
+],
+"set_id": 19642
+},
+"initial_release": {
+"major_version": 8,
+"minor_version": 8,
+"os_name": "RHEL"
+},
+"modulestream_maps": [],
+"out_packageset": null,
+"release": {
+"major_version": 8,
+"minor_version": 9,
+"os_name": "RHEL"
+}
+},
+{
+"action": 0,
+"architectures": [
+"aarch64",
+"ppc64le",
+"s390x",
+"x86_64"
+],
+"id": 13851,
+"in_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "graphviz-ruby",
+"repository": "rhel9-AppStream"
+}
+],
+"set_id": 19643
+},
+"initial_release": {
+"major_version": 9,
+"minor_version": 3,
+"os_name": "RHEL"
+},
+"modulestream_maps": [],
+"out_packageset": null,
+"release": {
+"major_version": 9,
+"minor_version": 4,
+"os_name": "RHEL"
+}
+},
+{
+"action": 3,
+"architectures": [
+"aarch64",
+"ppc64le",
+"s390x",
+"x86_64"
+],
+"id": 13852,
+"in_packageset": {
+"package": [
+{
+"modulestreams": [
+{
+"name": "pki-deps",
+"stream": "10.6"
+}
+],
+"name": "pki-servlet-engine",
+"repository": "rhel8-AppStream"
+}
+],
+"set_id": 19644
+},
+"initial_release": {
+"major_version": 8,
+"minor_version": 9,
+"os_name": "RHEL"
+},
+"modulestream_maps": [
+{
+"in_modulestream": {
+"name": "pki-deps",
+"stream": "10.6"
+},
+"out_modulestream": null
+}
+],
+"out_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "tomcat",
+"repository": "rhel8-AppStream"
+}
+],
+"set_id": 19645
+},
+"release": {
+"major_version": 8,
+"minor_version": 10,
+"os_name": "RHEL"
+}
+},
+{
+"action": 3,
+"architectures": [
+"aarch64",
+"ppc64le",
+"s390x",
+"x86_64"
+],
+"id": 13853,
+"in_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "idm-tomcatjss",
+"repository": "rhel9-AppStream"
+}
+],
+"set_id": 19646
+},
+"initial_release": {
+"major_version": 9,
+"minor_version": 3,
+"os_name": "RHEL"
+},
+"modulestream_maps": [
+{
+"in_modulestream": null,
+"out_modulestream": null
+}
+],
+"out_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "idm-jss-tomcat",
+"repository": "rhel9-AppStream"
+}
+],
+"set_id": 19647
+},
+"release": {
+"major_version": 9,
+"minor_version": 4,
+"os_name": "RHEL"
+}
+},
+{
+"action": 0,
+"architectures": [
+"aarch64",
+"ppc64le",
+"s390x",
+"x86_64"
+],
+"id": 13854,
+"in_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "gtk-vnc2-devel",
+"repository": "rhel9-CRB"
+}
+],
+"set_id": 19648
+},
+"initial_release": {
+"major_version": 9,
+"minor_version": 3,
+"os_name": "RHEL"
+},
+"modulestream_maps": [],
+"out_packageset": null,
+"release": {
+"major_version": 9,
+"minor_version": 4,
+"os_name": "RHEL"
+}
+},
+{
+"action": 7,
+"architectures": [
+"aarch64",
+"ppc64le",
+"s390x",
+"x86_64"
+],
+"id": 13855,
+"in_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "leapp-upgrade-el7toel8",
+"repository": "rhel7-extras"
+}
+],
+"set_id": 19649
+},
+"initial_release": {
+"major_version": 7,
+"minor_version": 9,
+"os_name": "RHEL"
+},
+"modulestream_maps": [
+{
+"in_modulestream": null,
+"out_modulestream": null
+}
+],
+"out_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "leapp-upgrade-el8toel9",
+"repository": "rhel8-AppStream"
+}
+],
+"set_id": 19650
+},
+"release": {
+"major_version": 8,
+"minor_version": 6,
+"os_name": "RHEL"
+}
+},
+{
+"action": 7,
+"architectures": [
+"aarch64",
+"ppc64le",
+"s390x",
+"x86_64"
+],
+"id": 13856,
+"in_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "python2-leapp",
+"repository": "rhel7-extras"
+}
+],
+"set_id": 19651
+},
+"initial_release": {
+"major_version": 7,
+"minor_version": 9,
+"os_name": "RHEL"
+},
+"modulestream_maps": [
+{
+"in_modulestream": null,
+"out_modulestream": null
+}
+],
+"out_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "python3-leapp",
+"repository": "rhel8-AppStream"
+}
+],
+"set_id": 19652
+},
+"release": {
+"major_version": 8,
+"minor_version": 6,
+"os_name": "RHEL"
+}
+},
+{
+"action": 7,
+"architectures": [
+"aarch64",
+"ppc64le",
+"s390x",
+"x86_64"
+],
+"id": 13857,
+"in_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "leapp-upgrade-el7toel8-deps",
+"repository": "rhel7-extras"
+}
+],
+"set_id": 19653
+},
+"initial_release": {
+"major_version": 7,
+"minor_version": 9,
+"os_name": "RHEL"
+},
+"modulestream_maps": [
+{
+"in_modulestream": null,
+"out_modulestream": null
+}
+],
+"out_packageset": {
+"package": [
+{
+"modulestreams": [
+null
+],
+"name": "leapp-upgrade-el8toel9-deps",
+"repository": "rhel8-AppStream"
+}
+],
+"set_id": 19654
+},
+"release": {
+"major_version": 8,
+"minor_version": 6,
+"os_name": "RHEL"
+}
}
],
"provided_data_streams": [
"3.0"
],
-"timestamp": "202401101404Z"
+"timestamp": "202401231304Z"
}
--
2.42.0

View File

@ -41,8 +41,8 @@ py2_byte_compile "%1" "%2"}
# RHEL 8+ packages to be consistent with other leapp projects in future.
Name: leapp-repository
Version: 0.19.0
Release: 10%{?dist}
Version: 0.20.0
Release: 1%{?dist}
Summary: Repositories for leapp
License: ASL 2.0
@ -55,81 +55,6 @@ BuildArch: noarch
### PATCHES HERE
# Patch0001: filename.patch
Patch0001: 0001-Further-narrow-down-packit-tests.patch
Patch0002: 0002-Bring-back-uefi_test.patch
Patch0003: 0003-Add-7.9-8.9-and-8.9-9.3-upgrade-paths.patch
Patch0004: 0004-Split-tier1-tests-into-default-on-push-and-on-demand.patch
Patch0005: 0005-Add-labels-to-all-tests.patch
Patch0006: 0006-Refactor-using-YAML-anchors.patch
Patch0007: 0007-Add-kernel-rt-tests-and-switch-to-sanity-for-default.patch
Patch0008: 0008-Minor-label-enhancements.patch
Patch0009: 0009-Update-pr-welcome-message.patch
Patch0010: 0010-Address-ddiblik-s-review-comments.patch
Patch0011: 0011-Address-mmoran-s-review-comments.patch
Patch0012: 0012-Add-isccfg-library-manual-running-mode.patch
Patch0013: 0013-Avoid-warnings-on-python2.patch
Patch0014: 0014-makefile-add-dev_test_no_lint-target.patch
Patch0015: 0015-Fix-the-issue-of-going-out-of-bounds-in-the-isccfg-p.patch
Patch0016: 0016-make-pylint-and-spellcheck-happy-again.patch
Patch0017: 0017-Remove-TUV-from-supported-target-channels.patch
Patch0018: 0018-Transition-systemd-service-states-during-upgrade.patch
Patch0019: 0019-Remove-obsoleted-enablersyncdservice-actor.patch
Patch0020: 0020-default-to-NO_RHSM-mode-when-subscription-manager-is.patch
Patch0021: 0021-call-correct-mkdir-when-trying-to-create-etc-rhsm-fa.patch
Patch0022: 0022-RHSM-Adjust-the-switch-to-container-mode-for-new-RHS.patch
Patch0023: 0023-load-all-substitutions-from-etc.patch
Patch0024: 0024-Do-not-create-dangling-symlinks-for-containerized-RH.patch
Patch0025: 0025-be-less-strict-when-figuring-out-major-version-in-in.patch
Patch0026: 0026-rhui-bootstrap-target-rhui-clients-in-scratch-contai.patch
Patch0027: 0027-add-backward-compatibility-for-leapp-rhui-aws-azure-.patch
Patch0028: 0028-checknfs-do-not-check-systemd-mounts.patch
Patch0029: 0029-Switch-from-plan-name-regex-to-filter-by-tags.patch
Patch0030: 0030-Bring-back-reference-to-oamg-leapp-tests-repo.patch
Patch0031: 0031-add-the-posibility-to-upgrade-with-a-local-repositor.patch
Patch0032: 0032-Fix-certificate-symlink-handling.patch
Patch0033: 0033-Add-prod-certs-and-upgrade-paths-for-8.10-9.4.patch
Patch0034: 0034-pylint-ignore-too-many-lines.patch
Patch0035: 0035-Update-upgrade-paths-Add-8.10-9.4.patch
Patch0036: 0036-Copy-dnf.conf-to-target-userspace-and-allow-a-custom.patch
Patch0037: 0037-adjustlocalrepos-suppress-unwanted-deprecation-repor.patch
Patch0038: 0038-add-detection-for-custom-libraries-registered-by-ld..patch
Patch0039: 0039-Fix-several-typos-and-Makefile-help.patch
Patch0040: 0040-Move-code-handling-GPG-keys-to-separate-library.patch
Patch0041: 0041-Check-no-new-unexpected-keys-were-installed-during-t.patch
# CTC2-0
Patch0042: 0042-BZ-2250254-force-removal-of-tomcat-during-the-upgrad.patch
Patch0043: 0043-Add-79to88-and-79to89-aws-upgrade-paths.patch
Patch0044: 0044-Add-7.9to8.10-and-8.10to9.4-upgrade-paths.patch
Patch0045: 0045-Utilize-get_target_major_version-in-no-enabled-targe.patch
Patch0046: 0046-Workaround-tft-issue-with-listing-disabled-plans.patch
Patch0047: 0047-Distribution-agnostick-check-of-signed-packages-1-2.patch
Patch0048: 0048-Distribution-agnostick-check-of-signed-packages-2-2.patch
Patch0049: 0049-Pylint-fix-superfluous-parens-in-the-code.patch
Patch0050: 0050-distributionsignedrpmscanner-refactoring-gpg-pubkey-.patch
Patch0051: 0051-Introduce-two-functions-for-listing-which-packages-a.patch
Patch0052: 0052-Switch-test-repo-branch-to-main.patch
Patch0053: 0053-Update-dependencies-require-xfsprogs-and-e2fsprogs.patch
Patch0054: 0054-Several-enhancements-to-the-Makefile.patch
Patch0055: 0055-pes_events_scanner-Ignore-Leapp-related-PES-events.patch
Patch0056: 0056-Use-library-functions-for-getting-leapp-packages.patch
Patch0057: 0057-Introduce-TrackedFilesInfoSource-message-and-new-act.patch
Patch0058: 0058-Add-actors-for-OpenSSL-conf-and-IBMCA.patch
Patch0059: 0059-Introduce-custom-modifications-tracking.patch
Patch0060: 0060-Rework-_copy_decouple-to-follow-relative-symlinks-an.patch
Patch0061: 0061-Update-the-data-files-pes-repomap-dddd-CTC2-0.patch
# CTC2-1
Patch0062: 0062-Use-happy_path-instead-e2e-for-public-clouds.patch
Patch0063: 0063-Update-upgrade-data-bump-required-data-stream-to-3.0.patch
Patch0064: 0064-Cover-upgrades-RHEL-8-to-RHEL-9-using-RHUI-on-Alibab.patch
Patch0065: 0065-load-data-files-do-not-try-to-download-data-files-wh.patch
Patch0066: 0066-upgrade-data-files-loading-update-error-msgs-and-rep.patch
# CTC2-2
Patch0067: 0067-Fix-another-cornercase-with-symlink-handling.patch
Patch0068: 0068-device-driver-deprecation-data-print-nice-error-msg-.patch
Patch0069: 0069-Update-PES-data-CTC2-2.patch
%description
%{summary}
@ -281,75 +206,6 @@ Requires: python3-gobject-base
# APPLY PATCHES HERE
# %%patch0001 -p1
%patch0001 -p1
%patch0002 -p1
%patch0003 -p1
%patch0004 -p1
%patch0005 -p1
%patch0006 -p1
%patch0007 -p1
%patch0008 -p1
%patch0009 -p1
%patch0010 -p1
%patch0011 -p1
%patch0012 -p1
%patch0013 -p1
%patch0014 -p1
%patch0015 -p1
%patch0016 -p1
%patch0017 -p1
%patch0018 -p1
%patch0019 -p1
%patch0020 -p1
%patch0021 -p1
%patch0022 -p1
%patch0023 -p1
%patch0024 -p1
%patch0025 -p1
%patch0026 -p1
%patch0027 -p1
%patch0028 -p1
%patch0029 -p1
%patch0030 -p1
%patch0031 -p1
%patch0032 -p1
%patch0033 -p1
%patch0034 -p1
%patch0035 -p1
%patch0036 -p1
%patch0037 -p1
%patch0038 -p1
%patch0039 -p1
%patch0040 -p1
%patch0041 -p1
%patch0042 -p1
%patch0043 -p1
%patch0044 -p1
%patch0045 -p1
%patch0046 -p1
%patch0047 -p1
%patch0048 -p1
%patch0049 -p1
%patch0050 -p1
%patch0051 -p1
%patch0052 -p1
%patch0053 -p1
%patch0054 -p1
%patch0055 -p1
%patch0056 -p1
%patch0057 -p1
%patch0058 -p1
%patch0059 -p1
%patch0060 -p1
%patch0061 -p1
%patch0062 -p1
%patch0063 -p1
%patch0064 -p1
%patch0065 -p1
%patch0066 -p1
%patch0067 -p1
%patch0068 -p1
%patch0069 -p1
%build
%if 0%{?rhel} == 7
@ -426,6 +282,14 @@ done;
# no files here
%changelog
* Tue Feb 13 2024 Toshio Kuratomi <toshio@fedoraproject.org> - 0.20.0-1
- Rebase to new upstream v0.20.0.
- Fix semanage import issue
- Fix handling of libvirt's systemd services
- Add a dracut breakpoint for the pre-upgrade step.
- Drop obsoleted upgrade paths (obsoleted releases: 8.6, 8.9, 9.0, 9.3)
- Resolves: RHEL-16729
* Tue Jan 23 2024 Toshio Kuratomi <toshio@fedoraproject.org> - 0.19.0-10
- Print nice error msg when device and driver deprecation data is malformed
- Fix another cornercase when preserving symlinks to certificates in /etc/pki

View File

@ -1,2 +1,2 @@
SHA512 (leapp-repository-0.19.0.tar.gz) = e7e913cd635c8101dc5dcd65929d19a21ce72fd9291b84ea60a20e6dbdf4a65553c890770bf16000145f601242ed7f047cae1e283966c8b385ea9bf61e04ef65
SHA512 (deps-pkgs-10.tar.gz) = e63f77e439456e0a8b0fc338b370ee7e2d7824b1d62c75f2209b283905c8c0641d504bfe910021317884fa1662429d952fd4c9b9ee457c48b34182e6f975aa0e
SHA512 (leapp-repository-0.20.0.tar.gz) = 8f1732cda85a597e2401a67b69f347398e0270fb2a411079fb9de5261213809bb3323053c0663c0c1f731eb085be6083acabd0a46aaa24d5d3f6b024bd5f0e55