LEAPP-repository CTC1 Release for 8.10/9.5

- Do not terminate the upgrade dracut module execution if
  /sysroot/root/tmp_leapp_py3/.leapp_upgrade_failed exists
- Several minor improvements in messages printed in the console output
- Several minor improvements in report and error messages
- Fix the parsing of the lscpu output (see the short parsing sketch below)
- Fix the evaluation of PES data
- Target by default the "GA" channel repositories unless a different
  channel is specified for the leapp execution
- Fix the creation of the post-upgrade report about changes in the state of
  systemd services
- Update the device driver deprecation data, fixing invalid fields for some
  AMD CPUs
- Update the default kernel cmdline
- Wait for the storage initialization when /usr is on a separate file system -
  covering SAN
- Resolves: RHEL-27847, RHEL-35240
Toshio Kuratomi 2024-05-13 10:57:32 -07:00
parent 57c0bb6a61
commit 79ca77ccf4
35 changed files with 151325 additions and 5 deletions
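
As referenced in the lscpu item of the release notes above, here is a minimal standalone sketch of the more tolerant parsing this release introduces. It mirrors the anchored regex and the `lscpu -J` JSON handling from the scancpu patch later in this commit (PATCH 05/34); the function names and the reduced error handling are simplifications rather than the actual actor code.

```python
import json
import re

# Anchored, multi-line pattern: a field with an empty value no longer
# swallows the following line as its value.
LSCPU_NAME_VALUE = re.compile(r'^(?P<name>[^:]+):[^\S\n]+(?P<value>.+)\n?',
                              flags=re.MULTILINE)


def parse_lscpu_text(output):
    """Parse plain `lscpu` output (the RHEL 7 code path)."""
    return dict(LSCPU_NAME_VALUE.findall(output))


def parse_lscpu_json(output):
    """Parse `lscpu -J` output (RHEL 8+); empty dict when the JSON is invalid."""
    try:
        parsed = json.loads(output)
        # One "lscpu" entry: a list of {"field": ..., "data": ...} dictionaries.
        return dict((entry['field'].rstrip(':'), entry['data'])
                    for entry in parsed['lscpu'])
    except ValueError:
        return {}
```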


@ -1,7 +1,7 @@
From 921c06892f7550a3a8e2b3fe941c6272bdacf88d Mon Sep 17 00:00:00 2001
From: mhecko <mhecko@redhat.com>
Date: Thu, 15 Feb 2024 09:56:27 +0100
Subject: [PATCH] rhui: do not bootstrap target client on aws
Subject: [PATCH 01/34] rhui: do not bootstrap target client on aws
Bootstrapping target RHUI client now requires installing the entire
RHEL8 RPM stack. Therefore, do not try installing target client
@ -247,5 +247,5 @@ index 3eaa4826..0a2e45af 100644
class RHUIInfo(Model):
"""
--
2.43.0
2.42.0


@ -0,0 +1,830 @@
From b875ae256cc61336c76ea83f5e40eb7895cab0fc Mon Sep 17 00:00:00 2001
From: Petr Stodulka <pstodulk@redhat.com>
Date: Fri, 9 Feb 2024 13:00:16 +0100
Subject: [PATCH 02/34] Packit: Drop tests for obsoleted upgrade paths +
restructuralization
Dropping upgrade paths related to the following releases: 8.6, 8.9, 9.0,
9.3. See the previous commit for more info.
While dropping these releases, I realized that the current structure of
the tests is not suitable for such operations, as the current test/job
definitions have been chained. So e.g. tests for 8.10 -> 9.4 depended
on 8.9 -> 9.3, which depended on 8.8 -> 8.6, etc. Going even deeper,
the IPU 8->9 definitions have been based on the 7 -> 8 definitions.
So I updated the structure, separating tests for IPU 7 -> 8 and 8 -> 9
and also the deps between all upgrade paths. Now, particular tests
can inherit one of the *abstract* job definitions, so dropping or removing
tests for one upgrade path does not affect the other tests (a short sketch
of how this anchor-based inheritance resolves follows after this patch).
Also fixed some incorrect definitions in tests, like the wrong label
for `beaker-minimal-88to92` (orig "8.6to9.2").
Update the PR-welcome bot message to reflect the changes in upgrade paths.
Jira: OAMG-10451
---
.github/workflows/pr-welcome-msg.yml | 11 +-
.packit.yaml | 530 ++++++++++-----------------
2 files changed, 191 insertions(+), 350 deletions(-)
diff --git a/.github/workflows/pr-welcome-msg.yml b/.github/workflows/pr-welcome-msg.yml
index e791340e..e23c9bbb 100644
--- a/.github/workflows/pr-welcome-msg.yml
+++ b/.github/workflows/pr-welcome-msg.yml
@@ -24,18 +24,15 @@ jobs:
- **review please @oamg/developers** to notify leapp developers of the review request
- **/packit copr-build** to submit a public copr build using packit
- Packit will automatically schedule regression tests for this PR's build and latest upstream leapp build. If you need a different version of leapp from PR#42, use `/packit test oamg/leapp#42`
+ Packit will automatically schedule regression tests for this PR's build and latest upstream leapp build. If you need a different version of leapp, e.g. from PR#42, use `/packit test oamg/leapp#42`
+ Note that first time contributors cannot run tests automatically - they will be started by a reviewer.
It is possible to schedule specific on-demand tests as well. Currently 2 test sets are supported, `beaker-minimal` and `kernel-rt`, both can be used to be run on all upgrade paths or just a couple of specific ones.
To launch on-demand tests with packit:
- **/packit test --labels kernel-rt** to schedule `kernel-rt` tests set for all upgrade paths
- - **/packit test --labels beaker-minimal-8.9to9.3,kernel-rt-8.9to9.3** to schedule `kernel-rt` and `beaker-minimal` test sets for 8.9->9.3 upgrade path
+ - **/packit test --labels beaker-minimal-8.10to9.4,kernel-rt-8.10to9.4** to schedule `kernel-rt` and `beaker-minimal` test sets for 8.10->9.4 upgrade path
- [Deprecated] To launch on-demand regression testing public members of oamg organization can leave the following comment:
- - **/rerun** to schedule basic regression tests using this pr build and latest upstream leapp build as artifacts
- - **/rerun 42** to schedule basic regression tests using this pr build and leapp\*PR42\* as artifacts
- - **/rerun-sst** to schedule sst tests using this pr build and latest upstream leapp build as artifacts
- - **/rerun-sst 42** to schedule sst tests using this pr build and leapp\*PR42\* as artifacts
+ See other labels for particular jobs defined in the `.packit.yaml` file.
Please [open ticket](https://url.corp.redhat.com/oamg-ci-issue) in case you experience technical problem with the CI. (RH internal only)
diff --git a/.packit.yaml b/.packit.yaml
index 491b1450..bce97bad 100644
--- a/.packit.yaml
+++ b/.packit.yaml
@@ -85,18 +85,31 @@ jobs:
# builds from master branch should start with 100 release, to have high priority
- bash -c "sed -i \"s/1%{?dist}/100%{?dist}/g\" packaging/leapp-repository.spec"
-- &sanity-79to86
+
+# NOTE: to see what envars, targets, .. can be set in tests, see
+# the configuration of tests here:
+# https://gitlab.cee.redhat.com/oamg/leapp-tests/-/blob/main/config.yaml
+# Available only to RH Employees.
+
+# ###################################################################### #
+# ############################### 7 TO 8 ############################### #
+# ###################################################################### #
+
+# ###################################################################### #
+# ### Abstract job definitions to make individual tests/jobs smaller ### #
+# ###################################################################### #
+- &sanity-abstract-7to8
job: tests
+ trigger: ignore
fmf_url: "https://gitlab.cee.redhat.com/oamg/leapp-tests"
fmf_ref: "main"
use_internal_tf: True
- trigger: pull_request
labels:
- sanity
targets:
epel-7-x86_64:
distros: [RHEL-7.9-ZStream]
- identifier: sanity-7.9to8.6
+ identifier: sanity-abstract-7to8
tmt_plan: ""
tf_extra_params:
test:
@@ -110,20 +123,16 @@ jobs:
provisioning:
tags:
BusinessUnit: sst_upgrades@leapp_upstream_test
- env:
- SOURCE_RELEASE: "7.9"
- TARGET_RELEASE: "8.6"
- LEAPPDATA_BRANCH: "upstream"
-- &sanity-79to86-aws
- <<: *sanity-79to86
+- &sanity-abstract-7to8-aws
+ <<: *sanity-abstract-7to8
labels:
- sanity
- aws
targets:
epel-7-x86_64:
distros: [RHEL-7.9-rhui]
- identifier: sanity-7.9to8.6-aws
+ identifier: sanity-abstract-7to8-aws
# NOTE(ivasilev) Unfortunately to use yaml templates we need to rewrite the whole tf_extra_params dict
# to use plan_filter (can't just specify one section test.tmt.plan_filter, need to specify environments.* as well)
tf_extra_params:
@@ -139,57 +148,14 @@ jobs:
post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys; yum-config-manager --enable rhel-7-server-rhui-optional-rpms"
tags:
BusinessUnit: sst_upgrades@leapp_upstream_test
- env:
- SOURCE_RELEASE: "7.9"
- TARGET_RELEASE: "8.6"
- RHUI: "aws"
- LEAPPDATA_BRANCH: "upstream"
- LEAPP_NO_RHSM: "1"
- USE_CUSTOM_REPOS: rhui
-
-- &sanity-79to88-aws
- <<: *sanity-79to86-aws
- identifier: sanity-7.9to8.8-aws
- env:
- SOURCE_RELEASE: "7.9"
- TARGET_RELEASE: "8.8"
- RHUI: "aws"
- LEAPPDATA_BRANCH: "upstream"
- LEAPP_NO_RHSM: "1"
- USE_CUSTOM_REPOS: rhui
-
-- &sanity-79to89-aws
- <<: *sanity-79to86-aws
- identifier: sanity-7.9to8.9-aws
- env:
- SOURCE_RELEASE: "7.9"
- TARGET_RELEASE: "8.9"
- RHUI: "aws"
- LEAPPDATA_BRANCH: "upstream"
- LEAPP_NO_RHSM: "1"
- USE_CUSTOM_REPOS: rhui
-
-# NOTE(mkluson) RHEL 8.10 content is not publicly available (via RHUI)
-#- &sanity-79to810-aws
-# <<: *sanity-79to86-aws
-# identifier: sanity-7.9to8.10-aws
-# env:
-# SOURCE_RELEASE: "7.9"
-# TARGET_RELEASE: "8.10"
-# RHUI: "aws"
-# LEAPPDATA_BRANCH: "upstream"
-# LEAPP_NO_RHSM: "1"
-# USE_CUSTOM_REPOS: rhui
# On-demand minimal beaker tests
-- &beaker-minimal-79to86
- <<: *sanity-79to86
+- &beaker-minimal-7to8-abstract-ondemand
+ <<: *sanity-abstract-7to8
manual_trigger: True
labels:
- beaker-minimal
- - beaker-minimal-7.9to8.6
- - 7.9to8.6
- identifier: sanity-7.9to8.6-beaker-minimal
+ identifier: beaker-minimal-7to8-abstract-ondemand
tf_extra_params:
test:
tmt:
@@ -204,13 +170,11 @@ jobs:
BusinessUnit: sst_upgrades@leapp_upstream_test
# On-demand kernel-rt tests
-- &kernel-rt-79to86
- <<: *beaker-minimal-79to86
+- &kernel-rt-abstract-7to8-ondemand
+ <<: *beaker-minimal-7to8-abstract-ondemand
labels:
- kernel-rt
- - kernel-rt-7.9to8.6
- - 7.9to8.6
- identifier: sanity-7.9to8.6-kernel-rt
+ identifier: sanity-7to8-kernel-rt-abstract-ondemand
tf_extra_params:
test:
tmt:
@@ -224,114 +188,133 @@ jobs:
tags:
BusinessUnit: sst_upgrades@leapp_upstream_test
+
+# ###################################################################### #
+# ######################### Individual tests ########################### #
+# ###################################################################### #
+
+# Tests: 7.9 -> 8.8
+- &sanity-79to88-aws
+ <<: *sanity-abstract-7to8-aws
+ trigger: pull_request
+ identifier: sanity-7.9to8.8-aws
+ env:
+ SOURCE_RELEASE: "7.9"
+ TARGET_RELEASE: "8.8"
+ RHUI: "aws"
+ LEAPPDATA_BRANCH: "upstream"
+ LEAPP_NO_RHSM: "1"
+ USE_CUSTOM_REPOS: rhui
+
- &sanity-79to88
- <<: *sanity-79to86
+ <<: *sanity-abstract-7to8
+ trigger: pull_request
identifier: sanity-7.9to8.8
env:
SOURCE_RELEASE: "7.9"
TARGET_RELEASE: "8.8"
LEAPPDATA_BRANCH: "upstream"
-# On-demand minimal beaker tests
- &beaker-minimal-79to88
- <<: *beaker-minimal-79to86
+ <<: *beaker-minimal-7to8-abstract-ondemand
+ trigger: pull_request
labels:
- beaker-minimal
- beaker-minimal-7.9to8.8
- 7.9to8.8
- identifier: sanity-7.9to8.8-beaker-minimal
+ identifier: sanity-7.9to8.8-beaker-minimal-ondemand
env:
SOURCE_RELEASE: "7.9"
TARGET_RELEASE: "8.8"
LEAPPDATA_BRANCH: "upstream"
-# On-demand kernel-rt tests
- &kernel-rt-79to88
- <<: *kernel-rt-79to86
+ <<: *kernel-rt-abstract-7to8-ondemand
+ trigger: pull_request
labels:
- kernel-rt
- kernel-rt-7.9to8.8
- 7.9to8.8
- identifier: sanity-7.9to8.8-kernel-rt
+ identifier: sanity-7.9to8.8-kernel-rt-ondemand
env:
SOURCE_RELEASE: "7.9"
TARGET_RELEASE: "8.8"
LEAPPDATA_BRANCH: "upstream"
-- &sanity-79to89
- <<: *sanity-79to86
- identifier: sanity-7.9to8.9
- env:
- SOURCE_RELEASE: "7.9"
- TARGET_RELEASE: "8.9"
- LEAPPDATA_BRANCH: "upstream"
-
-# On-demand minimal beaker tests
-- &beaker-minimal-79to89
- <<: *beaker-minimal-79to86
- labels:
- - beaker-minimal
- - beaker-minimal-7.9to8.9
- - 7.9to8.9
- identifier: sanity-7.9to8.9-beaker-minimal
- env:
- SOURCE_RELEASE: "7.9"
- TARGET_RELEASE: "8.9"
- LEAPPDATA_BRANCH: "upstream"
-
-# On-demand kernel-rt tests
-- &kernel-rt-79to89
- <<: *kernel-rt-79to88
- labels:
- - kernel-rt
- - kernel-rt-7.9to8.9
- - 7.9to8.9
- identifier: sanity-7.9to8.9-kernel-rt
- env:
- SOURCE_RELEASE: "7.9"
- TARGET_RELEASE: "8.9"
- LEAPPDATA_BRANCH: "upstream"
-
+# Tests: 7.9 -> 8.10
- &sanity-79to810
- <<: *sanity-79to86
+ <<: *sanity-abstract-7to8
+ trigger: pull_request
identifier: sanity-7.9to8.10
env:
SOURCE_RELEASE: "7.9"
TARGET_RELEASE: "8.10"
LEAPPDATA_BRANCH: "upstream"
-# On-demand minimal beaker tests
+# NOTE(mkluson) RHEL 8.10 content is not publicly available (via RHUI)
+#- &sanity-79to810-aws
+# <<: *sanity-abstract-7to8-aws
+# trigger: pull_request
+# identifier: sanity-7.9to8.10-aws
+# env:
+# SOURCE_RELEASE: "7.9"
+# TARGET_RELEASE: "8.10"
+# RHUI: "aws"
+# LEAPPDATA_BRANCH: "upstream"
+# LEAPP_NO_RHSM: "1"
+# USE_CUSTOM_REPOS: rhui
+
- &beaker-minimal-79to810
- <<: *beaker-minimal-79to86
+ <<: *beaker-minimal-7to8-abstract-ondemand
+ trigger: pull_request
labels:
- beaker-minimal
- beaker-minimal-7.9to8.10
- 7.9to8.10
- identifier: sanity-7.9to8.10-beaker-minimal
+ identifier: sanity-7.9to8.10-beaker-minimal-ondemand
env:
SOURCE_RELEASE: "7.9"
TARGET_RELEASE: "8.10"
LEAPPDATA_BRANCH: "upstream"
-# On-demand kernel-rt tests
- &kernel-rt-79to810
- <<: *kernel-rt-79to88
+ <<: *kernel-rt-abstract-7to8-ondemand
+ trigger: pull_request
labels:
- kernel-rt
- kernel-rt-7.9to8.10
- 7.9to8.10
- identifier: sanity-7.9to8.10-kernel-rt
+ identifier: sanity-7.9to8.10-kernel-rt-ondemand
env:
SOURCE_RELEASE: "7.9"
TARGET_RELEASE: "8.10"
LEAPPDATA_BRANCH: "upstream"
-- &sanity-86to90
- <<: *sanity-79to86
+
+# ###################################################################### #
+# ############################## 8 TO 10 ############################### #
+# ###################################################################### #
+
+# ###################################################################### #
+# ### Abstract job definitions to make individual tests/jobs smaller ### #
+# ###################################################################### #
+
+#NOTE(pstodulk) putting default values in abstract jobs as from 8.10, as this
+# is the last RHEL 8 release and all new future tests will start from this
+# one release.
+
+- &sanity-abstract-8to9
+ job: tests
+ trigger: ignore
+ fmf_url: "https://gitlab.cee.redhat.com/oamg/leapp-tests"
+ fmf_ref: "main"
+ use_internal_tf: True
+ labels:
+ - sanity
targets:
epel-8-x86_64:
- distros: [RHEL-8.6.0-Nightly]
- identifier: sanity-8.6to9.0
+ distros: [RHEL-8.10.0-Nightly]
+ identifier: sanity-abstract-8to9
tf_extra_params:
test:
tmt:
@@ -339,28 +322,44 @@ jobs:
environments:
- tmt:
context:
- distro: "rhel-8.6"
+ distro: "rhel-8.10"
settings:
provisioning:
tags:
BusinessUnit: sst_upgrades@leapp_upstream_test
- env:
- SOURCE_RELEASE: "8.6"
- TARGET_RELEASE: "9.0"
- RHSM_REPOS_EUS: "eus"
- LEAPPDATA_BRANCH: "upstream"
-# On-demand minimal beaker tests
-- &beaker-minimal-86to90
- <<: *beaker-minimal-79to86
+- &sanity-abstract-8to9-aws
+ <<: *sanity-abstract-8to9
+ labels:
+ - sanity
+ - aws
+ targets:
+ epel-8-x86_64:
+ distros: [RHEL-8.10-rhui]
+ identifier: sanity-abstract-8to9-aws
+ tf_extra_params:
+ test:
+ tmt:
+ plan_filter: 'tag:upgrade_happy_path & enabled:true'
+ environments:
+ - tmt:
+ context:
+ distro: "rhel-8.10"
+ settings:
+ provisioning:
+ post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
+ tags:
+ BusinessUnit: sst_upgrades@leapp_upstream_test
+
+- &beaker-minimal-8to9-abstract-ondemand
+ <<: *sanity-abstract-8to9
+ manual_trigger: True
labels:
- beaker-minimal
- - beaker-minimal-8.6to9.0
- - 8.6to9.0
targets:
epel-8-x86_64:
- distros: [RHEL-8.6.0-Nightly]
- identifier: sanity-8.6to9.0-beaker-minimal
+ distros: [RHEL-8.10.0-Nightly]
+ identifier: beaker-minimal-8to9-abstract-ondemand
tf_extra_params:
test:
tmt:
@@ -368,25 +367,17 @@ jobs:
environments:
- tmt:
context:
- distro: "rhel-8.6"
+ distro: "rhel-8.10"
settings:
provisioning:
tags:
BusinessUnit: sst_upgrades@leapp_upstream_test
- env:
- SOURCE_RELEASE: "8.6"
- TARGET_RELEASE: "9.0"
- RHSM_REPOS_EUS: "eus"
- LEAPPDATA_BRANCH: "upstream"
-# On-demand kernel-rt tests
-- &kernel-rt-86to90
- <<: *beaker-minimal-86to90
+- &kernel-rt-abstract-8to9-ondemand
+ <<: *beaker-minimal-8to9-abstract-ondemand
labels:
- kernel-rt
- - kernel-rt-8.6to9.0
- - 8.6to9.0
- identifier: sanity-8.6to9.0-kernel-rt
+ identifier: sanity-8to9-kernel-rt-abstract-ondemand
tf_extra_params:
test:
tmt:
@@ -394,14 +385,21 @@ jobs:
environments:
- tmt:
context:
- distro: "rhel-8.6"
+ distro: "rhel-8.10"
settings:
provisioning:
tags:
BusinessUnit: sst_upgrades@leapp_upstream_test
+
+# ###################################################################### #
+# ######################### Individual tests ########################### #
+# ###################################################################### #
+
+# Tests: 8.8 -> 9.2
- &sanity-88to92
- <<: *sanity-86to90
+ <<: *sanity-abstract-8to9
+ trigger: pull_request
targets:
epel-8-x86_64:
distros: [RHEL-8.8.0-Nightly]
@@ -425,21 +423,18 @@ jobs:
LEAPPDATA_BRANCH: "upstream"
LEAPP_DEVEL_TARGET_RELEASE: "9.2"
-# On-demand minimal beaker tests
-- &beaker-minimal-88to92
- <<: *beaker-minimal-86to90
- labels:
- - beaker-minimal
- - beaker-minimal-8.8to9.2
- - 8.6to9.2
+- &sanity-88to92-aws
+ <<: *sanity-abstract-8to9-aws
+ trigger: pull_request
targets:
epel-8-x86_64:
- distros: [RHEL-8.8.0-Nightly]
- identifier: sanity-8.8to9.2-beaker-minimal
+ distros: [RHEL-8.8-rhui]
+ identifier: sanity-8.8to9.2-aws
+ # NOTE(mkluson) Unfortunately to use yaml templates we need to rewrite the whole tf_extra_params dict
tf_extra_params:
test:
tmt:
- plan_filter: 'tag:partitioning & tag:8to9 & enabled:true'
+ plan_filter: 'tag:upgrade_happy_path & enabled:true'
environments:
- tmt:
context:
@@ -452,122 +447,77 @@ jobs:
env:
SOURCE_RELEASE: "8.8"
TARGET_RELEASE: "9.2"
+ RHSM_REPOS: "rhel-8-for-x86_64-appstream-eus-rpms,rhel-8-for-x86_64-baseos-eus-rpms"
+ RHUI: "aws"
LEAPPDATA_BRANCH: "upstream"
- LEAPP_DEVEL_TARGET_RELEASE: "9.2"
+ LEAPP_NO_RHSM: "1"
+ USE_CUSTOM_REPOS: rhui
-# On-demand kernel-rt tests
-- &kernel-rt-88to92
- <<: *beaker-minimal-88to92
+- &beaker-minimal-88to92
+ <<: *beaker-minimal-8to9-abstract-ondemand
+ trigger: pull_request
labels:
- - kernel-rt
- - kernel-rt-8.8to9.2
+ - beaker-minimal
+ - beaker-minimal-8.8to9.2
- 8.8to9.2
- identifier: sanity-8.8to9.2-kernel-rt
- tf_extra_params:
- test:
- tmt:
- plan_filter: 'tag:kernel-rt & tag:8to9 & enabled:true'
- environments:
- - tmt:
- context:
- distro: "rhel-8.8"
- settings:
- provisioning:
- tags:
- BusinessUnit: sst_upgrades@leapp_upstream_test
-
-- &sanity-89to93
- <<: *sanity-88to92
targets:
epel-8-x86_64:
- distros: [RHEL-8.9.0-Nightly]
- identifier: sanity-8.9to9.3
+ distros: [RHEL-8.8.0-Nightly]
+ identifier: sanity-8.8to9.2-beaker-minimal-ondemand
tf_extra_params:
test:
tmt:
- plan_filter: 'tag:sanity & tag:8to9 & enabled:true'
+ plan_filter: 'tag:partitioning & tag:8to9 & enabled:true'
environments:
- tmt:
context:
- distro: "rhel-8.9"
+ distro: "rhel-8.8"
settings:
provisioning:
+ post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
tags:
BusinessUnit: sst_upgrades@leapp_upstream_test
env:
- SOURCE_RELEASE: "8.9"
- TARGET_RELEASE: "9.3"
+ SOURCE_RELEASE: "8.8"
+ TARGET_RELEASE: "9.2"
LEAPPDATA_BRANCH: "upstream"
- LEAPP_DEVEL_TARGET_RELEASE: "9.3"
+ LEAPP_DEVEL_TARGET_RELEASE: "9.2"
-# On-demand minimal beaker tests
-- &beaker-minimal-89to93
- <<: *beaker-minimal-88to92
+- &kernel-rt-88to92
+ <<: *kernel-rt-abstract-8to9-ondemand
+ trigger: pull_request
labels:
- - beaker-minimal
- - beaker-minimal-8.9to9.3
- - 8.9to9.3
+ - kernel-rt
+ - kernel-rt-8.8to9.2
+ - 8.8to9.2
+ identifier: sanity-8.8to9.2-kernel-rt-ondemand
targets:
epel-8-x86_64:
- distros: [RHEL-8.9.0-Nightly]
- identifier: sanity-8.9to9.3-beaker-minimal
+ distros: [RHEL-8.8.0-Nightly]
tf_extra_params:
test:
tmt:
- plan_filter: 'tag:partitioning & tag:8to9 & enabled:true'
+ plan_filter: 'tag:kernel-rt & tag:8to9 & enabled:true'
environments:
- tmt:
context:
- distro: "rhel-8.9"
+ distro: "rhel-8.8"
settings:
provisioning:
tags:
BusinessUnit: sst_upgrades@leapp_upstream_test
env:
- SOURCE_RELEASE: "8.9"
- TARGET_RELEASE: "9.3"
+ SOURCE_RELEASE: "8.8"
+ TARGET_RELEASE: "9.2"
LEAPPDATA_BRANCH: "upstream"
- LEAPP_DEVEL_TARGET_RELEASE: "9.3"
+ LEAPP_DEVEL_TARGET_RELEASE: "9.2"
-# On-demand kernel-rt tests
-- &kernel-rt-89to93
- <<: *beaker-minimal-89to93
- labels:
- - kernel-rt
- - kernel-rt-8.9to9.3
- - 8.9to9.3
- identifier: sanity-8.9to9.3-kernel-rt
- tf_extra_params:
- test:
- tmt:
- plan_filter: 'tag:kernel-rt & tag:8to9 & enabled:true'
- environments:
- - tmt:
- context:
- distro: "rhel-8.9"
- settings:
- provisioning:
- tags:
- BusinessUnit: sst_upgrades@leapp_upstream_test
+# Tests: 8.10 -> 9.4
- &sanity-810to94
- <<: *sanity-88to92
- targets:
- epel-8-x86_64:
- distros: [RHEL-8.10.0-Nightly]
+ <<: *sanity-abstract-8to9
+ trigger: pull_request
identifier: sanity-8.10to9.4
- tf_extra_params:
- test:
- tmt:
- plan_filter: 'tag:sanity & tag:8to9 & enabled:true'
- environments:
- - tmt:
- context:
- distro: "rhel-8.10"
- settings:
- provisioning:
- tags:
- BusinessUnit: sst_upgrades@leapp_upstream_test
env:
SOURCE_RELEASE: "8.10"
TARGET_RELEASE: "9.4"
@@ -576,27 +526,13 @@ jobs:
# On-demand minimal beaker tests
- &beaker-minimal-810to94
- <<: *beaker-minimal-88to92
+ <<: *beaker-minimal-8to9-abstract-ondemand
+ trigger: pull_request
labels:
- beaker-minimal
- beaker-minimal-8.10to9.4
- 8.10to9.4
- targets:
- epel-8-x86_64:
- distros: [RHEL-8.10.0-Nightly]
- identifier: sanity-8.10to9.4-beaker-minimal
- tf_extra_params:
- test:
- tmt:
- plan_filter: 'tag:partitioning & tag:8to9 & enabled:true'
- environments:
- - tmt:
- context:
- distro: "rhel-8.10"
- settings:
- provisioning:
- tags:
- BusinessUnit: sst_upgrades@leapp_upstream_test
+ identifier: sanity-8.10to9.4-beaker-minimal-ondemand
env:
SOURCE_RELEASE: "8.10"
TARGET_RELEASE: "9.4"
@@ -604,107 +540,15 @@ jobs:
# On-demand kernel-rt tests
- &kernel-rt-810to94
- <<: *beaker-minimal-810to94
+ <<: *kernel-rt-abstract-8to9-ondemand
+ trigger: pull_request
labels:
- kernel-rt
- kernel-rt-8.10to9.4
- 8.10to9.4
- identifier: sanity-8.10to9.4-kernel-rt
- tf_extra_params:
- test:
- tmt:
- plan_filter: 'tag:kernel-rt & tag:8to9 & enabled:true'
- environments:
- - tmt:
- context:
- distro: "rhel-8.10"
- settings:
- provisioning:
- tags:
- BusinessUnit: sst_upgrades@leapp_upstream_test
-
-- &sanity-86to90-aws
- <<: *sanity-79to86-aws
- targets:
- epel-8-x86_64:
- distros: [RHEL-8.6-rhui]
- identifier: sanity-8.6to9.0-aws
- tf_extra_params:
- test:
- tmt:
- plan_filter: 'tag:upgrade_happy_path & enabled:true'
- environments:
- - tmt:
- context:
- distro: "rhel-8.6"
- settings:
- provisioning:
- post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
- tags:
- BusinessUnit: sst_upgrades@leapp_upstream_test
- env:
- SOURCE_RELEASE: "8.6"
- TARGET_RELEASE: "9.0"
- RHSM_REPOS: "rhel-8-for-x86_64-appstream-eus-rpms,rhel-8-for-x86_64-baseos-eus-rpms"
- RHUI: "aws"
- LEAPPDATA_BRANCH: "upstream"
- LEAPP_NO_RHSM: "1"
- USE_CUSTOM_REPOS: rhui
-
-- &sanity-88to92-aws
- <<: *sanity-86to90-aws
- targets:
- epel-8-x86_64:
- distros: [RHEL-8.8-rhui]
- identifier: sanity-8.8to9.2-aws
- # NOTE(mkluson) Unfortunately to use yaml templates we need to rewrite the whole tf_extra_params dict
- tf_extra_params:
- test:
- tmt:
- plan_filter: 'tag:upgrade_happy_path & enabled:true'
- environments:
- - tmt:
- context:
- distro: "rhel-8.8"
- settings:
- provisioning:
- post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
- tags:
- BusinessUnit: sst_upgrades@leapp_upstream_test
+ identifier: sanity-8.10to9.4-kernel-rt-ondemand
env:
- SOURCE_RELEASE: "8.8"
- TARGET_RELEASE: "9.2"
- RHSM_REPOS: "rhel-8-for-x86_64-appstream-eus-rpms,rhel-8-for-x86_64-baseos-eus-rpms"
- RHUI: "aws"
- LEAPPDATA_BRANCH: "upstream"
- LEAPP_NO_RHSM: "1"
- USE_CUSTOM_REPOS: rhui
-
-- &sanity-89to93-aws
- <<: *sanity-86to90-aws
- targets:
- epel-8-x86_64:
- distros: [RHEL-8.9-rhui]
- identifier: sanity-8.9to9.3-aws
- # NOTE(mkluson) Unfortunately to use yaml templates we need to rewrite the whole tf_extra_params dict
- tf_extra_params:
- test:
- tmt:
- plan_filter: 'tag:upgrade_happy_path & enabled:true'
- environments:
- - tmt:
- context:
- distro: "rhel-8.9"
- settings:
- provisioning:
- post_install_script: "#!/bin/sh\nsudo sed -i s/.*ssh-rsa/ssh-rsa/ /root/.ssh/authorized_keys"
- tags:
- BusinessUnit: sst_upgrades@leapp_upstream_test
- env:
- SOURCE_RELEASE: "8.9"
- TARGET_RELEASE: "9.3"
- RHSM_REPOS: "rhel-8-for-x86_64-appstream-rpms,rhel-8-for-x86_64-baseos-rpms"
- RHUI: "aws"
+ SOURCE_RELEASE: "8.10"
+ TARGET_RELEASE: "9.4"
+ RHSM_REPOS: "rhel-8-for-x86_64-appstream-beta-rpms,rhel-8-for-x86_64-baseos-beta-rpms"
LEAPPDATA_BRANCH: "upstream"
- LEAPP_NO_RHSM: "1"
- USE_CUSTOM_REPOS: rhui
--
2.42.0
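
Not part of the patch above, just an illustration of the pattern it introduces: the *abstract* jobs are YAML anchors, and the concrete jobs pull them in via the merge key, overriding only what differs. PyYAML resolves the merge key in the same way, so a tiny sketch (anchor names taken from the patch; the snippet itself is heavily trimmed) shows the resulting job definition:

```python
import yaml

SNIPPET = """
- &sanity-abstract-7to8
  job: tests
  trigger: ignore
  labels:
    - sanity
- &sanity-79to88
  <<: *sanity-abstract-7to8
  trigger: pull_request
  identifier: sanity-7.9to8.8
"""

jobs = yaml.safe_load(SNIPPET)
concrete = jobs[1]
assert concrete["job"] == "tests"             # inherited from the abstract job
assert concrete["trigger"] == "pull_request"  # overridden by the concrete job
assert concrete["identifier"] == "sanity-7.9to8.8"
```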


@ -0,0 +1,25 @@
From db8a0cf5c66ced0ed49990a40a45a08373b34af5 Mon Sep 17 00:00:00 2001
From: Evgeni Golov <evgeni@golov.de>
Date: Fri, 1 Mar 2024 20:30:04 +0100
Subject: [PATCH 03/34] silence use-yield-from from pylint 3.1
`yield from` cannot be used until we require Python 3.3 or greater (see
the short example after this patch).
---
.pylintrc | 1 +
1 file changed, 1 insertion(+)
diff --git a/.pylintrc b/.pylintrc
index 57259bcb..f78c1c3f 100644
--- a/.pylintrc
+++ b/.pylintrc
@@ -57,6 +57,7 @@ disable=
redundant-u-string-prefix, # still have py2 to support
logging-format-interpolation,
logging-not-lazy,
+ use-yield-from, # yield from cannot be used until we require python 3.3 or greater
too-many-lines # we do not want to take care about that one
[FORMAT]
--
2.42.0
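
For context on the patch above: `yield from` only exists on Python 3.3+, so while the code base still has to support Python 2, delegation has to stay an explicit loop, which is exactly what pylint 3.1's new `use-yield-from` check would flag. A minimal illustration (hypothetical helpers, not code from the repository):

```python
def iter_items(groups):
    # The form the repository has to keep for now; pylint 3.1 flags the
    # inner loop with use-yield-from.
    for group in groups:
        for item in group:
            yield item


def iter_items_py33(groups):
    # The suggested replacement; requires Python >= 3.3.
    for group in groups:
        yield from group
```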


@ -0,0 +1,23 @@
From 214ed9b57c5e291cda5ff6baf7c7a790038fef34 Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Mon, 11 Mar 2024 18:30:23 +0100
Subject: [PATCH 04/34] rocescanner: Actually call process() in
test_roce_notibmz test
---
.../el8toel9/actors/rocescanner/tests/unit_test_rocescanner.py | 1 +
1 file changed, 1 insertion(+)
diff --git a/repos/system_upgrade/el8toel9/actors/rocescanner/tests/unit_test_rocescanner.py b/repos/system_upgrade/el8toel9/actors/rocescanner/tests/unit_test_rocescanner.py
index a4889328..ee9e4498 100644
--- a/repos/system_upgrade/el8toel9/actors/rocescanner/tests/unit_test_rocescanner.py
+++ b/repos/system_upgrade/el8toel9/actors/rocescanner/tests/unit_test_rocescanner.py
@@ -151,4 +151,5 @@ def test_roce_noibmz(monkeypatch, arch):
monkeypatch.setattr(rocescanner.api, 'current_actor', CurrentActorMocked(arch=arch))
monkeypatch.setattr(rocescanner.api.current_actor(), 'produce', mocked_produce)
monkeypatch.setattr(rocescanner, 'get_roce_nics_lines', lambda: mocked_roce_lines)
+ rocescanner.process()
assert not mocked_produce.called
--
2.42.0


@ -0,0 +1,624 @@
From 050620eabe52a2184b40a7ac2818d927516d8b6d Mon Sep 17 00:00:00 2001
From: David Kubek <dkubek@redhat.com>
Date: Tue, 20 Feb 2024 20:54:16 +0100
Subject: [PATCH 05/34] Fix incorrect parsing of `lscpu` output
The original solution always expected a ``key: val`` pair on each line.
However, it did not account for the value possibly being an empty
string, which would lead to a situation where the following line is
interpreted as the value.
The new solution updates the parsing of the output on RHEL 7, and also
newly calls ``lscpu -J`` on RHEL 8+ to obtain the data in JSON format,
which eliminates all possible parsing problems on our side.
Fixes #1182
---
.github/workflows/codespell.yml | 2 +-
.../actors/scancpu/libraries/scancpu.py | 39 +++++----
.../actors/scancpu/tests/files/json/invalid | 2 +
.../scancpu/tests/files/json/lscpu_aarch64 | 29 +++++++
.../scancpu/tests/files/json/lscpu_ppc64le | 19 +++++
.../scancpu/tests/files/json/lscpu_s390x | 30 +++++++
.../scancpu/tests/files/json/lscpu_x86_64 | 31 +++++++
.../actors/scancpu/tests/files/lscpu_aarch64 | 26 ------
.../actors/scancpu/tests/files/lscpu_ppc64le | 24 ------
.../actors/scancpu/tests/files/lscpu_s390x | 38 ---------
.../scancpu/tests/files/txt/lscpu_aarch64 | 25 ++++++
.../scancpu/tests/files/txt/lscpu_empty_field | 4 +
.../scancpu/tests/files/txt/lscpu_ppc64le | 15 ++++
.../scancpu/tests/files/txt/lscpu_s390x | 26 ++++++
.../tests/files/{ => txt}/lscpu_x86_64 | 0
.../actors/scancpu/tests/test_scancpu.py | 82 ++++++++++++++++---
16 files changed, 279 insertions(+), 113 deletions(-)
create mode 100644 repos/system_upgrade/common/actors/scancpu/tests/files/json/invalid
create mode 100644 repos/system_upgrade/common/actors/scancpu/tests/files/json/lscpu_aarch64
create mode 100644 repos/system_upgrade/common/actors/scancpu/tests/files/json/lscpu_ppc64le
create mode 100644 repos/system_upgrade/common/actors/scancpu/tests/files/json/lscpu_s390x
create mode 100644 repos/system_upgrade/common/actors/scancpu/tests/files/json/lscpu_x86_64
delete mode 100644 repos/system_upgrade/common/actors/scancpu/tests/files/lscpu_aarch64
delete mode 100644 repos/system_upgrade/common/actors/scancpu/tests/files/lscpu_ppc64le
delete mode 100644 repos/system_upgrade/common/actors/scancpu/tests/files/lscpu_s390x
create mode 100644 repos/system_upgrade/common/actors/scancpu/tests/files/txt/lscpu_aarch64
create mode 100644 repos/system_upgrade/common/actors/scancpu/tests/files/txt/lscpu_empty_field
create mode 100644 repos/system_upgrade/common/actors/scancpu/tests/files/txt/lscpu_ppc64le
create mode 100644 repos/system_upgrade/common/actors/scancpu/tests/files/txt/lscpu_s390x
rename repos/system_upgrade/common/actors/scancpu/tests/files/{ => txt}/lscpu_x86_64 (100%)
diff --git a/.github/workflows/codespell.yml b/.github/workflows/codespell.yml
index 24add3fb..4921bc90 100644
--- a/.github/workflows/codespell.yml
+++ b/.github/workflows/codespell.yml
@@ -23,7 +23,7 @@ jobs:
./repos/system_upgrade/el8toel9/actors/xorgdrvfact/tests/files/journalctl-xorg-intel,\
./repos/system_upgrade/el8toel9/actors/xorgdrvfact/tests/files/journalctl-xorg-qxl,\
./repos/system_upgrade/el8toel9/actors/xorgdrvfact/tests/files/journalctl-xorg-without-qxl,\
- ./repos/system_upgrade/common/actors/scancpu/tests/files/lscpu_s390x,\
+ ./repos/system_upgrade/common/actors/scancpu/tests/files,\
./etc/leapp/files/device_driver_deprecation_data.json,\
./etc/leapp/files/pes-events.json,\
./etc/leapp/files/repomap.json,\
diff --git a/repos/system_upgrade/common/actors/scancpu/libraries/scancpu.py b/repos/system_upgrade/common/actors/scancpu/libraries/scancpu.py
index 9de50fae..7451066a 100644
--- a/repos/system_upgrade/common/actors/scancpu/libraries/scancpu.py
+++ b/repos/system_upgrade/common/actors/scancpu/libraries/scancpu.py
@@ -1,22 +1,41 @@
+import json
import re
from leapp.libraries.common.config import architecture
+from leapp.libraries.common.config.version import get_source_major_version
from leapp.libraries.stdlib import api, CalledProcessError, run
from leapp.models import CPUInfo, DetectedDeviceOrDriver, DeviceDriverDeprecationData
-LSCPU_NAME_VALUE = re.compile(r'(?P<name>[^:]+):\s+(?P<value>.+)\n?')
+LSCPU_NAME_VALUE = re.compile(r'^(?P<name>[^:]+):[^\S\n]+(?P<value>.+)\n?', flags=re.MULTILINE)
PPC64LE_MODEL = re.compile(r'\d+\.\d+ \(pvr (?P<family>[0-9a-fA-F]+) 0*[0-9a-fA-F]+\)')
-def _get_lscpu_output():
+def _get_lscpu_output(output_json=False):
try:
- result = run(['lscpu'])
+ result = run(['lscpu', '-J' if output_json else ''])
return result.get('stdout', '')
except (OSError, CalledProcessError):
api.current_logger().debug('Executing `lscpu` failed', exc_info=True)
return ''
+def _parse_lscpu_output():
+ if get_source_major_version() == '7':
+ return dict(LSCPU_NAME_VALUE.findall(_get_lscpu_output()))
+
+ lscpu = _get_lscpu_output(output_json=True)
+ try:
+ parsed_json = json.loads(lscpu)
+ # The json contains one entry "lscpu" which is a list of dictionaries
+ # with 2 keys "field" (name of the field from lscpu) and "data" (value
+ # of the field).
+ return dict((entry['field'].rstrip(':'), entry['data']) for entry in parsed_json['lscpu'])
+ except ValueError:
+ api.current_logger().debug('Failed to parse json output from `lscpu`. Got:\n{}'.format(lscpu))
+
+ return dict()
+
+
def _get_cpu_flags(lscpu):
flags = lscpu.get('Flags', '')
return flags.split()
@@ -128,24 +147,16 @@ def _find_deprecation_data_entries(lscpu):
arch_prefix, is_detected = architecture.ARCH_ARM64, _is_detected_aarch64
if arch_prefix and is_detected:
- return [
- _to_detected_device(entry) for entry in _get_cpu_entries_for(arch_prefix)
- if is_detected(lscpu, entry)
- ]
+ return [_to_detected_device(entry) for entry in _get_cpu_entries_for(arch_prefix) if is_detected(lscpu, entry)]
api.current_logger().warning('Unsupported platform could not detect relevant CPU information')
return []
def process():
- lscpu = dict(LSCPU_NAME_VALUE.findall(_get_lscpu_output()))
+ lscpu = _parse_lscpu_output()
api.produce(*_find_deprecation_data_entries(lscpu))
# Backwards compatibility
machine_type = lscpu.get('Machine type')
flags = _get_cpu_flags(lscpu)
- api.produce(
- CPUInfo(
- machine_type=int(machine_type) if machine_type else None,
- flags=flags
- )
- )
+ api.produce(CPUInfo(machine_type=int(machine_type) if machine_type else None, flags=flags))
diff --git a/repos/system_upgrade/common/actors/scancpu/tests/files/json/invalid b/repos/system_upgrade/common/actors/scancpu/tests/files/json/invalid
new file mode 100644
index 00000000..422c2b7a
--- /dev/null
+++ b/repos/system_upgrade/common/actors/scancpu/tests/files/json/invalid
@@ -0,0 +1,2 @@
+a
+b
diff --git a/repos/system_upgrade/common/actors/scancpu/tests/files/json/lscpu_aarch64 b/repos/system_upgrade/common/actors/scancpu/tests/files/json/lscpu_aarch64
new file mode 100644
index 00000000..79186695
--- /dev/null
+++ b/repos/system_upgrade/common/actors/scancpu/tests/files/json/lscpu_aarch64
@@ -0,0 +1,29 @@
+{
+ "lscpu": [
+ {"field": "Architecture:", "data": "aarch64"},
+ {"field": "Byte Order:", "data": "Little Endian"},
+ {"field": "CPU(s):", "data": "160"},
+ {"field": "On-line CPU(s) list:", "data": "0-159"},
+ {"field": "Thread(s) per core:", "data": "1"},
+ {"field": "Core(s) per socket:", "data": "80"},
+ {"field": "Socket(s):", "data": "2"},
+ {"field": "NUMA node(s):", "data": "4"},
+ {"field": "Vendor ID:", "data": "ARM"},
+ {"field": "BIOS Vendor ID:", "data": "Ampere(R)"},
+ {"field": "Model:", "data": "1"},
+ {"field": "Model name:", "data": "Neoverse-N1"},
+ {"field": "BIOS Model name:", "data": "Ampere(R) Altra(R) Processor"},
+ {"field": "Stepping:", "data": "r3p1"},
+ {"field": "CPU max MHz:", "data": "3000.0000"},
+ {"field": "CPU min MHz:", "data": "1000.0000"},
+ {"field": "BogoMIPS:", "data": "50.00"},
+ {"field": "L1d cache:", "data": "64K"},
+ {"field": "L1i cache:", "data": "64K"},
+ {"field": "L2 cache:", "data": "1024K"},
+ {"field": "NUMA node0 CPU(s):", "data": "0-79"},
+ {"field": "NUMA node1 CPU(s):", "data": "80-159"},
+ {"field": "NUMA node2 CPU(s):", "data": null},
+ {"field": "NUMA node3 CPU(s):", "data": null},
+ {"field": "Flags:", "data": "fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp ssbs"}
+ ]
+}
diff --git a/repos/system_upgrade/common/actors/scancpu/tests/files/json/lscpu_ppc64le b/repos/system_upgrade/common/actors/scancpu/tests/files/json/lscpu_ppc64le
new file mode 100644
index 00000000..cc51c4ac
--- /dev/null
+++ b/repos/system_upgrade/common/actors/scancpu/tests/files/json/lscpu_ppc64le
@@ -0,0 +1,19 @@
+{
+ "lscpu": [
+ {"field": "Architecture:", "data": "ppc64le"},
+ {"field": "Byte Order:", "data": "Little Endian"},
+ {"field": "CPU(s):", "data": "8"},
+ {"field": "On-line CPU(s) list:", "data": "0-7"},
+ {"field": "Thread(s) per core:", "data": "1"},
+ {"field": "Core(s) per socket:", "data": "1"},
+ {"field": "Socket(s):", "data": "8"},
+ {"field": "NUMA node(s):", "data": "1"},
+ {"field": "Model:", "data": "2.1 (pvr 004b 0201)"},
+ {"field": "Model name:", "data": "POWER8E (raw), altivec supported"},
+ {"field": "Hypervisor vendor:", "data": "KVM"},
+ {"field": "Virtualization type:", "data": "para"},
+ {"field": "L1d cache:", "data": "64K"},
+ {"field": "L1i cache:", "data": "32K"},
+ {"field": "NUMA node0 CPU(s):", "data": "0-7"}
+ ]
+}
diff --git a/repos/system_upgrade/common/actors/scancpu/tests/files/json/lscpu_s390x b/repos/system_upgrade/common/actors/scancpu/tests/files/json/lscpu_s390x
new file mode 100644
index 00000000..950da2de
--- /dev/null
+++ b/repos/system_upgrade/common/actors/scancpu/tests/files/json/lscpu_s390x
@@ -0,0 +1,30 @@
+{
+ "lscpu": [
+ {"field": "Architecture:", "data": "s390x"},
+ {"field": "CPU op-mode(s):", "data": "32-bit, 64-bit"},
+ {"field": "Byte Order:", "data": "Big Endian"},
+ {"field": "CPU(s):", "data": "4"},
+ {"field": "On-line CPU(s) list:", "data": "0-3"},
+ {"field": "Thread(s) per core:", "data": "1"},
+ {"field": "Core(s) per socket:", "data": "1"},
+ {"field": "Socket(s) per book:", "data": "1"},
+ {"field": "Book(s) per drawer:", "data": "1"},
+ {"field": "Drawer(s):", "data": "4"},
+ {"field": "NUMA node(s):", "data": "1"},
+ {"field": "Vendor ID:", "data": "IBM/S390"},
+ {"field": "Machine type:", "data": "3931"},
+ {"field": "CPU dynamic MHz:", "data": "5200"},
+ {"field": "CPU static MHz:", "data": "5200"},
+ {"field": "BogoMIPS:", "data": "3331.00"},
+ {"field": "Hypervisor:", "data": "KVM/Linux"},
+ {"field": "Hypervisor vendor:", "data": "KVM"},
+ {"field": "Virtualization type:", "data": "full"},
+ {"field": "Dispatching mode:", "data": "horizontal"},
+ {"field": "L1d cache:", "data": "128K"},
+ {"field": "L1i cache:", "data": "128K"},
+ {"field": "L2 cache:", "data": "32768K"},
+ {"field": "L3 cache:", "data": "262144K"},
+ {"field": "NUMA node0 CPU(s):", "data": "0-3"},
+ {"field": "Flags:", "data": "esan3 zarch stfle msa ldisp eimm dfp edat etf3eh highgprs te vx vxd vxe gs vxe2 vxp sort dflt vxp2 nnpa sie"}
+ ]
+}
diff --git a/repos/system_upgrade/common/actors/scancpu/tests/files/json/lscpu_x86_64 b/repos/system_upgrade/common/actors/scancpu/tests/files/json/lscpu_x86_64
new file mode 100644
index 00000000..da75a3fa
--- /dev/null
+++ b/repos/system_upgrade/common/actors/scancpu/tests/files/json/lscpu_x86_64
@@ -0,0 +1,31 @@
+{
+ "lscpu": [
+ {"field": "Architecture:", "data": "x86_64"},
+ {"field": "CPU op-mode(s):", "data": "32-bit, 64-bit"},
+ {"field": "Byte Order:", "data": "Little Endian"},
+ {"field": "CPU(s):", "data": "2"},
+ {"field": "On-line CPU(s) list:", "data": "0,1"},
+ {"field": "Thread(s) per core:", "data": "1"},
+ {"field": "Core(s) per socket:", "data": "1"},
+ {"field": "Socket(s):", "data": "2"},
+ {"field": "NUMA node(s):", "data": "1"},
+ {"field": "Vendor ID:", "data": "GenuineIntel"},
+ {"field": "BIOS Vendor ID:", "data": "QEMU"},
+ {"field": "CPU family:", "data": "6"},
+ {"field": "Model:", "data": "165"},
+ {"field": "Model name:", "data": "Intel(R) Core(TM) i7-10850H CPU @ 2.70GHz"},
+ {"field": "BIOS Model name:", "data": "pc-i440fx-7.2"},
+ {"field": "Stepping:", "data": "2"},
+ {"field": "CPU MHz:", "data": "2712.006"},
+ {"field": "BogoMIPS:", "data": "5424.01"},
+ {"field": "Virtualization:", "data": "VT-x"},
+ {"field": "Hypervisor vendor:", "data": "KVM"},
+ {"field": "Virtualization type:", "data": "full"},
+ {"field": "L1d cache:", "data": "32K"},
+ {"field": "L1i cache:", "data": "32K"},
+ {"field": "L2 cache:", "data": "4096K"},
+ {"field": "L3 cache:", "data": "16384K"},
+ {"field": "NUMA node0 CPU(s):", "data": "0,1"},
+ {"field": "Flags:", "data": "fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts md_clear flush_l1d"}
+ ]
+}
diff --git a/repos/system_upgrade/common/actors/scancpu/tests/files/lscpu_aarch64 b/repos/system_upgrade/common/actors/scancpu/tests/files/lscpu_aarch64
deleted file mode 100644
index 5b6c3470..00000000
--- a/repos/system_upgrade/common/actors/scancpu/tests/files/lscpu_aarch64
+++ /dev/null
@@ -1,26 +0,0 @@
-Architecture: aarch64
-CPU op-mode(s): 32-bit, 64-bit
-Byte Order: Little Endian
-CPU(s): 5
-On-line CPU(s) list: 0-4
-Vendor ID: APM
-Model name: -
-Model: 2
-Thread(s) per core: 1
-Core(s) per cluster: 5
-Socket(s): -
-Cluster(s): 1
-Stepping: 0x3
-BogoMIPS: 80.00
-Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
-NUMA node(s): 1
-NUMA node0 CPU(s): 0-4
-Vulnerability Itlb multihit: Not affected
-Vulnerability L1tf: Not affected
-Vulnerability Mds: Not affected
-Vulnerability Meltdown: Mitigation; PTI
-Vulnerability Spec store bypass: Vulnerable
-Vulnerability Spectre v1: Mitigation; __user pointer sanitization
-Vulnerability Spectre v2: Vulnerable
-Vulnerability Srbds: Not affected
-Vulnerability Tsx async abort: Not affected
diff --git a/repos/system_upgrade/common/actors/scancpu/tests/files/lscpu_ppc64le b/repos/system_upgrade/common/actors/scancpu/tests/files/lscpu_ppc64le
deleted file mode 100644
index 259dd19d..00000000
--- a/repos/system_upgrade/common/actors/scancpu/tests/files/lscpu_ppc64le
+++ /dev/null
@@ -1,24 +0,0 @@
-Architecture: ppc64le
-Byte Order: Little Endian
-CPU(s): 8
-On-line CPU(s) list: 0-7
-Model name: POWER9 (architected), altivec supported
-Model: 2.2 (pvr 004e 1202)
-Thread(s) per core: 1
-Core(s) per socket: 1
-Socket(s): 8
-Hypervisor vendor: KVM
-Virtualization type: para
-L1d cache: 256 KiB (8 instances)
-L1i cache: 256 KiB (8 instances)
-NUMA node(s): 1
-NUMA node0 CPU(s): 0-7
-Vulnerability Itlb multihit: Not affected
-Vulnerability L1tf: Mitigation; RFI Flush, L1D private per thread
-Vulnerability Mds: Not affected
-Vulnerability Meltdown: Mitigation; RFI Flush, L1D private per thread
-Vulnerability Spec store bypass: Mitigation; Kernel entry/exit barrier (eieio)
-Vulnerability Spectre v1: Mitigation; __user pointer sanitization, ori31 speculation barrier enabled
-Vulnerability Spectre v2: Mitigation; Software count cache flush (hardware accelerated), Software link stack flush
-Vulnerability Srbds: Not affected
-Vulnerability Tsx async abort: Not affected
diff --git a/repos/system_upgrade/common/actors/scancpu/tests/files/lscpu_s390x b/repos/system_upgrade/common/actors/scancpu/tests/files/lscpu_s390x
deleted file mode 100644
index 3c0a0ac3..00000000
--- a/repos/system_upgrade/common/actors/scancpu/tests/files/lscpu_s390x
+++ /dev/null
@@ -1,38 +0,0 @@
-Architecture: s390x
-CPU op-mode(s): 32-bit, 64-bit
-Byte Order: Big Endian
-CPU(s): 2
-On-line CPU(s) list: 0,1
-Vendor ID: IBM/S390
-Model name: -
-Machine type: 2827
-Thread(s) per core: 1
-Core(s) per socket: 1
-Socket(s) per book: 1
-Book(s) per drawer: 1
-Drawer(s): 2
-CPU dynamic MHz: 5200
-CPU static MHz: 5200
-BogoMIPS: 3241.00
-Dispatching mode: horizontal
-Flags: esan3 zarch stfle msa ldisp eimm dfp edat etf3eh highgprs te vx vxd vxe gs vxe2 vxp sort dflt sie
-Hypervisor: z/VM 7.2.0
-Hypervisor vendor: IBM
-Virtualization type: full
-L1d cache: 256 KiB (2 instances)
-L1i cache: 256 KiB (2 instances)
-L2d cache: 8 MiB (2 instances)
-L2i cache: 8 MiB (2 instances)
-L3 cache: 256 MiB
-L4 cache: 960 MiB
-NUMA node(s): 1
-NUMA node0 CPU(s): 0,1
-Vulnerability Itlb multihit: Not affected
-Vulnerability L1tf: Not affected
-Vulnerability Mds: Not affected
-Vulnerability Meltdown: Not affected
-Vulnerability Spec store bypass: Not affected
-Vulnerability Spectre v1: Mitigation; __user pointer sanitization
-Vulnerability Spectre v2: Mitigation; etokens
-Vulnerability Srbds: Not affected
-Vulnerability Tsx async abort: Not affected
diff --git a/repos/system_upgrade/common/actors/scancpu/tests/files/txt/lscpu_aarch64 b/repos/system_upgrade/common/actors/scancpu/tests/files/txt/lscpu_aarch64
new file mode 100644
index 00000000..3b9619ef
--- /dev/null
+++ b/repos/system_upgrade/common/actors/scancpu/tests/files/txt/lscpu_aarch64
@@ -0,0 +1,25 @@
+Architecture: aarch64
+Byte Order: Little Endian
+CPU(s): 160
+On-line CPU(s) list: 0-159
+Thread(s) per core: 1
+Core(s) per socket: 80
+Socket(s): 2
+NUMA node(s): 4
+Vendor ID: ARM
+BIOS Vendor ID: Ampere(R)
+Model: 1
+Model name: Neoverse-N1
+BIOS Model name: Ampere(R) Altra(R) Processor
+Stepping: r3p1
+CPU max MHz: 3000.0000
+CPU min MHz: 1000.0000
+BogoMIPS: 50.00
+L1d cache: 64K
+L1i cache: 64K
+L2 cache: 1024K
+NUMA node0 CPU(s): 0-79
+NUMA node1 CPU(s): 80-159
+NUMA node2 CPU(s):
+NUMA node3 CPU(s):
+Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp ssbs
diff --git a/repos/system_upgrade/common/actors/scancpu/tests/files/txt/lscpu_empty_field b/repos/system_upgrade/common/actors/scancpu/tests/files/txt/lscpu_empty_field
new file mode 100644
index 00000000..f830b7fe
--- /dev/null
+++ b/repos/system_upgrade/common/actors/scancpu/tests/files/txt/lscpu_empty_field
@@ -0,0 +1,4 @@
+Empyt 1:
+Empyt 2:
+Empyt 3:
+Flags: flag
diff --git a/repos/system_upgrade/common/actors/scancpu/tests/files/txt/lscpu_ppc64le b/repos/system_upgrade/common/actors/scancpu/tests/files/txt/lscpu_ppc64le
new file mode 100644
index 00000000..07d2ed65
--- /dev/null
+++ b/repos/system_upgrade/common/actors/scancpu/tests/files/txt/lscpu_ppc64le
@@ -0,0 +1,15 @@
+Architecture: ppc64le
+Byte Order: Little Endian
+CPU(s): 8
+On-line CPU(s) list: 0-7
+Thread(s) per core: 1
+Core(s) per socket: 1
+Socket(s): 8
+NUMA node(s): 1
+Model: 2.1 (pvr 004b 0201)
+Model name: POWER8E (raw), altivec supported
+Hypervisor vendor: KVM
+Virtualization type: para
+L1d cache: 64K
+L1i cache: 32K
+NUMA node0 CPU(s): 0-7
diff --git a/repos/system_upgrade/common/actors/scancpu/tests/files/txt/lscpu_s390x b/repos/system_upgrade/common/actors/scancpu/tests/files/txt/lscpu_s390x
new file mode 100644
index 00000000..2c0de9f9
--- /dev/null
+++ b/repos/system_upgrade/common/actors/scancpu/tests/files/txt/lscpu_s390x
@@ -0,0 +1,26 @@
+Architecture: s390x
+CPU op-mode(s): 32-bit, 64-bit
+Byte Order: Big Endian
+CPU(s): 4
+On-line CPU(s) list: 0-3
+Thread(s) per core: 1
+Core(s) per socket: 1
+Socket(s) per book: 1
+Book(s) per drawer: 1
+Drawer(s): 4
+NUMA node(s): 1
+Vendor ID: IBM/S390
+Machine type: 3931
+CPU dynamic MHz: 5200
+CPU static MHz: 5200
+BogoMIPS: 3331.00
+Hypervisor: KVM/Linux
+Hypervisor vendor: KVM
+Virtualization type: full
+Dispatching mode: horizontal
+L1d cache: 128K
+L1i cache: 128K
+L2 cache: 32768K
+L3 cache: 262144K
+NUMA node0 CPU(s): 0-3
+Flags: esan3 zarch stfle msa ldisp eimm dfp edat etf3eh highgprs te vx vxd vxe gs vxe2 vxp sort dflt vxp2 nnpa sie
diff --git a/repos/system_upgrade/common/actors/scancpu/tests/files/lscpu_x86_64 b/repos/system_upgrade/common/actors/scancpu/tests/files/txt/lscpu_x86_64
similarity index 100%
rename from repos/system_upgrade/common/actors/scancpu/tests/files/lscpu_x86_64
rename to repos/system_upgrade/common/actors/scancpu/tests/files/txt/lscpu_x86_64
diff --git a/repos/system_upgrade/common/actors/scancpu/tests/test_scancpu.py b/repos/system_upgrade/common/actors/scancpu/tests/test_scancpu.py
index 894fae08..dc9d1ffc 100644
--- a/repos/system_upgrade/common/actors/scancpu/tests/test_scancpu.py
+++ b/repos/system_upgrade/common/actors/scancpu/tests/test_scancpu.py
@@ -3,7 +3,6 @@ import os
import pytest
from leapp.libraries.actor import scancpu
-from leapp.libraries.common import testutils
from leapp.libraries.common.config.architecture import (
ARCH_ARM64,
ARCH_PPC64LE,
@@ -11,6 +10,7 @@ from leapp.libraries.common.config.architecture import (
ARCH_SUPPORTED,
ARCH_X86_64
)
+from leapp.libraries.common.testutils import CurrentActorMocked, logger_mocked, produce_mocked
from leapp.libraries.stdlib import api
from leapp.models import CPUInfo
@@ -18,8 +18,12 @@ CUR_DIR = os.path.dirname(os.path.abspath(__file__))
LSCPU = {
ARCH_ARM64: {
- "machine_type": None,
- "flags": ['fp', 'asimd', 'evtstrm', 'aes', 'pmull', 'sha1', 'sha2', 'crc32', 'cpuid'],
+ "machine_type":
+ None,
+ "flags": [
+ 'fp', 'asimd', 'evtstrm', 'aes', 'pmull', 'sha1', 'sha2', 'crc32', 'atomics', 'fphp', 'asimdhp', 'cpuid',
+ 'asimdrdm', 'lrcpc', 'dcpop', 'asimddp', 'ssbs'
+ ]
},
ARCH_PPC64LE: {
"machine_type": None,
@@ -27,10 +31,10 @@ LSCPU = {
},
ARCH_S390X: {
"machine_type":
- 2827,
+ 3931,
"flags": [
'esan3', 'zarch', 'stfle', 'msa', 'ldisp', 'eimm', 'dfp', 'edat', 'etf3eh', 'highgprs', 'te', 'vx', 'vxd',
- 'vxe', 'gs', 'vxe2', 'vxp', 'sort', 'dflt', 'sie'
+ 'vxe', 'gs', 'vxe2', 'vxp', 'sort', 'dflt', 'vxp2', 'nnpa', 'sie'
]
},
ARCH_X86_64: {
@@ -57,23 +61,34 @@ class mocked_get_cpuinfo(object):
def __init__(self, filename):
self.filename = filename
- def __call__(self):
+ def __call__(self, output_json=False):
"""
Return lines of the self.filename test file located in the files directory.
Those files contain /proc/cpuinfo content from several machines.
"""
- with open(os.path.join(CUR_DIR, 'files', self.filename), 'r') as fp:
+
+ filename = self.filename
+ if output_json:
+ filename = os.path.join('json', filename)
+ else:
+ filename = os.path.join('txt', filename)
+ filename = os.path.join(CUR_DIR, 'files', filename)
+
+ with open(filename, 'r') as fp:
return '\n'.join(fp.read().splitlines())
@pytest.mark.parametrize("arch", ARCH_SUPPORTED)
-def test_scancpu(monkeypatch, arch):
+@pytest.mark.parametrize("version", ['7', '8'])
+def test_scancpu(monkeypatch, arch, version):
+
+ monkeypatch.setattr('leapp.libraries.actor.scancpu.get_source_major_version', lambda: version)
mocked_cpuinfo = mocked_get_cpuinfo('lscpu_' + arch)
monkeypatch.setattr(scancpu, '_get_lscpu_output', mocked_cpuinfo)
- monkeypatch.setattr(api, 'produce', testutils.produce_mocked())
- current_actor = testutils.CurrentActorMocked(arch=arch)
+ monkeypatch.setattr(api, 'produce', produce_mocked())
+ current_actor = CurrentActorMocked(arch=arch)
monkeypatch.setattr(api, 'current_actor', current_actor)
scancpu.process()
@@ -89,3 +104,50 @@ def test_scancpu(monkeypatch, arch):
# Did not produce anything extra
assert expected == produced
+
+
+def test_lscpu_with_empty_field(monkeypatch):
+
+ def mocked_cpuinfo(*args, **kwargs):
+ return mocked_get_cpuinfo('lscpu_empty_field')(output_json=False)
+
+ monkeypatch.setattr(scancpu, '_get_lscpu_output', mocked_cpuinfo)
+ monkeypatch.setattr(api, 'produce', produce_mocked())
+ current_actor = CurrentActorMocked()
+ monkeypatch.setattr(api, 'current_actor', current_actor)
+
+ scancpu.process()
+
+ expected = CPUInfo(machine_type=None, flags=['flag'])
+ produced = api.produce.model_instances[0]
+
+ assert api.produce.called == 1
+
+ assert expected.machine_type == produced.machine_type
+ assert sorted(expected.flags) == sorted(produced.flags)
+
+
+def test_parse_invalid_json(monkeypatch):
+
+ monkeypatch.setattr('leapp.libraries.actor.scancpu.get_source_major_version', lambda: '8')
+
+ def mocked_cpuinfo(*args, **kwargs):
+ return mocked_get_cpuinfo('invalid')(output_json=True)
+
+ monkeypatch.setattr(scancpu, '_get_lscpu_output', mocked_cpuinfo)
+ monkeypatch.setattr(api, 'produce', produce_mocked())
+ monkeypatch.setattr(api, 'current_logger', logger_mocked())
+ current_actor = CurrentActorMocked()
+ monkeypatch.setattr(api, 'current_actor', current_actor)
+
+ scancpu.process()
+
+ assert api.produce.called == 1
+
+ assert any('Failed to parse json output' in msg for msg in api.current_logger().dbgmsg)
+
+ expected = CPUInfo(machine_type=None, flags=[])
+ produced = api.produce.model_instances[0]
+
+ assert expected.machine_type == produced.machine_type
+ assert sorted(expected.flags) == sorted(produced.flags)
--
2.42.0


@ -0,0 +1,254 @@
From b65ef94be64f16eacb79a55d1185e37aa401832e Mon Sep 17 00:00:00 2001
From: tomasfratrik <tomasfratrik8@gmail.com>
Date: Mon, 4 Mar 2024 09:10:02 +0100
Subject: [PATCH 06/34] Add unit tests for actor udevadminfo
* Move the actor's process() to its library
* Add a check for run() in process()
* Create a file with short output of 'udevadm info -e' for testing
purposes
* Add unit tests for the actor
Jira: OAMG-1277
---
.../common/actors/udev/udevadminfo/actor.py | 5 +-
.../udev/udevadminfo/libraries/udevadminfo.py | 19 +++
.../udevadminfo/tests/files/udevadm_database | 134 ++++++++++++++++++
.../udevadminfo/tests/test_udevadminfo.py | 40 ++++++
4 files changed, 195 insertions(+), 3 deletions(-)
create mode 100644 repos/system_upgrade/common/actors/udev/udevadminfo/libraries/udevadminfo.py
create mode 100644 repos/system_upgrade/common/actors/udev/udevadminfo/tests/files/udevadm_database
create mode 100644 repos/system_upgrade/common/actors/udev/udevadminfo/tests/test_udevadminfo.py
diff --git a/repos/system_upgrade/common/actors/udev/udevadminfo/actor.py b/repos/system_upgrade/common/actors/udev/udevadminfo/actor.py
index b674e56c..ac702914 100644
--- a/repos/system_upgrade/common/actors/udev/udevadminfo/actor.py
+++ b/repos/system_upgrade/common/actors/udev/udevadminfo/actor.py
@@ -1,5 +1,5 @@
from leapp.actors import Actor
-from leapp.libraries.stdlib import run
+from leapp.libraries.actor import udevadminfo
from leapp.models import UdevAdmInfoData
from leapp.tags import FactsPhaseTag, IPUWorkflowTag
@@ -15,5 +15,4 @@ class UdevAdmInfo(Actor):
tags = (IPUWorkflowTag, FactsPhaseTag,)
def process(self):
- out = run(['udevadm', 'info', '-e'])['stdout']
- self.produce(UdevAdmInfoData(db=out))
+ udevadminfo.process()
diff --git a/repos/system_upgrade/common/actors/udev/udevadminfo/libraries/udevadminfo.py b/repos/system_upgrade/common/actors/udev/udevadminfo/libraries/udevadminfo.py
new file mode 100644
index 00000000..dabe49e0
--- /dev/null
+++ b/repos/system_upgrade/common/actors/udev/udevadminfo/libraries/udevadminfo.py
@@ -0,0 +1,19 @@
+from leapp.exceptions import StopActorExecutionError
+from leapp.libraries.stdlib import api, CalledProcessError, run
+from leapp.models import UdevAdmInfoData
+
+
+def process():
+ try:
+ out = run(['udevadm', 'info', '-e'])['stdout']
+ except (CalledProcessError, OSError) as err:
+ raise StopActorExecutionError(
+ message=(
+ "Unable to gather information about the system devices"
+ ),
+ details={
+ 'details': 'Failed to execute `udevadm info -e` command.',
+ 'error': str(err)
+ }
+ )
+ api.produce(UdevAdmInfoData(db=out))
diff --git a/repos/system_upgrade/common/actors/udev/udevadminfo/tests/files/udevadm_database b/repos/system_upgrade/common/actors/udev/udevadminfo/tests/files/udevadm_database
new file mode 100644
index 00000000..219fb574
--- /dev/null
+++ b/repos/system_upgrade/common/actors/udev/udevadminfo/tests/files/udevadm_database
@@ -0,0 +1,134 @@
+P: /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0010:00/LNXCPU:00
+E: DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0010:00/LNXCPU:00
+E: ID_VENDOR_FROM_DATABASE=The Linux Foundation
+E: MODALIAS=acpi:LNXCPU:
+E: SUBSYSTEM=acpi
+E: USEC_INITIALIZED=1698543
+
+P: /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0010:00/LNXCPU:01
+E: DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0010:00/LNXCPU:01
+E: ID_VENDOR_FROM_DATABASE=The Linux Foundation
+E: MODALIAS=acpi:LNXCPU:
+E: SUBSYSTEM=acpi
+E: USEC_INITIALIZED=1698839
+
+P: /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0103:00
+E: DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0103:00
+E: ID_VENDOR_FROM_DATABASE=The Linux Foundation
+E: MODALIAS=acpi:PNP0103:
+E: SUBSYSTEM=acpi
+E: USEC_INITIALIZED=1697906
+
+P: /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A03:00
+E: DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A03:00
+E: ID_VENDOR_FROM_DATABASE=The Linux Foundation
+E: MODALIAS=acpi:PNP0A03:
+E: SUBSYSTEM=acpi
+E: USEC_INITIALIZED=1698109
+
+P: /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A03:00/PNP0A06:00
+E: DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A03:00/PNP0A06:00
+E: ID_VENDOR_FROM_DATABASE=The Linux Foundation
+E: MODALIAS=acpi:PNP0A06:
+E: SUBSYSTEM=acpi
+E: USEC_INITIALIZED=1702939
+P: /devices/LNXSYSTM:00
+E: DEVPATH=/devices/LNXSYSTM:00
+E: ID_VENDOR_FROM_DATABASE=The Linux Foundation
+E: MODALIAS=acpi:LNXSYSTM:
+E: SUBSYSTEM=acpi
+E: USEC_INITIALIZED=1694509
+
+P: /devices/LNXSYSTM:00/LNXPWRBN:00
+E: DEVPATH=/devices/LNXSYSTM:00/LNXPWRBN:00
+E: DRIVER=button
+E: ID_VENDOR_FROM_DATABASE=The Linux Foundation
+E: MODALIAS=acpi:LNXPWRBN:
+E: SUBSYSTEM=acpi
+E: USEC_INITIALIZED=1695034
+
+P: /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
+E: DEVPATH=/devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
+E: EV=3
+E: ID_FOR_SEAT=input-acpi-LNXPWRBN_00
+E: ID_INPUT=1
+E: ID_INPUT_KEY=1
+E: ID_PATH=acpi-LNXPWRBN:00
+E: ID_PATH_TAG=acpi-LNXPWRBN_00
+E: KEY=10000000000000 0
+E: MODALIAS=input:b0019v0000p0001e0000-e0,1,k74,ramlsfw
+E: NAME="Power Button"
+E: PHYS="LNXPWRBN/button/input0"
+E: PRODUCT=19/0/1/0
+E: PROP=0
+E: SUBSYSTEM=input
+E: TAGS=:seat:
+E: USEC_INITIALIZED=1697068
+
+P: /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0/event0
+N: input/event0
+E: DEVNAME=/dev/input/event0
+E: DEVPATH=/devices/LNXSYSTM:00/LNXPWRBN:00/input/input0/event0
+E: ID_INPUT=1
+E: ID_INPUT_KEY=1
+E: ID_PATH=acpi-LNXPWRBN:00
+E: ID_PATH_TAG=acpi-LNXPWRBN_00
+E: MAJOR=13
+E: MINOR=64
+E: SUBSYSTEM=input
+E: TAGS=:power-switch:
+E: USEC_INITIALIZED=1744996
+
+P: /devices/LNXSYSTM:00/LNXPWRBN:00/wakeup/wakeup10
+E: DEVPATH=/devices/LNXSYSTM:00/LNXPWRBN:00/wakeup/wakeup10
+E: SUBSYSTEM=wakeup
+
+P: /devices/LNXSYSTM:00/LNXSYBUS:00
+E: DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00
+E: ID_VENDOR_FROM_DATABASE=The Linux Foundation
+E: MODALIAS=acpi:LNXSYBUS:
+E: SUBSYSTEM=acpi
+E: USEC_INITIALIZED=1695925
+
+P: /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0010:00
+E: DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0010:00
+E: ID_VENDOR_FROM_DATABASE=The Linux Foundation
+E: MODALIAS=acpi:ACPI0010:PNP0A05:
+E: SUBSYSTEM=acpi
+E: USEC_INITIALIZED=1698058
+
+P: /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0010:00/LNXCPU:00
+E: DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0010:00/LNXCPU:00
+E: ID_VENDOR_FROM_DATABASE=The Linux Foundation
+E: MODALIAS=acpi:LNXCPU:
+E: SUBSYSTEM=acpi
+E: USEC_INITIALIZED=1698543
+
+P: /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0010:00/LNXCPU:01
+E: DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0010:00/LNXCPU:01
+E: ID_VENDOR_FROM_DATABASE=The Linux Foundation
+E: MODALIAS=acpi:LNXCPU:
+E: SUBSYSTEM=acpi
+E: USEC_INITIALIZED=1698839
+
+P: /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0103:00
+E: DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0103:00
+E: ID_VENDOR_FROM_DATABASE=The Linux Foundation
+E: MODALIAS=acpi:PNP0103:
+E: SUBSYSTEM=acpi
+E: USEC_INITIALIZED=1697906
+
+P: /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A03:00
+E: DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A03:00
+E: ID_VENDOR_FROM_DATABASE=The Linux Foundation
+E: MODALIAS=acpi:PNP0A03:
+E: SUBSYSTEM=acpi
+E: USEC_INITIALIZED=1698109
+
+P: /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A03:00/PNP0A06:00
+E: DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A03:00/PNP0A06:00
+E: ID_VENDOR_FROM_DATABASE=The Linux Foundation
+E: MODALIAS=acpi:PNP0A06:
+E: SUBSYSTEM=acpi
+E: USEC_INITIALIZED=1702939
+
diff --git a/repos/system_upgrade/common/actors/udev/udevadminfo/tests/test_udevadminfo.py b/repos/system_upgrade/common/actors/udev/udevadminfo/tests/test_udevadminfo.py
new file mode 100644
index 00000000..f465d6f6
--- /dev/null
+++ b/repos/system_upgrade/common/actors/udev/udevadminfo/tests/test_udevadminfo.py
@@ -0,0 +1,40 @@
+import os
+
+import pytest
+
+from leapp.exceptions import StopActorExecutionError
+from leapp.libraries.actor import udevadminfo
+from leapp.libraries.common import testutils
+from leapp.libraries.stdlib import api, CalledProcessError
+from leapp.models import UdevAdmInfoData
+
+CUR_DIR = os.path.dirname(os.path.abspath(__file__))
+
+
+def _raise_call_error(*args):
+ raise CalledProcessError(
+ message='A Leapp Command Error occurred.',
+ command=args,
+ result={'signal': None, 'exit_code': 1, 'pid': 0, 'stdout': 'fake', 'stderr': 'fake'}
+ )
+
+
+def test_failed_run(monkeypatch):
+ monkeypatch.setattr(api, 'produce', testutils.produce_mocked())
+ monkeypatch.setattr(udevadminfo, 'run', _raise_call_error)
+
+ with pytest.raises(StopActorExecutionError):
+ udevadminfo.process()
+
+
+def test_udevadminfo(monkeypatch):
+
+ with open(os.path.join(CUR_DIR, 'files', 'udevadm_database'), 'r') as fp:
+ mocked_data = fp.read()
+ monkeypatch.setattr(api, 'produce', testutils.produce_mocked())
+ monkeypatch.setattr(udevadminfo, 'run', lambda *args: {'stdout': mocked_data})
+ udevadminfo.process()
+
+ assert api.produce.called == 1
+ assert isinstance(api.produce.model_instances[0], UdevAdmInfoData)
+ assert api.produce.model_instances[0].db == mocked_data
--
2.42.0
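
For context, the fixture above is the raw `udevadm info -e` export that the actor stores verbatim in the `db` field of UdevAdmInfoData; consumers of that message have to split it into per-device records themselves. Below is a minimal, hypothetical parser for that P:/N:/E: line format — it is not part of the patch, and the function and key names are illustrative only:

    def parse_udevadm_export(db_text):
        """Split a `udevadm info -e` dump into per-device dicts.

        Records are typically separated by blank lines; 'P:' carries the sysfs
        path, 'N:' the device node (if any) and 'E:' lines hold KEY=VALUE
        properties.
        """
        devices = []
        for block in db_text.strip().split('\n\n'):
            device = {'properties': {}}
            for line in block.splitlines():
                if line.startswith('P: '):
                    device['path'] = line[3:]
                elif line.startswith('N: '):
                    device['node'] = line[3:]
                elif line.startswith('E: ') and '=' in line:
                    key, _, value = line[3:].partition('=')
                    device['properties'][key] = value
            devices.append(device)
        return devices

    # e.g. parse_udevadm_export(msg.db) -> list of dicts, one per device record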

View File

@ -0,0 +1,166 @@
From 3066cad4e7a2a440a93f01fd0c0cbec84bb5485f Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Tom=C3=A1=C5=A1=20Fr=C3=A1trik?=
<93993694+tomasfratrik@users.noreply.github.com>
Date: Thu, 11 Apr 2024 13:48:31 +0200
Subject: [PATCH 07/34] Add unit tests for scansourcefiles actor (#1190)
Jira: OAMG-10367
---
.../tests/unit_test_scansourcefiles.py | 147 +++++++++++++++++-
1 file changed, 142 insertions(+), 5 deletions(-)
diff --git a/repos/system_upgrade/common/actors/scansourcefiles/tests/unit_test_scansourcefiles.py b/repos/system_upgrade/common/actors/scansourcefiles/tests/unit_test_scansourcefiles.py
index 6a6b009a..40ae2408 100644
--- a/repos/system_upgrade/common/actors/scansourcefiles/tests/unit_test_scansourcefiles.py
+++ b/repos/system_upgrade/common/actors/scansourcefiles/tests/unit_test_scansourcefiles.py
@@ -1,5 +1,142 @@
-def test_scansourcefiles():
- # TODO(pstodulk): keeping unit tests for later after I check the idea
- # of this actor with the team.
- # JIRA: OAMG-10367
- pass
+import os
+
+import pytest
+
+from leapp.libraries.actor import scansourcefiles
+from leapp.libraries.common import testutils
+from leapp.libraries.stdlib import api, CalledProcessError
+from leapp.models import FileInfo, TrackedFilesInfoSource
+
+
+@pytest.mark.parametrize(
+ ('run_output', 'expected_output_is_modified'),
+ (
+ ({'exit_code': 0}, False),
+ ({'exit_code': 1, 'stdout': 'missing /boot/efi/EFI (Permission denied)'}, True),
+ ({'exit_code': 1, 'stdout': 'S.5...... c /etc/openldap/ldap.conf'}, True),
+ ({'exit_code': 1, 'stdout': '..?...... c /etc/libaudit.conf'}, False),
+ ({'exit_code': 1, 'stdout': '.....UG.. g /var/run/avahi-daemon'}, False),
+ )
+)
+def test_is_modified(monkeypatch, run_output, expected_output_is_modified):
+ input_file = '/file'
+
+ def mocked_run(cmd, *args, **kwargs):
+ assert cmd == ['rpm', '-Vf', '--nomtime', input_file]
+ return run_output
+
+ monkeypatch.setattr(scansourcefiles, 'run', mocked_run)
+ assert scansourcefiles.is_modified(input_file) == expected_output_is_modified
+
+
+@pytest.mark.parametrize(
+ 'run_output',
+ [
+ {'stdout': ['']},
+ {'stdout': ['rpm']},
+ {'stdout': ['rpm1', 'rpm2']},
+ ]
+)
+def test_get_rpm_name(monkeypatch, run_output):
+ input_file = '/file'
+
+ def mocked_run(cmd, *args, **kwargs):
+ assert cmd == ['rpm', '-qf', '--queryformat', r'%{NAME}\n', input_file]
+ return run_output
+
+ monkeypatch.setattr(scansourcefiles, 'run', mocked_run)
+ monkeypatch.setattr(api, 'current_logger', testutils.logger_mocked())
+ assert scansourcefiles._get_rpm_name(input_file) == run_output['stdout'][0]
+
+ if len(run_output['stdout']) > 1:
+ expected_warnmsg = ('The {} file is owned by multiple rpms: {}.'
+ .format(input_file, ', '.join(run_output['stdout'])))
+ assert api.current_logger.warnmsg == [expected_warnmsg]
+
+
+def test_get_rpm_name_error(monkeypatch):
+ input_file = '/file'
+
+ def mocked_run(cmd, *args, **kwargs):
+ assert cmd == ['rpm', '-qf', '--queryformat', r'%{NAME}\n', input_file]
+ raise CalledProcessError("mocked error", cmd, "result")
+
+ monkeypatch.setattr(scansourcefiles, 'run', mocked_run)
+ assert scansourcefiles._get_rpm_name(input_file) == ''
+
+
+@pytest.mark.parametrize(
+ ('input_file', 'exists', 'rpm_name', 'is_modified'),
+ (
+ ('/not_existing_file', False, '', False),
+ ('/not_existing_file_rpm_owned', False, 'rpm', False),
+ ('/not_existing_file_rpm_owned_modified', False, 'rpm', True),
+ ('/existing_file_not_modified', True, '', False),
+ ('/existing_file_owned_by_rpm_not_modified', True, 'rpm', False),
+ ('/existing_file_owned_by_rpm_modified', True, 'rpm', True),
+ )
+)
+def test_scan_file(monkeypatch, input_file, exists, rpm_name, is_modified):
+ monkeypatch.setattr(scansourcefiles, 'is_modified', lambda _: is_modified)
+ monkeypatch.setattr(scansourcefiles, '_get_rpm_name', lambda _: rpm_name)
+ monkeypatch.setattr(os.path, 'exists', lambda _: exists)
+
+ expected_model_output = FileInfo(path=input_file, exists=exists, rpm_name=rpm_name, is_modified=is_modified)
+ assert scansourcefiles.scan_file(input_file) == expected_model_output
+
+
+@pytest.mark.parametrize(
+ ('input_files'),
+ (
+ ([]),
+ (['/file1']),
+ (['/file1', '/file2']),
+ )
+)
+def test_scan_files(monkeypatch, input_files):
+ base_data = {
+ 'exists': False,
+ 'rpm_name': '',
+ 'is_modified': False
+ }
+
+ def scan_file_mocked(input_file):
+ return FileInfo(path=input_file, **base_data)
+
+ monkeypatch.setattr(scansourcefiles, 'scan_file', scan_file_mocked)
+ expected_output_list = [FileInfo(path=input_file, **base_data) for input_file in input_files]
+ assert scansourcefiles.scan_files(input_files) == expected_output_list
+
+
+@pytest.mark.parametrize(
+ 'rhel_major_version', ['8', '9']
+)
+def test_tracked_files(monkeypatch, rhel_major_version):
+ TRACKED_FILES_MOCKED = {
+ 'common': [
+ '/file1',
+ ],
+ '8': [
+ '/file2',
+ ],
+ '9': [
+ '/file3',
+ ],
+ }
+
+ def scan_files_mocked(files):
+ return [FileInfo(path=file_path, exists=False, rpm_name='', is_modified=False) for file_path in files]
+
+ monkeypatch.setattr(api, 'produce', testutils.produce_mocked())
+ monkeypatch.setattr(scansourcefiles, 'TRACKED_FILES', TRACKED_FILES_MOCKED)
+ monkeypatch.setattr(scansourcefiles, 'get_source_major_version', lambda: rhel_major_version)
+ monkeypatch.setattr(scansourcefiles, 'scan_files', scan_files_mocked)
+
+ scansourcefiles.process()
+
+ tracked_files_model = api.produce.model_instances[0]
+ assert api.produce.called == 1
+ assert isinstance(tracked_files_model, TrackedFilesInfoSource)
+ # assert only 1 common and 1 version file were scanned
+ assert len(tracked_files_model.files) == 2
+ assert all(isinstance(files_list_item, FileInfo) for files_list_item in tracked_files_model.files)
--
2.42.0
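
The library under test is not included in this hunk, but the commands asserted by the mocks outline its shape. The following is a rough sketch consistent with the test expectations above, using plain subprocess instead of leapp's `run()` wrapper for self-containment; the names and exact flag handling are assumptions, not the actual scansourcefiles code:

    import subprocess

    def is_modified(path):
        # `rpm -Vf` exits 0 when the owning package verifies cleanly.
        proc = subprocess.run(['rpm', '-Vf', '--nomtime', path],
                              capture_output=True, text=True)
        if proc.returncode == 0:
            return False
        # Verification flags: 'S' = size differs, '5' = digest differs;
        # a missing file is reported with a leading 'missing'.
        fields = proc.stdout.split()
        status = fields[0] if fields else ''
        return 'S' in status or '5' in status or proc.stdout.startswith('missing')

    def get_rpm_name(path):
        # `rpm -qf` prints the owning package name(s), or fails for unowned files.
        proc = subprocess.run(['rpm', '-qf', '--queryformat', r'%{NAME}\n', path],
                              capture_output=True, text=True)
        return proc.stdout.splitlines()[0] if proc.returncode == 0 else ''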

View File

@ -0,0 +1,72 @@
From 82070789813ae64f8fadc31a5096bf8df4124f75 Mon Sep 17 00:00:00 2001
From: mhecko <mhecko@redhat.com>
Date: Tue, 2 Apr 2024 11:24:49 +0200
Subject: [PATCH 08/34] pes_events_scanner: overwrite repositories when
applying an event
The Package class has custom __hash__ and __eq__ methods in order to
achieve a straightforward presentation via set manipulation. However,
this causes problems, e.g., when applying split events. For example:
Applying the event Split(in={(A, repo1)}, out={(A, repo2), (B, repo2)})
to the package state {(A, repo1), (B, repo1)} results in the following:
{(A, repo1), (B, repo1)} --apply--> {(A, repo2), (B, repo1)}
which is undesired as repo1 is a source system repository. Such
a package will get reported to the user as potentially removed during
the upgrade. This patch addresses this unwanted behavior.
---
.../peseventsscanner/libraries/pes_events_scanner.py | 9 ++++++++-
.../peseventsscanner/tests/test_pes_event_scanner.py | 9 +++++++++
2 files changed, 17 insertions(+), 1 deletion(-)
diff --git a/repos/system_upgrade/common/actors/peseventsscanner/libraries/pes_events_scanner.py b/repos/system_upgrade/common/actors/peseventsscanner/libraries/pes_events_scanner.py
index f9411dfe..f5cb2613 100644
--- a/repos/system_upgrade/common/actors/peseventsscanner/libraries/pes_events_scanner.py
+++ b/repos/system_upgrade/common/actors/peseventsscanner/libraries/pes_events_scanner.py
@@ -139,8 +139,11 @@ def compute_pkg_changes_between_consequent_releases(source_installed_pkgs,
if event.action == Action.PRESENT:
for pkg in event.in_pkgs:
if pkg in seen_pkgs:
+ # First remove the package with the old repository and add it back, but now with the new
+ # repository. As the Package class has a custom __hash__ and __eq__ comparing only name
+ # and modulestream, the pkg.repository field is ignored and therefore the add() call
+ # does not update the entry.
if pkg in target_pkgs:
- # Remove the package with the old repository, add the one with the new one
target_pkgs.remove(pkg)
target_pkgs.add(pkg)
elif event.action == Action.DEPRECATED:
@@ -163,7 +166,11 @@ def compute_pkg_changes_between_consequent_releases(source_installed_pkgs,
event.id, event.action, removed_pkgs_str, added_pkgs_str)
# In pkgs are present, event can be applied
+ # Note: We do a .difference(event.out_pkgs) followed by a .union(event.out_pkgs) to overwrite
+ # repositories of the packages (Package has overwritten __hash__ and __eq__, ignoring
+ # the repository field)
target_pkgs = target_pkgs.difference(event.in_pkgs)
+ target_pkgs = target_pkgs.difference(event.out_pkgs)
target_pkgs = target_pkgs.union(event.out_pkgs)
pkgs_to_demodularize = pkgs_to_demodularize.difference(event.in_pkgs)
diff --git a/repos/system_upgrade/common/actors/peseventsscanner/tests/test_pes_event_scanner.py b/repos/system_upgrade/common/actors/peseventsscanner/tests/test_pes_event_scanner.py
index 7cdcf820..80ece770 100644
--- a/repos/system_upgrade/common/actors/peseventsscanner/tests/test_pes_event_scanner.py
+++ b/repos/system_upgrade/common/actors/peseventsscanner/tests/test_pes_event_scanner.py
@@ -123,6 +123,15 @@ def pkgs_into_tuples(pkgs):
[(8, 0)],
{Package('renamed-out', 'rhel8-repo', None)}
),
+ (
+ {Package('A', 'rhel7-repo', None), Package('B', 'rhel7-repo', None)},
+ [
+ Event(1, Action.SPLIT, {Package('A', 'rhel7-repo', None)},
+ {Package('A', 'rhel8-repo', None), Package('B', 'rhel8-repo', None)}, (7, 6), (8, 0), [])
+ ],
+ [(8, 0)],
+ {Package('A', 'rhel8-repo', None), Package('B', 'rhel8-repo', None)}
+ ),
)
)
def test_event_application_fundamentals(monkeypatch, installed_pkgs, events, releases, expected_target_pkgs):
--
2.42.0
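
The root cause described in the commit message is plain Python set semantics: `set.add()` is a no-op when an element comparing equal is already present, so attributes excluded from `__eq__`/`__hash__` are never refreshed. A self-contained illustration with a simplified stand-in for the Package model (not the actor's real class):

    class Pkg(object):
        def __init__(self, name, repository):
            self.name, self.repository = name, repository

        # equality/hash deliberately ignore the repository, as in the actor
        def __hash__(self):
            return hash(self.name)

        def __eq__(self, other):
            return self.name == other.name

    pkgs = {Pkg('A', 'rhel7-repo')}
    pkgs.add(Pkg('A', 'rhel8-repo'))        # no-op: an "equal" element already exists
    print(next(iter(pkgs)).repository)      # rhel7-repo - the old entry survived

    pkgs.discard(Pkg('A', 'rhel7-repo'))    # remove first ...
    pkgs.add(Pkg('A', 'rhel8-repo'))        # ... then add, as the patched code now does
    print(next(iter(pkgs)).repository)      # rhel8-repo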

View File

@ -0,0 +1,53 @@
From 1f8b8f3259d7daff63dff0a4d630b36e615e416d Mon Sep 17 00:00:00 2001
From: David Kubek <dkubek@redhat.com>
Date: Wed, 10 Apr 2024 12:08:23 +0200
Subject: [PATCH 09/34] Modify upgrade to not terminate after lockfile detected
Previously, when the upgrade failed in the initramfs, the file
/sysroot/root/tmp_leapp_py3/.leapp_upgrade_failed was generated and,
upon detecting this file, leapp triggered an emergency shell. This caused
the original failure to be hidden from the customer.
With this commit, we no longer crash immediately upon detecting the file
but rather continue and "wait" for the underlying issue and error to
emerge.
RHEL-24148
---
.../files/dracut/85sys-upgrade-redhat/do-upgrade.sh | 13 +++++++++----
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/85sys-upgrade-redhat/do-upgrade.sh b/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/85sys-upgrade-redhat/do-upgrade.sh
index 4a6f7b62..56a94b5d 100755
--- a/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/85sys-upgrade-redhat/do-upgrade.sh
+++ b/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/85sys-upgrade-redhat/do-upgrade.sh
@@ -282,6 +282,11 @@ do_upgrade() {
local dirname
dirname="$("$NEWROOT/bin/dirname" "$NEWROOT$LEAPP_FAILED_FLAG_FILE")"
[ -d "$dirname" ] || mkdir "$dirname"
+
+ echo >&2 "Creating file $NEWROOT$LEAPP_FAILED_FLAG_FILE"
+ echo >&2 "Warning: Leapp upgrade failed and there is an issue blocking the upgrade."
+ echo >&2 "Please file a support case with /var/log/leapp/leapp-upgrade.log attached"
+
"$NEWROOT/bin/touch" "$NEWROOT$LEAPP_FAILED_FLAG_FILE"
fi
@@ -358,10 +363,10 @@ mount -o "remount,rw" "$NEWROOT"
# check if leapp previously failed in the initramfs, if it did return to the emergency shell
[ -f "$NEWROOT$LEAPP_FAILED_FLAG_FILE" ] && {
echo >&2 "Found file $NEWROOT$LEAPP_FAILED_FLAG_FILE"
- echo >&2 "Error: Leapp previously failed and cannot continue, returning back to emergency shell"
- echo >&2 "Please file a support case with $NEWROOT/var/log/leapp/leapp-upgrade.log attached"
- echo >&2 "To rerun the upgrade upon exiting the dracut shell remove the $NEWROOT$LEAPP_FAILED_FLAG_FILE file"
- exit 1
+ echo >&2 "Warning: Leapp failed on a previous execution and something might be blocking the upgrade."
+ echo >&2 "Continuing with the upgrade anyway. Note that any subsequent error might be potentially misleading due to a previous failure."
+ echo >&2 "A log file will be generated at $NEWROOT/var/log/leapp/leapp-upgrade.log."
+ echo >&2 "In case of persisting failure, if possible, try to boot to the original system and file a support case with /var/log/leapp/leapp-upgrade.log attached."
}
[ ! -x "$NEWROOT$LEAPPBIN" ] && {
--
2.42.0

View File

@ -0,0 +1,37 @@
From 1fb7e78bfa86163c27c309d6244298d4b3075762 Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Fri, 2 Feb 2024 12:32:15 +0100
Subject: [PATCH 10/34] Make the reboot required text more visible in the
console output
The "A reboot is required to continue. Please reboot your system."
message is printed before the reports summary and thus is easily
overlooked by users.
This patch adds a second such message after the report summary to
improve this.
Jira: RHEL-22736
---
commands/upgrade/__init__.py | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/commands/upgrade/__init__.py b/commands/upgrade/__init__.py
index c42b7cba..cc5fe647 100644
--- a/commands/upgrade/__init__.py
+++ b/commands/upgrade/__init__.py
@@ -113,6 +113,11 @@ def upgrade(args, breadcrumbs):
if workflow.failure:
sys.exit(1)
+ elif not args.resume:
+ sys.stdout.write(
+ 'Reboot the system to continue with the upgrade.'
+ ' This might take a while depending on the system configuration.\n'
+ )
def register(base_command):
--
2.42.0

View File

@ -0,0 +1,176 @@
From 8fe2a2d35395a43b8449354d68479d8339ef49ab Mon Sep 17 00:00:00 2001
From: mhecko <mhecko@redhat.com>
Date: Sun, 21 Apr 2024 22:40:38 +0200
Subject: [PATCH 11/34] check_grub_legacy: inhibit when GRUB legacy is present
Adds a new actor checking whether any of the GRUB devices
have the old GRUB Legacy installed. If any such device
is detected, the upgrade is inhibited. GRUB Legacy is detected
by searching for the string 'GRUB version 0.94' in the output
of `file -s` for the device.
---
.../el7toel8/actors/checklegacygrub/actor.py | 20 ++++++
.../libraries/check_legacy_grub.py | 71 +++++++++++++++++++
.../tests/test_check_legacy_grub.py | 45 ++++++++++++
3 files changed, 136 insertions(+)
create mode 100644 repos/system_upgrade/el7toel8/actors/checklegacygrub/actor.py
create mode 100644 repos/system_upgrade/el7toel8/actors/checklegacygrub/libraries/check_legacy_grub.py
create mode 100644 repos/system_upgrade/el7toel8/actors/checklegacygrub/tests/test_check_legacy_grub.py
diff --git a/repos/system_upgrade/el7toel8/actors/checklegacygrub/actor.py b/repos/system_upgrade/el7toel8/actors/checklegacygrub/actor.py
new file mode 100644
index 00000000..1fc7dde4
--- /dev/null
+++ b/repos/system_upgrade/el7toel8/actors/checklegacygrub/actor.py
@@ -0,0 +1,20 @@
+from leapp.actors import Actor
+from leapp.libraries.actor import check_legacy_grub as check_legacy_grub_lib
+from leapp.reporting import Report
+from leapp.tags import FactsPhaseTag, IPUWorkflowTag
+
+
+class CheckLegacyGrub(Actor):
+ """
+ Check whether GRUB Legacy is installed in the MBR.
+
+ GRUB Legacy is deprecated since RHEL 7 in favour of GRUB2.
+ """
+
+ name = 'check_grub_legacy'
+ consumes = ()
+ produces = (Report,)
+ tags = (FactsPhaseTag, IPUWorkflowTag)
+
+ def process(self):
+ check_legacy_grub_lib.check_grub_disks_for_legacy_grub()
diff --git a/repos/system_upgrade/el7toel8/actors/checklegacygrub/libraries/check_legacy_grub.py b/repos/system_upgrade/el7toel8/actors/checklegacygrub/libraries/check_legacy_grub.py
new file mode 100644
index 00000000..d02c14f9
--- /dev/null
+++ b/repos/system_upgrade/el7toel8/actors/checklegacygrub/libraries/check_legacy_grub.py
@@ -0,0 +1,71 @@
+from leapp import reporting
+from leapp.exceptions import StopActorExecution
+from leapp.libraries.common import grub as grub_lib
+from leapp.libraries.stdlib import api, CalledProcessError, run
+from leapp.reporting import create_report
+
+# There is no grub legacy package on RHEL7, therefore, the system must have been upgraded from RHEL6
+MIGRATION_TO_GRUB2_GUIDE_URL = 'https://access.redhat.com/solutions/2643721'
+
+
+def has_legacy_grub(device):
+ try:
+ output = run(['file', '-s', device])
+ except CalledProcessError as err:
+ msg = 'Failed to determine the file type for the special device `{0}`. Full error: `{1}`'
+ api.current_logger().warning(msg.format(device, str(err)))
+
+ # According to the `file` manpage, the exit code is > 0 iff the file does not exist (meaning
+ # that grub_lib.get_grub_devices() is unreliable for some reason - better stop the upgrade),
+ # or because the file type could not be determined. However, the manpage directly gives examples
+ # of file -s being used on block devices, so this should be unlikely - especially if one
+ # considers that get_grub_devices was able to determine that it is a grub device.
+ raise StopActorExecution()
+
+ grub_legacy_version_string = 'GRUB version 0.94'
+ return grub_legacy_version_string in output['stdout']
+
+
+def check_grub_disks_for_legacy_grub():
+ # Both GRUB2 and Grub Legacy are recognized by `get_grub_devices`
+ grub_devices = grub_lib.get_grub_devices()
+
+ legacy_grub_devices = []
+ for device in grub_devices:
+ if has_legacy_grub(device):
+ legacy_grub_devices.append(device)
+
+ if legacy_grub_devices:
+ details = (
+ 'Leapp detected GRUB Legacy to be installed on the system. '
+ 'The GRUB Legacy bootloader is unsupported on RHEL7 and GRUB2 must be used instead. '
+ 'The presence of GRUB Legacy is possible on systems that have been upgraded from RHEL 6 in the past, '
+ 'but required manual post-upgrade steps have not been performed. '
+ 'Note that the in-place upgrade from RHEL 6 to RHEL 7 systems is in such a case '
+ 'considered as unfinished.\n\n'
+
+ 'GRUB Legacy has been detected on following devices:\n'
+ '{block_devices_fmt}\n'
+ )
+
+ hint = (
+ 'Migrate to the GRUB2 bootloader on the reported devices. '
+ 'Also finish other post-upgrade steps related to the previous in-place upgrade, the majority of which '
+ 'is a part of the related preupgrade report for upgrades from RHEL 6 to RHEL 7. '
+ 'If you are not sure whether all previously required post-upgrade steps '
+ 'have been performed, consider a clean installation of the RHEL 8 system instead. '
+ 'Note that the in-place upgrade to RHEL 8 can fail in various ways '
+ 'if the RHEL 7 system is misconfigured.'
+ )
+
+ block_devices_fmt = '\n'.join(legacy_grub_devices)
+ create_report([
+ reporting.Title("GRUB Legacy is used on the system"),
+ reporting.Summary(details.format(block_devices_fmt=block_devices_fmt)),
+ reporting.Severity(reporting.Severity.HIGH),
+ reporting.Groups([reporting.Groups.BOOT]),
+ reporting.Remediation(hint=hint),
+ reporting.Groups([reporting.Groups.INHIBITOR]),
+ reporting.ExternalLink(url=MIGRATION_TO_GRUB2_GUIDE_URL,
+ title='How to install GRUB2 after a RHEL6 to RHEL7 upgrade'),
+ ])
diff --git a/repos/system_upgrade/el7toel8/actors/checklegacygrub/tests/test_check_legacy_grub.py b/repos/system_upgrade/el7toel8/actors/checklegacygrub/tests/test_check_legacy_grub.py
new file mode 100644
index 00000000..d6e5008e
--- /dev/null
+++ b/repos/system_upgrade/el7toel8/actors/checklegacygrub/tests/test_check_legacy_grub.py
@@ -0,0 +1,45 @@
+import pytest
+
+from leapp.libraries.actor import check_legacy_grub as check_legacy_grub_lib
+from leapp.libraries.common import grub as grub_lib
+from leapp.libraries.common.testutils import create_report_mocked
+from leapp.utils.report import is_inhibitor
+
+VDA_WITH_LEGACY_GRUB = (
+ '/dev/vda: x86 boot sector; GRand Unified Bootloader, stage1 version 0x3, '
+ 'stage2 address 0x2000, stage2 segment 0x200, GRUB version 0.94; partition 1: ID=0x83, '
+ 'active, starthead 32, startsector 2048, 1024000 sectors; partition 2: ID=0x83, starthead 221, '
+ 'startsector 1026048, 19945472 sectors, code offset 0x48\n'
+)
+
+NVME0N1_VDB_WITH_GRUB = (
+ '/dev/nvme0n1: x86 boot sector; partition 1: ID=0x83, active, starthead 32, startsector 2048, 6291456 sectors; '
+ 'partition 2: ID=0x83, starthead 191, startsector 6293504, 993921024 sectors, code offset 0x63'
+)
+
+
+@pytest.mark.parametrize(
+ ('grub_device_to_file_output', 'should_inhibit'),
+ [
+ ({'/dev/vda': VDA_WITH_LEGACY_GRUB}, True),
+ ({'/dev/nvme0n1': NVME0N1_VDB_WITH_GRUB}, False),
+ ({'/dev/vda': VDA_WITH_LEGACY_GRUB, '/dev/nvme0n1': NVME0N1_VDB_WITH_GRUB}, True)
+ ]
+)
+def test_check_legacy_grub(monkeypatch, grub_device_to_file_output, should_inhibit):
+
+ def file_cmd_mock(cmd, *args, **kwargs):
+ assert cmd[:2] == ['file', '-s']
+ return {'stdout': grub_device_to_file_output[cmd[2]]}
+
+ monkeypatch.setattr(check_legacy_grub_lib, 'create_report', create_report_mocked())
+ monkeypatch.setattr(grub_lib, 'get_grub_devices', lambda: list(grub_device_to_file_output.keys()))
+ monkeypatch.setattr(check_legacy_grub_lib, 'run', file_cmd_mock)
+
+ check_legacy_grub_lib.check_grub_disks_for_legacy_grub()
+
+ assert bool(check_legacy_grub_lib.create_report.called) == should_inhibit
+ if should_inhibit:
+ assert len(check_legacy_grub_lib.create_report.reports) == 1
+ report = check_legacy_grub_lib.create_report.reports[0]
+ assert is_inhibitor(report)
--
2.42.0

View File

@ -0,0 +1,46 @@
From a4e3906fff5d11e0fb94f5dbe10ed653dc2d0bee Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Matej=20Matu=C5=A1ka?= <mmatuska@redhat.com>
Date: Tue, 23 Apr 2024 23:56:57 +0200
Subject: [PATCH 12/34] Default channel to GA if not specified otherwise
(#1205)
Originally we tried, by default, to map repositories from particular channels on the source system to their equivalents on the target system. IOW:
* eus -> eus
* aus -> aus
* e4s -> e4s
* "ga" -> "ga"
...
However, it has been revealed that this logic should not apply to minor releases for which these non-ga (premium) repositories do not exist. So upgrading e.g. to 8.9, 8.10, or 9.3, for which specific eus, etc. repositories are not defined, led to 404 errors.
After discussing this deeply with stakeholders, it has been decided to drop this logic and always target the "ga" repositories unless leapp is executed with instructions to choose a different channel (using envars or the --channel option), to prevent this issue.
It's still possible to mistakenly require e.g. the "eus" channel for a target release for which the related repositories are not defined, e.g.:
> leapp upgrade --channel eus --target 8.10
In such a case, the previous errors (404 Not Found) can be hit again, but this will not happen by default. In this case, we expect and request users to understand what they want when they use the option.
@pirat89 : Updated commit msg
jira: RHEL-24720
---
commands/upgrade/util.py | 2 ++
1 file changed, 2 insertions(+)
diff --git a/commands/upgrade/util.py b/commands/upgrade/util.py
index b11265ee..9eff0ad1 100644
--- a/commands/upgrade/util.py
+++ b/commands/upgrade/util.py
@@ -207,6 +207,8 @@ def prepare_configuration(args):
if args.channel:
os.environ['LEAPP_TARGET_PRODUCT_CHANNEL'] = args.channel
+ elif 'LEAPP_TARGET_PRODUCT_CHANNEL' not in os.environ:
+ os.environ['LEAPP_TARGET_PRODUCT_CHANNEL'] = 'ga'
if args.iso:
os.environ['LEAPP_TARGET_ISO'] = args.iso
--
2.42.0

View File

@ -0,0 +1,39 @@
From 6d05575efdd6c3c728e784add3017d072eda4d5e Mon Sep 17 00:00:00 2001
From: Toshio Kuratomi <tkuratom@redhat.com>
Date: Tue, 23 Apr 2024 14:03:44 -0700
Subject: [PATCH 13/34] Enhance grub2 install failure message.
The new message informs the user what will happen (they will boot into the old RHEL's kernel) so they
understand why they need to run the remediation.
jira: https://issues.redhat.com/browse/RHEL-29683
---
.../common/actors/checkgrubcore/actor.py | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/repos/system_upgrade/common/actors/checkgrubcore/actor.py b/repos/system_upgrade/common/actors/checkgrubcore/actor.py
index ae9e53ef..662c4d64 100644
--- a/repos/system_upgrade/common/actors/checkgrubcore/actor.py
+++ b/repos/system_upgrade/common/actors/checkgrubcore/actor.py
@@ -46,11 +46,15 @@ class CheckGrubCore(Actor):
reporting.Title('Leapp could not identify where GRUB2 core is located'),
reporting.Summary(
'We assumed GRUB2 core is located on the same device(s) as /boot, '
- 'however Leapp could not detect GRUB2 on the device(s). '
- 'GRUB2 core needs to be updated maually on legacy (BIOS) systems. '
+ 'however Leapp could not detect GRUB2 on those device(s). '
+ 'This means GRUB2 core will not be updated during the upgrade process and '
+ 'the system will probably ' 'boot into the old kernel after the upgrade. '
+ 'GRUB2 core needs to be updated manually on legacy (BIOS) systems to '
+ 'fix this.'
),
reporting.Severity(reporting.Severity.HIGH),
reporting.Groups([reporting.Groups.BOOT]),
reporting.Remediation(
- hint='Please run "grub2-install <GRUB_DEVICE> command manually after the upgrade'),
+ hint='Please run the "grub2-install <GRUB_DEVICE>" command manually '
+ 'after the upgrade'),
])
--
2.42.0

View File

@ -0,0 +1,388 @@
From ea6cd7912ce650f033a972921a2b29636ac304db Mon Sep 17 00:00:00 2001
From: mhecko <mhecko@redhat.com>
Date: Tue, 2 Apr 2024 19:29:16 +0200
Subject: [PATCH 14/34] boot: check first partition offset on GRUB devices
Check that the first partition starts at least at 1MiB (sector 2048),
as too small first-partition offsets lead to failures when doing
grub2-install. The limit (1MiB) has been chosen as it is a common
value set by the disk formatting tools nowadays.
jira: https://issues.redhat.com/browse/RHEL-3341
---
.../actors/checkfirstpartitionoffset/actor.py | 24 ++++++
.../libraries/check_first_partition_offset.py | 52 +++++++++++++
.../test_check_first_partition_offset.py | 51 ++++++++++++
.../scangrubdevpartitionlayout/actor.py | 18 +++++
.../libraries/scan_layout.py | 64 +++++++++++++++
.../tests/test_scan_partition_layout.py | 78 +++++++++++++++++++
.../el7toel8/models/partitionlayout.py | 28 +++++++
7 files changed, 315 insertions(+)
create mode 100644 repos/system_upgrade/el7toel8/actors/checkfirstpartitionoffset/actor.py
create mode 100644 repos/system_upgrade/el7toel8/actors/checkfirstpartitionoffset/libraries/check_first_partition_offset.py
create mode 100644 repos/system_upgrade/el7toel8/actors/checkfirstpartitionoffset/tests/test_check_first_partition_offset.py
create mode 100644 repos/system_upgrade/el7toel8/actors/scangrubdevpartitionlayout/actor.py
create mode 100644 repos/system_upgrade/el7toel8/actors/scangrubdevpartitionlayout/libraries/scan_layout.py
create mode 100644 repos/system_upgrade/el7toel8/actors/scangrubdevpartitionlayout/tests/test_scan_partition_layout.py
create mode 100644 repos/system_upgrade/el7toel8/models/partitionlayout.py
diff --git a/repos/system_upgrade/el7toel8/actors/checkfirstpartitionoffset/actor.py b/repos/system_upgrade/el7toel8/actors/checkfirstpartitionoffset/actor.py
new file mode 100644
index 00000000..cde27c2a
--- /dev/null
+++ b/repos/system_upgrade/el7toel8/actors/checkfirstpartitionoffset/actor.py
@@ -0,0 +1,24 @@
+from leapp.actors import Actor
+from leapp.libraries.actor import check_first_partition_offset
+from leapp.models import FirmwareFacts, GRUBDevicePartitionLayout
+from leapp.reporting import Report
+from leapp.tags import ChecksPhaseTag, IPUWorkflowTag
+
+
+class CheckFirstPartitionOffset(Actor):
+ """
+ Check whether the first partition starts at the offset >=1MiB.
+
+ The alignment of the first partition plays a role in disk access speeds. Older tools placed the start of the first
+ partition at sector 63 (due to historical reasons connected to the INT13h BIOS API). However, the GRUB core
+ binary is placed before the start of the first partition, so not enough space there causes the bootloader
+ installation to fail. Modern partitioning tools place the first partition at >= 1MiB (sector 2048 and beyond).
+ """
+
+ name = 'check_first_partition_offset'
+ consumes = (FirmwareFacts, GRUBDevicePartitionLayout,)
+ produces = (Report,)
+ tags = (ChecksPhaseTag, IPUWorkflowTag,)
+
+ def process(self):
+ check_first_partition_offset.check_first_partition_offset()
diff --git a/repos/system_upgrade/el7toel8/actors/checkfirstpartitionoffset/libraries/check_first_partition_offset.py b/repos/system_upgrade/el7toel8/actors/checkfirstpartitionoffset/libraries/check_first_partition_offset.py
new file mode 100644
index 00000000..fbd4e178
--- /dev/null
+++ b/repos/system_upgrade/el7toel8/actors/checkfirstpartitionoffset/libraries/check_first_partition_offset.py
@@ -0,0 +1,52 @@
+from leapp import reporting
+from leapp.libraries.common.config import architecture
+from leapp.libraries.stdlib import api
+from leapp.models import FirmwareFacts, GRUBDevicePartitionLayout
+
+SAFE_OFFSET_BYTES = 1024*1024 # 1MiB
+
+
+def check_first_partition_offset():
+ if architecture.matches_architecture(architecture.ARCH_S390X):
+ return
+
+ for fact in api.consume(FirmwareFacts):
+ if fact.firmware == 'efi':
+ return # Skip EFI system
+
+ problematic_devices = []
+ for grub_dev in api.consume(GRUBDevicePartitionLayout):
+ first_partition = min(grub_dev.partitions, key=lambda partition: partition.start_offset)
+ if first_partition.start_offset < SAFE_OFFSET_BYTES:
+ problematic_devices.append(grub_dev.device)
+
+ if problematic_devices:
+ summary = (
+ 'On the system booting by using BIOS, the in-place upgrade fails '
+ 'when upgrading the GRUB2 bootloader if the boot disk\'s embedding area '
+ 'does not contain enough space for the core image installation. '
+ 'This results in a broken system, and can occur when the disk has been '
+ 'partitioned manually, for example using the RHEL 6 fdisk utility.\n\n'
+
+ 'The list of devices with small embedding area:\n'
+ '{0}.'
+ )
+ problematic_devices_fmt = ['- {0}'.format(dev) for dev in problematic_devices]
+
+ hint = (
+ 'We recommend to perform a fresh installation of the RHEL 8 system '
+ 'instead of performing the in-place upgrade.\n'
+ 'Another possibility is to reformat the devices so that there is '
+ 'at least {0} kiB space before the first partition. '
+ 'Note that this operation is not supported and does not have to be '
+ 'always possible.'
+ )
+
+ reporting.create_report([
+ reporting.Title('Found GRUB devices with too little space reserved before the first partition'),
+ reporting.Summary(summary.format('\n'.join(problematic_devices_fmt))),
+ reporting.Remediation(hint=hint.format(SAFE_OFFSET_BYTES // 1024)),
+ reporting.Severity(reporting.Severity.HIGH),
+ reporting.Groups([reporting.Groups.BOOT]),
+ reporting.Groups([reporting.Groups.INHIBITOR]),
+ ])
diff --git a/repos/system_upgrade/el7toel8/actors/checkfirstpartitionoffset/tests/test_check_first_partition_offset.py b/repos/system_upgrade/el7toel8/actors/checkfirstpartitionoffset/tests/test_check_first_partition_offset.py
new file mode 100644
index 00000000..e349ff7d
--- /dev/null
+++ b/repos/system_upgrade/el7toel8/actors/checkfirstpartitionoffset/tests/test_check_first_partition_offset.py
@@ -0,0 +1,51 @@
+import pytest
+
+from leapp import reporting
+from leapp.libraries.actor import check_first_partition_offset
+from leapp.libraries.common import grub
+from leapp.libraries.common.testutils import create_report_mocked, CurrentActorMocked
+from leapp.libraries.stdlib import api
+from leapp.models import FirmwareFacts, GRUBDevicePartitionLayout, PartitionInfo
+from leapp.reporting import Report
+from leapp.utils.report import is_inhibitor
+
+
+@pytest.mark.parametrize(
+ ('devices', 'should_report'),
+ [
+ (
+ [
+ GRUBDevicePartitionLayout(device='/dev/vda',
+ partitions=[PartitionInfo(part_device='/dev/vda1', start_offset=32256)])
+ ],
+ True
+ ),
+ (
+ [
+ GRUBDevicePartitionLayout(device='/dev/vda',
+ partitions=[PartitionInfo(part_device='/dev/vda1', start_offset=1024*1025)])
+ ],
+ False
+ ),
+ (
+ [
+ GRUBDevicePartitionLayout(device='/dev/vda',
+ partitions=[PartitionInfo(part_device='/dev/vda1', start_offset=1024*1024)])
+ ],
+ False
+ )
+ ]
+)
+def test_bad_offset_reported(monkeypatch, devices, should_report):
+ def consume_mocked(model_cls):
+ if model_cls == FirmwareFacts:
+ return [FirmwareFacts(firmware='bios')]
+ return devices
+
+ monkeypatch.setattr(api, 'consume', consume_mocked)
+ monkeypatch.setattr(api, 'current_actor', CurrentActorMocked())
+ monkeypatch.setattr(reporting, 'create_report', create_report_mocked())
+
+ check_first_partition_offset.check_first_partition_offset()
+
+ assert bool(reporting.create_report.called) == should_report
diff --git a/repos/system_upgrade/el7toel8/actors/scangrubdevpartitionlayout/actor.py b/repos/system_upgrade/el7toel8/actors/scangrubdevpartitionlayout/actor.py
new file mode 100644
index 00000000..0db93aba
--- /dev/null
+++ b/repos/system_upgrade/el7toel8/actors/scangrubdevpartitionlayout/actor.py
@@ -0,0 +1,18 @@
+from leapp.actors import Actor
+from leapp.libraries.actor import scan_layout as scan_layout_lib
+from leapp.models import GRUBDevicePartitionLayout, GrubInfo
+from leapp.tags import FactsPhaseTag, IPUWorkflowTag
+
+
+class ScanGRUBDevicePartitionLayout(Actor):
+ """
+ Scan all identified GRUB devices for their partition layout.
+ """
+
+ name = 'scan_grub_device_partition_layout'
+ consumes = (GrubInfo,)
+ produces = (GRUBDevicePartitionLayout,)
+ tags = (FactsPhaseTag, IPUWorkflowTag,)
+
+ def process(self):
+ scan_layout_lib.scan_grub_device_partition_layout()
diff --git a/repos/system_upgrade/el7toel8/actors/scangrubdevpartitionlayout/libraries/scan_layout.py b/repos/system_upgrade/el7toel8/actors/scangrubdevpartitionlayout/libraries/scan_layout.py
new file mode 100644
index 00000000..bb2e6d9e
--- /dev/null
+++ b/repos/system_upgrade/el7toel8/actors/scangrubdevpartitionlayout/libraries/scan_layout.py
@@ -0,0 +1,64 @@
+from leapp.libraries.stdlib import api, CalledProcessError, run
+from leapp.models import GRUBDevicePartitionLayout, GrubInfo, PartitionInfo
+
+SAFE_OFFSET_BYTES = 1024*1024 # 1MiB
+
+
+def split_on_space_segments(line):
+ fragments = (fragment.strip() for fragment in line.split(' '))
+ return [fragment for fragment in fragments if fragment]
+
+
+def get_partition_layout(device):
+ try:
+ partition_table = run(['fdisk', '-l', '-u=sectors', device], split=True)['stdout']
+ except CalledProcessError as err:
+ # Unlikely - if the disk has no partition table, `fdisk` terminates with 0 (no err). Fdisk exits with an err
+ # when the device does not exist, or if it is too small to contain a partition table.
+
+ err_msg = 'Failed to run `fdisk` to obtain the partition table of the device {0}. Full error: \'{1}\''
+ api.current_logger().error(err_msg.format(device, str(err)))
+ return None
+
+ table_iter = iter(partition_table)
+
+ for line in table_iter:
+ if not line.startswith('Units'):
+ # We are still reading general device information and not the table itself
+ continue
+
+ unit = line.split('=')[2].strip() # Contains '512 bytes'
+ unit = int(unit.split(' ')[0].strip())
+ break # First line of the partition table header
+
+ for line in table_iter:
+ line = line.strip()
+ if not line.startswith('Device'):
+ continue
+
+ part_all_attrs = split_on_space_segments(line)
+ break
+
+ partitions = []
+ for partition_line in table_iter:
+ # Fields: Device Boot Start End Sectors Size Id Type
+ # The line looks like: `/dev/vda1 * 2048 2099199 2097152 1G 83 Linux`
+ part_info = split_on_space_segments(partition_line)
+
+ # If the partition is not bootable, the Boot column might be empty
+ part_device = part_info[0]
+ part_start = int(part_info[2]) if len(part_info) == len(part_all_attrs) else int(part_info[1])
+ partitions.append(PartitionInfo(part_device=part_device, start_offset=part_start*unit))
+
+ return GRUBDevicePartitionLayout(device=device, partitions=partitions)
+
+
+def scan_grub_device_partition_layout():
+ grub_devices = next(api.consume(GrubInfo), None)
+ if not grub_devices:
+ return
+
+ for device in grub_devices.orig_devices:
+ dev_info = get_partition_layout(device)
+ if dev_info:
+ api.produce(dev_info)
diff --git a/repos/system_upgrade/el7toel8/actors/scangrubdevpartitionlayout/tests/test_scan_partition_layout.py b/repos/system_upgrade/el7toel8/actors/scangrubdevpartitionlayout/tests/test_scan_partition_layout.py
new file mode 100644
index 00000000..37bb5bcf
--- /dev/null
+++ b/repos/system_upgrade/el7toel8/actors/scangrubdevpartitionlayout/tests/test_scan_partition_layout.py
@@ -0,0 +1,78 @@
+from collections import namedtuple
+
+import pytest
+
+from leapp.libraries.actor import scan_layout as scan_layout_lib
+from leapp.libraries.common import grub
+from leapp.libraries.common.testutils import create_report_mocked, produce_mocked
+from leapp.libraries.stdlib import api
+from leapp.models import GRUBDevicePartitionLayout, GrubInfo
+from leapp.utils.report import is_inhibitor
+
+Device = namedtuple('Device', ['name', 'partitions', 'sector_size'])
+Partition = namedtuple('Partition', ['name', 'start_offset'])
+
+
+@pytest.mark.parametrize(
+ 'devices',
+ [
+ (
+ Device(name='/dev/vda', sector_size=512,
+ partitions=[Partition(name='/dev/vda1', start_offset=63),
+ Partition(name='/dev/vda2', start_offset=1000)]),
+ Device(name='/dev/vdb', sector_size=1024,
+ partitions=[Partition(name='/dev/vdb1', start_offset=100),
+ Partition(name='/dev/vdb2', start_offset=20000)])
+ ),
+ (
+ Device(name='/dev/vda', sector_size=512,
+ partitions=[Partition(name='/dev/vda1', start_offset=111),
+ Partition(name='/dev/vda2', start_offset=1000)]),
+ )
+ ]
+)
+def test_get_partition_layout(monkeypatch, devices):
+ device_to_fdisk_output = {}
+ for device in devices:
+ fdisk_output = [
+ 'Disk {0}: 42.9 GB, 42949672960 bytes, 83886080 sectors'.format(device.name),
+ 'Units = sectors of 1 * {sector_size} = {sector_size} bytes'.format(sector_size=device.sector_size),
+ 'Sector size (logical/physical): 512 bytes / 512 bytes',
+ 'I/O size (minimum/optimal): 512 bytes / 512 bytes',
+ 'Disk label type: dos',
+ 'Disk identifier: 0x0000000da',
+ '',
+ ' Device Boot Start End Blocks Id System',
+ ]
+ for part in device.partitions:
+ part_line = '{0} * {1} 2099199 1048576 83 Linux'.format(part.name, part.start_offset)
+ fdisk_output.append(part_line)
+
+ device_to_fdisk_output[device.name] = fdisk_output
+
+ def mocked_run(cmd, *args, **kwargs):
+ assert cmd[:3] == ['fdisk', '-l', '-u=sectors']
+ device = cmd[3]
+ output = device_to_fdisk_output[device]
+ return {'stdout': output}
+
+ def consume_mocked(*args, **kwargs):
+ yield GrubInfo(orig_devices=[device.name for device in devices])
+
+ monkeypatch.setattr(scan_layout_lib, 'run', mocked_run)
+ monkeypatch.setattr(api, 'produce', produce_mocked())
+ monkeypatch.setattr(api, 'consume', consume_mocked)
+
+ scan_layout_lib.scan_grub_device_partition_layout()
+
+ assert api.produce.called == len(devices)
+
+ dev_name_to_desc = {dev.name: dev for dev in devices}
+
+ for message in api.produce.model_instances:
+ assert isinstance(message, GRUBDevicePartitionLayout)
+ dev = dev_name_to_desc[message.device]
+
+ expected_part_name_to_start = {part.name: part.start_offset*dev.sector_size for part in dev.partitions}
+ actual_part_name_to_start = {part.part_device: part.start_offset for part in message.partitions}
+ assert expected_part_name_to_start == actual_part_name_to_start
diff --git a/repos/system_upgrade/el7toel8/models/partitionlayout.py b/repos/system_upgrade/el7toel8/models/partitionlayout.py
new file mode 100644
index 00000000..c6483283
--- /dev/null
+++ b/repos/system_upgrade/el7toel8/models/partitionlayout.py
@@ -0,0 +1,28 @@
+from leapp.models import fields, Model
+from leapp.topics import SystemInfoTopic
+
+
+class PartitionInfo(Model):
+ """
+ Information about a single partition.
+ """
+ topic = SystemInfoTopic
+
+ part_device = fields.String()
+ """ Partition device """
+
+ start_offset = fields.Integer()
+ """ Partition start - offset from the start of the block device in bytes """
+
+
+class GRUBDevicePartitionLayout(Model):
+ """
+ Information about partition layout of a GRUB device.
+ """
+ topic = SystemInfoTopic
+
+ device = fields.String()
+ """ GRUB device """
+
+ partitions = fields.List(fields.Model(PartitionInfo))
+ """ List of partitions present on the device """
--
2.42.0
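
The numbers behind the new check, spelled out: fdisk reports partition starts in sectors, the 'Units' line supplies the sector size, and their product is compared against 1 MiB. A standalone sketch of just that comparison (not the actor code; the helper name is made up):

    SAFE_OFFSET_BYTES = 1024 * 1024  # 1 MiB, the same limit the actor uses

    def embedding_area_too_small(partition_start_sectors, sector_size=512):
        """True when the gap before the first partition cannot hold grub2's core image."""
        first_start = min(partition_start_sectors)
        return first_start * sector_size < SAFE_OFFSET_BYTES

    # RHEL 6-era layouts often start at sector 63 (63 * 512 = 32256 B) -> inhibited,
    # modern layouts start at sector 2048 (exactly 1 MiB) -> fine.
    print(embedding_area_too_small([63, 1026048]))    # True
    print(embedding_area_too_small([2048, 2099200]))  # False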

View File

@ -0,0 +1,138 @@
From 683176dbeeeff32cc6b04410b4f7e4715a3de8e0 Mon Sep 17 00:00:00 2001
From: Petr Stodulka <pstodulk@redhat.com>
Date: Wed, 24 Apr 2024 01:09:51 +0200
Subject: [PATCH 15/34] boot: Skip checks of first partition offset for gpt
partition table
This is an extension of the previous commit. The original problem that
we are trying to resolve is to be sure the embedding area (MBR gap)
has the expected size. This is irrelevant when a GPT partition table
is used on a device. The fdisk output format differs for a GPT
disk label, which breaks the parsing, resulting in an empty
list of partitions in the related GRUBDevicePartitionLayout msg.
For now, let's skip producing msgs for "GPT devices". As a seatbelt,
ignore processing of messages with an empty partitions field, expecting
that such a device does not contain an MBR. We want to prevent false
positive inhibitors (and FP blocking errors). We expect that the total
number of machines with a small embedding area is very small, so even
if we missed something (which is not expected now, to our best
knowledge) it's still a good trade-off, as the major goal is to reduce
the number of machines that have problems with the in-place upgrade.
The solution can be updated in the future if there is a reason for it.
---
.../libraries/check_first_partition_offset.py | 7 +++++
.../test_check_first_partition_offset.py | 16 +++++++++++
.../libraries/scan_layout.py | 28 +++++++++++++++++++
.../tests/test_scan_partition_layout.py | 5 ++++
4 files changed, 56 insertions(+)
diff --git a/repos/system_upgrade/el7toel8/actors/checkfirstpartitionoffset/libraries/check_first_partition_offset.py b/repos/system_upgrade/el7toel8/actors/checkfirstpartitionoffset/libraries/check_first_partition_offset.py
index fbd4e178..fca9c3ff 100644
--- a/repos/system_upgrade/el7toel8/actors/checkfirstpartitionoffset/libraries/check_first_partition_offset.py
+++ b/repos/system_upgrade/el7toel8/actors/checkfirstpartitionoffset/libraries/check_first_partition_offset.py
@@ -16,6 +16,13 @@ def check_first_partition_offset():
problematic_devices = []
for grub_dev in api.consume(GRUBDevicePartitionLayout):
+ if not grub_dev.partitions:
+ # NOTE(pstodulk): In case of an empty partition list we have nothing to do.
+ # This could happen when the fdisk output is different than expected,
+ # e.g. when a GPT partition table is used on the disk. Right now we are
+ # strictly interested in MBR only, so we ignore these cases.
+ # This is a seatbelt, as the msg should not be produced for GPT at all.
+ continue
first_partition = min(grub_dev.partitions, key=lambda partition: partition.start_offset)
if first_partition.start_offset < SAFE_OFFSET_BYTES:
problematic_devices.append(grub_dev.device)
diff --git a/repos/system_upgrade/el7toel8/actors/checkfirstpartitionoffset/tests/test_check_first_partition_offset.py b/repos/system_upgrade/el7toel8/actors/checkfirstpartitionoffset/tests/test_check_first_partition_offset.py
index e349ff7d..f925f7d4 100644
--- a/repos/system_upgrade/el7toel8/actors/checkfirstpartitionoffset/tests/test_check_first_partition_offset.py
+++ b/repos/system_upgrade/el7toel8/actors/checkfirstpartitionoffset/tests/test_check_first_partition_offset.py
@@ -20,6 +20,16 @@ from leapp.utils.report import is_inhibitor
],
True
),
+ (
+ [
+ GRUBDevicePartitionLayout(device='/dev/vda',
+ partitions=[
+ PartitionInfo(part_device='/dev/vda2', start_offset=1024*1025),
+ PartitionInfo(part_device='/dev/vda1', start_offset=32256)
+ ])
+ ],
+ True
+ ),
(
[
GRUBDevicePartitionLayout(device='/dev/vda',
@@ -33,6 +43,12 @@ from leapp.utils.report import is_inhibitor
partitions=[PartitionInfo(part_device='/dev/vda1', start_offset=1024*1024)])
],
False
+ ),
+ (
+ [
+ GRUBDevicePartitionLayout(device='/dev/vda', partitions=[])
+ ],
+ False
)
]
)
diff --git a/repos/system_upgrade/el7toel8/actors/scangrubdevpartitionlayout/libraries/scan_layout.py b/repos/system_upgrade/el7toel8/actors/scangrubdevpartitionlayout/libraries/scan_layout.py
index bb2e6d9e..f51bcda4 100644
--- a/repos/system_upgrade/el7toel8/actors/scangrubdevpartitionlayout/libraries/scan_layout.py
+++ b/repos/system_upgrade/el7toel8/actors/scangrubdevpartitionlayout/libraries/scan_layout.py
@@ -31,6 +31,34 @@ def get_partition_layout(device):
unit = int(unit.split(' ')[0].strip())
break # First line of the partition table header
+ # Discover disk label type: dos | gpt
+ for line in table_iter:
+ line = line.strip()
+ if not line.startswith('Disk label type'):
+ continue
+ disk_type = line.split(':')[1].strip()
+ break
+
+ if disk_type == 'gpt':
+ api.current_logger().info(
+ 'Detected GPT partition table. Skipping produce of GRUBDevicePartitionLayout message.'
+ )
+ # NOTE(pstodulk): The GPT table has a different output format than
+ # expected below, example (ignore start/end lines):
+ # --------------------------- start ----------------------------------
+ # # Start End Size Type Name
+ # 1 2048 4095 1M BIOS boot
+ # 2 4096 2101247 1G Microsoft basic
+ # 3 2101248 41940991 19G Linux LVM
+ # ---------------------------- end -----------------------------------
+ # But mainly, in case of GPT, we have nothing to actually check as
+ # we are gathering this data now mainly to get information about the
+ # actual size of the embedding area (MBR gap). In case of GPT, there is
+ # a BIOS boot / PReP boot partition, which always has 1 MiB and fulfills
+ # our expectations. So in this case skip further processing and generation
+ # of the msg. Let's improve it in the future if we find a reason for it.
+ return None
+
for line in table_iter:
line = line.strip()
if not line.startswith('Device'):
diff --git a/repos/system_upgrade/el7toel8/actors/scangrubdevpartitionlayout/tests/test_scan_partition_layout.py b/repos/system_upgrade/el7toel8/actors/scangrubdevpartitionlayout/tests/test_scan_partition_layout.py
index 37bb5bcf..54025379 100644
--- a/repos/system_upgrade/el7toel8/actors/scangrubdevpartitionlayout/tests/test_scan_partition_layout.py
+++ b/repos/system_upgrade/el7toel8/actors/scangrubdevpartitionlayout/tests/test_scan_partition_layout.py
@@ -76,3 +76,8 @@ def test_get_partition_layout(monkeypatch, devices):
expected_part_name_to_start = {part.name: part.start_offset*dev.sector_size for part in dev.partitions}
actual_part_name_to_start = {part.part_device: part.start_offset for part in message.partitions}
assert expected_part_name_to_start == actual_part_name_to_start
+
+
+def test_get_partition_layout_gpt(monkeypatch):
+ # TODO(pstodulk): skipping for now, due to time pressure. Testing for now manually.
+ pass
--
2.42.0

View File

@ -0,0 +1,152 @@
From 8d84c02b92f3a7a7d74a272aa31d5b0a0f24faea Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Thu, 11 Apr 2024 15:56:51 +0200
Subject: [PATCH 16/34] repomapping: Add RHEL7 ELS repos
RHEL-21891
---
etc/leapp/files/repomap.json | 60 ++++++++++++++++++-
.../common/models/repositoriesmap.py | 2 +-
2 files changed, 59 insertions(+), 3 deletions(-)
diff --git a/etc/leapp/files/repomap.json b/etc/leapp/files/repomap.json
index 57cab1ad..1ee6c56a 100644
--- a/etc/leapp/files/repomap.json
+++ b/etc/leapp/files/repomap.json
@@ -1,6 +1,6 @@
{
- "datetime": "202401261328Z",
- "version_format": "1.2.0",
+ "datetime": "202404091246Z",
+ "version_format": "1.2.1",
"mapping": [
{
"source_major_version": "7",
@@ -303,6 +303,13 @@
"channel": "beta",
"repo_type": "rpm"
},
+ {
+ "major_version": "7",
+ "repoid": "rhel-7-for-system-z-els-rpms",
+ "arch": "s390x",
+ "channel": "els",
+ "repo_type": "rpm"
+ },
{
"major_version": "7",
"repoid": "rhel-7-for-system-z-eus-rpms",
@@ -346,6 +353,13 @@
"channel": "e4s",
"repo_type": "rpm"
},
+ {
+ "major_version": "7",
+ "repoid": "rhel-7-server-els-rpms",
+ "arch": "x86_64",
+ "channel": "els",
+ "repo_type": "rpm"
+ },
{
"major_version": "7",
"repoid": "rhel-7-server-eus-rpms",
@@ -486,6 +500,13 @@
"channel": "ga",
"repo_type": "rpm"
},
+ {
+ "major_version": "7",
+ "repoid": "rhel-7-for-system-z-els-optional-rpms",
+ "arch": "s390x",
+ "channel": "els",
+ "repo_type": "rpm"
+ },
{
"major_version": "7",
"repoid": "rhel-7-for-system-z-eus-optional-rpms",
@@ -529,6 +550,13 @@
"channel": "e4s",
"repo_type": "rpm"
},
+ {
+ "major_version": "7",
+ "repoid": "rhel-7-server-els-optional-rpms",
+ "arch": "x86_64",
+ "channel": "els",
+ "repo_type": "rpm"
+ },
{
"major_version": "7",
"repoid": "rhel-7-server-eus-optional-rpms",
@@ -892,6 +920,13 @@
"channel": "beta",
"repo_type": "rpm"
},
+ {
+ "major_version": "7",
+ "repoid": "rhel-sap-for-rhel-7-for-system-z-els-rpms",
+ "arch": "s390x",
+ "channel": "els",
+ "repo_type": "rpm"
+ },
{
"major_version": "7",
"repoid": "rhel-sap-for-rhel-7-for-system-z-eus-rpms",
@@ -920,6 +955,13 @@
"channel": "e4s",
"repo_type": "rpm"
},
+ {
+ "major_version": "7",
+ "repoid": "rhel-sap-for-rhel-7-server-els-rpms",
+ "arch": "x86_64",
+ "channel": "els",
+ "repo_type": "rpm"
+ },
{
"major_version": "7",
"repoid": "rhel-sap-for-rhel-7-server-eus-rhui-rpms",
@@ -1022,6 +1064,13 @@
"channel": "e4s",
"repo_type": "rpm"
},
+ {
+ "major_version": "7",
+ "repoid": "rhel-sap-hana-for-rhel-7-server-els-rpms",
+ "arch": "x86_64",
+ "channel": "els",
+ "repo_type": "rpm"
+ },
{
"major_version": "7",
"repoid": "rhel-sap-hana-for-rhel-7-server-eus-rhui-rpms",
@@ -1125,6 +1174,13 @@
"channel": "e4s",
"repo_type": "rpm"
},
+ {
+ "major_version": "7",
+ "repoid": "rhel-ha-for-rhel-7-server-els-rpms",
+ "arch": "x86_64",
+ "channel": "els",
+ "repo_type": "rpm"
+ },
{
"major_version": "7",
"repoid": "rhel-ha-for-rhel-7-server-eus-rhui-rpms",
diff --git a/repos/system_upgrade/common/models/repositoriesmap.py b/repos/system_upgrade/common/models/repositoriesmap.py
index 7ef0bdb4..7192a60d 100644
--- a/repos/system_upgrade/common/models/repositoriesmap.py
+++ b/repos/system_upgrade/common/models/repositoriesmap.py
@@ -61,7 +61,7 @@ class PESIDRepositoryEntry(Model):
too.
"""
- channel = fields.StringEnum(['ga', 'e4s', 'eus', 'aus', 'beta'])
+ channel = fields.StringEnum(['ga', 'e4s', 'eus', 'aus', 'beta', 'els'])
"""
The 'channel' of the repository.
--
2.42.0

View File

@ -0,0 +1,25 @@
From f154c6566a2fbeb4fe64405794b6777c156cabcb Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Mon, 15 Apr 2024 17:13:03 +0200
Subject: [PATCH 17/34] bump required repomap version
---
.../actors/repositoriesmapping/libraries/repositoriesmapping.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/repos/system_upgrade/common/actors/repositoriesmapping/libraries/repositoriesmapping.py b/repos/system_upgrade/common/actors/repositoriesmapping/libraries/repositoriesmapping.py
index 8045634e..58089195 100644
--- a/repos/system_upgrade/common/actors/repositoriesmapping/libraries/repositoriesmapping.py
+++ b/repos/system_upgrade/common/actors/repositoriesmapping/libraries/repositoriesmapping.py
@@ -17,7 +17,7 @@ REPOMAP_FILE = 'repomap.json'
class RepoMapData(object):
- VERSION_FORMAT = '1.2.0'
+ VERSION_FORMAT = '1.2.1'
def __init__(self):
self.repositories = []
--
2.42.0

View File

@ -0,0 +1,24 @@
From 6dc1621c4395412724d8cccf7a2694013fb4f5f0 Mon Sep 17 00:00:00 2001
From: Matej Matuska <mmatuska@redhat.com>
Date: Mon, 15 Apr 2024 17:25:07 +0200
Subject: [PATCH 18/34] fixup! bump required repomap version
---
.../actors/repositoriesmapping/tests/files/repomap_example.json | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/repos/system_upgrade/common/actors/repositoriesmapping/tests/files/repomap_example.json b/repos/system_upgrade/common/actors/repositoriesmapping/tests/files/repomap_example.json
index 5e95f5fe..a5fc5fe1 100644
--- a/repos/system_upgrade/common/actors/repositoriesmapping/tests/files/repomap_example.json
+++ b/repos/system_upgrade/common/actors/repositoriesmapping/tests/files/repomap_example.json
@@ -1,6 +1,6 @@
{
"datetime": "202107141655Z",
- "version_format": "1.2.0",
+ "version_format": "1.2.1",
"mapping": [
{
"source_major_version": "7",
--
2.42.0

View File

@ -0,0 +1,33 @@
From a5bd2546f748ddac4240b3a34b168e422ef78c99 Mon Sep 17 00:00:00 2001
From: David Kubek <dkubek@redhat.com>
Date: Wed, 24 Apr 2024 11:05:32 +0200
Subject: [PATCH 19/34] Fix incorrect command formulation
Mitigation of an error where instead of no argument an "empty argument"
was passed to `lscpu`
lscpu ''
vs.
lscpu
---
repos/system_upgrade/common/actors/scancpu/libraries/scancpu.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/repos/system_upgrade/common/actors/scancpu/libraries/scancpu.py b/repos/system_upgrade/common/actors/scancpu/libraries/scancpu.py
index 7451066a..db3f92d4 100644
--- a/repos/system_upgrade/common/actors/scancpu/libraries/scancpu.py
+++ b/repos/system_upgrade/common/actors/scancpu/libraries/scancpu.py
@@ -12,7 +12,7 @@ PPC64LE_MODEL = re.compile(r'\d+\.\d+ \(pvr (?P<family>[0-9a-fA-F]+) 0*[0-9a-fA-
def _get_lscpu_output(output_json=False):
try:
- result = run(['lscpu', '-J' if output_json else ''])
+ result = run(['lscpu'] + (['-J'] if output_json else []))
return result.get('stdout', '')
except (OSError, CalledProcessError):
api.current_logger().debug('Executing `lscpu` failed', exc_info=True)
--
2.42.0
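
The subtlety here is that the original conditional expression chose between two arguments ('-J' and ''), not between an argument and nothing, so `lscpu` always received a second argv entry. A quick illustration of the two resulting command lists (plain Python, independent of leapp's run()):

    output_json = False

    # before the fix: the ternary yields an empty-string argument
    cmd_before = ['lscpu', '-J' if output_json else '']       # -> ['lscpu', ''], i.e. `lscpu ''`

    # after the fix: the ternary yields an optional sub-list
    cmd_after = ['lscpu'] + (['-J'] if output_json else [])   # -> ['lscpu'], i.e. plain `lscpu`

    print(cmd_before)
    print(cmd_after)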

View File

@ -0,0 +1,37 @@
From 5e51626069dd7d5e38f36cafdd45f14cfb213e5d Mon Sep 17 00:00:00 2001
From: Evgeni Golov <evgeni@golov.de>
Date: Thu, 25 Apr 2024 14:59:55 +0200
Subject: [PATCH 20/34] mention `Report` in produces of
transitionsystemdservicesstates (#1210)
fixes upgrade warnings:
leapp.workflow.Applications.transition_systemd_services_states: Actor is trying to produce a message of type "<class 'leapp.reporting.Report'>" without mentioning it explicitely in the actor's "produces" tuple. The message will be ignored
---
.../actors/systemd/transitionsystemdservicesstates/actor.py | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/repos/system_upgrade/common/actors/systemd/transitionsystemdservicesstates/actor.py b/repos/system_upgrade/common/actors/systemd/transitionsystemdservicesstates/actor.py
index 139f9f6b..d2863e09 100644
--- a/repos/system_upgrade/common/actors/systemd/transitionsystemdservicesstates/actor.py
+++ b/repos/system_upgrade/common/actors/systemd/transitionsystemdservicesstates/actor.py
@@ -7,6 +7,7 @@ from leapp.models import (
SystemdServicesPresetInfoTarget,
SystemdServicesTasks
)
+from leapp.reporting import Report
from leapp.tags import ApplicationsPhaseTag, IPUWorkflowTag
@@ -46,7 +47,7 @@ class TransitionSystemdServicesStates(Actor):
SystemdServicesPresetInfoSource,
SystemdServicesPresetInfoTarget
)
- produces = (SystemdServicesTasks,)
+ produces = (Report, SystemdServicesTasks)
tags = (ApplicationsPhaseTag, IPUWorkflowTag)
def process(self):
--
2.42.0
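
Editor's note: the warning quoted above is emitted whenever an actor produces a message type it has not declared. A minimal sketch of the required pattern (actor name and report text are illustrative, not taken from the repository):

from leapp.actors import Actor
from leapp import reporting
from leapp.reporting import Report
from leapp.tags import ApplicationsPhaseTag, IPUWorkflowTag


class ExampleReportingActor(Actor):
    # Illustrative actor: Report must appear in `produces`
    # for create_report() messages to be delivered.
    name = 'example_reporting_actor'
    consumes = ()
    produces = (Report,)
    tags = (ApplicationsPhaseTag, IPUWorkflowTag)

    def process(self):
        reporting.create_report([
            reporting.Title('Example report'),
            reporting.Summary('Illustrative summary only.'),
        ])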

View File

@ -0,0 +1,47 @@
From 346b741209553b9aeb96f2f728480e740a25e0af Mon Sep 17 00:00:00 2001
From: Inessa Vasilevskaya <ivasilev@redhat.com>
Date: Thu, 25 Apr 2024 13:25:56 +0200
Subject: [PATCH 21/34] Update packit config after tier redefinition
Now, for basic sanity verification in upstream, tests tagged
'tier0' will be used instead of 'sanity'.
RHELMISC-3211
---
.packit.yaml | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/.packit.yaml b/.packit.yaml
index bce97bad..ed6412dc 100644
--- a/.packit.yaml
+++ b/.packit.yaml
@@ -114,7 +114,7 @@ jobs:
tf_extra_params:
test:
tmt:
- plan_filter: 'tag:sanity & enabled:true'
+ plan_filter: 'tag:tier0 & enabled:true'
environments:
- tmt:
context:
@@ -318,7 +318,7 @@ jobs:
tf_extra_params:
test:
tmt:
- plan_filter: 'tag:sanity & tag:8to9 & enabled:true'
+ plan_filter: 'tag:tier0 & tag:8to9 & enabled:true'
environments:
- tmt:
context:
@@ -407,7 +407,7 @@ jobs:
tf_extra_params:
test:
tmt:
- plan_filter: 'tag:sanity & tag:8to9 & enabled:true'
+ plan_filter: 'tag:tier0 & tag:8to9 & enabled:true'
environments:
- tmt:
context:
--
2.42.0

View File

@ -0,0 +1,34 @@
From 0d904126f785ae785e29165bf83a493d8f837fbe Mon Sep 17 00:00:00 2001
From: Petr Stodulka <pstodulk@redhat.com>
Date: Fri, 26 Apr 2024 14:45:06 +0200
Subject: [PATCH 22/34] Update reboot msg: Note the console access
Some users haven't read the upgrade documentation and are not aware
that the actual upgrade runs after the reboot. As they wait only for
the ssh connection to come back, they think that something is wrong and
sometimes reboot the machine, interrupting the entire process and in
some cases leaving the machine broken.
Add a note that console access is needed in case they want to watch
the upgrade progress.
Jira: https://issues.redhat.com/browse/RHEL-27231
---
commands/upgrade/__init__.py | 1 +
1 file changed, 1 insertion(+)
diff --git a/commands/upgrade/__init__.py b/commands/upgrade/__init__.py
index cc5fe647..1e15b59c 100644
--- a/commands/upgrade/__init__.py
+++ b/commands/upgrade/__init__.py
@@ -117,6 +117,7 @@ def upgrade(args, breadcrumbs):
sys.stdout.write(
'Reboot the system to continue with the upgrade.'
' This might take a while depending on the system configuration.\n'
+ 'Make sure you have console access to view the actual upgrade process.\n'
)
--
2.42.0

View File

@ -0,0 +1,435 @@
From 2b27cfbbed1059d8af1add3a209d919901897e47 Mon Sep 17 00:00:00 2001
From: Toshio Kuratomi <tkuratom@redhat.com>
Date: Tue, 30 Apr 2024 07:28:35 -0700
Subject: [PATCH 23/34] Fix kernel cmdline args we add not being propagated to
 newly installed kernels. (#1193)
On some upgrades, the kernel commandline args that we were adding were applied to the default kernel,
but once the user installed a new kernel, those args were not propagated to the new kernel. This
was happening on S390x and on RHEL7=>8 upgrades.
To fix this, we add the kernel commandline args both to the default kernel and to the defaults for
all kernels.
On S390x and upgrades to RHEL9 or greater, this is done by placing the kernel cmdline arguments into
the /etc/kernel/cmdline file.
On upgrades to RHEL <= 8 for all architectures other than S390x, this is done by having
grub2-editenv modify the /boot/grub2/grubenv file.
Jira: RHEL-26840, OAMG-10424
---
.../actors/kernelcmdlineconfig/actor.py | 3 +-
.../libraries/kernelcmdlineconfig.py | 166 +++++++++++++++++-
.../tests/test_kernelcmdlineconfig.py | 116 ++++++++++--
3 files changed, 261 insertions(+), 24 deletions(-)
diff --git a/repos/system_upgrade/common/actors/kernelcmdlineconfig/actor.py b/repos/system_upgrade/common/actors/kernelcmdlineconfig/actor.py
index 13c47113..b44fd835 100644
--- a/repos/system_upgrade/common/actors/kernelcmdlineconfig/actor.py
+++ b/repos/system_upgrade/common/actors/kernelcmdlineconfig/actor.py
@@ -29,4 +29,5 @@ class KernelCmdlineConfig(Actor):
if ff.firmware == 'bios' and os.path.ismount('/boot/efi'):
configs = ['/boot/grub2/grub.cfg', '/boot/efi/EFI/redhat/grub.cfg']
- kernelcmdlineconfig.modify_kernel_args_in_boot_cfg(configs)
+
+ kernelcmdlineconfig.entrypoint(configs)
diff --git a/repos/system_upgrade/common/actors/kernelcmdlineconfig/libraries/kernelcmdlineconfig.py b/repos/system_upgrade/common/actors/kernelcmdlineconfig/libraries/kernelcmdlineconfig.py
index f98e8168..ad59eb22 100644
--- a/repos/system_upgrade/common/actors/kernelcmdlineconfig/libraries/kernelcmdlineconfig.py
+++ b/repos/system_upgrade/common/actors/kernelcmdlineconfig/libraries/kernelcmdlineconfig.py
@@ -1,9 +1,27 @@
+import re
+
+from leapp import reporting
from leapp.exceptions import StopActorExecutionError
from leapp.libraries import stdlib
-from leapp.libraries.common.config import architecture
+from leapp.libraries.common.config import architecture, version
from leapp.libraries.stdlib import api
from leapp.models import InstalledTargetKernelInfo, KernelCmdlineArg, TargetKernelCmdlineArgTasks
+KERNEL_CMDLINE_FILE = "/etc/kernel/cmdline"
+
+
+class ReadOfKernelArgsError(Exception):
+ """
+ Failed to retrieve the kernel command line arguments
+ """
+
+
+def use_cmdline_file():
+ if (architecture.matches_architecture(architecture.ARCH_S390X) or
+ version.matches_target_version('>= 9.0')):
+ return True
+ return False
+
def run_grubby_cmd(cmd):
try:
@@ -15,6 +33,9 @@ def run_grubby_cmd(cmd):
stdlib.run(['/usr/sbin/zipl'])
except (OSError, stdlib.CalledProcessError) as e:
+ # In most cases we don't raise StopActorExecutionError in post-upgrade
+ # actors.
+ #
raise StopActorExecutionError(
"Failed to append extra arguments to kernel command line.",
details={"details": str(e)})
@@ -31,22 +52,36 @@ def format_kernelarg_msgs_for_grubby_cmd(kernelarg_msgs):
return ' '.join(kernel_args)
-def modify_kernel_args_in_boot_cfg(configs_to_modify_explicitly=None):
- kernel_info = next(api.consume(InstalledTargetKernelInfo), None)
- if not kernel_info:
- return
+def set_default_kernel_args(kernel_args):
+ if use_cmdline_file():
+ # Put kernel_args into /etc/kernel/cmdline
+ with open(KERNEL_CMDLINE_FILE, 'w') as f:
+ f.write(kernel_args)
+ else:
+ # Use grub2-editenv to put the kernel args into /boot/grub2/grubenv
+ stdlib.run(['grub2-editenv', '-', 'set', 'kernelopts={}'.format(kernel_args)])
- # Collect desired kernelopt modifications
+
+def retrieve_arguments_to_modify():
+ """
+ Retrieve the arguments other actors would like to add or remove from the kernel cmdline.
+ """
kernelargs_msgs_to_add = list(api.consume(KernelCmdlineArg))
kernelargs_msgs_to_remove = []
+
for target_kernel_arg_task in api.consume(TargetKernelCmdlineArgTasks):
kernelargs_msgs_to_add.extend(target_kernel_arg_task.to_add)
kernelargs_msgs_to_remove.extend(target_kernel_arg_task.to_remove)
- if not kernelargs_msgs_to_add and not kernelargs_msgs_to_remove:
- return # There is no work to do
+ return kernelargs_msgs_to_add, kernelargs_msgs_to_remove
+
- grubby_modify_kernelargs_cmd = ['grubby', '--update-kernel={0}'.format(kernel_info.kernel_img_path)]
+def modify_args_for_default_kernel(kernel_info,
+ kernelargs_msgs_to_add,
+ kernelargs_msgs_to_remove,
+ configs_to_modify_explicitly=None):
+ grubby_modify_kernelargs_cmd = ['grubby',
+ '--update-kernel={0}'.format(kernel_info.kernel_img_path)]
if kernelargs_msgs_to_add:
grubby_modify_kernelargs_cmd += [
@@ -64,3 +99,116 @@ def modify_kernel_args_in_boot_cfg(configs_to_modify_explicitly=None):
run_grubby_cmd(cmd)
else:
run_grubby_cmd(grubby_modify_kernelargs_cmd)
+
+
+def _extract_grubby_value(record):
+ data = record.split('=', 1)[1]
+ matches = re.match(r'^([\'"]?)(.*)\1$', data)
+ return matches.group(2)
+
+
+def retrieve_args_for_default_kernel(kernel_info):
+ # Copy the args for the default kernel to all kernels.
+ kernel_args = None
+ kernel_root = None
+ cmd = ['grubby', '--info', kernel_info.kernel_img_path]
+ output = stdlib.run(cmd, split=False)
+ for record in output['stdout'].splitlines():
+ # This could be done with one regex but it's cleaner to parse it as
+ # structured data.
+ if record.startswith('args='):
+ temp_kernel_args = _extract_grubby_value(record)
+
+ if kernel_args:
+ api.current_logger().warning('Grubby output is malformed:'
+ ' `args=` is listed more than once.')
+ if kernel_args != temp_kernel_args:
+ raise ReadOfKernelArgsError('Grubby listed `args=` multiple'
+ ' times with different values.')
+ kernel_args = _extract_grubby_value(record)
+ elif record.startswith('root='):
+ api.current_logger().warning('Grubby output is malformed:'
+ ' `root=` is listed more than once.')
+ if kernel_root:
+ raise ReadOfKernelArgsError('Grubby listed `root=` multiple'
+ ' times with different values')
+ kernel_root = _extract_grubby_value(record)
+
+ if not kernel_args or not kernel_root:
+ raise ReadOfKernelArgsError(
+ 'Failed to retrieve kernel command line to save for future installed'
+ ' kernels: root={}, args={}'.format(kernel_root, kernel_args)
+ )
+
+ return kernel_root, kernel_args
+
+
+def modify_kernel_args_in_boot_cfg(configs_to_modify_explicitly=None):
+ kernel_info = next(api.consume(InstalledTargetKernelInfo), None)
+ if not kernel_info:
+ return
+
+ # Collect desired kernelopt modifications
+ kernelargs_msgs_to_add, kernelargs_msgs_to_remove = retrieve_arguments_to_modify()
+ if not kernelargs_msgs_to_add and not kernelargs_msgs_to_remove:
+ # Nothing to do
+ return
+
+ # Modify the kernel cmdline for the default kernel
+ modify_args_for_default_kernel(kernel_info,
+ kernelargs_msgs_to_add,
+ kernelargs_msgs_to_remove,
+ configs_to_modify_explicitly)
+
+ # Copy kernel params from the default kernel to all the kernels
+ kernel_root, kernel_args = retrieve_args_for_default_kernel(kernel_info)
+ complete_kernel_args = 'root={} {}'.format(kernel_root, kernel_args)
+ set_default_kernel_args(complete_kernel_args)
+
+
+def entrypoint(configs=None):
+ try:
+ modify_kernel_args_in_boot_cfg(configs)
+ except ReadOfKernelArgsError as e:
+ api.current_logger().error(str(e))
+
+ if use_cmdline_file():
+ report_hint = reporting.Hints(
+ 'After the system has been rebooted into the new version of RHEL, you'
+ ' should take the kernel cmdline arguments from /proc/cmdline (Everything'
+ ' except the BOOT_IMAGE entry and initrd entries) and copy them into'
+ ' /etc/kernel/cmdline before installing any new kernels.'
+ )
+ else:
+ report_hint = reporting.Hints(
+ 'After the system has been rebooted into the new version of RHEL, you'
+ ' should take the kernel cmdline arguments from /proc/cmdline (Everything'
+ ' except the BOOT_IMAGE entry and initrd entries) and then use the'
+ ' grub2-editenv command to make them the default kernel args. For example,'
+ ' if /proc/cmdline contains:\n\n'
+ ' BOOT_IMAGE=(hd0,msdos1)/vmlinuz-4.18.0-425.3.1.el8.x86_64'
+ ' root=/dev/mapper/rhel_ibm--root ro console=tty0'
+ ' console=ttyS0,115200 rd_NO_PLYMOUTH\n\n'
+ ' then run the following grub2-editenv command:\n\n'
+ ' # grub2-editenv - set "kernelopts=root=/dev/mapper/rhel_ibm--root'
+ ' ro console=tty0 console=ttyS0,115200 rd_NO_PLYMOUTH"'
+ )
+
+ reporting.create_report([
+ reporting.Title('Could not set the kernel arguments for future kernels'),
+ reporting.Summary(
+ 'During the upgrade we needed to modify the kernel command line arguments.'
+ ' We were able to change the arguments for the default kernel but we were'
+ ' not able to set the arguments as the default for kernels installed in'
+ ' the future.'
+ ),
+ report_hint,
+ reporting.Severity(reporting.Severity.HIGH),
+ reporting.Groups([
+ reporting.Groups.BOOT,
+ reporting.Groups.KERNEL,
+ reporting.Groups.POST,
+ ]),
+ reporting.RelatedResource('file', '/etc/kernel/cmdline'),
+ reporting.RelatedResource('file', '/proc/cmdline'),
+ ])
diff --git a/repos/system_upgrade/common/actors/kernelcmdlineconfig/tests/test_kernelcmdlineconfig.py b/repos/system_upgrade/common/actors/kernelcmdlineconfig/tests/test_kernelcmdlineconfig.py
index 3f9b2e5e..ffe4b046 100644
--- a/repos/system_upgrade/common/actors/kernelcmdlineconfig/tests/test_kernelcmdlineconfig.py
+++ b/repos/system_upgrade/common/actors/kernelcmdlineconfig/tests/test_kernelcmdlineconfig.py
@@ -1,7 +1,10 @@
+from __future__ import division
+
from collections import namedtuple
import pytest
+from leapp.exceptions import StopActorExecutionError
from leapp.libraries import stdlib
from leapp.libraries.actor import kernelcmdlineconfig
from leapp.libraries.common.config import architecture
@@ -11,14 +14,44 @@ from leapp.models import InstalledTargetKernelInfo, KernelCmdlineArg, TargetKern
TARGET_KERNEL_NEVRA = 'kernel-core-1.2.3-4.x86_64.el8.x64_64'
+# pylint: disable=E501
+SAMPLE_KERNEL_ARGS = ('ro rootflags=subvol=root'
+ ' resume=/dev/mapper/luks-2c0df999-81ec-4a35-a1f9-b93afee8c6ad'
+ ' rd.luks.uuid=luks-90a6412f-c588-46ca-9118-5aca35943d25'
+ ' rd.luks.uuid=luks-2c0df999-81ec-4a35-a1f9-b93afee8c6ad rhgb quiet'
+ )
+SAMPLE_KERNEL_ROOT = 'UUID=1aa15850-2685-418d-95a6-f7266a2de83a'
+TEMPLATE_GRUBBY_INFO_OUTPUT = """index=0
+kernel="/boot/vmlinuz-6.5.13-100.fc37.x86_64"
+args="{0}"
+root="{1}"
+initrd="/boot/initramfs-6.5.13-100.fc37.x86_64.img"
+title="Fedora Linux (6.5.13-100.fc37.x86_64) 37 (Thirty Seven)"
+id="a3018267cdd8451db7c77bb3e5b1403d-6.5.13-100.fc37.x86_64"
+""" # noqa: E501
+SAMPLE_GRUBBY_INFO_OUTPUT = TEMPLATE_GRUBBY_INFO_OUTPUT.format(SAMPLE_KERNEL_ARGS, SAMPLE_KERNEL_ROOT)
+# pylint: enable=E501
+
class MockedRun(object):
- def __init__(self):
+ def __init__(self, outputs=None):
+ """
+ Mock stdlib.run().
+
+ If outputs is given, it is a dictionary mapping a cmd to output as stdout.
+ """
self.commands = []
+ self.outputs = outputs or {}
def __call__(self, cmd, *args, **kwargs):
self.commands.append(cmd)
- return {}
+ return {
+ "stdout": self.outputs.get(" ".join(cmd), ""),
+ "stderr": "",
+ "signal": None,
+ "exit_code": 0,
+ "pid": 1234,
+ }
@pytest.mark.parametrize(
@@ -50,7 +83,7 @@ class MockedRun(object):
),
]
)
-def test_kernelcmdline_config_valid_msgs(monkeypatch, msgs, expected_grubby_kernelopt_args):
+def test_kernelcmdline_config_valid_msgs(monkeypatch, tmpdir, msgs, expected_grubby_kernelopt_args):
kernel_img_path = '/boot/vmlinuz-X'
kernel_info = InstalledTargetKernelInfo(pkg_nevra=TARGET_KERNEL_NEVRA,
uname_r='',
@@ -61,18 +94,28 @@ def test_kernelcmdline_config_valid_msgs(monkeypatch, msgs, expected_grubby_kern
grubby_base_cmd = ['grubby', '--update-kernel={}'.format(kernel_img_path)]
expected_grubby_cmd = grubby_base_cmd + expected_grubby_kernelopt_args
- mocked_run = MockedRun()
+ mocked_run = MockedRun(
+ outputs={" ".join(("grubby", "--info", kernel_img_path)): SAMPLE_GRUBBY_INFO_OUTPUT}
+ )
monkeypatch.setattr(stdlib, 'run', mocked_run)
- monkeypatch.setattr(api, 'current_actor', CurrentActorMocked(architecture.ARCH_X86_64, msgs=msgs))
+ monkeypatch.setattr(api, 'current_actor',
+ CurrentActorMocked(architecture.ARCH_X86_64,
+ dst_ver="8.1",
+ msgs=msgs)
+ )
kernelcmdlineconfig.modify_kernel_args_in_boot_cfg()
- assert mocked_run.commands and len(mocked_run.commands) == 1
- assert expected_grubby_cmd == mocked_run.commands.pop()
+ assert mocked_run.commands and len(mocked_run.commands) == 3
+ assert expected_grubby_cmd == mocked_run.commands.pop(0)
- mocked_run = MockedRun()
+ mocked_run = MockedRun(
+ outputs={" ".join(("grubby", "--info", kernel_img_path)): SAMPLE_GRUBBY_INFO_OUTPUT}
+ )
monkeypatch.setattr(stdlib, 'run', mocked_run)
monkeypatch.setattr(api, 'current_actor', CurrentActorMocked(architecture.ARCH_S390X, msgs=msgs))
+ monkeypatch.setattr(kernelcmdlineconfig, 'KERNEL_CMDLINE_FILE', str(tmpdir / 'cmdline'))
+
kernelcmdlineconfig.modify_kernel_args_in_boot_cfg()
- assert mocked_run.commands and len(mocked_run.commands) == 2
+ assert mocked_run.commands and len(mocked_run.commands) == 3
assert expected_grubby_cmd == mocked_run.commands.pop(0)
assert ['/usr/sbin/zipl'] == mocked_run.commands.pop(0)
@@ -86,9 +129,17 @@ def test_kernelcmdline_explicit_configs(monkeypatch):
initramfs_path='/boot/initramfs-X')
msgs = [kernel_info, TargetKernelCmdlineArgTasks(to_remove=[KernelCmdlineArg(key='key1', value='value1')])]
- mocked_run = MockedRun()
+ grubby_cmd_info = ["grubby", "--info", kernel_img_path]
+ mocked_run = MockedRun(
+ outputs={" ".join(grubby_cmd_info): SAMPLE_GRUBBY_INFO_OUTPUT}
+ )
monkeypatch.setattr(stdlib, 'run', mocked_run)
- monkeypatch.setattr(api, 'current_actor', CurrentActorMocked(architecture.ARCH_X86_64, msgs=msgs))
+ monkeypatch.setattr(api, 'current_actor',
+ CurrentActorMocked(architecture.ARCH_X86_64,
+ dst_ver="8.1",
+ msgs=msgs
+ )
+ )
configs = ['/boot/grub2/grub.cfg', '/boot/efi/EFI/redhat/grub.cfg']
kernelcmdlineconfig.modify_kernel_args_in_boot_cfg(configs_to_modify_explicitly=configs)
@@ -97,19 +148,27 @@ def test_kernelcmdline_explicit_configs(monkeypatch):
'--remove-args', 'key1=value1']
expected_cmds = [
grubby_cmd_without_config + ['-c', '/boot/grub2/grub.cfg'],
- grubby_cmd_without_config + ['-c', '/boot/efi/EFI/redhat/grub.cfg']
+ grubby_cmd_without_config + ['-c', '/boot/efi/EFI/redhat/grub.cfg'],
+ grubby_cmd_info,
+ ["grub2-editenv", "-", "set", "kernelopts=root={} {}".format(
+ SAMPLE_KERNEL_ROOT, SAMPLE_KERNEL_ARGS)],
]
assert mocked_run.commands == expected_cmds
def test_kernelcmdline_config_no_args(monkeypatch):
+ kernel_img_path = '/boot/vmlinuz-X'
kernel_info = InstalledTargetKernelInfo(pkg_nevra=TARGET_KERNEL_NEVRA,
uname_r='',
- kernel_img_path='/boot/vmlinuz-X',
+ kernel_img_path=kernel_img_path,
initramfs_path='/boot/initramfs-X')
- mocked_run = MockedRun()
+ mocked_run = MockedRun(
+ outputs={" ".join(("grubby", "--info", kernel_img_path)):
+ TEMPLATE_GRUBBY_INFO_OUTPUT.format("", "")
+ }
+ )
monkeypatch.setattr(stdlib, 'run', mocked_run)
monkeypatch.setattr(api, 'current_actor', CurrentActorMocked(architecture.ARCH_S390X, msgs=[kernel_info]))
kernelcmdlineconfig.modify_kernel_args_in_boot_cfg()
@@ -122,3 +181,32 @@ def test_kernelcmdline_config_no_version(monkeypatch):
monkeypatch.setattr(api, 'current_actor', CurrentActorMocked(architecture.ARCH_S390X))
kernelcmdlineconfig.modify_kernel_args_in_boot_cfg()
assert not mocked_run.commands
+
+
+def test_kernelcmdline_config_malformed_args(monkeypatch):
+ kernel_img_path = '/boot/vmlinuz-X'
+ kernel_info = InstalledTargetKernelInfo(pkg_nevra=TARGET_KERNEL_NEVRA,
+ uname_r='',
+ kernel_img_path=kernel_img_path,
+ initramfs_path='/boot/initramfs-X')
+
+ # For this test, we need to check we get the proper error if grubby --info
+ # doesn't output any args information at all.
+ grubby_info_output = "\n".join(line for line in SAMPLE_GRUBBY_INFO_OUTPUT.splitlines()
+ if not line.startswith("args="))
+ mocked_run = MockedRun(
+ outputs={" ".join(("grubby", "--info", kernel_img_path)): grubby_info_output,
+ }
+ )
+ msgs = [kernel_info,
+ TargetKernelCmdlineArgTasks(to_remove=[
+ KernelCmdlineArg(key='key1', value='value1')])
+ ]
+ monkeypatch.setattr(stdlib, 'run', mocked_run)
+ monkeypatch.setattr(api, 'current_actor',
+ CurrentActorMocked(architecture.ARCH_S390X, msgs=msgs))
+
+ with pytest.raises(kernelcmdlineconfig.ReadOfKernelArgsError,
+ match="Failed to retrieve kernel command line to save for future"
+ " installed kernels."):
+ kernelcmdlineconfig.modify_kernel_args_in_boot_cfg()
--
2.42.0
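
Editor's note: the key parsing step in the patch above is stripping the optional quotes grubby puts around values in its --info output. A minimal standalone sketch of that helper, runnable outside leapp (sample values adapted from the test data above):

import re

def extract_grubby_value(record):
    # 'args="ro quiet"' -> 'ro quiet'; handles single quotes, double quotes, or none.
    data = record.split('=', 1)[1]
    return re.match(r'^([\'"]?)(.*)\1$', data).group(2)

print(extract_grubby_value('args="ro rootflags=subvol=root rhgb quiet"'))
print(extract_grubby_value('root="UUID=1aa15850-2685-418d-95a6-f7266a2de83a"'))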

View File

@ -0,0 +1,30 @@
From 0869ab168780f4fa10f37f34aaf51342a88c0e5c Mon Sep 17 00:00:00 2001
From: Petr Stodulka <pstodulk@redhat.com>
Date: Tue, 30 Apr 2024 16:42:26 +0200
Subject: [PATCH 24/34] kernelcmdlineconfig: add newline in /etc/kernel/cmdline
The previous solution created the file without adding a newline
at the end of the file. It worked, but to stay
on the safe side, add the expected trailing newline.
Jira: RHEL-26840, OAMG-10424
---
.../actors/kernelcmdlineconfig/libraries/kernelcmdlineconfig.py | 2 ++
1 file changed, 2 insertions(+)
diff --git a/repos/system_upgrade/common/actors/kernelcmdlineconfig/libraries/kernelcmdlineconfig.py b/repos/system_upgrade/common/actors/kernelcmdlineconfig/libraries/kernelcmdlineconfig.py
index ad59eb22..238a8aa6 100644
--- a/repos/system_upgrade/common/actors/kernelcmdlineconfig/libraries/kernelcmdlineconfig.py
+++ b/repos/system_upgrade/common/actors/kernelcmdlineconfig/libraries/kernelcmdlineconfig.py
@@ -57,6 +57,8 @@ def set_default_kernel_args(kernel_args):
# Put kernel_args into /etc/kernel/cmdline
with open(KERNEL_CMDLINE_FILE, 'w') as f:
f.write(kernel_args)
+ # new line is expected in the EOF (POSIX).
+ f.write('\n')
else:
# Use grub2-editenv to put the kernel args into /boot/grub2/grubenv
stdlib.run(['grub2-editenv', '-', 'set', 'kernelopts={}'.format(kernel_args)])
--
2.42.0

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -0,0 +1,110 @@
From 88126ef33db2094b89fc17ad9e9a3962a9bb7d65 Mon Sep 17 00:00:00 2001
From: Evgeni Golov <evgeni@golov.de>
Date: Mon, 19 Feb 2024 12:09:05 +0100
Subject: [PATCH 27/34] Move common Satellite Upgrade code to "common"
This allows the re-use of the code in the el8toel9 upgrade.
---
.../satellite_upgrade_services/actor.py | 35 +++++++++++++++++++
.../actors/satellite_upgrader/actor.py | 0
.../tests/unit_test_satellite_upgrader.py | 0
.../{el7toel8 => common}/models/satellite.py | 0
.../satellite_upgrade_data_migration/actor.py | 15 +-------
5 files changed, 36 insertions(+), 14 deletions(-)
create mode 100644 repos/system_upgrade/common/actors/satellite_upgrade_services/actor.py
rename repos/system_upgrade/{el7toel8 => common}/actors/satellite_upgrader/actor.py (100%)
rename repos/system_upgrade/{el7toel8 => common}/actors/satellite_upgrader/tests/unit_test_satellite_upgrader.py (100%)
rename repos/system_upgrade/{el7toel8 => common}/models/satellite.py (100%)
diff --git a/repos/system_upgrade/common/actors/satellite_upgrade_services/actor.py b/repos/system_upgrade/common/actors/satellite_upgrade_services/actor.py
new file mode 100644
index 00000000..3cda49a9
--- /dev/null
+++ b/repos/system_upgrade/common/actors/satellite_upgrade_services/actor.py
@@ -0,0 +1,35 @@
+import glob
+import os
+
+from leapp.actors import Actor
+from leapp.models import SatelliteFacts
+from leapp.tags import ApplicationsPhaseTag, IPUWorkflowTag
+
+SYSTEMD_WANTS_BASE = '/etc/systemd/system/multi-user.target.wants/'
+SERVICES_TO_DISABLE = ['dynflow-sidekiq@*', 'foreman', 'foreman-proxy',
+ 'httpd', 'postgresql', 'pulpcore-api', 'pulpcore-content',
+ 'pulpcore-worker@*', 'tomcat', 'redis']
+
+
+class SatelliteUpgradeServices(Actor):
+ """
+ Reconfigure Satellite services
+ """
+
+ name = 'satellite_upgrade_services'
+ consumes = (SatelliteFacts,)
+ produces = ()
+ tags = (IPUWorkflowTag, ApplicationsPhaseTag)
+
+ def process(self):
+ facts = next(self.consume(SatelliteFacts), None)
+ if not facts or not facts.has_foreman:
+ return
+
+ # disable services, will be re-enabled by the installer
+ for service_name in SERVICES_TO_DISABLE:
+ for service in glob.glob(os.path.join(SYSTEMD_WANTS_BASE, '{}.service'.format(service_name))):
+ try:
+ os.unlink(service)
+ except OSError as e:
+ self.log.warning('Failed disabling service {}: {}'.format(service, e))
diff --git a/repos/system_upgrade/el7toel8/actors/satellite_upgrader/actor.py b/repos/system_upgrade/common/actors/satellite_upgrader/actor.py
similarity index 100%
rename from repos/system_upgrade/el7toel8/actors/satellite_upgrader/actor.py
rename to repos/system_upgrade/common/actors/satellite_upgrader/actor.py
diff --git a/repos/system_upgrade/el7toel8/actors/satellite_upgrader/tests/unit_test_satellite_upgrader.py b/repos/system_upgrade/common/actors/satellite_upgrader/tests/unit_test_satellite_upgrader.py
similarity index 100%
rename from repos/system_upgrade/el7toel8/actors/satellite_upgrader/tests/unit_test_satellite_upgrader.py
rename to repos/system_upgrade/common/actors/satellite_upgrader/tests/unit_test_satellite_upgrader.py
diff --git a/repos/system_upgrade/el7toel8/models/satellite.py b/repos/system_upgrade/common/models/satellite.py
similarity index 100%
rename from repos/system_upgrade/el7toel8/models/satellite.py
rename to repos/system_upgrade/common/models/satellite.py
diff --git a/repos/system_upgrade/el7toel8/actors/satellite_upgrade_data_migration/actor.py b/repos/system_upgrade/el7toel8/actors/satellite_upgrade_data_migration/actor.py
index 0cf66970..1dd52691 100644
--- a/repos/system_upgrade/el7toel8/actors/satellite_upgrade_data_migration/actor.py
+++ b/repos/system_upgrade/el7toel8/actors/satellite_upgrade_data_migration/actor.py
@@ -11,15 +11,10 @@ POSTGRESQL_SCL_DATA_PATH = '/var/opt/rh/rh-postgresql12/lib/pgsql/data/'
POSTGRESQL_USER = 'postgres'
POSTGRESQL_GROUP = 'postgres'
-SYSTEMD_WANTS_BASE = '/etc/systemd/system/multi-user.target.wants/'
-SERVICES_TO_DISABLE = ['dynflow-sidekiq@*', 'foreman', 'foreman-proxy',
- 'httpd', 'postgresql', 'pulpcore-api', 'pulpcore-content',
- 'pulpcore-worker@*', 'tomcat']
-
class SatelliteUpgradeDataMigration(Actor):
"""
- Reconfigure Satellite services and migrate PostgreSQL data
+ Migrate Satellite PostgreSQL data
"""
name = 'satellite_upgrade_data_migration'
@@ -32,14 +27,6 @@ class SatelliteUpgradeDataMigration(Actor):
if not facts or not facts.has_foreman:
return
- # disable services, will be re-enabled by the installer
- for service_name in SERVICES_TO_DISABLE:
- for service in glob.glob(os.path.join(SYSTEMD_WANTS_BASE, '{}.service'.format(service_name))):
- try:
- os.unlink(service)
- except Exception as e: # pylint: disable=broad-except
- self.log.warning('Failed disabling service {}: {}'.format(service, e))
-
if facts.postgresql.local_postgresql and os.path.exists(POSTGRESQL_SCL_DATA_PATH):
# we can assume POSTGRESQL_DATA_PATH exists and is empty
# move PostgreSQL data to the new home
--
2.42.0

View File

@ -0,0 +1,252 @@
From 0f70dbf229c04b5374d767eeab25ad3fa32e0d8f Mon Sep 17 00:00:00 2001
From: Evgeni Golov <evgeni@golov.de>
Date: Mon, 19 Feb 2024 12:23:28 +0100
Subject: [PATCH 28/34] Add el8toel9 upgrade facts for Satellite
This adds the el8toel9-specific fact scanner/generator for Satellite
upgrades. The facts produced by this actor are what drive the actual
upgrade actions.
---
.../actors/satellite_upgrade_facts/actor.py | 71 ++++++++
.../unit_test_satellite_upgrade_facts.py | 151 ++++++++++++++++++
2 files changed, 222 insertions(+)
create mode 100644 repos/system_upgrade/el8toel9/actors/satellite_upgrade_facts/actor.py
create mode 100644 repos/system_upgrade/el8toel9/actors/satellite_upgrade_facts/tests/unit_test_satellite_upgrade_facts.py
diff --git a/repos/system_upgrade/el8toel9/actors/satellite_upgrade_facts/actor.py b/repos/system_upgrade/el8toel9/actors/satellite_upgrade_facts/actor.py
new file mode 100644
index 00000000..2dc78cce
--- /dev/null
+++ b/repos/system_upgrade/el8toel9/actors/satellite_upgrade_facts/actor.py
@@ -0,0 +1,71 @@
+from leapp.actors import Actor
+from leapp.libraries.common.config import architecture
+from leapp.libraries.common.rpms import has_package
+from leapp.models import (
+ InstalledRPM,
+ RepositoriesSetupTasks,
+ RpmTransactionTasks,
+ SatelliteFacts,
+ SatellitePostgresqlFacts,
+ UsedRepositories
+)
+from leapp.tags import FactsPhaseTag, IPUWorkflowTag
+
+RELATED_PACKAGES = ('foreman', 'foreman-selinux', 'foreman-proxy', 'katello', 'katello-selinux',
+ 'candlepin', 'candlepin-selinux', 'pulpcore-selinux', 'satellite', 'satellite-capsule')
+RELATED_PACKAGE_PREFIXES = ('rubygem-hammer', 'rubygem-foreman', 'rubygem-katello',
+ 'rubygem-smart_proxy', 'python3.11-pulp', 'foreman-installer',
+ 'satellite-installer')
+
+
+class SatelliteUpgradeFacts(Actor):
+ """
+ Report which Satellite packages require updates and how to handle PostgreSQL data
+ """
+
+ name = 'satellite_upgrade_facts'
+ consumes = (InstalledRPM, UsedRepositories)
+ produces = (RepositoriesSetupTasks, RpmTransactionTasks, SatelliteFacts)
+ tags = (IPUWorkflowTag, FactsPhaseTag)
+
+ def process(self):
+ if not architecture.matches_architecture(architecture.ARCH_X86_64):
+ return
+
+ has_foreman = has_package(InstalledRPM, 'foreman') or has_package(InstalledRPM, 'foreman-proxy')
+ if not has_foreman:
+ return
+
+ local_postgresql = has_package(InstalledRPM, 'postgresql-server')
+
+ to_install = ['rubygem-foreman_maintain']
+
+ for rpm_pkgs in self.consume(InstalledRPM):
+ for pkg in rpm_pkgs.items:
+ if pkg.name in RELATED_PACKAGES or pkg.name.startswith(RELATED_PACKAGE_PREFIXES):
+ to_install.append(pkg.name)
+
+ if local_postgresql:
+ to_install.extend(['postgresql', 'postgresql-server'])
+ if has_package(InstalledRPM, 'postgresql-contrib'):
+ to_install.append('postgresql-contrib')
+ if has_package(InstalledRPM, 'postgresql-evr'):
+ to_install.append('postgresql-evr')
+
+ self.produce(SatelliteFacts(
+ has_foreman=has_foreman,
+ has_katello_installer=False,
+ postgresql=SatellitePostgresqlFacts(
+ local_postgresql=local_postgresql,
+ ),
+ ))
+
+ repositories_to_enable = []
+ for used_repos in self.consume(UsedRepositories):
+ for used_repo in used_repos.repositories:
+ if used_repo.repository.startswith(('satellite-6', 'satellite-capsule-6', 'satellite-maintenance-6')):
+ repositories_to_enable.append(used_repo.repository.replace('for-rhel-8', 'for-rhel-9'))
+ if repositories_to_enable:
+ self.produce(RepositoriesSetupTasks(to_enable=repositories_to_enable))
+
+ self.produce(RpmTransactionTasks(to_install=to_install))
diff --git a/repos/system_upgrade/el8toel9/actors/satellite_upgrade_facts/tests/unit_test_satellite_upgrade_facts.py b/repos/system_upgrade/el8toel9/actors/satellite_upgrade_facts/tests/unit_test_satellite_upgrade_facts.py
new file mode 100644
index 00000000..b0e44c46
--- /dev/null
+++ b/repos/system_upgrade/el8toel9/actors/satellite_upgrade_facts/tests/unit_test_satellite_upgrade_facts.py
@@ -0,0 +1,151 @@
+from leapp.libraries.common.config import mock_configs
+from leapp.models import (
+ InstalledRPM,
+ RepositoriesSetupTasks,
+ RPM,
+ RpmTransactionTasks,
+ SatelliteFacts,
+ UsedRepositories,
+ UsedRepository
+)
+
+RH_PACKAGER = 'Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>'
+
+
+def fake_package(pkg_name):
+ return RPM(name=pkg_name, version='0.1', release='1.sm01', epoch='1', packager=RH_PACKAGER, arch='noarch',
+ pgpsig='RSA/SHA256, Mon 01 Jan 1970 00:00:00 AM -03, Key ID 199e2f91fd431d51')
+
+
+FOREMAN_RPM = fake_package('foreman')
+FOREMAN_PROXY_RPM = fake_package('foreman-proxy')
+KATELLO_INSTALLER_RPM = fake_package('foreman-installer-katello')
+KATELLO_RPM = fake_package('katello')
+RUBYGEM_KATELLO_RPM = fake_package('rubygem-katello')
+RUBYGEM_FOREMAN_PUPPET_RPM = fake_package('rubygem-foreman_puppet')
+POSTGRESQL_RPM = fake_package('postgresql-server')
+SATELLITE_RPM = fake_package('satellite')
+SATELLITE_CAPSULE_RPM = fake_package('satellite-capsule')
+
+SATELLITE_REPOSITORY = UsedRepository(repository='satellite-6.99-for-rhel-8-x86_64-rpms')
+CAPSULE_REPOSITORY = UsedRepository(repository='satellite-capsule-6.99-for-rhel-8-x86_64-rpms')
+MAINTENANCE_REPOSITORY = UsedRepository(repository='satellite-maintenance-6.99-for-rhel-8-x86_64-rpms')
+
+
+def test_no_satellite_present(current_actor_context):
+ current_actor_context.feed(InstalledRPM(items=[]))
+ current_actor_context.run(config_model=mock_configs.CONFIG)
+ message = current_actor_context.consume(SatelliteFacts)
+ assert not message
+
+
+def test_satellite_present(current_actor_context):
+ current_actor_context.feed(InstalledRPM(items=[FOREMAN_RPM]))
+ current_actor_context.run(config_model=mock_configs.CONFIG)
+ message = current_actor_context.consume(SatelliteFacts)[0]
+ assert message.has_foreman
+
+
+def test_wrong_arch(current_actor_context):
+ current_actor_context.feed(InstalledRPM(items=[FOREMAN_RPM]))
+ current_actor_context.run(config_model=mock_configs.CONFIG_S390X)
+ message = current_actor_context.consume(SatelliteFacts)
+ assert not message
+
+
+def test_satellite_capsule_present(current_actor_context):
+ current_actor_context.feed(InstalledRPM(items=[FOREMAN_PROXY_RPM]))
+ current_actor_context.run(config_model=mock_configs.CONFIG)
+ message = current_actor_context.consume(SatelliteFacts)[0]
+ assert message.has_foreman
+
+
+def test_no_katello_installer_present(current_actor_context):
+ current_actor_context.feed(InstalledRPM(items=[FOREMAN_RPM]))
+ current_actor_context.run(config_model=mock_configs.CONFIG)
+ message = current_actor_context.consume(SatelliteFacts)[0]
+ assert not message.has_katello_installer
+
+
+def test_katello_installer_present(current_actor_context):
+ current_actor_context.feed(InstalledRPM(items=[FOREMAN_RPM, KATELLO_INSTALLER_RPM]))
+ current_actor_context.run(config_model=mock_configs.CONFIG)
+ message = current_actor_context.consume(SatelliteFacts)[0]
+ # while the katello installer rpm is present, we do not want this to be true
+ # as the version in EL8 doesn't have the system checks we skip with this flag
+ assert not message.has_katello_installer
+
+
+def test_installs_related_package(current_actor_context):
+ current_actor_context.feed(InstalledRPM(items=[FOREMAN_RPM, KATELLO_RPM, RUBYGEM_KATELLO_RPM,
+ RUBYGEM_FOREMAN_PUPPET_RPM]))
+ current_actor_context.run(config_model=mock_configs.CONFIG)
+ message = current_actor_context.consume(RpmTransactionTasks)[0]
+ assert 'katello' in message.to_install
+ assert 'rubygem-katello' in message.to_install
+ assert 'rubygem-foreman_puppet' in message.to_install
+
+
+def test_installs_satellite_package(current_actor_context):
+ current_actor_context.feed(InstalledRPM(items=[FOREMAN_RPM, SATELLITE_RPM]))
+ current_actor_context.run(config_model=mock_configs.CONFIG)
+ message = current_actor_context.consume(RpmTransactionTasks)[0]
+ assert 'satellite' in message.to_install
+ assert 'satellite-capsule' not in message.to_install
+
+
+def test_installs_satellite_capsule_package(current_actor_context):
+ current_actor_context.feed(InstalledRPM(items=[FOREMAN_PROXY_RPM, SATELLITE_CAPSULE_RPM]))
+ current_actor_context.run(config_model=mock_configs.CONFIG)
+ message = current_actor_context.consume(RpmTransactionTasks)[0]
+ assert 'satellite-capsule' in message.to_install
+ assert 'satellite' not in message.to_install
+
+
+def test_detects_local_postgresql(current_actor_context):
+ current_actor_context.feed(InstalledRPM(items=[FOREMAN_RPM, POSTGRESQL_RPM]))
+ current_actor_context.run(config_model=mock_configs.CONFIG)
+
+ satellitemsg = current_actor_context.consume(SatelliteFacts)[0]
+ assert satellitemsg.postgresql.local_postgresql
+
+
+def test_detects_remote_postgresql(current_actor_context):
+ current_actor_context.feed(InstalledRPM(items=[FOREMAN_RPM]))
+ current_actor_context.run(config_model=mock_configs.CONFIG)
+
+ satellitemsg = current_actor_context.consume(SatelliteFacts)[0]
+ assert not satellitemsg.postgresql.local_postgresql
+
+
+def test_enables_right_repositories_on_satellite(current_actor_context):
+ current_actor_context.feed(InstalledRPM(items=[FOREMAN_RPM, SATELLITE_RPM]))
+ current_actor_context.feed(UsedRepositories(repositories=[SATELLITE_REPOSITORY, MAINTENANCE_REPOSITORY]))
+ current_actor_context.run(config_model=mock_configs.CONFIG)
+
+ rpmmessage = current_actor_context.consume(RepositoriesSetupTasks)[0]
+
+ assert 'satellite-maintenance-6.99-for-rhel-9-x86_64-rpms' in rpmmessage.to_enable
+ assert 'satellite-6.99-for-rhel-9-x86_64-rpms' in rpmmessage.to_enable
+ assert 'satellite-capsule-6.99-for-rhel-9-x86_64-rpms' not in rpmmessage.to_enable
+
+
+def test_enables_right_repositories_on_capsule(current_actor_context):
+ current_actor_context.feed(InstalledRPM(items=[FOREMAN_PROXY_RPM, SATELLITE_CAPSULE_RPM]))
+ current_actor_context.feed(UsedRepositories(repositories=[CAPSULE_REPOSITORY, MAINTENANCE_REPOSITORY]))
+ current_actor_context.run(config_model=mock_configs.CONFIG)
+
+ rpmmessage = current_actor_context.consume(RepositoriesSetupTasks)[0]
+
+ assert 'satellite-maintenance-6.99-for-rhel-9-x86_64-rpms' in rpmmessage.to_enable
+ assert 'satellite-6.99-for-rhel-9-x86_64-rpms' not in rpmmessage.to_enable
+ assert 'satellite-capsule-6.99-for-rhel-9-x86_64-rpms' in rpmmessage.to_enable
+
+
+def test_enables_right_repositories_on_upstream(current_actor_context):
+ current_actor_context.feed(InstalledRPM(items=[FOREMAN_RPM]))
+ current_actor_context.run(config_model=mock_configs.CONFIG)
+
+ message = current_actor_context.consume(RepositoriesSetupTasks)
+
+ assert not message
--
2.42.0

View File

@ -0,0 +1,112 @@
From 720bb13c5eb411469e8bf825b93aeefdc771f039 Mon Sep 17 00:00:00 2001
From: Evgeni Golov <evgeni@golov.de>
Date: Wed, 24 Apr 2024 13:58:39 +0200
Subject: [PATCH 29/34] Refresh collation version if pulp-ansible is present
When migrating to a new OS, we REINDEX all databases.
pulp_ansible ships with its own collation (using ICU), which needs a
version refresh after the REINDEX has been done.
---
.../common/actors/satellite_upgrader/actor.py | 4 ++++
.../tests/unit_test_satellite_upgrader.py | 18 ++++++++++++++++++
.../system_upgrade/common/models/satellite.py | 2 ++
.../actors/satellite_upgrade_facts/actor.py | 1 +
.../tests/unit_test_satellite_upgrade_facts.py | 9 +++++++++
5 files changed, 34 insertions(+)
diff --git a/repos/system_upgrade/common/actors/satellite_upgrader/actor.py b/repos/system_upgrade/common/actors/satellite_upgrader/actor.py
index f498f2fa..2e0290ae 100644
--- a/repos/system_upgrade/common/actors/satellite_upgrader/actor.py
+++ b/repos/system_upgrade/common/actors/satellite_upgrader/actor.py
@@ -25,6 +25,10 @@ class SatelliteUpgrader(Actor):
run(['sed', '-i', '/data_directory/d', '/var/lib/pgsql/data/postgresql.conf'])
run(['systemctl', 'start', 'postgresql'])
run(['runuser', '-u', 'postgres', '--', 'reindexdb', '-a'])
+ if facts.postgresql.has_pulp_ansible_semver:
+ run(['runuser', '-c',
+ 'echo "ALTER COLLATION pulp_ansible_semver REFRESH VERSION;" | psql pulpcore',
+ 'postgres'])
except CalledProcessError as e:
api.current_logger().error('Failed to reindex the database: {}'.format(str(e)))
diff --git a/repos/system_upgrade/common/actors/satellite_upgrader/tests/unit_test_satellite_upgrader.py b/repos/system_upgrade/common/actors/satellite_upgrader/tests/unit_test_satellite_upgrader.py
index 2f3509f3..55896c75 100644
--- a/repos/system_upgrade/common/actors/satellite_upgrader/tests/unit_test_satellite_upgrader.py
+++ b/repos/system_upgrade/common/actors/satellite_upgrader/tests/unit_test_satellite_upgrader.py
@@ -48,3 +48,21 @@ def test_run_reindexdb(monkeypatch, current_actor_context):
assert mocked_run.commands[1] == ['systemctl', 'start', 'postgresql']
assert mocked_run.commands[2] == ['runuser', '-u', 'postgres', '--', 'reindexdb', '-a']
assert mocked_run.commands[3] == ['foreman-installer', '--disable-system-checks']
+
+
+def test_run_reindexdb_with_pulp_ansible(monkeypatch, current_actor_context):
+ mocked_run = MockedRun()
+ monkeypatch.setattr('leapp.libraries.stdlib.run', mocked_run)
+ current_actor_context.feed(SatelliteFacts(has_foreman=True,
+ postgresql=SatellitePostgresqlFacts(local_postgresql=True,
+ has_pulp_ansible_semver=True)))
+ current_actor_context.run()
+ assert mocked_run.commands
+ assert len(mocked_run.commands) == 5
+ assert mocked_run.commands[0] == ['sed', '-i', '/data_directory/d', '/var/lib/pgsql/data/postgresql.conf']
+ assert mocked_run.commands[1] == ['systemctl', 'start', 'postgresql']
+ assert mocked_run.commands[2] == ['runuser', '-u', 'postgres', '--', 'reindexdb', '-a']
+ assert mocked_run.commands[3] == ['runuser', '-c',
+ 'echo "ALTER COLLATION pulp_ansible_semver REFRESH VERSION;" | psql pulpcore',
+ 'postgres']
+ assert mocked_run.commands[4] == ['foreman-installer', '--disable-system-checks']
diff --git a/repos/system_upgrade/common/models/satellite.py b/repos/system_upgrade/common/models/satellite.py
index b4282790..532f6a3a 100644
--- a/repos/system_upgrade/common/models/satellite.py
+++ b/repos/system_upgrade/common/models/satellite.py
@@ -15,6 +15,8 @@ class SatellitePostgresqlFacts(Model):
""" How many bytes are required on the target partition """
space_available = fields.Nullable(fields.Integer())
""" How many bytes are available on the target partition """
+ has_pulp_ansible_semver = fields.Boolean(default=False)
+ """ Whether the DB has the pulp_ansible_semver collation """
class SatelliteFacts(Model):
diff --git a/repos/system_upgrade/el8toel9/actors/satellite_upgrade_facts/actor.py b/repos/system_upgrade/el8toel9/actors/satellite_upgrade_facts/actor.py
index 2dc78cce..46612876 100644
--- a/repos/system_upgrade/el8toel9/actors/satellite_upgrade_facts/actor.py
+++ b/repos/system_upgrade/el8toel9/actors/satellite_upgrade_facts/actor.py
@@ -57,6 +57,7 @@ class SatelliteUpgradeFacts(Actor):
has_katello_installer=False,
postgresql=SatellitePostgresqlFacts(
local_postgresql=local_postgresql,
+ has_pulp_ansible_semver=has_package(InstalledRPM, 'python3.11-pulp-ansible'),
),
))
diff --git a/repos/system_upgrade/el8toel9/actors/satellite_upgrade_facts/tests/unit_test_satellite_upgrade_facts.py b/repos/system_upgrade/el8toel9/actors/satellite_upgrade_facts/tests/unit_test_satellite_upgrade_facts.py
index b0e44c46..e7ca512e 100644
--- a/repos/system_upgrade/el8toel9/actors/satellite_upgrade_facts/tests/unit_test_satellite_upgrade_facts.py
+++ b/repos/system_upgrade/el8toel9/actors/satellite_upgrade_facts/tests/unit_test_satellite_upgrade_facts.py
@@ -26,6 +26,7 @@ RUBYGEM_FOREMAN_PUPPET_RPM = fake_package('rubygem-foreman_puppet')
POSTGRESQL_RPM = fake_package('postgresql-server')
SATELLITE_RPM = fake_package('satellite')
SATELLITE_CAPSULE_RPM = fake_package('satellite-capsule')
+PULP_ANSIBLE_RPM = fake_package('python3.11-pulp-ansible')
SATELLITE_REPOSITORY = UsedRepository(repository='satellite-6.99-for-rhel-8-x86_64-rpms')
CAPSULE_REPOSITORY = UsedRepository(repository='satellite-capsule-6.99-for-rhel-8-x86_64-rpms')
@@ -118,6 +119,14 @@ def test_detects_remote_postgresql(current_actor_context):
assert not satellitemsg.postgresql.local_postgresql
+def test_detects_pulp_ansible(current_actor_context):
+ current_actor_context.feed(InstalledRPM(items=[FOREMAN_RPM, POSTGRESQL_RPM, PULP_ANSIBLE_RPM]))
+ current_actor_context.run(config_model=mock_configs.CONFIG)
+
+ satellitemsg = current_actor_context.consume(SatelliteFacts)[0]
+ assert satellitemsg.postgresql.has_pulp_ansible_semver
+
+
def test_enables_right_repositories_on_satellite(current_actor_context):
current_actor_context.feed(InstalledRPM(items=[FOREMAN_RPM, SATELLITE_RPM]))
current_actor_context.feed(UsedRepositories(repositories=[SATELLITE_REPOSITORY, MAINTENANCE_REPOSITORY]))
--
2.42.0
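
Editor's note: in plain terms, the patch runs one extra SQL statement against the pulpcore database once the reindex has finished. A minimal sketch of that step using subprocess directly rather than leapp's run wrapper, mirroring the runuser invocation added above:

import subprocess

REFRESH_SQL = 'ALTER COLLATION pulp_ansible_semver REFRESH VERSION;'

def refresh_pulp_ansible_collation():
    # Run psql against the pulpcore database as the postgres user.
    subprocess.run(
        ['runuser', '-c', 'echo "{0}" | psql pulpcore'.format(REFRESH_SQL), 'postgres'],
        check=True,
    )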

View File

@ -0,0 +1,90 @@
From bad2fb2e446246dc80807b078c80e0c98fd72fe5 Mon Sep 17 00:00:00 2001
From: Evgeni Golov <evgeni@golov.de>
Date: Fri, 26 Apr 2024 09:08:58 +0200
Subject: [PATCH 30/34] Refactor satellite_upgrade_services to use
SystemdServicesTasks
We used to just delete the symlinks in /etc/systemd, but with the new
systemd actors this no longer works: they restore the pre-delete state
because, by default, they aim to keep the service states of the source
and target systems in sync. By producing SystemdServicesTasks we can
explicitly disable those services and inform all interested parties
about it.
---
.../satellite_upgrade_services/actor.py | 15 ++++++------
.../unit_test_satellite_upgrade_services.py | 24 +++++++++++++++++++
2 files changed, 31 insertions(+), 8 deletions(-)
create mode 100644 repos/system_upgrade/common/actors/satellite_upgrade_services/tests/unit_test_satellite_upgrade_services.py
diff --git a/repos/system_upgrade/common/actors/satellite_upgrade_services/actor.py b/repos/system_upgrade/common/actors/satellite_upgrade_services/actor.py
index 3cda49a9..d14edfb7 100644
--- a/repos/system_upgrade/common/actors/satellite_upgrade_services/actor.py
+++ b/repos/system_upgrade/common/actors/satellite_upgrade_services/actor.py
@@ -2,8 +2,8 @@ import glob
import os
from leapp.actors import Actor
-from leapp.models import SatelliteFacts
-from leapp.tags import ApplicationsPhaseTag, IPUWorkflowTag
+from leapp.models import SatelliteFacts, SystemdServicesTasks
+from leapp.tags import FactsPhaseTag, IPUWorkflowTag
SYSTEMD_WANTS_BASE = '/etc/systemd/system/multi-user.target.wants/'
SERVICES_TO_DISABLE = ['dynflow-sidekiq@*', 'foreman', 'foreman-proxy',
@@ -18,8 +18,8 @@ class SatelliteUpgradeServices(Actor):
name = 'satellite_upgrade_services'
consumes = (SatelliteFacts,)
- produces = ()
- tags = (IPUWorkflowTag, ApplicationsPhaseTag)
+ produces = (SystemdServicesTasks,)
+ tags = (IPUWorkflowTag, FactsPhaseTag)
def process(self):
facts = next(self.consume(SatelliteFacts), None)
@@ -27,9 +27,8 @@ class SatelliteUpgradeServices(Actor):
return
# disable services, will be re-enabled by the installer
+ services_to_disable = []
for service_name in SERVICES_TO_DISABLE:
for service in glob.glob(os.path.join(SYSTEMD_WANTS_BASE, '{}.service'.format(service_name))):
- try:
- os.unlink(service)
- except OSError as e:
- self.log.warning('Failed disabling service {}: {}'.format(service, e))
+ services_to_disable.append(os.path.basename(service))
+ self.produce(SystemdServicesTasks(to_disable=services_to_disable))
diff --git a/repos/system_upgrade/common/actors/satellite_upgrade_services/tests/unit_test_satellite_upgrade_services.py b/repos/system_upgrade/common/actors/satellite_upgrade_services/tests/unit_test_satellite_upgrade_services.py
new file mode 100644
index 00000000..f41621ab
--- /dev/null
+++ b/repos/system_upgrade/common/actors/satellite_upgrade_services/tests/unit_test_satellite_upgrade_services.py
@@ -0,0 +1,24 @@
+import glob
+
+from leapp.models import SatelliteFacts, SatellitePostgresqlFacts, SystemdServicesTasks
+
+
+def test_disable_httpd(monkeypatch, current_actor_context):
+ def mock_glob():
+ orig_glob = glob.glob
+
+ def mocked_glob(pathname):
+ if pathname == '/etc/systemd/system/multi-user.target.wants/httpd.service':
+ return [pathname]
+ return orig_glob(pathname)
+
+ return mocked_glob
+
+ monkeypatch.setattr('glob.glob', mock_glob())
+
+ current_actor_context.feed(SatelliteFacts(has_foreman=True,
+ postgresql=SatellitePostgresqlFacts(local_postgresql=False)))
+ current_actor_context.run()
+
+ message = current_actor_context.consume(SystemdServicesTasks)[0]
+ assert 'httpd.service' in message.to_disable
--
2.42.0

View File

@ -0,0 +1,144 @@
From 64e2c58ac3bd97cbb09daf4c861204705c69ec97 Mon Sep 17 00:00:00 2001
From: Petr Stodulka <pstodulk@redhat.com>
Date: Fri, 3 May 2024 14:44:51 +0200
Subject: [PATCH 31/34] mount /usr: Implement try-sleep loop - add time for
storage initialisation
This problem is typical for SAN + FC, where the storage sometimes needs
more time for initialisation. Implement a try-sleep loop:
retry the storage activation and the /usr mounting every 15s.
The loop can be repeated 10 times, so the total time currently allowed
for the activation is 150s.
Note that this is not a proper solution for the storage initialisation;
however, we have discovered obstacles in the bootup process that prevent
us from doing it as correctly as we would like. Given the limited
time, we are going to deliver this solution, which should improve
the experience and should be safe enough not to cause regressions for
already working functionality. We expect to provide a better solution
for newer upgrade paths in the future (IPU 8->9 and newer).
Jira: https://issues.redhat.com/browse/RHEL-3344
---
.../dracut/85sys-upgrade-redhat/mount_usr.sh | 95 +++++++++++++++----
1 file changed, 79 insertions(+), 16 deletions(-)
diff --git a/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/85sys-upgrade-redhat/mount_usr.sh b/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/85sys-upgrade-redhat/mount_usr.sh
index 3c52652f..db065d87 100755
--- a/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/85sys-upgrade-redhat/mount_usr.sh
+++ b/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/85sys-upgrade-redhat/mount_usr.sh
@@ -22,6 +22,18 @@ filtersubvol() {
mount_usr()
{
+ #
+ # mount_usr [true | false]
+ # Expected a "true" value for the last attempt to mount /usr. On the last
+ # attempt, in case of failure drop to shell.
+ #
+ # Return 0 when everything is all right
+ # In case of failure and /usr has been detected:
+ # return 2 when $1 is "true" (drop to shell invoked)
+ # (note: possibly it's nonsense, but to be sure..)
+ # return 1 otherwise
+ #
+ _last_attempt="$1"
# check, if we have to mount the /usr filesystem
while read -r _dev _mp _fs _opts _freq _passno; do
[ "${_dev%%#*}" != "$_dev" ] && continue
@@ -60,25 +72,76 @@ mount_usr()
fi
done < "${NEWROOT}/etc/fstab" >> /etc/fstab
- if [ "$_usr_found" != "" ]; then
- info "Mounting /usr with -o $_opts"
- mount "${NEWROOT}/usr" 2>&1 | vinfo
- mount -o remount,rw "${NEWROOT}/usr"
+ if [ "$_usr_found" = "" ]; then
+ # nothing to do
+ return 0
+ fi
- if ! ismounted "${NEWROOT}/usr"; then
- warn "Mounting /usr to ${NEWROOT}/usr failed"
- warn "*** Dropping you to a shell; the system will continue"
- warn "*** when you leave the shell."
- action_on_fail
- fi
+ info "Mounting /usr with -o $_opts"
+ mount "${NEWROOT}/usr" 2>&1 | vinfo
+ mount -o remount,rw "${NEWROOT}/usr"
+
+ if ismounted "${NEWROOT}/usr"; then
+ # success!!
+ return 0
+ fi
+
+ if [ "$_last_attempt" = "true" ]; then
+ warn "Mounting /usr to ${NEWROOT}/usr failed"
+ warn "*** Dropping you to a shell; the system will continue"
+ warn "*** when you leave the shell."
+ action_on_fail
+ return 2
fi
+
+ return 1
}
-if [ -f "${NEWROOT}/etc/fstab" ]; then
- # In case we have the LVM command available try make it activate all partitions
- if command -v lvm 2>/dev/null 1>/dev/null; then
- lvm vgchange -a y
+
+try_to_mount_usr() {
+ _last_attempt="$1"
+ if [ ! -f "${NEWROOT}/etc/fstab" ]; then
+ warn "File ${NEWROOT}/etc/fstab doesn't exist."
+ return 1
+ fi
+
+ # In case we have the LVM command available try make it activate all partitions
+ if command -v lvm 2>/dev/null 1>/dev/null; then
+ lvm vgchange -a y || {
+ warn "Detected problem when tried to activate LVM VG."
+ if [ "$_last_attempt" != "true" ]; then
+ # this is not last execution, retry
+ return 1
+ fi
+ # NOTE(pstodulk):
+ # last execution, so call mount_usr anyway
+ # I am not 100% about lvm vgchange exit codes and I am aware of
+ # possible warnings, in this last run, let's keep it on mount_usr
+ # anyway..
+ }
+ fi
+
+ mount_usr "$1"
+}
+
+_sleep_timeout=15
+_last_attempt="false"
+for i in 0 1 2 3 4 5 6 7 8 9 10 11; do
+ if [ $i -eq 11 ]; then
+ _last_attempt="true"
fi
+ try_to_mount_usr "$_last_attempt" && break
+
+ # something is wrong. In some cases, storage needs more time for the
+ # initialisation - especially in case of SAN.
+
+ if [ "$_last_attempt" = "true" ]; then
+ warn "The last attempt to initialize storage has not been successful."
+ warn "Unknown state of the storage. It is possible that upgrade will be stopped."
+ break
+ fi
+
+ warn "Failed attempt to initialize the storage. Retry in $_sleep_timeout seconds. Attempt: $i of 10"
+ sleep $_sleep_timeout
+done
- mount_usr
-fi
--
2.42.0
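
Editor's note: stripped of the dracut specifics, the script above is a bounded try-sleep loop. A minimal Python sketch of the same control flow (interval and attempt handling follow the shell code; the mount step itself is only a placeholder):

import time

SLEEP_TIMEOUT = 15   # seconds between attempts, as in the dracut script

def try_to_mount_usr(last_attempt):
    # Placeholder for the real storage activation and /usr mount;
    # return True on success, False to trigger another attempt.
    return False

for attempt in range(12):            # attempts 0-10 may retry, attempt 11 is the last one
    last_attempt = attempt == 11
    if try_to_mount_usr(last_attempt):
        break
    if last_attempt:
        print('The last attempt to initialize storage has not been successful.')
        break
    print('Failed attempt to initialize the storage. Retrying in {0}s.'.format(SLEEP_TIMEOUT))
    time.sleep(SLEEP_TIMEOUT)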

View File

@ -0,0 +1,374 @@
From 3cb522d3a682365dae5d8745056f4671bdd5e41b Mon Sep 17 00:00:00 2001
From: Michal Reznik <mreznik@redhat.com>
Date: Fri, 3 May 2024 13:47:49 +0200
Subject: [PATCH 32/34] Add additional KB resources
Add additional KB resources in the form of ExternalLink entries or
error details, as requested by support.
---
.../libraries/checkbootavailspace.py | 4 ++++
.../common/actors/checkcifs/libraries/checkcifs.py | 5 +++++
.../libraries/checkdddd.py | 10 ++++++++++
.../common/actors/checkmemory/libraries/checkmemory.py | 5 +++++
repos/system_upgrade/common/actors/checknfs/actor.py | 4 ++++
.../common/actors/checkrootsymlinks/actor.py | 5 +++++
.../libraries/checkyumpluginsenabled.py | 4 ++++
.../libraries/checkinstalledkernels.py | 5 +++++
.../missinggpgkeysinhibitor/libraries/missinggpgkey.py | 5 ++++-
.../common/actors/opensshpermitrootlogincheck/actor.py | 5 +++++
.../common/actors/persistentnetnamesdisable/actor.py | 5 +++++
.../targetuserspacecreator/libraries/userspacegen.py | 9 +++++++++
.../actors/verifydialogs/libraries/verifydialogs.py | 5 +++++
repos/system_upgrade/common/libraries/rhsm.py | 3 ++-
.../system_upgrade/el7toel8/actors/checkbtrfs/actor.py | 4 ++++
.../actors/checkhacluster/libraries/checkhacluster.py | 4 ++++
.../el7toel8/actors/checkremovedpammodules/actor.py | 4 ++++
.../libraries/checkinstalleddevelkernels.py | 4 ++++
.../libraries/satellite_upgrade_check.py | 5 +++++
.../actors/checkifcfg/libraries/checkifcfg_ifcfg.py | 5 +++++
.../actors/firewalldcheckallowzonedrifting/actor.py | 5 +++++
21 files changed, 103 insertions(+), 2 deletions(-)
diff --git a/repos/system_upgrade/common/actors/checkbootavailspace/libraries/checkbootavailspace.py b/repos/system_upgrade/common/actors/checkbootavailspace/libraries/checkbootavailspace.py
index 7380f335..0cc4cf7d 100644
--- a/repos/system_upgrade/common/actors/checkbootavailspace/libraries/checkbootavailspace.py
+++ b/repos/system_upgrade/common/actors/checkbootavailspace/libraries/checkbootavailspace.py
@@ -32,6 +32,10 @@ def inhibit_upgrade(avail_bytes):
'/boot needs additional {0} MiB to be able to accommodate the upgrade initramfs and new kernel.'.format(
additional_mib_needed)
),
+ reporting.ExternalLink(
+ url='https://access.redhat.com/solutions/298263',
+ title='Why does kernel cannot be upgraded due to insufficient space in /boot ?'
+ ),
reporting.Severity(reporting.Severity.HIGH),
reporting.Groups([reporting.Groups.FILESYSTEM]),
reporting.Groups([reporting.Groups.INHIBITOR]),
diff --git a/repos/system_upgrade/common/actors/checkcifs/libraries/checkcifs.py b/repos/system_upgrade/common/actors/checkcifs/libraries/checkcifs.py
index b3ae146f..fc26ea70 100644
--- a/repos/system_upgrade/common/actors/checkcifs/libraries/checkcifs.py
+++ b/repos/system_upgrade/common/actors/checkcifs/libraries/checkcifs.py
@@ -18,6 +18,11 @@ def checkcifs(storage_info):
reporting.Groups.NETWORK
]),
reporting.Remediation(hint='Comment out CIFS entries to proceed with the upgrade.'),
+ reporting.ExternalLink(
+ url='https://access.redhat.com/solutions/6964304',
+ title='Leapp upgrade failed with error '
+ '"Inhibitor: Use of CIFS detected. Upgrade cannot proceed"'
+ ),
reporting.Groups([reporting.Groups.INHIBITOR]),
reporting.RelatedResource('file', '/etc/fstab')
])
diff --git a/repos/system_upgrade/common/actors/checkdetecteddevicesanddrivers/libraries/checkdddd.py b/repos/system_upgrade/common/actors/checkdetecteddevicesanddrivers/libraries/checkdddd.py
index df431c0e..defe3f9a 100644
--- a/repos/system_upgrade/common/actors/checkdetecteddevicesanddrivers/libraries/checkdddd.py
+++ b/repos/system_upgrade/common/actors/checkdetecteddevicesanddrivers/libraries/checkdddd.py
@@ -35,6 +35,16 @@ def create_inhibitors(inhibiting_entries):
source=get_source_major_version(),
)
),
+ reporting.ExternalLink(
+ url='https://access.redhat.com/solutions/6971716',
+ title='Leapp preupgrade getting "Inhibitor: Detected loaded kernel drivers which have been '
+ 'removed in RHEL 8. Upgrade cannot proceed." '
+ ),
+ reporting.ExternalLink(
+ url='https://access.redhat.com/solutions/5436131',
+ title='Leapp upgrade fail with error "Inhibitor: Detected loaded kernel drivers which '
+ 'have been removed in RHEL 8. Upgrade cannot proceed."'
+ ),
reporting.Audience('sysadmin'),
reporting.Groups([reporting.Groups.KERNEL, reporting.Groups.DRIVERS]),
reporting.Severity(reporting.Severity.HIGH),
diff --git a/repos/system_upgrade/common/actors/checkmemory/libraries/checkmemory.py b/repos/system_upgrade/common/actors/checkmemory/libraries/checkmemory.py
index 25012273..808c9662 100644
--- a/repos/system_upgrade/common/actors/checkmemory/libraries/checkmemory.py
+++ b/repos/system_upgrade/common/actors/checkmemory/libraries/checkmemory.py
@@ -42,6 +42,11 @@ def process():
reporting.Summary(summary),
reporting.Severity(reporting.Severity.HIGH),
reporting.Groups([reporting.Groups.SANITY, reporting.Groups.INHIBITOR]),
+ reporting.ExternalLink(
+ url='https://access.redhat.com/solutions/7014179',
+ title='Leapp upgrade fail with error"Minimum memory requirements '
+ 'for RHEL 8 are not met"Upgrade cannot proceed'
+ ),
reporting.ExternalLink(
url='https://access.redhat.com/articles/rhel-limits',
title='Red Hat Enterprise Linux Technology Capabilities and Limits'
diff --git a/repos/system_upgrade/common/actors/checknfs/actor.py b/repos/system_upgrade/common/actors/checknfs/actor.py
index 208c5dd9..94c5e606 100644
--- a/repos/system_upgrade/common/actors/checknfs/actor.py
+++ b/repos/system_upgrade/common/actors/checknfs/actor.py
@@ -61,6 +61,10 @@ class CheckNfs(Actor):
reporting.Groups.NETWORK
]),
reporting.Remediation(hint='Disable NFS temporarily for the upgrade if possible.'),
+ reporting.ExternalLink(
+ url='https://access.redhat.com/solutions/6964006',
+ title='Why does leapp upgrade fail on detecting NFS during upgrade?'
+ ),
reporting.Groups([reporting.Groups.INHIBITOR]),
] + fstab_related_resource
)
diff --git a/repos/system_upgrade/common/actors/checkrootsymlinks/actor.py b/repos/system_upgrade/common/actors/checkrootsymlinks/actor.py
index 2769b7c1..c35272b2 100644
--- a/repos/system_upgrade/common/actors/checkrootsymlinks/actor.py
+++ b/repos/system_upgrade/common/actors/checkrootsymlinks/actor.py
@@ -37,6 +37,11 @@ class CheckRootSymlinks(Actor):
'point to absolute paths.\n'
'Please change these links to relative ones.'
),
+ reporting.ExternalLink(
+ url='https://access.redhat.com/solutions/6989732',
+ title='leapp upgrade stops with Inhibitor "Upgrade requires links in root '
+ 'directory to be relative"'
+ ),
reporting.Severity(reporting.Severity.HIGH),
reporting.Groups([reporting.Groups.INHIBITOR])]
diff --git a/repos/system_upgrade/common/actors/checkyumpluginsenabled/libraries/checkyumpluginsenabled.py b/repos/system_upgrade/common/actors/checkyumpluginsenabled/libraries/checkyumpluginsenabled.py
index 48f38d0a..5522af9c 100644
--- a/repos/system_upgrade/common/actors/checkyumpluginsenabled/libraries/checkyumpluginsenabled.py
+++ b/repos/system_upgrade/common/actors/checkyumpluginsenabled/libraries/checkyumpluginsenabled.py
@@ -63,6 +63,10 @@ def check_required_yum_plugins_enabled(pkg_manager_info):
# Provide all commands as one due to problems with satellites
commands=[['bash', '-c', '"{0}"'.format('; '.join(remediation_commands))]]
),
+ reporting.ExternalLink(
+ url='https://access.redhat.com/solutions/7028063',
+ title='Why is Leapp preupgrade generating "Inhibitor: Required YUM plugins are not being loaded."'
+ ),
reporting.RelatedResource('file', pkg_manager_config_path),
reporting.RelatedResource('file', subscription_manager_plugin_conf),
reporting.RelatedResource('file', product_id_plugin_conf),
diff --git a/repos/system_upgrade/common/actors/kernel/checkinstalledkernels/libraries/checkinstalledkernels.py b/repos/system_upgrade/common/actors/kernel/checkinstalledkernels/libraries/checkinstalledkernels.py
index 95882d29..4573354b 100644
--- a/repos/system_upgrade/common/actors/kernel/checkinstalledkernels/libraries/checkinstalledkernels.py
+++ b/repos/system_upgrade/common/actors/kernel/checkinstalledkernels/libraries/checkinstalledkernels.py
@@ -103,5 +103,10 @@ def process():
reporting.Groups([reporting.Groups.KERNEL, reporting.Groups.BOOT]),
reporting.Groups([reporting.Groups.INHIBITOR]),
reporting.Remediation(hint=remediation),
+ reporting.ExternalLink(
+ url='https://access.redhat.com/solutions/7014134',
+            title='Leapp upgrade fails with error "Inhibitor: Newest installed kernel '
+                  'not in use". Upgrade cannot proceed'
+ ),
reporting.RelatedResource('package', 'kernel')
])
diff --git a/repos/system_upgrade/common/actors/missinggpgkeysinhibitor/libraries/missinggpgkey.py b/repos/system_upgrade/common/actors/missinggpgkeysinhibitor/libraries/missinggpgkey.py
index 9a806ca2..4b93e741 100644
--- a/repos/system_upgrade/common/actors/missinggpgkeysinhibitor/libraries/missinggpgkey.py
+++ b/repos/system_upgrade/common/actors/missinggpgkeysinhibitor/libraries/missinggpgkey.py
@@ -65,7 +65,10 @@ def _consume_data():
used_target_repos = next(api.consume(UsedTargetRepositories)).repos
except StopIteration:
raise StopActorExecutionError(
- 'Could not check for valid GPG keys', details={'details': 'No UsedTargetRepositories facts'}
+ 'Could not check for valid GPG keys', details={
+ 'details': 'No UsedTargetRepositories facts',
+ 'link': 'https://access.redhat.com/solutions/7061850'
+ }
)
try:
diff --git a/repos/system_upgrade/common/actors/opensshpermitrootlogincheck/actor.py b/repos/system_upgrade/common/actors/opensshpermitrootlogincheck/actor.py
index 2ac4ec8f..7a49622f 100644
--- a/repos/system_upgrade/common/actors/opensshpermitrootlogincheck/actor.py
+++ b/repos/system_upgrade/common/actors/opensshpermitrootlogincheck/actor.py
@@ -135,6 +135,11 @@ class OpenSshPermitRootLoginCheck(Actor):
'sshd_config next to the "PermitRootLogin yes" directive '
'to prevent rpm replacing it during the upgrade.'
),
+ reporting.ExternalLink(
+ url='https://access.redhat.com/solutions/7003083',
+                title='Why is Leapp Preupgrade for RHEL 8 to 9 reporting '
+                      '"Possible problems with remote login using root account"?'
+ ),
reporting.Groups([reporting.Groups.INHIBITOR])
] + COMMON_RESOURCES)
# If the configuration is modified and contains any directive allowing
diff --git a/repos/system_upgrade/common/actors/persistentnetnamesdisable/actor.py b/repos/system_upgrade/common/actors/persistentnetnamesdisable/actor.py
index 0e13c139..1f7f1413 100644
--- a/repos/system_upgrade/common/actors/persistentnetnamesdisable/actor.py
+++ b/repos/system_upgrade/common/actors/persistentnetnamesdisable/actor.py
@@ -50,6 +50,11 @@ class PersistentNetNamesDisable(Actor):
title='How to perform an in-place upgrade to RHEL 8 when using kernel NIC names on RHEL 7',
url='https://access.redhat.com/solutions/4067471'
),
+ reporting.ExternalLink(
+                title='RHEL 8 to RHEL 9: in-place upgrade fails at '
+ '"Network configuration for unsupported device types detected"',
+ url='https://access.redhat.com/solutions/7009239'
+ ),
reporting.Remediation(
hint='Rename all ethX network interfaces following the attached KB solution article.'
),
diff --git a/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py b/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py
index d60bc75f..dc93c9a0 100644
--- a/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py
+++ b/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py
@@ -828,6 +828,10 @@ def _get_rhsm_available_repoids(context):
' to set up Satellite and the system properly.'
).format(target_major_version)),
+ reporting.ExternalLink(
+ url='https://access.redhat.com/solutions/5392811',
+ title='RHEL 7 to RHEL 8 LEAPP Upgrade Failing When Using Red Hat Satellite'
+ ),
reporting.ExternalLink(
# https://red.ht/preparing-for-upgrade-to-rhel8
# https://red.ht/preparing-for-upgrade-to-rhel9
@@ -1007,6 +1011,11 @@ def gather_target_repositories(context, indata):
# https://red.ht/preparing-for-upgrade-to-rhel10
url='https://red.ht/preparing-for-upgrade-to-rhel{}'.format(target_major_version),
title='Preparing for the upgrade'),
+ reporting.ExternalLink(
+ url='https://access.redhat.com/solutions/7001181',
+ title='LEAPP Upgrade Failing from RHEL 7 to RHEL 8 when system is '
+                          'registered to the customer portal'
+ ),
reporting.RelatedResource("file", "/etc/leapp/files/repomap.json"),
reporting.RelatedResource("file", "/etc/yum.repos.d/")
])
diff --git a/repos/system_upgrade/common/actors/verifydialogs/libraries/verifydialogs.py b/repos/system_upgrade/common/actors/verifydialogs/libraries/verifydialogs.py
index a6dbe6eb..a79079b1 100644
--- a/repos/system_upgrade/common/actors/verifydialogs/libraries/verifydialogs.py
+++ b/repos/system_upgrade/common/actors/verifydialogs/libraries/verifydialogs.py
@@ -20,5 +20,10 @@ def check_dialogs(inhibit_if_no_userchoice=True):
reporting.Summary(summary.format('\n'.join(sections))),
reporting.Groups([reporting.Groups.INHIBITOR] if inhibit_if_no_userchoice else []),
reporting.Remediation(hint=dialogs_remediation, commands=cmd_remediation),
+ reporting.ExternalLink(
+ url='https://access.redhat.com/solutions/7035321',
+            title='Leapp upgrade fails with error "Inhibitor: Missing required answers '
+ 'in the answer file."'
+ ),
reporting.Key(dialog.key)]
reporting.create_report(report_data + dialog_resources)
diff --git a/repos/system_upgrade/common/libraries/rhsm.py b/repos/system_upgrade/common/libraries/rhsm.py
index eb388829..74f6aeb1 100644
--- a/repos/system_upgrade/common/libraries/rhsm.py
+++ b/repos/system_upgrade/common/libraries/rhsm.py
@@ -85,7 +85,8 @@ def _handle_rhsm_exceptions(hint=None):
details={
'details': str(e),
'stderr': e.stderr,
- 'hint': hint or _def_hint
+ 'hint': hint or _def_hint,
+ 'link': 'https://access.redhat.com/solutions/6138372'
}
)
diff --git a/repos/system_upgrade/el7toel8/actors/checkbtrfs/actor.py b/repos/system_upgrade/el7toel8/actors/checkbtrfs/actor.py
index c1b07f8d..a3848957 100644
--- a/repos/system_upgrade/el7toel8/actors/checkbtrfs/actor.py
+++ b/repos/system_upgrade/el7toel8/actors/checkbtrfs/actor.py
@@ -41,6 +41,10 @@ class CheckBtrfs(Actor):
title='How do I prevent a kernel module from loading automatically?',
url='https://access.redhat.com/solutions/41278'
),
+ reporting.ExternalLink(
+                title='Leapp upgrade fails with error "Inhibitor: Btrfs has been removed from RHEL8"',
+ url='https://access.redhat.com/solutions/7020130'
+ ),
reporting.Severity(reporting.Severity.HIGH),
reporting.Groups([reporting.Groups.INHIBITOR]),
reporting.Groups([reporting.Groups.FILESYSTEM]),
diff --git a/repos/system_upgrade/el7toel8/actors/checkhacluster/libraries/checkhacluster.py b/repos/system_upgrade/el7toel8/actors/checkhacluster/libraries/checkhacluster.py
index 870cf8a9..115867d2 100644
--- a/repos/system_upgrade/el7toel8/actors/checkhacluster/libraries/checkhacluster.py
+++ b/repos/system_upgrade/el7toel8/actors/checkhacluster/libraries/checkhacluster.py
@@ -25,6 +25,10 @@ def inhibit(node_type):
" to a RHEL High Availability or Resilient Storage Cluster"
),
),
+ reporting.ExternalLink(
+ url='https://access.redhat.com/solutions/7049940',
+ title='Leapp upgrade from RHEL 7 to RHEL 8 fails for pacemaker cluster'
+ ),
reporting.Remediation(
hint=(
"Destroy the existing HA cluster"
diff --git a/repos/system_upgrade/el7toel8/actors/checkremovedpammodules/actor.py b/repos/system_upgrade/el7toel8/actors/checkremovedpammodules/actor.py
index 503f6149..d2e92398 100644
--- a/repos/system_upgrade/el7toel8/actors/checkremovedpammodules/actor.py
+++ b/repos/system_upgrade/el7toel8/actors/checkremovedpammodules/actor.py
@@ -59,6 +59,10 @@ class CheckRemovedPamModules(Actor):
'please remove the pam module(s) from all the files '
'under /etc/pam.d/.'.format(', '.join(replacements))
),
+ reporting.ExternalLink(
+ url='https://access.redhat.com/solutions/7004774',
+ title='Leapp preupgrade fails with: The pam_tally2 pam module(s) no longer available'
+ ),
reporting.Severity(reporting.Severity.HIGH),
reporting.Groups([reporting.Groups.INHIBITOR]),
] + [reporting.RelatedResource('pam', r) for r in replacements | found_modules])
diff --git a/repos/system_upgrade/el7toel8/actors/kernel/checkinstalleddevelkernels/checkinstalleddevelkernels/libraries/checkinstalleddevelkernels.py b/repos/system_upgrade/el7toel8/actors/kernel/checkinstalleddevelkernels/checkinstalleddevelkernels/libraries/checkinstalleddevelkernels.py
index 0ff4489f..fa49092c 100644
--- a/repos/system_upgrade/el7toel8/actors/kernel/checkinstalleddevelkernels/checkinstalleddevelkernels/libraries/checkinstalleddevelkernels.py
+++ b/repos/system_upgrade/el7toel8/actors/kernel/checkinstalleddevelkernels/checkinstalleddevelkernels/libraries/checkinstalleddevelkernels.py
@@ -38,5 +38,9 @@ def process():
reporting.Groups([reporting.Groups.KERNEL]),
reporting.Groups([reporting.Groups.INHIBITOR]),
reporting.Remediation(hint=hint, commands=commands),
+ reporting.ExternalLink(
+ url='https://access.redhat.com/solutions/4723671',
+ title='leapp upgrade fails on kernel-devel packages'
+ ),
reporting.RelatedResource('package', 'kernel-devel')
])
diff --git a/repos/system_upgrade/el7toel8/actors/satellite_upgrade_check/libraries/satellite_upgrade_check.py b/repos/system_upgrade/el7toel8/actors/satellite_upgrade_check/libraries/satellite_upgrade_check.py
index 6954dd50..82148ef3 100644
--- a/repos/system_upgrade/el7toel8/actors/satellite_upgrade_check/libraries/satellite_upgrade_check.py
+++ b/repos/system_upgrade/el7toel8/actors/satellite_upgrade_check/libraries/satellite_upgrade_check.py
@@ -53,6 +53,11 @@ def satellite_upgrade_check(facts):
reporting.create_report([
reporting.Title(title),
reporting.Summary(summary),
+ reporting.ExternalLink(
+ url='https://access.redhat.com/solutions/6794671',
+ title='Leapp preupgrade of Red Hat Satellite 6 fails on '
+ 'Old PostgreSQL data found in /var/lib/pgsql/data'
+ ),
reporting.Severity(severity),
reporting.Groups([]),
reporting.Groups(flags)
diff --git a/repos/system_upgrade/el8toel9/actors/checkifcfg/libraries/checkifcfg_ifcfg.py b/repos/system_upgrade/el8toel9/actors/checkifcfg/libraries/checkifcfg_ifcfg.py
index 946841df..ed666350 100644
--- a/repos/system_upgrade/el8toel9/actors/checkifcfg/libraries/checkifcfg_ifcfg.py
+++ b/repos/system_upgrade/el8toel9/actors/checkifcfg/libraries/checkifcfg_ifcfg.py
@@ -88,6 +88,11 @@ def process():
reporting.Title(title),
reporting.Summary(summary),
reporting.Remediation(hint=remediation),
+ reporting.ExternalLink(
+ url='https://access.redhat.com/solutions/7009239',
+            title='RHEL 8 to RHEL 9: in-place upgrade fails at '
+ '"Network configuration for unsupported device types detected"'
+ ),
reporting.Severity(reporting.Severity.HIGH),
reporting.Groups([reporting.Groups.NETWORK, reporting.Groups.SERVICES]),
reporting.Groups([reporting.Groups.INHIBITOR]),
diff --git a/repos/system_upgrade/el8toel9/actors/firewalldcheckallowzonedrifting/actor.py b/repos/system_upgrade/el8toel9/actors/firewalldcheckallowzonedrifting/actor.py
index b7eb5806..0002f6aa 100644
--- a/repos/system_upgrade/el8toel9/actors/firewalldcheckallowzonedrifting/actor.py
+++ b/repos/system_upgrade/el8toel9/actors/firewalldcheckallowzonedrifting/actor.py
@@ -44,6 +44,11 @@ class FirewalldCheckAllowZoneDrifting(Actor):
reporting.ExternalLink(
url='https://access.redhat.com/articles/4855631',
title='Changes in firewalld related to Zone Drifting'),
+ reporting.ExternalLink(
+ url='https://access.redhat.com/solutions/6969130',
+ title='Leapp Preupgrade check fails with error - '
+ '"Inhibitor: Firewalld Configuration AllowZoneDrifting Is Unsupported".'
+ ),
reporting.Remediation(
hint='Set AllowZoneDrifting=no in /etc/firewalld/firewalld.conf',
commands=[['sed', '-i', 's/^AllowZoneDrifting=.*/AllowZoneDrifting=no/',
--
2.42.0
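
Editor's note: every hunk in the patch above follows the same shape: an extra reporting.ExternalLink field pointing at a KB article is appended to an existing reporting.create_report() call (or a 'link' key is added to the details of a StopActorExecutionError). A minimal sketch of that pattern follows; the function name, URL, and all strings are placeholders for illustration, not content from any actual actor in the repository.

from leapp import reporting


def report_example_inhibitor():
    # Hypothetical report illustrating the fields used throughout this patch;
    # the URL and all strings here are placeholders.
    reporting.create_report([
        reporting.Title('Example inhibitor detected'),
        reporting.Summary('Short explanation of why the upgrade cannot proceed.'),
        reporting.Severity(reporting.Severity.HIGH),
        reporting.Groups([reporting.Groups.INHIBITOR]),
        reporting.Remediation(hint='Describe how to resolve the problem.'),
        # The KB reference is attached as an additional ExternalLink field,
        # which is exactly what each hunk in this patch adds to an existing report.
        reporting.ExternalLink(
            url='https://access.redhat.com/solutions/0000000',  # placeholder URL
            title='Placeholder KB article title'
        ),
        reporting.RelatedResource('file', '/etc/example.conf'),  # placeholder
    ])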

@ -0,0 +1,43 @@
From da5ce33f51b2607c372acfc0e9bb28bf5270ef65 Mon Sep 17 00:00:00 2001
From: Petr Stodulka <pstodulk@redhat.com>
Date: Sat, 4 May 2024 10:07:14 +0200
Subject: [PATCH 33/34] storage initialisation: apply sleep always
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Based on feedback from @rmetrich¹, the sleep period should always be
applied, as the initialisation problem also occurs on systems with more
than a few LVs where /usr is not on a dedicated volume.
1: https://github.com/oamg/leapp-repository/pull/1218#issuecomment-2093303020
---
.../files/dracut/85sys-upgrade-redhat/mount_usr.sh | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/85sys-upgrade-redhat/mount_usr.sh b/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/85sys-upgrade-redhat/mount_usr.sh
index db065d87..84f4857d 100755
--- a/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/85sys-upgrade-redhat/mount_usr.sh
+++ b/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/85sys-upgrade-redhat/mount_usr.sh
@@ -127,6 +127,8 @@ try_to_mount_usr() {
_sleep_timeout=15
_last_attempt="false"
for i in 0 1 2 3 4 5 6 7 8 9 10 11; do
+ info "Storage initialisation: Attempt $i of 11. Wait $_sleep_timeout seconds."
+ sleep $_sleep_timeout
if [ $i -eq 11 ]; then
_last_attempt="true"
fi
@@ -141,7 +143,6 @@ for i in 0 1 2 3 4 5 6 7 8 9 10 11; do
break
fi
- warn "Failed attempt to initialize the storage. Retry in $_sleep_timeout seconds. Attempt: $i of 10"
- sleep $_sleep_timeout
+ warn "Failed attempt to initialize the storage. Retry..."
done
--
2.42.0
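
Editor's note: the behavioural change in the hunk above is easy to misread in diff form: the 15-second sleep now happens before every one of the 12 mount attempts, including the first, instead of only after a failed one, and the final iteration sets _last_attempt so it can be handled specially. A rough Python rendering of the resulting control flow, for illustration only (the real code is the dracut shell script patched above):

import time

SLEEP_TIMEOUT = 15   # seconds, mirrors _sleep_timeout in mount_usr.sh
ATTEMPTS = 12        # iterations 0..11, mirrors the shell loop


def try_to_mount_usr(last_attempt):
    # Placeholder for the real storage-initialisation / mount logic.
    return False


def wait_for_usr():
    for i in range(ATTEMPTS):
        # The wait now sits at the top of the loop: always give the storage
        # time to settle before the next attempt.
        print('Storage initialisation: attempt {0} of {1}, waiting {2} seconds.'
              .format(i, ATTEMPTS - 1, SLEEP_TIMEOUT))
        time.sleep(SLEEP_TIMEOUT)
        last_attempt = (i == ATTEMPTS - 1)
        if try_to_mount_usr(last_attempt):
            break
        print('Failed attempt to initialize the storage. Retrying...')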

@ -0,0 +1,30 @@
From ce8dfff52c067fc334511c9773759eca8bff8b61 Mon Sep 17 00:00:00 2001
From: Rodolfo Olivieri <rolivier@redhat.com>
Date: Tue, 7 May 2024 09:38:41 -0300
Subject: [PATCH 34/34] Add renovate to track github-actions deps
The Renovate bot will track and automatically update the GitHub Actions
dependencies in the project.
---
.github/renovate.json | 8 ++++++++
1 file changed, 8 insertions(+)
create mode 100644 .github/renovate.json
diff --git a/.github/renovate.json b/.github/renovate.json
new file mode 100644
index 00000000..f55d8081
--- /dev/null
+++ b/.github/renovate.json
@@ -0,0 +1,8 @@
+{
+ "extends": [
+ "config:base"
+ ],
+ "enabledManagers": [
+ "github-actions"
+ ]
+}
\ No newline at end of file
--
2.42.0

@ -42,7 +42,7 @@ py2_byte_compile "%1" "%2"}
Name: leapp-repository
Version: 0.20.0
Release: 2%{?dist}
Release: 3%{?dist}
Summary: Repositories for leapp
License: ASL 2.0
@ -57,7 +57,39 @@ BuildArch: noarch
# Patch0001: filename.patch
Patch0001: 0001-rhui-do-not-bootstrap-target-client-on-aws.patch
Patch0002: 0002-Packit-Drop-tests-for-obsoleted-upgrade-paths-restru.patch
Patch0003: 0003-silence-use-yield-from-from-pylint-3.1.patch
Patch0004: 0004-rocescanner-Actually-call-process-in-test_roce_notib.patch
Patch0005: 0005-Fix-incorrect-parsing-of-lscpu-output.patch
Patch0006: 0006-Add-unit-tests-for-actor-udevamdinfo.patch
Patch0007: 0007-Add-unit-tests-for-scansourcefiles-actor-1190.patch
Patch0008: 0008-pes_events_scanner-overwrite-repositories-when-apply.patch
Patch0009: 0009-Modify-upgrade-not-terminate-after-lockfile-detected.patch
Patch0010: 0010-Make-the-reboot-required-text-more-visible-in-the-co.patch
Patch0011: 0011-check_grub_legacy-inhibit-when-GRUB-legacy-is-presen.patch
Patch0012: 0012-Default-channel-to-GA-is-not-specified-otherwise-120.patch
Patch0013: 0013-Enhance-grub2-install-failure-message.patch
Patch0014: 0014-boot-check-first-partition-offset-on-GRUB-devices.patch
Patch0015: 0015-boot-Skip-checks-of-first-partition-offset-for-for-g.patch
Patch0016: 0016-repomapping-Add-RHEL7-ELS-repos.patch
Patch0017: 0017-bump-required-repomap-version.patch
Patch0018: 0018-fixup-bump-required-repomap-version.patch
Patch0019: 0019-Fix-incorrect-command-formulation.patch
Patch0020: 0020-mention-Report-in-produces-of-transitionsystemdservi.patch
Patch0021: 0021-Update-packit-config-after-tier-redefinition.patch
Patch0022: 0022-Update-reboot-msg-Note-the-console-access.patch
Patch0023: 0023-Fix-kernel-cmdline-args-we-add-not-being-propogated-.patch
Patch0024: 0024-kernelcmdlineconfig-add-newline-in-etc-kernel-cmdlin.patch
Patch0025: 0025-Data-Update-DDDD.json-fixed-incorrect-data-add-craft.patch
Patch0026: 0026-Data-Update-PES-data-to-cover-up-to-date-changes.patch
Patch0027: 0027-Move-common-Satellite-Upgrade-code-to-common.patch
Patch0028: 0028-Add-el8toel9-upgrade-facts-for-Satellite.patch
Patch0029: 0029-Refresh-collation-version-if-pulp-ansible-is-present.patch
Patch0030: 0030-Refactor-satellite_upgrade_services-to-use-SystemdSe.patch
Patch0031: 0031-mount-usr-Implement-try-sleep-loop-add-time-for-stor.patch
Patch0032: 0032-Add-additional-KB-resources.patch
Patch0033: 0033-storage-initialisation-apply-sleep-always.patch
Patch0034: 0034-Add-renovate-to-track-github-actions-deps.patch
%description
%{summary}
@ -210,6 +242,39 @@ Requires: python3-gobject-base
# APPLY PATCHES HERE
# %%patch0001 -p1
%patch0001 -p1
%patch0002 -p1
%patch0003 -p1
%patch0004 -p1
%patch0005 -p1
%patch0006 -p1
%patch0007 -p1
%patch0008 -p1
%patch0009 -p1
%patch0010 -p1
%patch0011 -p1
%patch0012 -p1
%patch0013 -p1
%patch0014 -p1
%patch0015 -p1
%patch0016 -p1
%patch0017 -p1
%patch0018 -p1
%patch0019 -p1
%patch0020 -p1
%patch0021 -p1
%patch0022 -p1
%patch0023 -p1
%patch0024 -p1
%patch0025 -p1
%patch0026 -p1
%patch0027 -p1
%patch0028 -p1
%patch0029 -p1
%patch0030 -p1
%patch0031 -p1
%patch0032 -p1
%patch0033 -p1
%patch0034 -p1
%build
@ -287,6 +352,24 @@ done;
# no files here
%changelog
* Mon May 13 2024 Toshio Kuratomi <toshio@fedoraproject.org> - 0.20.0-3
- Do not terminate the upgrade dracut module execution if
  /sysroot/root/tmp_leapp_py3/.leapp_upgrade_failed exists
- Several minor improvements in messages printed in console output
- Several minor improvements in report and error messages
- Fix the parsing of the lscpu output
- Fix evaluation of PES data
- Target by default always "GA" channel repositories unless a different
  channel is specified for the leapp execution
- Fix creation of the post upgrade report about changes in states of systemd
  services
- Update the device driver deprecation data, fixing invalid fields for some
  AMD CPUs
- Update the default kernel cmdline
- Wait for the storage initialization when /usr is on separate file system -
  covering SAN
- Resolves: RHEL-27847, RHEL-35240
* Tue Feb 20 2024 Petr Stodulka <pstodulk@redhat.com> - 0.20.0-2
- Fallback to original RHUI solution on AWS to fix issues caused by changes in RHUI client
- Resolves: RHEL-16729