Compare commits

...

3 Commits

Author SHA1 Message Date
6e448a755d import CS leapp-repository-0.20.0-2.el8 2024-05-22 10:44:17 +00:00
7f3492f658 import CS leapp-repository-0.19.0-4.el8 2024-01-10 17:19:25 +00:00
CentOS Sources 7418c7fbb3 import leapp-repository-0.18.0-1.el8 2023-05-16 06:47:35 +00:00
10 changed files with 458 additions and 993 deletions

.gitignore

@@ -1,2 +1,2 @@
SOURCES/deps-pkgs-7.tar.gz
SOURCES/leapp-repository-0.17.0.tar.gz
SOURCES/deps-pkgs-10.tar.gz
SOURCES/leapp-repository-0.20.0.tar.gz


@@ -1,2 +1,2 @@
4886551d9ee2259cdfbd8d64a02d0ab9a381ba3d SOURCES/deps-pkgs-7.tar.gz
cbb3e6025c6567507d3bc317731b4c2f0a0eb872 SOURCES/leapp-repository-0.17.0.tar.gz
d520ada12294e4dd8837c81f92d4c184ab403d51 SOURCES/deps-pkgs-10.tar.gz
185bbb040dba48e1ea2d6c627133af594378afd4 SOURCES/leapp-repository-0.20.0.tar.gz


@@ -1,562 +0,0 @@
From 505963d51e3989a7d907861dd870133c670ccb78 Mon Sep 17 00:00:00 2001
From: Joe Shimkus <joe@shimkus.com>
Date: Wed, 24 Aug 2022 13:30:19 -0400
Subject: [PATCH] CheckVDO: Ask user only about failures and undetermined devices (+
 report update)
The previous solution made it possible to skip the VDO check by answering
the user question (confirming that no VDO devices are present) if the
vdo package is not installed (as the scan of the system could not
be performed). However, as part of bug 2096159 it was discovered
that some systems have very dynamic storage which can disappear
at the very moment the check by the vdo tool is performed, which led
to the reported inhibitor. We have discovered that this can be a real
blocker of the upgrade on such systems, as it is pretty easy for
at least 1 of N devices to raise such an issue. (*)
To make the upgrade possible on such systems, the dialog has been
updated to be able to skip any problematic VDO checks:
- undetermined block devices
- failures during the vdo scan of a block device
In such a case, the user must confirm that no VDO devices unmanaged
by LVM are present. The dialog now asks the user for the `confirm` key
instead of `all_vdo_converted`. If any non-LVM-managed VDO
devices are discovered, the upgrade is inhibited regardless of the answer
(this is supposed to happen only when the user's answer is not right,
so we are OK with that behaviour).
Reports are also updated: previously, several reports with the same
title could appear during one run of leapp, even though each of them
had a different meaning. Individual titles have been set for all
reports, and report summaries have been updated as well.
(*) This also covers situations when the discovered list of devices
is incomplete, as some block devices could be loaded after the
initial scan of block devices (the StorageInfo msg) is created. This
means that such devices will not be checked at all, as they will not
be known to other actors. We consider this OK: when a system with
dynamic storage is present, many of the block devices are usually
redundant, so the user will usually have to answer the dialog anyway
due to other "unstable" block devices.
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2096159
Jira: OAMG-7025
---
.../el8toel9/actors/checkvdo/actor.py | 96 +++++++----
.../actors/checkvdo/libraries/checkvdo.py | 155 ++++++++++--------
.../checkvdo/tests/unit_test_checkvdo.py | 44 +++--
3 files changed, 184 insertions(+), 111 deletions(-)
diff --git a/repos/system_upgrade/el8toel9/actors/checkvdo/actor.py b/repos/system_upgrade/el8toel9/actors/checkvdo/actor.py
index c28b3a9..d43bac0 100644
--- a/repos/system_upgrade/el8toel9/actors/checkvdo/actor.py
+++ b/repos/system_upgrade/el8toel9/actors/checkvdo/actor.py
@@ -12,7 +12,7 @@ class CheckVdo(Actor):
`Background`
============
- In RHEL 9.0 the indepdent VDO management software, `vdo manager`, is
+ In RHEL 9.0 the independent VDO management software, `vdo manager`, is
superseded by LVM management. Existing VDOs must be converted to LVM-based
management *before* upgrading to RHEL 9.0.
@@ -32,12 +32,24 @@ class CheckVdo(Actor):
If the VdoConversionInfo model indicates unexpected errors occurred during
scanning CheckVdo will produce appropriate inhibitory reports.
- Lastly, if the VdoConversionInfo model indicates conditions exist where VDO
- devices could exist but the necessary software to check was not installed
- on the system CheckVdo will present a dialog to the user. This dialog will
- ask the user to either install the required software if the user knows or
- is unsure that VDO devices exist or to approve the continuation of the
- upgrade if the user is certain that no VDO devices exist.
+ If the VdoConversionInfo model indicates conditions exist where VDO devices
+ could exist but the necessary software to check was not installed on the
+ system CheckVdo will present a dialog to the user. This dialog will ask the
+ user to either install the required software if the user knows or is unsure
+ that VDO devices exist or to approve the continuation of the upgrade if the
+ user is certain that either there are no VDO devices present or that all
+ VDO devices have been successfully converted.
+
+ To maximize safety CheckVdo operates against all block devices which
+ match the criteria for potential VDO devices. Given the dynamic nature
+ of device presence within a system some devices which may have been present
+ during leapp discovery may not be present when CheckVdo runs. As CheckVdo
+ defaults to producing inhibitory reports if a device cannot be checked
+ (for any reason) this dynamism may be problematic. To prevent CheckVdo
+ producing an inhibitory report for devices which are dynamically no longer
+ present within the system the user may answer the previously mentioned
+ dialog in the affirmative when the user knows that all VDO devices have
+ been converted. This will circumvent checks of block devices.
"""
name = 'check_vdo'
@@ -50,37 +62,55 @@ class CheckVdo(Actor):
reason='Confirmation',
components=(
BooleanComponent(
- key='no_vdo_devices',
- label='Are there no VDO devices on the system?',
- description='Enter True if there are no VDO devices on '
- 'the system and False continue the upgrade. '
- 'If the system has no VDO devices, then it '
- 'is safe to continue the upgrade. If there '
- 'are VDO devices they must all be converted '
- 'to LVM management before the upgrade can '
- 'proceed.',
- reason='Based on installed packages it is possible that '
- 'VDO devices exist on the system. All VDO devices '
- 'must be converted to being managed by LVM before '
- 'the upgrade occurs. Because the \'vdo\' package '
- 'is not installed, Leapp cannot determine whether '
- 'any VDO devices exist that have not yet been '
- 'converted. If the devices are not converted and '
- 'the upgrade proceeds the data on unconverted VDO '
- 'devices will be inaccessible. If you have any '
- 'doubts you should choose to install the \'vdo\' '
- 'package and re-run the upgrade process to check '
- 'for unconverted VDO devices. If you are certain '
- 'that the system has no VDO devices or that all '
- 'VDO devices have been converted to LVM management '
- 'you may opt to allow the upgrade to proceed.'
+ key='confirm',
+ label='Are all VDO devices, if any, successfully converted to LVM management?',
+ description='Enter True if no VDO devices are present '
+ 'on the system or all VDO devices on the system '
+ 'have been successfully converted to LVM '
+ 'management. '
+ 'Entering True will circumvent check of failures '
+ 'and undetermined devices. '
+ 'Recognized VDO devices that have not been '
+ 'converted to LVM management can still block '
+ 'the upgrade despite the answer.'
+ 'All VDO devices must be converted to LVM '
+ 'management before upgrading.',
+ reason='To maximize safety all block devices on a system '
+ 'that meet the criteria as possible VDO devices '
+ 'are checked to verify that, if VDOs, they have '
+ 'been converted to LVM management. '
+ 'If the devices are not converted and the upgrade '
+ 'proceeds the data on unconverted VDO devices will '
+ 'be inaccessible. '
+ 'In order to perform checking the \'vdo\' package '
+ 'must be installed. '
+ 'If the \'vdo\' package is not installed and there '
+ 'are any doubts the \'vdo\' package should be '
+ 'installed and the upgrade process re-run to check '
+ 'for unconverted VDO devices. '
+ 'If the check of any device fails for any reason '
+ 'an upgrade inhibiting report is generated. '
+ 'This may be problematic if devices are '
+ 'dynamically removed from the system subsequent to '
+ 'having been identified during device discovery. '
+ 'If it is certain that all VDO devices have been '
+ 'successfully converted to LVM management this '
+ 'dialog may be answered in the affirmative which '
+ 'will circumvent block device checking.'
),
)
),
)
+ _asked_answer = False
+ _vdo_answer = None
- def get_no_vdo_devices_response(self):
- return self.get_answers(self.dialogs[0]).get('no_vdo_devices')
+ def get_vdo_answer(self):
+ if not self._asked_answer:
+ self._asked_answer = True
+ # calling this multiple times could lead to possible issues
+ # or at least in redundant reports
+ self._vdo_answer = self.get_answers(self.dialogs[0]).get('confirm')
+ return self._vdo_answer
def process(self):
for conversion_info in self.consume(VdoConversionInfo):
diff --git a/repos/system_upgrade/el8toel9/actors/checkvdo/libraries/checkvdo.py b/repos/system_upgrade/el8toel9/actors/checkvdo/libraries/checkvdo.py
index 9ba5c70..3b161c9 100644
--- a/repos/system_upgrade/el8toel9/actors/checkvdo/libraries/checkvdo.py
+++ b/repos/system_upgrade/el8toel9/actors/checkvdo/libraries/checkvdo.py
@@ -1,10 +1,35 @@
from leapp import reporting
from leapp.libraries.stdlib import api
-_report_title = reporting.Title('VDO devices migration to LVM management')
+def _report_skip_check():
+ if not api.current_actor().get_vdo_answer():
+ return
+
+ summary = ('User has asserted all VDO devices on the system have been '
+ 'successfully converted to LVM management or no VDO '
+ 'devices are present.')
+ reporting.create_report([
+ reporting.Title('Skipping the VDO check of block devices'),
+ reporting.Summary(summary),
+ reporting.Severity(reporting.Severity.INFO),
+ reporting.Groups([reporting.Groups.SERVICES, reporting.Groups.DRIVERS]),
+ ])
+
+
+def _process_failed_check_devices(conversion_info):
+ # Post-conversion VDOs that were not successfully checked for having
+ # completed the migration to LVM management.
+ # Return True if failed checks detected
+ devices = [x for x in conversion_info.post_conversion if (not x.complete) and x.check_failed]
+ devices += [x for x in conversion_info.undetermined_conversion if x.check_failed]
+ if not devices:
+ return False
+
+ if api.current_actor().get_vdo_answer():
+ # User asserted all possible VDO should be already converted - skip
+ return True
-def _create_unexpected_resuilt_report(devices):
names = [x.name for x in devices]
multiple = len(names) > 1
summary = ['Unexpected result checking device{0}'.format('s' if multiple else '')]
@@ -16,13 +41,14 @@ def _create_unexpected_resuilt_report(devices):
'and re-run the upgrade.'))
reporting.create_report([
- _report_title,
+ reporting.Title('Checking VDO conversion to LVM management of block devices failed'),
reporting.Summary(summary),
reporting.Severity(reporting.Severity.HIGH),
reporting.Groups([reporting.Groups.SERVICES, reporting.Groups.DRIVERS]),
reporting.Remediation(hint=remedy_hint),
reporting.Groups([reporting.Groups.INHIBITOR])
])
+ return True
def _process_post_conversion_vdos(vdos):
@@ -32,23 +58,28 @@ def _process_post_conversion_vdos(vdos):
if post_conversion:
devices = [x.name for x in post_conversion]
multiple = len(devices) > 1
- summary = ''.join(('VDO device{0} \'{1}\' '.format('s' if multiple else '',
- ', '.join(devices)),
- 'did not complete migration to LVM management. ',
- 'The named device{0} '.format('s' if multiple else ''),
- '{0} successfully converted at the '.format('were' if multiple else 'was'),
- 'device format level; however, the expected LVM management '
- 'portion of the conversion did not take place. This '
- 'indicates that an exceptional condition (for example, a '
- 'system crash) likely occured during the conversion '
- 'process. The LVM portion of the conversion must be '
- 'performed in order for upgrade to proceed.'))
+ summary = (
+ 'VDO device{s_suffix} \'{devices_str}\' '
+ 'did not complete migration to LVM management. '
+ 'The named device{s_suffix} {was_were} successfully converted '
+ 'at the device format level; however, the expected LVM management '
+ 'portion of the conversion did not take place. This indicates '
+ 'that an exceptional condition (for example, a system crash) '
+ 'likely occurred during the conversion process. The LVM portion '
+ 'of the conversion must be performed in order for upgrade '
+ 'to proceed.'
+ .format(
+ s_suffix='s' if multiple else '',
+ devices_str=', '.join(devices),
+ was_were='were' if multiple else 'was',
+ )
+ )
remedy_hint = ('Consult the VDO to LVM conversion process '
'documentation for how to complete the conversion.')
reporting.create_report([
- _report_title,
+ reporting.Title('Detected VDO devices that have not finished the conversion to LVM management.'),
reporting.Summary(summary),
reporting.Severity(reporting.Severity.HIGH),
reporting.Groups([reporting.Groups.SERVICES, reporting.Groups.DRIVERS]),
@@ -56,33 +87,32 @@ def _process_post_conversion_vdos(vdos):
reporting.Groups([reporting.Groups.INHIBITOR])
])
- # Post-conversion VDOs that were not successfully checked for having
- # completed the migration to LVM management.
- post_conversion = [x for x in vdos if (not x.complete) and x.check_failed]
- if post_conversion:
- _create_unexpected_resuilt_report(post_conversion)
-
def _process_pre_conversion_vdos(vdos):
# Pre-conversion VDOs generate an inhibiting report.
if vdos:
devices = [x.name for x in vdos]
multiple = len(devices) > 1
- summary = ''.join(('VDO device{0} \'{1}\' require{2} '.format('s' if multiple else '',
- ', '.join(devices),
- '' if multiple else 's'),
- 'migration to LVM management.'
- 'After performing the upgrade VDO devices can only be '
- 'managed via LVM. Any VDO device not currently managed '
- 'by LVM must be converted to LVM management before '
- 'upgrading. The data on any VDO device not converted to '
- 'LVM management will be inaccessible after upgrading.'))
+ summary = (
+ 'VDO device{s_suffix} \'{devices_str}\' require{s_suffix_verb} '
+ 'migration to LVM management.'
+ 'After performing the upgrade VDO devices can only be '
+ 'managed via LVM. Any VDO device not currently managed '
+ 'by LVM must be converted to LVM management before '
+ 'upgrading. The data on any VDO device not converted to '
+ 'LVM management will be inaccessible after upgrading.'
+ .format(
+ s_suffix='s' if multiple else '',
+ s_suffix_verb='' if multiple else 's',
+ devices_str=', '.join(devices),
+ )
+ )
remedy_hint = ('Consult the VDO to LVM conversion process '
'documentation for how to perform the conversion.')
reporting.create_report([
- _report_title,
+ reporting.Title('Detected VDO devices not managed by LVM'),
reporting.Summary(summary),
reporting.Severity(reporting.Severity.HIGH),
reporting.Groups([reporting.Groups.SERVICES, reporting.Groups.DRIVERS]),
@@ -104,43 +134,40 @@ def _process_undetermined_conversion_devices(devices):
# A device can only end up as undetermined either via a check that failed
# or if it was not checked. If the info for the device indicates that it
# did not have a check failure that means it was not checked.
-
- checked = [x for x in devices if x.check_failed]
- if checked:
- _create_unexpected_resuilt_report(checked)
+ # Return True if failed checks detected
unchecked = [x for x in devices if not x.check_failed]
- if unchecked:
- no_vdo_devices = api.current_actor().get_no_vdo_devices_response()
- if no_vdo_devices:
- summary = ('User has asserted there are no VDO devices on the '
- 'system in need of conversion to LVM management.')
-
- reporting.create_report([
- _report_title,
- reporting.Summary(summary),
- reporting.Severity(reporting.Severity.INFO),
- reporting.Groups([reporting.Groups.SERVICES, reporting.Groups.DRIVERS]),
- reporting.Groups([])
- ])
- elif no_vdo_devices is False:
- summary = ('User has opted to inhibit upgrade in regard to '
- 'potential VDO devices requiring conversion to LVM '
- 'management.')
- remedy_hint = ('Install the \'vdo\' package and re-run upgrade to '
- 'check for VDO devices requiring conversion.')
-
- reporting.create_report([
- _report_title,
- reporting.Summary(summary),
- reporting.Severity(reporting.Severity.HIGH),
- reporting.Groups([reporting.Groups.SERVICES, reporting.Groups.DRIVERS]),
- reporting.Remediation(hint=remedy_hint),
- reporting.Groups([reporting.Groups.INHIBITOR])
- ])
+ if not unchecked:
+ return False
+
+ if api.current_actor().get_vdo_answer():
+ # User asserted no VDO devices are present
+ return True
+
+ summary = (
+ 'The check of block devices could not be performed as the \'vdo\' '
+ 'package is not installed. All VDO devices must be converted to '
+ 'LVM management prior to the upgrade to prevent the loss of data.')
+ remedy_hint = ('Install the \'vdo\' package and re-run upgrade to '
+ 'check for VDO devices requiring conversion or confirm '
+ 'that all VDO devices, if any, are managed by LVM.')
+
+ reporting.create_report([
+ reporting.Title('Cannot perform the VDO check of block devices'),
+ reporting.Summary(summary),
+ reporting.Severity(reporting.Severity.HIGH),
+ reporting.Groups([reporting.Groups.SERVICES, reporting.Groups.DRIVERS]),
+ reporting.Remediation(hint=remedy_hint),
+ reporting.Groups([reporting.Groups.INHIBITOR])
+ ])
+ return True
def check_vdo(conversion_info):
_process_pre_conversion_vdos(conversion_info.pre_conversion)
_process_post_conversion_vdos(conversion_info.post_conversion)
- _process_undetermined_conversion_devices(conversion_info.undetermined_conversion)
+
+ detected_under_dev = _process_undetermined_conversion_devices(conversion_info.undetermined_conversion)
+ detected_failed_check = _process_failed_check_devices(conversion_info)
+ if detected_under_dev or detected_failed_check:
+ _report_skip_check()
diff --git a/repos/system_upgrade/el8toel9/actors/checkvdo/tests/unit_test_checkvdo.py b/repos/system_upgrade/el8toel9/actors/checkvdo/tests/unit_test_checkvdo.py
index e0ac39d..865e036 100644
--- a/repos/system_upgrade/el8toel9/actors/checkvdo/tests/unit_test_checkvdo.py
+++ b/repos/system_upgrade/el8toel9/actors/checkvdo/tests/unit_test_checkvdo.py
@@ -13,14 +13,16 @@ from leapp.models import (
from leapp.utils.report import is_inhibitor
-class MockedActorNoVdoDevices(CurrentActorMocked):
- def get_no_vdo_devices_response(self):
- return True
+# Mock actor base for CheckVdo tests.
+class MockedActorCheckVdo(CurrentActorMocked):
+ def get_vdo_answer(self):
+ return False
-class MockedActorSomeVdoDevices(CurrentActorMocked):
- def get_no_vdo_devices_response(self):
- return False
+# Mock actor for all_vdo_converted dialog response.
+class MockedActorAllVdoConvertedTrue(MockedActorCheckVdo):
+ def get_vdo_answer(self):
+ return True
def aslist(f):
@@ -66,6 +68,7 @@ def _undetermined_conversion_vdos(count=0, failing=False, start_char='a'):
# No VDOs tests.
def test_no_vdos(monkeypatch):
+ monkeypatch.setattr(api, 'current_actor', MockedActorCheckVdo())
monkeypatch.setattr(reporting, 'create_report', create_report_mocked())
checkvdo.check_vdo(
VdoConversionInfo(post_conversion=_post_conversion_vdos(),
@@ -76,6 +79,7 @@ def test_no_vdos(monkeypatch):
# Concurrent pre- and post-conversion tests.
def test_both_conversion_vdo_incomplete(monkeypatch):
+ monkeypatch.setattr(api, 'current_actor', MockedActorCheckVdo())
monkeypatch.setattr(reporting, 'create_report', create_report_mocked())
post_count = 7
checkvdo.check_vdo(
@@ -89,6 +93,7 @@ def test_both_conversion_vdo_incomplete(monkeypatch):
# Post-conversion tests.
def test_post_conversion_multiple_vdo_incomplete(monkeypatch):
+ monkeypatch.setattr(api, 'current_actor', MockedActorCheckVdo())
monkeypatch.setattr(reporting, 'create_report', create_report_mocked())
checkvdo.check_vdo(
VdoConversionInfo(post_conversion=_post_conversion_vdos(7, 5),
@@ -100,6 +105,7 @@ def test_post_conversion_multiple_vdo_incomplete(monkeypatch):
def test_post_conversion_multiple_vdo_complete(monkeypatch):
+ monkeypatch.setattr(api, 'current_actor', MockedActorCheckVdo())
monkeypatch.setattr(reporting, 'create_report', create_report_mocked())
checkvdo.check_vdo(
VdoConversionInfo(post_conversion=_post_conversion_vdos(7, 7),
@@ -109,6 +115,7 @@ def test_post_conversion_multiple_vdo_complete(monkeypatch):
def test_post_conversion_single_vdo_incomplete(monkeypatch):
+ monkeypatch.setattr(api, 'current_actor', MockedActorCheckVdo())
monkeypatch.setattr(reporting, 'create_report', create_report_mocked())
checkvdo.check_vdo(
VdoConversionInfo(post_conversion=_post_conversion_vdos(1),
@@ -121,6 +128,7 @@ def test_post_conversion_single_vdo_incomplete(monkeypatch):
def test_post_conversion_single_check_failing(monkeypatch):
+ monkeypatch.setattr(api, 'current_actor', MockedActorCheckVdo())
monkeypatch.setattr(reporting, 'create_report', create_report_mocked())
checkvdo.check_vdo(
VdoConversionInfo(post_conversion=_post_conversion_vdos(2, complete=1, failing=1),
@@ -135,6 +143,7 @@ def test_post_conversion_single_check_failing(monkeypatch):
def test_post_conversion_multiple_check_failing(monkeypatch):
+ monkeypatch.setattr(api, 'current_actor', MockedActorCheckVdo())
monkeypatch.setattr(reporting, 'create_report', create_report_mocked())
checkvdo.check_vdo(
VdoConversionInfo(post_conversion=_post_conversion_vdos(7, complete=4, failing=3),
@@ -147,6 +156,7 @@ def test_post_conversion_multiple_check_failing(monkeypatch):
def test_post_conversion_incomplete_and_check_failing(monkeypatch):
+ monkeypatch.setattr(api, 'current_actor', MockedActorCheckVdo())
monkeypatch.setattr(reporting, 'create_report', create_report_mocked())
checkvdo.check_vdo(
VdoConversionInfo(post_conversion=_post_conversion_vdos(2, failing=1),
@@ -158,6 +168,7 @@ def test_post_conversion_incomplete_and_check_failing(monkeypatch):
# Pre-conversion tests.
def test_pre_conversion_multiple_vdo_incomplete(monkeypatch):
+ monkeypatch.setattr(api, 'current_actor', MockedActorCheckVdo())
monkeypatch.setattr(reporting, 'create_report', create_report_mocked())
checkvdo.check_vdo(
VdoConversionInfo(post_conversion=_post_conversion_vdos(),
@@ -169,6 +180,7 @@ def test_pre_conversion_multiple_vdo_incomplete(monkeypatch):
def test_pre_conversion_single_vdo_incomplete(monkeypatch):
+ monkeypatch.setattr(api, 'current_actor', MockedActorCheckVdo())
monkeypatch.setattr(reporting, 'create_report', create_report_mocked())
checkvdo.check_vdo(
VdoConversionInfo(post_conversion=_post_conversion_vdos(),
@@ -182,6 +194,7 @@ def test_pre_conversion_single_vdo_incomplete(monkeypatch):
# Undetermined tests.
def test_undetermined_single_check_failing(monkeypatch):
+ monkeypatch.setattr(api, 'current_actor', MockedActorCheckVdo())
monkeypatch.setattr(reporting, 'create_report', create_report_mocked())
checkvdo.check_vdo(
VdoConversionInfo(post_conversion=_post_conversion_vdos(),
@@ -196,6 +209,7 @@ def test_undetermined_single_check_failing(monkeypatch):
def test_undetermined_multiple_check_failing(monkeypatch):
+ monkeypatch.setattr(api, 'current_actor', MockedActorCheckVdo())
monkeypatch.setattr(reporting, 'create_report', create_report_mocked())
checkvdo.check_vdo(
VdoConversionInfo(post_conversion=_post_conversion_vdos(),
@@ -207,27 +221,29 @@ def test_undetermined_multiple_check_failing(monkeypatch):
'Unexpected result checking devices')
-def test_undetermined_multiple_no_check_no_vdos(monkeypatch):
- monkeypatch.setattr(api, 'current_actor', MockedActorNoVdoDevices())
+def test_undetermined_multiple_no_check(monkeypatch):
+ monkeypatch.setattr(api, 'current_actor', MockedActorCheckVdo())
monkeypatch.setattr(reporting, 'create_report', create_report_mocked())
checkvdo.check_vdo(
VdoConversionInfo(post_conversion=_post_conversion_vdos(),
pre_conversion=_pre_conversion_vdos(),
undetermined_conversion=_undetermined_conversion_vdos(3)))
assert reporting.create_report.called == 1
- assert not is_inhibitor(reporting.create_report.report_fields)
+ assert is_inhibitor(reporting.create_report.report_fields)
assert reporting.create_report.report_fields['summary'].startswith(
- 'User has asserted there are no VDO devices')
+ 'The check of block devices could not be performed as the \'vdo\' '
+ 'package is not installed.')
-def test_undetermined_multiple_no_check_some_vdos(monkeypatch):
- monkeypatch.setattr(api, 'current_actor', MockedActorSomeVdoDevices())
+# all_vdo_converted test.
+def test_all_vdo_converted_true(monkeypatch):
+ monkeypatch.setattr(api, 'current_actor', MockedActorAllVdoConvertedTrue())
monkeypatch.setattr(reporting, 'create_report', create_report_mocked())
checkvdo.check_vdo(
VdoConversionInfo(post_conversion=_post_conversion_vdos(),
pre_conversion=_pre_conversion_vdos(),
undetermined_conversion=_undetermined_conversion_vdos(3)))
assert reporting.create_report.called == 1
- assert is_inhibitor(reporting.create_report.report_fields)
+ assert not is_inhibitor(reporting.create_report.report_fields)
assert reporting.create_report.report_fields['summary'].startswith(
- 'User has opted to inhibit upgrade')
+ 'User has asserted all VDO devices on the system have been successfully converted')
--
2.37.2
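The ask-once caching that this patch introduces in `CheckVdo.get_vdo_answer` (query the dialog the first time, then reuse the stored answer so repeated calls cannot produce redundant dialogs or reports) can be sketched outside leapp as a minimal stand-in. The `AskOnceActor` class and `_ask_dialog` helper below are illustrative placeholders, not the leapp API; in the real actor the answer comes from `self.get_answers(self.dialogs[0]).get('confirm')`.

```python
# Minimal sketch of the ask-once answer caching used by CheckVdo.get_vdo_answer().
# The dialog machinery is mocked; names here are hypothetical, not leapp's API.

class AskOnceActor:
    def __init__(self, dialog_answer):
        self._dialog_answer = dialog_answer  # stands in for the user's dialog reply
        self._asked_answer = False           # have we asked the dialog yet?
        self._vdo_answer = None              # cached answer after the first ask
        self.dialog_calls = 0                # instrumentation for the sketch

    def _ask_dialog(self):
        # Placeholder for the real (potentially report-producing) dialog query.
        self.dialog_calls += 1
        return self._dialog_answer

    def get_vdo_answer(self):
        # Ask only once; later calls reuse the cached answer so that multiple
        # device checks cannot trigger redundant dialogs or reports.
        if not self._asked_answer:
            self._asked_answer = True
            self._vdo_answer = self._ask_dialog()
        return self._vdo_answer


actor = AskOnceActor(dialog_answer=True)
results = [actor.get_vdo_answer() for _ in range(3)]
print(results, actor.dialog_calls)  # → [True, True, True] 1
```

Three calls return the same answer while the dialog is consulted exactly once, which is the behaviour the patch relies on when both the undetermined-device and failed-check paths call `get_vdo_answer()` during a single run.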


@@ -0,0 +1,251 @@
From 921c06892f7550a3a8e2b3fe941c6272bdacf88d Mon Sep 17 00:00:00 2001
From: mhecko <mhecko@redhat.com>
Date: Thu, 15 Feb 2024 09:56:27 +0100
Subject: [PATCH] rhui: do not bootstrap target client on aws
Bootstrapping the target RHUI client now requires installing the entire
RHEL8 RPM stack. Therefore, do not try to install the target client;
instead, rely only on the files from our leapp-rhui-aws package.
---
.../cloud/checkrhui/libraries/checkrhui.py | 6 +-
.../libraries/userspacegen.py | 104 ++++++++++++++----
.../system_upgrade/common/models/rhuiinfo.py | 7 ++
3 files changed, 92 insertions(+), 25 deletions(-)
diff --git a/repos/system_upgrade/common/actors/cloud/checkrhui/libraries/checkrhui.py b/repos/system_upgrade/common/actors/cloud/checkrhui/libraries/checkrhui.py
index 84ab40e3..e1c158c7 100644
--- a/repos/system_upgrade/common/actors/cloud/checkrhui/libraries/checkrhui.py
+++ b/repos/system_upgrade/common/actors/cloud/checkrhui/libraries/checkrhui.py
@@ -142,7 +142,11 @@ def customize_rhui_setup_for_aws(rhui_family, setup_info):
target_version = version.get_target_major_version()
if target_version == '8':
- return # The rhel8 plugin is packed into leapp-rhui-aws as we need python2 compatible client
+ # RHEL8 rh-amazon-rhui-client depends on amazon-libdnf-plugin that depends
+ # essentially on the entire RHEL8 RPM stack, so we cannot just swap the clients
+ # The leapp-rhui-aws will provide all necessary files to access entire RHEL8 content
+ setup_info.bootstrap_target_client = False
+ return
amazon_plugin_copy_task = CopyFile(src='/usr/lib/python3.9/site-packages/dnf-plugins/amazon-id.py',
dst='/usr/lib/python3.6/site-packages/dnf-plugins/')
diff --git a/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py b/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py
index d917bfd5..d60bc75f 100644
--- a/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py
+++ b/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py
@@ -853,9 +853,9 @@ def _get_rhui_available_repoids(context, cloud_repo):
return set(repoids)
-def get_copy_location_from_copy_in_task(context, copy_task):
+def get_copy_location_from_copy_in_task(context_basepath, copy_task):
basename = os.path.basename(copy_task.src)
- dest_in_container = context.full_path(copy_task.dst)
+ dest_in_container = os.path.join(context_basepath, copy_task.dst)
if os.path.isdir(dest_in_container):
return os.path.join(copy_task.dst, basename)
return copy_task.dst
@@ -871,7 +871,10 @@ def _get_rh_available_repoids(context, indata):
# If we are upgrading a RHUI system, check what repositories are provided by the (already installed) target clients
if indata and indata.rhui_info:
- files_provided_by_clients = _query_rpm_for_pkg_files(context, indata.rhui_info.target_client_pkg_names)
+ setup_info = indata.rhui_info.target_client_setup_info
+ target_content_access_files = set()
+ if setup_info.bootstrap_target_client:
+ target_content_access_files = _query_rpm_for_pkg_files(context, indata.rhui_info.target_client_pkg_names)
def is_repofile(path):
return os.path.dirname(path) == '/etc/yum.repos.d' and os.path.basename(path).endswith('.repo')
@@ -884,24 +887,33 @@ def _get_rh_available_repoids(context, indata):
yum_repos_d = context.full_path('/etc/yum.repos.d')
all_repofiles = {os.path.join(yum_repos_d, path) for path in os.listdir(yum_repos_d) if path.endswith('.repo')}
- client_repofiles = {context.full_path(path) for path in files_provided_by_clients if is_repofile(path)}
+ api.current_logger().debug('(RHUI Setup) All available repofiles: {0}'.format(' '.join(all_repofiles)))
+
+ target_access_repofiles = {
+ context.full_path(path) for path in target_content_access_files if is_repofile(path)
+ }
# Exclude repofiles used to setup the target rhui access as on some platforms the repos provided by
# the client are not sufficient to install the client into target userspace (GCP)
rhui_setup_repofile_tasks = [task for task in setup_tasks if task.src.endswith('repo')]
rhui_setup_repofiles = (
- get_copy_location_from_copy_in_task(context, copy_task) for copy_task in rhui_setup_repofile_tasks
+ get_copy_location_from_copy_in_task(context.base_dir, copy) for copy in rhui_setup_repofile_tasks
)
rhui_setup_repofiles = {context.full_path(repofile) for repofile in rhui_setup_repofiles}
- foreign_repofiles = all_repofiles - client_repofiles - rhui_setup_repofiles
+ foreign_repofiles = all_repofiles - target_access_repofiles - rhui_setup_repofiles
+
+ api.current_logger().debug(
+ 'The following repofiles are considered as unknown to'
+ ' the target RHUI content setup and will be ignored: {0}'.format(' '.join(foreign_repofiles))
+ )
# Rename non-client repofiles so they will not be recognized when running dnf repolist
for foreign_repofile in foreign_repofiles:
os.rename(foreign_repofile, '{0}.back'.format(foreign_repofile))
try:
- dnf_cmd = ['dnf', 'repolist', '--releasever', target_ver, '-v']
+ dnf_cmd = ['dnf', 'repolist', '--releasever', target_ver, '-v', '--enablerepo', '*']
repolist_result = context.call(dnf_cmd)['stdout']
repoid_lines = [line for line in repolist_result.split('\n') if line.startswith('Repo-id')]
rhui_repoids = {extract_repoid_from_line(line) for line in repoid_lines}
@@ -919,6 +931,9 @@ def _get_rh_available_repoids(context, indata):
for foreign_repofile in foreign_repofiles:
os.rename('{0}.back'.format(foreign_repofile), foreign_repofile)
+ api.current_logger().debug(
+ 'The following repofiles are considered as provided by RedHat: {0}'.format(' '.join(rh_repoids))
+ )
return rh_repoids
@@ -1086,7 +1101,7 @@ def _get_target_userspace():
return constants.TARGET_USERSPACE.format(get_target_major_version())
-def _create_target_userspace(context, packages, files, target_repoids):
+def _create_target_userspace(context, indata, packages, files, target_repoids):
"""Create the target userspace."""
target_path = _get_target_userspace()
prepare_target_userspace(context, target_path, target_repoids, list(packages))
@@ -1096,12 +1111,57 @@ def _create_target_userspace(context, packages, files, target_repoids):
_copy_files(target_context, files)
dnfplugin.install(_get_target_userspace())
+ # If we used only repofiles from leapp-rhui-<provider> then remove these as they provide
+ # duplicit definitions as the target clients already installed in the target container
+ if indata.rhui_info:
+ api.current_logger().debug(
+ 'Target container should have access to content. '
+ 'Removing repofiles from leapp-rhui-<provider> from the target..'
+ )
+ setup_info = indata.rhui_info.target_client_setup_info
+ if not setup_info.bootstrap_target_client:
+ target_userspace_path = _get_target_userspace()
+ for copy in setup_info.preinstall_tasks.files_to_copy_into_overlay:
+ dst_in_container = get_copy_location_from_copy_in_task(target_userspace_path, copy)
+ dst_in_container = dst_in_container.strip('/')
+ dst_in_host = os.path.join(target_userspace_path, dst_in_container)
+ if os.path.isfile(dst_in_host) and dst_in_host.endswith('.repo'):
+ api.current_logger().debug('Removing repofile: {0}'.format(dst_in_host))
+ os.remove(dst_in_host)
+
# and do not forget to set the rhsm into the container mode again
with mounting.NspawnActions(_get_target_userspace()) as target_context:
rhsm.set_container_mode(target_context)
-def install_target_rhui_client_if_needed(context, indata):
+def _apply_rhui_access_preinstall_tasks(context, rhui_setup_info):
+ if rhui_setup_info.preinstall_tasks:
+ api.current_logger().debug('Applying RHUI preinstall tasks.')
+ preinstall_tasks = rhui_setup_info.preinstall_tasks
+
+ for file_to_remove in preinstall_tasks.files_to_remove:
+ api.current_logger().debug('Removing {0} from the scratch container.'.format(file_to_remove))
+ context.remove(file_to_remove)
+
+ for copy_info in preinstall_tasks.files_to_copy_into_overlay:
+ api.current_logger().debug(
+ 'Copying {0} in {1} into the scratch container.'.format(copy_info.src, copy_info.dst)
+ )
+ context.makedirs(os.path.dirname(copy_info.dst), exists_ok=True)
+ context.copy_to(copy_info.src, copy_info.dst)
+
+
+def _apply_rhui_access_postinstall_tasks(context, rhui_setup_info):
+ if rhui_setup_info.postinstall_tasks:
+ api.current_logger().debug('Applying RHUI postinstall tasks.')
+ for copy_info in rhui_setup_info.postinstall_tasks.files_to_copy:
+ context.makedirs(os.path.dirname(copy_info.dst), exists_ok=True)
+ debug_msg = 'Copying {0} to {1} (inside the scratch container).'
+ api.current_logger().debug(debug_msg.format(copy_info.src, copy_info.dst))
+ context.call(['cp', copy_info.src, copy_info.dst])
+
+
+def setup_target_rhui_access_if_needed(context, indata):
if not indata.rhui_info:
return
@@ -1110,15 +1170,14 @@ def install_target_rhui_client_if_needed(context, indata):
_create_target_userspace_directories(userspace_dir)
setup_info = indata.rhui_info.target_client_setup_info
- if setup_info.preinstall_tasks:
- preinstall_tasks = setup_info.preinstall_tasks
+ _apply_rhui_access_preinstall_tasks(context, setup_info)
- for file_to_remove in preinstall_tasks.files_to_remove:
- context.remove(file_to_remove)
-
- for copy_info in preinstall_tasks.files_to_copy_into_overlay:
- context.makedirs(os.path.dirname(copy_info.dst), exists_ok=True)
- context.copy_to(copy_info.src, copy_info.dst)
+ if not setup_info.bootstrap_target_client:
+ # Installation of the target RHUI client is not possible and we bundle all necessary
+ # files into the leapp-rhui-<provider> packages.
+ api.current_logger().debug('Bootstrapping target RHUI client is disabled, leapp will rely '
+ 'only on files bundled in leapp-rhui-<provider> package.')
+ return
cmd = ['dnf', '-y']
@@ -1149,16 +1208,13 @@ def install_target_rhui_client_if_needed(context, indata):
context.call(cmd, callback_raw=utils.logging_handler, stdin='\n'.join(dnf_transaction_steps))
- if setup_info.postinstall_tasks:
- for copy_info in setup_info.postinstall_tasks.files_to_copy:
- context.makedirs(os.path.dirname(copy_info.dst), exists_ok=True)
- context.call(['cp', copy_info.src, copy_info.dst])
+ _apply_rhui_access_postinstall_tasks(context, setup_info)
# Do a cleanup so there are not duplicit repoids
files_owned_by_clients = _query_rpm_for_pkg_files(context, indata.rhui_info.target_client_pkg_names)
for copy_task in setup_info.preinstall_tasks.files_to_copy_into_overlay:
- dest = get_copy_location_from_copy_in_task(context, copy_task)
+ dest = get_copy_location_from_copy_in_task(context.base_dir, copy_task)
can_be_cleaned_up = copy_task.src not in setup_info.files_supporting_client_operation
if dest not in files_owned_by_clients and can_be_cleaned_up:
context.remove(dest)
@@ -1184,10 +1240,10 @@ def perform():
target_iso = next(api.consume(TargetOSInstallationImage), None)
with mounting.mount_upgrade_iso_to_root_dir(overlay.target, target_iso):
- install_target_rhui_client_if_needed(context, indata)
+ setup_target_rhui_access_if_needed(context, indata)
target_repoids = _gather_target_repositories(context, indata, prod_cert_path)
- _create_target_userspace(context, indata.packages, indata.files, target_repoids)
+ _create_target_userspace(context, indata, indata.packages, indata.files, target_repoids)
# TODO: this is tmp solution as proper one needs significant refactoring
target_repo_facts = repofileutils.get_parsed_repofiles(context)
api.produce(TMPTargetRepositoriesFacts(repositories=target_repo_facts))
diff --git a/repos/system_upgrade/common/models/rhuiinfo.py b/repos/system_upgrade/common/models/rhuiinfo.py
index 3eaa4826..0a2e45af 100644
--- a/repos/system_upgrade/common/models/rhuiinfo.py
+++ b/repos/system_upgrade/common/models/rhuiinfo.py
@@ -36,6 +36,13 @@ class TargetRHUISetupInfo(Model):
files_supporting_client_operation = fields.List(fields.String(), default=[])
"""A subset of files copied in preinstall tasks that should not be cleaned up."""
+ bootstrap_target_client = fields.Boolean(default=True)
+ """
+ Swap the current RHUI client for the target one to facilitate access to the target content.
+
+ When False, only files from the leapp-rhui-<provider> package will be used to access target content.
+ """
+
class RHUIInfo(Model):
"""
--
2.43.0
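Taken together, the extracted `_apply_rhui_access_preinstall_tasks` helper in the patch above reduces to two loops: delete conflicting files from the scratch container, then copy the leapp-rhui-&lt;provider&gt; repofiles and keys into it. A minimal, self-contained sketch of that flow follows; `ScratchContext` here is a hypothetical stand-in for leapp's real mounting context (which lives in `leapp.libraries.common.mounting`), not the actual API:

```python
import os
import shutil


class ScratchContext:
    """Hypothetical stand-in for leapp's container context, rooted at base_dir."""

    def __init__(self, base_dir):
        self.base_dir = base_dir

    def _abs(self, path):
        # Task paths are absolute inside the container; re-root them on the host.
        return os.path.join(self.base_dir, path.lstrip('/'))

    def remove(self, path):
        os.remove(self._abs(path))

    def makedirs(self, path, exists_ok=True):
        os.makedirs(self._abs(path), exist_ok=exists_ok)

    def copy_to(self, src, dst):
        shutil.copy(src, self._abs(dst))


def apply_preinstall_tasks(context, files_to_remove, files_to_copy_into_overlay):
    # Mirrors the actor's shape: drop conflicting files first, then lay down
    # the provider repofiles/keys needed to reach the target content.
    for path in files_to_remove:
        context.remove(path)
    for src, dst in files_to_copy_into_overlay:
        context.makedirs(os.path.dirname(dst))
        context.copy_to(src, dst)
```

The file names and task tuples above are illustrative; the real tasks come from the `TargetRHUISetupInfo` model shown later in this patch.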
@ -1,23 +0,0 @@
From 496abd1775779054377c5e35ae96fa4d390bab42 Mon Sep 17 00:00:00 2001
From: Petr Stodulka <pstodulk@redhat.com>
Date: Tue, 19 Apr 2022 21:51:03 +0200
Subject: [PATCH] Enforce the removal of rubygem-irb (do not install it)
---
etc/leapp/transaction/to_remove | 3 +++
1 file changed, 3 insertions(+)
diff --git a/etc/leapp/transaction/to_remove b/etc/leapp/transaction/to_remove
index 0feb782..07c6864 100644
--- a/etc/leapp/transaction/to_remove
+++ b/etc/leapp/transaction/to_remove
@@ -1,3 +1,6 @@
### List of packages (each on new line) to be removed from the upgrade transaction
# Removing initial-setup package to avoid it asking for EULA acceptance during upgrade - OAMG-1531
initial-setup
+
+# temporary workaround for the file conflict symlink <-> dir (#2030627)
+rubygem-irb
--
2.35.1
@ -1,25 +0,0 @@
From 1c6388139695aefb02daa7b5cb13e628f03eab43 Mon Sep 17 00:00:00 2001
From: Michal Hecko <mhecko@redhat.com>
Date: Mon, 17 Oct 2022 12:59:22 +0200
Subject: [PATCH] rhui(azure-sap-apps): consider RHUI client as signed
---
.../common/actors/redhatsignedrpmscanner/actor.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/repos/system_upgrade/common/actors/redhatsignedrpmscanner/actor.py b/repos/system_upgrade/common/actors/redhatsignedrpmscanner/actor.py
index dd6db7c9..647805cd 100644
--- a/repos/system_upgrade/common/actors/redhatsignedrpmscanner/actor.py
+++ b/repos/system_upgrade/common/actors/redhatsignedrpmscanner/actor.py
@@ -56,7 +56,7 @@ class RedHatSignedRpmScanner(Actor):
upg_path = rhui.get_upg_path()
# AWS RHUI packages do not have to be whitelisted because they are signed by RedHat
- whitelisted_cloud_flavours = ('azure', 'azure-eus', 'azure-sap', 'google', 'google-sap')
+ whitelisted_cloud_flavours = ('azure', 'azure-eus', 'azure-sap', 'azure-sap-apps', 'google', 'google-sap')
whitelisted_cloud_pkgs = {
rhui.RHUI_CLOUD_MAP[upg_path].get(flavour, {}).get('src_pkg') for flavour in whitelisted_cloud_flavours
}
--
2.37.3
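The one-line whitelist change above is safe because the lookup is written defensively. A sketch with hypothetical map contents (the real `RHUI_CLOUD_MAP` lives in `leapp.libraries.common.rhui` and differs per upgrade path) shows why adding a flavour that some upgrade paths lack cannot raise:

```python
# Hypothetical excerpt of the cloud map; package names are illustrative only.
RHUI_CLOUD_MAP = {
    '7to8': {
        'azure': {'src_pkg': 'rhui-azure-rhel7'},
        'azure-sap-apps': {'src_pkg': 'rhui-azure-rhel7-base-sap-apps'},
        'google': {'src_pkg': 'google-rhui-client-rhel7'},
    },
}

whitelisted_cloud_flavours = (
    'azure', 'azure-eus', 'azure-sap', 'azure-sap-apps', 'google', 'google-sap'
)

# Chained .get() with defaults: a flavour absent from this upgrade path
# contributes None to the set instead of raising KeyError.
whitelisted_cloud_pkgs = {
    RHUI_CLOUD_MAP['7to8'].get(flavour, {}).get('src_pkg')
    for flavour in whitelisted_cloud_flavours
}
whitelisted_cloud_pkgs.discard(None)
```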
@ -1,41 +0,0 @@
From a2f35c0aa4e00936e58c17a94d4f1507a3287c72 Mon Sep 17 00:00:00 2001
From: Michal Hecko <mhecko@redhat.com>
Date: Mon, 17 Oct 2022 12:59:22 +0200
Subject: [PATCH] rhui(azure-sap-apps): handle EUS SAP Apps content on RHEL8+
---
.../common/actors/cloud/checkrhui/actor.py | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/repos/system_upgrade/common/actors/cloud/checkrhui/actor.py b/repos/system_upgrade/common/actors/cloud/checkrhui/actor.py
index 822c7535..a56bb1e1 100644
--- a/repos/system_upgrade/common/actors/cloud/checkrhui/actor.py
+++ b/repos/system_upgrade/common/actors/cloud/checkrhui/actor.py
@@ -3,6 +3,7 @@ import os
from leapp import reporting
from leapp.actors import Actor
from leapp.libraries.common import rhsm, rhui
+from leapp.libraries.common.config.version import get_source_major_version
from leapp.libraries.common.rpms import has_package
from leapp.libraries.stdlib import api
from leapp.models import (
@@ -105,9 +106,15 @@ class CheckRHUI(Actor):
if info['src_pkg'] != info['target_pkg']:
self.produce(RpmTransactionTasks(to_install=[info['target_pkg']]))
self.produce(RpmTransactionTasks(to_remove=[info['src_pkg']]))
- if provider in ('azure-sap', 'azure-sap-apps'):
+ # Handle azure SAP systems that use two RHUI clients - one for RHEL content, one for SAP content
+ if provider == 'azure-sap':
azure_nonsap_pkg = rhui.RHUI_CLOUD_MAP[upg_path]['azure']['src_pkg']
self.produce(RpmTransactionTasks(to_remove=[azure_nonsap_pkg]))
+ elif provider == 'azure-sap-apps':
+ # SAP Apps systems have EUS content channel from RHEL8+
+ src_rhel_content_type = 'azure' if get_source_major_version() == '7' else 'azure-eus'
+ azure_nonsap_pkg = rhui.RHUI_CLOUD_MAP[upg_path][src_rhel_content_type]['src_pkg']
+ self.produce(RpmTransactionTasks(to_remove=[azure_nonsap_pkg]))
self.produce(RHUIInfo(provider=provider))
self.produce(RequiredTargetUserspacePackages(packages=[info['target_pkg']]))
--
2.37.3
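The branch added above can be read as a tiny pure function: which non-SAP RHUI client must be removed depends only on the source major version, because from RHEL 8 onward the SAP Apps images consume the EUS RHEL content channel. A hedged sketch (the function name is illustrative, not leapp's):

```python
def nonsap_client_flavour(source_major_version):
    # RHEL 7 SAP Apps systems ship the plain 'azure' RHUI client; on RHEL 8+
    # the RHEL content channel is EUS, so 'azure-eus' is the client whose
    # src_pkg must be scheduled for removal alongside the SAP client swap.
    return 'azure' if source_major_version == '7' else 'azure-eus'
```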
@ -1,32 +0,0 @@
From a06e248faa3b336c09ee6137eee54a1a0256162b Mon Sep 17 00:00:00 2001
From: Vinzenz Feenstra <vfeenstr@redhat.com>
Date: Wed, 19 Oct 2022 21:05:00 +0200
Subject: [PATCH] checksaphana: Move to common
We need to start handling also el8 to el9 upgrades now.
Signed-off-by: Vinzenz Feenstra <vfeenstr@redhat.com>
---
.../{el7toel8 => common}/actors/checksaphana/actor.py | 0
.../actors/checksaphana/libraries/checksaphana.py | 0
.../actors/checksaphana/tests/test_checksaphana.py | 0
3 files changed, 0 insertions(+), 0 deletions(-)
rename repos/system_upgrade/{el7toel8 => common}/actors/checksaphana/actor.py (100%)
rename repos/system_upgrade/{el7toel8 => common}/actors/checksaphana/libraries/checksaphana.py (100%)
rename repos/system_upgrade/{el7toel8 => common}/actors/checksaphana/tests/test_checksaphana.py (100%)
diff --git a/repos/system_upgrade/el7toel8/actors/checksaphana/actor.py b/repos/system_upgrade/common/actors/checksaphana/actor.py
similarity index 100%
rename from repos/system_upgrade/el7toel8/actors/checksaphana/actor.py
rename to repos/system_upgrade/common/actors/checksaphana/actor.py
diff --git a/repos/system_upgrade/el7toel8/actors/checksaphana/libraries/checksaphana.py b/repos/system_upgrade/common/actors/checksaphana/libraries/checksaphana.py
similarity index 100%
rename from repos/system_upgrade/el7toel8/actors/checksaphana/libraries/checksaphana.py
rename to repos/system_upgrade/common/actors/checksaphana/libraries/checksaphana.py
diff --git a/repos/system_upgrade/el7toel8/actors/checksaphana/tests/test_checksaphana.py b/repos/system_upgrade/common/actors/checksaphana/tests/test_checksaphana.py
similarity index 100%
rename from repos/system_upgrade/el7toel8/actors/checksaphana/tests/test_checksaphana.py
rename to repos/system_upgrade/common/actors/checksaphana/tests/test_checksaphana.py
--
2.37.3
@ -1,276 +0,0 @@
From b716765e638156c9a5cb21a474d1203b695acf8d Mon Sep 17 00:00:00 2001
From: Vinzenz Feenstra <vfeenstr@redhat.com>
Date: Wed, 19 Oct 2022 21:42:14 +0200
Subject: [PATCH] checksaphana: Adjust for el7toel8 and el8toel9 requirements
Previously only upgrade from el7toel8 were supported for SAP Hana.
This patch will introduce the adjustments necessary to allow the
upgrade of RHEL with SAP Hana installed even on el8toel9.
Signed-off-by: Vinzenz Feenstra <vfeenstr@redhat.com>
---
.../checksaphana/libraries/checksaphana.py | 64 ++++++++++++----
.../checksaphana/tests/test_checksaphana.py | 73 +++++++++++++++++--
2 files changed, 117 insertions(+), 20 deletions(-)
diff --git a/repos/system_upgrade/common/actors/checksaphana/libraries/checksaphana.py b/repos/system_upgrade/common/actors/checksaphana/libraries/checksaphana.py
index e540ccd1..564d86b8 100644
--- a/repos/system_upgrade/common/actors/checksaphana/libraries/checksaphana.py
+++ b/repos/system_upgrade/common/actors/checksaphana/libraries/checksaphana.py
@@ -1,5 +1,5 @@
from leapp import reporting
-from leapp.libraries.common.config import architecture
+from leapp.libraries.common.config import architecture, version
from leapp.libraries.stdlib import api
from leapp.models import SapHanaInfo
@@ -7,8 +7,17 @@ from leapp.models import SapHanaInfo
# Requirement is SAP HANA 2.00 rev 54 which is the minimal supported revision for both RHEL 7.9 and RHEL 8.2
SAP_HANA_MINIMAL_MAJOR_VERSION = 2
-SAP_HANA_RHEL8_REQUIRED_PATCH_LEVELS = ((5, 54, 0),)
-SAP_HANA_MINIMAL_VERSION_STRING = 'HANA 2.0 SPS05 rev 54 or later'
+# RHEL 8.2 target requirements
+SAP_HANA_RHEL82_REQUIRED_PATCH_LEVELS = ((5, 54, 0),)
+SAP_HANA_RHEL82_MINIMAL_VERSION_STRING = 'HANA 2.0 SPS05 rev 54 or later'
+
+# RHEL 8.6 target requirements
+SAP_HANA_RHEL86_REQUIRED_PATCH_LEVELS = ((5, 59, 2),)
+SAP_HANA_RHEL86_MINIMAL_VERSION_STRING = 'HANA 2.0 SPS05 rev 59.02 or later'
+
+# RHEL 9 target requirements
+SAP_HANA_RHEL9_REQUIRED_PATCH_LEVELS = ((5, 59, 4), (6, 63, 0))
+SAP_HANA_RHEL9_MINIMAL_VERSION_STRING = 'HANA 2.0 SPS05 rev 59.04 or later, or SPS06 rev 63 or later'
def _manifest_get(manifest, key, default_value=None):
@@ -56,6 +65,16 @@ def _create_detected_instances_list(details):
return ''
+def _min_ver_string():
+ if version.get_target_major_version() == '8':
+ ver_str = SAP_HANA_RHEL86_MINIMAL_VERSION_STRING
+ if version.matches_target_version('8.2'):
+ ver_str = SAP_HANA_RHEL82_MINIMAL_VERSION_STRING
+ else:
+ ver_str = SAP_HANA_RHEL9_MINIMAL_VERSION_STRING
+ return ver_str
+
+
def version1_check(info):
""" Creates a report for SAP HANA instances running on version 1 """
found = {}
@@ -64,6 +83,7 @@ def version1_check(info):
_add_hana_details(found, instance)
if found:
+ min_ver_string = _min_ver_string()
detected = _create_detected_instances_list(found)
reporting.create_report([
reporting.Title('Found SAP HANA 1 which is not supported with the target version of RHEL'),
@@ -75,7 +95,7 @@ def version1_check(info):
reporting.Severity(reporting.Severity.HIGH),
reporting.RemediationHint((
'In order to upgrade RHEL, you will have to upgrade your SAP HANA 1.0 software to '
- '{supported}.'.format(supported=SAP_HANA_MINIMAL_VERSION_STRING))),
+ '{supported}.'.format(supported=min_ver_string))),
reporting.ExternalLink(url='https://launchpad.support.sap.com/#/notes/2235581',
title='SAP HANA: Supported Operating Systems'),
reporting.Groups([reporting.Groups.SANITY]),
@@ -100,11 +120,11 @@ def _major_version_check(instance):
return False
-def _sp_rev_patchlevel_check(instance):
+def _sp_rev_patchlevel_check(instance, patchlevels):
""" Checks whether this SP, REV & PatchLevel are eligible """
number = _manifest_get(instance.manifest, 'rev-number', '000')
if len(number) > 2 and number.isdigit():
- required_sp_levels = [r[0] for r in SAP_HANA_RHEL8_REQUIRED_PATCH_LEVELS]
+ required_sp_levels = [r[0] for r in patchlevels]
lowest_sp = min(required_sp_levels)
highest_sp = max(required_sp_levels)
sp = int(number[0:2].lstrip('0') or '0')
@@ -114,7 +134,7 @@ def _sp_rev_patchlevel_check(instance):
if sp > highest_sp:
# Less than minimal required SP
return True
- for requirements in SAP_HANA_RHEL8_REQUIRED_PATCH_LEVELS:
+ for requirements in patchlevels:
req_sp, req_rev, req_pl = requirements
if sp == req_sp:
rev = int(number.lstrip('0') or '0')
@@ -134,7 +154,13 @@ def _sp_rev_patchlevel_check(instance):
def _fullfills_hana_min_version(instance):
""" Performs a check whether the version of SAP HANA fullfills the minimal requirements for the target RHEL """
- return _major_version_check(instance) and _sp_rev_patchlevel_check(instance)
+ if version.get_target_major_version() == '8':
+ patchlevels = SAP_HANA_RHEL86_REQUIRED_PATCH_LEVELS
+ if version.matches_target_version('8.2'):
+ patchlevels = SAP_HANA_RHEL82_REQUIRED_PATCH_LEVELS
+ else:
+ patchlevels = SAP_HANA_RHEL9_REQUIRED_PATCH_LEVELS
+ return _major_version_check(instance) and _sp_rev_patchlevel_check(instance, patchlevels)
def version2_check(info):
@@ -147,17 +173,18 @@ def version2_check(info):
_add_hana_details(found, instance)
if found:
+ min_ver_string = _min_ver_string()
detected = _create_detected_instances_list(found)
reporting.create_report([
- reporting.Title('SAP HANA needs to be updated before upgrade'),
+ reporting.Title('SAP HANA needs to be updated before the RHEL upgrade'),
reporting.Summary(
('A newer version of SAP HANA is required in order continue with the upgrade.'
' {min_hana_version} is required for the target version of RHEL.\n\n'
- 'The following SAP HANA instances have been detected to be running with a lower version'
+ 'The following SAP HANA instances have been detected to be installed with a lower version'
' than required on the target system:\n'
- '{detected}').format(detected=detected, min_hana_version=SAP_HANA_MINIMAL_VERSION_STRING)
+ '{detected}').format(detected=detected, min_hana_version=min_ver_string)
),
- reporting.RemediationHint('Update SAP HANA at least to {}'.format(SAP_HANA_MINIMAL_VERSION_STRING)),
+ reporting.RemediationHint('Update SAP HANA at least to {}'.format(min_ver_string)),
reporting.ExternalLink(url='https://launchpad.support.sap.com/#/notes/2235581',
title='SAP HANA: Supported Operating Systems'),
reporting.Severity(reporting.Severity.HIGH),
@@ -170,6 +197,15 @@ def version2_check(info):
def platform_check():
""" Creates an inhibitor report in case the system is not running on x86_64 """
if not architecture.matches_architecture(architecture.ARCH_X86_64):
+ if version.get_target_major_version() == '8':
+ elink = reporting.ExternalLink(
+ url='https://access.redhat.com/solutions/5533441',
+ title='How do I upgrade from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 with SAP HANA')
+ else:
+ elink = reporting.ExternalLink(
+ url='https://access.redhat.com/solutions/6980855',
+ title='How to in-place upgrade SAP environments from RHEL 8 to RHEL 9')
+
reporting.create_report([
reporting.Title('SAP HANA upgrades are only supported on X86_64 systems'),
reporting.Summary(
@@ -180,9 +216,7 @@ def platform_check():
reporting.Groups([reporting.Groups.SANITY]),
reporting.Groups([reporting.Groups.INHIBITOR]),
reporting.Audience('sysadmin'),
- reporting.ExternalLink(
- url='https://access.redhat.com/solutions/5533441',
- title='How do I upgrade from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 with SAP HANA')
+ elink,
])
return False
diff --git a/repos/system_upgrade/common/actors/checksaphana/tests/test_checksaphana.py b/repos/system_upgrade/common/actors/checksaphana/tests/test_checksaphana.py
index 3f1d4230..6f61d0bf 100644
--- a/repos/system_upgrade/common/actors/checksaphana/tests/test_checksaphana.py
+++ b/repos/system_upgrade/common/actors/checksaphana/tests/test_checksaphana.py
@@ -2,7 +2,7 @@ import pytest
from leapp.libraries.actor import checksaphana
from leapp.libraries.common import testutils
-from leapp.libraries.stdlib import run
+from leapp.libraries.common.config import version
from leapp.models import SapHanaManifestEntry
SAPHANA1_MANIFEST = '''comptype: HDB
@@ -77,7 +77,7 @@ def _report_has_pattern(report, pattern):
EXPECTED_TITLE_PATTERNS = {
'running': lambda report: _report_has_pattern(report, 'running SAP HANA'),
'v1': lambda report: _report_has_pattern(report, 'Found SAP HANA 1'),
- 'low': lambda report: _report_has_pattern(report, 'SAP HANA needs to be updated before upgrade'),
+ 'low': lambda report: _report_has_pattern(report, 'SAP HANA needs to be updated before the RHEL upgrade'),
}
@@ -180,8 +180,69 @@ class MockSAPHanaVersionInstance(object):
(2, 49, 0, True),
)
)
-def test_checksaphana__fullfills_hana_min_version(monkeypatch, major, rev, patchlevel, result):
- monkeypatch.setattr(checksaphana, 'SAP_HANA_RHEL8_REQUIRED_PATCH_LEVELS', ((4, 48, 2), (5, 52, 0)))
+def test_checksaphana__fullfills_rhel82_hana_min_version(monkeypatch, major, rev, patchlevel, result):
+ monkeypatch.setattr(version, 'get_target_major_version', lambda: '8')
+ monkeypatch.setattr(version, 'get_target_version', lambda: '8.2')
+ monkeypatch.setattr(checksaphana, 'SAP_HANA_RHEL82_REQUIRED_PATCH_LEVELS', ((4, 48, 2), (5, 52, 0)))
+ assert checksaphana._fullfills_hana_min_version(
+ MockSAPHanaVersionInstance(
+ major=major,
+ rev=rev,
+ patchlevel=patchlevel,
+ )
+ ) == result
+
+
+@pytest.mark.parametrize(
+ 'major,rev,patchlevel,result', (
+ (2, 52, 0, True),
+ (2, 52, 1, True),
+ (2, 52, 2, True),
+ (2, 53, 0, True),
+ (2, 60, 0, True),
+ (2, 48, 2, True),
+ (2, 48, 1, False),
+ (2, 48, 0, False),
+ (2, 38, 2, False),
+ (2, 49, 0, True),
+ )
+)
+def test_checksaphana__fullfills_rhel86_hana_min_version(monkeypatch, major, rev, patchlevel, result):
+ monkeypatch.setattr(version, 'get_target_major_version', lambda: '8')
+ monkeypatch.setattr(version, 'get_target_version', lambda: '8.6')
+ monkeypatch.setattr(checksaphana, 'SAP_HANA_RHEL86_REQUIRED_PATCH_LEVELS', ((4, 48, 2), (5, 52, 0)))
+ assert checksaphana._fullfills_hana_min_version(
+ MockSAPHanaVersionInstance(
+ major=major,
+ rev=rev,
+ patchlevel=patchlevel,
+ )
+ ) == result
+
+
+@pytest.mark.parametrize(
+ 'major,rev,patchlevel,result', (
+ (2, 59, 4, True),
+ (2, 59, 5, True),
+ (2, 59, 6, True),
+ (2, 60, 0, False),
+ (2, 61, 0, False),
+ (2, 62, 0, False),
+ (2, 63, 2, True),
+ (2, 48, 1, False),
+ (2, 48, 0, False),
+ (2, 59, 0, False),
+ (2, 59, 1, False),
+ (2, 59, 2, False),
+ (2, 59, 3, False),
+ (2, 38, 2, False),
+ (2, 64, 0, True),
+ )
+)
+def test_checksaphana__fullfills_hana_rhel9_min_version(monkeypatch, major, rev, patchlevel, result):
+ monkeypatch.setattr(version, 'get_target_major_version', lambda: '9')
+ monkeypatch.setattr(version, 'get_target_version', lambda: '9.0')
+ monkeypatch.setattr(checksaphana, 'SAP_HANA_RHEL9_REQUIRED_PATCH_LEVELS', ((5, 59, 4), (6, 63, 0)))
assert checksaphana._fullfills_hana_min_version(
MockSAPHanaVersionInstance(
major=major,
@@ -196,7 +257,9 @@ def test_checksaphana_perform_check(monkeypatch):
v2names = ('JKL', 'MNO', 'PQR', 'STU')
v2lownames = ('VWX', 'YZA')
reports = []
- monkeypatch.setattr(checksaphana, 'SAP_HANA_RHEL8_REQUIRED_PATCH_LEVELS', ((4, 48, 2), (5, 52, 0)))
+ monkeypatch.setattr(checksaphana, 'SAP_HANA_RHEL86_REQUIRED_PATCH_LEVELS', ((4, 48, 2), (5, 52, 0)))
+ monkeypatch.setattr(version, 'get_target_major_version', lambda: '8')
+ monkeypatch.setattr(version, 'get_target_version', lambda: '8.6')
monkeypatch.setattr(checksaphana.reporting, 'create_report', _report_collector(reports))
monkeypatch.setattr(checksaphana.api, 'consume', _consume_mock_sap_hana_info(
v1names=v1names, v2names=v2names, v2lownames=v2lownames, running=True))
--
2.37.3
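The new test tables above pin per-target tuples of `(sp, rev, patchlevel)` requirements. The underlying comparison parses the manifest's `rev-number` string, whose first two digits encode the Support Package (SP) and whose full value is the revision, then checks the detected instance against those tuples. An approximate, self-contained rendering of that logic — leapp's real `_sp_rev_patchlevel_check` differs in helper structure and manifest handling:

```python
def fulfills_min_version(rev_number, patchlevel, required_patchlevels):
    """rev_number is the manifest string, e.g. '059' or '063'; the first two
    digits encode the SP, the whole number the revision. required_patchlevels
    is a tuple of (sp, rev, patchlevel) requirements, one per supported SP."""
    if not (len(rev_number) > 2 and rev_number.isdigit()):
        return False
    sp = int(rev_number[:2].lstrip('0') or '0')
    rev = int(rev_number.lstrip('0') or '0')
    required_sps = [req[0] for req in required_patchlevels]
    if sp < min(required_sps):
        return False   # older SP than anything with a listed requirement
    if sp > max(required_sps):
        return True    # newer SP than any listed requirement
    for req_sp, req_rev, req_pl in required_patchlevels:
        if sp == req_sp:
            # Lexicographic compare: revision first, then patch level.
            return (rev, patchlevel) >= (req_rev, req_pl)
    return False
```

With the RHEL 9 tuples `((5, 59, 4), (6, 63, 0))` this reproduces the parametrized expectations in the test table above, e.g. rev 59.04 passes while rev 60.0 (SPS06 below rev 63) fails.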
@ -2,7 +2,7 @@
%global repositorydir %{leapp_datadir}/repositories
%global custom_repositorydir %{leapp_datadir}/custom-repositories
%define leapp_repo_deps 7
%define leapp_repo_deps 10
%if 0%{?rhel} == 7
%define leapp_python_sitelib %{python2_sitelib}
@ -41,26 +41,22 @@ py2_byte_compile "%1" "%2"}
# RHEL 8+ packages to be consistent with other leapp projects in future.
Name: leapp-repository
Version: 0.17.0
Release: 1%{?dist}.2
Version: 0.20.0
Release: 2%{?dist}
Summary: Repositories for leapp
License: ASL 2.0
URL: https://oamg.github.io/leapp/
Source0: https://github.com/oamg/%{name}/archive/v%{version}.tar.gz#/%{name}-%{version}.tar.gz
Source1: deps-pkgs-7.tar.gz
Source1: deps-pkgs-10.tar.gz
# NOTE: Our packages must be noarch. Do no drop this in any way.
BuildArch: noarch
### PATCHES HERE
# Patch0001: filename.patch
Patch0001: 0001-CheckVDO-Ask-user-only-faiulres-and-undetermined-dev.patch
Patch0004: 0004-Enforce-the-removal-of-rubygem-irb-do-not-install-it.patch
Patch0005: 0005-rhui-azure-sap-apps-consider-RHUI-client-as-signed.patch
Patch0006: 0006-rhui-azure-sap-apps-handle-EUS-SAP-Apps-content-on-R.patch
Patch0007: 0007-checksaphana-Move-to-common.patch
Patch0008: 0008-checksaphana-Adjust-for-el7toel8-and-el8toel9-requir.patch
Patch0001: 0001-rhui-do-not-bootstrap-target-client-on-aws.patch
%description
@ -100,18 +96,22 @@ Conflicts: leapp-upgrade-el7toel8
%endif
# IMPORTANT: everytime the requirements are changed, increment number by one
# IMPORTANT: every time the requirements are changed, increment number by one
# - same for Provides in deps subpackage
Requires: leapp-repository-dependencies = %{leapp_repo_deps}
# IMPORTANT: this is capability provided by the leapp framework rpm.
# Check that 'version' instead of the real framework rpm version.
Requires: leapp-framework >= 3.1
Requires: leapp-framework >= 5.0
# Since we provide sub-commands for the leapp utility, we expect the leapp
# tool to be installed as well.
Requires: leapp
# Used to determine RHEL version of a given target RHEL installation image -
# uncompressing redhat-release package from the ISO.
Requires: cpio
# The leapp-repository rpm is renamed to %%{lpr_name}
Obsoletes: leapp-repository < 0.14.0-5
Provides: leapp-repository = %{version}-%{release}
@ -134,7 +134,7 @@ Leapp repositories for the in-place upgrade to the next major version
of the Red Hat Enterprise Linux system.
# This metapackage should contain all RPM dependencies exluding deps on *leapp*
# This metapackage should contain all RPM dependencies excluding deps on *leapp*
# RPMs. This metapackage will be automatically replaced during the upgrade
# to satisfy dependencies with RPMs from target system.
%package -n %{lpr_name}-deps
@ -143,7 +143,7 @@ Summary: Meta-package with system dependencies of %{lpr_name} package
# The package has been renamed, so let's obsoletes the old one
Obsoletes: leapp-repository-deps < 0.14.0-5
# IMPORTANT: everytime the requirements are changed, increment number by one
# IMPORTANT: every time the requirements are changed, increment number by one
# - same for Requires in main package
Provides: leapp-repository-dependencies = %{leapp_repo_deps}
##################################################
@ -151,6 +151,16 @@ Provides: leapp-repository-dependencies = %{leapp_repo_deps}
##################################################
Requires: dnf >= 4
Requires: pciutils
# required to be able to format disk images with XFS file systems (default)
Requires: xfsprogs
# required to be able to format disk images with Ext4 file systems
# NOTE: this is not happening by default, but we can expact that many customers
# will want to / need to do this - especially on RHEL 7 now. Adding this deps
# as the best trade-off to resolve this problem.
Requires: e2fsprogs
%if 0%{?rhel} && 0%{?rhel} == 7
# Required to gather system facts about SELinux
Requires: libselinux-python
@ -178,6 +188,11 @@ Requires: kmod
# and missing dracut could be killing situation for us :)
Requires: dracut
# Required to scan NetworkManagerConnection (e.g. to recognize secrets)
# NM is requested to be used on RHEL 8+ systems
Requires: NetworkManager-libnm
Requires: python3-gobject-base
%endif
##################################################
# end requirement
@ -195,18 +210,13 @@ Requires: dracut
# APPLY PATCHES HERE
# %%patch0001 -p1
%patch0001 -p1
%patch0004 -p1
%patch0005 -p1
%patch0006 -p1
%patch0007 -p1
%patch0008 -p1
%build
%if 0%{?rhel} == 7
cp -a leapp*deps-el8*rpm repos/system_upgrade/el7toel8/files/bundled-rpms/
cp -a leapp*deps*el8.noarch.rpm repos/system_upgrade/el7toel8/files/bundled-rpms/
%else
cp -a leapp*deps-el9*rpm repos/system_upgrade/el8toel9/files/bundled-rpms/
cp -a leapp*deps*el9.noarch.rpm repos/system_upgrade/el8toel9/files/bundled-rpms/
%endif
@ -218,6 +228,7 @@ install -m 0755 -d %{buildroot}%{_sysconfdir}/leapp/repos.d/
install -m 0755 -d %{buildroot}%{_sysconfdir}/leapp/transaction/
install -m 0755 -d %{buildroot}%{_sysconfdir}/leapp/files/
install -m 0644 etc/leapp/transaction/* %{buildroot}%{_sysconfdir}/leapp/transaction
install -m 0644 etc/leapp/files/* %{buildroot}%{_sysconfdir}/leapp/files
# install CLI commands for the leapp utility on the expected path
install -m 0755 -d %{buildroot}%{leapp_python_sitelib}/leapp/cli/
@ -237,6 +248,7 @@ rm -rf %{buildroot}%{repositorydir}/common/actors/testactor
find %{buildroot}%{repositorydir}/common -name "test.py" -delete
rm -rf `find %{buildroot}%{repositorydir} -name "tests" -type d`
find %{buildroot}%{repositorydir} -name "Makefile" -delete
find %{buildroot} -name "*.py.orig" -delete
for DIRECTORY in $(find %{buildroot}%{repositorydir}/ -mindepth 1 -maxdepth 1 -type d);
do
@ -264,6 +276,7 @@ done;
%dir %{repositorydir}
%dir %{custom_repositorydir}
%dir %{leapp_python_sitelib}/leapp/cli/commands
%config %{_sysconfdir}/leapp/files/*
%{_sysconfdir}/leapp/repos.d/*
%{_sysconfdir}/leapp/transaction/*
%{repositorydir}/*
@ -274,12 +287,175 @@ done;
# no files here
%changelog
* Thu Oct 20 2022 Petr Stodulka <pstodulk@redhat.com> - 0.17.0-1.2
- Add checks for the in-place upgrades of RHEL for SAP
- RHUI: Fix the in-place upgrade on Azure for RHEL SAP Applications
- Resolves: rhbz#2125284
* Tue Feb 20 2024 Petr Stodulka <pstodulk@redhat.com> - 0.20.0-2
- Fallback to original RHUI solution on AWS to fix issues caused by changes in RHUI client
- Resolves: RHEL-16729
* Thu Sep 08 2022 Petr Stodulka <pstodulk@redhat.com> - 0.17.0-1.1
* Tue Feb 13 2024 Toshio Kuratomi <toshio@fedoraproject.org> - 0.20.0-1
- Rebase to new upstream v0.20.0.
- Fix semanage import issue
- Fix handling of libvirt's systemd services
- Add a dracut breakpoint for the pre-upgrade step.
- Drop obsoleted upgrade paths (obsoleted releases: 8.6, 8.9, 9.0, 9.3)
- Resolves: RHEL-16729
* Tue Jan 23 2024 Toshio Kuratomi <toshio@fedoraproject.org> - 0.19.0-10
- Print nice error msg when device and driver deprecation data is malformed
- Fix another cornercase when preserving symlinks to certificates in /etc/pki
- Update the leapp upgrade data files - fixing upgrades with idm-tomcatjss
- Resolves: RHEL-16729
* Fri Jan 19 2024 Petr Stodulka <pstodulk@redhat.com> - 0.19.0-9
- Do not try to download data files anymore when missing as the service
is obsoleted since the data is part of installed packages
- Update error messages and reports when installed upgrade data files
are malformed or missing to instruct user how to resolve it
- Update the leapp upgrade data files - bump data stream to "3.0"
- Resolves: RHEL-16729
* Fri Jan 12 2024 Petr Stodulka <pstodulk@redhat.com> - 0.19.0-7
- Add detection of possible usage of OpenSSL IBMCA engine on IBM Z machines
- Add detection of modified /etc/pki/tls/openssl.cnf file
- Update the leapp upgrade data files
- Fix handling of symlinks under /etc/pki with relative paths specified
- Report custom actors and modifications of the upgrade tooling
- Requires xfsprogs and e2fsprogs to ensure that Ext4 and XFS tools are installed
- Bump leapp-repository-dependencies to 10
- Resolves: RHEL-1774, RHEL-16729
* Thu Nov 16 2023 Petr Stodulka <pstodulk@redhat.com> - 0.19.0-5
- Enable new upgrade path for RHEL 8.10 -> RHEL 9.4 (including RHEL with SAP HANA)
- Introduce generic transition of systemd services states during the IPU
- Introduce possibility to upgrade with local repositories
- Improve possibilities of upgrade when a proxy is configured in DNF configutation file
- Fix handling of symlinks under /etc/pki when managing certificates
- Fix the upgrade with custom https repositories
- Default to the NO_RHSM mode when subscription-manager is not installed
- Detect customized configuration of dynamic linker
- Drop the invalid `tuv` target channel for the --channel option
- Fix the issue of going out of bounds in the isccfg parser
- Fix traceback when saving the rhsm facts results and the /etc/rhsm/facts directory doesn't exist yet
- Load all rpm repository substitutions that dnf knows about, not just "releasever" only
- Simplify handling of upgrades on systems using RHUI, reducing the maintenance burden for cloud providers
- Detect possibly unexpected RPM GPG keys installed during the RPM transaction
- Resolves: RHEL-16729
* Thu Nov 02 2023 Petr Stodulka <pstodulk@redhat.com> - 0.19.0-4
- Fix the upgrade for systems without subscription-manager package
- Resolves: RHEL-14901
* Tue Oct 31 2023 Petr Stodulka <pstodulk@redhat.com> - 0.19.0-3
- Fix the upgrade when the release is locked by new subscription-manager
- Resolves: RHEL-14901
* Wed Aug 23 2023 Petr Stodulka <pstodulk@redhat.com> - 0.19.0-1
- Rebase to v0.19.0
- Requires leapp-framework 5.0
- Handle correctly the installed certificates to allow upgrades with custom repositories using HTTPs with enabled SSL verification
- Fix failing upgrades with devtmpfs file systems specified in FSTAB
- Do not try to update GRUB core on IBM Z systems
- Minor improvements and fixes of various reports and error messages
- Redesign handling of information about kernel (booted and target) to reflect changes in RHEL 9.3
- Use new leapp CLI API which provides better report summary output
- Resolves: rhbz#2215997, rhbz#2222861, rhbz#2232618
* Tue Jul 18 2023 Petr Stodulka <pstodulk@redhat.com> - 0.18.0-5
- Fix the calculation of the required free space on each partition/volume for the upgrade transactions
- Create source overlay images with dynamic sizes to optimize disk space consumption
- Update GRUB2 when /boot resides on multiple devices aggregated in RAID
- Use new leapp CLI API which provides better report summary output
- Introduce possibility to add (custom) kernel drivers to initramfs
- Detect and report use of deprecated Xorg drivers
- Fix the generation of the report about hybrid images
- Inhibit the upgrade when unsupported x86-64 microarchitecture is detected
- Minor improvements and fixes of various reports
- Requires leapp-framework 4.0
- Update leapp data files
- Resolves: rhbz#2140011, rhbz#2144304, rhbz#2174095, rhbz#2215997
* Mon Jun 19 2023 Petr Stodulka <pstodulk@redhat.com> - 0.18.0-4
- Introduce new upgrade path RHEL 8.9 -> 9.3
- Update leapp data files to reflect new changes between systems
- Detect and report use of deprecated Xorg drivers
- Minor improvements of generated reports
- Fix false positive report about invalid symlinks
- Inhibit the upgrade when unsupported x86-64 microarchitecture is detected
- Resolves: rhbz#2215997
* Mon Jun 05 2023 Petr Stodulka <pstodulk@redhat.com> - 0.18.0-3
- Update the repomap.json file to address planned changes on RHUI Azure
- Resolves: rhbz#2203800
* Fri May 19 2023 Petr Stodulka <pstodulk@redhat.com> - 0.18.0-2
- Include leapp data files in the package
- Introduce in-place upgrades for systems with enabled FIPS mode
- Enable the upgrade path 8.8 -> 9.2 for RHEL with SAP HANA
- Fix the upgrade of ruby-irb package
- Resolves: rhbz#2030627, rhbz#2097003, rhbz#2203800, rhbz#2203803
* Tue Feb 21 2023 Petr Stodulka <pstodulk@redhat.com> - 0.18.0-1
- Rebase to v0.18.0
- Introduce new upgrade path RHEL 8.8 -> 9.2
- Requires cpio
- Requires python3-gobject-base, NetworkManager-libnm
- Bump leapp-repository-dependencies to 9
- Add breadcrumbs results to RHSM facts
- Add leapp RHUI packages to an allowlist to drop confusing reports
- Added checks for RHEL SAP IPU 8.6 -> 9.0
- Check RPM signatures during the upgrade
- Check only mounted XFS partitions
- Check the validity and compatibility of used leapp data
- Detect CIFS also when upgrading from RHEL8 to RHEL9 (PR1035)
- Detect RoCE on IBM Z machines and check the configuration is safe for the upgrade
- Detect a proxy configuration in YUM/DNF and adjust an error msg on issues caused by the configuration
- Detect and report systemd symlinks that are broken before the upgrade
- Detect the kernel-core RPM instead of kernel to prevent an error during post-upgrade phases
- Disable the amazon-id DNF plugin on AWS during the upgrade stage to omit confusing error messages
- Do not create new *pyc files when running leapp after the DNF upgrade transaction
- Drop obsoleted upgrade paths
- Enable upgrades of RHEL 8 for SAP HANA to RHEL 9 on ppc64le
- Enable upgrades on s390x when /boot is part of rootfs
- Extend the allow list of RHUI clients by azure-sap-apps to omit confusing report
- Filter out PES events unrelated to the used upgrade path and handle overlapping events
  (fixes upgrades with quagga installed)
- Fix scan of ceph volumes on systems without ceph-osd or when ceph-osd container is not found
- Fix systemd symlinks that become incorrect during the IPU
- Fix the check of memory (RAM) limits and use human readable values in the report
- Fix the kernel detection during initramfs creation for new kernel on RHEL 9.2+
- Fix the upgrade of IBM Z machines configured with ZFCP
- Fix the upgrade on Azure using RHUI for SAP Apps images
- Ignore external accounts in /etc/passwd
- Improve remediation instructions for packages in unknown repositories
- Improve the error message to guide users when discovered more space is needed
- Improve the handling of blocklisted certificates
- Inhibit the upgrade when entries in /etc/fstab cause overshadowing during the upgrade
- Introduced an option to use an ISO file as a target RHEL version content source
- Introduced possibility to specify what systemd services should be enabled/disabled on the upgraded system
- Introduced the --nogpgcheck option to skip checking of RPM signatures
- Map the target repositories also based on the installed content
- Prevent re-run of leapp in the upgrade initramfs in case of previous failure
- Prevent the upgrade with RHSM when Baseos and Appstream target repositories are not discovered
- Provide common information about systemd services
- RHUI(Azure) Handle correctly various SAP images
- Register subscribed systems automatically to Red Hat Insights unless --no-insights-register is used
- Remove obsoleted GPG keys provided by RH after the upgrade to prevent errors
- Rework the network configuration handling and parse the configuration data properly
- Set the system release lock after the upgrade also for premium channels
- Small improvements in various reports
- Resolves: rhbz#2088492, rhbz#2111691, rhbz#2127920, rhbz#2129716, rhbz#2139907, rhbz#2141393, rhbz#2143372, rhbz#2155661
* Wed Sep 07 2022 Petr Stodulka <pstodulk@redhat.com> - 0.17.0-3
- Add back the instruction to not install rubygem-irb during the in-place upgrade
  to prevent a conflict between files
- Resolves: rhbz#2090995
* Wed Sep 07 2022 Petr Stodulka <pstodulk@redhat.com> - 0.17.0-2
- Update VDO checks to enable user to decide the system state on check failures
and undetermined block devices
- The VDO dialog and related VDO reports have been properly updated
- Resolves: rhbz#2096159
* Wed Aug 24 2022 Petr Stodulka <pstodulk@redhat.com> - 0.17.0-1
- Rebase to v0.17.0
- Support upgrade path RHEL 8.7 -> 9.0 and RHEL SAP 8.6 -> 9.0
- Provide and require leapp-repository-dependencies 7
- Skip comment lines when parsing the GRUB configuration file
- Stop propagating the "debug" and "enforcing=0" kernel cmdline options into the target kernel cmdline options
- Mass refactoring to be compatible with leapp v0.15.0
- Resolves: rhbz#2125284
- Resolves: rhbz#2090995, rhbz#2040470, rhbz#2092005, rhbz#2093220, rhbz#2095704, rhbz#2096159, rhbz#2100108, rhbz#2100110, rhbz#2103282, rhbz#2106904, rhbz#2110627
* Wed Apr 27 2022 Petr Stodulka <pstodulk@redhat.com> - 0.16.0-6
- Skip comments in /etc/default/grub during the parsing