From 091a7af85031211ec98ae05ab5d4ef8736e0dd1f Mon Sep 17 00:00:00 2001 From: CentOS Sources Date: Tue, 8 Nov 2022 01:44:12 -0500 Subject: [PATCH] import leapp-repository-0.17.0-3.el8 --- .gitignore | 4 +- .leapp-repository.metadata | 4 +- ...r-only-faiulres-and-undetermined-dev.patch | 562 ++++++++++++++++++ ...-Also-match-deprecation-data-agains.patch1 | 70 --- ...-issues-in-regards-to-pci-address-ha.patch | 44 -- ...-repositories-are-enabled-on-Satelli.patch | 78 --- ...U-8-9-Migrate-blacklisted-CAs-hotfix.patch | 209 ------- ...es-when-parsing-grub-configuration-f.patch | 108 ---- SPECS/leapp-repository.spec | 100 +++- 9 files changed, 649 insertions(+), 530 deletions(-) create mode 100644 SOURCES/0001-CheckVDO-Ask-user-only-faiulres-and-undetermined-dev.patch delete mode 100644 SOURCES/0001-pcidevicesscanner-Also-match-deprecation-data-agains.patch1 delete mode 100644 SOURCES/0002-pciscanner-Fix-2-issues-in-regards-to-pci-address-ha.patch delete mode 100644 SOURCES/0003-Ensure-the-right-repositories-are-enabled-on-Satelli.patch delete mode 100644 SOURCES/0005-IPU-8-9-Migrate-blacklisted-CAs-hotfix.patch delete mode 100644 SOURCES/0006-Skip-comment-lines-when-parsing-grub-configuration-f.patch diff --git a/.gitignore b/.gitignore index 5efef74..3dc1fcd 100644 --- a/.gitignore +++ b/.gitignore @@ -1,2 +1,2 @@ -SOURCES/deps-pkgs-6.tar.gz -SOURCES/leapp-repository-0.16.0.tar.gz +SOURCES/deps-pkgs-7.tar.gz +SOURCES/leapp-repository-0.17.0.tar.gz diff --git a/.leapp-repository.metadata b/.leapp-repository.metadata index f61e49c..9f0e332 100644 --- a/.leapp-repository.metadata +++ b/.leapp-repository.metadata @@ -1,2 +1,2 @@ -a5100971d63814c213c5245181891329578baf8d SOURCES/deps-pkgs-6.tar.gz -2bcc851f1344107581096a6b564375c440a4df4a SOURCES/leapp-repository-0.16.0.tar.gz +4886551d9ee2259cdfbd8d64a02d0ab9a381ba3d SOURCES/deps-pkgs-7.tar.gz +cbb3e6025c6567507d3bc317731b4c2f0a0eb872 SOURCES/leapp-repository-0.17.0.tar.gz diff --git 
a/SOURCES/0001-CheckVDO-Ask-user-only-faiulres-and-undetermined-dev.patch b/SOURCES/0001-CheckVDO-Ask-user-only-faiulres-and-undetermined-dev.patch new file mode 100644 index 0000000..b4d4e68 --- /dev/null +++ b/SOURCES/0001-CheckVDO-Ask-user-only-faiulres-and-undetermined-dev.patch @@ -0,0 +1,562 @@ +From 505963d51e3989a7d907861dd870133c670ccb78 Mon Sep 17 00:00:00 2001 +From: Joe Shimkus +Date: Wed, 24 Aug 2022 13:30:19 -0400 +Subject: [PATCH] CheckVDO: Ask user only about failures and undetermined devices (+ + report update) + +The previous solution made it possible to skip the VDO check by +answering the user question (confirming that no VDO devices are +present) if the vdo package is not installed (as the scan of the +system could not be performed). However, as part of bug 2096159 it was +discovered that some systems have very dynamic storage which can +disappear at the very moment the check by the vdo tool is performed, +which led to the reported inhibitor. We have discovered that this can +be a real blocker of the upgrade on such systems, as it is quite easy +for at least 1 of N devices to raise such an issue. (*) + +To make the upgrade possible on such systems, the dialog has been +updated so that any problematic VDO checks can be skipped: + - undetermined block devices + - failures during the vdo scan of a block device + +In such a case, the user must confirm that no VDO devices unmanaged by +LVM are present. The dialog now asks the user for the `confirm` key +instead of `all_vdo_converted`. If any VDO devices not managed by LVM +are discovered, the upgrade is inhibited regardless of the answer +(this is supposed to happen only when the user's answer is not +correct, so we are fine with that behaviour). + +Reports have also been updated: previously, several reports with the +same title but different meanings could appear during one run of +leapp. Individual titles are now set for all reports, and report +summaries have been updated as well.
+ +(*) This also includes situations when the discovered list of +devices is not complete, as some block devices could be loaded after +the initial scan of block devices (the StorageInfo msg) is created. +This means that such devices will not be checked at all, as they will +not be known to other actors. We consider this acceptable: when a +system with dynamic storage is present, many of the block devices are +usually redundant, so the user will usually have to answer the dialog +anyway due to other "unstable" block devices. + +Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2096159 +Jira: OAMG-7025 +--- + .../el8toel9/actors/checkvdo/actor.py | 96 +++++++---- + .../actors/checkvdo/libraries/checkvdo.py | 155 ++++++++++-------- + .../checkvdo/tests/unit_test_checkvdo.py | 44 +++-- + 3 files changed, 184 insertions(+), 111 deletions(-) + +diff --git a/repos/system_upgrade/el8toel9/actors/checkvdo/actor.py b/repos/system_upgrade/el8toel9/actors/checkvdo/actor.py +index c28b3a9..d43bac0 100644 +--- a/repos/system_upgrade/el8toel9/actors/checkvdo/actor.py ++++ b/repos/system_upgrade/el8toel9/actors/checkvdo/actor.py +@@ -12,7 +12,7 @@ class CheckVdo(Actor): + + `Background` + ============ +- In RHEL 9.0 the indepdent VDO management software, `vdo manager`, is ++ In RHEL 9.0 the independent VDO management software, `vdo manager`, is + superseded by LVM management. Existing VDOs must be converted to LVM-based + management *before* upgrading to RHEL 9.0. + +@@ -32,12 +32,24 @@ class CheckVdo(Actor): + If the VdoConversionInfo model indicates unexpected errors occurred during + scanning CheckVdo will produce appropriate inhibitory reports. + +- Lastly, if the VdoConversionInfo model indicates conditions exist where VDO +- devices could exist but the necessary software to check was not installed +- on the system CheckVdo will present a dialog to the user.
This dialog will +- ask the user to either install the required software if the user knows or +- is unsure that VDO devices exist or to approve the continuation of the +- upgrade if the user is certain that no VDO devices exist. ++ If the VdoConversionInfo model indicates conditions exist where VDO devices ++ could exist but the necessary software to check was not installed on the ++ system CheckVdo will present a dialog to the user. This dialog will ask the ++ user to either install the required software if the user knows or is unsure ++ that VDO devices exist or to approve the continuation of the upgrade if the ++ user is certain that either there are no VDO devices present or that all ++ VDO devices have been successfully converted. ++ ++ To maximize safety CheckVdo operates against all block devices which ++ match the criteria for potential VDO devices. Given the dynamic nature ++ of device presence within a system some devices which may have been present ++ during leapp discovery may not be present when CheckVdo runs. As CheckVdo ++ defaults to producing inhibitory reports if a device cannot be checked ++ (for any reason) this dynamism may be problematic. To prevent CheckVdo ++ producing an inhibitory report for devices which are dynamically no longer ++ present within the system the user may answer the previously mentioned ++ dialog in the affirmative when the user knows that all VDO devices have ++ been converted. This will circumvent checks of block devices. + """ + + name = 'check_vdo' +@@ -50,37 +62,55 @@ class CheckVdo(Actor): + reason='Confirmation', + components=( + BooleanComponent( +- key='no_vdo_devices', +- label='Are there no VDO devices on the system?', +- description='Enter True if there are no VDO devices on ' +- 'the system and False continue the upgrade. ' +- 'If the system has no VDO devices, then it ' +- 'is safe to continue the upgrade. 
If there ' +- 'are VDO devices they must all be converted ' +- 'to LVM management before the upgrade can ' +- 'proceed.', +- reason='Based on installed packages it is possible that ' +- 'VDO devices exist on the system. All VDO devices ' +- 'must be converted to being managed by LVM before ' +- 'the upgrade occurs. Because the \'vdo\' package ' +- 'is not installed, Leapp cannot determine whether ' +- 'any VDO devices exist that have not yet been ' +- 'converted. If the devices are not converted and ' +- 'the upgrade proceeds the data on unconverted VDO ' +- 'devices will be inaccessible. If you have any ' +- 'doubts you should choose to install the \'vdo\' ' +- 'package and re-run the upgrade process to check ' +- 'for unconverted VDO devices. If you are certain ' +- 'that the system has no VDO devices or that all ' +- 'VDO devices have been converted to LVM management ' +- 'you may opt to allow the upgrade to proceed.' ++ key='confirm', ++ label='Are all VDO devices, if any, successfully converted to LVM management?', ++ description='Enter True if no VDO devices are present ' ++ 'on the system or all VDO devices on the system ' ++ 'have been successfully converted to LVM ' ++ 'management. ' ++ 'Entering True will circumvent check of failures ' ++ 'and undetermined devices. ' ++ 'Recognized VDO devices that have not been ' ++ 'converted to LVM management can still block ' ++ 'the upgrade despite the answer.' ++ 'All VDO devices must be converted to LVM ' ++ 'management before upgrading.', ++ reason='To maximize safety all block devices on a system ' ++ 'that meet the criteria as possible VDO devices ' ++ 'are checked to verify that, if VDOs, they have ' ++ 'been converted to LVM management. ' ++ 'If the devices are not converted and the upgrade ' ++ 'proceeds the data on unconverted VDO devices will ' ++ 'be inaccessible. ' ++ 'In order to perform checking the \'vdo\' package ' ++ 'must be installed. 
' ++ 'If the \'vdo\' package is not installed and there ' ++ 'are any doubts the \'vdo\' package should be ' ++ 'installed and the upgrade process re-run to check ' ++ 'for unconverted VDO devices. ' ++ 'If the check of any device fails for any reason ' ++ 'an upgrade inhibiting report is generated. ' ++ 'This may be problematic if devices are ' ++ 'dynamically removed from the system subsequent to ' ++ 'having been identified during device discovery. ' ++ 'If it is certain that all VDO devices have been ' ++ 'successfully converted to LVM management this ' ++ 'dialog may be answered in the affirmative which ' ++ 'will circumvent block device checking.' + ), + ) + ), + ) ++ _asked_answer = False ++ _vdo_answer = None + +- def get_no_vdo_devices_response(self): +- return self.get_answers(self.dialogs[0]).get('no_vdo_devices') ++ def get_vdo_answer(self): ++ if not self._asked_answer: ++ self._asked_answer = True ++ # calling this multiple times could lead to possible issues ++ # or at least in redundant reports ++ self._vdo_answer = self.get_answers(self.dialogs[0]).get('confirm') ++ return self._vdo_answer + + def process(self): + for conversion_info in self.consume(VdoConversionInfo): +diff --git a/repos/system_upgrade/el8toel9/actors/checkvdo/libraries/checkvdo.py b/repos/system_upgrade/el8toel9/actors/checkvdo/libraries/checkvdo.py +index 9ba5c70..3b161c9 100644 +--- a/repos/system_upgrade/el8toel9/actors/checkvdo/libraries/checkvdo.py ++++ b/repos/system_upgrade/el8toel9/actors/checkvdo/libraries/checkvdo.py +@@ -1,10 +1,35 @@ + from leapp import reporting + from leapp.libraries.stdlib import api + +-_report_title = reporting.Title('VDO devices migration to LVM management') + ++def _report_skip_check(): ++ if not api.current_actor().get_vdo_answer(): ++ return ++ ++ summary = ('User has asserted all VDO devices on the system have been ' ++ 'successfully converted to LVM management or no VDO ' ++ 'devices are present.') ++ reporting.create_report([ ++ 
reporting.Title('Skipping the VDO check of block devices'), ++ reporting.Summary(summary), ++ reporting.Severity(reporting.Severity.INFO), ++ reporting.Groups([reporting.Groups.SERVICES, reporting.Groups.DRIVERS]), ++ ]) ++ ++ ++def _process_failed_check_devices(conversion_info): ++ # Post-conversion VDOs that were not successfully checked for having ++ # completed the migration to LVM management. ++ # Return True if failed checks detected ++ devices = [x for x in conversion_info.post_conversion if (not x.complete) and x.check_failed] ++ devices += [x for x in conversion_info.undetermined_conversion if x.check_failed] ++ if not devices: ++ return False ++ ++ if api.current_actor().get_vdo_answer(): ++ # User asserted all possible VDO should be already converted - skip ++ return True + +-def _create_unexpected_resuilt_report(devices): + names = [x.name for x in devices] + multiple = len(names) > 1 + summary = ['Unexpected result checking device{0}'.format('s' if multiple else '')] +@@ -16,13 +41,14 @@ def _create_unexpected_resuilt_report(devices): + 'and re-run the upgrade.')) + + reporting.create_report([ +- _report_title, ++ reporting.Title('Checking VDO conversion to LVM management of block devices failed'), + reporting.Summary(summary), + reporting.Severity(reporting.Severity.HIGH), + reporting.Groups([reporting.Groups.SERVICES, reporting.Groups.DRIVERS]), + reporting.Remediation(hint=remedy_hint), + reporting.Groups([reporting.Groups.INHIBITOR]) + ]) ++ return True + + + def _process_post_conversion_vdos(vdos): +@@ -32,23 +58,28 @@ def _process_post_conversion_vdos(vdos): + if post_conversion: + devices = [x.name for x in post_conversion] + multiple = len(devices) > 1 +- summary = ''.join(('VDO device{0} \'{1}\' '.format('s' if multiple else '', +- ', '.join(devices)), +- 'did not complete migration to LVM management. 
', +- 'The named device{0} '.format('s' if multiple else ''), +- '{0} successfully converted at the '.format('were' if multiple else 'was'), +- 'device format level; however, the expected LVM management ' +- 'portion of the conversion did not take place. This ' +- 'indicates that an exceptional condition (for example, a ' +- 'system crash) likely occured during the conversion ' +- 'process. The LVM portion of the conversion must be ' +- 'performed in order for upgrade to proceed.')) ++ summary = ( ++ 'VDO device{s_suffix} \'{devices_str}\' ' ++ 'did not complete migration to LVM management. ' ++ 'The named device{s_suffix} {was_were} successfully converted ' ++ 'at the device format level; however, the expected LVM management ' ++ 'portion of the conversion did not take place. This indicates ' ++ 'that an exceptional condition (for example, a system crash) ' ++ 'likely occurred during the conversion process. The LVM portion ' ++ 'of the conversion must be performed in order for upgrade ' ++ 'to proceed.' ++ .format( ++ s_suffix='s' if multiple else '', ++ devices_str=', '.join(devices), ++ was_were='were' if multiple else 'was', ++ ) ++ ) + + remedy_hint = ('Consult the VDO to LVM conversion process ' + 'documentation for how to complete the conversion.') + + reporting.create_report([ +- _report_title, ++ reporting.Title('Detected VDO devices that have not finished the conversion to LVM management.'), + reporting.Summary(summary), + reporting.Severity(reporting.Severity.HIGH), + reporting.Groups([reporting.Groups.SERVICES, reporting.Groups.DRIVERS]), +@@ -56,33 +87,32 @@ def _process_post_conversion_vdos(vdos): + reporting.Groups([reporting.Groups.INHIBITOR]) + ]) + +- # Post-conversion VDOs that were not successfully checked for having +- # completed the migration to LVM management. 
+- post_conversion = [x for x in vdos if (not x.complete) and x.check_failed] +- if post_conversion: +- _create_unexpected_resuilt_report(post_conversion) +- + + def _process_pre_conversion_vdos(vdos): + # Pre-conversion VDOs generate an inhibiting report. + if vdos: + devices = [x.name for x in vdos] + multiple = len(devices) > 1 +- summary = ''.join(('VDO device{0} \'{1}\' require{2} '.format('s' if multiple else '', +- ', '.join(devices), +- '' if multiple else 's'), +- 'migration to LVM management.' +- 'After performing the upgrade VDO devices can only be ' +- 'managed via LVM. Any VDO device not currently managed ' +- 'by LVM must be converted to LVM management before ' +- 'upgrading. The data on any VDO device not converted to ' +- 'LVM management will be inaccessible after upgrading.')) ++ summary = ( ++ 'VDO device{s_suffix} \'{devices_str}\' require{s_suffix_verb} ' ++ 'migration to LVM management. ' ++ 'After performing the upgrade VDO devices can only be ' ++ 'managed via LVM. Any VDO device not currently managed ' ++ 'by LVM must be converted to LVM management before ' ++ 'upgrading. The data on any VDO device not converted to ' ++ 'LVM management will be inaccessible after upgrading.' ++ .format( ++ s_suffix='s' if multiple else '', ++ s_suffix_verb='' if multiple else 's', ++ devices_str=', '.join(devices), ++ ) ++ ) + + remedy_hint = ('Consult the VDO to LVM conversion process ' + 'documentation for how to perform the conversion.') + + reporting.create_report([ +- _report_title, ++ reporting.Title('Detected VDO devices not managed by LVM'), + reporting.Summary(summary), + reporting.Severity(reporting.Severity.HIGH), + reporting.Groups([reporting.Groups.SERVICES, reporting.Groups.DRIVERS]), +@@ -104,43 +134,40 @@ def _process_undetermined_conversion_devices(devices): + # A device can only end up as undetermined either via a check that failed + # or if it was not checked.
If the info for the device indicates that it + # did not have a check failure that means it was not checked. +- +- checked = [x for x in devices if x.check_failed] +- if checked: +- _create_unexpected_resuilt_report(checked) ++ # Return True if failed checks detected + + unchecked = [x for x in devices if not x.check_failed] +- if unchecked: +- no_vdo_devices = api.current_actor().get_no_vdo_devices_response() +- if no_vdo_devices: +- summary = ('User has asserted there are no VDO devices on the ' +- 'system in need of conversion to LVM management.') +- +- reporting.create_report([ +- _report_title, +- reporting.Summary(summary), +- reporting.Severity(reporting.Severity.INFO), +- reporting.Groups([reporting.Groups.SERVICES, reporting.Groups.DRIVERS]), +- reporting.Groups([]) +- ]) +- elif no_vdo_devices is False: +- summary = ('User has opted to inhibit upgrade in regard to ' +- 'potential VDO devices requiring conversion to LVM ' +- 'management.') +- remedy_hint = ('Install the \'vdo\' package and re-run upgrade to ' +- 'check for VDO devices requiring conversion.') +- +- reporting.create_report([ +- _report_title, +- reporting.Summary(summary), +- reporting.Severity(reporting.Severity.HIGH), +- reporting.Groups([reporting.Groups.SERVICES, reporting.Groups.DRIVERS]), +- reporting.Remediation(hint=remedy_hint), +- reporting.Groups([reporting.Groups.INHIBITOR]) +- ]) ++ if not unchecked: ++ return False ++ ++ if api.current_actor().get_vdo_answer(): ++ # User asserted no VDO devices are present ++ return True ++ ++ summary = ( ++ 'The check of block devices could not be performed as the \'vdo\' ' ++ 'package is not installed. 
All VDO devices must be converted to ' ++ 'LVM management prior to the upgrade to prevent the loss of data.') ++ remedy_hint = ('Install the \'vdo\' package and re-run upgrade to ' ++ 'check for VDO devices requiring conversion or confirm ' ++ 'that all VDO devices, if any, are managed by LVM.') ++ ++ reporting.create_report([ ++ reporting.Title('Cannot perform the VDO check of block devices'), ++ reporting.Summary(summary), ++ reporting.Severity(reporting.Severity.HIGH), ++ reporting.Groups([reporting.Groups.SERVICES, reporting.Groups.DRIVERS]), ++ reporting.Remediation(hint=remedy_hint), ++ reporting.Groups([reporting.Groups.INHIBITOR]) ++ ]) ++ return True + + + def check_vdo(conversion_info): + _process_pre_conversion_vdos(conversion_info.pre_conversion) + _process_post_conversion_vdos(conversion_info.post_conversion) +- _process_undetermined_conversion_devices(conversion_info.undetermined_conversion) ++ ++ detected_under_dev = _process_undetermined_conversion_devices(conversion_info.undetermined_conversion) ++ detected_failed_check = _process_failed_check_devices(conversion_info) ++ if detected_under_dev or detected_failed_check: ++ _report_skip_check() +diff --git a/repos/system_upgrade/el8toel9/actors/checkvdo/tests/unit_test_checkvdo.py b/repos/system_upgrade/el8toel9/actors/checkvdo/tests/unit_test_checkvdo.py +index e0ac39d..865e036 100644 +--- a/repos/system_upgrade/el8toel9/actors/checkvdo/tests/unit_test_checkvdo.py ++++ b/repos/system_upgrade/el8toel9/actors/checkvdo/tests/unit_test_checkvdo.py +@@ -13,14 +13,16 @@ from leapp.models import ( + from leapp.utils.report import is_inhibitor + + +-class MockedActorNoVdoDevices(CurrentActorMocked): +- def get_no_vdo_devices_response(self): +- return True ++# Mock actor base for CheckVdo tests. 
++class MockedActorCheckVdo(CurrentActorMocked): ++ def get_vdo_answer(self): ++ return False + + +-class MockedActorSomeVdoDevices(CurrentActorMocked): +- def get_no_vdo_devices_response(self): +- return False ++# Mock actor for all_vdo_converted dialog response. ++class MockedActorAllVdoConvertedTrue(MockedActorCheckVdo): ++ def get_vdo_answer(self): ++ return True + + + def aslist(f): +@@ -66,6 +68,7 @@ def _undetermined_conversion_vdos(count=0, failing=False, start_char='a'): + + # No VDOs tests. + def test_no_vdos(monkeypatch): ++ monkeypatch.setattr(api, 'current_actor', MockedActorCheckVdo()) + monkeypatch.setattr(reporting, 'create_report', create_report_mocked()) + checkvdo.check_vdo( + VdoConversionInfo(post_conversion=_post_conversion_vdos(), +@@ -76,6 +79,7 @@ def test_no_vdos(monkeypatch): + + # Concurrent pre- and post-conversion tests. + def test_both_conversion_vdo_incomplete(monkeypatch): ++ monkeypatch.setattr(api, 'current_actor', MockedActorCheckVdo()) + monkeypatch.setattr(reporting, 'create_report', create_report_mocked()) + post_count = 7 + checkvdo.check_vdo( +@@ -89,6 +93,7 @@ def test_both_conversion_vdo_incomplete(monkeypatch): + + # Post-conversion tests. 
+ def test_post_conversion_multiple_vdo_incomplete(monkeypatch): ++ monkeypatch.setattr(api, 'current_actor', MockedActorCheckVdo()) + monkeypatch.setattr(reporting, 'create_report', create_report_mocked()) + checkvdo.check_vdo( + VdoConversionInfo(post_conversion=_post_conversion_vdos(7, 5), +@@ -100,6 +105,7 @@ def test_post_conversion_multiple_vdo_incomplete(monkeypatch): + + + def test_post_conversion_multiple_vdo_complete(monkeypatch): ++ monkeypatch.setattr(api, 'current_actor', MockedActorCheckVdo()) + monkeypatch.setattr(reporting, 'create_report', create_report_mocked()) + checkvdo.check_vdo( + VdoConversionInfo(post_conversion=_post_conversion_vdos(7, 7), +@@ -109,6 +115,7 @@ def test_post_conversion_multiple_vdo_complete(monkeypatch): + + + def test_post_conversion_single_vdo_incomplete(monkeypatch): ++ monkeypatch.setattr(api, 'current_actor', MockedActorCheckVdo()) + monkeypatch.setattr(reporting, 'create_report', create_report_mocked()) + checkvdo.check_vdo( + VdoConversionInfo(post_conversion=_post_conversion_vdos(1), +@@ -121,6 +128,7 @@ def test_post_conversion_single_vdo_incomplete(monkeypatch): + + + def test_post_conversion_single_check_failing(monkeypatch): ++ monkeypatch.setattr(api, 'current_actor', MockedActorCheckVdo()) + monkeypatch.setattr(reporting, 'create_report', create_report_mocked()) + checkvdo.check_vdo( + VdoConversionInfo(post_conversion=_post_conversion_vdos(2, complete=1, failing=1), +@@ -135,6 +143,7 @@ def test_post_conversion_single_check_failing(monkeypatch): + + + def test_post_conversion_multiple_check_failing(monkeypatch): ++ monkeypatch.setattr(api, 'current_actor', MockedActorCheckVdo()) + monkeypatch.setattr(reporting, 'create_report', create_report_mocked()) + checkvdo.check_vdo( + VdoConversionInfo(post_conversion=_post_conversion_vdos(7, complete=4, failing=3), +@@ -147,6 +156,7 @@ def test_post_conversion_multiple_check_failing(monkeypatch): + + + def 
test_post_conversion_incomplete_and_check_failing(monkeypatch): ++ monkeypatch.setattr(api, 'current_actor', MockedActorCheckVdo()) + monkeypatch.setattr(reporting, 'create_report', create_report_mocked()) + checkvdo.check_vdo( + VdoConversionInfo(post_conversion=_post_conversion_vdos(2, failing=1), +@@ -158,6 +168,7 @@ def test_post_conversion_incomplete_and_check_failing(monkeypatch): + + # Pre-conversion tests. + def test_pre_conversion_multiple_vdo_incomplete(monkeypatch): ++ monkeypatch.setattr(api, 'current_actor', MockedActorCheckVdo()) + monkeypatch.setattr(reporting, 'create_report', create_report_mocked()) + checkvdo.check_vdo( + VdoConversionInfo(post_conversion=_post_conversion_vdos(), +@@ -169,6 +180,7 @@ def test_pre_conversion_multiple_vdo_incomplete(monkeypatch): + + + def test_pre_conversion_single_vdo_incomplete(monkeypatch): ++ monkeypatch.setattr(api, 'current_actor', MockedActorCheckVdo()) + monkeypatch.setattr(reporting, 'create_report', create_report_mocked()) + checkvdo.check_vdo( + VdoConversionInfo(post_conversion=_post_conversion_vdos(), +@@ -182,6 +194,7 @@ def test_pre_conversion_single_vdo_incomplete(monkeypatch): + + # Undetermined tests. 
+ def test_undetermined_single_check_failing(monkeypatch): ++ monkeypatch.setattr(api, 'current_actor', MockedActorCheckVdo()) + monkeypatch.setattr(reporting, 'create_report', create_report_mocked()) + checkvdo.check_vdo( + VdoConversionInfo(post_conversion=_post_conversion_vdos(), +@@ -196,6 +209,7 @@ def test_undetermined_single_check_failing(monkeypatch): + + + def test_undetermined_multiple_check_failing(monkeypatch): ++ monkeypatch.setattr(api, 'current_actor', MockedActorCheckVdo()) + monkeypatch.setattr(reporting, 'create_report', create_report_mocked()) + checkvdo.check_vdo( + VdoConversionInfo(post_conversion=_post_conversion_vdos(), +@@ -207,27 +221,29 @@ def test_undetermined_multiple_check_failing(monkeypatch): + 'Unexpected result checking devices') + + +-def test_undetermined_multiple_no_check_no_vdos(monkeypatch): +- monkeypatch.setattr(api, 'current_actor', MockedActorNoVdoDevices()) ++def test_undetermined_multiple_no_check(monkeypatch): ++ monkeypatch.setattr(api, 'current_actor', MockedActorCheckVdo()) + monkeypatch.setattr(reporting, 'create_report', create_report_mocked()) + checkvdo.check_vdo( + VdoConversionInfo(post_conversion=_post_conversion_vdos(), + pre_conversion=_pre_conversion_vdos(), + undetermined_conversion=_undetermined_conversion_vdos(3))) + assert reporting.create_report.called == 1 +- assert not is_inhibitor(reporting.create_report.report_fields) ++ assert is_inhibitor(reporting.create_report.report_fields) + assert reporting.create_report.report_fields['summary'].startswith( +- 'User has asserted there are no VDO devices') ++ 'The check of block devices could not be performed as the \'vdo\' ' ++ 'package is not installed.') + + +-def test_undetermined_multiple_no_check_some_vdos(monkeypatch): +- monkeypatch.setattr(api, 'current_actor', MockedActorSomeVdoDevices()) ++# all_vdo_converted test. 
++def test_all_vdo_converted_true(monkeypatch): ++ monkeypatch.setattr(api, 'current_actor', MockedActorAllVdoConvertedTrue()) + monkeypatch.setattr(reporting, 'create_report', create_report_mocked()) + checkvdo.check_vdo( + VdoConversionInfo(post_conversion=_post_conversion_vdos(), + pre_conversion=_pre_conversion_vdos(), + undetermined_conversion=_undetermined_conversion_vdos(3))) + assert reporting.create_report.called == 1 +- assert is_inhibitor(reporting.create_report.report_fields) ++ assert not is_inhibitor(reporting.create_report.report_fields) + assert reporting.create_report.report_fields['summary'].startswith( +- 'User has opted to inhibit upgrade') ++ 'User has asserted all VDO devices on the system have been successfully converted') +-- +2.37.2 + diff --git a/SOURCES/0001-pcidevicesscanner-Also-match-deprecation-data-agains.patch1 b/SOURCES/0001-pcidevicesscanner-Also-match-deprecation-data-agains.patch1 deleted file mode 100644 index 00949e8..0000000 --- a/SOURCES/0001-pcidevicesscanner-Also-match-deprecation-data-agains.patch1 +++ /dev/null @@ -1,70 +0,0 @@ -From b4fc2e0ae62e68dd246ed2eedda0df2a3ba90633 Mon Sep 17 00:00:00 2001 -From: Vinzenz Feenstra -Date: Fri, 1 Apr 2022 15:13:51 +0200 -Subject: [PATCH] pcidevicesscanner: Also match deprecation data against kernel - modules - -Previously when the deprecation data got introduced the kernel drivers -reported to be used by lspci have not been checked. -This patch fixes this regression. 
- -Signed-off-by: Vinzenz Feenstra ---- - .../libraries/pcidevicesscanner.py | 29 ++++++++++++++++++- - 1 file changed, 28 insertions(+), 1 deletion(-) - -diff --git a/repos/system_upgrade/common/actors/pcidevicesscanner/libraries/pcidevicesscanner.py b/repos/system_upgrade/common/actors/pcidevicesscanner/libraries/pcidevicesscanner.py -index 146f1a33..0f02bd02 100644 ---- a/repos/system_upgrade/common/actors/pcidevicesscanner/libraries/pcidevicesscanner.py -+++ b/repos/system_upgrade/common/actors/pcidevicesscanner/libraries/pcidevicesscanner.py -@@ -1,7 +1,13 @@ - import re - - from leapp.libraries.stdlib import api, run --from leapp.models import DetectedDeviceOrDriver, DeviceDriverDeprecationData, PCIDevice, PCIDevices -+from leapp.models import ( -+ ActiveKernelModulesFacts, -+ DetectedDeviceOrDriver, -+ DeviceDriverDeprecationData, -+ PCIDevice, -+ PCIDevices -+) - - # Regex to capture Vendor, Device and SVendor and SDevice values - PCI_ID_REG = re.compile(r"(?<=Vendor:\t|Device:\t)\w+") -@@ -82,6 +88,26 @@ def produce_detected_devices(devices): - ]) - - -+def produce_detected_drivers(devices): -+ active_modules = { -+ module.file_name -+ for message in api.consume(ActiveKernelModulesFacts) for module in message.kernel_modules -+ } -+ -+ # Create a lookup by driver_name and filter out the kernel that are active -+ entry_lookup = { -+ entry.driver_name: entry -+ for message in api.consume(DeviceDriverDeprecationData) for entry in message.entries -+ if entry.driver_name and entry.driver_name not in active_modules -+ } -+ -+ drivers = {device.driver for device in devices if device.driver in entry_lookup} -+ api.produce(*[ -+ DetectedDeviceOrDriver(**entry_lookup[driver].dump()) -+ for driver in drivers -+ ]) -+ -+ - def produce_pci_devices(producer, devices): - """ Produce a Leapp message with all PCI devices """ - producer(PCIDevices(devices=devices)) -@@ -93,4 +119,5 @@ def scan_pci_devices(producer): - pci_numeric = run(['lspci', '-vmmkn'], 
checked=False)['stdout'] - devices = parse_pci_devices(pci_textual, pci_numeric) - produce_detected_devices(devices) -+ produce_detected_drivers(devices) - produce_pci_devices(producer, devices) --- -2.35.1 - diff --git a/SOURCES/0002-pciscanner-Fix-2-issues-in-regards-to-pci-address-ha.patch b/SOURCES/0002-pciscanner-Fix-2-issues-in-regards-to-pci-address-ha.patch deleted file mode 100644 index 4b2809c..0000000 --- a/SOURCES/0002-pciscanner-Fix-2-issues-in-regards-to-pci-address-ha.patch +++ /dev/null @@ -1,44 +0,0 @@ -From 53ceded213ae17ca5d27268bc496e736dfea7e64 Mon Sep 17 00:00:00 2001 -From: Vinzenz Feenstra -Date: Thu, 14 Apr 2022 14:50:07 +0200 -Subject: [PATCH 2/3] pciscanner: Fix 2 issues in regards to pci address - handling - -In a previous patch, the introduction of the new handling of deprecation -data, 2 problems slipped through. - -1. The regex replacement for pci ids errornous adds an empty space - instead of empty string -2. Drivers should be matched on lspci output against the driver - deprecation data only if the pci_id is empty - -Signed-off-by: Vinzenz Feenstra ---- - .../actors/pcidevicesscanner/libraries/pcidevicesscanner.py | 4 ++-- - 1 file changed, 2 insertions(+), 2 deletions(-) - -diff --git a/repos/system_upgrade/common/actors/pcidevicesscanner/libraries/pcidevicesscanner.py b/repos/system_upgrade/common/actors/pcidevicesscanner/libraries/pcidevicesscanner.py -index 0f02bd02..eb063abb 100644 ---- a/repos/system_upgrade/common/actors/pcidevicesscanner/libraries/pcidevicesscanner.py -+++ b/repos/system_upgrade/common/actors/pcidevicesscanner/libraries/pcidevicesscanner.py -@@ -78,7 +78,7 @@ def parse_pci_devices(pci_textual, pci_numeric): - def produce_detected_devices(devices): - prefix_re = re.compile('0x') - entry_lookup = { -- prefix_re.sub(' ', entry.device_id): entry -+ prefix_re.sub('', entry.device_id): entry - for message in api.consume(DeviceDriverDeprecationData) for entry in message.entries - } - api.produce(*[ -@@ -98,7 +98,7 
@@ def produce_detected_drivers(devices): - entry_lookup = { - entry.driver_name: entry - for message in api.consume(DeviceDriverDeprecationData) for entry in message.entries -- if entry.driver_name and entry.driver_name not in active_modules -+ if not entry.device_id and entry.driver_name and entry.driver_name not in active_modules - } - - drivers = {device.driver for device in devices if device.driver in entry_lookup} --- -2.35.1 - diff --git a/SOURCES/0003-Ensure-the-right-repositories-are-enabled-on-Satelli.patch b/SOURCES/0003-Ensure-the-right-repositories-are-enabled-on-Satelli.patch deleted file mode 100644 index aa04a8a..0000000 --- a/SOURCES/0003-Ensure-the-right-repositories-are-enabled-on-Satelli.patch +++ /dev/null @@ -1,78 +0,0 @@ -From a1fdabea9c00a96ffc1504577f12733e1c1830ee Mon Sep 17 00:00:00 2001 -From: Evgeni Golov -Date: Thu, 7 Apr 2022 14:56:18 +0200 -Subject: [PATCH 3/3] Ensure the right repositories are enabled on Satellite - Capsules - ---- - .../actors/satellite_upgrade_facts/actor.py | 6 +++- - .../unit_test_satellite_upgrade_facts.py | 34 ++++++++++++++++++- - 2 files changed, 38 insertions(+), 2 deletions(-) - -diff --git a/repos/system_upgrade/el7toel8/actors/satellite_upgrade_facts/actor.py b/repos/system_upgrade/el7toel8/actors/satellite_upgrade_facts/actor.py -index eb87cd68..fb83107e 100644 ---- a/repos/system_upgrade/el7toel8/actors/satellite_upgrade_facts/actor.py -+++ b/repos/system_upgrade/el7toel8/actors/satellite_upgrade_facts/actor.py -@@ -129,6 +129,10 @@ class SatelliteUpgradeFacts(Actor): - modules_to_enable=modules_to_enable - ) - ) -- repositories_to_enable = ['ansible-2.9-for-rhel-8-x86_64-rpms', 'satellite-6.11-for-rhel-8-x86_64-rpms', -+ repositories_to_enable = ['ansible-2.9-for-rhel-8-x86_64-rpms', - 'satellite-maintenance-6.11-for-rhel-8-x86_64-rpms'] -+ if has_package(InstalledRPM, 'foreman'): -+ repositories_to_enable.append('satellite-6.11-for-rhel-8-x86_64-rpms') -+ else: -+ 
repositories_to_enable.append('satellite-capsule-6.11-for-rhel-8-x86_64-rpms') - self.produce(RepositoriesSetupTasks(to_enable=repositories_to_enable)) -diff --git a/repos/system_upgrade/el7toel8/actors/satellite_upgrade_facts/tests/unit_test_satellite_upgrade_facts.py b/repos/system_upgrade/el7toel8/actors/satellite_upgrade_facts/tests/unit_test_satellite_upgrade_facts.py -index 5c8e79ff..e77b7b58 100644 ---- a/repos/system_upgrade/el7toel8/actors/satellite_upgrade_facts/tests/unit_test_satellite_upgrade_facts.py -+++ b/repos/system_upgrade/el7toel8/actors/satellite_upgrade_facts/tests/unit_test_satellite_upgrade_facts.py -@@ -1,6 +1,14 @@ - import os - --from leapp.models import DNFWorkaround, InstalledRPM, Module, RPM, RpmTransactionTasks, SatelliteFacts -+from leapp.models import ( -+ DNFWorkaround, -+ InstalledRPM, -+ Module, -+ RepositoriesSetupTasks, -+ RPM, -+ RpmTransactionTasks, -+ SatelliteFacts -+) - from leapp.snactor.fixture import current_actor_context - - RH_PACKAGER = 'Red Hat, Inc. 
' -@@ -87,3 +95,27 @@ def test_detects_remote_postgresql(current_actor_context): - assert not satellitemsg.postgresql.local_postgresql - - assert not current_actor_context.consume(DNFWorkaround) -+ -+ -+def test_enables_right_repositories_on_satellite(current_actor_context): -+ current_actor_context.feed(InstalledRPM(items=[FOREMAN_RPM])) -+ current_actor_context.run() -+ -+ rpmmessage = current_actor_context.consume(RepositoriesSetupTasks)[0] -+ -+ assert 'ansible-2.9-for-rhel-8-x86_64-rpms' in rpmmessage.to_enable -+ assert 'satellite-maintenance-6.11-for-rhel-8-x86_64-rpms' in rpmmessage.to_enable -+ assert 'satellite-6.11-for-rhel-8-x86_64-rpms' in rpmmessage.to_enable -+ assert 'satellite-capsule-6.11-for-rhel-8-x86_64-rpms' not in rpmmessage.to_enable -+ -+ -+def test_enables_right_repositories_on_capsule(current_actor_context): -+ current_actor_context.feed(InstalledRPM(items=[FOREMAN_PROXY_RPM])) -+ current_actor_context.run() -+ -+ rpmmessage = current_actor_context.consume(RepositoriesSetupTasks)[0] -+ -+ assert 'ansible-2.9-for-rhel-8-x86_64-rpms' in rpmmessage.to_enable -+ assert 'satellite-maintenance-6.11-for-rhel-8-x86_64-rpms' in rpmmessage.to_enable -+ assert 'satellite-6.11-for-rhel-8-x86_64-rpms' not in rpmmessage.to_enable -+ assert 'satellite-capsule-6.11-for-rhel-8-x86_64-rpms' in rpmmessage.to_enable --- -2.35.1 - diff --git a/SOURCES/0005-IPU-8-9-Migrate-blacklisted-CAs-hotfix.patch b/SOURCES/0005-IPU-8-9-Migrate-blacklisted-CAs-hotfix.patch deleted file mode 100644 index d7ca5cd..0000000 --- a/SOURCES/0005-IPU-8-9-Migrate-blacklisted-CAs-hotfix.patch +++ /dev/null @@ -1,209 +0,0 @@ -From eeb4f99f57c67937ea562fce11fd5607470ae0a6 Mon Sep 17 00:00:00 2001 -From: Petr Stodulka -Date: Fri, 22 Apr 2022 00:20:15 +0200 -Subject: [PATCH] [IPU 8 -> 9] Migrate blacklisted CAs (hotfix) - -Preserve blacklisted certificates during the IPU 8 -> 9 - -Path for the blacklisted certificates has been changed on RHEL 9. 
-The original paths on RHEL 8 and older systems have been: - /etc/pki/ca-trust/source/blacklist/ - /usr/share/pki/ca-trust-source/blacklist/ -However on RHEL 9 the blacklist directory has been renamed to 'blocklist'. -So the paths are: - /etc/pki/ca-trust/source/blocklist/ - /usr/share/pki/ca-trust-source/blocklist/ -This actor moves all blacklisted certificates into the expected directories -and fix symlinks if to point to the new dirs if they originally pointed -to one of obsoleted dirs. - -Covered cases: -- covered situations with missing dirs -- covered both mentioned blacklist directories -- update symlinks in case they point to one of obsoleted directories -- remove obsoleted directories when all files migrated successfully -- execute /usr/bin/update-ca-trust in the end -- remove original a blacklist directory in case all discovered files - inside are migrated successfully -- print error logs in case of any issues so the upgrade does not - crash in case of troubles and users could deal with problems - manually after the upgrade - -The actor is not covered by unit-tests as it's just a hotfix. Follow -up works are expected to extend the problem with reports during -preupgrade phases, improve the test coverage, .... 
- -BZ: https://bugzilla.redhat.com/show_bug.cgi?id=2077432 -Followup ticket: CRYPTO-7097 ---- - .../actors/migrateblacklistca/actor.py | 28 ++++++ - .../libraries/migrateblacklistca.py | 89 +++++++++++++++++++ - .../tests/unit_test_migrateblacklistca.py | 25 ++++++ - 3 files changed, 142 insertions(+) - create mode 100644 repos/system_upgrade/el8toel9/actors/migrateblacklistca/actor.py - create mode 100644 repos/system_upgrade/el8toel9/actors/migrateblacklistca/libraries/migrateblacklistca.py - create mode 100644 repos/system_upgrade/el8toel9/actors/migrateblacklistca/tests/unit_test_migrateblacklistca.py - -diff --git a/repos/system_upgrade/el8toel9/actors/migrateblacklistca/actor.py b/repos/system_upgrade/el8toel9/actors/migrateblacklistca/actor.py -new file mode 100644 -index 00000000..863a0063 ---- /dev/null -+++ b/repos/system_upgrade/el8toel9/actors/migrateblacklistca/actor.py -@@ -0,0 +1,28 @@ -+from leapp.actors import Actor -+from leapp.libraries.actor import migrateblacklistca -+from leapp.tags import ApplicationsPhaseTag, IPUWorkflowTag -+ -+ -+class MigrateBlacklistCA(Actor): -+ """ -+ Preserve blacklisted certificates during the upgrade -+ -+ Path for the blacklisted certificates has been changed on RHEL 9. -+ The original paths on RHEL 8 and older systems have been: -+ /etc/pki/ca-trust/source/blacklist/ -+ /usr/share/pki/ca-trust-source/blacklist/ -+ However on RHEL 9 the blacklist directory has been renamed to 'blocklist'. -+ So the new paths are: -+ /etc/pki/ca-trust/source/blocklist/ -+ /usr/share/pki/ca-trust-source/blocklist/ -+ This actor moves all blacklisted certificates into the expected directories -+ and fix symlinks if needed. 
-+ """ -+ -+ name = 'migrate_blacklist_ca' -+ consumes = () -+ produces = () -+ tags = (ApplicationsPhaseTag, IPUWorkflowTag) -+ -+ def process(self): -+ migrateblacklistca.process() -diff --git a/repos/system_upgrade/el8toel9/actors/migrateblacklistca/libraries/migrateblacklistca.py b/repos/system_upgrade/el8toel9/actors/migrateblacklistca/libraries/migrateblacklistca.py -new file mode 100644 -index 00000000..73c9d565 ---- /dev/null -+++ b/repos/system_upgrade/el8toel9/actors/migrateblacklistca/libraries/migrateblacklistca.py -@@ -0,0 +1,89 @@ -+import os -+import shutil -+ -+from leapp.libraries.stdlib import api, CalledProcessError, run -+ -+# dict(orig_dir: new_dir) -+DIRS_CHANGE = { -+ '/etc/pki/ca-trust/source/blacklist/': '/etc/pki/ca-trust/source/blocklist/', -+ '/usr/share/pki/ca-trust-source/blacklist/': '/usr/share/pki/ca-trust-source/blocklist/' -+} -+ -+ -+def _link_src_path(filepath): -+ """ -+ Return expected target path for the symlink. -+ -+ In case the symlink points to one of dirs supposed to be migrated in this -+ actor, we need to point to the new directory instead. -+ -+ In case the link points anywhere else, keep the target path as it is. -+ """ -+ realpath = os.path.realpath(filepath) -+ for dirname in DIRS_CHANGE: -+ if realpath.startswith(dirname): -+ return realpath.replace(dirname, DIRS_CHANGE[dirname]) -+ -+ # it seems we can keep this path -+ return realpath -+ -+ -+def _migrate_file(filename, src_basedir): -+ dst_path = filename.replace(src_basedir, DIRS_CHANGE[src_basedir]) -+ if os.path.exists(dst_path): -+ api.current_logger().info( -+ 'Skipping migration of the {} certificate. The target file already exists' -+ .format(filename) -+ ) -+ return -+ os.makedirs(os.path.dirname(dst_path), mode=0o755, exist_ok=True) -+ if os.path.islink(filename): -+ # create the new symlink instead of the moving the file -+ # as the target path could be different as well -+ link_src_path = _link_src_path(filename) -+ # TODO: is the broken symlink ok? 
-+ os.symlink(link_src_path, dst_path) -+ os.unlink(filename) -+ else: -+ # normal file, just move it -+ shutil.move(filename, dst_path) -+ -+ -+def _get_files(dirname): -+ return run(['find', dirname, '-type', 'f,l'], split=True)['stdout'] -+ -+ -+def process(): -+ for dirname in DIRS_CHANGE: -+ if not os.path.exists(dirname): -+ # The directory does not exist; nothing to do here -+ continue -+ try: -+ blacklisted_certs = _get_files(dirname) -+ except (CalledProcessError, OSError) as e: -+ # TODO: create post-upgrade report -+ api.current_logger().error('Cannot get list of files in {}: {}.'.format(dirname, e)) -+ api.current_logger().error('Certificates under {} must be migrated manually.'.format(dirname)) -+ continue -+ failed_files = [] -+ for filename in blacklisted_certs: -+ try: -+ _migrate_file(filename, dirname) -+ except OSError as e: -+ api.current_logger().error( -+ 'Failed migration of blacklisted certificate {}: {}' -+ .format(filename, e) -+ ) -+ failed_files.append(filename) -+ if not failed_files: -+ # the failed removal is not such a big issue here -+ # clean the dir if all files have been migrated successfully -+ shutil.rmtree(dirname, ignore_errors=True) -+ try: -+ run(['/usr/bin/update-ca-trust']) -+ except (CalledProcessError, OSError) as e: -+ api.current_logger().error( -+ 'Cannot update CA trust on the system.' -+ ' It needs to be done manually after the in-place upgrade.' 
-+ ' Reason: {}'.format(e) -+ ) -diff --git a/repos/system_upgrade/el8toel9/actors/migrateblacklistca/tests/unit_test_migrateblacklistca.py b/repos/system_upgrade/el8toel9/actors/migrateblacklistca/tests/unit_test_migrateblacklistca.py -new file mode 100644 -index 00000000..970dcb97 ---- /dev/null -+++ b/repos/system_upgrade/el8toel9/actors/migrateblacklistca/tests/unit_test_migrateblacklistca.py -@@ -0,0 +1,25 @@ -+import os -+ -+from leapp.libraries.actor import migrateblacklistca -+from leapp.libraries.common.testutils import CurrentActorMocked -+from leapp.libraries.stdlib import api -+ -+ -+class MockedGetFiles(): -+ def __init__(self): -+ self.called = 0 -+ -+ def __call__(self): -+ self.called += 1 -+ return [] -+ -+ -+def test_no_dirs_exist(monkeypatch): -+ mocked_files = MockedGetFiles() -+ monkeypatch.setattr(os.path, 'exists', lambda dummy: False) -+ monkeypatch.setattr(migrateblacklistca, '_get_files', mocked_files) -+ monkeypatch.setattr(api, 'current_actor', CurrentActorMocked()) -+ # this is bad mock, but we want to be sure that update-ca-trust is not -+ # called on the testing machine -+ monkeypatch.setattr(migrateblacklistca, 'run', lambda dummy: dummy) -+ assert not mocked_files.called --- -2.35.1 - diff --git a/SOURCES/0006-Skip-comment-lines-when-parsing-grub-configuration-f.patch b/SOURCES/0006-Skip-comment-lines-when-parsing-grub-configuration-f.patch deleted file mode 100644 index 0b99e81..0000000 --- a/SOURCES/0006-Skip-comment-lines-when-parsing-grub-configuration-f.patch +++ /dev/null @@ -1,108 +0,0 @@ -From 32702c7c7d1c445b9ab95e0d1bbdfdf8f06d4303 Mon Sep 17 00:00:00 2001 -From: Petr Stodulka -Date: Wed, 27 Apr 2022 11:25:40 +0200 -Subject: [PATCH] Skip comment lines when parsing grub configuration file - -Added simple unit-test for default grub info to see the valid lines -can be parsed as expected. 
---- - .../systemfacts/libraries/systemfacts.py | 21 ++++++++- - .../tests/test_systemfacts_grub.py | 46 +++++++++++++++++++ - 2 files changed, 65 insertions(+), 2 deletions(-) - create mode 100644 repos/system_upgrade/common/actors/systemfacts/tests/test_systemfacts_grub.py - -diff --git a/repos/system_upgrade/common/actors/systemfacts/libraries/systemfacts.py b/repos/system_upgrade/common/actors/systemfacts/libraries/systemfacts.py -index 0de8b383..81aea6f5 100644 ---- a/repos/system_upgrade/common/actors/systemfacts/libraries/systemfacts.py -+++ b/repos/system_upgrade/common/actors/systemfacts/libraries/systemfacts.py -@@ -9,6 +9,7 @@ import re - import six - - from leapp import reporting -+from leapp.exceptions import StopActorExecutionError - from leapp.libraries.common import repofileutils - from leapp.libraries.common.config import architecture - from leapp.libraries.stdlib import api, CalledProcessError, run -@@ -289,9 +290,25 @@ def _default_grub_info(): - ]) - else: - for line in run(['cat', default_grb_fpath], split=True)['stdout']: -- if not line.strip(): -+ line = line.strip() -+ if not line or line[0] == '#': -+ # skip comments and empty lines - continue -- name, value = tuple(map(type(line).strip, line.split('=', 1))) -+ try: -+ name, value = tuple(map(type(line).strip, line.split('=', 1))) -+ except ValueError as e: -+ # we do not want to really continue when we cannot parse this file -+ # TODO(pstodulk): rewrite this in the form we produce inhibitor -+ # with problematic lines. This is improvement just in comparison -+ # to the original hard crash. 
-+ raise StopActorExecutionError( -+ 'Failed parsing of {}'.format(default_grb_fpath), -+ details={ -+ 'error': str(e), -+ 'problematic line': str(line) -+ } -+ ) -+ - yield DefaultGrub( - name=name, - value=value -diff --git a/repos/system_upgrade/common/actors/systemfacts/tests/test_systemfacts_grub.py b/repos/system_upgrade/common/actors/systemfacts/tests/test_systemfacts_grub.py -new file mode 100644 -index 00000000..08552771 ---- /dev/null -+++ b/repos/system_upgrade/common/actors/systemfacts/tests/test_systemfacts_grub.py -@@ -0,0 +1,46 @@ -+import os -+ -+from leapp.libraries.actor import systemfacts -+from leapp.models import DefaultGrub -+ -+ -+class RunMocked(object): -+ def __init__(self, cmd_result): -+ self.called = 0 -+ self.cmd_result = cmd_result -+ self.split = False -+ self.cmd = None -+ -+ def __call__(self, cmd, split=False): -+ self.cmd = cmd -+ self.split = split -+ self.called += 1 -+ return self.cmd_result -+ -+ -+def test_default_grub_info_valid(monkeypatch): -+ mocked_run = RunMocked({ -+ 'stdout': [ -+ 'line="whatever else here"', -+ 'newline="whatever"', -+ '# comment here', -+ 'why_not=value', -+ ' # whitespaces around comment ', -+ ' ', -+ ' last=last really' -+ ], -+ }) -+ expected_result = [ -+ DefaultGrub(name='line', value='"whatever else here"'), -+ DefaultGrub(name='newline', value='"whatever"'), -+ DefaultGrub(name='why_not', value='value'), -+ DefaultGrub(name='last', value='last really'), -+ ] -+ monkeypatch.setattr(systemfacts, 'run', mocked_run) -+ monkeypatch.setattr(os.path, 'isfile', lambda dummy: True) -+ for msg in systemfacts._default_grub_info(): -+ expected_msg = expected_result.pop(0) -+ assert msg.name == expected_msg.name -+ assert msg.value == expected_msg.value -+ assert mocked_run.called -+ assert not expected_result --- -2.35.1 - diff --git a/SPECS/leapp-repository.spec b/SPECS/leapp-repository.spec index 14ea971..aca0745 100644 --- a/SPECS/leapp-repository.spec +++ b/SPECS/leapp-repository.spec @@ -2,7 +2,7 
@@ %global repositorydir %{leapp_datadir}/repositories %global custom_repositorydir %{leapp_datadir}/custom-repositories -%define leapp_repo_deps 6 +%define leapp_repo_deps 7 %if 0%{?rhel} == 7 %define leapp_python_sitelib %{python2_sitelib} @@ -41,24 +41,26 @@ py2_byte_compile "%1" "%2"} # RHEL 8+ packages to be consistent with other leapp projects in future. Name: leapp-repository -Version: 0.16.0 -Release: 6%{?dist} +Version: 0.17.0 +Release: 3%{?dist} Summary: Repositories for leapp License: ASL 2.0 URL: https://oamg.github.io/leapp/ -Source0: https://github.com/oamg/leapp-repository/archive/v%{version}.tar.gz#/%{name}-%{version}.tar.gz -Source1: deps-pkgs-6.tar.gz +Source0: https://github.com/oamg/%{name}/archive/v%{version}.tar.gz#/%{name}-%{version}.tar.gz +Source1: deps-pkgs-7.tar.gz + +# NOTE: Our packages must be noarch. Do no drop this in any way. BuildArch: noarch ### PATCHES HERE # Patch0001: filename.patch -Patch0001: 0001-pcidevicesscanner-Also-match-deprecation-data-agains.patch1 -Patch0002: 0002-pciscanner-Fix-2-issues-in-regards-to-pci-address-ha.patch -Patch0003: 0003-Ensure-the-right-repositories-are-enabled-on-Satelli.patch +Patch0001: 0001-CheckVDO-Ask-user-only-faiulres-and-undetermined-dev.patch + +## DO NOT REMOVE THIS PATCH UNLESS THE RUBYGEM-IRB ISSUE IS RESOLVED IN ACTORS! 
+# See: https://bugzilla.redhat.com/show_bug.cgi?id=2030627 Patch0004: 0004-Enforce-the-removal-of-rubygem-irb-do-not-install-it.patch -Patch0005: 0005-IPU-8-9-Migrate-blacklisted-CAs-hotfix.patch -Patch0006: 0006-Skip-comment-lines-when-parsing-grub-configuration-f.patch + %description %{summary} @@ -77,7 +79,7 @@ Requires: python2-leapp Obsoletes: leapp-repository-data <= 0.6.1 Provides: leapp-repository-data <= 0.6.1 -# Former leapp subpackage that is part of the sos package since HEL 7.8 +# Former leapp subpackage that is part of the sos package since RHEL 7.8 Obsoletes: leapp-repository-sos-plugin <= 0.10.0 # Set the conflict to be sure this RPM is not upgraded automatically to @@ -103,7 +105,7 @@ Requires: leapp-repository-dependencies = %{leapp_repo_deps} # IMPORTANT: this is capability provided by the leapp framework rpm. # Check that 'version' instead of the real framework rpm version. -Requires: leapp-framework >= 2.2 +Requires: leapp-framework >= 3.1 # Since we provide sub-commands for the leapp utility, we expect the leapp # tool to be installed as well. @@ -117,6 +119,14 @@ Provides: leapp-repository = %{version}-%{release} # to install "leapp-upgrade" in the official docs. 
Provides: leapp-upgrade = %{version}-%{release} +# Provide leapp-commands so the framework could refer to them when customers +# do not have installed particular leapp-repositories +Provides: leapp-command(answer) +Provides: leapp-command(preupgrade) +Provides: leapp-command(upgrade) +Provides: leapp-command(rerun) +Provides: leapp-command(list-runs) + %description -n %{lpr_name} Leapp repositories for the in-place upgrade to the next major version @@ -160,6 +170,13 @@ Requires: python3-requests Requires: python3-six # required by SELinux actors Requires: policycoreutils-python-utils +# required by systemfacts, and several other actors +Requires: procps-ng +Requires: kmod +# since RHEL 8+ dracut does not have to be present on the system all the time +# and missing dracut could be killing situation for us :) +Requires: dracut + %endif ################################################## # end requirement @@ -177,13 +194,8 @@ Requires: policycoreutils-python-utils # APPLY PATCHES HERE # %%patch0001 -p1 %patch0001 -p1 -%patch0002 -p1 -%patch0003 -p1 %patch0004 -p1 -%patch0005 -p1 -%patch0006 -p1 -# enforce removal of packages below during the upgrade %build %if 0%{?rhel} == 7 @@ -257,6 +269,60 @@ done; # no files here %changelog +* Wed Sep 07 2022 Petr Stodulka - 0.17.0-3 +- Adding back instruction to not install rubygem-irb during the in-place upgrade + to prevent conflict between files +- Resolves: rhbz#2090995 + +* Wed Sep 07 2022 Petr Stodulka - 0.17.0-2 +- Update VDO checks to enable user to decide the system state on check failures + and undetermined block devices +- The VDO dialog and related VDO reports have been properly updated +- Resolves: rhbz#2096159 + +* Wed Aug 24 2022 Petr Stodulka - 0.17.0-1 +- Rebase to v0.17.0 +- Support upgrade path RHEL 8.7 -> 9.0 and RHEL SAP 8.6 -> 9.0 +- Provide and require leapp-repository-dependencies 7 +- Provide `leapp-command()` for each CLI command provided by leapp-repository +- Require dracut, kmod, procps-ng on RHEL 8+ 
+- Require leapp-framework >= 3.1 +- Add actors covering removal of NIS components on RHEL 9 +- Add checks for obsolete .NET versions +- Allow specifying the report schema v1.2.0 +- Check and handle upgrades with custom crypto policies +- Check and migrate OpenSSH configuration +- Check and migrate multipath configuration +- Check minimum memory requirements +- Do not create the upgrade bootloader entry when the dnf dry-run actor stops the upgrade +- Enable Base and SAP in-place upgrades on Azure +- Enable in-place upgrade in case LUKS volumes are Ceph OSDs +- Enable in-place upgrades in Azure RHEL 8 base images using RHUI +- Enable in-place upgrades on IBM z16 machines +- Enable the CRB repository for the upgrade only if enabled on the source system +- Fix cloud provider detection on AWS +- Fix detection of the latest kernel +- Fix issues caused by leapp artifacts from previous in-place upgrades +- Fix issues with false positive switch to emergency console during the upgrade +- Fix swap page size on aarch64 +- Fix the VDO scanner to skip partitions unrelated to VDO and adjust error messages +- Fix the false positive NFS storage detection on NFS servers and improve the report msg +- Fix the issues on systems with the LANGUAGE environment variable +- Fix the root directory scan to deal with non-utf8 filenames +- Handle upgrades of SAP systems on AWS +- Inform about necessary migrations related to bacula-director when installed on the system +- Inhibit the upgrade when /var/lib/leapp being mounted in a non-persistent fashion to prevent failures +- Inhibit the upgrade when /var/lib/leapp mounted with the noexec option to prevent failures +- Inhibit upgrade when NVIDIA driver is detected +- Make the application of custom selinux rules more reliable and do not override changes done by RPM scriptlets +- Migrate the OpenSSL configuration +- PESEventScanner actor has been fully refactored +- Report changes around SCP and SFTP +- Skip comment lines when parsing the GRUB 
configuration file +- Stop propagating the “debug” and ”enforcing=0” kernel cmdline options into the target kernel cmdline options +- Mass refactoring to be compatible with leapp v0.15.0 +- Resolves: rhbz#2090995, rhbz#2040470, rhbz#2092005, rhbz#2093220, rhbz#2095704, rhbz#2096159, rhbz#2100108, rhbz#2100110, rhbz#2103282, rhbz#2106904, rhbz#2110627 + * Wed Apr 27 2022 Petr Stodulka - 0.16.0-6 - Skip comments in /etc/default/grub during the parsing - Resolves: #1997076