diff --git a/README.md b/README.md
index 4de458b..7b614d8 100644
--- a/README.md
+++ b/README.md
@@ -1,8 +1,7 @@
+# Leapp Elevation Repository
**Before doing anything, please read [Leapp framework documentation](https://leapp.readthedocs.io/).**
----
-
## Troubleshooting
### Where can I report an issue or RFE related to the framework or other actors?
@@ -11,6 +10,13 @@
- Leapp framework: [https://github.com/oamg/leapp/issues/new/choose](https://github.com/oamg/leapp/issues/new/choose)
- Leapp actors: [https://github.com/oamg/leapp-repository/issues/new/choose](https://github.com/oamg/leapp-repository/issues/new/choose)
+### Where can I report an issue or RFE related to the AlmaLinux actor or data modifications?
+- GitHub issues are preferred:
+  - Leapp actors: [https://github.com/AlmaLinux/leapp-repository/issues/new/choose](https://github.com/AlmaLinux/leapp-repository/issues/new/choose)
+  - Leapp data: [https://github.com/AlmaLinux/leapp-data/issues/new/choose](https://github.com/AlmaLinux/leapp-data/issues/new/choose)
+
+### What data should be provided when making a report?
+
- When filing an issue, include:
  - Steps to reproduce the issue
  - *All files in /var/log/leapp*
@@ -25,7 +31,149 @@ Then you may attach only the `leapp-logs.tgz` file.
### Where can I seek help?
-We’ll gladly answer your questions and lead you to through any troubles with the
-actor development.
+We’ll gladly answer your questions and lead you through any troubles with the actor development.
You can reach us at IRC: `#leapp` on freenode.
+
+## Third-party integration
+
+If you would like to add your **signed** 3rd party packages into the upgrade process, you can use the third-party integration mechanism.
+
+There are four components for adding your information to the elevation process:
+- _map.json: repository mapping file
+- .repo: package repository information
+- .sigs: list of package signatures of vendor repositories
+- _pes.json: package migration event list
+
+All these files **must** have the same part.
+
+### Repository mapping file
+
+This JSON file provides information on mappings between source system repositories (repositories present on the system being upgraded) and target system repositories (package repositories to be used during the upgrade).
+
+The file contains two sections, `mapping` and `repositories`.
+
+`repositories` describes the source and target repositories themselves. Each entry should have a unique string ID specific to mapping/PES files - `pesid`, and a list of attributes:
+- major_version: major system version that this repository targets
+- repo_type: repository type, see below
+- repoid: repository ID, same as in *.repo files. Doesn't have to exactly match `pesid`
+- arch: system architecture for which this repository is relevant
+- channel: repository channel, see below
+
+
+**Repository types**:
+- rpm: normal RPM packages
+- srpm: source packages
+- debuginfo: packages with debug information
+
+**Repository channels**:
+- ga: general availability repositories
+  - AKA stable repositories.
+- beta: beta-testing repositories
+- eus, e4s, aus, tus: Extended Update Support, Update Services for SAP Solutions, Advanced Update Support, Telco Extended Update Support
+  - Red Hat update channel classification. Most of the time you won't need to use these.
+
+`mapping` establishes connections between described repositories.
+Each entry in the list defines a mapping between major system versions, and contains the following elements:
+- source_major_version: major system version from which the system would be upgraded
+- target_major_version: major system version to which the system would be elevated
+- entries: the list of repository mappings
+  - source: source repository, one that would be found on a pre-upgrade system
+  - target: a list of target upgrade repositories that would contain new package versions. Each source repository can map to one or multiple target repositories
+
+
+> **Important**: The repository mapping file also defines whether a vendor's packages will be included into the upgrade process at all.
+> If at least one source repository listed in the file is present on the system, the vendor is considered active, and package repositories/PES events are enabled - otherwise, they **will not** affect the upgrade process.
+
+### Package repository information
+
+This file defines the vendor's package repositories to be used during the upgrade.
+
+The file has the same format normal YUM/DNF package repository files do.
+
+> NOTE: The repositories listed in this file are only used *during* the upgrade. Package repositories on the post-upgrade system should be provided through updated packages or custom repository deployment.
+
+### Package signature list
+
+This file should contain the list of public signature headers that the packages are signed with, one entry per line.
+
+You can find signature headers for your packages by running the following command:
+
+`rpm -qa --queryformat "%{NAME} || %|DSAHEADER?{%{DSAHEADER:pgpsig}}:{%|RSAHEADER?{%{RSAHEADER:pgpsig}}:{(none)}|}|\n" `
+
+rpm will return an entry like the following:
+`package-name || DSA/SHA1, Mon Aug 23 08:17:13 2021, Key ID 8c55a6628608cb71`
+
+The value after "Key ID", in this case, `8c55a6628608cb71`, is what you should put into the signature list file.
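
The Key ID extraction described above can be scripted. Below is a minimal, hypothetical Python sketch (the helper name and parsing logic are illustrative assumptions, not part of this repository) that collects the key IDs from `rpm`'s output:

```python
import re


def extract_key_ids(rpm_output):
    """Collect short key IDs from lines shaped like:
    package-name || DSA/SHA1, Mon Aug 23 08:17:13 2021, Key ID 8c55a6628608cb71
    """
    key_ids = set()
    for line in rpm_output.splitlines():
        # The signature header ends with "Key ID" followed by 16 hex digits.
        match = re.search(r"Key ID ([0-9a-f]{16})\s*$", line)
        if match:
            key_ids.add(match.group(1))
    # One unique key ID per line is what the .sigs file expects.
    return sorted(key_ids)


sample = (
    "package-name || DSA/SHA1, Mon Aug 23 08:17:13 2021, Key ID 8c55a6628608cb71\n"
    "unsigned-package || (none)"
)
print("\n".join(extract_key_ids(sample)))
```

Each resulting ID then goes on its own line of the vendor's .sigs file.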
+
+### Package migration event list
+
+The Leapp upgrade process uses information from the AlmaLinux PES (Package Evolution System) to keep track of how packages change between the OS versions. This data is located in `leapp-data/vendors.d/_pes.json` in the GitHub repository and in `/etc/leapp/files/vendors.d/_pes.json` on a system being upgraded.
+
+> **Warning**: leapp doesn't force packages from out_packageset to be installed from the specific repository; instead, it enables repo from out_packageset and then DNF installs the latest package version from all enabled repos.
+
+#### Creating event lists through PES
+
+The recommended way to create new event lists is to use the PES mechanism.
+
+The web interface can create, manage and export groups of events to JSON files.
+
+This video demonstration walks through the steps of adding an action event group and exporting it as a JSON file to make use of it in the elevation process.
+
+> https://drive.google.com/file/d/1VqnQkUsxzLijIqySMBGu5lDrA72BVd5A/view?usp=sharing
+
+Please refer to the [PES contribution guide](https://wiki.almalinux.org/elevate/Contribution-guide.html) for additional information on entry fields.
+
+#### Manual editing
+
+To add new rules to the list, add a new entry to the `packageinfo` array.
+
+**Important**: actions from PES JSON files will be in effect only for those packages that are signed **and** have their signatures in one of the active .sigs files. Unsigned packages will be updated only if some signed package requires a new version, otherwise they will be left as they are.
+
+Required fields:
+
+- action: what action to perform on the listed package
+  - 0 - present
+  - 1 - removed
+  - 2 - deprecated
+  - 3 - replaced
+  - 4 - split
+  - 5 - merged
+  - 6 - moved to new repository
+  - 7 - renamed
+- arches: what system architectures the listed entry relates to
+- id: entry ID, must be unique
+- in_packageset: set of packages on the old system
+- out_packageset: set of packages to switch to, empty if removed or deprecated
+- initial_release: source OS release
+- release: target OS release
+
+`in_packageset` and `out_packageset` have the following format:
+
+```json
+  "in_packageset": {
+    "package": [
+      {
+        "module_stream": null,
+        "name": "PackageKit",
+        "repository": "base"
+      },
+      {
+        "module_stream": null,
+        "name": "PackageKit-yum",
+        "repository": "base"
+      }
+    ],
+    "set_id": 1592
+  },
+```
+
+For `in_packageset`, the `repository` field defines the package repository the package was installed from on the source system.
+For `out_packageset`, the `repository` field for packages should be the same as the "Target system repo name in PES" field in the associated vendor repository mapping file.
+
+### Providing the data
+
+Once you've prepared the vendor data for migration, you can make a pull request to https://github.com/AlmaLinux/leapp-data/ to make it available publicly.
+Files should be placed into the `vendors.d` subfolder if the data should be available for all elevation target OS variants, or into the `files//vendors.d/` if intended for a specific one.
+
+Alternatively, you can deploy the vendor files on a system prior to starting the upgrade. In this case, place the files into the folder `/etc/leapp/files/vendors.d/`.
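
To tie the mapping-file description above to a concrete shape, here is a minimal sketch for a fictional vendor. All pesids, repoids, and values are invented for illustration (real files may carry additional fields such as `rhui`); consult the existing files in the `leapp-data` repository for authoritative examples:

```json
{
  "version_format": "1.0.0",
  "mapping": [
    {
      "source_major_version": "7",
      "target_major_version": "8",
      "entries": [
        {
          "source": "example-base",
          "target": ["example-base"]
        }
      ]
    }
  ],
  "repositories": [
    {
      "pesid": "example-base",
      "entries": [
        {
          "major_version": "7",
          "repo_type": "rpm",
          "repoid": "example-repo-el7",
          "arch": "x86_64",
          "channel": "ga"
        },
        {
          "major_version": "8",
          "repo_type": "rpm",
          "repoid": "example-repo-el8",
          "arch": "x86_64",
          "channel": "ga"
        }
      ]
    }
  ]
}
```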
diff --git a/commands/command_utils.py b/commands/command_utils.py index da62c50..a8e7d76 100644 --- a/commands/command_utils.py +++ b/commands/command_utils.py @@ -12,7 +12,7 @@ LEAPP_UPGRADE_FLAVOUR_DEFAULT = 'default' LEAPP_UPGRADE_FLAVOUR_SAP_HANA = 'saphana' LEAPP_UPGRADE_PATHS = 'upgrade_paths.json' -VERSION_REGEX = re.compile(r"^([1-9]\d*)\.(\d+)$") +VERSION_REGEX = re.compile(r"^([1-9]\d*)(\.(\d+))?$") def check_version(version): diff --git a/repos/system_upgrade/common/actors/addupgradebootentry/libraries/addupgradebootentry.py b/repos/system_upgrade/common/actors/addupgradebootentry/libraries/addupgradebootentry.py index a2cede0..5ff1c76 100644 --- a/repos/system_upgrade/common/actors/addupgradebootentry/libraries/addupgradebootentry.py +++ b/repos/system_upgrade/common/actors/addupgradebootentry/libraries/addupgradebootentry.py @@ -17,7 +17,7 @@ def add_boot_entry(configs=None): '/usr/sbin/grubby', '--add-kernel', '{0}'.format(kernel_dst_path), '--initrd', '{0}'.format(initram_dst_path), - '--title', 'RHEL-Upgrade-Initramfs', + '--title', 'ELevate-Upgrade-Initramfs', '--copy-default', '--make-default', '--args', '{DEBUG} enforcing=0 rd.plymouth=0 plymouth.enable=0'.format(DEBUG=debug) diff --git a/repos/system_upgrade/common/actors/addupgradebootentry/tests/unit_test_addupgradebootentry.py b/repos/system_upgrade/common/actors/addupgradebootentry/tests/unit_test_addupgradebootentry.py index bb89c9f..2b8e7c8 100644 --- a/repos/system_upgrade/common/actors/addupgradebootentry/tests/unit_test_addupgradebootentry.py +++ b/repos/system_upgrade/common/actors/addupgradebootentry/tests/unit_test_addupgradebootentry.py @@ -42,7 +42,7 @@ run_args_add = [ '/usr/sbin/grubby', '--add-kernel', '/abc', '--initrd', '/def', - '--title', 'RHEL-Upgrade-Initramfs', + '--title', 'ELevate-Upgrade-Initramfs', '--copy-default', '--make-default', '--args', diff --git a/repos/system_upgrade/common/actors/checkenabledvendorrepos/actor.py 
b/repos/system_upgrade/common/actors/checkenabledvendorrepos/actor.py
new file mode 100644
index 0000000..5284aec
--- /dev/null
+++ b/repos/system_upgrade/common/actors/checkenabledvendorrepos/actor.py
@@ -0,0 +1,53 @@
+from leapp.actors import Actor
+from leapp.libraries.stdlib import api
+from leapp.models import (
+    RepositoriesFacts,
+    VendorSourceRepos,
+    ActiveVendorList,
+)
+from leapp.tags import FactsPhaseTag, IPUWorkflowTag
+
+
+class CheckEnabledVendorRepos(Actor):
+    """
+    Create a list of vendors whose repositories are present on the system.
+    Only those vendors' configurations (new repositories, PES actions, etc.)
+    will be included in the upgrade process.
+    """
+
+    name = "check_enabled_vendor_repos"
+    consumes = (RepositoriesFacts, VendorSourceRepos)
+    produces = (ActiveVendorList,)
+    tags = (IPUWorkflowTag, FactsPhaseTag.Before)
+
+    def process(self):
+        vendor_mapping_data = {}
+        active_vendors = set()
+
+        # Make a dict for easy lookup of repoid -> vendor name.
+        for vendor_src_repodata in api.consume(VendorSourceRepos):
+            for vendor_src_repo in vendor_src_repodata.source_repoids:
+                vendor_mapping_data[vendor_src_repo] = vendor_src_repodata.vendor
+
+        # Is the repo listed in the vendor map as from_repoid present on the system?
+        for repos in api.consume(RepositoriesFacts):
+            for repo_file in repos.repositories:
+                for repo in repo_file.data:
+                    self.log.debug(
+                        "Looking for repository {} in vendor maps".format(repo.repoid)
+                    )
+                    if repo.repoid in vendor_mapping_data:
+                        # If the vendor's repository is present in the system, count the vendor as active.
+ new_vendor = vendor_mapping_data[repo.repoid] + self.log.debug( + "Repository {} found, enabling vendor {}".format( + repo.repoid, new_vendor + ) + ) + active_vendors.add(new_vendor) + + if active_vendors: + self.log.debug("Active vendor list: {}".format(active_vendors)) + api.produce(ActiveVendorList(data=list(active_vendors))) + else: + self.log.info("No active vendors found, vendor list not generated") diff --git a/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/85sys-upgrade-redhat/do-upgrade.sh b/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/85sys-upgrade-redhat/do-upgrade.sh index acdb93b..da1e814 100755 --- a/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/85sys-upgrade-redhat/do-upgrade.sh +++ b/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/85sys-upgrade-redhat/do-upgrade.sh @@ -8,7 +8,7 @@ fi type getarg >/dev/null 2>&1 || . /lib/dracut-lib.sh get_rhel_major_release() { - local os_version=$(cat /etc/initrd-release | grep -o '^VERSION="[0-9][0-9]*\.' | grep -o '[0-9]*') + local os_version=$(cat /etc/initrd-release | grep -o '^VERSION="[0-9][0-9]*' | grep -o '[0-9]*') [ -z "$os_version" ] && { # This should not happen as /etc/initrd-release is supposed to have API # stability, but check is better than broken system. 
@@ -326,4 +326,3 @@ getarg 'rd.break=leapp-logs' && emergency_shell -n upgrade "Break after LEAPP sa sync mount -o "remount,$old_opts" $NEWROOT exit $result - diff --git a/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/90sys-upgrade/initrd-system-upgrade-generator b/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/90sys-upgrade/initrd-system-upgrade-generator index 14bd6e3..f6adacf 100755 --- a/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/90sys-upgrade/initrd-system-upgrade-generator +++ b/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/90sys-upgrade/initrd-system-upgrade-generator @@ -1,7 +1,7 @@ #!/bin/sh get_rhel_major_release() { - local os_version=$(cat /etc/initrd-release | grep -o '^VERSION="[0-9][0-9]*\.' | grep -o '[0-9]*') + local os_version=$(cat /etc/initrd-release | grep -o '^VERSION="[0-9][0-9]*' | grep -o '[0-9]*') [ -z "$os_version" ] && { # This should not happen as /etc/initrd-release is supposed to have API # stability, but check is better than broken system. 
diff --git a/repos/system_upgrade/common/actors/efibootorderfix/finalization/actor.py b/repos/system_upgrade/common/actors/efibootorderfix/finalization/actor.py index f42909f..caa94e0 100644 --- a/repos/system_upgrade/common/actors/efibootorderfix/finalization/actor.py +++ b/repos/system_upgrade/common/actors/efibootorderfix/finalization/actor.py @@ -1,17 +1,78 @@ +import os + +from leapp.libraries.stdlib import run, api from leapp.actors import Actor -from leapp.libraries.common import efi_reboot_fix +from leapp.models import InstalledTargetKernelVersion, KernelCmdlineArg, FirmwareFacts, MountEntry from leapp.tags import FinalizationPhaseTag, IPUWorkflowTag +from leapp.exceptions import StopActorExecutionError class EfiFinalizationFix(Actor): """ - Adjust EFI boot entry for final reboot + Ensure that EFI boot order is updated, which is particularly necessary + when upgrading to a different OS distro. Also rebuilds grub config + if necessary. """ name = 'efi_finalization_fix' - consumes = () + consumes = (KernelCmdlineArg, InstalledTargetKernelVersion, FirmwareFacts, MountEntry) produces = () tags = (FinalizationPhaseTag, IPUWorkflowTag) def process(self): - efi_reboot_fix.maybe_emit_updated_boot_entry() + is_system_efi = False + ff = next(self.consume(FirmwareFacts), None) + + dirname = { + 'AlmaLinux': 'almalinux', + 'CentOS Linux': 'centos', + 'CentOS Stream': 'centos', + 'Oracle Linux Server': 'redhat', + 'Red Hat Enterprise Linux': 'redhat', + 'Rocky Linux': 'rocky' + } + + efi_shimname_dict = { + 'x86_64': 'shimx64.efi', + 'aarch64': 'shimaa64.efi' + } + + with open('/etc/system-release', 'r') as sr: + release_line = next(line for line in sr if 'release' in line) + distro = release_line.split(' release ', 1)[0] + + efi_bootentry_label = distro + distro_dir = dirname.get(distro, 'default') + shim_filename = efi_shimname_dict.get(api.current_actor().configuration.architecture, 'shimx64.efi') + + shim_path = '/boot/efi/EFI/' + distro_dir + '/' + shim_filename + 
grub_cfg_path = '/boot/efi/EFI/' + distro_dir + '/grub.cfg' + bootmgr_path = '\\EFI\\' + distro_dir + '\\' + shim_filename + + has_efibootmgr = os.path.exists('/sbin/efibootmgr') + has_shim = os.path.exists(shim_path) + has_grub_cfg = os.path.exists(grub_cfg_path) + + if not ff: + raise StopActorExecutionError( + 'Could not identify system firmware', + details={'details': 'Actor did not receive FirmwareFacts message.'} + ) + + if not has_efibootmgr: + return + + for fact in self.consume(FirmwareFacts): + if fact.firmware == 'efi': + is_system_efi = True + break + + if is_system_efi and has_shim: + with open('/proc/mounts', 'r') as fp: + for line in fp: + if '/boot/efi' in line: + efidev = line.split(' ', 1)[0] + run(['/sbin/efibootmgr', '-c', '-d', efidev, '-p 1', '-l', bootmgr_path, '-L', efi_bootentry_label]) + + if not has_grub_cfg: + run(['/sbin/grub2-mkconfig', '-o', grub_cfg_path]) diff --git a/repos/system_upgrade/common/actors/ipuworkflowconfig/libraries/ipuworkflowconfig.py b/repos/system_upgrade/common/actors/ipuworkflowconfig/libraries/ipuworkflowconfig.py index edf978f..7fea4ec 100644 --- a/repos/system_upgrade/common/actors/ipuworkflowconfig/libraries/ipuworkflowconfig.py +++ b/repos/system_upgrade/common/actors/ipuworkflowconfig/libraries/ipuworkflowconfig.py @@ -47,6 +47,7 @@ def get_os_release(path): :return: `OSRelease` model if the file can be parsed :raises: `IOError` """ + os_version = '.'.join(platform.dist()[1].split('.')[:2]) try: with open(path) as f: data = dict(l.strip().split('=', 1) for l in f.readlines() if '=' in l) @@ -55,7 +56,7 @@ def get_os_release(path): name=data.get('NAME', '').strip('"'), pretty_name=data.get('PRETTY_NAME', '').strip('"'), version=data.get('VERSION', '').strip('"'), - version_id=data.get('VERSION_ID', '').strip('"'), + version_id=os_version, variant=data.get('VARIANT', '').strip('"') or None, variant_id=data.get('VARIANT_ID', '').strip('"') or None ) diff --git 
a/repos/system_upgrade/common/actors/peseventsscanner/actor.py b/repos/system_upgrade/common/actors/peseventsscanner/actor.py index fadf76b..7ef2664 100644 --- a/repos/system_upgrade/common/actors/peseventsscanner/actor.py +++ b/repos/system_upgrade/common/actors/peseventsscanner/actor.py @@ -1,3 +1,6 @@ +import os +import os.path + from leapp.actors import Actor from leapp.libraries.actor.peseventsscanner import pes_events_scanner from leapp.models import ( @@ -9,11 +12,15 @@ from leapp.models import ( RepositoriesMapping, RepositoriesSetupTasks, RHUIInfo, - RpmTransactionTasks + RpmTransactionTasks, + ActiveVendorList, ) from leapp.reporting import Report from leapp.tags import FactsPhaseTag, IPUWorkflowTag +LEAPP_FILES_DIR = "/etc/leapp/files" +VENDORS_DIR = "/etc/leapp/files/vendors.d" + class PesEventsScanner(Actor): """ @@ -32,9 +39,22 @@ class PesEventsScanner(Actor): RepositoriesMapping, RHUIInfo, RpmTransactionTasks, + ActiveVendorList, ) produces = (PESRpmTransactionTasks, RepositoriesSetupTasks, Report) tags = (IPUWorkflowTag, FactsPhaseTag) def process(self): - pes_events_scanner('/etc/leapp/files', 'pes-events.json') + pes_events_scanner(LEAPP_FILES_DIR, "pes-events.json") + + active_vendors = [] + for vendor_list in self.consume(ActiveVendorList): + active_vendors.extend(vendor_list.data) + + pes_json_suffix = "_pes.json" + if os.path.isdir(VENDORS_DIR): + vendor_pesfiles = list(filter(lambda vfile: pes_json_suffix in vfile, os.listdir(VENDORS_DIR))) + + for pesfile in vendor_pesfiles: + if pesfile[:-len(pes_json_suffix)] in active_vendors: + pes_events_scanner(VENDORS_DIR, pesfile) diff --git a/repos/system_upgrade/common/actors/peseventsscanner/libraries/peseventsscanner.py b/repos/system_upgrade/common/actors/peseventsscanner/libraries/peseventsscanner.py index 1be2caa..072de17 100644 --- a/repos/system_upgrade/common/actors/peseventsscanner/libraries/peseventsscanner.py +++ 
b/repos/system_upgrade/common/actors/peseventsscanner/libraries/peseventsscanner.py @@ -138,19 +138,26 @@ def _get_repositories_mapping(target_pesids): :return: Dictionary with all repositories mapped. """ - repositories_map_msgs = api.consume(RepositoriesMapping) - repositories_map_msg = next(repositories_map_msgs, None) - if list(repositories_map_msgs): - api.current_logger().warning('Unexpectedly received more than one RepositoriesMapping message.') - if not repositories_map_msg: - raise StopActorExecutionError( - 'Cannot parse RepositoriesMapping data properly', - details={'Problem': 'Did not receive a message with mapped repositories'} - ) + composite_mapping = [] + composite_repos = [] + + for repomap_msg in api.consume(RepositoriesMapping): + if not repomap_msg: + raise StopActorExecutionError( + 'Cannot parse RepositoriesMapping data properly', + details={'Problem': 'Received a blank message with mapped repositories'} + ) + composite_mapping.extend(repomap_msg.mapping) + composite_repos.extend(repomap_msg.repositories) + + composite_map_msg = RepositoriesMapping( + mapping=composite_mapping, + repositories=composite_repos + ) rhui_info = next(api.consume(RHUIInfo), RHUIInfo(provider='')) - repomap = peseventsscanner_repomap.RepoMapDataHandler(repositories_map_msg, cloud_provider=rhui_info.provider) + repomap = peseventsscanner_repomap.RepoMapDataHandler(composite_map_msg, cloud_provider=rhui_info.provider) # NOTE: We have to calculate expected target repositories # like in the setuptargetrepos actor. It's planned to handle this in different # way in future... 
@@ -324,7 +331,7 @@ def parse_pes_events(json_data): :return: List of Event tuples, where each event contains event type and input/output pkgs """ data = json.loads(json_data) - if not isinstance(data, dict) or not data.get('packageinfo'): + if not isinstance(data, dict) or data.get('packageinfo') is None: raise ValueError('Found PES data with invalid structure') return list(chain(*[parse_entry(entry) for entry in data['packageinfo']])) diff --git a/repos/system_upgrade/common/actors/peseventsscanner/tests/unit_test_peseventsscanner.py b/repos/system_upgrade/common/actors/peseventsscanner/tests/unit_test_peseventsscanner.py index f4b02e9..c22165e 100644 --- a/repos/system_upgrade/common/actors/peseventsscanner/tests/unit_test_peseventsscanner.py +++ b/repos/system_upgrade/common/actors/peseventsscanner/tests/unit_test_peseventsscanner.py @@ -492,6 +492,10 @@ def test_get_events(monkeypatch): assert reporting.create_report.called == 1 assert 'inhibitor' in reporting.create_report.report_fields['flags'] + with open(os.path.join(CUR_DIR, 'files/sample04.json')) as f: + events = parse_pes_events(f.read()) + assert len(events) == 0 + def test_pes_data_not_found(monkeypatch): def read_or_fetch_mocked(filename, directory="/etc/leapp/files", service=None, allow_empty=False): diff --git a/repos/system_upgrade/common/actors/redhatsignedrpmscanner/actor.py b/repos/system_upgrade/common/actors/redhatsignedrpmscanner/actor.py index 01f6df3..0bb0726 100644 --- a/repos/system_upgrade/common/actors/redhatsignedrpmscanner/actor.py +++ b/repos/system_upgrade/common/actors/redhatsignedrpmscanner/actor.py @@ -1,27 +1,65 @@ from leapp.actors import Actor from leapp.libraries.common import rhui -from leapp.models import InstalledRedHatSignedRPM, InstalledRPM, InstalledUnsignedRPM +from leapp.models import InstalledRedHatSignedRPM, InstalledRPM, InstalledUnsignedRPM, VendorSignatures from leapp.tags import FactsPhaseTag, IPUWorkflowTag -class RedHatSignedRpmScanner(Actor): +VENDOR_SIGS = 
{ + 'rhel': ['199e2f91fd431d51', + '5326810137017186', + '938a80caf21541eb', + 'fd372689897da07a', + '45689c882fa658e0'], + 'centos': ['24c6a8a7f4a80eb5', + '05b555b38483c65d', + '4eb84e71f2ee9d55'], + 'cloudlinux': ['8c55a6628608cb71'], + 'almalinux': ['51d6647ec21ad6ea', + 'd36cb86cb86b3716'], + 'rocky': ['15af5dac6d745a60', + '702d426d350d275d'], + 'ol': ['72f97b74ec551f03', + '82562ea9ad986da3', + 'bc4d06a08d8b756f'], + 'eurolinux': ['75c333f418cd4a9e', + 'b413acad6275f250', + 'f7ad3e5a1c9fd080'] +} + +VENDOR_PACKAGERS = { + "rhel": "Red Hat, Inc.", + "centos": "CentOS", + "cloudlinux": "CloudLinux Packaging Team", + "almalinux": "AlmaLinux Packaging Team", + "rocky": "infrastructure@rockylinux.org", + "eurolinux": "EuroLinux", +} + + +class VendorSignedRpmScanner(Actor): """Provide data about installed RPM Packages signed by Red Hat. After filtering the list of installed RPM packages by signature, a message with relevant data will be produced. """ - name = 'red_hat_signed_rpm_scanner' - consumes = (InstalledRPM,) - produces = (InstalledRedHatSignedRPM, InstalledUnsignedRPM,) + name = "vendor_signed_rpm_scanner" + consumes = (InstalledRPM, VendorSignatures) + produces = ( + InstalledRedHatSignedRPM, + InstalledUnsignedRPM, + ) tags = (IPUWorkflowTag, FactsPhaseTag) def process(self): - RH_SIGS = ['199e2f91fd431d51', - '5326810137017186', - '938a80caf21541eb', - 'fd372689897da07a', - '45689c882fa658e0'] + vendor = self.configuration.os_release.release_id + vendor_keys = sum(VENDOR_SIGS.values(), []) + vendor_packager = VENDOR_PACKAGERS.get(vendor, "not-available") + + for siglist in self.consume(VendorSignatures): + vendor_keys.extend(siglist.sigs) + + self.log.debug("Signature list: {}".format(vendor_keys)) signed_pkgs = InstalledRedHatSignedRPM() unsigned_pkgs = InstalledUnsignedRPM() @@ -32,11 +70,11 @@ class RedHatSignedRpmScanner(Actor): all_signed = [ env for env in env_vars - if env.name == 'LEAPP_DEVEL_RPMS_ALL_SIGNED' and env.value == '1' + if env.name 
== "LEAPP_DEVEL_RPMS_ALL_SIGNED" and env.value == "1" ] - def has_rhsig(pkg): - return any(key in pkg.pgpsig for key in RH_SIGS) + def has_vendorsig(pkg): + return any(key in pkg.pgpsig for key in vendor_keys) def is_gpg_pubkey(pkg): """Check if gpg-pubkey pkg exists or LEAPP_DEVEL_RPMS_ALL_SIGNED=1 @@ -44,15 +82,15 @@ class RedHatSignedRpmScanner(Actor): gpg-pubkey is not signed as it would require another package to verify its signature """ - return ( # pylint: disable-msg=consider-using-ternary - pkg.name == 'gpg-pubkey' - and pkg.packager.startswith('Red Hat, Inc.') - or all_signed + return ( # pylint: disable-msg=consider-using-ternary + pkg.name == "gpg-pubkey" + and (pkg.packager.startswith(vendor_packager)) + or all_signed ) def has_katello_prefix(pkg): """Whitelist the katello package.""" - return pkg.name.startswith('katello-ca-consumer') + return pkg.name.startswith("katello-ca-consumer") def is_azure_pkg(pkg): """Whitelist Azure config package.""" @@ -68,16 +106,24 @@ class RedHatSignedRpmScanner(Actor): for pkg in rpm_pkgs.items: if any( [ - has_rhsig(pkg), + has_vendorsig(pkg), is_gpg_pubkey(pkg), has_katello_prefix(pkg), is_azure_pkg(pkg), ] ): signed_pkgs.items.append(pkg) + self.log.debug( + "Package {} is signed, packager: {}, signature: {}".format( + pkg.name, pkg.packager, pkg.pgpsig + ) + ) continue unsigned_pkgs.items.append(pkg) + self.log.debug( + "Package {} is unsigned, packager: {}, signature: {}".format(pkg.name, pkg.packager, pkg.pgpsig) + ) self.produce(signed_pkgs) self.produce(unsigned_pkgs) diff --git a/repos/system_upgrade/common/actors/repositoriesmapping/libraries/repositoriesmapping.py b/repos/system_upgrade/common/actors/repositoriesmapping/libraries/repositoriesmapping.py index b2d00f3..e9458c5 100644 --- a/repos/system_upgrade/common/actors/repositoriesmapping/libraries/repositoriesmapping.py +++ b/repos/system_upgrade/common/actors/repositoriesmapping/libraries/repositoriesmapping.py @@ -1,12 +1,9 @@ -from collections import 
defaultdict -import json import os -from leapp.exceptions import StopActorExecutionError from leapp.libraries.common.config.version import get_target_major_version, get_source_major_version -from leapp.libraries.common.fetch import read_or_fetch +from leapp.libraries.common.repomaputils import RepoMapData, read_repofile, inhibit_upgrade from leapp.libraries.stdlib import api -from leapp.models import RepositoriesMapping, PESIDRepositoryEntry, RepoMapEntry +from leapp.models import RepositoriesMapping from leapp.models.fields import ModelViolationError OLD_REPOMAP_FILE = 'repomap.csv' @@ -16,144 +13,9 @@ REPOMAP_FILE = 'repomap.json' """The name of the new repository mapping file.""" -class RepoMapData(object): - VERSION_FORMAT = '1.0.0' - - def __init__(self): - self.repositories = [] - self.mapping = {} - - def add_repository(self, data, pesid): - """ - Add new PESIDRepositoryEntry with given pesid from the provided dictionary. - - :param data: A dict containing the data of the added repository. The dictionary structure corresponds - to the repositories entries in the repository mapping JSON schema. - :type data: Dict[str, str] - :param pesid: PES id of the repository family that the newly added repository belongs to. - :type pesid: str - """ - self.repositories.append(PESIDRepositoryEntry( - repoid=data['repoid'], - channel=data['channel'], - rhui=data.get('rhui', ''), - repo_type=data['repo_type'], - arch=data['arch'], - major_version=data['major_version'], - pesid=pesid - )) - - def get_repositories(self, valid_major_versions): - """ - Return the list of PESIDRepositoryEntry object matching the specified major versions. 
- """ - return [repo for repo in self.repositories if repo.major_version in valid_major_versions] - - def add_mapping(self, source_major_version, target_major_version, source_pesid, target_pesid): - """ - Add a new mapping entry that is mapping the source pesid to the destination pesid(s), - relevant in an IPU from the supplied source major version to the supplied target - major version. - - :param str source_major_version: Specifies the major version of the source system - for which the added mapping applies. - :param str target_major_version: Specifies the major version of the target system - for which the added mapping applies. - :param str source_pesid: PESID of the source repository. - :param Union[str|List[str]] target_pesid: A single target PESID or a list of target - PESIDs of the added mapping. - """ - # NOTE: it could be more simple, but I prefer to be sure the input data - # contains just one map per source PESID. - key = '{}:{}'.format(source_major_version, target_major_version) - rmap = self.mapping.get(key, defaultdict(set)) - self.mapping[key] = rmap - if isinstance(target_pesid, list): - rmap[source_pesid].update(target_pesid) - else: - rmap[source_pesid].add(target_pesid) - - def get_mappings(self, src_major_version, dst_major_version): - """ - Return the list of RepoMapEntry objects for the specified upgrade path. - - IOW, the whole mapping for specified IPU. - """ - key = '{}:{}'.format(src_major_version, dst_major_version) - rmap = self.mapping.get(key, None) - if not rmap: - return None - map_list = [] - for src_pesid in sorted(rmap.keys()): - map_list.append(RepoMapEntry(source=src_pesid, target=sorted(rmap[src_pesid]))) - return map_list - - @staticmethod - def load_from_dict(data): - if data['version_format'] != RepoMapData.VERSION_FORMAT: - raise ValueError( - 'The obtained repomap data has unsupported version of format.' 
- ' Get {} required {}' - .format(data['version_format'], RepoMapData.VERSION_FORMAT) - ) - - repomap = RepoMapData() - - # Load reposiories - existing_pesids = set() - for repo_family in data['repositories']: - existing_pesids.add(repo_family['pesid']) - for repo in repo_family['entries']: - repomap.add_repository(repo, repo_family['pesid']) - - # Load mappings - for mapping in data['mapping']: - for entry in mapping['entries']: - if not isinstance(entry['target'], list): - raise ValueError( - 'The target field of a mapping entry is not a list: {}' - .format(entry) - ) - - for pesid in [entry['source']] + entry['target']: - if pesid not in existing_pesids: - raise ValueError( - 'The {} pesid is not related to any repository.' - .format(pesid) - ) - repomap.add_mapping( - source_major_version=mapping['source_major_version'], - target_major_version=mapping['target_major_version'], - source_pesid=entry['source'], - target_pesid=entry['target'], - ) - return repomap - - -def _inhibit_upgrade(msg): - raise StopActorExecutionError( - msg, - details={'hint': ('Read documentation at the following link for more' - ' information about how to retrieve the valid file:' - ' https://access.redhat.com/articles/3664871')}) - - -def _read_repofile(repofile): - # NOTE: what about catch StopActorExecution error when the file cannot be - # obtained -> then check whether old_repomap file exists and in such a case - # inform user they have to provde the new repomap.json file (we have the - # warning now only which could be potentially overlooked) - try: - return json.loads(read_or_fetch(repofile)) - except ValueError: - # The data does not contain a valid json - _inhibit_upgrade('The repository mapping file is invalid: file does not contain a valid JSON object.') - return None # Avoids inconsistent-return-statements warning - - -def scan_repositories(read_repofile_func=_read_repofile): +def scan_repositories(read_repofile_func=read_repofile): """ - Scan the repository mapping file and 
produce RepositoriesMap msg. + Scan the repository mapping file and produce RepositoriesMapping msg. See the description of the actor for more details. """ @@ -185,10 +47,10 @@ def scan_repositories(read_repofile_func=_read_repofile): 'the JSON does not match required schema (wrong field type/value): {}' .format(err) ) - _inhibit_upgrade(err_message) + inhibit_upgrade(err_message) except KeyError as err: - _inhibit_upgrade( + inhibit_upgrade( 'The repository mapping file is invalid: the JSON is missing a required field: {}'.format(err)) except ValueError as err: # The error should contain enough information, so we do not need to clarify it further - _inhibit_upgrade('The repository mapping file is invalid: {}'.format(err)) + inhibit_upgrade('The repository mapping file is invalid: {}'.format(err)) diff --git a/repos/system_upgrade/common/actors/repositoriesmapping/tests/unit_test_repositoriesmapping.py b/repos/system_upgrade/common/actors/repositoriesmapping/tests/unit_test_repositoriesmapping.py index 3c0b04b..3480432 100644 --- a/repos/system_upgrade/common/actors/repositoriesmapping/tests/unit_test_repositoriesmapping.py +++ b/repos/system_upgrade/common/actors/repositoriesmapping/tests/unit_test_repositoriesmapping.py @@ -15,7 +15,6 @@ from leapp.models import PESIDRepositoryEntry CUR_DIR = os.path.dirname(os.path.abspath(__file__)) - @pytest.fixture def adjust_cwd(): previous_cwd = os.getcwd() diff --git a/repos/system_upgrade/common/actors/scanvendorrepofiles/actor.py b/repos/system_upgrade/common/actors/scanvendorrepofiles/actor.py new file mode 100644 index 0000000..6c7f3a3 --- /dev/null +++ b/repos/system_upgrade/common/actors/scanvendorrepofiles/actor.py @@ -0,0 +1,20 @@ +from leapp.actors import Actor +from leapp.libraries.actor import scanvendorrepofiles +from leapp.models import CustomTargetRepository, CustomTargetRepositoryFile, ActiveVendorList +from leapp.tags import FactsPhaseTag, IPUWorkflowTag +from leapp.libraries.stdlib import api + + +class 
ScanVendorRepofiles(Actor):
+    """
+    Load and produce custom repository data from vendor-provided files.
+    Only those vendors whose source system repoids were found on the system will be included.
+    """
+
+    name = "scan_vendor_repofiles"
+    consumes = (ActiveVendorList,)
+    produces = (CustomTargetRepository, CustomTargetRepositoryFile)
+    tags = (FactsPhaseTag, IPUWorkflowTag)
+
+    def process(self):
+        scanvendorrepofiles.process()
diff --git a/repos/system_upgrade/common/actors/scanvendorrepofiles/libraries/scanvendorrepofiles.py b/repos/system_upgrade/common/actors/scanvendorrepofiles/libraries/scanvendorrepofiles.py
new file mode 100644
index 0000000..5a65f5a
--- /dev/null
+++ b/repos/system_upgrade/common/actors/scanvendorrepofiles/libraries/scanvendorrepofiles.py
@@ -0,0 +1,68 @@
+import os
+
+from leapp.libraries.common import repofileutils
+from leapp.libraries.stdlib import api
+from leapp.models import CustomTargetRepository, CustomTargetRepositoryFile, ActiveVendorList
+
+
+VENDORS_DIR = "/etc/leapp/files/vendors.d/"
+REPOFILE_SUFFIX = ".repo"
+
+
+def process():
+    """
+    Produce CustomTargetRepository msgs for the vendor repo files inside the
+    vendors.d directory.
+
+    The CustomTargetRepository messages are produced only if a "from" vendor repository
+    listed inside its map matched one of the repositories active on the system.
+    """
+    if not os.path.isdir(VENDORS_DIR):
+        api.current_logger().debug(
+            "The {} directory doesn't exist. Nothing to do.".format(VENDORS_DIR)
+        )
+        return
+
+    for reponame in os.listdir(VENDORS_DIR):
+        if not reponame.endswith(REPOFILE_SUFFIX):
+            continue
+        # Cut the .repo part to get only the name.
+ vendor_name = reponame[:-5] + + active_vendors = [] + for vendor_list in api.consume(ActiveVendorList): + active_vendors.extend(vendor_list.data) + + api.current_logger().debug( + "Active vendor list: {}".format(active_vendors) + ) + + if vendor_name not in active_vendors: + api.current_logger().debug( + "Vendor {} not in active list, skipping".format(vendor_name) + ) + continue + + api.current_logger().debug( + "Vendor {} found in active list, processing file".format(vendor_name) + ) + full_repo_path = os.path.join(VENDORS_DIR, reponame) + repofile = repofileutils.parse_repofile(full_repo_path) + + api.produce(CustomTargetRepositoryFile(file=full_repo_path)) + for repo in repofile.data: + api.current_logger().debug( + "Loaded repository {} from file {}".format(repo.repoid, reponame) + ) + api.produce( + CustomTargetRepository( + repoid=repo.repoid, + name=repo.name, + baseurl=repo.baseurl, + enabled=repo.enabled, + ) + ) + + api.current_logger().info( + "The {} directory exists, vendor repositories loaded.".format(VENDORS_DIR) + ) diff --git a/repos/system_upgrade/common/actors/scanvendorrepofiles/tests/test_scanvendorrepofiles.py b/repos/system_upgrade/common/actors/scanvendorrepofiles/tests/test_scanvendorrepofiles.py new file mode 100644 index 0000000..cb5c7ab --- /dev/null +++ b/repos/system_upgrade/common/actors/scanvendorrepofiles/tests/test_scanvendorrepofiles.py @@ -0,0 +1,131 @@ +import os + +from leapp.libraries.actor import scancustomrepofile +from leapp.libraries.common import repofileutils +from leapp.libraries.common.testutils import produce_mocked +from leapp.libraries.stdlib import api + +from leapp.models import (CustomTargetRepository, CustomTargetRepositoryFile, + RepositoryData, RepositoryFile) + + +_REPODATA = [ + RepositoryData(repoid="repo1", name="repo1name", baseurl="repo1url", enabled=True), + RepositoryData(repoid="repo2", name="repo2name", baseurl="repo2url", enabled=False), + RepositoryData(repoid="repo3", name="repo3name", 
enabled=True), + RepositoryData(repoid="repo4", name="repo4name", mirrorlist="mirror4list", enabled=True), +] + +_CUSTOM_REPOS = [ + CustomTargetRepository(repoid="repo1", name="repo1name", baseurl="repo1url", enabled=True), + CustomTargetRepository(repoid="repo2", name="repo2name", baseurl="repo2url", enabled=False), + CustomTargetRepository(repoid="repo3", name="repo3name", baseurl=None, enabled=True), + CustomTargetRepository(repoid="repo4", name="repo4name", baseurl=None, enabled=True), +] + +_CUSTOM_REPO_FILE_MSG = CustomTargetRepositoryFile(file=scancustomrepofile.CUSTOM_REPO_PATH) + + +_TESTING_REPODATA = [ + RepositoryData(repoid="repo1-stable", name="repo1name", baseurl="repo1url", enabled=True), + RepositoryData(repoid="repo2-testing", name="repo2name", baseurl="repo2url", enabled=False), + RepositoryData(repoid="repo3-stable", name="repo3name", enabled=False), + RepositoryData(repoid="repo4-testing", name="repo4name", mirrorlist="mirror4list", enabled=True), +] + +_TESTING_CUSTOM_REPOS_STABLE_TARGET = [ + CustomTargetRepository(repoid="repo1-stable", name="repo1name", baseurl="repo1url", enabled=True), + CustomTargetRepository(repoid="repo2-testing", name="repo2name", baseurl="repo2url", enabled=False), + CustomTargetRepository(repoid="repo3-stable", name="repo3name", baseurl=None, enabled=False), + CustomTargetRepository(repoid="repo4-testing", name="repo4name", baseurl=None, enabled=True), +] + +_TESTING_CUSTOM_REPOS_BETA_TARGET = [ + CustomTargetRepository(repoid="repo1-stable", name="repo1name", baseurl="repo1url", enabled=True), + CustomTargetRepository(repoid="repo2-testing", name="repo2name", baseurl="repo2url", enabled=True), + CustomTargetRepository(repoid="repo3-stable", name="repo3name", baseurl=None, enabled=False), + CustomTargetRepository(repoid="repo4-testing", name="repo4name", baseurl=None, enabled=True), +] + +_PROCESS_STABLE_TARGET = "stable" +_PROCESS_BETA_TARGET = "beta" + + +class LoggerMocked(object): + def __init__(self): + 
self.infomsg = None + self.debugmsg = None + + def info(self, msg): + self.infomsg = msg + + def debug(self, msg): + self.debugmsg = msg + + def __call__(self): + return self + + +def test_no_repofile(monkeypatch): + monkeypatch.setattr(os.path, 'isfile', lambda dummy: False) + monkeypatch.setattr(api, 'produce', produce_mocked()) + monkeypatch.setattr(api, 'current_logger', LoggerMocked()) + scancustomrepofile.process() + msg = "The {} file doesn't exist. Nothing to do.".format(scancustomrepofile.CUSTOM_REPO_PATH) + assert api.current_logger.debugmsg == msg + assert not api.produce.called + + +def test_valid_repofile_exists(monkeypatch): + def _mocked_parse_repofile(fpath): + return RepositoryFile(file=fpath, data=_REPODATA) + monkeypatch.setattr(os.path, 'isfile', lambda dummy: True) + monkeypatch.setattr(api, 'produce', produce_mocked()) + monkeypatch.setattr(repofileutils, 'parse_repofile', _mocked_parse_repofile) + monkeypatch.setattr(api, 'current_logger', LoggerMocked()) + scancustomrepofile.process() + msg = "The {} file exists, custom repositories loaded.".format(scancustomrepofile.CUSTOM_REPO_PATH) + assert api.current_logger.infomsg == msg + assert api.produce.called == len(_CUSTOM_REPOS) + 1 + assert _CUSTOM_REPO_FILE_MSG in api.produce.model_instances + for crepo in _CUSTOM_REPOS: + assert crepo in api.produce.model_instances + + +def test_target_stable_repos(monkeypatch): + def _mocked_parse_repofile(fpath): + return RepositoryFile(file=fpath, data=_TESTING_REPODATA) + monkeypatch.setattr(os.path, 'isfile', lambda dummy: True) + monkeypatch.setattr(api, 'produce', produce_mocked()) + monkeypatch.setattr(repofileutils, 'parse_repofile', _mocked_parse_repofile) + + scancustomrepofile.process(_PROCESS_STABLE_TARGET) + assert api.produce.called == len(_TESTING_CUSTOM_REPOS_STABLE_TARGET) + 1 + for crepo in _TESTING_CUSTOM_REPOS_STABLE_TARGET: + assert crepo in api.produce.model_instances + + +def test_target_beta_repos(monkeypatch): + def 
_mocked_parse_repofile(fpath): + return RepositoryFile(file=fpath, data=_TESTING_REPODATA) + monkeypatch.setattr(os.path, 'isfile', lambda dummy: True) + monkeypatch.setattr(api, 'produce', produce_mocked()) + monkeypatch.setattr(repofileutils, 'parse_repofile', _mocked_parse_repofile) + + scancustomrepofile.process(_PROCESS_BETA_TARGET) + assert api.produce.called == len(_TESTING_CUSTOM_REPOS_BETA_TARGET) + 1 + for crepo in _TESTING_CUSTOM_REPOS_BETA_TARGET: + assert crepo in api.produce.model_instances + + +def test_empty_repofile_exists(monkeypatch): + def _mocked_parse_repofile(fpath): + return RepositoryFile(file=fpath, data=[]) + monkeypatch.setattr(os.path, 'isfile', lambda dummy: True) + monkeypatch.setattr(api, 'produce', produce_mocked()) + monkeypatch.setattr(repofileutils, 'parse_repofile', _mocked_parse_repofile) + monkeypatch.setattr(api, 'current_logger', LoggerMocked()) + scancustomrepofile.process() + msg = "The {} file exists, but is empty. Nothing to do.".format(scancustomrepofile.CUSTOM_REPO_PATH) + assert api.current_logger.infomsg == msg + assert not api.produce.called diff --git a/repos/system_upgrade/common/actors/setuptargetrepos/actor.py b/repos/system_upgrade/common/actors/setuptargetrepos/actor.py index 00de073..fb86639 100644 --- a/repos/system_upgrade/common/actors/setuptargetrepos/actor.py +++ b/repos/system_upgrade/common/actors/setuptargetrepos/actor.py @@ -12,6 +12,7 @@ from leapp.models import ( UsedRepositories ) from leapp.tags import FactsPhaseTag, IPUWorkflowTag +from leapp.libraries.stdlib import api class SetupTargetRepos(Actor): diff --git a/repos/system_upgrade/common/actors/setuptargetrepos/libraries/setuptargetrepos.py b/repos/system_upgrade/common/actors/setuptargetrepos/libraries/setuptargetrepos.py index 3f34aed..2992037 100644 --- a/repos/system_upgrade/common/actors/setuptargetrepos/libraries/setuptargetrepos.py +++ b/repos/system_upgrade/common/actors/setuptargetrepos/libraries/setuptargetrepos.py @@ -59,9 +59,20 
@@ def _get_used_repo_dict(): def _setup_repomap_handler(src_repoids): - repo_mappig_msg = next(api.consume(RepositoriesMapping), RepositoriesMapping()) + combined_mapping = [] + combined_repositories = [] + # Depending on whether there are any vendors present, we might get more than one message. + for msg in api.consume(RepositoriesMapping): + combined_mapping.extend(msg.mapping) + combined_repositories.extend(msg.repositories) + + combined_repomapping = RepositoriesMapping( + mapping=combined_mapping, + repositories=combined_repositories + ) + rhui_info = next(api.consume(RHUIInfo), RHUIInfo(provider='')) - repomap = setuptargetrepos_repomap.RepoMapDataHandler(repo_mappig_msg, cloud_provider=rhui_info.provider) + repomap = setuptargetrepos_repomap.RepoMapDataHandler(combined_repomapping, cloud_provider=rhui_info.provider) # TODO(pstodulk): what about skip this completely and keep the default 'ga'..? default_channels = setuptargetrepos_repomap.get_default_repository_channels(repomap, src_repoids) repomap.set_default_channels(default_channels) diff --git a/repos/system_upgrade/common/actors/systemfacts/actor.py b/repos/system_upgrade/common/actors/systemfacts/actor.py index 59b12c8..85d4a09 100644 --- a/repos/system_upgrade/common/actors/systemfacts/actor.py +++ b/repos/system_upgrade/common/actors/systemfacts/actor.py @@ -47,7 +47,7 @@ class SystemFactsActor(Actor): GrubCfgBios, Report ) - tags = (IPUWorkflowTag, FactsPhaseTag,) + tags = (IPUWorkflowTag, FactsPhaseTag.Before,) def process(self): self.produce(systemfacts.get_sysctls_status()) diff --git a/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py b/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py index 7a8bd99..f59c909 100644 --- a/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py +++ b/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py @@ -592,6 +592,7 @@ def 
_install_custom_repofiles(context, custom_repofiles):
     """
     for rfile in custom_repofiles:
         _dst_path = os.path.join('/etc/yum.repos.d', os.path.basename(rfile.file))
+        api.current_logger().debug("Copying {} to {}".format(rfile.file, _dst_path))
         context.copy_to(rfile.file, _dst_path)
diff --git a/repos/system_upgrade/common/actors/vendorreposignaturescanner/actor.py b/repos/system_upgrade/common/actors/vendorreposignaturescanner/actor.py
new file mode 100644
index 0000000..f74de27
--- /dev/null
+++ b/repos/system_upgrade/common/actors/vendorreposignaturescanner/actor.py
@@ -0,0 +1,70 @@
+import os
+
+from leapp.actors import Actor
+from leapp.models import VendorSignatures, ActiveVendorList
+from leapp.tags import FactsPhaseTag, IPUWorkflowTag
+
+
+VENDORS_DIR = "/etc/leapp/files/vendors.d/"
+SIGFILE_SUFFIX = ".sigs"
+
+
+class VendorRepoSignatureScanner(Actor):
+    """
+    Produce VendorSignatures msgs for the vendor signature files inside the
+    vendors.d directory.
+
+    The messages are produced only if a "from" vendor repository
+    listed inside its map matched one of the repositories active on the system.
+    """
+
+    name = 'vendor_repo_signature_scanner'
+    consumes = (ActiveVendorList,)
+    produces = (VendorSignatures,)
+    tags = (IPUWorkflowTag, FactsPhaseTag.Before)
+
+    def process(self):
+        if not os.path.isdir(VENDORS_DIR):
+            self.log.debug(
+                "The {} directory doesn't exist. Nothing to do.".format(VENDORS_DIR)
+            )
+            return
+
+        for sigfile_name in os.listdir(VENDORS_DIR):
+            if not sigfile_name.endswith(SIGFILE_SUFFIX):
+                continue
+            # Cut the suffix part to get only the name.
+ vendor_name = sigfile_name[:-5] + + active_vendors = [] + for vendor_list in self.consume(ActiveVendorList): + active_vendors.extend(vendor_list.data) + + self.log.debug( + "Active vendor list: {}".format(active_vendors) + ) + + if vendor_name not in active_vendors: + self.log.debug( + "Vendor {} not in active list, skipping".format(vendor_name) + ) + continue + + self.log.debug( + "Vendor {} found in active list, processing file".format(vendor_name) + ) + + full_sigfile_path = os.path.join(VENDORS_DIR, sigfile_name) + with open(full_sigfile_path) as f: + signatures = [line for line in f.read().splitlines() if line] + + self.produce( + VendorSignatures( + vendor=vendor_name, + sigs=signatures, + ) + ) + + self.log.info( + "The {} directory exists, vendor signatures loaded.".format(VENDORS_DIR) + ) diff --git a/repos/system_upgrade/common/actors/vendorrepositoriesmapping/actor.py b/repos/system_upgrade/common/actors/vendorrepositoriesmapping/actor.py new file mode 100644 index 0000000..1325647 --- /dev/null +++ b/repos/system_upgrade/common/actors/vendorrepositoriesmapping/actor.py @@ -0,0 +1,19 @@ +from leapp.actors import Actor +# from leapp.libraries.common.repomaputils import scan_vendor_repomaps, VENDOR_REPOMAP_DIR +from leapp.libraries.actor.vendorrepositoriesmapping import scan_vendor_repomaps +from leapp.models import VendorSourceRepos, RepositoriesMapping +from leapp.tags import FactsPhaseTag, IPUWorkflowTag + + +class VendorRepositoriesMapping(Actor): + """ + Scan the vendor repository mapping files and provide the data to other actors. 
+ """ + + name = "vendor_repositories_mapping" + consumes = () + produces = (RepositoriesMapping, VendorSourceRepos,) + tags = (IPUWorkflowTag, FactsPhaseTag.Before) + + def process(self): + scan_vendor_repomaps() diff --git a/repos/system_upgrade/common/actors/vendorrepositoriesmapping/libraries/vendorrepositoriesmapping.py b/repos/system_upgrade/common/actors/vendorrepositoriesmapping/libraries/vendorrepositoriesmapping.py new file mode 100644 index 0000000..204d0dc --- /dev/null +++ b/repos/system_upgrade/common/actors/vendorrepositoriesmapping/libraries/vendorrepositoriesmapping.py @@ -0,0 +1,66 @@ +import os + +from leapp.libraries.common.config.version import get_target_major_version, get_source_major_version +from leapp.libraries.common.repomaputils import RepoMapData, read_repofile, inhibit_upgrade +from leapp.libraries.stdlib import api +from leapp.models import VendorSourceRepos, RepositoriesMapping +from leapp.models.fields import ModelViolationError + + +VENDORS_DIR = "/etc/leapp/files/vendors.d" +"""The folder containing the vendor repository mapping files.""" + + +def read_repomap_file(repomap_file, read_repofile_func, vendor_name): + json_data = read_repofile_func(repomap_file, VENDORS_DIR) + try: + repomap_data = RepoMapData.load_from_dict(json_data) + + api.produce(VendorSourceRepos( + vendor=vendor_name, + source_repoids=repomap_data.get_version_repoids(get_source_major_version()) + )) + + mapping = repomap_data.get_mappings(get_source_major_version(), get_target_major_version()) + valid_major_versions = [get_source_major_version(), get_target_major_version()] + api.produce(RepositoriesMapping( + mapping=mapping, + repositories=repomap_data.get_repositories(valid_major_versions) + )) + except ModelViolationError as err: + err_message = ( + 'The repository mapping file is invalid: ' + 'the JSON does not match required schema (wrong field type/value): {}' + .format(err) + ) + inhibit_upgrade(err_message) + except KeyError as err: + inhibit_upgrade( 
+            'The repository mapping file is invalid: the JSON is missing a required field: {}'.format(err))
+    except ValueError as err:
+        # The error should contain enough information, so we do not need to clarify it further
+        inhibit_upgrade('The repository mapping file is invalid: {}'.format(err))
+
+
+def scan_vendor_repomaps(read_repofile_func=read_repofile):
+    """
+    Scan the vendor repository mapping files and produce RepositoriesMapping msgs.
+
+    See the description of the actor for more details.
+    """
+
+    map_json_suffix = "_map.json"
+    if os.path.isdir(VENDORS_DIR):
+        vendor_mapfiles = list(filter(lambda vfile: vfile.endswith(map_json_suffix), os.listdir(VENDORS_DIR)))
+
+        for mapfile in vendor_mapfiles:
+            read_repomap_file(mapfile, read_repofile_func, mapfile[:-len(map_json_suffix)])
+    else:
+        api.current_logger().debug(
+            "The {} directory doesn't exist. Nothing to do.".format(VENDORS_DIR)
+        )
+    # vendor_repomap_collection = scan_vendor_repomaps(VENDOR_REPOMAP_DIR)
+    # if vendor_repomap_collection:
+    #     self.produce(vendor_repomap_collection)
+    #     for repomap in vendor_repomap_collection.maps:
+    #         self.produce(repomap)
diff --git a/repos/system_upgrade/common/libraries/config/version.py b/repos/system_upgrade/common/libraries/config/version.py
index 03f3cd4..7fcb6aa 100644
--- a/repos/system_upgrade/common/libraries/config/version.py
+++ b/repos/system_upgrade/common/libraries/config/version.py
@@ -13,8 +13,8 @@ OP_MAP = {
 _SUPPORTED_VERSIONS = {
     # Note: 'rhel-alt' is detected when on 'rhel' with kernel 4.x
-    '7': {'rhel': ['7.9'], 'rhel-alt': ['7.6'], 'rhel-saphana': ['7.9']},
-    '8': {'rhel': ['8.5', '8.6']},
+    '7': {'rhel': ['7.9'], 'rhel-alt': ['7.6'], 'rhel-saphana': ['7.9'], 'centos': ['7.9'], 'eurolinux': ['7.9'], 'ol': ['7.9']},
+    '8': {'rhel': ['8.5', '8.6'], 'centos': ['8.5'], 'almalinux': ['8.6'], 'eurolinux': ['8.6'], 'ol': ['8.6'], 'rocky': ['8.6']},
 }
diff --git a/repos/system_upgrade/common/libraries/dnfplugin.py b/repos/system_upgrade/common/libraries/dnfplugin.py
index 4010e9f..00323a7 100644
--- a/repos/system_upgrade/common/libraries/dnfplugin.py
+++ b/repos/system_upgrade/common/libraries/dnfplugin.py
@@ -4,6 +4,8 @@ import json
 import os
 import shutil
+import six
+
 from leapp.exceptions import StopActorExecutionError
 from leapp.libraries.common import dnfconfig, guards, mounting, overlaygen, rhsm, utils
 from leapp.libraries.common.config.version import get_target_major_version, get_target_version
@@ -213,10 +215,15 @@ def _transaction(context, stage, target_repoids, tasks, plugin_info, test=False,
             message='Failed to execute dnf. Reason: {}'.format(str(e))
         )
     except CalledProcessError as e:
+        if six.PY2:
+            e.stdout = e.stdout.encode('utf-8', 'xmlcharrefreplace')
+            e.stderr = e.stderr.encode('utf-8', 'xmlcharrefreplace')
+
         api.current_logger().error('DNF execution failed: ')
         raise StopActorExecutionError(
             message='DNF execution failed with non zero exit code.\nSTDOUT:\n{stdout}\nSTDERR:\n{stderr}'.format(
-                stdout=e.stdout, stderr=e.stderr)
+                stdout=e.stdout, stderr=e.stderr
+            )
         )
     finally:
         if stage == 'check':
diff --git a/repos/system_upgrade/common/libraries/fetch.py b/repos/system_upgrade/common/libraries/fetch.py
index 1c58148..37313b6 100644
--- a/repos/system_upgrade/common/libraries/fetch.py
+++ b/repos/system_upgrade/common/libraries/fetch.py
@@ -73,7 +73,7 @@ def read_or_fetch(filename, directory="/etc/leapp/files", service=None, allow_em
         data = f.read()
     if not allow_empty and not data:
         _raise_error(local_path, "File {lp} exists but is empty".format(lp=local_path))
-    logger.warning("File {lp} successfully read ({l} bytes)".format(lp=local_path, l=len(data)))
+    logger.debug("File {lp} successfully read ({l} bytes)".format(lp=local_path, l=len(data)))
     return data
 except EnvironmentError:
     _raise_error(local_path, "File {lp} exists but couldn't be read".format(lp=local_path))
diff --git a/repos/system_upgrade/common/libraries/repomaputils.py b/repos/system_upgrade/common/libraries/repomaputils.py
new file mode 100644
index 0000000..5c41620
--- /dev/null
+++ b/repos/system_upgrade/common/libraries/repomaputils.py
@@ -0,0 +1,147 @@
+import json
+from collections import defaultdict
+
+from leapp.exceptions import StopActorExecutionError
+from leapp.libraries.common.fetch import read_or_fetch
+from leapp.models import PESIDRepositoryEntry, RepoMapEntry
+
+
+def inhibit_upgrade(msg):
+    raise StopActorExecutionError(
+        msg,
+        details={'hint': ('Read documentation at the following link for more'
+                          ' information about how to retrieve the valid file:'
+                          ' https://access.redhat.com/articles/3664871')})
+
+
+def read_repofile(repofile, directory="/etc/leapp/files"):
+    # NOTE: what about catching the StopActorExecution error when the file cannot
+    # be obtained -> then check whether the old_repomap file exists and in such a
+    # case inform the user they have to provide the new repomap.json file (we
+    # currently have only a warning, which could be overlooked)
+    try:
+        return json.loads(read_or_fetch(repofile, directory))
+    except ValueError:
+        # The data does not contain valid JSON
+        inhibit_upgrade('The repository mapping file is invalid: file does not contain a valid JSON object.')
+    return None  # Avoids inconsistent-return-statements warning
+
+
+class RepoMapData(object):
+    VERSION_FORMAT = '1.0.0'
+
+    def __init__(self):
+        self.repositories = []
+        self.mapping = {}
+
+    def add_repository(self, data, pesid):
+        """
+        Add new PESIDRepositoryEntry with given pesid from the provided dictionary.
+
+        :param data: A dict containing the data of the added repository. The dictionary structure corresponds
+            to the repositories entries in the repository mapping JSON schema.
+        :type data: Dict[str, str]
+        :param pesid: PES id of the repository family that the newly added repository belongs to.
+ :type pesid: str + """ + self.repositories.append(PESIDRepositoryEntry( + repoid=data['repoid'], + channel=data['channel'], + rhui=data.get('rhui', ''), + repo_type=data['repo_type'], + arch=data['arch'], + major_version=data['major_version'], + pesid=pesid + )) + + def get_repositories(self, valid_major_versions): + """ + Return the list of PESIDRepositoryEntry object matching the specified major versions. + """ + return [repo for repo in self.repositories if repo.major_version in valid_major_versions] + + def get_version_repoids(self, major_version): + """ + Return the list of repository ID strings for repositories matching the specified major version. + """ + return [repo.repoid for repo in self.repositories if repo.major_version == major_version] + + def add_mapping(self, source_major_version, target_major_version, source_pesid, target_pesid): + """ + Add a new mapping entry that is mapping the source pesid to the destination pesid(s), + relevant in an IPU from the supplied source major version to the supplied target + major version. + + :param str source_major_version: Specifies the major version of the source system + for which the added mapping applies. + :param str target_major_version: Specifies the major version of the target system + for which the added mapping applies. + :param str source_pesid: PESID of the source repository. + :param Union[str|List[str]] target_pesid: A single target PESID or a list of target + PESIDs of the added mapping. + """ + # NOTE: it could be more simple, but I prefer to be sure the input data + # contains just one map per source PESID. 
+        key = '{}:{}'.format(source_major_version, target_major_version)
+        rmap = self.mapping.get(key, defaultdict(set))
+        self.mapping[key] = rmap
+        if isinstance(target_pesid, list):
+            rmap[source_pesid].update(target_pesid)
+        else:
+            rmap[source_pesid].add(target_pesid)
+
+    def get_mappings(self, src_major_version, dst_major_version):
+        """
+        Return the list of RepoMapEntry objects for the specified upgrade path.
+
+        IOW, the whole mapping for specified IPU.
+        """
+        key = '{}:{}'.format(src_major_version, dst_major_version)
+        rmap = self.mapping.get(key, None)
+        if not rmap:
+            return None
+        map_list = []
+        for src_pesid in sorted(rmap.keys()):
+            map_list.append(RepoMapEntry(source=src_pesid, target=sorted(rmap[src_pesid])))
+        return map_list
+
+    @staticmethod
+    def load_from_dict(data):
+        if data['version_format'] != RepoMapData.VERSION_FORMAT:
+            raise ValueError(
+                'The obtained repomap data has an unsupported format version.'
+                ' Got {}, required {}'
+                .format(data['version_format'], RepoMapData.VERSION_FORMAT)
+            )
+
+        repomap = RepoMapData()
+
+        # Load repositories
+        existing_pesids = set()
+        for repo_family in data['repositories']:
+            existing_pesids.add(repo_family['pesid'])
+            for repo in repo_family['entries']:
+                repomap.add_repository(repo, repo_family['pesid'])
+
+        # Load mappings
+        for mapping in data['mapping']:
+            for entry in mapping['entries']:
+                if not isinstance(entry['target'], list):
+                    raise ValueError(
+                        'The target field of a mapping entry is not a list: {}'
+                        .format(entry)
+                    )
+
+                for pesid in [entry['source']] + entry['target']:
+                    if pesid not in existing_pesids:
+                        raise ValueError(
+                            'The {} pesid is not related to any repository.'
+ .format(pesid) + ) + repomap.add_mapping( + source_major_version=mapping['source_major_version'], + target_major_version=mapping['target_major_version'], + source_pesid=entry['source'], + target_pesid=entry['target'], + ) + return repomap diff --git a/repos/system_upgrade/common/libraries/rhsm.py b/repos/system_upgrade/common/libraries/rhsm.py index b7e4b21..dc038bf 100644 --- a/repos/system_upgrade/common/libraries/rhsm.py +++ b/repos/system_upgrade/common/libraries/rhsm.py @@ -92,7 +92,7 @@ def _handle_rhsm_exceptions(hint=None): def skip_rhsm(): """Check whether we should skip RHSM related code.""" - return get_env('LEAPP_NO_RHSM', '0') == '1' + return True def with_rhsm(f): diff --git a/repos/system_upgrade/common/models/activevendorlist.py b/repos/system_upgrade/common/models/activevendorlist.py new file mode 100644 index 0000000..de4056f --- /dev/null +++ b/repos/system_upgrade/common/models/activevendorlist.py @@ -0,0 +1,7 @@ +from leapp.models import Model, fields +from leapp.topics import VendorTopic + + +class ActiveVendorList(Model): + topic = VendorTopic + data = fields.List(fields.String()) diff --git a/repos/system_upgrade/common/models/repositoriesmap.py b/repos/system_upgrade/common/models/repositoriesmap.py index c187333..a068a70 100644 --- a/repos/system_upgrade/common/models/repositoriesmap.py +++ b/repos/system_upgrade/common/models/repositoriesmap.py @@ -92,3 +92,4 @@ class RepositoriesMapping(Model): mapping = fields.List(fields.Model(RepoMapEntry), default=[]) repositories = fields.List(fields.Model(PESIDRepositoryEntry), default=[]) + diff --git a/repos/system_upgrade/common/models/vendorsignatures.py b/repos/system_upgrade/common/models/vendorsignatures.py new file mode 100644 index 0000000..f456aec --- /dev/null +++ b/repos/system_upgrade/common/models/vendorsignatures.py @@ -0,0 +1,8 @@ +from leapp.models import Model, fields +from leapp.topics import VendorTopic + + +class VendorSignatures(Model): + topic = VendorTopic + vendor = 
fields.String() + sigs = fields.List(fields.String()) diff --git a/repos/system_upgrade/common/models/vendorsourcerepos.py b/repos/system_upgrade/common/models/vendorsourcerepos.py new file mode 100644 index 0000000..b7a219b --- /dev/null +++ b/repos/system_upgrade/common/models/vendorsourcerepos.py @@ -0,0 +1,12 @@ +from leapp.models import Model, fields +from leapp.topics import VendorTopic + + +class VendorSourceRepos(Model): + """ + This model contains the data on all source repositories associated with a specific vendor. + Its data is used to determine whether the vendor should be included into the upgrade process. + """ + topic = VendorTopic + vendor = fields.String() + source_repoids = fields.List(fields.String()) diff --git a/repos/system_upgrade/common/topics/vendortopic.py b/repos/system_upgrade/common/topics/vendortopic.py new file mode 100644 index 0000000..014b7af --- /dev/null +++ b/repos/system_upgrade/common/topics/vendortopic.py @@ -0,0 +1,5 @@ +from leapp.topics import Topic + + +class VendorTopic(Topic): + name = 'vendor_topic'
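The vendor-file actors in this patch share one filename convention: the vendor name is whatever precedes the `.repo`, `.sigs`, or `_map.json` suffix inside `vendors.d`, and a file is processed only when that name appears in `ActiveVendorList`. A minimal standalone sketch of that gating logic follows; the helper names (`vendor_name`, `files_to_process`) are illustrative, not part of the leapp API:

```python
# Sketch of the vendors.d filename convention used by the actors above.
# Assumption: vendor name == filename minus one of the recognized suffixes.
SUFFIXES = (".repo", ".sigs", "_map.json")


def vendor_name(filename):
    """Strip the recognized suffix to recover the vendor name, or None."""
    for suffix in SUFFIXES:
        if filename.endswith(suffix):
            return filename[:-len(suffix)]
    return None


def files_to_process(filenames, active_vendors):
    """Keep only files whose vendor appears in the active vendor list."""
    result = []
    for fname in filenames:
        name = vendor_name(fname)
        if name is not None and name in active_vendors:
            result.append(fname)
    return result
```

So `mariadb.repo`, `mariadb.sigs`, and `mariadb_map.json` all resolve to the vendor `mariadb`, and all three are skipped if `mariadb` never produced a match in `VendorSourceRepos`.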