leapp-repository/SOURCES/leapp-repository-0.16.0-ele...


diff --git a/Makefile b/Makefile
index 3e51e3c..b931920 100644
--- a/Makefile
+++ b/Makefile
@@ -209,7 +209,7 @@ install-deps:
case $(_PYTHON_VENV) in python3*) yum install -y ${shell echo $(_PYTHON_VENV) | tr -d .}; esac
@# in centos:7 python dependencies required gcc
case $(_PYTHON_VENV) in python3*) yum install gcc -y; esac
- virtualenv --system-site-packages -p /usr/bin/$(_PYTHON_VENV) $(VENVNAME); \
+ virtualenv -p /usr/bin/$(_PYTHON_VENV) $(VENVNAME); \
. $(VENVNAME)/bin/activate; \
pip install -U pip; \
pip install --upgrade setuptools; \
diff --git a/README.md b/README.md
index 4de458b..c82651d 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,61 @@
-**Before doing anything, please read
-[Leapp framework documentation](https://leapp.readthedocs.io/).**
+# Leapp ELevate Repository
----
+**Before doing anything, please read [Leapp framework documentation](https://leapp.readthedocs.io/).**
+
+## Running
+Make sure your system is fully updated before starting the upgrade process.
+
+```bash
+sudo yum update -y
+```
+
+Install `elevate-release` package with the project repo and GPG key.
+
+`sudo yum install -y http://repo.almalinux.org/elevate/elevate-release-latest-el7.noarch.rpm`
+
+Install leapp packages and migration data for the OS you want to upgrade. Possible options are:
+ - leapp-data-almalinux
+ - leapp-data-centos
+ - leapp-data-eurolinux
+ - leapp-data-oraclelinux
+ - leapp-data-rocky
+
+`sudo yum install -y leapp-upgrade leapp-data-almalinux`
+
+Start a preupgrade check. During this phase, the Leapp utility creates a special `/var/log/leapp/leapp-report.txt` file that lists possible problems and recommended solutions. No RPM packages will be installed at this phase.
+
+`sudo leapp preupgrade`
+
+The preupgrade process may stall with the following message:
+> Inhibitor: Newest installed kernel not in use
+
+Make sure your system is running the latest installed kernel before proceeding with the upgrade. If you updated the system recently, a reboot may be sufficient. Otherwise, edit your GRUB configuration to boot the newest kernel.
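
To check, you can compare the running kernel with the newest installed kernel package. This is a sketch; the variable name is arbitrary, and `rpm` is assumed to be available on the system being upgraded:

```shell
# Sketch: compare the running kernel with the newest installed kernel package.
running_kernel=$(uname -r)
echo "Running kernel: ${running_kernel}"
# rpm is assumed to be present on the system being upgraded
if command -v rpm >/dev/null 2>&1; then
    rpm -q --last kernel | head -n 1
fi
```

If the newest installed kernel package does not match the running kernel, reboot (or adjust GRUB) before retrying the preupgrade.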
+
+> NOTE: In certain configurations, Leapp generates `/var/log/leapp/answerfile` with true/false questions. The Leapp utility requires answers to all of these questions in order to proceed with the upgrade.
+
+Once the preupgrade process completes, the results will be contained in `/var/log/leapp/leapp-report.txt` file.
+It's advised to review the report and consider how the changes will affect your system.
+
+Start the upgrade. You'll be prompted to reboot the system after this process is completed.
+
+```bash
+sudo leapp upgrade
+sudo reboot
+```
+
+> NOTE: The upgrade process after the reboot may take a long time, up to 40-50 minutes, depending on the machine resources. If the machine remains unresponsive for more than 2 hours, assume the upgrade process failed during the post-reboot phase.
+> If it's still possible to access the machine in some way, for example, through remote VNC access, the logs containing the information on what went wrong are located in this folder: `/var/log/leapp`
+
+A new GRUB entry called `ELevate-Upgrade-Initramfs` will appear. The system will automatically boot into it. Observe the upgrade process in the console.
+
+After the reboot, log in to the system and check the migration report. Verify that the current OS is the one you need.
+
+```bash
+cat /etc/redhat-release
+cat /etc/os-release
+```
+
+Check the Leapp logs for `.rpmnew` configuration files that may have been created during the upgrade process. In some cases, `os-release` or `yum` package files may not be replaced automatically, requiring the user to rename the `.rpmnew` files manually.
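
One way to locate leftover `.rpmnew` files is a simple `find` over the configuration tree. This is a sketch; `list_rpmnew` is a hypothetical helper, not part of Leapp:

```shell
# Hypothetical helper: list .rpmnew files left under a directory (defaults to /etc).
list_rpmnew() {
    find "${1:-/etc}" -name '*.rpmnew' 2>/dev/null
}
list_rpmnew
```

Review each reported file against its original and merge or rename as appropriate.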
## Troubleshooting
@@ -11,6 +65,15 @@
- Leapp framework: [https://github.com/oamg/leapp/issues/new/choose](https://github.com/oamg/leapp/issues/new/choose)
- Leapp actors: [https://github.com/oamg/leapp-repository/issues/new/choose](https://github.com/oamg/leapp-repository/issues/new/choose)
+### Where can I report an issue or RFE related to the AlmaLinux actor or data modifications?
+- GitHub issues are preferred:
+ - Leapp actors: [https://github.com/AlmaLinux/leapp-repository/issues/new/choose](https://github.com/AlmaLinux/leapp-repository/issues/new/choose)
+ - Leapp data: [https://github.com/AlmaLinux/leapp-data/issues/new/choose](https://github.com/AlmaLinux/leapp-data/issues/new/choose)
+
+### What data should be provided when making a report?
+
+Before gathering data, if possible, run the *leapp* command that encountered an issue with the `--debug` flag, e.g.: `leapp upgrade --debug`.
+
- When filing an issue, include:
- Steps to reproduce the issue
- *All files in /var/log/leapp*
@@ -25,7 +88,638 @@
Then you may attach only the `leapp-logs.tgz` file.
### Where can I seek help?
-Well gladly answer your questions and lead you to through any troubles with the
-actor development.
+We'll gladly answer your questions and guide you through any troubles with actor development.
+
+You can reach the primary Leapp development team at IRC: `#leapp` on freenode.
+
+## Third-party integration
+
+If you would like to add your **signed** 3rd party packages into the upgrade process, you can use the third-party integration mechanism.
+
+There are four components for adding your information to the elevation process:
+- <vendor_name>_map.json: repository mapping file
+- <vendor_name>.repo: package repository information
+- <vendor_name>.sigs: list of package signatures of vendor repositories
+- <vendor_name>_pes.json: package migration event list
+
+All these files **must** have the same <vendor_name> part.
+
+### Repository mapping file
+
+This JSON file provides information on mappings between source system repositories (repositories present on the system being upgraded) and target system repositories (package repositories to be used during the upgrade).
+
+The file contains two sections, `mapping` and `repositories`.
+
+`repositories` describes the source and target repositories themselves. Each entry should have a unique string ID (`pesid`) specific to the mapping/PES files, and a list of attributes:
+- major_version: major system version that this repository targets
+- repo_type: repository type, see below
+- repoid: repository ID, same as in *.repo files. Doesn't have to exactly match `pesid`
+- arch: system architecture for which this repository is relevant
+- channel: repository channel, see below
+
+
+**Repository types**:
+- rpm: normal RPM packages
+- srpm: source packages
+- debuginfo: packages with debug information
+
+**Repository channels**:
+- ga: general availability repositories
+ - AKA stable repositories.
+- beta: beta-testing repositories
+- eus, e4s, aus, tus: Extended Update Support, Update Services for SAP Solutions, Advanced Update Support, Telco Extended Update Support
+ - Red Hat update channel classification. Most of the time you won't need to use these.
+
+`mapping` establishes connections between described repositories.
+Each entry in the list defines a mapping between major system versions, and contains the following elements:
+- source_major_version: major system version from which the system would be upgraded
+- target_major_version: major system version to which the system would be elevated
+- entries: the list of repository mappings
+ - source: source repository, one that would be found on a pre-upgrade system
+ - target: a list of target upgrade repositories that will contain new package versions. Each source repository can map to one or more target repositories
+
+
+> **Important**: The repository mapping file also defines whether a vendor's packages will be included into the upgrade process at all.
+> If at least one source repository listed in the file is present on the system, the vendor is considered active, and package repositories/PES events are enabled - otherwise, they **will not** affect the upgrade process.
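
A minimal sketch of such a mapping file follows. The exact JSON layout is an assumption based on the field descriptions above, and all values (`vendor-el7`, `vendor-el8`, the repoids) are illustrative, not taken from a real vendor:

```json
{
    "mapping": [
        {
            "source_major_version": "7",
            "target_major_version": "8",
            "entries": [
                {
                    "source": "vendor-el7",
                    "target": ["vendor-el8"]
                }
            ]
        }
    ],
    "repositories": [
        {
            "pesid": "vendor-el7",
            "entries": [
                {
                    "major_version": "7",
                    "repo_type": "rpm",
                    "repoid": "vendor-release-el7",
                    "arch": "x86_64",
                    "channel": "ga"
                }
            ]
        },
        {
            "pesid": "vendor-el8",
            "entries": [
                {
                    "major_version": "8",
                    "repo_type": "rpm",
                    "repoid": "vendor-release-el8",
                    "arch": "x86_64",
                    "channel": "ga"
                }
            ]
        }
    ]
}
```

Here the `mapping` section connects the two `pesid`s, while the `repositories` section ties each `pesid` to a concrete repoid, architecture and channel.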
+
+### Package repository information
+
+This file defines the vendor's package repositories to be used during the upgrade.
+
+The file uses the same format as normal YUM/DNF package repository files.
+
+> NOTE: The repositories listed in this file are only used *during* the upgrade. Package repositories on the post-upgrade system should be provided through updated packages or custom repository deployment.
+
+### Package signature list
+
+This file should contain the list of public signature headers that the packages are signed with, one entry per line.
+
+You can find signature headers for your packages by running the following command:
+
+`rpm -qa --queryformat "%{NAME} || %|DSAHEADER?{%{DSAHEADER:pgpsig}}:{%|RSAHEADER?{%{RSAHEADER:pgpsig}}:{(none)}|}|\n" <PACKAGE_NAME>`
+
+rpm will return an entry like the following:
+`package-name || DSA/SHA1, Mon Aug 23 08:17:13 2021, Key ID 8c55a6628608cb71`
+
+The value after "Key ID", in this case, `8c55a6628608cb71`, is what you should put into the signature list file.
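
To pull just the Key ID out of such a line, a small text-processing sketch can be used. The `extract_key_id` helper is hypothetical, not part of any tooling:

```shell
# Hypothetical helper: extract the trailing Key ID from an rpm pgpsig query line.
extract_key_id() {
    printf '%s\n' "$1" | sed -n 's/.*Key ID \([0-9a-f]*\)$/\1/p'
}
extract_key_id 'package-name || DSA/SHA1, Mon Aug 23 08:17:13 2021, Key ID 8c55a6628608cb71'
```

For the example line above, this prints `8c55a6628608cb71`, which is the value to put into the signature list file.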
+
+### Package migration event list
+
+The Leapp upgrade process uses information from the AlmaLinux PES (Package Evolution System) to keep track of how packages change between the OS versions. This data is located in `leapp-data/vendors.d/<vendor_name>_pes.json` in the GitHub repository and in `/etc/leapp/files/vendors.d/<vendor_name>_pes.json` on a system being upgraded.
+
+> **Warning**: leapp doesn't force packages from `out_packageset` to be installed from the specific repository; instead, it enables the repositories from `out_packageset`, and DNF then installs the latest package version from all enabled repositories.
+
+#### Creating event lists through PES
+
+The recommended way to create new event lists is to use the PES mechanism.
+
+The web interface can create, manage and export groups of events to JSON files.
+
+This video demonstration walks through the steps of adding an action event group and exporting it as a JSON file to make use of it in the elevation process.
+
+> https://drive.google.com/file/d/1VqnQkUsxzLijIqySMBGu5lDrA72BVd5A/view?usp=sharing
+
+Please refer to the [PES contribution guide](https://wiki.almalinux.org/elevate/Contribution-guide.html) for additional information on entry fields.
+
+#### Manual editing
+
+To add new rules to the list, add a new entry to the `packageinfo` array.
+
+**Important**: actions from PES JSON files take effect only for packages that are signed **and** have their signatures listed in one of the active <vendor_name>.sigs files. Unsigned packages will be updated only if some signed package requires a new version; otherwise, they will be left as they are.
+
+Required fields:
+
+- action: what action to perform on the listed package
+ - 0 - present
+ - keep the packages in `in_packageset` to make sure the repo they're in on the target system gets enabled
+ - additional behaviour present, see below
+ - 1 - removed
+ - remove all packages in `in_packageset`
+ - 2 - deprecated
+ - keep the packages in `in_packageset` to make sure the repo they're in on the target system gets enabled
+ - 3 - replaced
+ - remove all packages in `in_packageset`
+ - install parts of the `out_packageset` that are not present on the system
+ - keep the packages from `out_packageset` that are already installed
+ - 4 - split
+ - install parts of the `out_packageset` that are not present on the system
+ - keep the present `out_packageset`
+ - remove packages from `in_packageset` that are not present in `out_packageset`
+ - in case of package X being split to Y and Z, package X will be removed
+ - in case of package X being split to X and Y, package X will **not** be removed
+ - 5 - merged
+ - same as `split`
+ - additional behaviour present, see below
+ - 6 - moved to new repository
+ - keep the package to make sure the repo it's in on the target system gets enabled
+ - nothing is done to `in_packageset` as it always contains one package - the same as the "out" package
+ - 7 - renamed
+ - remove the `in_packageset` and install the `out_packageset` if not installed
+ - if already installed, keep the `out_packageset` as-is
+ - 8 - reinstalled
+ - reinstall the `in_packageset` package during the upgrade transaction
+ - mostly useful for packages that have the same version string between major versions, and thus won't be upgraded automatically
+ - Additional notes:
+ - any event except `present` is ignored if any of the packages in `in_packageset` are marked for removal
+ - any event except `merged` is ignored if any of the packages in `in_packageset` are neither installed nor marked for installation
+ - for `merged` events it is sufficient for at least one package from `in_packageset` to be either installed or marked for installation
+- arches: what system architectures the listed entry relates to
+- id: entry ID, must be unique
+- in_packageset: set of packages on the old system
+- out_packageset: set of packages to switch to, empty if removed or deprecated
+- initial_release: source OS release
+- release: target OS release
+
+`in_packageset` and `out_packageset` have the following format:
+
+```json
+ "in_packageset": {
+ "package": [
+ {
+ "module_stream": null,
+ "name": "PackageKit",
+ "repository": "base"
+ },
+ {
+ "module_stream": null,
+ "name": "PackageKit-yum",
+ "repository": "base"
+ }
+ ],
+ "set_id": 1592
+ },
+```
+
+For `in_packageset`, the `repository` field defines the package repository the package was installed from on the source system.
+For `out_packageset`, the `repository` field for packages should be the same as the "Target system repo name in PES" field in the associated vendor repository mapping file.
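
Putting the pieces together, a single `packageinfo` entry might look like the sketch below. It is assembled from the fields described above; the package name, repository IDs and numeric IDs are illustrative, and the exact structure of the `initial_release`/`release` objects is an assumption rather than a confirmed schema:

```json
{
    "action": 1,
    "arches": ["x86_64", "aarch64"],
    "id": 1001,
    "in_packageset": {
        "package": [
            {
                "module_stream": null,
                "name": "example-package",
                "repository": "vendor-el7"
            }
        ],
        "set_id": 1001
    },
    "out_packageset": null,
    "initial_release": {
        "major_version": 7,
        "minor_version": 9,
        "os_name": "CentOS"
    },
    "release": {
        "major_version": 8,
        "minor_version": 6,
        "os_name": "AlmaLinux"
    }
}
```

Since `action` here is `1` (removed), `out_packageset` stays empty, as described above.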
+
+### Providing the data
+
+Once you've prepared the vendor data for migration, you can make a pull request to https://github.com/AlmaLinux/leapp-data/ to make it available publicly.
+Files should be placed into the `vendors.d` subfolder if the data should be available for all elevation target OS variants, or into `files/<target_OS>/vendors.d/` if intended for a specific one.
+
+Alternatively, you can deploy the vendor files on a system prior to starting the upgrade. In this case, place the files into the folder `/etc/leapp/files/vendors.d/`.
+
+## Adding complex changes (custom actors for migration)
+To perform changes of arbitrary complexity during the migration process, add a component to the existing Leapp pipeline.
+
+To begin, clone the code repository: https://github.com/AlmaLinux/leapp-repository
+For instructions on how to deploy a development environment, refer to the [Leapp framework documentation](https://leapp.readthedocs.io/en/latest/devenv-install.html).
+
+Create an actor inside the main system_upgrade leapp repository:
+
+```bash
+cd ./leapp-repository/repos/system_upgrade/common
+snactor new-actor testactor
+```
+
+Alternatively, you can [create your own repository](https://leapp.readthedocs.io/en/latest/create-repository.html) in the system_upgrade folder if you wish to keep your actors separate from others.
+Keep in mind that you'll need to link all other repositories whose functions you will use.
+The created subfolder will contain the main Python file of your new actor.
+
+The actor's main class has three fields of interest:
+- consumes
+- produces
+- tags
+
+`consumes` and `produces` define the [data that the actor may receive from or provide to other actors](https://leapp.readthedocs.io/en/latest/messaging.html).
+
+Tags define the phase of the upgrade process during which the actor runs.
+All actors must also be assigned the `IPUWorkflowTag` to mark them as a part of the in-place upgrade process.
+The file `leapp-repository/repos/system_upgrade/common/workflows/inplace_upgrade.py` lists all phases of the elevation process.
+
+### Submitting changes
+Changes you want to submit upstream should be sent through pull requests to repositories https://github.com/AlmaLinux/leapp-repository and https://github.com/AlmaLinux/leapp-data.
+The standard GitHub contribution process applies - fork the repository, make your changes inside of it, then submit the pull request to be reviewed.
+
+### Custom actor example
+
+"Actors" in Leapp terminology are Python scripts that run during the upgrade process.
+Actors are a core concept of the framework, and the entire process is built from them.
+
+Custom actors are the actors that are added by third-party developers, and are not present in the upstream Leapp repository.
+
+Actors can gather data, communicate with each other and modify the system during the upgrade.
+
+Let's examine how an upgrade problem might be resolved with a custom actor.
+
+#### Problem
+
+If you have ever run `leapp preupgrade` on an unprepared system, you have likely seen the following message:
+
+```
+Upgrade has been inhibited due to the following problems:
+ 1. Inhibitor: Possible problems with remote login using root account
+```
+
+It's caused by the change in default behaviour for permitting root logins between RHEL 7 and 8.
+In RHEL 8 logging in as root via password authentication is no longer allowed by default, which means that some machines can become inaccessible after the upgrade.
+
+Some configurations require an administrator's intervention to resolve this issue, but SSHD configurations where no `PermitRootLogin` options were explicitly set can be modified to preserve the RHEL 7 default behaviour and not require manual modification.
+
+Let's create a custom actor to handle such cases for us.
+
+#### Creating an actor
+
+Actors are contained in ["repositories"](https://leapp.readthedocs.io/en/latest/leapp-repositories.html) - subfolders containing compartmentalized code and resources that the Leapp framework will use during the upgrade.
+
+> Do not confuse Leapp repositories with Git repositories - these are two different concepts, independent of one another.
+
+Inside the `leapp-repository` GitHub repo, Leapp repositories are contained inside the `repos` subfolder.
+
+Everything related to system upgrade proper is inside the `system_upgrade` folder.
+`el7toel8` contains resources used when upgrading from RHEL 7 to RHEL 8, `el8toel9` - RHEL 8 to 9, `common` - shared resources.
+
+Since the change in system behaviour we're looking to mitigate occurs between RHEL 7 and 8, the appropriate repository to place the actor in is `el7toel8`.
+
+You can [create new actors](https://leapp.readthedocs.io/en/latest/first-actor.html) by using the `snactor` tool provided by Leapp, or manually.
+
+`snactor new-actor ACTOR_NAME`
+
+The bare-bones actor code consists of a file named `actor.py` contained inside the `actors/<actor_name>` subfolder of a Leapp repository.
+
+In this case, it should be located in a directory like `leapp-repository/repos/system_upgrade/el7toel8/actors/opensshmodifypermitroot`.
+
+If you used snactor to create it, you'll see contents like the following:
+
+```python
+from leapp.actors import Actor
+
+
+class OpenSSHModifyPermitRoot(Actor):
+ """
+ No documentation has been provided for the openssh_modify_permit_root actor.
+ """
+
+ name = 'openssh_modify_permit_root'
+ consumes = ()
+ produces = ()
+ tags = ()
+
+ def process(self):
+ pass
+```
+
+#### Configuring the actor
+
+Actors' `consumes` and `produces` attributes define types of [*messages*](https://leapp.readthedocs.io/en/latest/messaging.html) these actors receive or send.
+
+For instance, during the initial upgrade stages, several standard actors gather system information and *produce* messages with the gathered data for other actors.
+
+> Messages are defined by *message models*, which are contained inside Leapp repository's `models` subfolder, just like all actors are contained in `actors`.
+
+Actors' `tags` attributes define the [phase of the upgrade](https://leapp.readthedocs.io/en/latest/working-with-workflows.html) during which that actor gets executed.
+
+> The list of all phases can be found in file `leapp-repository/repos/system_upgrade/common/workflows/inplace_upgrade.py`.
+
+##### Receiving messages
+
+Leapp already provides information about the OpenSSH configuration through the `OpenSshConfigScanner` actor. This actor provides a message with the `OpenSshConfig` message model.
+
+Instead of opening and reading the configuration file in our own actor, we can simply read the provided message to see if we can safely alter the configuration automatically.
+
+To begin with, import the message model from `leapp.models`:
+
+```python
+from leapp.models import OpenSshConfig
+```
+
+> It doesn't matter in which Leapp repository the model is located. Leapp will gather all available data inside its submodules.
+
+Add the message model to the list of messages to be received:
+
+```python
+consumes = (OpenSshConfig, )
+```
+
+The actor will now be able to read messages of this format provided by other actors executed before it.
+
+##### Sending messages
+
+To ensure that the user knows about the automatic configuration change that will occur, we can send a *report*.
+
+> Reports are a built-in type of Leapp messages that are added to the `/var/log/leapp/leapp-report.txt` file at the end of the upgrade process.
+
+To start off, add the `Report` message model to the `produces` attribute of the actor.
+
+```python
+produces = (Report, )
+```
+
+Don't forget to import the model type from `leapp.models`.
+
+All done - now we're ready to make use of the models inside the actor's code.
+
+
+##### Running phase
+
+Both workflow and phase tags are imported from `leapp.tags`:
+
+```python
+from leapp.tags import ChecksPhaseTag, IPUWorkflowTag
+```
+
+All actors to be run during the upgrade must contain the upgrade workflow tag. It looks as follows:
+
+```python
+tags = (IPUWorkflowTag, )
+```
+
+To define the upgrade phase during which an actor will run, set the appropriate tag in the `tags` attribute.
+
+The standard actor `OpenSshPermitRootLoginCheck`, which blocks the upgrade if it detects potential problems in the SSH configuration, runs during the *checks* phase and has the `ChecksPhaseTag` inside its `tags`.
+
+Therefore, we want to run our new actor before it. We can select an earlier phase from the list of phases - or we can mark our actor to run *before other actors* in the phase with a modifier as follows:
+
+```python
+tags = (ChecksPhaseTag.Before, IPUWorkflowTag, )
+```
+
+All phases have built-in `.Before` and `.After` stages that can be used this way. Now our actor is guaranteed to be run before the `OpenSshPermitRootLoginCheck` actor.
+
+
+#### Actor code
+
+With configuration done, it's time to write the actual code of the actor that will be executed during the upgrade.
+
+The entry point for it is the actor's `process` function.
+
+First, let's start by reading the SSH config message we've set the actor to receive.
+
+```python
+# Importing from Leapp built-ins.
+from leapp.exceptions import StopActorExecutionError
+from leapp.libraries.stdlib import api
+
+def process(self):
+ # Retrieve the OpenSshConfig message.
+
+ # Actors have `consume` and `produce` methods that work with messages.
+ # `consume` expects a message type that is listed inside the `consumes` attribute.
+ openssh_messages = self.consume(OpenSshConfig)
+
+ # The return value of self.consume is a generator of messages of the provided type.
+ config = next(openssh_messages, None)
+ # We expect to get only one message of this type. If there's more than one, something's wrong.
+ if list(openssh_messages):
+ # api.current_logger lets you pass messages into Leapp's log. By default, they will
+ # be displayed in `/var/log/leapp/leapp-preupgrade.log`
+ # or `/var/log/leapp/leapp-upgrade.log`, depending on which command you ran.
+ api.current_logger().warning('Unexpectedly received more than one OpenSshConfig message.')
+ # If the config message is not present, the standard actor failed to read it.
+ # Stop here.
+ if not config:
+ # StopActorExecutionError is a Leapp built-in exception type that halts the actor execution.
+ # By default this will also halt the upgrade phase and the upgrade process in general.
+ raise StopActorExecutionError(
+ 'Could not check openssh configuration', details={'details': 'No OpenSshConfig facts found.'}
+ )
+```
+
+Next, let's read the received message and see if we can modify the configuration.
+
+```python
+import errno
+
+CONFIG = '/etc/ssh/sshd_config'
+CONFIG_BACKUP = '/etc/ssh/sshd_config.leapp_backup'
+
+ # The OpenSshConfig model has a permit_root_login attribute that contains
+ # all instances of PermitRootLogin option present in the config.
+ # See leapp-repository/repos/system_upgrade/el7toel8/models/opensshconfig.py
+
+ # We can only safely modify the config to preserve the default behaviour if no
+ # explicit PermitRootLogin option was set anywhere in the config.
+ if not config.permit_root_login:
+ try:
+ # Read the config into memory to prepare for its modification.
+ with open(CONFIG, 'r') as fd:
+ sshd_config = fd.readlines()
+
+ # These are the lines we want to add to the configuration file.
+ permit_autoconf = [
+ "# Automatically added by Leapp to preserve RHEL7 default\n",
+ "# behaviour after migration.\n",
+ "# Placed on top of the file to avoid being included into Match blocks.\n",
+ "PermitRootLogin yes\n"
+ "\n",
+ ]
+ permit_autoconf.extend(sshd_config)
+ # Write the changed config into the file.
+ with open(CONFIG, 'w') as fd:
+ fd.writelines(permit_autoconf)
+ # Write the backup file with the old configuration.
+ with open(CONFIG_BACKUP, 'w') as fd:
+ fd.writelines(sshd_config)
+
+ # Handle errors.
+ except IOError as err:
+ if err.errno != errno.ENOENT:
+ error = 'Failed to open sshd_config: {}'.format(str(err))
+ api.current_logger().error(error)
+ return
+```
+
+The functional part of the actor itself is done. Now, let's add a report to let the user know
+the machine's SSH configuration has changed.
+
+```python
+# These Leapp imports are required to create reports.
+from leapp import reporting
+from leapp.models import Report
+from leapp.reporting import create_report
+
+# Tags signify the categories the report and the associated issue are related to.
+COMMON_REPORT_TAGS = [
+ reporting.Tags.AUTHENTICATION,
+ reporting.Tags.SECURITY,
+ reporting.Tags.NETWORK,
+ reporting.Tags.SERVICES
+]
+
+ # Related resources are listed in the report to help resolving the issue.
+ resources = [
+ reporting.RelatedResource('package', 'openssh-server'),
+ reporting.RelatedResource('file', '/etc/ssh/sshd_config'),
+ reporting.RelatedResource('file', '/etc/ssh/sshd_config.leapp_backup')
+ ]
+ # This function creates and submits the actual report message.
+ # Normally you'd need to call self.produce() to send messages,
+ # but reports are a special case that gets handled automatically.
+ create_report([
+ # Report title and summary.
+ reporting.Title('SSH configuration automatically modified to permit root login'),
+ reporting.Summary(
+ 'Your OpenSSH configuration file does not explicitly state '
+ 'the option PermitRootLogin in sshd_config file. '
+ 'Its default is "yes" in RHEL7, but will change in '
+ 'RHEL8 to "prohibit-password", which may affect your ability '
+ 'to log onto this machine after the upgrade. '
+ 'To prevent this from occurring, the PermitRootLogin option '
+ 'has been explicitly set to "yes" to preserve the default behaviour '
+ 'after migration. '
+ 'The original configuration file has been backed up to '
+ '/etc/ssh/sshd_config.leapp_backup.'
+ ),
+ # Reports are ordered by severity in the list.
+ reporting.Severity(reporting.Severity.MEDIUM),
+ reporting.Tags(COMMON_REPORT_TAGS),
+ # Remediation section contains hints on how to resolve the reported (potential) problem.
+ reporting.Remediation(
+ hint='If you would prefer to configure the root login policy yourself, '
+ 'consider setting the PermitRootLogin option '
+ 'in sshd_config explicitly.'
+ )
+ ] + resources) # Resources are added to the list of data for the report.
+```
+
+The actor code is now complete. The final version with less verbose comments will look something like this:
+
+```python
+from leapp import reporting
+from leapp.actors import Actor
+from leapp.exceptions import StopActorExecutionError
+from leapp.libraries.stdlib import api
+from leapp.models import OpenSshConfig, Report
+from leapp.reporting import create_report
+from leapp.tags import ChecksPhaseTag, IPUWorkflowTag
+
+import errno
+
+CONFIG = '/etc/ssh/sshd_config'
+CONFIG_BACKUP = '/etc/ssh/sshd_config.leapp_backup'
+
+COMMON_REPORT_TAGS = [
+ reporting.Tags.AUTHENTICATION,
+ reporting.Tags.SECURITY,
+ reporting.Tags.NETWORK,
+ reporting.Tags.SERVICES
+]
+
+
+class OpenSSHModifyPermitRoot(Actor):
+ """
+ OpenSSH doesn't allow root logins with password by default on RHEL8.
+
+ Check the values of PermitRootLogin in OpenSSH server configuration file
+ and see if it was set explicitly.
+ If not, adding an explicit "PermitRootLogin yes" will preserve the current
+ default behaviour.
+ """
+
+ name = 'openssh_modify_permit_root'
+ consumes = (OpenSshConfig, )
+ produces = (Report, )
+ tags = (ChecksPhaseTag.Before, IPUWorkflowTag, )
+
+ def process(self):
+ # Retrieve the OpenSshConfig message.
+ openssh_messages = self.consume(OpenSshConfig)
+ config = next(openssh_messages, None)
+ if list(openssh_messages):
+ api.current_logger().warning('Unexpectedly received more than one OpenSshConfig message.')
+ if not config:
+ raise StopActorExecutionError(
+ 'Could not check openssh configuration', details={'details': 'No OpenSshConfig facts found.'}
+ )
+
+ # Read and modify the config.
+ # Only act if there's no explicit PermitRootLogin option set anywhere in the config.
+ if not config.permit_root_login:
+ try:
+ with open(CONFIG, 'r') as fd:
+ sshd_config = fd.readlines()
+
+ permit_autoconf = [
+ "# Automatically added by Leapp to preserve RHEL7 default\n",
+ "# behaviour after migration.\n",
+ "# Placed on top of the file to avoid being included into Match blocks.\n",
+ "PermitRootLogin yes\n"
+ "\n",
+ ]
+ permit_autoconf.extend(sshd_config)
+ with open(CONFIG, 'w') as fd:
+ fd.writelines(permit_autoconf)
+ with open(CONFIG_BACKUP, 'w') as fd:
+ fd.writelines(sshd_config)
+
+ except IOError as err:
+ if err.errno != errno.ENOENT:
+ error = 'Failed to open sshd_config: {}'.format(str(err))
+ api.current_logger().error(error)
+ return
+
+ # Create a report letting the user know what happened.
+ resources = [
+ reporting.RelatedResource('package', 'openssh-server'),
+ reporting.RelatedResource('file', '/etc/ssh/sshd_config'),
+ reporting.RelatedResource('file', '/etc/ssh/sshd_config.leapp_backup')
+ ]
+ create_report([
+ reporting.Title('SSH configuration automatically modified to permit root login'),
+ reporting.Summary(
+ 'Your OpenSSH configuration file does not explicitly state '
+ 'the option PermitRootLogin in sshd_config file. '
+ 'Its default is "yes" in RHEL7, but will change in '
+ 'RHEL8 to "prohibit-password", which may affect your ability '
+ 'to log onto this machine after the upgrade. '
+ 'To prevent this from occurring, the PermitRootLogin option '
+ 'has been explicitly set to "yes" to preserve the default behaviour '
+ 'after migration. '
+ 'The original configuration file has been backed up to '
+ '/etc/ssh/sshd_config.leapp_backup.'
+ ),
+ reporting.Severity(reporting.Severity.MEDIUM),
+ reporting.Tags(COMMON_REPORT_TAGS),
+ reporting.Remediation(
+ hint='If you would prefer to configure the root login policy yourself, '
+ 'consider setting the PermitRootLogin option '
+ 'in sshd_config explicitly.'
+ )
+ ] + resources)
+```
+
+Due to this actor's small size, the entire code fits inside the `process` function.
+If it grows beyond a manageable size, or if you want to run unit tests on its components, it's advisable to move the functional parts out of the `process` function and into the *actor library*.
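If the file-manipulation logic were extracted, it could become a pure function along these lines (a sketch; the function name and placement are illustrative, not part of the actual actor):

```python
def prepend_permit_root_login(sshd_config_lines):
    """Return sshd_config lines with an explicit PermitRootLogin prepended.

    Mirrors the prepend logic of the example actor as a pure function:
    no file I/O and no reporting, so it is trivial to unit-test.
    """
    header = [
        "# Automatically added by Leapp to preserve RHEL7 default\n",
        "# behaviour after migration.\n",
        "# Placed on top of the file to avoid being included into Match blocks.\n",
        "PermitRootLogin yes\n",
        "\n",
    ]
    return header + list(sshd_config_lines)
```

The `process` function would then only read the file, call the helper, and write the result back.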
+
+#### Libraries
+
+Larger actors can import code from [common libraries](https://leapp.readthedocs.io/en/latest/best-practices.html#move-generic-functionality-to-libraries) or define their own "libraries" and run code from them inside the `process` function.
+
+In such cases, the directory layout looks like this:
+```
+actors
++ example_actor_name
+| + libraries
+| + example_actor_name.py
+| + actor.py
+...
+```
+
+and importing code from them looks like this:
+
+`from leapp.libraries.actor.example_actor_name import example_lib_function`
+
+This is also the main way of [writing unit-testable code](https://leapp.readthedocs.io/en/latest/best-practices.html#write-unit-testable-code), since the code contained inside the `process` function cannot be unit-tested normally.
+
+In this actor format, you would move all of the actual actor code into the associated library, leaving only preparation and function calls inside the `process` function.
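As a sketch of that split (module and function names are illustrative), the library holds the testable logic while `process` stays a thin wrapper:

```python
# libraries/example_actor_name.py -- all functional code lives here.

def example_lib_function(config_lines):
    """Report whether PermitRootLogin is already set explicitly."""
    return any(line.strip().startswith("PermitRootLogin") for line in config_lines)

# actor.py -- only preparation and function calls remain, e.g.:
#
#     from leapp.libraries.actor.example_actor_name import example_lib_function
#
#     class ExampleActorName(Actor):
#         ...
#         def process(self):
#             if not example_lib_function(self.get_config_lines()):
#                 self.add_explicit_option()
```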
+
+#### Debugging
+
+The Leapp utility `snactor` can also be used for unit-testing the created actors.
+
+It is capable of saving the output of actors as locally stored messages, so that they can be consumed by other actors that are being developed.
+
+For example, to test our new actor, we need the OpenSshConfig message, which is produced by the OpenSshConfigScanner standard actor. To make the data consumable, run the actor producing the data with the save-output option:
+
+`snactor run --save-output OpenSshConfigScanner`
+
+The output of the actor is stored in the local repository data file, and it can be used by other actors. To flush all saved messages from the repository database, run `snactor messages clear`.
+
+With the input messages available and stored, the actor being developed can be tested.
+
+`snactor run --print-output OpenSshModifyPermitRoot`
+
+#### Additional information
-You can reach us at IRC: `#leapp` on freenode.
+For more information about Leapp and additional tutorials, visit the [official Leapp documentation](https://leapp.readthedocs.io/en/latest/tutorials.html).
diff --git a/commands/command_utils.py b/commands/command_utils.py
index da62c50..f062f78 100644
--- a/commands/command_utils.py
+++ b/commands/command_utils.py
@@ -12,7 +12,7 @@ LEAPP_UPGRADE_FLAVOUR_DEFAULT = 'default'
LEAPP_UPGRADE_FLAVOUR_SAP_HANA = 'saphana'
LEAPP_UPGRADE_PATHS = 'upgrade_paths.json'
-VERSION_REGEX = re.compile(r"^([1-9]\d*)\.(\d+)$")
+VERSION_REGEX = re.compile(r"^([1-9]\d*)(\.(\d+))?$")
def check_version(version):
@@ -68,9 +68,36 @@ def get_os_release_version_id(filepath):
:return: `str` version_id
"""
- with open(filepath) as f:
- data = dict(l.strip().split('=', 1) for l in f.readlines() if '=' in l)
- return data.get('VERSION_ID', '').strip('"')
+ try:
+ with open(filepath) as f:
+ data = dict(l.strip().split('=', 1) for l in f.readlines() if '=' in l)
+ return data.get('VERSION_ID', '').strip('"')
+ except (IOError, OSError) as e:
+ raise CommandError(
+ "Unable to read system OS release from file {}, "
+ "error: {}".format(
+ filepath,
+ e.strerror
+ ))
+
+
+def get_os_release_id(filepath):
+ """
+ Retrieve the system OS ID from the provided file.
+
+ :return: `str` id
+ """
+ try:
+ with open(filepath) as f:
+ data = dict(l.strip().split('=', 1) for l in f.readlines() if '=' in l)
+ return data.get('ID', '').strip('"')
+ except (IOError, OSError) as e:
+ raise CommandError(
+ "Unable to read system OS ID from file {}, "
+ "error: {}".format(
+ filepath,
+ e.strerror
+ ))
def get_upgrade_paths_config():
diff --git a/commands/preupgrade/__init__.py b/commands/preupgrade/__init__.py
index 92038bb..8dd8966 100644
--- a/commands/preupgrade/__init__.py
+++ b/commands/preupgrade/__init__.py
@@ -48,6 +48,16 @@ def preupgrade(args, breadcrumbs):
logger = configure_logger('leapp-preupgrade.log')
os.environ['LEAPP_EXECUTION_ID'] = context
+ sentry_client = None
+ sentry_dsn = cfg.get('sentry', 'dsn')
+ if sentry_dsn:
+ try:
+ from raven import Client
+ from raven.transport.http import HTTPTransport
+ sentry_client = Client(sentry_dsn, transport=HTTPTransport)
+ except ImportError:
+ logger.warning("Cannot import the Raven library - remote error logging not functional")
+
try:
repositories = util.load_repositories()
except LeappError as exc:
@@ -56,7 +66,8 @@ def preupgrade(args, breadcrumbs):
workflow = repositories.lookup_workflow('IPUWorkflow')()
util.warn_if_unsupported(configuration)
util.process_whitelist_experimental(repositories, workflow, configuration, logger)
- with beautify_actor_exception():
+
+ with util.format_actor_exceptions(logger, sentry_client):
workflow.load_answers(answerfile_path, userchoices_path)
until_phase = 'ReportsPhase'
logger.info('Executing workflow until phase: %s', until_phase)
@@ -68,12 +79,17 @@ def preupgrade(args, breadcrumbs):
logger.info("Answerfile will be created at %s", answerfile_path)
workflow.save_answers(answerfile_path, userchoices_path)
- util.generate_report_files(context, report_schema)
+
+ util.log_errors(workflow.errors, logger)
+ util.log_inhibitors(context, logger, sentry_client)
report_errors(workflow.errors)
report_inhibitors(context)
+
+ util.generate_report_files(context, report_schema)
report_files = util.get_cfg_files('report', cfg)
log_files = util.get_cfg_files('logs', cfg)
report_info(report_files, log_files, answerfile_path, fail=workflow.failure)
+
if workflow.failure:
sys.exit(1)
diff --git a/commands/upgrade/__init__.py b/commands/upgrade/__init__.py
index c9c2741..f773a5f 100644
--- a/commands/upgrade/__init__.py
+++ b/commands/upgrade/__init__.py
@@ -9,7 +9,7 @@ from leapp.exceptions import CommandError, LeappError
from leapp.logger import configure_logger
from leapp.utils.audit import Execution
from leapp.utils.clicmd import command, command_opt
-from leapp.utils.output import beautify_actor_exception, report_errors, report_info, report_inhibitors
+from leapp.utils.output import report_errors, report_info, report_inhibitors
# NOTE:
# If you are adding new parameters please ensure that they are set in the upgrade function invocation in `rerun`
@@ -18,6 +18,7 @@ from leapp.utils.output import beautify_actor_exception, report_errors, report_i
@command('upgrade', help='Upgrade the current system to the next available major version.')
@command_opt('resume', is_flag=True, help='Continue the last execution after it was stopped (e.g. after reboot)')
+@command_opt('nowarn', is_flag=True, help='Do not display interactive warnings')
@command_opt('reboot', is_flag=True, help='Automatically performs reboot when requested.')
@command_opt('whitelist-experimental', action='append', metavar='ActorName', help='Enable experimental actors')
@command_opt('debug', is_flag=True, help='Enable debug mode', inherit=False)
@@ -77,6 +78,16 @@ def upgrade(args, breadcrumbs):
logger = configure_logger('leapp-upgrade.log')
os.environ['LEAPP_EXECUTION_ID'] = context
+ sentry_client = None
+ sentry_dsn = cfg.get('sentry', 'dsn')
+ if sentry_dsn:
+ try:
+ from raven import Client
+ from raven.transport.http import HTTPTransport
+ sentry_client = Client(sentry_dsn, transport=HTTPTransport)
+ except ImportError:
+ logger.warning("Cannot import the Raven library - remote error logging not functional")
+
if args.resume:
logger.info("Resuming execution after phase: %s", skip_phases_until)
try:
@@ -86,7 +97,13 @@ def upgrade(args, breadcrumbs):
workflow = repositories.lookup_workflow('IPUWorkflow')(auto_reboot=args.reboot)
util.process_whitelist_experimental(repositories, workflow, configuration, logger)
util.warn_if_unsupported(configuration)
- with beautify_actor_exception():
+
+ if not args.resume and not args.nowarn:
+ if not util.ask_to_continue():
+ logger.info("Upgrade cancelled by user")
+ sys.exit(1)
+
+ with util.format_actor_exceptions(logger, sentry_client):
logger.info("Using answerfile at %s", answerfile_path)
workflow.load_answers(answerfile_path, userchoices_path)
@@ -98,14 +115,19 @@ def upgrade(args, breadcrumbs):
logger.info("Answerfile will be created at %s", answerfile_path)
workflow.save_answers(answerfile_path, userchoices_path)
+
+ util.log_errors(workflow.errors, logger)
+ util.log_inhibitors(context, logger, sentry_client)
report_errors(workflow.errors)
report_inhibitors(context)
+
util.generate_report_files(context, report_schema)
report_files = util.get_cfg_files('report', cfg)
log_files = util.get_cfg_files('logs', cfg)
report_info(report_files, log_files, answerfile_path, fail=workflow.failure)
if workflow.failure:
+ logger.error("Upgrade workflow failed, check log for details")
sys.exit(1)
diff --git a/commands/upgrade/util.py b/commands/upgrade/util.py
index 75ffa6a..9022b29 100644
--- a/commands/upgrade/util.py
+++ b/commands/upgrade/util.py
@@ -2,18 +2,23 @@ import functools
import itertools
import json
import os
+import sys
import shutil
import tarfile
+import six.moves
from datetime import datetime
+from contextlib import contextmanager
+import six
from leapp.cli.commands import command_utils
from leapp.cli.commands.config import get_config
-from leapp.exceptions import CommandError
+from leapp.exceptions import CommandError, LeappRuntimeError
from leapp.repository.scan import find_and_scan_repositories
from leapp.utils import audit
from leapp.utils.audit import get_checkpoints, get_connection, get_messages
-from leapp.utils.output import report_unsupported
+from leapp.utils.output import report_unsupported, pretty_block_text, pretty_block, Color
from leapp.utils.report import fetch_upgrade_report_messages, generate_report_file
+from leapp.models import ErrorModel
def disable_database_sync():
@@ -167,6 +172,44 @@ def warn_if_unsupported(configuration):
report_unsupported(devel_vars, configuration["whitelist_experimental"])
+def ask_to_continue():
+ """
+ Pause before starting the upgrade, warn the user about potential consequences,
+ and ask for confirmation.
+ Only done on whitelisted OSes.
+
+ :return: True if it's OK to continue, False if the upgrade should be interrupted.
+ """
+
+ ask_on_os = ['cloudlinux']
+ os_id = command_utils.get_os_release_id('/etc/os-release')
+
+ if os_id not in ask_on_os:
+ return True
+
+ with pretty_block(
+ text="Upgrade workflow initiated",
+ end_text="Continue?",
+ target=sys.stdout,
+ color=Color.bold,
+ ):
+ warn_msg = (
+ "Past this point, Leapp will begin making changes to your system.\n"
+ "An improperly or incompletely configured upgrade may break the system, "
+ "up to and including making it *completely inaccessible*.\n"
+ "Even if you've followed all the preparation steps correctly, "
+ "the chance of the upgrade going wrong remains non-zero.\n"
+ "Make sure you've run the pre-check, checked the logs and reports, and have a backup prepared."
+ )
+ print(warn_msg)
+
+ response = ""
+ while response not in ["y", "n"]:
+ response = six.moves.input("Y/N> ").lower()
+
+ return response == "y"
+
+
def handle_output_level(args):
"""
Set environment variables following command line arguments.
@@ -236,3 +279,76 @@ def process_report_schema(args, configuration):
raise CommandError('--report-schema version can not be greater that the '
'actual {} one.'.format(default_report_schema))
return args.report_schema or default_report_schema
+
+
+# TODO: This and the following functions should eventually be placed into the
+# leapp.utils.output module.
+def pretty_block_log(string, logger_level, width=60):
+ log_str = "\n{separator}\n{text}\n{separator}\n".format(
+ separator="=" * width,
+ text=string.center(width))
+ logger_level(log_str)
+
+
+@contextmanager
+def format_actor_exceptions(logger, sentry):
+ try:
+ try:
+ yield
+ except LeappRuntimeError as err:
+ msg = "{} - Please check the above details".format(err.message)
+ sys.stderr.write("\n")
+ sys.stderr.write(pretty_block_text(msg, color="", width=len(msg)))
+ logger.error(err.message)
+ if sentry:
+ sent_code = sentry.captureException()
+ logger.info("Error \"{}\" sent to Sentry with code {}".format(err, sent_code))
+ finally:
+ pass
+
+
+def log_errors(errors, logger):
+ if errors:
+ pretty_block_log("ERRORS", logger.info)
+
+ for error in errors:
+ model = ErrorModel.create(json.loads(error['message']['data']))
+ error_message = model.message
+ if six.PY2:
+ error_message = model.message.encode('utf-8', 'xmlcharrefreplace')
+
+ logger.error("{time} [{severity}] Actor: {actor}\nMessage: {message}\n".format(
+ severity=model.severity.upper(),
+ message=error_message, time=model.time, actor=model.actor))
+ if model.details:
+ print('Summary:')
+ details = json.loads(model.details)
+ for detail in details:
+ print(' {k}: {v}'.format(
+ k=detail.capitalize(),
+ v=details[detail].rstrip().replace('\n', '\n' + ' ' * (6 + len(detail)))))
+
+
+def log_inhibitors(context_id, logger, sentry):
+ from leapp.reporting import Flags # pylint: disable=import-outside-toplevel
+ reports = fetch_upgrade_report_messages(context_id)
+ inhibitors = [report for report in reports if Flags.INHIBITOR in report.get('flags', [])]
+ if inhibitors:
+ pretty_block_log("UPGRADE INHIBITED", logger.error)
+ logger.error('Upgrade has been inhibited due to the following problems:')
+ for position, report in enumerate(inhibitors, start=1):
+ logger.error('{idx:5}. Inhibitor: {title}'.format(idx=position, title=report['title']))
+ logger.info('Consult the pre-upgrade report for details and possible remediation.')
+
+ if sentry:
+ for inhibitor in inhibitors:
+ sentry.captureMessage(
+ "Inhibitor: {}\n"
+ "Severity: {}\n"
+ "{}".format(
+ inhibitor['title'],
+ inhibitor['severity'],
+ inhibitor['summary']
+ )
+ )
+ logger.info("Inhibitor \"{}\" sent to Sentry".format(inhibitor['title']))
diff --git a/etc/leapp/transaction/to_reinstall b/etc/leapp/transaction/to_reinstall
new file mode 100644
index 0000000..c6694a8
--- /dev/null
+++ b/etc/leapp/transaction/to_reinstall
@@ -0,0 +1,3 @@
+### List of packages (one per line) to be reinstalled in the upgrade transaction
+### Useful for packages that have identical version strings but contain binary changes between major OS versions
+### Packages that aren't installed will be skipped
diff --git a/packaging/leapp-repository.spec b/packaging/leapp-repository.spec
index af4b31d..b9e88a9 100644
--- a/packaging/leapp-repository.spec
+++ b/packaging/leapp-repository.spec
@@ -137,6 +137,8 @@ Requires: pciutils
# Required to gather system facts about SELinux
Requires: libselinux-python
Requires: python-pyudev
+# Required to gather data about actor exceptions or inhibitors
+Requires: python-raven
# required by SELinux actors
Requires: policycoreutils-python
# Required to fetch leapp data
@@ -147,6 +149,8 @@ Requires: python-requests
# systemd-nspawn utility
Requires: systemd-container
Requires: python3-pyudev
+# Required to gather data about actor exceptions or inhibitors
+Requires: python3-raven
# Required to fetch leapp data
Requires: python3-requests
# Required because the code is kept Py2 & Py3 compatible
@@ -196,6 +200,8 @@ rm -rf %{buildroot}%{leapp_python_sitelib}/leapp/cli/commands/tests
rm -rf %{buildroot}%{repositorydir}/system_upgrade/el8toel9
%else
rm -rf %{buildroot}%{repositorydir}/system_upgrade/el7toel8
+# CloudLinux migration only supports el7 to el8
+rm -rf %{buildroot}%{repositorydir}/system_upgrade/cloudlinux
%endif
# remove component/unit tests, Makefiles, ... stuff that related to testing only
diff --git a/repos/system_upgrade/cloudlinux/.leapp/info b/repos/system_upgrade/cloudlinux/.leapp/info
new file mode 100644
index 0000000..1f16b9f
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/.leapp/info
@@ -0,0 +1 @@
+{"name": "cloudlinux", "id": "427ddd90-9b5e-4400-b21e-73d77791f175", "repos": ["644900a5-c347-43a3-bfab-f448f46d9647", "c47fbc3d-ae38-416e-9176-7163d67d94f6", "efcf9016-f2d1-4609-9329-a298e6587b3c"]}
\ No newline at end of file
diff --git a/repos/system_upgrade/cloudlinux/.leapp/leapp.conf b/repos/system_upgrade/cloudlinux/.leapp/leapp.conf
new file mode 100644
index 0000000..b459134
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/.leapp/leapp.conf
@@ -0,0 +1,6 @@
+
+[repositories]
+repo_path=${repository:root_dir}
+
+[database]
+path=${repository:state_dir}/leapp.db
diff --git a/repos/system_upgrade/cloudlinux/actors/addcustomrepositories/actor.py b/repos/system_upgrade/cloudlinux/actors/addcustomrepositories/actor.py
new file mode 100644
index 0000000..783e347
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/actors/addcustomrepositories/actor.py
@@ -0,0 +1,21 @@
+from leapp.actors import Actor
+from leapp.tags import FirstBootPhaseTag, IPUWorkflowTag
+from leapp.libraries.common.cllaunch import run_on_cloudlinux
+from leapp.libraries.actor.addcustomrepositories import add_custom
+
+
+class AddCustomRepositories(Actor):
+ """
+ Copy the files from the custom-repos folder of this leapp repository into the /etc/yum.repos.d directory.
+ """
+
+ name = 'add_custom_repositories'
+ consumes = ()
+ produces = ()
+ tags = (IPUWorkflowTag, FirstBootPhaseTag)
+
+ @run_on_cloudlinux
+ def process(self):
+ # The run_on_cloudlinux decorator ensures this actor only runs on CloudLinux systems.
+ add_custom(self.log)
diff --git a/repos/system_upgrade/cloudlinux/actors/addcustomrepositories/libraries/addcustomrepositories.py b/repos/system_upgrade/cloudlinux/actors/addcustomrepositories/libraries/addcustomrepositories.py
new file mode 100644
index 0000000..74ba425
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/actors/addcustomrepositories/libraries/addcustomrepositories.py
@@ -0,0 +1,26 @@
+import os
+import os.path
+import shutil
+import logging
+
+from leapp.libraries.stdlib import api
+
+CUSTOM_REPOS_FOLDER = 'custom-repos'
+REPO_ROOT_PATH = "/etc/yum.repos.d"
+
+
+def add_custom(log):
+ # type: (logging.Logger) -> None
+ custom_repo_dir = api.get_common_folder_path(CUSTOM_REPOS_FOLDER)
+ repofiles = os.listdir(custom_repo_dir)
+
+ # Nothing to do if the custom repo directory is empty.
+ if not repofiles:
+ return
+
+ for repofile in repofiles:
+ full_repo_path = os.path.join(custom_repo_dir, repofile)
+
+ log.debug("Copying repo file {} to {}".format(repofile, REPO_ROOT_PATH))
+
+ shutil.copy(full_repo_path, REPO_ROOT_PATH)
diff --git a/repos/system_upgrade/cloudlinux/actors/backupmysqldata/actor.py b/repos/system_upgrade/cloudlinux/actors/backupmysqldata/actor.py
new file mode 100644
index 0000000..8e0f0e2
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/actors/backupmysqldata/actor.py
@@ -0,0 +1,22 @@
+import os
+from leapp.actors import Actor
+from leapp.tags import InterimPreparationPhaseTag, IPUWorkflowTag
+from leapp.libraries.common.cllaunch import run_on_cloudlinux
+from leapp.libraries.common.backup import backup_file, CLSQL_BACKUP_FILES
+
+
+class BackupMySqlData(Actor):
+ """
+ Backup cl-mysql configuration data to an external folder.
+ """
+
+ name = 'backup_my_sql_data'
+ consumes = ()
+ produces = ()
+ tags = (InterimPreparationPhaseTag.Before, IPUWorkflowTag)
+
+ @run_on_cloudlinux
+ def process(self):
+ for filename in CLSQL_BACKUP_FILES:
+ if os.path.isfile(filename):
+ backup_file(filename, os.path.basename(filename))
diff --git a/repos/system_upgrade/cloudlinux/actors/checkcllicense/actor.py b/repos/system_upgrade/cloudlinux/actors/checkcllicense/actor.py
new file mode 100644
index 0000000..7934a9c
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/actors/checkcllicense/actor.py
@@ -0,0 +1,42 @@
+from leapp.actors import Actor
+from leapp import reporting
+from leapp.reporting import Report
+from leapp.tags import ChecksPhaseTag, IPUWorkflowTag
+from leapp.libraries.stdlib import CalledProcessError, run
+from leapp.libraries.common.cllaunch import run_on_cloudlinux
+
+import os
+
+
+class CheckClLicense(Actor):
+ """
+ Check if the server has a CL license
+ """
+
+ name = 'check_cl_license'
+ consumes = ()
+ produces = (Report,)
+ tags = (ChecksPhaseTag, IPUWorkflowTag)
+
+ system_id_path = '/etc/sysconfig/rhn/systemid'
+ rhn_check_bin = '/usr/sbin/rhn_check'
+
+ @run_on_cloudlinux
+ def process(self):
+ res = None
+ if os.path.exists(self.system_id_path):
+ # checked=False: a non-zero exit code is handled below instead of raising.
+ res = run([self.rhn_check_bin], checked=False)
+ self.log.debug('rhn_check result: %s', res)
+ if not res or res['exit_code'] != 0 or res['stderr']:
+ title = 'Server does not have an active CloudLinux license'
+ summary = 'Server does not have an active CloudLinux license. This renders key CloudLinux packages ' \
+ 'inaccessible, inhibiting the upgrade process.'
+ remediation = 'Activate a CloudLinux license on this machine before running Leapp again.'
+ reporting.create_report([
+ reporting.Title(title),
+ reporting.Summary(summary),
+ reporting.Severity(reporting.Severity.HIGH),
+ reporting.Tags([reporting.Tags.OS_FACTS]),
+ reporting.Flags([reporting.Flags.INHIBITOR]),
+ reporting.Remediation(hint=remediation),
+ ])
diff --git a/repos/system_upgrade/cloudlinux/actors/checkpanelmemory/actor.py b/repos/system_upgrade/cloudlinux/actors/checkpanelmemory/actor.py
new file mode 100644
index 0000000..1b1ffbc
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/actors/checkpanelmemory/actor.py
@@ -0,0 +1,20 @@
+from leapp.actors import Actor
+from leapp.libraries.actor import checkpanelmemory
+from leapp.libraries.common.cllaunch import run_on_cloudlinux
+from leapp.models import MemoryInfo, InstalledControlPanel, Report
+from leapp.tags import ChecksPhaseTag, IPUWorkflowTag
+
+
+class CheckPanelMemory(Actor):
+ """
+ Check if the system has enough memory for the corresponding panel.
+ """
+
+ name = 'check_panel_memory'
+ consumes = (MemoryInfo, InstalledControlPanel,)
+ produces = (Report,)
+ tags = (ChecksPhaseTag, IPUWorkflowTag)
+
+ @run_on_cloudlinux
+ def process(self):
+ checkpanelmemory.process()
diff --git a/repos/system_upgrade/cloudlinux/actors/checkpanelmemory/libraries/checkpanelmemory.py b/repos/system_upgrade/cloudlinux/actors/checkpanelmemory/libraries/checkpanelmemory.py
new file mode 100644
index 0000000..81c6566
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/actors/checkpanelmemory/libraries/checkpanelmemory.py
@@ -0,0 +1,57 @@
+from leapp import reporting
+from leapp.exceptions import StopActorExecutionError
+from leapp.libraries.stdlib import api
+from leapp.models import MemoryInfo, InstalledControlPanel
+
+from leapp.libraries.common.detectcontrolpanel import (
+ NOPANEL_NAME,
+ UNKNOWN_NAME,
+ INTEGRATED_NAME,
+ CPANEL_NAME,
+)
+
+required_memory = {
+ NOPANEL_NAME: 1536 * 1024, # 1.5 Gb
+ UNKNOWN_NAME: 1536 * 1024, # 1.5 Gb
+ INTEGRATED_NAME: 1536 * 1024, # 1.5 Gb
+ CPANEL_NAME: 2048 * 1024, # 2 Gb
+}
+
+
+def _check_memory(panel, mem_info):
+ msg = {}
+
+ min_req = required_memory[panel]
+ is_ok = mem_info.mem_total >= min_req
+ msg = {} if is_ok else {"detected": mem_info.mem_total, "minimal_req": min_req}
+
+ return msg
+
+
+def process():
+ panel = next(api.consume(InstalledControlPanel), None)
+ memoryinfo = next(api.consume(MemoryInfo), None)
+ if panel is None:
+ raise StopActorExecutionError(message=("Missing information about the installed web panel."))
+ if memoryinfo is None:
+ raise StopActorExecutionError(message=("Missing information about system memory."))
+
+ minimum_req_error = _check_memory(panel.name, memoryinfo)
+
+ if minimum_req_error:
+ title = "Minimum memory requirements for panel {} are not met".format(panel.name)
+ summary = (
+ "Insufficient memory may result in an instability of the upgrade process."
+ " This can cause an interruption of the process,"
+ " which can leave the system in an unusable state. Memory detected:"
+ " {} KiB, required: {} KiB".format(minimum_req_error["detected"], minimum_req_error["minimal_req"])
+ )
+ reporting.create_report(
+ [
+ reporting.Title(title),
+ reporting.Summary(summary),
+ reporting.Severity(reporting.Severity.HIGH),
+ reporting.Tags([reporting.Tags.SANITY]),
+ reporting.Flags([reporting.Flags.INHIBITOR]),
+ ]
+ )
diff --git a/repos/system_upgrade/cloudlinux/actors/checkpanelmemory/tests/test_checkpanelmemory.py b/repos/system_upgrade/cloudlinux/actors/checkpanelmemory/tests/test_checkpanelmemory.py
new file mode 100644
index 0000000..7a3c0be
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/actors/checkpanelmemory/tests/test_checkpanelmemory.py
@@ -0,0 +1,42 @@
+from leapp import reporting
+from leapp.libraries.actor import checkpanelmemory
+from leapp.libraries.common.testutils import create_report_mocked, CurrentActorMocked
+from leapp.libraries.stdlib import api
+from leapp.models import MemoryInfo, InstalledControlPanel
+
+from leapp.libraries.common.detectcontrolpanel import (
+ UNKNOWN_NAME,
+ INTEGRATED_NAME,
+ CPANEL_NAME,
+)
+
+
+def test_check_memory_low(monkeypatch):
+ monkeypatch.setattr(api, "current_actor", CurrentActorMocked())
+ minimum_req_error = checkpanelmemory._check_memory(
+ INTEGRATED_NAME, MemoryInfo(mem_total=1024)
+ )
+ assert minimum_req_error
+
+
+def test_check_memory_high(monkeypatch):
+ monkeypatch.setattr(api, "current_actor", CurrentActorMocked())
+ minimum_req_error = checkpanelmemory._check_memory(
+ CPANEL_NAME, MemoryInfo(mem_total=16273492)
+ )
+ assert not minimum_req_error
+
+
+def test_report(monkeypatch):
+ title_msg = "Minimum memory requirements for panel {} are not met".format(
+ UNKNOWN_NAME
+ )
+
+ def _consume_mocked(model):
+ # Serve the panel and memory messages the actor library consumes.
+ if model is InstalledControlPanel:
+ return iter([InstalledControlPanel(name=UNKNOWN_NAME)])
+ return iter([MemoryInfo(mem_total=129)])
+
+ monkeypatch.setattr(api, "current_actor", CurrentActorMocked())
+ monkeypatch.setattr(api, "consume", _consume_mocked)
+ monkeypatch.setattr(reporting, "create_report", create_report_mocked())
+ checkpanelmemory.process()
+ assert reporting.create_report.called
+ assert title_msg == reporting.create_report.report_fields["title"]
+ assert reporting.Flags.INHIBITOR in reporting.create_report.report_fields["flags"]
diff --git a/repos/system_upgrade/cloudlinux/actors/checkrhnclienttools/actor.py b/repos/system_upgrade/cloudlinux/actors/checkrhnclienttools/actor.py
new file mode 100644
index 0000000..a1c1cee
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/actors/checkrhnclienttools/actor.py
@@ -0,0 +1,58 @@
+from leapp.actors import Actor
+from leapp import reporting
+from leapp.reporting import Report
+from leapp.tags import ChecksPhaseTag, IPUWorkflowTag
+from leapp.libraries.common.cllaunch import run_on_cloudlinux
+
+from leapp.libraries.actor.version import (
+ Version, VersionParsingError,
+)
+
+import subprocess
+
+
+class CheckRhnClientToolsVersion(Actor):
+ """
+ Check the rhn-client-tools package version
+ """
+
+ name = 'check_rhn_client_tools_version'
+ consumes = ()
+ produces = (Report,)
+ tags = (ChecksPhaseTag, IPUWorkflowTag)
+
+ minimal_version = Version('2.0.2')
+ minimal_release_int = 43
+ minimal_release = '%s.el7.cloudlinux' % minimal_release_int
+
+ @run_on_cloudlinux
+ def process(self):
+ title, summary, remediation = None, None, None
+ # ex:
+ # Version : 2.0.2
+ # Release : 43.el7.cloudlinux
+ # res is: b'2.0.2\n43.el7.cloudlinux\n'
+ cmd = "yum info installed rhn-client-tools | grep '^Version' -A 1 | awk '{print $3}'"
+ res = subprocess.check_output(cmd, shell=True)
+ # No output means the package is not installed; let version parsing fail cleanly below.
+ parts = res.decode().split()
+ rhn_version, rhn_release = parts if len(parts) == 2 else ('', '')
+ self.log.info('Current rhn-client-tools version: "%s"', rhn_version)
+ try:
+ current_version = Version(rhn_version)
+ except VersionParsingError:
+ title = 'rhn-client-tools: package is not installed'
+ summary = 'rhn-client-tools package is required to perform elevation.'
+ remediation = 'Install rhn-client-tools "%s" version before running Leapp again.' % self.minimal_version
+ else:
+ if current_version < self.minimal_version or int(rhn_release.split('.')[0]) < self.minimal_release_int:
+ title = 'rhn-client-tools: package version is too low'
+ summary = 'Current version of the rhn-client-tools package has no capability to perform elevation.'
+ remediation = 'Update rhn-client-tools to "%s %s" version before running Leapp again.' % (self.minimal_version, self.minimal_release)
+ if title:
+ reporting.create_report([
+ reporting.Title(title),
+ reporting.Summary(summary),
+ reporting.Severity(reporting.Severity.HIGH),
+ reporting.Tags([reporting.Tags.OS_FACTS]),
+ reporting.Flags([reporting.Flags.INHIBITOR]),
+ reporting.Remediation(hint=remediation),
+ ])
diff --git a/repos/system_upgrade/cloudlinux/actors/checkrhnclienttools/libraries/version.py b/repos/system_upgrade/cloudlinux/actors/checkrhnclienttools/libraries/version.py
new file mode 100644
index 0000000..149bce2
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/actors/checkrhnclienttools/libraries/version.py
@@ -0,0 +1,46 @@
+from six import reraise as raise_
+import sys
+
+
+class VersionException(Exception):
+ pass
+
+
+class VersionParsingError(VersionException):
+ pass
+
+
+class Version(object):
+ def __init__(self, version):
+ self._raw = version
+ try:
+ self.value = tuple(
+ map(lambda x: int(x), version.split('.'))
+ )
+ except Exception:
+ tb = sys.exc_info()[2]
+ raise_(VersionParsingError, 'failed to parse version: "%s"' % self._raw, tb)
+
+ def __eq__(self, other):
+ return self.value == other.value
+
+ def __gt__(self, other):
+ # Tuples compare lexicographically, which matches version ordering;
+ # element-wise any()/all() checks would misorder e.g. 2.1 vs 3.0.
+ return self.value > other.value
+
+ def __ge__(self, other):
+ return self.value >= other.value
+
+ def __lt__(self, other):
+ return self.value < other.value
+
+ def __le__(self, other):
+ return self.value <= other.value
+
diff --git a/repos/system_upgrade/cloudlinux/actors/checkrhnversionoverride/actor.py b/repos/system_upgrade/cloudlinux/actors/checkrhnversionoverride/actor.py
new file mode 100644
index 0000000..2321bde
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/actors/checkrhnversionoverride/actor.py
@@ -0,0 +1,39 @@
+from leapp.actors import Actor
+from leapp import reporting
+from leapp.reporting import Report
+from leapp.tags import ChecksPhaseTag, IPUWorkflowTag
+from leapp.libraries.common.cllaunch import run_on_cloudlinux
+
+
+class CheckRhnVersionOverride(Actor):
+ """
+ Check if the up2date versionOverride option has not been set.
+ """
+
+ name = 'check_rhn_version_override'
+ consumes = ()
+ produces = (Report,)
+ tags = (ChecksPhaseTag, IPUWorkflowTag)
+
+ @run_on_cloudlinux
+ def process(self):
+ up2date_config = '/etc/sysconfig/rhn/up2date'
+ with open(up2date_config, 'r') as f:
+ config_data = f.readlines()
+ for line in config_data:
+ if line.startswith('versionOverride=') and line.strip() != 'versionOverride=':
+ title = 'RHN up2date: versionOverride not empty'
+ summary = ('The RHN config file up2date has a set value of the versionOverride option.'
+ ' This value will get overwritten by the upgrade process, and non-supported values'
+ ' carry a risk of causing issues during the upgrade.')
+ remediation = ('Remove the versionOverride value from the up2date config file'
+ ' before running Leapp again.')
+ reporting.create_report([
+ reporting.Title(title),
+ reporting.Summary(summary),
+ reporting.Severity(reporting.Severity.HIGH),
+ reporting.Tags([reporting.Tags.OS_FACTS]),
+ reporting.Flags([reporting.Flags.INHIBITOR]),
+ reporting.Remediation(hint=remediation),
+ reporting.RelatedResource('file', '/etc/sysconfig/rhn/up2date')
+ ])
diff --git a/repos/system_upgrade/cloudlinux/actors/checkup2dateconfig/actor.py b/repos/system_upgrade/cloudlinux/actors/checkup2dateconfig/actor.py
new file mode 100644
index 0000000..bfc0642
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/actors/checkup2dateconfig/actor.py
@@ -0,0 +1,48 @@
+from leapp.actors import Actor
+from leapp.tags import FirstBootPhaseTag, IPUWorkflowTag
+from leapp import reporting
+from leapp.libraries.common.cllaunch import run_on_cloudlinux
+
+import os
+
+
+class CheckUp2dateConfig(Actor):
+ """
+ Move up2date.rpmnew config to the old one's place
+ """
+
+ name = 'check_up2date_config'
+ consumes = ()
+ produces = ()
+ tags = (FirstBootPhaseTag, IPUWorkflowTag)
+
+ original = '/etc/sysconfig/rhn/up2date'
+ new = original + '.rpmnew'
+
+ @run_on_cloudlinux
+ def process(self):
+ """
+ For some reason we get a new .rpmnew file instead of a modified `original`.
+ This actor carries the old `serverURL` parameter over into the new config, then moves the new config into place of the old one.
+ """
+ replace, old_lines, new_lines = None, None, None
+ if os.path.exists(self.new):
+ self.log.warning('"%s" config found, trying to replace the old one', self.new)
+ with open(self.original) as o, open(self.new) as n:
+ old_lines = o.readlines()
+ new_lines = n.readlines()
+ for line in old_lines:
+ if line.startswith('serverURL=') and line not in new_lines:
+ replace = line
+ break
+ if replace:
+ for i, line in enumerate(new_lines):
+ if line.startswith('serverURL='):
+ new_lines[i] = replace
+ self.log.warning('"serverURL" parameter will be saved as "%s"', line.strip())
+ break
+ with open(self.original, 'w') as f:
+ f.writelines(new_lines)
+ self.log.info('"%s" config is overwritten by the contents of "%s"', self.original, self.new)
+ os.unlink(self.new)
+ self.log.info('"%s" config deleted', self.new)
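The serverURL-preserving merge above can be sketched as a pure function over the two configs' lines (name and shape are mine, assuming the actor's semantics):

```python
def merge_server_url(old_lines, new_lines):
    """Carry a changed serverURL= line from the old config into the .rpmnew one."""
    keep = next(
        (line for line in old_lines
         if line.startswith('serverURL=') and line not in new_lines),
        None,
    )
    if keep is None:
        return list(new_lines)
    # Replace the serverURL line in the new config with the old value.
    return [keep if line.startswith('serverURL=') else line for line in new_lines]
```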
diff --git a/repos/system_upgrade/cloudlinux/actors/clearpackageconflicts/actor.py b/repos/system_upgrade/cloudlinux/actors/clearpackageconflicts/actor.py
new file mode 100644
index 0000000..cd6801b
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/actors/clearpackageconflicts/actor.py
@@ -0,0 +1,36 @@
+import os
+import errno
+import shutil
+
+from leapp.actors import Actor
+from leapp.libraries.common.rpms import has_package
+from leapp.models import InstalledRPM
+from leapp.tags import DownloadPhaseTag, IPUWorkflowTag
+from leapp.libraries.common.cllaunch import run_on_cloudlinux
+
+class ClearPackageConflicts(Actor):
+ """
+ Remove several python package files manually to resolve conflicts between versions of packages to be upgraded.
+ """
+
+ name = 'clear_package_conflicts'
+ consumes = (InstalledRPM,)
+ produces = ()
+ tags = (DownloadPhaseTag.Before, IPUWorkflowTag)
+
+ @run_on_cloudlinux
+ def process(self):
+ problem_packages = ["alt-python37-six", "alt-python37-pytz"]
+ problem_packages_installed = any(has_package(InstalledRPM, pkg) for pkg in problem_packages)
+
+ if problem_packages_installed:
+ problem_dirs = [
+ "/opt/alt/python37/lib/python3.7/site-packages/six-1.15.0-py3.7.egg-info",
+ "/opt/alt/python37/lib/python3.7/site-packages/pytz-2017.2-py3.7.egg-info"]
+ for p_dir in problem_dirs:
+ try:
+ if os.path.isdir(p_dir):
+ shutil.rmtree(p_dir)
+ except OSError as e:
+ if e.errno != errno.ENOENT:
+ raise
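The try/except around `rmtree` is the usual "remove if present" idiom: a missing directory is fine, anything else propagates. Standalone (helper name hypothetical):

```python
import errno
import shutil

def remove_tree_if_exists(path):
    """Delete a directory tree, tolerating its absence."""
    try:
        shutil.rmtree(path)
    except OSError as e:
        # ENOENT means the tree was already gone; re-raise everything else.
        if e.errno != errno.ENOENT:
            raise
```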
diff --git a/repos/system_upgrade/cloudlinux/actors/clmysqlrepositorysetup/actor.py b/repos/system_upgrade/cloudlinux/actors/clmysqlrepositorysetup/actor.py
new file mode 100644
index 0000000..4651580
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/actors/clmysqlrepositorysetup/actor.py
@@ -0,0 +1,35 @@
+from leapp.actors import Actor
+from leapp.reporting import Report
+from leapp.libraries.actor import clmysqlrepositorysetup
+from leapp.models import (
+ CustomTargetRepository,
+ CustomTargetRepositoryFile,
+ InstalledMySqlTypes,
+ RpmTransactionTasks,
+ InstalledRPM,
+)
+from leapp.tags import FactsPhaseTag, IPUWorkflowTag
+from leapp.libraries.common.cllaunch import run_on_cloudlinux
+
+
+class ClMysqlRepositorySetup(Actor):
+ """
+ Gather data on what MySQL/MariaDB variant is installed on the system, if any.
+ Then prepare the custom repository data and the corresponding file
+ to be sent to the target environment creator.
+ """
+
+ name = "cl_mysql_repository_setup"
+ consumes = (InstalledRPM,)
+ produces = (
+ CustomTargetRepository,
+ CustomTargetRepositoryFile,
+ InstalledMySqlTypes,
+ RpmTransactionTasks,
+ Report,
+ )
+ tags = (FactsPhaseTag, IPUWorkflowTag)
+
+ @run_on_cloudlinux
+ def process(self):
+ clmysqlrepositorysetup.process()
diff --git a/repos/system_upgrade/cloudlinux/actors/clmysqlrepositorysetup/libraries/clmysqlrepositorysetup.py b/repos/system_upgrade/cloudlinux/actors/clmysqlrepositorysetup/libraries/clmysqlrepositorysetup.py
new file mode 100644
index 0000000..1d5e4a0
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/actors/clmysqlrepositorysetup/libraries/clmysqlrepositorysetup.py
@@ -0,0 +1,254 @@
+import os
+
+from leapp.models import (
+ InstalledMySqlTypes,
+ CustomTargetRepositoryFile,
+ CustomTargetRepository,
+ RpmTransactionTasks,
+ InstalledRPM,
+ Module
+)
+from leapp.libraries.stdlib import api
+from leapp.libraries.common import repofileutils
+from leapp import reporting
+from leapp.libraries.common.clmysql import get_clmysql_type, get_pkg_prefix, MODULE_STREAMS
+
+REPO_DIR = '/etc/yum.repos.d'
+TEMP_DIR = '/var/lib/leapp/yum_custom_repofiles'
+REPOFILE_SUFFIX = ".repo"
+LEAPP_COPY_SUFFIX = "_leapp_custom.repo"
+CL_MARKERS = ['cl-mysql', 'cl-mariadb', 'cl-percona']
+MARIA_MARKERS = ['MariaDB']
+MYSQL_MARKERS = ['mysql-community']
+OLD_MYSQL_VERSIONS = ['5.7', '5.6', '5.5']
+
+
+def produce_leapp_repofile_copy(repofile_data, repo_name):
+ """
+ Create a copy of an existing Yum repository config file, modified
+ to be used during the Leapp transaction.
+ It will be placed inside the isolated overlay environment Leapp runs the upgrade from.
+ """
+ if not os.path.isdir(TEMP_DIR):
+ os.makedirs(TEMP_DIR)
+ leapp_repofile = repo_name + LEAPP_COPY_SUFFIX
+ leapp_repo_path = os.path.join(TEMP_DIR, leapp_repofile)
+ if os.path.exists(leapp_repo_path):
+ os.unlink(leapp_repo_path)
+
+ api.current_logger().debug('Producing a Leapp repofile copy: {}'.format(leapp_repo_path))
+ repofileutils.save_repofile(repofile_data, leapp_repo_path)
+ api.produce(CustomTargetRepositoryFile(file=leapp_repo_path))
+
+
+def build_install_list(prefix):
+ """
+ Find the installed cl-mysql packages that match the active
+ cl-mysql type as per Governor config.
+
+ :param prefix: Package name prefix to search for.
+ :return: List of matching packages.
+ """
+ to_upgrade = []
+ if prefix:
+ for rpm_pkgs in api.consume(InstalledRPM):
+ for pkg in rpm_pkgs.items:
+ if pkg.name.startswith(prefix):
+ to_upgrade.append(pkg.name)
+ api.current_logger().debug('cl-mysql packages to upgrade: {}'.format(to_upgrade))
+ return to_upgrade
+
+
+def process():
+ mysql_types = []
+ clmysql_type = None
+ custom_repo_msgs = []
+
+ for repofile_full in os.listdir(REPO_DIR):
+ # Don't touch non-repository files or copied repofiles created by Leapp.
+ if repofile_full.endswith(LEAPP_COPY_SUFFIX) or not repofile_full.endswith(REPOFILE_SUFFIX):
+ continue
+ # Cut the .repo part to get only the name.
+ repofile_name = repofile_full[:-len(REPOFILE_SUFFIX)]
+ full_repo_path = os.path.join(REPO_DIR, repofile_full)
+
+ # Parse any repository files that may have something to do with MySQL or MariaDB.
+ api.current_logger().debug('Processing repofile {}, full path: {}'.format(repofile_full, full_repo_path))
+
+ # Process CL-provided options.
+ if any(mark in repofile_name for mark in CL_MARKERS):
+ repofile_data = repofileutils.parse_repofile(full_repo_path)
+ data_to_log = [
+ (repo_data.repoid, "enabled" if repo_data.enabled else "disabled")
+ for repo_data in repofile_data.data
+ ]
+
+ api.current_logger().debug('repoids from CloudLinux repofile {}: {}'.format(repofile_name, data_to_log))
+
+ # Were any repositories enabled?
+ for repo in repofile_data.data:
+ # cl-mysql URLs look like this:
+ # baseurl=http://repo.cloudlinux.com/other/cl$releasever/mysqlmeta/cl-mariadb-10.3/$basearch/
+ # We don't want any duplicate repoid entries.
+ repo.repoid = repo.repoid + '-8'
+ # releasever may be something like 8.6, while only 8 is acceptable.
+ repo.baseurl = repo.baseurl.replace('/cl$releasever/', '/cl8/')
+
+ # mysqlclient is usually disabled when installed from CL MySQL Governor.
+ # However, it should be enabled for the Leapp upgrade, seeing as some packages
+ # from it won't update otherwise.
+ if repo.enabled or repo.repoid == 'mysqclient-8':
+ clmysql_type = get_clmysql_type()
+ api.current_logger().debug('Generating custom cl-mysql repo: {}'.format(repo.repoid))
+ custom_repo_msgs.append(CustomTargetRepository(
+ repoid=repo.repoid,
+ name=repo.name,
+ baseurl=repo.baseurl,
+ enabled=True,
+ ))
+
+ if any(repo.enabled for repo in repofile_data.data):
+ mysql_types.append('cloudlinux')
+ produce_leapp_repofile_copy(repofile_data, repofile_name)
+ else:
+ api.current_logger().debug("No repos from CloudLinux repofile {} enabled, ignoring".format(
+ repofile_name
+ ))
+
+ # Process MariaDB options.
+ elif any(mark in repofile_name for mark in MARIA_MARKERS):
+ repofile_data = repofileutils.parse_repofile(full_repo_path)
+
+ for repo in repofile_data.data:
+ # Maria URLs look like this:
+ # baseurl = https://archive.mariadb.org/mariadb-10.3/yum/centos/7/x86_64
+ # baseurl = https://archive.mariadb.org/mariadb-10.7/yum/centos7-ppc64/
+ # We want to replace the 7 in OS name after /yum/
+ repo.repoid = repo.repoid + '-8'
+ if repo.enabled:
+ url_parts = repo.baseurl.split('yum')
+ url_parts[1] = 'yum' + url_parts[1].replace('7', '8')
+ repo.baseurl = ''.join(url_parts)
+
+ api.current_logger().debug('Generating custom MariaDB repo: {}'.format(repo.repoid))
+ custom_repo_msgs.append(CustomTargetRepository(
+ repoid=repo.repoid,
+ name=repo.name,
+ baseurl=repo.baseurl,
+ enabled=repo.enabled,
+ ))
+
+ if any(repo.enabled for repo in repofile_data.data):
+ # Since MariaDB URLs have major versions written in, we need a new repo file
+ # to feed to the target userspace.
+ mysql_types.append('mariadb')
+ produce_leapp_repofile_copy(repofile_data, repofile_name)
+ else:
+ api.current_logger().debug("No repos from MariaDB repofile {} enabled, ignoring".format(
+ repofile_name
+ ))
+
+ # Process MySQL options.
+ elif any(mark in repofile_name for mark in MYSQL_MARKERS):
+ repofile_data = repofileutils.parse_repofile(full_repo_path)
+
+ for repo in repofile_data.data:
+ if repo.enabled:
+ # MySQL package repos don't have these versions available for EL8 anymore.
+ # There'll be nothing to upgrade to.
+ # CL repositories do provide them, though.
+ if any(ver in repo.name for ver in OLD_MYSQL_VERSIONS):
+ reporting.create_report([
+ reporting.Title('An old MySQL version will no longer be available in EL8'),
+ reporting.Summary(
+ 'A yum repository for an old MySQL version is enabled on this system. '
+ 'It will no longer be available on the target system. '
+ 'This situation cannot be automatically resolved by Leapp. '
+ 'Problematic repository: {0}'.format(repo.repoid)
+ ),
+ reporting.Severity(reporting.Severity.MEDIUM),
+ reporting.Tags([reporting.Tags.REPOSITORY]),
+ reporting.Flags([reporting.Flags.INHIBITOR]),
+ reporting.Remediation(hint=(
+ 'Upgrade to a more recent MySQL version, '
+ 'uninstall the deprecated MySQL packages and disable the repository, '
+ 'or switch to CloudLinux MySQL Governor-provided version of MySQL to continue using '
+ 'the old MySQL version.'
+ )
+ )
+ ])
+ else:
+ # URLs look like this:
+ # baseurl = https://repo.mysql.com/yum/mysql-8.0-community/el/7/x86_64/
+ repo.repoid = repo.repoid + '-8'
+ repo.baseurl = repo.baseurl.replace('/el/7/', '/el/8/')
+ api.current_logger().debug('Generating custom MySQL repo: {}'.format(repo.repoid))
+ custom_repo_msgs.append(CustomTargetRepository(
+ repoid=repo.repoid,
+ name=repo.name,
+ baseurl=repo.baseurl,
+ enabled=repo.enabled,
+ ))
+
+ if any(repo.enabled for repo in repofile_data.data):
+ mysql_types.append('mysql')
+ produce_leapp_repofile_copy(repofile_data, repofile_name)
+ else:
+ api.current_logger().debug("No repos from MySQL repofile {} enabled, ignoring".format(
+ repofile_name
+ ))
+
+ if len(mysql_types) == 0:
+ api.current_logger().debug('No installed MySQL/MariaDB detected')
+ else:
+ reporting.create_report([
+ reporting.Title('MySQL database backup recommended'),
+ reporting.Summary(
+ 'A MySQL/MariaDB installation has been detected on this machine. '
+ 'It is recommended to make a database backup before proceeding with the upgrade.'
+ ),
+ reporting.Severity(reporting.Severity.HIGH),
+ reporting.Tags([reporting.Tags.REPOSITORY]),
+ ])
+
+ for msg in custom_repo_msgs:
+ api.produce(msg)
+
+ if len(mysql_types) == 1:
+ api.current_logger().debug(
+ "Detected MySQL/MariaDB type: {}, version: {}".format(
+ mysql_types[0], clmysql_type
+ )
+ )
+ else:
+ api.current_logger().warning('Detected multiple MySQL types: {}'.format(", ".join(mysql_types)))
+ reporting.create_report([
+ reporting.Title('Multiple MySQL/MariaDB versions detected'),
+ reporting.Summary(
+ 'Package repositories for multiple distributions of MySQL/MariaDB '
+ 'were detected on the system. '
+ 'Leapp will attempt to update all distributions detected. '
+ 'To update only the distribution you use, disable YUM package repositories for all '
+ 'other distributions. '
+ 'Detected: {0}'.format(", ".join(mysql_types))
+ ),
+ reporting.Severity(reporting.Severity.MEDIUM),
+ reporting.Tags([reporting.Tags.REPOSITORY, reporting.Tags.OS_FACTS]),
+ ])
+
+ if 'cloudlinux' in mysql_types and clmysql_type in MODULE_STREAMS.keys():
+ mod_name, mod_stream = MODULE_STREAMS[clmysql_type].split(':')
+ modules_to_enable = [Module(name=mod_name, stream=mod_stream)]
+ pkg_prefix = get_pkg_prefix(clmysql_type)
+
+ api.current_logger().debug('Enabling DNF module: {}:{}'.format(mod_name, mod_stream))
+ api.produce(RpmTransactionTasks(
+ to_upgrade=build_install_list(pkg_prefix),
+ modules_to_enable=modules_to_enable
+ )
+ )
+
+ api.produce(InstalledMySqlTypes(
+ types=mysql_types,
+ version=clmysql_type,
+ ))
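The baseurl rewrites above are plain string surgery; extracted as pure functions (names mine), mirroring the actor's logic, they are easy to check. Note the MariaDB variant splits on `'yum'` so the 7→8 replacement only touches the OS part of the URL, not the MariaDB version that precedes it:

```python
def rewrite_mysql_baseurl(baseurl):
    # mysql-community repos keep the EL major version in the path.
    return baseurl.replace('/el/7/', '/el/8/')

def rewrite_mariadb_baseurl(baseurl):
    # MariaDB URLs embed the MariaDB version *before* 'yum' and the OS
    # version after it; splitting on 'yum' confines the replace to the OS part.
    parts = baseurl.split('yum')
    parts[1] = 'yum' + parts[1].replace('7', '8')
    return ''.join(parts)
```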
diff --git a/repos/system_upgrade/cloudlinux/actors/detectcontrolpanel/actor.py b/repos/system_upgrade/cloudlinux/actors/detectcontrolpanel/actor.py
new file mode 100644
index 0000000..75904d9
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/actors/detectcontrolpanel/actor.py
@@ -0,0 +1,54 @@
+from leapp.actors import Actor
+from leapp import reporting
+from leapp.reporting import Report
+from leapp.models import InstalledControlPanel
+from leapp.tags import ChecksPhaseTag, IPUWorkflowTag
+from leapp.exceptions import StopActorExecutionError
+
+from leapp.libraries.common.cllaunch import run_on_cloudlinux
+from leapp.libraries.common.detectcontrolpanel import (
+ NOPANEL_NAME,
+ UNKNOWN_NAME,
+ INTEGRATED_NAME,
+ CPANEL_NAME,
+)
+
+
+class DetectControlPanel(Actor):
+ """
+ Inhibit the upgrade if an unsupported control panel is found.
+ """
+
+ name = "detect_control_panel"
+ consumes = (InstalledControlPanel,)
+ produces = (Report,)
+ tags = (ChecksPhaseTag, IPUWorkflowTag)
+
+ @run_on_cloudlinux
+ def process(self):
+ panel = next(self.consume(InstalledControlPanel), None)
+ if panel is None:
+ raise StopActorExecutionError(message=("Missing information about the installed web panel."))
+
+ if panel.name == CPANEL_NAME:
+ self.log.debug('cPanel detected, upgrade proceeding')
+ elif panel.name == INTEGRATED_NAME or panel.name == UNKNOWN_NAME or panel.name == NOPANEL_NAME:
+ self.log.debug('Integrated/no panel detected, upgrade proceeding')
+ elif panel:
+ # Block the upgrade on any systems with a non-supported panel detected.
+ reporting.create_report(
+ [
+ reporting.Title(
+ "The upgrade process should not be run on systems with a control panel present."
+ ),
+ reporting.Summary(
+ "Systems with a control panel present are not supported at the moment."
+ " No control panels are currently included in the Leapp database, which"
+ " makes loss of functionality after the upgrade extremely likely."
+ " Detected panel: {}.".format(panel.name)
+ ),
+ reporting.Severity(reporting.Severity.HIGH),
+ reporting.Tags([reporting.Tags.OS_FACTS]),
+ reporting.Flags([reporting.Flags.INHIBITOR]),
+ ]
+ )
diff --git a/repos/system_upgrade/cloudlinux/actors/enableyumspacewalkplugin/actor.py b/repos/system_upgrade/cloudlinux/actors/enableyumspacewalkplugin/actor.py
new file mode 100644
index 0000000..95fcce9
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/actors/enableyumspacewalkplugin/actor.py
@@ -0,0 +1,56 @@
+from leapp.actors import Actor
+from leapp.tags import FirstBootPhaseTag, IPUWorkflowTag
+from leapp import reporting
+from leapp.reporting import Report
+from leapp.libraries.common.cllaunch import run_on_cloudlinux
+
+try:
+ # py2
+ import ConfigParser as configparser
+ ParserClass = configparser.SafeConfigParser
+except Exception:
+ # py3
+ import configparser
+ ParserClass = configparser.ConfigParser
+
+
+class EnableYumSpacewalkPlugin(Actor):
+ """
+ Enable yum spacewalk plugin if it's disabled
+ Required for the CLN channel functionality to work properly
+ """
+
+ name = 'enable_yum_spacewalk_plugin'
+ consumes = ()
+ produces = (Report,)
+ tags = (FirstBootPhaseTag, IPUWorkflowTag)
+
+ config = '/etc/yum/pluginconf.d/spacewalk.conf'
+
+ @run_on_cloudlinux
+ def process(self):
+ summary = 'yum spacewalk plugin must be enabled for the CLN channels to work properly. ' \
+ 'Please make sure it is enabled. Default config path is "%s"' % self.config
+ title = None
+
+ parser = ParserClass(allow_no_value=True)
+ try:
+ red = parser.read(self.config)
+ if not red:
+ title = 'yum spacewalk plugin config not found'
+ if parser.get('main', 'enabled') != '1':
+ parser.set('main', 'enabled', '1')
+ with open(self.config, 'w') as f:
+ parser.write(f)
+ self.log.info('yum spacewalk plugin enabled')
+ return
+ except Exception as e:
+ title = 'yum spacewalk plugin config error: %s' % e
+
+ if title:
+ reporting.create_report([
+ reporting.Title(title),
+ reporting.Summary(summary),
+ reporting.Severity(reporting.Severity.MEDIUM),
+ reporting.Tags([reporting.Tags.SANITY])
+ ])
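The enable-the-plugin step boils down to forcing `enabled = 1` in the `[main]` section. A py3-only sketch of that transformation on config text (the actor's py2 fallback omitted; function name is mine):

```python
import configparser
import io

def force_enabled(config_text):
    """Return plugin config text with [main] enabled forced to 1."""
    parser = configparser.ConfigParser(allow_no_value=True)
    parser.read_string(config_text)
    if parser.get('main', 'enabled') != '1':
        parser.set('main', 'enabled', '1')
    out = io.StringIO()
    parser.write(out)
    return out.getvalue()
```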
diff --git a/repos/system_upgrade/cloudlinux/actors/registerpackageworkarounds/actor.py b/repos/system_upgrade/cloudlinux/actors/registerpackageworkarounds/actor.py
new file mode 100644
index 0000000..e946c2b
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/actors/registerpackageworkarounds/actor.py
@@ -0,0 +1,21 @@
+from leapp.actors import Actor
+from leapp.libraries.actor import registerpackageworkarounds
+from leapp.models import InstalledRPM, DNFWorkaround, PreRemovedRpmPackages
+from leapp.tags import FactsPhaseTag, IPUWorkflowTag
+from leapp.libraries.common.cllaunch import run_on_cloudlinux
+
+
+class RegisterPackageWorkarounds(Actor):
+ """
+ Register a yum workaround that adjusts problematic packages which would
+ otherwise break the main upgrade transaction.
+ """
+
+ name = 'register_package_workarounds'
+ consumes = (InstalledRPM,)
+ produces = (DNFWorkaround, PreRemovedRpmPackages)
+ tags = (IPUWorkflowTag, FactsPhaseTag)
+
+ @run_on_cloudlinux
+ def process(self):
+ registerpackageworkarounds.process()
diff --git a/repos/system_upgrade/cloudlinux/actors/registerpackageworkarounds/libraries/registerpackageworkarounds.py b/repos/system_upgrade/cloudlinux/actors/registerpackageworkarounds/libraries/registerpackageworkarounds.py
new file mode 100644
index 0000000..358403b
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/actors/registerpackageworkarounds/libraries/registerpackageworkarounds.py
@@ -0,0 +1,38 @@
+from leapp.actors import Actor
+from leapp.models import InstalledRPM, DNFWorkaround, PreRemovedRpmPackages
+from leapp.libraries.stdlib import api
+
+# NOTE: The related packages are listed both here *and* in the workaround script!
+# If the list changes, it has to change in both places.
+# This is a limitation of the current DNFWorkaround implementation.
+# TODO: unify the list in one place. A separate common file, perhaps?
+TO_REINSTALL = ['gettext-devel'] # These packages will be marked for installation
+TO_DELETE = [] # These won't be
+
+
+def produce_workaround_msg(pkg_list, reinstall):
+ if not pkg_list:
+ return
+ preremoved_pkgs = PreRemovedRpmPackages(install=reinstall)
+ # Only produce a message if a package is actually about to be uninstalled
+ for rpm_pkgs in api.consume(InstalledRPM):
+ for pkg in rpm_pkgs.items:
+ if pkg.name in pkg_list:
+ preremoved_pkgs.items.append(pkg)
+ api.current_logger().debug("Listing package {} to be pre-removed".format(pkg.name))
+ if preremoved_pkgs.items:
+ api.produce(preremoved_pkgs)
+
+
+def process():
+ produce_workaround_msg(TO_REINSTALL, True)
+ produce_workaround_msg(TO_DELETE, False)
+
+ api.produce(
+ # yum doesn't consider attempting to remove a non-existent package to be an error
+ # we can safely give it the entire package list without checking if all are installed
+ DNFWorkaround(
+ display_name='problem package modification',
+ script_path=api.get_tool_path('remove-problem-packages'),
+ )
+ )
diff --git a/repos/system_upgrade/cloudlinux/actors/replacerpmnewconfigs/actor.py b/repos/system_upgrade/cloudlinux/actors/replacerpmnewconfigs/actor.py
new file mode 100644
index 0000000..56cce4f
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/actors/replacerpmnewconfigs/actor.py
@@ -0,0 +1,81 @@
+from __future__ import print_function
+import os
+import fileinput
+
+from leapp.actors import Actor
+from leapp.tags import ApplicationsPhaseTag, IPUWorkflowTag
+from leapp import reporting
+from leapp.reporting import Report
+from leapp.libraries.common.cllaunch import run_on_cloudlinux
+
+REPO_DIR = '/etc/yum.repos.d'
+REPO_DELETE_MARKERS = ['cloudlinux', 'imunify', 'epel']
+REPO_BACKUP_MARKERS = []
+RPMNEW = '.rpmnew'
+LEAPP_BACKUP_SUFFIX = '.leapp-backup'
+
+
+class ReplaceRpmnewConfigs(Actor):
+ """
+ Replace CloudLinux-related repository config .rpmnew files.
+ """
+
+ name = 'replace_rpmnew_configs'
+ consumes = ()
+ produces = (Report,)
+ tags = (ApplicationsPhaseTag, IPUWorkflowTag)
+
+ @run_on_cloudlinux
+ def process(self):
+ deleted_repofiles = []
+ renamed_repofiles = []
+
+ for reponame in os.listdir(REPO_DIR):
+ if any(mark in reponame for mark in REPO_DELETE_MARKERS) and RPMNEW in reponame:
+ base_reponame = reponame[:-len(RPMNEW)]
+ base_path = os.path.join(REPO_DIR, base_reponame)
+ new_file_path = os.path.join(REPO_DIR, reponame)
+
+ os.unlink(base_path)
+ os.rename(new_file_path, base_path)
+ deleted_repofiles.append(base_reponame)
+ self.log.debug('Yum repofile replaced: {}'.format(base_path))
+
+ if any(mark in reponame for mark in REPO_BACKUP_MARKERS) and RPMNEW in reponame:
+ base_reponame = reponame[:-len(RPMNEW)]
+ base_path = os.path.join(REPO_DIR, base_reponame)
+ new_file_path = os.path.join(REPO_DIR, reponame)
+ backup_path = os.path.join(REPO_DIR, base_reponame + LEAPP_BACKUP_SUFFIX)
+
+ os.rename(base_path, backup_path)
+ os.rename(new_file_path, base_path)
+ renamed_repofiles.append(base_reponame)
+ self.log.debug('Yum repofile replaced with backup: {}'.format(base_path))
+
+ # Disable any old repositories.
+ for reponame in os.listdir(REPO_DIR):
+ if LEAPP_BACKUP_SUFFIX in reponame:
+ repofile_path = os.path.join(REPO_DIR, reponame)
+ for line in fileinput.input(repofile_path, inplace=True):
+ if line.startswith('enabled'):
+ print("enabled = 0")
+ else:
+ print(line, end='')
+
+ if renamed_repofiles or deleted_repofiles:
+ deleted_string = '\n'.join(['{}'.format(repofile_name) for repofile_name in deleted_repofiles])
+ replaced_string = '\n'.join(['{}'.format(repofile_name) for repofile_name in renamed_repofiles])
+ reporting.create_report([
+ reporting.Title('CloudLinux repository config files replaced by updated versions'),
+ reporting.Summary(
+ 'One or more RPM repository configuration files '
+ 'have been replaced with new versions provided by the upgraded packages. '
+ 'Any manual modifications to these files have been overridden by this process. '
+ 'Old versions of backed up files are contained in files with a naming pattern '
+ '<repository_file_name>.leapp-backup. '
+ 'Deleted repository files: \n{}\n'
+ 'Backed up repository files: \n{}'.format(deleted_string, replaced_string)
+ ),
+ reporting.Severity(reporting.Severity.MEDIUM),
+ reporting.Tags([reporting.Tags.UPGRADE_PROCESS])
+ ])
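The `fileinput` loop rewrites each backed-up repo file in place; as a pure line transform (name mine) the disabling step looks like this:

```python
def disable_all_repos(lines):
    """Force every 'enabled' key in a repo file's lines to 0, as the actor does."""
    return ['enabled = 0\n' if line.startswith('enabled') else line
            for line in lines]
```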
diff --git a/repos/system_upgrade/cloudlinux/actors/resetrhnversionoverride/actor.py b/repos/system_upgrade/cloudlinux/actors/resetrhnversionoverride/actor.py
new file mode 100644
index 0000000..21b2164
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/actors/resetrhnversionoverride/actor.py
@@ -0,0 +1,25 @@
+from leapp.actors import Actor
+from leapp.tags import FinalizationPhaseTag, IPUWorkflowTag
+from leapp.libraries.common.cllaunch import run_on_cloudlinux
+
+
+class ResetRhnVersionOverride(Actor):
+ """
+ Reset the versionOverride value in the RHN up2date config to empty.
+ """
+
+ name = 'reset_rhn_version_override'
+ consumes = ()
+ produces = ()
+ tags = (FinalizationPhaseTag, IPUWorkflowTag)
+
+ @run_on_cloudlinux
+ def process(self):
+ up2date_config = '/etc/sysconfig/rhn/up2date'
+ with open(up2date_config, 'r') as f:
+ config_data = f.readlines()
+ for i, line in enumerate(config_data):
+ if line.startswith('versionOverride='):
+ config_data[i] = 'versionOverride=\n'
+ with open(up2date_config, 'w') as f:
+ f.writelines(config_data)
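Note that rebinding the `for` loop variable does not modify the underlying list; the reset has to write back by index for the subsequent `writelines` to see the change. A standalone sketch (function name mine):

```python
def reset_version_override(config_lines):
    """Blank out every versionOverride value, writing back by index."""
    for i, line in enumerate(config_lines):
        if line.startswith('versionOverride='):
            config_lines[i] = 'versionOverride=\n'
    return config_lines
```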
diff --git a/repos/system_upgrade/cloudlinux/actors/restoremysqldata/actor.py b/repos/system_upgrade/cloudlinux/actors/restoremysqldata/actor.py
new file mode 100644
index 0000000..8e27d99
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/actors/restoremysqldata/actor.py
@@ -0,0 +1,46 @@
+import os
+from leapp.actors import Actor
+from leapp import reporting
+from leapp.models import Report
+from leapp.tags import ThirdPartyApplicationsPhaseTag, IPUWorkflowTag
+from leapp.libraries.common.cllaunch import run_on_cloudlinux
+from leapp.libraries.common.backup import restore_file, CLSQL_BACKUP_FILES, BACKUP_DIR
+
+
+class RestoreMySqlData(Actor):
+ """
+ Restore cl-mysql configuration data from an external folder.
+ """
+
+ name = 'restore_my_sql_data'
+ consumes = ()
+ produces = (Report,)
+ tags = (ThirdPartyApplicationsPhaseTag, IPUWorkflowTag)
+
+ @run_on_cloudlinux
+ def process(self):
+ failed_files = []
+
+ for filepath in CLSQL_BACKUP_FILES:
+ try:
+ restore_file(os.path.basename(filepath), filepath)
+ except OSError as e:
+ failed_files.append(filepath)
+ self.log.error('Could not restore file {}: {}'.format(filepath, e.strerror))
+
+ if failed_files:
+ title = "Failed to restore backed up configuration files"
+ summary = (
+ "Some backed up configuration files were unable to be restored automatically."
+ " Please check the upgrade log for detailed error descriptions"
+ " and restore the files from the backup directory {} manually if needed."
+ " Files not restored: {}".format(BACKUP_DIR, failed_files)
+ )
+ reporting.create_report(
+ [
+ reporting.Title(title),
+ reporting.Summary(summary),
+ reporting.Severity(reporting.Severity.HIGH),
+ reporting.Tags([reporting.Tags.UPGRADE_PROCESS]),
+ ]
+ )
diff --git a/repos/system_upgrade/cloudlinux/actors/restorerepositoryconfigurations/actor.py b/repos/system_upgrade/cloudlinux/actors/restorerepositoryconfigurations/actor.py
new file mode 100644
index 0000000..5b90c59
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/actors/restorerepositoryconfigurations/actor.py
@@ -0,0 +1,46 @@
+from leapp.actors import Actor
+from leapp.libraries.stdlib import api
+from leapp.libraries.common import dnfconfig, mounting, repofileutils
+from leapp.libraries.common.cllaunch import run_on_cloudlinux
+from leapp.models import (
+ RepositoriesFacts,
+)
+from leapp.tags import ApplicationsPhaseTag, IPUWorkflowTag
+
+
+class RestoreRepositoryConfigurations(Actor):
+ """
+ Compare the list of repositories that were present on the pre-upgrade system
+ with the current list (after the main upgrade transaction).
+ If any repository with a matching repoID has changed its enabled state (e.g. due to
+ changes from RPM package updates), restore its enabled setting to the pre-upgrade state.
+ """
+
+ name = 'restore_repository_configurations'
+ consumes = (RepositoriesFacts,)
+ produces = ()
+ tags = (ApplicationsPhaseTag.After, IPUWorkflowTag)
+
+ @run_on_cloudlinux
+ def process(self):
+ current_repofiles = repofileutils.get_parsed_repofiles()
+ current_repository_list = []
+ for repofile in current_repofiles:
+ current_repository_list.extend(repofile.data)
+ current_repodict = dict((repo.repoid, repo) for repo in current_repository_list)
+
+ current_repoids_string = ", ".join(current_repodict.keys())
+ self.log.debug("Repositories currently present on the system: {}".format(current_repoids_string))
+
+ cmd_context = mounting.NotIsolatedActions(base_dir='/')
+
+ for repos_facts in api.consume(RepositoriesFacts):
+ for repo_file in repos_facts.repositories:
+ for repo_data in repo_file.data:
+ if repo_data.repoid in current_repodict:
+ if repo_data.enabled and not current_repodict[repo_data.repoid].enabled:
+ self.log.debug("Repository {} was enabled pre-upgrade, restoring".format(repo_data.repoid))
+ dnfconfig.enable_repository(cmd_context, repo_data.repoid)
+ elif not repo_data.enabled and current_repodict[repo_data.repoid].enabled:
+ self.log.debug("Repository {} was disabled pre-upgrade, restoring".format(repo_data.repoid))
+ dnfconfig.disable_repository(cmd_context, repo_data.repoid)
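The restore pass is a diff of enabled flags keyed by repoid. Modeled on plain dicts of `repoid -> enabled` (a simplification of the actor's message types; names mine):

```python
def repos_to_toggle(pre_upgrade, current):
    """Return (to_enable, to_disable) repoids whose enabled flag changed."""
    to_enable, to_disable = [], []
    for repoid, was_enabled in pre_upgrade.items():
        if repoid not in current:
            continue  # repo vanished during the upgrade; nothing to restore
        if was_enabled and not current[repoid]:
            to_enable.append(repoid)
        elif not was_enabled and current[repoid]:
            to_disable.append(repoid)
    return to_enable, to_disable
```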
diff --git a/repos/system_upgrade/cloudlinux/actors/scancontrolpanel/actor.py b/repos/system_upgrade/cloudlinux/actors/scancontrolpanel/actor.py
new file mode 100644
index 0000000..96524ed
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/actors/scancontrolpanel/actor.py
@@ -0,0 +1,27 @@
+from leapp.actors import Actor
+from leapp.models import InstalledControlPanel
+from leapp.tags import FactsPhaseTag, IPUWorkflowTag
+
+from leapp.libraries.common.cllaunch import run_on_cloudlinux
+from leapp.libraries.common.detectcontrolpanel import detect_panel
+
+
+class ScanControlPanel(Actor):
+ """
+ Scan for the presence of a control panel and produce a corresponding message.
+ """
+
+ name = 'scan_control_panel'
+ consumes = ()
+ produces = (InstalledControlPanel,)
+ tags = (FactsPhaseTag, IPUWorkflowTag)
+
+ @run_on_cloudlinux
+ def process(self):
+ detected_panel = detect_panel()
+
+ self.produce(
+ InstalledControlPanel(
+ name=detected_panel
+ )
+ )
diff --git a/repos/system_upgrade/cloudlinux/actors/scanrolloutrepositories/actor.py b/repos/system_upgrade/cloudlinux/actors/scanrolloutrepositories/actor.py
new file mode 100644
index 0000000..202e5f7
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/actors/scanrolloutrepositories/actor.py
@@ -0,0 +1,30 @@
+from leapp.actors import Actor
+from leapp.libraries.actor import scanrolloutrepositories
+from leapp.models import (
+ CustomTargetRepositoryFile,
+ CustomTargetRepository,
+ UsedRepositories
+)
+from leapp.tags import FactsPhaseTag, IPUWorkflowTag
+from leapp.libraries.common.cllaunch import run_on_cloudlinux
+
+
+class ScanRolloutRepositories(Actor):
+ """
+ Scan for repository files associated with the Gradual Rollout System.
+
+ Normally these repositories aren't included in the upgrade, but if any
+ packages on the system were installed from them, ignoring them could
+ cause problems during the upgrade.
+
+ Only those repositories that had packages installed from them are included.
+ """
+
+ name = 'scan_rollout_repositories'
+ consumes = (UsedRepositories,)
+ produces = (CustomTargetRepositoryFile, CustomTargetRepository)
+ tags = (FactsPhaseTag, IPUWorkflowTag)
+
+ @run_on_cloudlinux
+ def process(self):
+ scanrolloutrepositories.process()
diff --git a/repos/system_upgrade/cloudlinux/actors/scanrolloutrepositories/libraries/scanrolloutrepositories.py b/repos/system_upgrade/cloudlinux/actors/scanrolloutrepositories/libraries/scanrolloutrepositories.py
new file mode 100644
index 0000000..cc2d1e9
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/actors/scanrolloutrepositories/libraries/scanrolloutrepositories.py
@@ -0,0 +1,46 @@
+import os
+
+from leapp.models import (
+ CustomTargetRepositoryFile,
+ CustomTargetRepository,
+ UsedRepositories
+)
+from leapp.libraries.stdlib import api
+from leapp.libraries.common import repofileutils
+
+REPO_DIR = '/etc/yum.repos.d'
+ROLLOUT_MARKER = 'rollout'
+CL_MARKERS = ['cloudlinux', 'imunify']
+
+
+def process():
+ used_list = []
+ for used_repos in api.consume(UsedRepositories):
+ for used_repo in used_repos.repositories:
+ used_list.append(used_repo.repository)
+
+ for reponame in os.listdir(REPO_DIR):
+ if ROLLOUT_MARKER not in reponame or not any(mark in reponame for mark in CL_MARKERS):
+ continue
+
+ api.current_logger().debug("Detected a rollout repository file: {}".format(reponame))
+
+ full_repo_path = os.path.join(REPO_DIR, reponame)
+ repofile = repofileutils.parse_repofile(full_repo_path)
+
+        # Ignore the repositories that are enabled, but have no packages installed from them.
+        if not any(repo.repoid in used_list for repo in repofile.data):
+            api.current_logger().debug("No used repositories found in {}, skipping".format(reponame))
+            continue
+
+        api.current_logger().debug("Rollout file {} has used repositories, adding".format(reponame))
+
+ for repo in repofile.data:
+ api.produce(CustomTargetRepository(
+ repoid=repo.repoid,
+ name=repo.name,
+ baseurl=repo.baseurl,
+ enabled=repo.enabled,
+ ))
+
+ api.produce(CustomTargetRepositoryFile(file=full_repo_path))
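The filename filter at the top of `process()` can be illustrated in isolation. This is a sketch of the same marker check; the sample repo file names are hypothetical:

```python
ROLLOUT_MARKER = 'rollout'
CL_MARKERS = ['cloudlinux', 'imunify']


def is_rollout_repofile(reponame):
    # Mirrors the filter in scanrolloutrepositories.process(): the file name
    # must contain 'rollout' and at least one of the CloudLinux markers.
    return ROLLOUT_MARKER in reponame and any(mark in reponame for mark in CL_MARKERS)


print(is_rollout_repofile('cloudlinux-rollout-5.repo'))  # True
print(is_rollout_repofile('cloudlinux.repo'))            # False
print(is_rollout_repofile('epel-rollout.repo'))          # False
```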
diff --git a/repos/system_upgrade/cloudlinux/actors/switchclnchannel/actor.py b/repos/system_upgrade/cloudlinux/actors/switchclnchannel/actor.py
new file mode 100644
index 0000000..79eb3e4
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/actors/switchclnchannel/actor.py
@@ -0,0 +1,59 @@
+from leapp.actors import Actor
+from leapp.libraries.stdlib import api
+from leapp.tags import DownloadPhaseTag, IPUWorkflowTag
+from leapp.libraries.stdlib import CalledProcessError, run
+from leapp.libraries.common.cllaunch import run_on_cloudlinux
+from leapp import reporting
+from leapp.reporting import Report
+
+
+class SwitchClnChannel(Actor):
+ """
+ Switch CLN channel from 7 to 8 to be able to download upgrade packages
+ """
+
+ name = "switch_cln_channel"
+ consumes = ()
+ produces = (Report,)
+ tags = (IPUWorkflowTag, DownloadPhaseTag.Before)
+
+ switch_bin = "/usr/sbin/cln-switch-channel"
+
+ @run_on_cloudlinux
+ def process(self):
+ switch_cmd = [self.switch_bin, "-t", "8", "-o", "-f"]
+ yum_clean_cmd = ["yum", "clean", "all"]
+ update_release_cmd = ["yum", "update", "-y", "cloudlinux-release"]
+ try:
+ res = run(switch_cmd)
+ self.log.debug('Command "%s" result: %s', switch_cmd, res)
+ res = run(yum_clean_cmd) # required to update the repolist
+ self.log.debug('Command "%s" result: %s', yum_clean_cmd, res)
+ res = run(update_release_cmd)
+ self.log.debug('Command "%s" result: %s', update_release_cmd, res)
+ except CalledProcessError as e:
+ reporting.create_report(
+ [
+ reporting.Title(
+ "Failed to switch CloudLinux Network channel from 7 to 8."
+ ),
+ reporting.Summary(
+ "Command {} failed with exit code {}."
+ " The most probable cause of that is a problem with this system's"
+ " CloudLinux Network registration.".format(e.command, e.exit_code)
+ ),
+ reporting.Remediation(
+ hint="Check the state of this system's registration with \'rhn_check\'."
+ " Attempt to re-register the system with \'rhnreg_ks --force\'."
+ ),
+ reporting.Severity(reporting.Severity.HIGH),
+ reporting.Tags(
+ [reporting.Tags.OS_FACTS, reporting.Tags.AUTHENTICATION]
+ ),
+ reporting.Flags([reporting.Flags.INHIBITOR]),
+ ]
+ )
+ except OSError as e:
+ api.current_logger().error(
+ "Could not call RHN command: Message: %s", str(e), exc_info=True
+ )
diff --git a/repos/system_upgrade/cloudlinux/actors/updatecagefs/actor.py b/repos/system_upgrade/cloudlinux/actors/updatecagefs/actor.py
new file mode 100644
index 0000000..c6590d2
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/actors/updatecagefs/actor.py
@@ -0,0 +1,37 @@
+import os
+
+from leapp.actors import Actor
+from leapp.libraries.stdlib import run, CalledProcessError
+from leapp.reporting import Report, create_report
+from leapp.tags import FirstBootPhaseTag, IPUWorkflowTag
+from leapp.libraries.common.cllaunch import run_on_cloudlinux
+
+
+class UpdateCagefs(Actor):
+ """
+ Force update of cagefs.
+
+    cagefs should reflect the massive changes made to the system in previous phases.
+ """
+
+ name = 'update_cagefs'
+ consumes = ()
+ produces = (Report,)
+ tags = (FirstBootPhaseTag, IPUWorkflowTag)
+
+ @run_on_cloudlinux
+ def process(self):
+ if os.path.exists('/usr/sbin/cagefsctl'):
+ try:
+ run(['/usr/sbin/cagefsctl', '--force-update'], checked=True)
+ self.log.info('cagefs update was successful')
+ except CalledProcessError as e:
+ # cagefsctl prints errors in stdout
+ self.log.error(e.stdout)
+                self.log.error(
+                    'Command "cagefsctl --force-update" finished with exit code {}, '
+                    'the filesystem inside cagefs may be out-of-date.\n'
+                    'Check cagefsctl output above and in '
+                    '/var/log/cagefs-update.log, rerun "cagefsctl --force-update" '
+                    'after fixing the issues.'.format(e.exit_code)
+                )
diff --git a/repos/system_upgrade/cloudlinux/libraries/backup.py b/repos/system_upgrade/cloudlinux/libraries/backup.py
new file mode 100644
index 0000000..249c99e
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/libraries/backup.py
@@ -0,0 +1,49 @@
+import os
+import shutil
+from leapp.libraries.stdlib import api
+
+CLSQL_BACKUP_FILES = [
+ "/etc/container/dbuser-map",
+ "/etc/container/ve.cfg",
+ "/etc/container/mysql-governor.xml",
+ "/etc/container/governor_package_limit.json"
+]
+
+BACKUP_DIR = "/var/lib/leapp/cl_backup"
+
+
+def backup_file(source, destination, dir=None):
+ # type: (str, str, str) -> None
+ """
+ Backup file to a backup directory.
+
+ :param source: Path of the file to backup.
+ :param destination: Destination name of a file in the backup directory.
+ :param dir: Backup directory override, defaults to None
+ """
+ if not dir:
+ dir = BACKUP_DIR
+ if not os.path.isdir(dir):
+ os.makedirs(dir)
+
+ dest_path = os.path.join(dir, destination)
+
+ api.current_logger().debug('Backing up file: {} to {}'.format(source, dest_path))
+ shutil.copy(source, dest_path)
+
+
+def restore_file(source, destination, dir=None):
+ # type: (str, str, str) -> None
+ """
+ Restore file from a backup directory.
+
+ :param source: Name of a file in the backup directory.
+ :param destination: Destination path to restore the file to.
+ :param dir: Backup directory override, defaults to None
+ """
+ if not dir:
+ dir = BACKUP_DIR
+ src_path = os.path.join(dir, source)
+
+ api.current_logger().debug('Restoring file: {} to {}'.format(src_path, destination))
+ shutil.copy(src_path, destination)
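The backup helper above amounts to "ensure the backup directory exists, then copy under a new name". A self-contained sketch of that pattern, exercised against a temporary directory instead of `/var/lib/leapp/cl_backup` (the file and backup names are made up):

```python
import os
import shutil
import tempfile

BACKUP_NAME = 'dbuser-map.bak'  # hypothetical backup file name


def backup_file(source, destination, backup_dir):
    # Create the backup directory on demand, then copy the file into it
    # under the requested destination name.
    if not os.path.isdir(backup_dir):
        os.makedirs(backup_dir)
    shutil.copy(source, os.path.join(backup_dir, destination))


workdir = tempfile.mkdtemp()
src = os.path.join(workdir, 'dbuser-map')
with open(src, 'w') as f:
    f.write('user1 db1\n')
backup_dir = os.path.join(workdir, 'backup')
backup_file(src, BACKUP_NAME, backup_dir)
print(os.path.isfile(os.path.join(backup_dir, BACKUP_NAME)))  # True
```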
diff --git a/repos/system_upgrade/cloudlinux/libraries/cllaunch.py b/repos/system_upgrade/cloudlinux/libraries/cllaunch.py
new file mode 100644
index 0000000..6cbab5d
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/libraries/cllaunch.py
@@ -0,0 +1,11 @@
+import functools
+from leapp.libraries.common.config import version
+
+
+def run_on_cloudlinux(func):
+ @functools.wraps(func)
+ def wrapper(*args, **kwargs):
+ if (version.current_version()[0] != "cloudlinux"):
+ return
+ return func(*args, **kwargs)
+ return wrapper
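`run_on_cloudlinux` silently turns the wrapped method into a no-op on non-CloudLinux systems. A generalized sketch of the same gating decorator, with the distro check stubbed out as a plain predicate (the predicate names are invented for illustration):

```python
import functools


def run_only_when(predicate):
    # Generalized form of run_on_cloudlinux: skip the wrapped function
    # entirely unless the predicate holds at call time.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if not predicate():
                return None
            return func(*args, **kwargs)
        return wrapper
    return decorator


# Stand-in for version.current_version()[0] == "cloudlinux"
on_cloudlinux = lambda: False


@run_only_when(on_cloudlinux)
def process():
    return "ran"


@run_only_when(lambda: True)
def always():
    return "ran"


print(process())  # None: the predicate is False in this sketch
print(always())   # ran
```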
diff --git a/repos/system_upgrade/cloudlinux/libraries/clmysql.py b/repos/system_upgrade/cloudlinux/libraries/clmysql.py
new file mode 100644
index 0000000..f04f6c5
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/libraries/clmysql.py
@@ -0,0 +1,43 @@
+import os
+
+# This file contains the data on the currently active MySQL installation type and version.
+CL7_MYSQL_TYPE_FILE = "/usr/share/lve/dbgovernor/mysql.type.installed"
+
+# This dict matches the MySQL type strings with DNF module and stream IDs.
+MODULE_STREAMS = {
+ "mysql55": "mysql:cl-MySQL55",
+ "mysql56": "mysql:cl-MySQL56",
+ "mysql57": "mysql:cl-MySQL57",
+ "mysql80": "mysql:cl-MySQL80",
+ "mariadb55": "mariadb:cl-MariaDB55",
+ "mariadb100": "mariadb:cl-MariaDB100",
+ "mariadb101": "mariadb:cl-MariaDB101",
+ "mariadb102": "mariadb:cl-MariaDB102",
+ "mariadb103": "mariadb:cl-MariaDB103",
+ "mariadb104": "mariadb:cl-MariaDB104",
+ "mariadb105": "mariadb:cl-MariaDB105",
+ "mariadb106": "mariadb:cl-MariaDB106",
+ "percona56": "percona:cl-Percona56",
+ "auto": "mysql:8.0"
+}
+
+
+def get_pkg_prefix(clmysql_type):
+ """
+ Get a Yum package prefix string from cl-mysql type.
+ """
+ if "mysql" in clmysql_type:
+ return "cl-MySQL"
+ elif "mariadb" in clmysql_type:
+ return "cl-MariaDB"
+ elif "percona" in clmysql_type:
+ return "cl-Percona"
+ else:
+ return None
+
+
+def get_clmysql_type():
+ if os.path.isfile(CL7_MYSQL_TYPE_FILE):
+ with open(CL7_MYSQL_TYPE_FILE, "r") as mysql_f:
+ return mysql_f.read()
+ return None
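`get_pkg_prefix()` is a pure string-to-prefix mapping, so it can be exercised standalone; note that the special `auto` type matches none of the markers and therefore yields `None`:

```python
def get_pkg_prefix(clmysql_type):
    # Same branching as in clmysql.py: the cl-mysql type string
    # decides the Yum package prefix.
    if "mysql" in clmysql_type:
        return "cl-MySQL"
    elif "mariadb" in clmysql_type:
        return "cl-MariaDB"
    elif "percona" in clmysql_type:
        return "cl-Percona"
    return None


print(get_pkg_prefix("mariadb103"))  # cl-MariaDB
print(get_pkg_prefix("percona56"))   # cl-Percona
print(get_pkg_prefix("auto"))        # None
```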
diff --git a/repos/system_upgrade/cloudlinux/libraries/detectcontrolpanel.py b/repos/system_upgrade/cloudlinux/libraries/detectcontrolpanel.py
new file mode 100644
index 0000000..7c92f10
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/libraries/detectcontrolpanel.py
@@ -0,0 +1,69 @@
+import os
+import os.path
+
+from leapp.libraries.stdlib import api
+
+
+NOPANEL_NAME = 'No panel'
+CPANEL_NAME = 'cPanel'
+DIRECTADMIN_NAME = 'DirectAdmin'
+PLESK_NAME = 'Plesk'
+ISPMANAGER_NAME = 'ISPManager'
+INTERWORX_NAME = 'InterWorx'
+UNKNOWN_NAME = 'Unknown (legacy)'
+INTEGRATED_NAME = 'Integrated'
+
+CLSYSCONFIG = '/etc/sysconfig/cloudlinux'
+
+
+def lvectl_custompanel_script():
+ """
+    Retrieves the custom panel script for lvectl from the CL config file.
+ :return: Script path or None if script filename wasn't found in config
+ """
+ config_param_name = 'CUSTOM_GETPACKAGE_SCRIPT'
+ try:
+ # Try to determine the custom script name
+ if os.path.exists(CLSYSCONFIG):
+ with open(CLSYSCONFIG, 'r') as f:
+ file_lines = f.readlines()
+ for line in file_lines:
+ line = line.strip()
+ if line.startswith(config_param_name):
+ line_parts = line.split('=')
+ if len(line_parts) == 2 and line_parts[0].strip() == config_param_name:
+ script_name = line_parts[1].strip()
+ if os.path.exists(script_name):
+ return script_name
+ except (OSError, IOError, IndexError):
+ # Ignore errors - what's important is that the script wasn't found
+ pass
+ return None
+
+
+def detect_panel():
+ """
+    Try to detect a control panel supported by CloudLinux.
+    :return: Detected control panel name, or NOPANEL_NAME if none was found
+ """
+ panel_name = NOPANEL_NAME
+ if os.path.isfile('/opt/cpvendor/etc/integration.ini'):
+ panel_name = INTEGRATED_NAME
+ elif os.path.isfile('/usr/local/cpanel/cpanel'):
+ panel_name = CPANEL_NAME
+ elif os.path.isfile('/usr/local/directadmin/directadmin') or\
+ os.path.isfile('/usr/local/directadmin/custombuild/build'):
+ panel_name = DIRECTADMIN_NAME
+ elif os.path.isfile('/usr/local/psa/version'):
+ panel_name = PLESK_NAME
+ # ispmanager must have:
+ # v5: /usr/local/mgr5/ directory,
+ # v4: /usr/local/ispmgr/bin/ispmgr file
+ elif os.path.isfile('/usr/local/ispmgr/bin/ispmgr') or os.path.isdir('/usr/local/mgr5'):
+ panel_name = ISPMANAGER_NAME
+ elif os.path.isdir('/usr/local/interworx'):
+ panel_name = INTERWORX_NAME
+ # Check if the CL config has a legacy custom script for a control panel
+ elif lvectl_custompanel_script():
+ panel_name = UNKNOWN_NAME
+ return panel_name
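The `CUSTOM_GETPACKAGE_SCRIPT` parsing in `lvectl_custompanel_script()` can be factored out and tested on in-memory lines. This sketch drops the `os.path.exists()` check on the result, and the sample script path is hypothetical:

```python
def extract_script_path(file_lines, param='CUSTOM_GETPACKAGE_SCRIPT'):
    # Same line-splitting logic as lvectl_custompanel_script(): a matching
    # line must split into exactly two parts around '='.
    for line in file_lines:
        line = line.strip()
        if line.startswith(param):
            parts = line.split('=')
            if len(parts) == 2 and parts[0].strip() == param:
                return parts[1].strip()
    return None


lines = ['# comment', 'CUSTOM_GETPACKAGE_SCRIPT=/opt/panel/getpackage.sh']
print(extract_script_path(lines))  # /opt/panel/getpackage.sh
print(extract_script_path(['OTHER=1']))  # None
```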
diff --git a/repos/system_upgrade/cloudlinux/models/installedcontrolpanel.py b/repos/system_upgrade/cloudlinux/models/installedcontrolpanel.py
new file mode 100644
index 0000000..ace1e15
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/models/installedcontrolpanel.py
@@ -0,0 +1,12 @@
+from leapp.models import Model, fields
+from leapp.topics import SystemInfoTopic
+
+
+class InstalledControlPanel(Model):
+ """
+ Name of the web control panel present on the system.
+ 'Unknown' if detection failed.
+ """
+
+ topic = SystemInfoTopic
+ name = fields.String()
diff --git a/repos/system_upgrade/cloudlinux/models/installedmysqltype.py b/repos/system_upgrade/cloudlinux/models/installedmysqltype.py
new file mode 100644
index 0000000..5cc475d
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/models/installedmysqltype.py
@@ -0,0 +1,12 @@
+from leapp.models import Model, fields
+from leapp.topics import SystemInfoTopic
+
+
+class InstalledMySqlTypes(Model):
+ """
+ Contains data about the MySQL/MariaDB/Percona installation on the source system.
+ """
+
+ topic = SystemInfoTopic
+ types = fields.List(fields.String())
+ version = fields.Nullable(fields.String(default=None)) # used for cl-mysql
diff --git a/repos/system_upgrade/cloudlinux/tools/remove-problem-packages b/repos/system_upgrade/cloudlinux/tools/remove-problem-packages
new file mode 100755
index 0000000..38eba60
--- /dev/null
+++ b/repos/system_upgrade/cloudlinux/tools/remove-problem-packages
@@ -0,0 +1,4 @@
+#!/usr/bin/bash -e
+
+# can't be removed in the main transaction due to errors in %preun
+yum -y --setopt=tsflags=noscripts remove gettext-devel
diff --git a/repos/system_upgrade/common/actors/addupgradebootentry/libraries/addupgradebootentry.py b/repos/system_upgrade/common/actors/addupgradebootentry/libraries/addupgradebootentry.py
index a2cede0..5ff1c76 100644
--- a/repos/system_upgrade/common/actors/addupgradebootentry/libraries/addupgradebootentry.py
+++ b/repos/system_upgrade/common/actors/addupgradebootentry/libraries/addupgradebootentry.py
@@ -17,7 +17,7 @@ def add_boot_entry(configs=None):
'/usr/sbin/grubby',
'--add-kernel', '{0}'.format(kernel_dst_path),
'--initrd', '{0}'.format(initram_dst_path),
- '--title', 'RHEL-Upgrade-Initramfs',
+ '--title', 'ELevate-Upgrade-Initramfs',
'--copy-default',
'--make-default',
'--args', '{DEBUG} enforcing=0 rd.plymouth=0 plymouth.enable=0'.format(DEBUG=debug)
diff --git a/repos/system_upgrade/common/actors/addupgradebootentry/tests/unit_test_addupgradebootentry.py b/repos/system_upgrade/common/actors/addupgradebootentry/tests/unit_test_addupgradebootentry.py
index bb89c9f..2b8e7c8 100644
--- a/repos/system_upgrade/common/actors/addupgradebootentry/tests/unit_test_addupgradebootentry.py
+++ b/repos/system_upgrade/common/actors/addupgradebootentry/tests/unit_test_addupgradebootentry.py
@@ -42,7 +42,7 @@ run_args_add = [
'/usr/sbin/grubby',
'--add-kernel', '/abc',
'--initrd', '/def',
- '--title', 'RHEL-Upgrade-Initramfs',
+ '--title', 'ELevate-Upgrade-Initramfs',
'--copy-default',
'--make-default',
'--args',
diff --git a/repos/system_upgrade/common/actors/checkenabledvendorrepos/actor.py b/repos/system_upgrade/common/actors/checkenabledvendorrepos/actor.py
new file mode 100644
index 0000000..52f5af9
--- /dev/null
+++ b/repos/system_upgrade/common/actors/checkenabledvendorrepos/actor.py
@@ -0,0 +1,53 @@
+from leapp.actors import Actor
+from leapp.libraries.stdlib import api
+from leapp.models import (
+ RepositoriesFacts,
+ VendorSourceRepos,
+ ActiveVendorList,
+)
+from leapp.tags import FactsPhaseTag, IPUWorkflowTag
+
+
+class CheckEnabledVendorRepos(Actor):
+ """
+ Create a list of vendors whose repositories are present on the system and enabled.
+ Only those vendors' configurations (new repositories, PES actions, etc.)
+ will be included in the upgrade process.
+ """
+
+ name = "check_enabled_vendor_repos"
+ consumes = (RepositoriesFacts, VendorSourceRepos)
+    produces = (ActiveVendorList,)
+ tags = (IPUWorkflowTag, FactsPhaseTag.Before)
+
+ def process(self):
+ vendor_mapping_data = {}
+ active_vendors = set()
+
+ # Make a dict for easy mapping of repoid -> corresponding vendor name.
+ for vendor_src_repodata in api.consume(VendorSourceRepos):
+ for vendor_src_repo in vendor_src_repodata.source_repoids:
+ vendor_mapping_data[vendor_src_repo] = vendor_src_repodata.vendor
+
+        # Check whether any repo listed in a vendor map as from_repoid is present on the system.
+ for repos_facts in api.consume(RepositoriesFacts):
+ for repo_file in repos_facts.repositories:
+ for repo_data in repo_file.data:
+ self.log.debug(
+ "Looking for repository {} in vendor maps".format(repo_data.repoid)
+ )
+ if repo_data.enabled and repo_data.repoid in vendor_mapping_data:
+ # If the vendor's repository is present in the system and enabled, count the vendor as active.
+ new_vendor = vendor_mapping_data[repo_data.repoid]
+ self.log.debug(
+ "Repository {} found and enabled, enabling vendor {}".format(
+ repo_data.repoid, new_vendor
+ )
+ )
+ active_vendors.add(new_vendor)
+
+ if active_vendors:
+ self.log.debug("Active vendor list: {}".format(active_vendors))
+ api.produce(ActiveVendorList(data=list(active_vendors)))
+ else:
+ self.log.info("No active vendors found, vendor list not generated")
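The actor's logic reduces to building a repoid -> vendor lookup and collecting vendors that have at least one enabled repo on the system. A minimal model on plain data structures (the repoids below are made up):

```python
def active_vendors(vendor_source_repos, system_repos):
    # vendor_source_repos: {vendor_name: [repoid, ...]}
    # system_repos: iterable of (repoid, enabled) pairs
    mapping = {}
    for vendor, repoids in vendor_source_repos.items():
        for repoid in repoids:
            mapping[repoid] = vendor
    active = set()
    for repoid, enabled in system_repos:
        if enabled and repoid in mapping:
            active.add(mapping[repoid])
    return active


vendors = {'imunify': ['imunify360-x86_64'], 'mariadb': ['mariadb-main']}
repos = [('mariadb-main', True), ('imunify360-x86_64', False), ('base', True)]
print(active_vendors(vendors, repos))  # {'mariadb'}
```

Disabled vendor repos and repos unknown to any vendor map are both ignored, matching the actor's behavior.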
diff --git a/repos/system_upgrade/common/actors/checketcreleasever/libraries/checketcreleasever.py b/repos/system_upgrade/common/actors/checketcreleasever/libraries/checketcreleasever.py
index ed5089e..8cf4b75 100644
--- a/repos/system_upgrade/common/actors/checketcreleasever/libraries/checketcreleasever.py
+++ b/repos/system_upgrade/common/actors/checketcreleasever/libraries/checketcreleasever.py
@@ -1,22 +1,21 @@
from leapp import reporting
from leapp.models import PkgManagerInfo, RHUIInfo
from leapp.libraries.stdlib import api
+from leapp.libraries.common.config.version import get_target_major_version
def handle_etc_releasever():
- target_version = api.current_actor().configuration.version.target
+ target_version = get_target_major_version()
reporting.create_report([
reporting.Title(
- 'Release version in /etc/dnf/vars/releasever will be set to the current target release'
+ 'Release version in /etc/dnf/vars/releasever will be set to the major target release'
),
reporting.Summary(
'On this system, Leapp detected "releasever" variable is either configured through DNF/YUM configuration '
'file and/or the system is using RHUI infrastructure. In order to avoid issues with repofile URLs '
'(when --release option is not provided) in cases where there is the previous major.minor version value '
- 'in the configuration, release version will be set to the target release version ({}). This will also '
- 'ensure the system stays on the target version after the upgrade. In order to enable latest minor version '
- 'updates, you can remove "/etc/dnf/vars/releasever" file.'.format(
+ 'in the configuration, release version will be set to the major target release version ({}).'.format(
target_version
)
),
diff --git a/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/85sys-upgrade-redhat/do-upgrade.sh b/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/85sys-upgrade-redhat/do-upgrade.sh
index acdb93b..da1e814 100755
--- a/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/85sys-upgrade-redhat/do-upgrade.sh
+++ b/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/85sys-upgrade-redhat/do-upgrade.sh
@@ -8,7 +8,7 @@ fi
type getarg >/dev/null 2>&1 || . /lib/dracut-lib.sh
get_rhel_major_release() {
- local os_version=$(cat /etc/initrd-release | grep -o '^VERSION="[0-9][0-9]*\.' | grep -o '[0-9]*')
+ local os_version=$(cat /etc/initrd-release | grep -o '^VERSION="[0-9][0-9]*' | grep -o '[0-9]*')
[ -z "$os_version" ] && {
# This should not happen as /etc/initrd-release is supposed to have API
# stability, but check is better than broken system.
@@ -326,4 +326,3 @@ getarg 'rd.break=leapp-logs' && emergency_shell -n upgrade "Break after LEAPP sa
sync
mount -o "remount,$old_opts" $NEWROOT
exit $result
-
diff --git a/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/90sys-upgrade/initrd-system-upgrade-generator b/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/90sys-upgrade/initrd-system-upgrade-generator
index 14bd6e3..f6adacf 100755
--- a/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/90sys-upgrade/initrd-system-upgrade-generator
+++ b/repos/system_upgrade/common/actors/commonleappdracutmodules/files/dracut/90sys-upgrade/initrd-system-upgrade-generator
@@ -1,7 +1,7 @@
#!/bin/sh
get_rhel_major_release() {
- local os_version=$(cat /etc/initrd-release | grep -o '^VERSION="[0-9][0-9]*\.' | grep -o '[0-9]*')
+ local os_version=$(cat /etc/initrd-release | grep -o '^VERSION="[0-9][0-9]*' | grep -o '[0-9]*')
[ -z "$os_version" ] && {
# This should not happen as /etc/initrd-release is supposed to have API
# stability, but check is better than broken system.
diff --git a/repos/system_upgrade/common/actors/detectwebservers/actor.py b/repos/system_upgrade/common/actors/detectwebservers/actor.py
new file mode 100644
index 0000000..ac79714
--- /dev/null
+++ b/repos/system_upgrade/common/actors/detectwebservers/actor.py
@@ -0,0 +1,53 @@
+from leapp.actors import Actor
+from leapp import reporting
+from leapp.reporting import Report
+from leapp.tags import ChecksPhaseTag, IPUWorkflowTag
+
+from leapp.libraries.actor.detectwebservers import (
+ detect_litespeed,
+ detect_nginx
+)
+
+
+class DetectWebServers(Actor):
+ """
+    Check for the presence of a web server, and produce a warning if one is installed.
+ """
+
+ name = 'detect_web_servers'
+ consumes = ()
+    produces = (Report,)
+ tags = (ChecksPhaseTag, IPUWorkflowTag)
+
+ def process(self):
+ litespeed_installed = detect_litespeed()
+ nginx_installed = detect_nginx()
+
+ if litespeed_installed or nginx_installed:
+ server_name = "NGINX" if nginx_installed else "LiteSpeed"
+ reporting.create_report(
+ [
+ reporting.Title(
+ "An installed web server might not be upgraded properly."
+ ),
+ reporting.Summary(
+ "A web server is present on the system."
+                        " Depending on how it was installed,"
+                        " it may not upgrade to the new version correctly,"
+ " since not all installation configurations are currently supported by Leapp."
+                        " Failing to upgrade the web server may result in it malfunctioning"
+ " after the upgrade process finishes."
+ " Please review the list of packages that won't be upgraded in the report."
+ " If the web server packages are present in the list of packages that won't be upgraded,"
+ " expect the server to be non-functional on the post-upgrade system."
+ " You may still continue with the upgrade, but you'll need to"
+ " upgrade the web server manually after the process finishes."
+                        " Detected web server: {}.".format(server_name)
+ ),
+ reporting.Severity(reporting.Severity.HIGH),
+ reporting.Tags([
+ reporting.Tags.OS_FACTS,
+ reporting.Tags.SERVICES
+ ]),
+ ]
+ )
diff --git a/repos/system_upgrade/common/actors/detectwebservers/libraries/detectwebservers.py b/repos/system_upgrade/common/actors/detectwebservers/libraries/detectwebservers.py
new file mode 100644
index 0000000..e0058e6
--- /dev/null
+++ b/repos/system_upgrade/common/actors/detectwebservers/libraries/detectwebservers.py
@@ -0,0 +1,42 @@
+import os
+
+LITESPEED_CONFIG_FILE = '/usr/local/lsws/conf/httpd_config.xml'
+LITESPEED_OPEN_CONFIG_FILE = '/usr/local/lsws/conf/httpd_config.conf'
+NGINX_BINARY = '/usr/sbin/nginx'
+
+
+def detect_webservers():
+ """
+ Wrapper function for detection.
+ """
+ return (detect_litespeed() or detect_nginx())
+
+
+# Detect LiteSpeed
+def detect_litespeed():
+ """
+    LiteSpeed comes in Enterprise and OpenLiteSpeed variants, each of which
+    stores its configuration in a different format.
+ """
+ return detect_enterprise_litespeed() or detect_open_litespeed()
+
+
+def detect_enterprise_litespeed():
+ """
+ Detects LSWS Enterprise presence
+ """
+ return os.path.isfile(LITESPEED_CONFIG_FILE)
+
+
+def detect_open_litespeed():
+ """
+ Detects OpenLiteSpeed presence
+ """
+ return os.path.isfile(LITESPEED_OPEN_CONFIG_FILE)
+
+
+def detect_nginx():
+ """
+ Detects NGINX presence
+ """
+ return os.path.isfile(NGINX_BINARY)
diff --git a/repos/system_upgrade/common/actors/efibootorderfix/finalization/actor.py b/repos/system_upgrade/common/actors/efibootorderfix/finalization/actor.py
index f42909f..2728cb4 100644
--- a/repos/system_upgrade/common/actors/efibootorderfix/finalization/actor.py
+++ b/repos/system_upgrade/common/actors/efibootorderfix/finalization/actor.py
@@ -1,17 +1,102 @@
+import os
+import re
+
+from leapp.libraries.stdlib import run, api
from leapp.actors import Actor
-from leapp.libraries.common import efi_reboot_fix
+from leapp.models import InstalledTargetKernelVersion, KernelCmdlineArg, FirmwareFacts, MountEntry
from leapp.tags import FinalizationPhaseTag, IPUWorkflowTag
+from leapp.exceptions import StopActorExecutionError
class EfiFinalizationFix(Actor):
"""
- Adjust EFI boot entry for final reboot
+ Ensure that EFI boot order is updated, which is particularly necessary
+ when upgrading to a different OS distro. Also rebuilds grub config
+ if necessary.
"""
name = 'efi_finalization_fix'
- consumes = ()
+ consumes = (KernelCmdlineArg, InstalledTargetKernelVersion, FirmwareFacts, MountEntry)
produces = ()
tags = (FinalizationPhaseTag, IPUWorkflowTag)
def process(self):
- efi_reboot_fix.maybe_emit_updated_boot_entry()
+ is_system_efi = False
+ ff = next(self.consume(FirmwareFacts), None)
+
+ dirname = {
+ 'AlmaLinux': 'almalinux',
+ 'CentOS Linux': 'centos',
+ 'CentOS Stream': 'centos',
+ 'Oracle Linux Server': 'redhat',
+ 'Red Hat Enterprise Linux': 'redhat',
+ 'Rocky Linux': 'rocky',
+ 'Scientific Linux': 'redhat',
+ 'CloudLinux': 'centos',
+ }
+
+ efi_shimname_dict = {
+ 'x86_64': 'shimx64.efi',
+ 'aarch64': 'shimaa64.efi'
+ }
+
+ def devparts(dev):
+ part = next(re.finditer(r'\d+$', dev)).group(0)
+ dev = dev[:-len(part)]
+        return [dev, part]
+
+ with open('/etc/system-release', 'r') as sr:
+ release_line = next(line for line in sr if 'release' in line)
+ distro = release_line.split(' release ', 1)[0]
+
+ efi_bootentry_label = distro
+ distro_dir = dirname.get(distro, 'default')
+ shim_filename = efi_shimname_dict.get(api.current_actor().configuration.architecture, 'shimx64.efi')
+
+ shim_path = '/boot/efi/EFI/' + distro_dir + '/' + shim_filename
+ grub_cfg_path = '/boot/efi/EFI/' + distro_dir + '/grub.cfg'
+ bootmgr_path = '\\EFI\\' + distro_dir + '\\' + shim_filename
+
+ has_efibootmgr = os.path.exists('/sbin/efibootmgr')
+ has_shim = os.path.exists(shim_path)
+ has_grub_cfg = os.path.exists(grub_cfg_path)
+
+ if not ff:
+ raise StopActorExecutionError(
+ 'Could not identify system firmware',
+ details={'details': 'Actor did not receive FirmwareFacts message.'}
+ )
+
+ if not has_efibootmgr:
+ return
+
+ for fact in self.consume(FirmwareFacts):
+ if fact.firmware == 'efi':
+ is_system_efi = True
+ break
+
+ if is_system_efi and has_shim:
+ efidevlist = []
+ with open('/proc/mounts', 'r') as fp:
+ for line in fp:
+ if '/boot/efi' in line:
+ efidevpath = line.split(' ', 1)[0]
+ efidevpart = efidevpath.split('/')[-1]
+ if os.path.exists('/proc/mdstat'):
+ with open('/proc/mdstat', 'r') as mds:
+ for line in mds:
+ if line.startswith(efidevpart):
+ mddev = line.split(' ')
+ for md in mddev:
+ if '[' in md:
+ efimd = md.split('[', 1)[0]
+ efidp = efidevpath.replace(efidevpart, efimd)
+ efidevlist.append(efidp)
+ if len(efidevlist) == 0:
+ efidevlist.append(efidevpath)
+ for devpath in efidevlist:
+ efidev, efipart = devparts(devpath)
+ run(['/sbin/efibootmgr', '-c', '-d', efidev, '-p', efipart, '-l', bootmgr_path, '-L', efi_bootentry_label])
+
+ if not has_grub_cfg:
+ run(['/sbin/grub2-mkconfig', '-o', grub_cfg_path])
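`devparts()` above peels the trailing digit run off a partition device path to feed `efibootmgr -d`/`-p`. Extracted as-is, with the caveat that NVMe-style names keep their `p` separator in the disk part:

```python
import re


def devparts(dev):
    # The trailing run of digits is the partition number; everything
    # before it is the disk device.
    part = next(re.finditer(r'\d+$', dev)).group(0)
    return [dev[:-len(part)], part]


print(devparts('/dev/sda1'))       # ['/dev/sda', '1']
print(devparts('/dev/nvme0n1p1'))  # ['/dev/nvme0n1p', '1']
```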
diff --git a/repos/system_upgrade/common/actors/filterrpmtransactionevents/actor.py b/repos/system_upgrade/common/actors/filterrpmtransactionevents/actor.py
index e0d89d9..8fc5954 100644
--- a/repos/system_upgrade/common/actors/filterrpmtransactionevents/actor.py
+++ b/repos/system_upgrade/common/actors/filterrpmtransactionevents/actor.py
@@ -3,7 +3,8 @@ from leapp.models import (
FilteredRpmTransactionTasks,
InstalledRedHatSignedRPM,
PESRpmTransactionTasks,
- RpmTransactionTasks
+ RpmTransactionTasks,
+ PreRemovedRpmPackages
)
from leapp.tags import ChecksPhaseTag, IPUWorkflowTag
@@ -18,34 +19,53 @@ class FilterRpmTransactionTasks(Actor):
"""
name = 'check_rpm_transaction_events'
- consumes = (PESRpmTransactionTasks, RpmTransactionTasks, InstalledRedHatSignedRPM,)
+ consumes = (
+ PESRpmTransactionTasks,
+ RpmTransactionTasks,
+ InstalledRedHatSignedRPM,
+ PreRemovedRpmPackages,
+ )
produces = (FilteredRpmTransactionTasks,)
tags = (IPUWorkflowTag, ChecksPhaseTag)
def process(self):
installed_pkgs = set()
+ preremoved_pkgs = set()
+ preremoved_pkgs_to_install = set()
+
for rpm_pkgs in self.consume(InstalledRedHatSignedRPM):
installed_pkgs.update([pkg.name for pkg in rpm_pkgs.items])
+ for rpm_pkgs in self.consume(PreRemovedRpmPackages):
+ preremoved_pkgs.update([pkg.name for pkg in rpm_pkgs.items])
+ preremoved_pkgs_to_install.update([pkg.name for pkg in rpm_pkgs.items if rpm_pkgs.install])
+
+ installed_pkgs.difference_update(preremoved_pkgs)
local_rpms = set()
to_install = set()
to_remove = set()
to_keep = set()
to_upgrade = set()
+ to_reinstall = set()
modules_to_enable = {}
modules_to_reset = {}
+
+ to_install.update(preremoved_pkgs_to_install)
for event in self.consume(RpmTransactionTasks, PESRpmTransactionTasks):
local_rpms.update(event.local_rpms)
to_install.update(event.to_install)
to_remove.update(installed_pkgs.intersection(event.to_remove))
to_keep.update(installed_pkgs.intersection(event.to_keep))
+ to_reinstall.update(installed_pkgs.intersection(event.to_reinstall))
modules_to_enable.update({'{}:{}'.format(m.name, m.stream): m for m in event.modules_to_enable})
modules_to_reset.update({'{}:{}'.format(m.name, m.stream): m for m in event.modules_to_reset})
to_remove.difference_update(to_keep)
# run upgrade for the rest of RH signed pkgs which we do not have rule for
- to_upgrade = installed_pkgs - (to_install | to_remove)
+ to_upgrade = installed_pkgs - (to_install | to_remove | to_reinstall)
+
+ self.log.debug('DNF modules to enable: {}'.format(modules_to_enable.keys()))
self.produce(FilteredRpmTransactionTasks(
local_rpms=list(local_rpms),
@@ -53,5 +73,6 @@ class FilterRpmTransactionTasks(Actor):
to_remove=list(to_remove),
to_keep=list(to_keep),
to_upgrade=list(to_upgrade),
+ to_reinstall=list(to_reinstall),
modules_to_reset=list(modules_to_reset.values()),
modules_to_enable=list(modules_to_enable.values())))
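The final `to_upgrade` computation above is plain set arithmetic; a minimal model with made-up package names:

```python
installed = {'bash', 'vim', 'gettext-devel', 'httpd'}
preremoved = {'gettext-devel'}
to_install = {'new-pkg'}
to_remove = {'httpd'}
to_reinstall = {'vim'}

# Pre-removed packages leave the installed set first; whatever has no
# explicit install/remove/reinstall rule falls through to the upgrade set.
installed -= preremoved
to_upgrade = installed - (to_install | to_remove | to_reinstall)
print(sorted(to_upgrade))  # ['bash']
```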
diff --git a/repos/system_upgrade/common/actors/initramfs/upgradeinitramfsgenerator/files/generate-initram.sh b/repos/system_upgrade/common/actors/initramfs/upgradeinitramfsgenerator/files/generate-initram.sh
index 3e904f6..9b717a8 100755
--- a/repos/system_upgrade/common/actors/initramfs/upgradeinitramfsgenerator/files/generate-initram.sh
+++ b/repos/system_upgrade/common/actors/initramfs/upgradeinitramfsgenerator/files/generate-initram.sh
@@ -7,7 +7,7 @@ stage() {
}
get_kernel_version() {
- rpm -qa | grep kernel-modules | cut -d- -f3- | sort | tail -n 1
+ rpm -qa kernel --qf '%{VERSION}-%{RELEASE}.%{ARCH}\n' | sort --version-sort | tail --lines=1
}
dracut_install_modules()
diff --git a/repos/system_upgrade/common/actors/ipuworkflowconfig/libraries/ipuworkflowconfig.py b/repos/system_upgrade/common/actors/ipuworkflowconfig/libraries/ipuworkflowconfig.py
index edf978f..7fea4ec 100644
--- a/repos/system_upgrade/common/actors/ipuworkflowconfig/libraries/ipuworkflowconfig.py
+++ b/repos/system_upgrade/common/actors/ipuworkflowconfig/libraries/ipuworkflowconfig.py
@@ -47,6 +47,7 @@ def get_os_release(path):
:return: `OSRelease` model if the file can be parsed
:raises: `IOError`
"""
+ os_version = '.'.join(platform.dist()[1].split('.')[:2])
try:
with open(path) as f:
data = dict(l.strip().split('=', 1) for l in f.readlines() if '=' in l)
@@ -55,7 +56,7 @@ def get_os_release(path):
name=data.get('NAME', '').strip('"'),
pretty_name=data.get('PRETTY_NAME', '').strip('"'),
version=data.get('VERSION', '').strip('"'),
- version_id=data.get('VERSION_ID', '').strip('"'),
+ version_id=os_version,
variant=data.get('VARIANT', '').strip('"') or None,
variant_id=data.get('VARIANT_ID', '').strip('"') or None
)
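The `version_id` override keeps only the major.minor part of the distro version string; the truncation itself is a one-liner:

```python
def major_minor(version_string):
    # '.'.join(... [:2]) keeps at most the first two dotted components,
    # e.g. '7.9.1' -> '7.9'; shorter strings pass through unchanged.
    return '.'.join(version_string.split('.')[:2])


print(major_minor('7.9.1'))  # 7.9
print(major_minor('8.4'))    # 8.4
print(major_minor('8'))      # 8
```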
diff --git a/repos/system_upgrade/common/actors/kernel/checkinstalledkernels/libraries/checkinstalledkernels.py b/repos/system_upgrade/common/actors/kernel/checkinstalledkernels/libraries/checkinstalledkernels.py
index 134d1aa..c4d9931 100644
--- a/repos/system_upgrade/common/actors/kernel/checkinstalledkernels/libraries/checkinstalledkernels.py
+++ b/repos/system_upgrade/common/actors/kernel/checkinstalledkernels/libraries/checkinstalledkernels.py
@@ -125,7 +125,12 @@ def process():
api.current_logger().debug('Current kernel EVR: {}'.format(current_evr))
api.current_logger().debug('Newest kernel EVR: {}'.format(newest_evr))
- if current_evr != newest_evr:
+ # LVE kernels can be installed over newer kernels and be older
+    # than the most current available ones - that's not an inhibitor, it's expected
+ # They're marked with 'lve' in the release string
+ lve_kernel = "lve" in current_evr[2]
+
+ if current_evr != newest_evr and not lve_kernel:
title = 'Newest installed kernel not in use'
summary = ('To ensure a stable upgrade, the machine needs to be'
' booted into the latest installed kernel.')
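The LVE exemption added above boils down to a single comparison of EVR (epoch, version, release) tuples plus a substring test on the release field. A minimal standalone sketch (the EVR values in the test are made up for illustration):

```python
def needs_newest_kernel_inhibitor(current_evr, newest_evr):
    """Return True when the 'newest installed kernel not in use' inhibitor
    should fire.

    LVE kernels (release string contains 'lve') are exempt: they are
    expected to be older than the newest installed kernel.
    """
    lve_kernel = "lve" in current_evr[2]
    return current_evr != newest_evr and not lve_kernel
```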
diff --git a/repos/system_upgrade/common/actors/peseventsscanner/actor.py b/repos/system_upgrade/common/actors/peseventsscanner/actor.py
index fadf76b..7ef2664 100644
--- a/repos/system_upgrade/common/actors/peseventsscanner/actor.py
+++ b/repos/system_upgrade/common/actors/peseventsscanner/actor.py
@@ -1,3 +1,6 @@
+import os
+import os.path
+
from leapp.actors import Actor
from leapp.libraries.actor.peseventsscanner import pes_events_scanner
from leapp.models import (
@@ -9,11 +12,15 @@ from leapp.models import (
RepositoriesMapping,
RepositoriesSetupTasks,
RHUIInfo,
- RpmTransactionTasks
+ RpmTransactionTasks,
+ ActiveVendorList,
)
from leapp.reporting import Report
from leapp.tags import FactsPhaseTag, IPUWorkflowTag
+LEAPP_FILES_DIR = "/etc/leapp/files"
+VENDORS_DIR = "/etc/leapp/files/vendors.d"
+
class PesEventsScanner(Actor):
"""
@@ -32,9 +39,22 @@ class PesEventsScanner(Actor):
RepositoriesMapping,
RHUIInfo,
RpmTransactionTasks,
+ ActiveVendorList,
)
produces = (PESRpmTransactionTasks, RepositoriesSetupTasks, Report)
tags = (IPUWorkflowTag, FactsPhaseTag)
def process(self):
- pes_events_scanner('/etc/leapp/files', 'pes-events.json')
+ pes_events_scanner(LEAPP_FILES_DIR, "pes-events.json")
+
+ active_vendors = []
+ for vendor_list in self.consume(ActiveVendorList):
+ active_vendors.extend(vendor_list.data)
+
+ pes_json_suffix = "_pes.json"
+ if os.path.isdir(VENDORS_DIR):
+ vendor_pesfiles = list(filter(lambda vfile: pes_json_suffix in vfile, os.listdir(VENDORS_DIR)))
+
+ for pesfile in vendor_pesfiles:
+ if pesfile[:-len(pes_json_suffix)] in active_vendors:
+ pes_events_scanner(VENDORS_DIR, pesfile)
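The vendor PES file selection in the new `process()` body can be sketched standalone. Note this sketch uses `endswith()` where the actor uses a looser substring test, and the file names in the test are hypothetical:

```python
def select_vendor_pes_files(dir_listing, active_vendors, suffix="_pes.json"):
    """Pick '<vendor>_pes.json' files whose vendor stem appears in the
    active vendor list, mirroring the filtering the actor applies to the
    vendors.d directory listing."""
    candidates = [f for f in dir_listing if f.endswith(suffix)]
    return [f for f in candidates if f[: -len(suffix)] in active_vendors]
```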
diff --git a/repos/system_upgrade/common/actors/peseventsscanner/libraries/peseventsscanner.py b/repos/system_upgrade/common/actors/peseventsscanner/libraries/peseventsscanner.py
index 1be2caa..aca8a72 100644
--- a/repos/system_upgrade/common/actors/peseventsscanner/libraries/peseventsscanner.py
+++ b/repos/system_upgrade/common/actors/peseventsscanner/libraries/peseventsscanner.py
@@ -59,12 +59,14 @@ class Action(IntEnum):
MERGED = 5
MOVED = 6
RENAMED = 7
+ REINSTALLED = 8
class Task(IntEnum):
KEEP = 0
INSTALL = 1
REMOVE = 2
+ REINSTALL = 3
def past(self):
- return ['kept', 'installed', 'removed'][self]
+ return ['kept', 'installed', 'removed', 'reinstalled'][self]
@@ -138,19 +140,26 @@ def _get_repositories_mapping(target_pesids):
:return: Dictionary with all repositories mapped.
"""
- repositories_map_msgs = api.consume(RepositoriesMapping)
- repositories_map_msg = next(repositories_map_msgs, None)
- if list(repositories_map_msgs):
- api.current_logger().warning('Unexpectedly received more than one RepositoriesMapping message.')
- if not repositories_map_msg:
- raise StopActorExecutionError(
- 'Cannot parse RepositoriesMapping data properly',
- details={'Problem': 'Did not receive a message with mapped repositories'}
- )
+ composite_mapping = []
+ composite_repos = []
+
+ for repomap_msg in api.consume(RepositoriesMapping):
+ if not repomap_msg:
+ raise StopActorExecutionError(
+ 'Cannot parse RepositoriesMapping data properly',
+ details={'Problem': 'Received a blank message with mapped repositories'}
+ )
+ composite_mapping.extend(repomap_msg.mapping)
+ composite_repos.extend(repomap_msg.repositories)
+
+ composite_map_msg = RepositoriesMapping(
+ mapping=composite_mapping,
+ repositories=composite_repos
+ )
rhui_info = next(api.consume(RHUIInfo), RHUIInfo(provider=''))
- repomap = peseventsscanner_repomap.RepoMapDataHandler(repositories_map_msg, cloud_provider=rhui_info.provider)
+ repomap = peseventsscanner_repomap.RepoMapDataHandler(composite_map_msg, cloud_provider=rhui_info.provider)
# NOTE: We have to calculate expected target repositories
# like in the setuptargetrepos actor. It's planned to handle this in different
# way in future...
@@ -233,6 +242,7 @@ def get_transaction_configuration():
transaction_configuration.to_install.extend(tasks.to_install)
transaction_configuration.to_remove.extend(tasks.to_remove)
transaction_configuration.to_keep.extend(tasks.to_keep)
+ transaction_configuration.to_reinstall.extend(tasks.to_reinstall)
return transaction_configuration
@@ -324,7 +334,7 @@ def parse_pes_events(json_data):
:return: List of Event tuples, where each event contains event type and input/output pkgs
"""
data = json.loads(json_data)
- if not isinstance(data, dict) or not data.get('packageinfo'):
+ if not isinstance(data, dict) or data.get('packageinfo') is None:
raise ValueError('Found PES data with invalid structure')
return list(chain(*[parse_entry(entry) for entry in data['packageinfo']]))
@@ -565,6 +575,13 @@ def process_events(releases, events, installed_pkgs):
if event.action in [Action.RENAMED, Action.REPLACED, Action.REMOVED]:
add_packages_to_tasks(current, event.in_pkgs, Task.REMOVE)
+ if event.action == Action.REINSTALLED:
+ # These packages have the same version string but differ in contents.
+ # noarch packages will most likely work fine after the upgrade, but
+ # the others may break due to library/binary incompatibilities.
+ # This is why we mark them for reinstallation.
+ add_packages_to_tasks(current, event.in_pkgs, Task.REINSTALL)
+
do_not_remove = set()
for package in current[Task.REMOVE]:
if package in tasks[Task.KEEP]:
@@ -572,6 +589,11 @@ def process_events(releases, events, installed_pkgs):
'{p} :: {r} to be kept / currently removed - removing package'.format(
p=package[0], r=current[Task.REMOVE][package]))
del tasks[Task.KEEP][package]
+ if package in tasks[Task.REINSTALL]:
+ api.current_logger().warning(
+ '{p} :: {r} to be reinstalled / currently removed - removing package'.format(
+ p=package[0], r=current[Task.REMOVE][package]))
+ del tasks[Task.REINSTALL][package]
elif package in tasks[Task.INSTALL]:
api.current_logger().warning(
'{p} :: {r} to be installed / currently removed - ignoring tasks'.format(
@@ -599,6 +621,11 @@ def process_events(releases, events, installed_pkgs):
'{p} :: {r} to be removed / currently kept - keeping package'.format(
p=package[0], r=current[Task.KEEP][package]))
del tasks[Task.REMOVE][package]
+ if package in tasks[Task.REINSTALL]:
+ api.current_logger().warning(
+ '{p} :: {r} to be reinstalled / currently kept - keeping package'.format(
+ p=package[0], r=current[Task.KEEP][package]))
+ del tasks[Task.REINSTALL][package]
for key in Task: # noqa: E1133; pylint: disable=not-an-iterable
for package in current[key]:
@@ -610,7 +637,9 @@ def process_events(releases, events, installed_pkgs):
map_repositories(tasks[Task.INSTALL])
map_repositories(tasks[Task.KEEP])
+ map_repositories(tasks[Task.REINSTALL])
filter_out_pkgs_in_blacklisted_repos(tasks[Task.INSTALL])
+ filter_out_pkgs_in_blacklisted_repos(tasks[Task.REINSTALL])
resolve_conflicting_requests(tasks)
return tasks
@@ -689,15 +718,24 @@ def resolve_conflicting_requests(tasks):
-> without this function, sip would reside in both [Task.KEEP] and [Task.REMOVE], causing a dnf conflict
"""
pkgs_in_conflict = set()
- for pkg in list(tasks[Task.INSTALL].keys()) + list(tasks[Task.KEEP].keys()):
+ preserve_keys = (
+ list(tasks[Task.INSTALL].keys())
+ + list(tasks[Task.KEEP].keys())
+ + list(tasks[Task.REINSTALL].keys())
+ )
+
+ for pkg in preserve_keys:
if pkg in tasks[Task.REMOVE]:
pkgs_in_conflict.add(pkg)
del tasks[Task.REMOVE][pkg]
if pkgs_in_conflict:
- api.current_logger().debug('The following packages were marked to be kept/installed and removed at the same'
- ' time. Leapp will upgrade them.\n{}'.format(
- '\n'.join(sorted(pkg[0] for pkg in pkgs_in_conflict))))
+ api.current_logger().debug(
+ "The following packages were marked to be kept/installed/reinstalled"
+ " and removed at the same time. Leapp will upgrade them.\n{}".format(
+ "\n".join(sorted(pkg[0] for pkg in pkgs_in_conflict))
+ )
+ )
def get_repositories_blacklisted():
@@ -773,7 +811,7 @@ def add_output_pkgs_to_transaction_conf(transaction_configuration, events):
message = 'The following target system packages will not be installed:\n'
for event in events:
- if event.action in (Action.SPLIT, Action.MERGED, Action.REPLACED, Action.RENAMED):
+ if event.action in (Action.SPLIT, Action.MERGED, Action.REPLACED, Action.RENAMED, Action.REINSTALLED):
if all([pkg.name in transaction_configuration.to_remove for pkg in event.in_pkgs]):
transaction_configuration.to_remove.extend(pkg.name for pkg in event.out_pkgs)
message += (
@@ -800,6 +838,7 @@ def filter_out_transaction_conf_pkgs(tasks, transaction_configuration):
"""
do_not_keep = [p for p in tasks[Task.KEEP] if p[0] in transaction_configuration.to_remove]
do_not_install = [p for p in tasks[Task.INSTALL] if p[0] in transaction_configuration.to_remove]
+ do_not_reinstall = [p for p in tasks[Task.REINSTALL] if p[0] in transaction_configuration.to_remove]
do_not_remove = [p for p in tasks[Task.REMOVE] if p[0] in transaction_configuration.to_install
or p[0] in transaction_configuration.to_keep]
@@ -813,6 +852,12 @@ def filter_out_transaction_conf_pkgs(tasks, transaction_configuration):
api.current_logger().debug('The following packages will not be installed because of the'
' /etc/leapp/transaction/to_remove transaction configuration file:'
'\n- ' + '\n- '.join(p[0] for p in sorted(do_not_install)))
+ if do_not_reinstall:
+ for pkg in do_not_reinstall:
+ tasks[Task.REINSTALL].pop(pkg)
+ api.current_logger().debug('The following packages will not be reinstalled because of the'
+ ' /etc/leapp/transaction/to_remove transaction configuration file:'
+ '\n- ' + '\n- '.join(p[0] for p in sorted(do_not_reinstall)))
if do_not_remove:
for pkg in do_not_remove:
tasks[Task.REMOVE].pop(pkg)
@@ -837,17 +882,20 @@ def produce_messages(tasks):
# Type casting to list to be Py2&Py3 compatible as on Py3 keys() returns dict_keys(), not a list
to_install_pkgs = sorted(tasks[Task.INSTALL].keys())
to_remove_pkgs = sorted(tasks[Task.REMOVE].keys())
+ to_reinstall_pkgs = sorted(tasks[Task.REINSTALL].keys())
to_enable_repos = sorted(set(tasks[Task.INSTALL].values()) | set(tasks[Task.KEEP].values()))
- if to_install_pkgs or to_remove_pkgs:
+ if to_install_pkgs or to_remove_pkgs or to_reinstall_pkgs:
enabled_modules = _get_enabled_modules()
modules_to_enable = [Module(name=p[1][0], stream=p[1][1]) for p in to_install_pkgs if p[1]]
modules_to_reset = enabled_modules
to_install_pkg_names = [p[0] for p in to_install_pkgs]
to_remove_pkg_names = [p[0] for p in to_remove_pkgs]
+ to_reinstall_pkg_names = [p[0] for p in to_reinstall_pkgs]
api.produce(PESRpmTransactionTasks(to_install=to_install_pkg_names,
to_remove=to_remove_pkg_names,
+ to_reinstall=to_reinstall_pkg_names,
modules_to_enable=modules_to_enable,
modules_to_reset=modules_to_reset))
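The composite-mapping change in `_get_repositories_mapping()` (merging every `RepositoriesMapping` message instead of warning that only one is expected) can be sketched with a stand-in type; `RepoMapMsg` here is a hypothetical substitute for the leapp model:

```python
from collections import namedtuple

# Hypothetical stand-in for the leapp RepositoriesMapping model.
RepoMapMsg = namedtuple("RepoMapMsg", ["mapping", "repositories"])


def merge_repomap_messages(messages):
    """Concatenate the mapping and repository entries from all received
    messages into one composite, rejecting blank ones, as the patched
    library now does."""
    composite_mapping, composite_repos = [], []
    for msg in messages:
        if not msg:
            raise ValueError("blank RepositoriesMapping message")
        composite_mapping.extend(msg.mapping)
        composite_repos.extend(msg.repositories)
    return RepoMapMsg(mapping=composite_mapping, repositories=composite_repos)
```

This lets vendor repositories contribute their own mapping data alongside the main `repomap.json` without any message being discarded.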
diff --git a/repos/system_upgrade/common/actors/peseventsscanner/tests/unit_test_peseventsscanner.py b/repos/system_upgrade/common/actors/peseventsscanner/tests/unit_test_peseventsscanner.py
index f4b02e9..c22165e 100644
--- a/repos/system_upgrade/common/actors/peseventsscanner/tests/unit_test_peseventsscanner.py
+++ b/repos/system_upgrade/common/actors/peseventsscanner/tests/unit_test_peseventsscanner.py
@@ -492,6 +492,10 @@ def test_get_events(monkeypatch):
assert reporting.create_report.called == 1
assert 'inhibitor' in reporting.create_report.report_fields['flags']
+ with open(os.path.join(CUR_DIR, 'files/sample04.json')) as f:
+ events = parse_pes_events(f.read())
+ assert len(events) == 0
+
def test_pes_data_not_found(monkeypatch):
def read_or_fetch_mocked(filename, directory="/etc/leapp/files", service=None, allow_empty=False):
diff --git a/repos/system_upgrade/common/actors/redhatsignedrpmscanner/actor.py b/repos/system_upgrade/common/actors/redhatsignedrpmscanner/actor.py
index 01f6df3..8464970 100644
--- a/repos/system_upgrade/common/actors/redhatsignedrpmscanner/actor.py
+++ b/repos/system_upgrade/common/actors/redhatsignedrpmscanner/actor.py
@@ -1,27 +1,69 @@
from leapp.actors import Actor
from leapp.libraries.common import rhui
-from leapp.models import InstalledRedHatSignedRPM, InstalledRPM, InstalledUnsignedRPM
+from leapp.models import InstalledRedHatSignedRPM, InstalledRPM, InstalledUnsignedRPM, VendorSignatures
from leapp.tags import FactsPhaseTag, IPUWorkflowTag
-class RedHatSignedRpmScanner(Actor):
+VENDOR_SIGS = {
+ 'rhel': ['199e2f91fd431d51',
+ '5326810137017186',
+ '938a80caf21541eb',
+ 'fd372689897da07a',
+ '45689c882fa658e0'],
+ 'centos': ['24c6a8a7f4a80eb5',
+ '05b555b38483c65d',
+ '4eb84e71f2ee9d55',
+ 'a963bbdbf533f4fa',
+ '6c7cb6ef305d49d6'],
+ 'cloudlinux': ['8c55a6628608cb71'],
+ 'almalinux': ['51d6647ec21ad6ea',
+ 'd36cb86cb86b3716'],
+ 'rocky': ['15af5dac6d745a60',
+ '702d426d350d275d'],
+ 'ol': ['72f97b74ec551f03',
+ '82562ea9ad986da3',
+ 'bc4d06a08d8b756f'],
+ 'eurolinux': ['75c333f418cd4a9e',
+ 'b413acad6275f250',
+ 'f7ad3e5a1c9fd080'],
+ 'scientific': ['b0b4183f192a7d7d']
+}
+
+VENDOR_PACKAGERS = {
+ "rhel": "Red Hat, Inc.",
+ "centos": "CentOS",
+ "cloudlinux": "CloudLinux Packaging Team",
+ "almalinux": "AlmaLinux Packaging Team",
+ "rocky": "infrastructure@rockylinux.org",
+ "eurolinux": "EuroLinux",
+ "scientific": "Scientific Linux",
+}
+
+
+class VendorSignedRpmScanner(Actor):
"""Provide data about installed RPM Packages signed by Red Hat.
After filtering the list of installed RPM packages by signature, a message
with relevant data will be produced.
"""
- name = 'red_hat_signed_rpm_scanner'
- consumes = (InstalledRPM,)
- produces = (InstalledRedHatSignedRPM, InstalledUnsignedRPM,)
+ name = "vendor_signed_rpm_scanner"
+ consumes = (InstalledRPM, VendorSignatures)
+ produces = (
+ InstalledRedHatSignedRPM,
+ InstalledUnsignedRPM,
+ )
tags = (IPUWorkflowTag, FactsPhaseTag)
def process(self):
- RH_SIGS = ['199e2f91fd431d51',
- '5326810137017186',
- '938a80caf21541eb',
- 'fd372689897da07a',
- '45689c882fa658e0']
+ vendor = self.configuration.os_release.release_id
+ vendor_keys = sum(VENDOR_SIGS.values(), [])
+ vendor_packager = VENDOR_PACKAGERS.get(vendor, "not-available")
+
+ for siglist in self.consume(VendorSignatures):
+ vendor_keys.extend(siglist.sigs)
+
+ self.log.debug("Signature list: {}".format(vendor_keys))
signed_pkgs = InstalledRedHatSignedRPM()
unsigned_pkgs = InstalledUnsignedRPM()
@@ -32,11 +74,11 @@ class RedHatSignedRpmScanner(Actor):
all_signed = [
env
for env in env_vars
- if env.name == 'LEAPP_DEVEL_RPMS_ALL_SIGNED' and env.value == '1'
+ if env.name == "LEAPP_DEVEL_RPMS_ALL_SIGNED" and env.value == "1"
]
- def has_rhsig(pkg):
- return any(key in pkg.pgpsig for key in RH_SIGS)
+ def has_vendorsig(pkg):
+ return any(key in pkg.pgpsig for key in vendor_keys)
def is_gpg_pubkey(pkg):
"""Check if gpg-pubkey pkg exists or LEAPP_DEVEL_RPMS_ALL_SIGNED=1
@@ -44,15 +86,30 @@ class RedHatSignedRpmScanner(Actor):
gpg-pubkey is not signed as it would require another package
to verify its signature
"""
- return ( # pylint: disable-msg=consider-using-ternary
- pkg.name == 'gpg-pubkey'
- and pkg.packager.startswith('Red Hat, Inc.')
- or all_signed
+ return ( # pylint: disable-msg=consider-using-ternary
+ pkg.name == "gpg-pubkey"
+ and (pkg.packager.startswith(vendor_packager))
+ or all_signed
)
def has_katello_prefix(pkg):
"""Whitelist the katello package."""
- return pkg.name.startswith('katello-ca-consumer')
+ return pkg.name.startswith("katello-ca-consumer")
+
+ def has_cpanel_prefix(pkg):
+ """
+ Whitelist the cPanel packages.
+ A side effect of cPanel's deployment method is that its packages both have no
+ PGP signature and aren't associated with any package repository.
+ They do, however, have a specific naming scheme that can be used to include them into
+ the upgrade process.
+ """
+
+ # NOTE: if another case like this and the above katello occurs, consider adding a
+ # mechanism (a third-party extension) to do this in a way that allows extending it to
+ # other configurations.
+ # A separate file for the "vendors.d" folder with package name wildcards?
+ return pkg.name.startswith("cpanel-")
def is_azure_pkg(pkg):
"""Whitelist Azure config package."""
@@ -68,16 +125,25 @@ class RedHatSignedRpmScanner(Actor):
for pkg in rpm_pkgs.items:
if any(
[
- has_rhsig(pkg),
+ has_vendorsig(pkg),
is_gpg_pubkey(pkg),
has_katello_prefix(pkg),
+ has_cpanel_prefix(pkg),
is_azure_pkg(pkg),
]
):
signed_pkgs.items.append(pkg)
+ self.log.debug(
+ "Package {} is signed, packager: {}, signature: {}".format(
+ pkg.name, pkg.packager, pkg.pgpsig
+ )
+ )
continue
unsigned_pkgs.items.append(pkg)
+ self.log.debug(
+ "Package {} is unsigned, packager: {}, signature: {}".format(pkg.name, pkg.packager, pkg.pgpsig)
+ )
self.produce(signed_pkgs)
self.produce(unsigned_pkgs)
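The signature test at the heart of the renamed scanner is a plain substring match of known key IDs against the package's `pgpsig` header. A minimal sketch, using a made-up subset of the key list and hypothetical signature strings:

```python
# Subset of vendor key IDs, for illustration only.
VENDOR_KEYS = ["51d6647ec21ad6ea", "15af5dac6d745a60"]


def has_vendorsig(pgpsig, vendor_keys=VENDOR_KEYS):
    """True when any known vendor key ID appears in the package's
    signature string, as in the actor's has_vendorsig() check."""
    return any(key in pgpsig for key in vendor_keys)
```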
diff --git a/repos/system_upgrade/common/actors/removeobsoletegpgkeys/actor.py b/repos/system_upgrade/common/actors/removeobsoletegpgkeys/actor.py
new file mode 100644
index 0000000..5674ee3
--- /dev/null
+++ b/repos/system_upgrade/common/actors/removeobsoletegpgkeys/actor.py
@@ -0,0 +1,24 @@
+from leapp.actors import Actor
+from leapp.libraries.actor import removeobsoleterpmgpgkeys
+from leapp.models import DNFWorkaround, InstalledRPM
+from leapp.tags import FactsPhaseTag, IPUWorkflowTag
+
+
+class RemoveObsoleteGpgKeys(Actor):
+ """
+ Remove obsoleted RPM GPG keys.
+
+ A new version might make existing RPM GPG keys obsolete. This might be caused
+ for example by the hashing algorithm becoming deprecated or by the key
+ getting replaced.
+
+ A DNFWorkaround is registered to actually remove the keys.
+ """
+
+ name = "remove_obsolete_gpg_keys"
+ consumes = (InstalledRPM,)
+ produces = (DNFWorkaround,)
+ tags = (FactsPhaseTag, IPUWorkflowTag)
+
+ def process(self):
+ removeobsoleterpmgpgkeys.process()
diff --git a/repos/system_upgrade/common/actors/removeobsoletegpgkeys/libraries/removeobsoleterpmgpgkeys.py b/repos/system_upgrade/common/actors/removeobsoletegpgkeys/libraries/removeobsoleterpmgpgkeys.py
new file mode 100644
index 0000000..11c61e3
--- /dev/null
+++ b/repos/system_upgrade/common/actors/removeobsoletegpgkeys/libraries/removeobsoleterpmgpgkeys.py
@@ -0,0 +1,51 @@
+from leapp.libraries.common.config.version import get_target_major_version
+from leapp.libraries.common.rpms import has_package
+from leapp.libraries.stdlib import api
+from leapp.models import DNFWorkaround, InstalledRPM
+
+# maps target version to keys obsoleted in that version
+OBSOLETED_KEYS_MAP = {
+ 7: [],
+ 8: [
+ "gpg-pubkey-2fa658e0-45700c69",
+ "gpg-pubkey-37017186-45761324",
+ "gpg-pubkey-db42a60e-37ea5438",
+ ],
+ 9: [
+ "gpg-pubkey-d4082792-5b32db75",
+ "gpg-pubkey-3abb34f8-5ffd890e",
+ "gpg-pubkey-6275f250-5e26cb2e",
+ ],
+}
+
+
+def _get_obsolete_keys():
+ """
+ Return keys obsoleted in target and previous versions
+ """
+ keys = []
+ for version in range(7, int(get_target_major_version()) + 1):
+ for key in OBSOLETED_KEYS_MAP[version]:
+ name, key_version, key_release = key.rsplit("-", 2)
+ if has_package(InstalledRPM, name, version=key_version, release=key_release):
+ keys.append(key)
+
+ return keys
+
+
+def register_dnfworkaround(keys):
+ api.produce(
+ DNFWorkaround(
+ display_name="remove obsolete RPM GPG keys from RPM DB",
+ script_path=api.current_actor().get_common_tool_path("removerpmgpgkeys"),
+ script_args=keys,
+ )
+ )
+
+
+def process():
+ keys = _get_obsolete_keys()
+ if not keys:
+ return
+
+ register_dnfworkaround(keys)
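Key strings in `OBSOLETED_KEYS_MAP` are split from the right so that the hyphenated `gpg-pubkey` package name stays intact; a standalone sketch of the parsing `_get_obsolete_keys()` relies on:

```python
def split_gpg_pubkey_nvr(key):
    """Split 'gpg-pubkey-<version>-<release>' into its three parts.
    rsplit with maxsplit=2 splits only at the two rightmost hyphens,
    keeping the hyphenated package name whole."""
    name, version, release = key.rsplit("-", 2)
    return name, version, release
```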
diff --git a/repos/system_upgrade/common/actors/removeobsoletegpgkeys/tests/test_removeobsoleterpmgpgkeys.py b/repos/system_upgrade/common/actors/removeobsoletegpgkeys/tests/test_removeobsoleterpmgpgkeys.py
new file mode 100644
index 0000000..1d48781
--- /dev/null
+++ b/repos/system_upgrade/common/actors/removeobsoletegpgkeys/tests/test_removeobsoleterpmgpgkeys.py
@@ -0,0 +1,94 @@
+import pytest
+
+from leapp.libraries.actor import removeobsoleterpmgpgkeys
+from leapp.libraries.common.config.version import get_target_major_version
+from leapp.libraries.common.rpms import has_package
+from leapp.libraries.common.testutils import CurrentActorMocked, produce_mocked
+from leapp.libraries.stdlib import api
+from leapp.models import DNFWorkaround, InstalledRPM, RPM
+
+
+def _get_test_installedrpm():
+ return InstalledRPM(
+ items=[
+ RPM(
+ name='gpg-pubkey',
+ version='d4082792',
+ release='5b32db75',
+ epoch='0',
+ packager='Red Hat, Inc. (auxiliary key 2) <security@redhat.com>',
+ arch='noarch',
+ pgpsig=''
+ ),
+ RPM(
+ name='gpg-pubkey',
+ version='2fa658e0',
+ release='45700c69',
+ epoch='0',
+ packager='Red Hat, Inc. (auxiliary key) <security@redhat.com>',
+ arch='noarch',
+ pgpsig=''
+ ),
+ RPM(
+ name='gpg-pubkey',
+ version='12345678',
+ release='abcdefgh',
+ epoch='0',
+ packager='made up',
+ arch='noarch',
+ pgpsig=''
+ ),
+ ]
+ )
+
+
+@pytest.mark.parametrize(
+ "version, expected",
+ [
+ (9, ["gpg-pubkey-d4082792-5b32db75", "gpg-pubkey-2fa658e0-45700c69"]),
+ (8, ["gpg-pubkey-2fa658e0-45700c69"])
+ ]
+)
+def test_get_obsolete_keys(monkeypatch, version, expected):
+ def get_target_major_version_mocked():
+ return version
+
+ monkeypatch.setattr(
+ removeobsoleterpmgpgkeys,
+ "get_target_major_version",
+ get_target_major_version_mocked,
+ )
+
+ monkeypatch.setattr(
+ api,
+ "current_actor",
+ CurrentActorMocked(
+ msgs=[_get_test_installedrpm()]
+ ),
+ )
+
+ keys = removeobsoleterpmgpgkeys._get_obsolete_keys()
+ assert set(keys) == set(expected)
+
+
+@pytest.mark.parametrize(
+ "keys, should_register",
+ [
+ (["gpg-pubkey-d4082792-5b32db75"], True),
+ ([], False)
+ ]
+)
+def test_workaround_should_register(monkeypatch, keys, should_register):
+ def get_obsolete_keys_mocked():
+ return keys
+
+ monkeypatch.setattr(
+ removeobsoleterpmgpgkeys,
+ '_get_obsolete_keys',
+ get_obsolete_keys_mocked
+ )
+ monkeypatch.setattr(api, 'produce', produce_mocked())
+ monkeypatch.setattr(api, "current_actor", CurrentActorMocked())
+
+ removeobsoleterpmgpgkeys.process()
+ assert api.produce.called == should_register
diff --git a/repos/system_upgrade/common/actors/repositoriesmapping/libraries/repositoriesmapping.py b/repos/system_upgrade/common/actors/repositoriesmapping/libraries/repositoriesmapping.py
index b2d00f3..e9458c5 100644
--- a/repos/system_upgrade/common/actors/repositoriesmapping/libraries/repositoriesmapping.py
+++ b/repos/system_upgrade/common/actors/repositoriesmapping/libraries/repositoriesmapping.py
@@ -1,12 +1,9 @@
-from collections import defaultdict
-import json
import os
-from leapp.exceptions import StopActorExecutionError
from leapp.libraries.common.config.version import get_target_major_version, get_source_major_version
-from leapp.libraries.common.fetch import read_or_fetch
+from leapp.libraries.common.repomaputils import RepoMapData, read_repofile, inhibit_upgrade
from leapp.libraries.stdlib import api
-from leapp.models import RepositoriesMapping, PESIDRepositoryEntry, RepoMapEntry
+from leapp.models import RepositoriesMapping
from leapp.models.fields import ModelViolationError
OLD_REPOMAP_FILE = 'repomap.csv'
@@ -16,144 +13,9 @@ REPOMAP_FILE = 'repomap.json'
"""The name of the new repository mapping file."""
-class RepoMapData(object):
- VERSION_FORMAT = '1.0.0'
-
- def __init__(self):
- self.repositories = []
- self.mapping = {}
-
- def add_repository(self, data, pesid):
- """
- Add new PESIDRepositoryEntry with given pesid from the provided dictionary.
-
- :param data: A dict containing the data of the added repository. The dictionary structure corresponds
- to the repositories entries in the repository mapping JSON schema.
- :type data: Dict[str, str]
- :param pesid: PES id of the repository family that the newly added repository belongs to.
- :type pesid: str
- """
- self.repositories.append(PESIDRepositoryEntry(
- repoid=data['repoid'],
- channel=data['channel'],
- rhui=data.get('rhui', ''),
- repo_type=data['repo_type'],
- arch=data['arch'],
- major_version=data['major_version'],
- pesid=pesid
- ))
-
- def get_repositories(self, valid_major_versions):
- """
- Return the list of PESIDRepositoryEntry object matching the specified major versions.
- """
- return [repo for repo in self.repositories if repo.major_version in valid_major_versions]
-
- def add_mapping(self, source_major_version, target_major_version, source_pesid, target_pesid):
- """
- Add a new mapping entry that is mapping the source pesid to the destination pesid(s),
- relevant in an IPU from the supplied source major version to the supplied target
- major version.
-
- :param str source_major_version: Specifies the major version of the source system
- for which the added mapping applies.
- :param str target_major_version: Specifies the major version of the target system
- for which the added mapping applies.
- :param str source_pesid: PESID of the source repository.
- :param Union[str|List[str]] target_pesid: A single target PESID or a list of target
- PESIDs of the added mapping.
- """
- # NOTE: it could be more simple, but I prefer to be sure the input data
- # contains just one map per source PESID.
- key = '{}:{}'.format(source_major_version, target_major_version)
- rmap = self.mapping.get(key, defaultdict(set))
- self.mapping[key] = rmap
- if isinstance(target_pesid, list):
- rmap[source_pesid].update(target_pesid)
- else:
- rmap[source_pesid].add(target_pesid)
-
- def get_mappings(self, src_major_version, dst_major_version):
- """
- Return the list of RepoMapEntry objects for the specified upgrade path.
-
- IOW, the whole mapping for specified IPU.
- """
- key = '{}:{}'.format(src_major_version, dst_major_version)
- rmap = self.mapping.get(key, None)
- if not rmap:
- return None
- map_list = []
- for src_pesid in sorted(rmap.keys()):
- map_list.append(RepoMapEntry(source=src_pesid, target=sorted(rmap[src_pesid])))
- return map_list
-
- @staticmethod
- def load_from_dict(data):
- if data['version_format'] != RepoMapData.VERSION_FORMAT:
- raise ValueError(
- 'The obtained repomap data has unsupported version of format.'
- ' Get {} required {}'
- .format(data['version_format'], RepoMapData.VERSION_FORMAT)
- )
-
- repomap = RepoMapData()
-
- # Load reposiories
- existing_pesids = set()
- for repo_family in data['repositories']:
- existing_pesids.add(repo_family['pesid'])
- for repo in repo_family['entries']:
- repomap.add_repository(repo, repo_family['pesid'])
-
- # Load mappings
- for mapping in data['mapping']:
- for entry in mapping['entries']:
- if not isinstance(entry['target'], list):
- raise ValueError(
- 'The target field of a mapping entry is not a list: {}'
- .format(entry)
- )
-
- for pesid in [entry['source']] + entry['target']:
- if pesid not in existing_pesids:
- raise ValueError(
- 'The {} pesid is not related to any repository.'
- .format(pesid)
- )
- repomap.add_mapping(
- source_major_version=mapping['source_major_version'],
- target_major_version=mapping['target_major_version'],
- source_pesid=entry['source'],
- target_pesid=entry['target'],
- )
- return repomap
-
-
-def _inhibit_upgrade(msg):
- raise StopActorExecutionError(
- msg,
- details={'hint': ('Read documentation at the following link for more'
- ' information about how to retrieve the valid file:'
- ' https://access.redhat.com/articles/3664871')})
-
-
-def _read_repofile(repofile):
- # NOTE: what about catch StopActorExecution error when the file cannot be
- # obtained -> then check whether old_repomap file exists and in such a case
- # inform user they have to provde the new repomap.json file (we have the
- # warning now only which could be potentially overlooked)
- try:
- return json.loads(read_or_fetch(repofile))
- except ValueError:
- # The data does not contain a valid json
- _inhibit_upgrade('The repository mapping file is invalid: file does not contain a valid JSON object.')
- return None # Avoids inconsistent-return-statements warning
-
-
-def scan_repositories(read_repofile_func=_read_repofile):
+def scan_repositories(read_repofile_func=read_repofile):
"""
- Scan the repository mapping file and produce RepositoriesMap msg.
+ Scan the repository mapping file and produce RepositoriesMapping msg.
See the description of the actor for more details.
"""
@@ -185,10 +47,10 @@ def scan_repositories(read_repofile_func=_read_repofile):
'the JSON does not match required schema (wrong field type/value): {}'
.format(err)
)
- _inhibit_upgrade(err_message)
+ inhibit_upgrade(err_message)
except KeyError as err:
- _inhibit_upgrade(
+ inhibit_upgrade(
'The repository mapping file is invalid: the JSON is missing a required field: {}'.format(err))
except ValueError as err:
# The error should contain enough information, so we do not need to clarify it further
- _inhibit_upgrade('The repository mapping file is invalid: {}'.format(err))
+ inhibit_upgrade('The repository mapping file is invalid: {}'.format(err))
diff --git a/repos/system_upgrade/common/actors/repositoriesmapping/tests/unit_test_repositoriesmapping.py b/repos/system_upgrade/common/actors/repositoriesmapping/tests/unit_test_repositoriesmapping.py
index 3c0b04b..3480432 100644
--- a/repos/system_upgrade/common/actors/repositoriesmapping/tests/unit_test_repositoriesmapping.py
+++ b/repos/system_upgrade/common/actors/repositoriesmapping/tests/unit_test_repositoriesmapping.py
@@ -15,7 +15,6 @@ from leapp.models import PESIDRepositoryEntry
CUR_DIR = os.path.dirname(os.path.abspath(__file__))
-
@pytest.fixture
def adjust_cwd():
previous_cwd = os.getcwd()
diff --git a/repos/system_upgrade/common/actors/rpmtransactionconfigtaskscollector/libraries/rpmtransactionconfigtaskscollector.py b/repos/system_upgrade/common/actors/rpmtransactionconfigtaskscollector/libraries/rpmtransactionconfigtaskscollector.py
index 05797f5..e6f7803 100644
--- a/repos/system_upgrade/common/actors/rpmtransactionconfigtaskscollector/libraries/rpmtransactionconfigtaskscollector.py
+++ b/repos/system_upgrade/common/actors/rpmtransactionconfigtaskscollector/libraries/rpmtransactionconfigtaskscollector.py
@@ -18,21 +18,37 @@ def load_tasks_file(path, logger):
return []
+def filter_out(installed_rpm_names, to_filter, debug_msg):
+ # These are the packages that aren't installed on the system.
+ filtered_ok = [pkg for pkg in to_filter if pkg not in installed_rpm_names]
+
+ # And these ones are the ones that are.
+ filtered_out = list(set(to_filter) - set(filtered_ok))
+ if filtered_out:
+ api.current_logger().debug(
+ debug_msg +
+ '\n- ' + '\n- '.join(filtered_out)
+ )
+ # We may want to use either of the two sets.
+ return filtered_ok, filtered_out
+
+
def load_tasks(base_dir, logger):
# Loads configuration files to_install, to_keep, and to_remove from the given base directory
rpms = next(api.consume(InstalledRedHatSignedRPM))
rpm_names = [rpm.name for rpm in rpms.items]
+
to_install = load_tasks_file(os.path.join(base_dir, 'to_install'), logger)
+ install_debug_msg = 'The following packages from "to_install" file will be ignored as they are already installed:'
# we do not want to put into rpm transaction what is already installed (it will go to "to_upgrade" bucket)
- to_install_filtered = [pkg for pkg in to_install if pkg not in rpm_names]
+ to_install_filtered, _ = filter_out(rpm_names, to_install, install_debug_msg)
- filtered = set(to_install) - set(to_install_filtered)
- if filtered:
- api.current_logger().debug(
- 'The following packages from "to_install" file will be ignored as they are already installed:'
- '\n- ' + '\n- '.join(filtered))
+ to_reinstall = load_tasks_file(os.path.join(base_dir, 'to_reinstall'), logger)
+ reinstall_debug_msg = 'The following packages from "to_reinstall" file are installed and will be reinstalled:'
+ _, to_reinstall_filtered = filter_out(rpm_names, to_reinstall, reinstall_debug_msg)
return RpmTransactionTasks(
to_install=to_install_filtered,
+ to_reinstall=to_reinstall_filtered,
to_keep=load_tasks_file(os.path.join(base_dir, 'to_keep'), logger),
to_remove=load_tasks_file(os.path.join(base_dir, 'to_remove'), logger))
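The new `filter_out()` helper returns both partitions of a task list, so `to_install` can keep the not-yet-installed side while `to_reinstall` keeps the installed side. A minimal order-preserving sketch (the upstream helper builds the installed side via a set difference instead, and also logs it):

```python
def filter_out(installed_rpm_names, to_filter):
    """Partition to_filter against the installed-package name list into
    (not installed, installed); callers pick whichever side they need."""
    not_installed = [pkg for pkg in to_filter if pkg not in installed_rpm_names]
    installed = [pkg for pkg in to_filter if pkg in installed_rpm_names]
    return not_installed, installed
```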
diff --git a/repos/system_upgrade/common/actors/scancustomrepofile/actor.py b/repos/system_upgrade/common/actors/scancustomrepofile/actor.py
index d46018f..bb49b4e 100644
--- a/repos/system_upgrade/common/actors/scancustomrepofile/actor.py
+++ b/repos/system_upgrade/common/actors/scancustomrepofile/actor.py
@@ -1,6 +1,9 @@
from leapp.actors import Actor
from leapp.libraries.actor import scancustomrepofile
-from leapp.models import CustomTargetRepository, CustomTargetRepositoryFile
+from leapp.models import (
+ CustomTargetRepository,
+ CustomTargetRepositoryFile,
+)
from leapp.tags import FactsPhaseTag, IPUWorkflowTag
@@ -18,7 +21,7 @@ class ScanCustomRepofile(Actor):
If the file doesn't exist, nothing happens.
"""
- name = 'scan_custom_repofile'
+ name = "scan_custom_repofile"
consumes = ()
produces = (CustomTargetRepository, CustomTargetRepositoryFile)
tags = (FactsPhaseTag, IPUWorkflowTag)
diff --git a/repos/system_upgrade/common/actors/scancustomrepofile/libraries/scancustomrepofile.py b/repos/system_upgrade/common/actors/scancustomrepofile/libraries/scancustomrepofile.py
index c294193..c0820eb 100644
--- a/repos/system_upgrade/common/actors/scancustomrepofile/libraries/scancustomrepofile.py
+++ b/repos/system_upgrade/common/actors/scancustomrepofile/libraries/scancustomrepofile.py
@@ -18,18 +18,27 @@ def process():
"""
if not os.path.isfile(CUSTOM_REPO_PATH):
api.current_logger().debug(
- "The {} file doesn't exist. Nothing to do."
- .format(CUSTOM_REPO_PATH))
+ "The {} file doesn't exist. Nothing to do.".format(CUSTOM_REPO_PATH)
+ )
return
- api.current_logger().info("The {} file exists.".format(CUSTOM_REPO_PATH))
+
repofile = repofileutils.parse_repofile(CUSTOM_REPO_PATH)
if not repofile.data:
+ api.current_logger().info(
+ "The {} file exists, but is empty. Nothing to do.".format(CUSTOM_REPO_PATH)
+ )
return
api.produce(CustomTargetRepositoryFile(file=CUSTOM_REPO_PATH))
+
for repo in repofile.data:
- api.produce(CustomTargetRepository(
- repoid=repo.repoid,
- name=repo.name,
- baseurl=repo.baseurl,
- enabled=repo.enabled,
- ))
+ api.produce(
+ CustomTargetRepository(
+ repoid=repo.repoid,
+ name=repo.name,
+ baseurl=repo.baseurl,
+ enabled=repo.enabled,
+ )
+ )
+ api.current_logger().info(
+ "The {} file exists, custom repositories loaded.".format(CUSTOM_REPO_PATH)
+ )
diff --git a/repos/system_upgrade/common/actors/scancustomrepofile/tests/test_scancustomrepofile.py b/repos/system_upgrade/common/actors/scancustomrepofile/tests/test_scancustomrepofile.py
index 4ea33a2..aaec273 100644
--- a/repos/system_upgrade/common/actors/scancustomrepofile/tests/test_scancustomrepofile.py
+++ b/repos/system_upgrade/common/actors/scancustomrepofile/tests/test_scancustomrepofile.py
@@ -4,8 +4,13 @@ from leapp.libraries.actor import scancustomrepofile
from leapp.libraries.common import repofileutils
from leapp.libraries.common.testutils import produce_mocked
from leapp.libraries.stdlib import api
-from leapp.models import CustomTargetRepository, CustomTargetRepositoryFile, RepositoryData, RepositoryFile
+from leapp.models import (
+ CustomTargetRepository,
+ CustomTargetRepositoryFile,
+ RepositoryData,
+ RepositoryFile,
+)
_REPODATA = [
RepositoryData(repoid="repo1", name="repo1name", baseurl="repo1url", enabled=True),
@@ -57,7 +62,7 @@ def test_valid_repofile_exists(monkeypatch):
monkeypatch.setattr(repofileutils, 'parse_repofile', _mocked_parse_repofile)
monkeypatch.setattr(api, 'current_logger', LoggerMocked())
scancustomrepofile.process()
- msg = "The {} file exists.".format(scancustomrepofile.CUSTOM_REPO_PATH)
+ msg = "The {} file exists, custom repositories loaded.".format(scancustomrepofile.CUSTOM_REPO_PATH)
assert api.current_logger.infomsg == msg
assert api.produce.called == len(_CUSTOM_REPOS) + 1
assert _CUSTOM_REPO_FILE_MSG in api.produce.model_instances
@@ -73,6 +78,6 @@ def test_empty_repofile_exists(monkeypatch):
monkeypatch.setattr(repofileutils, 'parse_repofile', _mocked_parse_repofile)
monkeypatch.setattr(api, 'current_logger', LoggerMocked())
scancustomrepofile.process()
- msg = "The {} file exists.".format(scancustomrepofile.CUSTOM_REPO_PATH)
+ msg = "The {} file exists, but is empty. Nothing to do.".format(scancustomrepofile.CUSTOM_REPO_PATH)
assert api.current_logger.infomsg == msg
assert not api.produce.called
diff --git a/repos/system_upgrade/common/actors/scanvendorrepofiles/actor.py b/repos/system_upgrade/common/actors/scanvendorrepofiles/actor.py
new file mode 100644
index 0000000..dd27b28
--- /dev/null
+++ b/repos/system_upgrade/common/actors/scanvendorrepofiles/actor.py
@@ -0,0 +1,27 @@
+from leapp.actors import Actor
+from leapp.libraries.actor import scanvendorrepofiles
+from leapp.models import (
+ CustomTargetRepositoryFile,
+ ActiveVendorList,
+ VendorCustomTargetRepositoryList,
+)
+from leapp.tags import FactsPhaseTag, IPUWorkflowTag
+from leapp.libraries.stdlib import api
+
+
+class ScanVendorRepofiles(Actor):
+ """
+ Load and produce custom repository data from vendor-provided files.
+ Only those vendors whose source system repoids were found on the system will be included.
+ """
+
+ name = "scan_vendor_repofiles"
+ consumes = (ActiveVendorList,)
+ produces = (
+ CustomTargetRepositoryFile,
+ VendorCustomTargetRepositoryList,
+ )
+ tags = (FactsPhaseTag, IPUWorkflowTag)
+
+ def process(self):
+ scanvendorrepofiles.process()
diff --git a/repos/system_upgrade/common/actors/scanvendorrepofiles/libraries/scanvendorrepofiles.py b/repos/system_upgrade/common/actors/scanvendorrepofiles/libraries/scanvendorrepofiles.py
new file mode 100644
index 0000000..ba74be1
--- /dev/null
+++ b/repos/system_upgrade/common/actors/scanvendorrepofiles/libraries/scanvendorrepofiles.py
@@ -0,0 +1,72 @@
+import os
+
+from leapp.libraries.common import repofileutils
+from leapp.libraries.stdlib import api
+from leapp.models import (
+ CustomTargetRepository,
+ CustomTargetRepositoryFile,
+ ActiveVendorList,
+ VendorCustomTargetRepositoryList,
+)
+
+
+VENDORS_DIR = "/etc/leapp/files/vendors.d/"
+REPOFILE_SUFFIX = ".repo"
+
+
+def process():
+ """
+ Produce CustomTargetRepository msgs for the vendor repo files inside the
+ <VENDORS_DIR>.
+
+ The CustomTargetRepository messages are produced only if a "from" vendor repository
+ listed inside its map matches one of the repositories active on the system.
+ """
+ if not os.path.isdir(VENDORS_DIR):
+ api.current_logger().debug(
+ "The {} directory doesn't exist. Nothing to do.".format(VENDORS_DIR)
+ )
+ return
+
+ for reponame in os.listdir(VENDORS_DIR):
+ if not reponame.endswith(REPOFILE_SUFFIX):
+ continue
+ # Cut the .repo part to get only the name.
+ vendor_name = reponame[:-5]
+
+ active_vendors = []
+ for vendor_list in api.consume(ActiveVendorList):
+ active_vendors.extend(vendor_list.data)
+
+ api.current_logger().debug("Active vendor list: {}".format(active_vendors))
+
+ if vendor_name not in active_vendors:
+ api.current_logger().debug(
+ "Vendor {} not in active list, skipping".format(vendor_name)
+ )
+ continue
+
+ api.current_logger().debug(
+ "Vendor {} found in active list, processing file".format(vendor_name)
+ )
+ full_repo_path = os.path.join(VENDORS_DIR, reponame)
+ repofile = repofileutils.parse_repofile(full_repo_path)
+
+ api.produce(CustomTargetRepositoryFile(file=full_repo_path))
+
+ custom_vendor_repos = [
+ CustomTargetRepository(
+ repoid=repo.repoid,
+ name=repo.name,
+ baseurl=repo.baseurl,
+ enabled=repo.enabled,
+ ) for repo in repofile.data
+ ]
+
+ api.produce(
+ VendorCustomTargetRepositoryList(vendor=vendor_name, repos=custom_vendor_repos)
+ )
+
+ api.current_logger().info(
+ "The {} directory exists, vendor repositories loaded.".format(VENDORS_DIR)
+ )
diff --git a/repos/system_upgrade/common/actors/scanvendorrepofiles/tests/test_scanvendorrepofiles.py b/repos/system_upgrade/common/actors/scanvendorrepofiles/tests/test_scanvendorrepofiles.py
new file mode 100644
index 0000000..cb5c7ab
--- /dev/null
+++ b/repos/system_upgrade/common/actors/scanvendorrepofiles/tests/test_scanvendorrepofiles.py
@@ -0,0 +1,131 @@
+import os
+
+from leapp.libraries.actor import scancustomrepofile
+from leapp.libraries.common import repofileutils
+from leapp.libraries.common.testutils import produce_mocked
+from leapp.libraries.stdlib import api
+
+from leapp.models import (CustomTargetRepository, CustomTargetRepositoryFile,
+ RepositoryData, RepositoryFile)
+
+
+_REPODATA = [
+ RepositoryData(repoid="repo1", name="repo1name", baseurl="repo1url", enabled=True),
+ RepositoryData(repoid="repo2", name="repo2name", baseurl="repo2url", enabled=False),
+ RepositoryData(repoid="repo3", name="repo3name", enabled=True),
+ RepositoryData(repoid="repo4", name="repo4name", mirrorlist="mirror4list", enabled=True),
+]
+
+_CUSTOM_REPOS = [
+ CustomTargetRepository(repoid="repo1", name="repo1name", baseurl="repo1url", enabled=True),
+ CustomTargetRepository(repoid="repo2", name="repo2name", baseurl="repo2url", enabled=False),
+ CustomTargetRepository(repoid="repo3", name="repo3name", baseurl=None, enabled=True),
+ CustomTargetRepository(repoid="repo4", name="repo4name", baseurl=None, enabled=True),
+]
+
+_CUSTOM_REPO_FILE_MSG = CustomTargetRepositoryFile(file=scancustomrepofile.CUSTOM_REPO_PATH)
+
+
+_TESTING_REPODATA = [
+ RepositoryData(repoid="repo1-stable", name="repo1name", baseurl="repo1url", enabled=True),
+ RepositoryData(repoid="repo2-testing", name="repo2name", baseurl="repo2url", enabled=False),
+ RepositoryData(repoid="repo3-stable", name="repo3name", enabled=False),
+ RepositoryData(repoid="repo4-testing", name="repo4name", mirrorlist="mirror4list", enabled=True),
+]
+
+_TESTING_CUSTOM_REPOS_STABLE_TARGET = [
+ CustomTargetRepository(repoid="repo1-stable", name="repo1name", baseurl="repo1url", enabled=True),
+ CustomTargetRepository(repoid="repo2-testing", name="repo2name", baseurl="repo2url", enabled=False),
+ CustomTargetRepository(repoid="repo3-stable", name="repo3name", baseurl=None, enabled=False),
+ CustomTargetRepository(repoid="repo4-testing", name="repo4name", baseurl=None, enabled=True),
+]
+
+_TESTING_CUSTOM_REPOS_BETA_TARGET = [
+ CustomTargetRepository(repoid="repo1-stable", name="repo1name", baseurl="repo1url", enabled=True),
+ CustomTargetRepository(repoid="repo2-testing", name="repo2name", baseurl="repo2url", enabled=True),
+ CustomTargetRepository(repoid="repo3-stable", name="repo3name", baseurl=None, enabled=False),
+ CustomTargetRepository(repoid="repo4-testing", name="repo4name", baseurl=None, enabled=True),
+]
+
+_PROCESS_STABLE_TARGET = "stable"
+_PROCESS_BETA_TARGET = "beta"
+
+
+class LoggerMocked(object):
+ def __init__(self):
+ self.infomsg = None
+ self.debugmsg = None
+
+ def info(self, msg):
+ self.infomsg = msg
+
+ def debug(self, msg):
+ self.debugmsg = msg
+
+ def __call__(self):
+ return self
+
+
+def test_no_repofile(monkeypatch):
+ monkeypatch.setattr(os.path, 'isfile', lambda dummy: False)
+ monkeypatch.setattr(api, 'produce', produce_mocked())
+ monkeypatch.setattr(api, 'current_logger', LoggerMocked())
+ scancustomrepofile.process()
+ msg = "The {} file doesn't exist. Nothing to do.".format(scancustomrepofile.CUSTOM_REPO_PATH)
+ assert api.current_logger.debugmsg == msg
+ assert not api.produce.called
+
+
+def test_valid_repofile_exists(monkeypatch):
+ def _mocked_parse_repofile(fpath):
+ return RepositoryFile(file=fpath, data=_REPODATA)
+ monkeypatch.setattr(os.path, 'isfile', lambda dummy: True)
+ monkeypatch.setattr(api, 'produce', produce_mocked())
+ monkeypatch.setattr(repofileutils, 'parse_repofile', _mocked_parse_repofile)
+ monkeypatch.setattr(api, 'current_logger', LoggerMocked())
+ scancustomrepofile.process()
+ msg = "The {} file exists, custom repositories loaded.".format(scancustomrepofile.CUSTOM_REPO_PATH)
+ assert api.current_logger.infomsg == msg
+ assert api.produce.called == len(_CUSTOM_REPOS) + 1
+ assert _CUSTOM_REPO_FILE_MSG in api.produce.model_instances
+ for crepo in _CUSTOM_REPOS:
+ assert crepo in api.produce.model_instances
+
+
+def test_target_stable_repos(monkeypatch):
+ def _mocked_parse_repofile(fpath):
+ return RepositoryFile(file=fpath, data=_TESTING_REPODATA)
+ monkeypatch.setattr(os.path, 'isfile', lambda dummy: True)
+ monkeypatch.setattr(api, 'produce', produce_mocked())
+ monkeypatch.setattr(repofileutils, 'parse_repofile', _mocked_parse_repofile)
+
+ scancustomrepofile.process(_PROCESS_STABLE_TARGET)
+ assert api.produce.called == len(_TESTING_CUSTOM_REPOS_STABLE_TARGET) + 1
+ for crepo in _TESTING_CUSTOM_REPOS_STABLE_TARGET:
+ assert crepo in api.produce.model_instances
+
+
+def test_target_beta_repos(monkeypatch):
+ def _mocked_parse_repofile(fpath):
+ return RepositoryFile(file=fpath, data=_TESTING_REPODATA)
+ monkeypatch.setattr(os.path, 'isfile', lambda dummy: True)
+ monkeypatch.setattr(api, 'produce', produce_mocked())
+ monkeypatch.setattr(repofileutils, 'parse_repofile', _mocked_parse_repofile)
+
+ scancustomrepofile.process(_PROCESS_BETA_TARGET)
+ assert api.produce.called == len(_TESTING_CUSTOM_REPOS_BETA_TARGET) + 1
+ for crepo in _TESTING_CUSTOM_REPOS_BETA_TARGET:
+ assert crepo in api.produce.model_instances
+
+
+def test_empty_repofile_exists(monkeypatch):
+ def _mocked_parse_repofile(fpath):
+ return RepositoryFile(file=fpath, data=[])
+ monkeypatch.setattr(os.path, 'isfile', lambda dummy: True)
+ monkeypatch.setattr(api, 'produce', produce_mocked())
+ monkeypatch.setattr(repofileutils, 'parse_repofile', _mocked_parse_repofile)
+ monkeypatch.setattr(api, 'current_logger', LoggerMocked())
+ scancustomrepofile.process()
+ msg = "The {} file exists, but is empty. Nothing to do.".format(scancustomrepofile.CUSTOM_REPO_PATH)
+ assert api.current_logger.infomsg == msg
+ assert not api.produce.called
diff --git a/repos/system_upgrade/common/actors/setetcreleasever/libraries/setetcreleasever.py b/repos/system_upgrade/common/actors/setetcreleasever/libraries/setetcreleasever.py
index 73d1ffd..046f3fb 100644
--- a/repos/system_upgrade/common/actors/setetcreleasever/libraries/setetcreleasever.py
+++ b/repos/system_upgrade/common/actors/setetcreleasever/libraries/setetcreleasever.py
@@ -1,5 +1,6 @@
from leapp.libraries.stdlib import api
from leapp.models import PkgManagerInfo, RHUIInfo
+from leapp.libraries.common.config.version import get_target_major_version
def _set_releasever(releasever):
@@ -10,7 +11,7 @@ def _set_releasever(releasever):
def process():
- target_version = api.current_actor().configuration.version.target
+ target_version = get_target_major_version()
pkg_facts = next(api.consume(PkgManagerInfo), None)
rhui_facts = next(api.consume(RHUIInfo), None)
diff --git a/repos/system_upgrade/common/actors/setuptargetrepos/actor.py b/repos/system_upgrade/common/actors/setuptargetrepos/actor.py
index 00de073..95cedcd 100644
--- a/repos/system_upgrade/common/actors/setuptargetrepos/actor.py
+++ b/repos/system_upgrade/common/actors/setuptargetrepos/actor.py
@@ -9,9 +9,11 @@ from leapp.models import (
RHUIInfo,
SkippedRepositories,
TargetRepositories,
- UsedRepositories
+ UsedRepositories,
+ VendorCustomTargetRepositoryList
)
from leapp.tags import FactsPhaseTag, IPUWorkflowTag
+from leapp.libraries.stdlib import api
class SetupTargetRepos(Actor):
@@ -30,7 +32,8 @@ class SetupTargetRepos(Actor):
RepositoriesFacts,
RepositoriesBlacklisted,
RHUIInfo,
- UsedRepositories)
+ UsedRepositories,
+ VendorCustomTargetRepositoryList)
produces = (TargetRepositories, SkippedRepositories)
tags = (IPUWorkflowTag, FactsPhaseTag)
diff --git a/repos/system_upgrade/common/actors/setuptargetrepos/libraries/setuptargetrepos.py b/repos/system_upgrade/common/actors/setuptargetrepos/libraries/setuptargetrepos.py
index 3f34aed..9c1e360 100644
--- a/repos/system_upgrade/common/actors/setuptargetrepos/libraries/setuptargetrepos.py
+++ b/repos/system_upgrade/common/actors/setuptargetrepos/libraries/setuptargetrepos.py
@@ -12,7 +12,8 @@ from leapp.models import (
RHUIInfo,
SkippedRepositories,
TargetRepositories,
- UsedRepositories
+ UsedRepositories,
+ VendorCustomTargetRepositoryList
)
@@ -58,10 +59,21 @@ def _get_used_repo_dict():
return used
-def _setup_repomap_handler(src_repoids):
- repo_mappig_msg = next(api.consume(RepositoriesMapping), RepositoriesMapping())
+def _setup_repomap_handler(src_repoids, mapping_list):
+ combined_mapping = []
+ combined_repositories = []
+ # Depending on whether there are any vendors present, we might get more than one message.
+ for msg in mapping_list:
+ combined_mapping.extend(msg.mapping)
+ combined_repositories.extend(msg.repositories)
+
+ combined_repomapping = RepositoriesMapping(
+ mapping=combined_mapping,
+ repositories=combined_repositories
+ )
+
rhui_info = next(api.consume(RHUIInfo), RHUIInfo(provider=''))
- repomap = setuptargetrepos_repomap.RepoMapDataHandler(repo_mappig_msg, cloud_provider=rhui_info.provider)
+ repomap = setuptargetrepos_repomap.RepoMapDataHandler(combined_repomapping, cloud_provider=rhui_info.provider)
# TODO(pstodulk): what about skip this completely and keep the default 'ga'..?
default_channels = setuptargetrepos_repomap.get_default_repository_channels(repomap, src_repoids)
repomap.set_default_channels(default_channels)
@@ -77,22 +89,75 @@ def _get_mapped_repoids(repomap, src_repoids):
return mapped_repoids
+def _get_vendor_custom_repos(enabled_repos, mapping_list):
+ # Check which of the vendor's source repos are enabled on the system.
+ # If any of the enabled repos are beta repos, include the vendor's beta
+ # target repos in the upgrade; otherwise filter the beta repos out.
+
+ result = []
+
+ # Build a dict of vendor mappings for easy lookup.
+ map_dict = {mapping.vendor: mapping for mapping in mapping_list if mapping.vendor}
+
+ for vendor_repolist in api.consume(VendorCustomTargetRepositoryList):
+ vendor_repomap = map_dict[vendor_repolist.vendor]
+
+ # Find the beta channel repositories for the vendor.
+ beta_repos = [
+ x.repoid for x in vendor_repomap.repositories if x.channel == "beta"
+ ]
+ api.current_logger().debug(
+ "Vendor {} beta repos: {}".format(vendor_repolist.vendor, beta_repos)
+ )
+
+ # Are any of the beta repos present and enabled on the system?
+ if any(rep in beta_repos for rep in enabled_repos):
+ # If so, use all repos including beta in the upgrade.
+ vendor_repos = vendor_repolist.repos
+ else:
+ # Otherwise filter beta repos out.
+ vendor_repos = [repo for repo in vendor_repolist.repos if repo.repoid not in beta_repos]
+
+ result.extend([CustomTargetRepository(
+ repoid=repo.repoid,
+ name=repo.name,
+ baseurl=repo.baseurl,
+ enabled=repo.enabled,
+ ) for repo in vendor_repos])
+
+ return result
+
+
def process():
# load all data / messages
used_repoids_dict = _get_used_repo_dict()
enabled_repoids = _get_enabled_repoids()
excluded_repoids = _get_blacklisted_repoids()
+
+ mapping_list = list(api.consume(RepositoriesMapping))
+
custom_repos = _get_custom_target_repos()
+ vendor_repos = _get_vendor_custom_repos(enabled_repoids, mapping_list)
+
+ api.current_logger().debug('Custom repos: {}'.format([f.repoid for f in custom_repos]))
+ api.current_logger().debug('Vendor repos: {}'.format([f.repoid for f in vendor_repos]))
+
+ custom_repos.extend(vendor_repos)
+
+ api.current_logger().debug('Used repos: {}'.format(used_repoids_dict.keys()))
+ api.current_logger().debug('Enabled repos: {}'.format(list(enabled_repoids)))
# TODO(pstodulk): isn't that a potential issue that we map just enabled repos
# instead of enabled + used repos??
# initialise basic data
- repomap = _setup_repomap_handler(enabled_repoids)
+ repomap = _setup_repomap_handler(enabled_repoids, mapping_list)
mapped_repoids = _get_mapped_repoids(repomap, enabled_repoids)
+ api.current_logger().debug('Mapped repos: {}'.format(mapped_repoids))
skipped_repoids = enabled_repoids & set(used_repoids_dict.keys()) - mapped_repoids
# Now get the info what should be the target RHEL repositories
expected_repos = repomap.get_expected_target_pesid_repos(enabled_repoids)
+ api.current_logger().debug('Expected repos: {}'.format(expected_repos.keys()))
target_rhel_repoids = set()
for target_pesid, target_pesidrepo in expected_repos.items():
if not target_pesidrepo:
diff --git a/repos/system_upgrade/common/actors/systemfacts/actor.py b/repos/system_upgrade/common/actors/systemfacts/actor.py
index 59b12c8..85d4a09 100644
--- a/repos/system_upgrade/common/actors/systemfacts/actor.py
+++ b/repos/system_upgrade/common/actors/systemfacts/actor.py
@@ -47,7 +47,7 @@ class SystemFactsActor(Actor):
GrubCfgBios,
Report
)
- tags = (IPUWorkflowTag, FactsPhaseTag,)
+ tags = (IPUWorkflowTag, FactsPhaseTag.Before,)
def process(self):
self.produce(systemfacts.get_sysctls_status())
diff --git a/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py b/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py
index 7a8bd99..d05aaf9 100644
--- a/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py
+++ b/repos/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py
@@ -225,6 +225,12 @@ def _prep_repository_access(context, target_userspace):
target_etc = os.path.join(target_userspace, 'etc')
target_yum_repos_d = os.path.join(target_etc, 'yum.repos.d')
backup_yum_repos_d = os.path.join(target_etc, 'yum.repos.d.backup')
+
+ # Copy RHN data independent from RHSM config
+ if os.path.isdir('/etc/sysconfig/rhn'):
+ run(['rm', '-rf', os.path.join(target_etc, 'sysconfig/rhn')])
+ context.copytree_from('/etc/sysconfig/rhn', os.path.join(target_etc, 'sysconfig/rhn'))
+
if not rhsm.skip_rhsm():
run(['rm', '-rf', os.path.join(target_etc, 'pki')])
run(['rm', '-rf', os.path.join(target_etc, 'rhsm')])
@@ -392,6 +398,11 @@ def _inhibit_on_duplicate_repos(repofiles):
def _get_all_available_repoids(context):
repofiles = repofileutils.get_parsed_repofiles(context)
+
+ api.current_logger().debug("All available repositories inside the overlay FS:")
+ for repof in repofiles:
+ api.current_logger().debug("File: {}, repos: {}".format(repof.file, [repod.repoid for repod in repof.data]))
+
# TODO: this is not good solution, but keep it as it is now
# Issue: #486
if rhsm.skip_rhsm():
@@ -592,6 +603,7 @@ def _install_custom_repofiles(context, custom_repofiles):
"""
for rfile in custom_repofiles:
_dst_path = os.path.join('/etc/yum.repos.d', os.path.basename(rfile.file))
+ api.current_logger().debug("Copying {} to {}".format(rfile.file, _dst_path))
context.copy_to(rfile.file, _dst_path)
diff --git a/repos/system_upgrade/common/actors/vendorreposignaturescanner/actor.py b/repos/system_upgrade/common/actors/vendorreposignaturescanner/actor.py
new file mode 100644
index 0000000..e28b880
--- /dev/null
+++ b/repos/system_upgrade/common/actors/vendorreposignaturescanner/actor.py
@@ -0,0 +1,72 @@
+import os
+
+from leapp.actors import Actor
+from leapp.models import VendorSignatures, ActiveVendorList
+from leapp.tags import FactsPhaseTag, IPUWorkflowTag
+
+
+VENDORS_DIR = "/etc/leapp/files/vendors.d/"
+SIGFILE_SUFFIX = ".sigs"
+
+
+class VendorRepoSignatureScanner(Actor):
+ """
+ Produce VendorSignatures messages for the vendor signature files inside the
+ <VENDORS_DIR>.
+ These messages are used to extend the list of packages Leapp will consider
+ signed and will attempt to upgrade.
+
+ The messages are produced only if a "from" vendor repository
+ listed inside its map matches one of the repositories active on the system.
+ """
+
+ name = 'vendor_repo_signature_scanner'
+ consumes = (ActiveVendorList,)
+ produces = (VendorSignatures,)
+ tags = (IPUWorkflowTag, FactsPhaseTag.Before)
+
+ def process(self):
+ if not os.path.isdir(VENDORS_DIR):
+ self.log.debug(
+ "The {} directory doesn't exist. Nothing to do.".format(VENDORS_DIR)
+ )
+ return
+
+ for sigfile_name in os.listdir(VENDORS_DIR):
+ if not sigfile_name.endswith(SIGFILE_SUFFIX):
+ continue
+ # Cut the suffix part to get only the name.
+ vendor_name = sigfile_name[:-5]
+
+ active_vendors = []
+ for vendor_list in self.consume(ActiveVendorList):
+ active_vendors.extend(vendor_list.data)
+
+ self.log.debug(
+ "Active vendor list: {}".format(active_vendors)
+ )
+
+ if vendor_name not in active_vendors:
+ self.log.debug(
+ "Vendor {} not in active list, skipping".format(vendor_name)
+ )
+ continue
+
+ self.log.debug(
+ "Vendor {} found in active list, processing file".format(vendor_name)
+ )
+
+ full_sigfile_path = os.path.join(VENDORS_DIR, sigfile_name)
+ with open(full_sigfile_path) as f:
+ signatures = [line for line in f.read().splitlines() if line]
+
+ self.produce(
+ VendorSignatures(
+ vendor=vendor_name,
+ sigs=signatures,
+ )
+ )
+
+ self.log.info(
+ "The {} directory exists, vendor signatures loaded.".format(VENDORS_DIR)
+ )
diff --git a/repos/system_upgrade/common/actors/vendorrepositoriesmapping/actor.py b/repos/system_upgrade/common/actors/vendorrepositoriesmapping/actor.py
new file mode 100644
index 0000000..1325647
--- /dev/null
+++ b/repos/system_upgrade/common/actors/vendorrepositoriesmapping/actor.py
@@ -0,0 +1,19 @@
+from leapp.actors import Actor
+# from leapp.libraries.common.repomaputils import scan_vendor_repomaps, VENDOR_REPOMAP_DIR
+from leapp.libraries.actor.vendorrepositoriesmapping import scan_vendor_repomaps
+from leapp.models import VendorSourceRepos, RepositoriesMapping
+from leapp.tags import FactsPhaseTag, IPUWorkflowTag
+
+
+class VendorRepositoriesMapping(Actor):
+ """
+ Scan the vendor repository mapping files and provide the data to other actors.
+ """
+
+ name = "vendor_repositories_mapping"
+ consumes = ()
+ produces = (RepositoriesMapping, VendorSourceRepos,)
+ tags = (IPUWorkflowTag, FactsPhaseTag.Before)
+
+ def process(self):
+ scan_vendor_repomaps()
diff --git a/repos/system_upgrade/common/actors/vendorrepositoriesmapping/libraries/vendorrepositoriesmapping.py b/repos/system_upgrade/common/actors/vendorrepositoriesmapping/libraries/vendorrepositoriesmapping.py
new file mode 100644
index 0000000..32ccf58
--- /dev/null
+++ b/repos/system_upgrade/common/actors/vendorrepositoriesmapping/libraries/vendorrepositoriesmapping.py
@@ -0,0 +1,72 @@
+import os
+
+from leapp.libraries.common.config.version import get_target_major_version, get_source_major_version
+from leapp.libraries.common.repomaputils import RepoMapData, read_repofile, inhibit_upgrade
+from leapp.libraries.stdlib import api
+from leapp.models import VendorSourceRepos, RepositoriesMapping
+from leapp.models.fields import ModelViolationError
+
+
+VENDORS_DIR = "/etc/leapp/files/vendors.d"
+"""The folder containing the vendor repository mapping files."""
+
+
+def read_repomap_file(repomap_file, read_repofile_func, vendor_name):
+ json_data = read_repofile_func(repomap_file, VENDORS_DIR)
+ source_major = get_source_major_version()
+ target_major = get_target_major_version()
+
+ try:
+ repomap_data = RepoMapData.load_from_dict(json_data)
+
+ api.produce(VendorSourceRepos(
+ vendor=vendor_name,
+ source_repoids=repomap_data.get_version_repoids(source_major)
+ ))
+
+ mapping = repomap_data.get_mappings(source_major, target_major)
+ valid_major_versions = [source_major, target_major]
+
+ api.produce(RepositoriesMapping(
+ mapping=mapping,
+ repositories=repomap_data.get_repositories(valid_major_versions),
+ vendor=vendor_name
+ ))
+ except ModelViolationError as err:
+ err_message = (
+ 'The repository mapping file is invalid: '
+ 'the JSON does not match required schema (wrong field type/value): {}. '
+ 'Ensure that the current upgrade path is correct and is present in the mappings: {} -> {}'
+ .format(err, source_major, target_major)
+ )
+ inhibit_upgrade(err_message)
+ except KeyError as err:
+ inhibit_upgrade(
+ 'The repository mapping file is invalid: the JSON is missing a required field: {}'.format(err))
+ except ValueError as err:
+ # The error should contain enough information, so we do not need to clarify it further
+ inhibit_upgrade('The repository mapping file is invalid: {}'.format(err))
+
+
+def scan_vendor_repomaps(read_repofile_func=read_repofile):
+ """
+ Scan the repository mapping file and produce RepositoriesMapping msg.
+
+ See the description of the actor for more details.
+ """
+
+ map_json_suffix = "_map.json"
+ if os.path.isdir(VENDORS_DIR):
+ vendor_mapfiles = list(filter(lambda vfile: map_json_suffix in vfile, os.listdir(VENDORS_DIR)))
+
+ for mapfile in vendor_mapfiles:
+ read_repomap_file(mapfile, read_repofile_func, mapfile[:-len(map_json_suffix)])
+ else:
+ api.current_logger().debug(
+ "The {} directory doesn't exist. Nothing to do.".format(VENDORS_DIR)
+ )
+ # vendor_repomap_collection = scan_vendor_repomaps(VENDOR_REPOMAP_DIR)
+ # if vendor_repomap_collection:
+ # self.produce(vendor_repomap_collection)
+ # for repomap in vendor_repomap_collection.maps:
+ # self.produce(repomap)
diff --git a/repos/system_upgrade/common/files/rhel_upgrade.py b/repos/system_upgrade/common/files/rhel_upgrade.py
index 62221a7..f5b4c70 100644
--- a/repos/system_upgrade/common/files/rhel_upgrade.py
+++ b/repos/system_upgrade/common/files/rhel_upgrade.py
@@ -171,6 +171,7 @@ class RhelUpgradeCommand(dnf.cli.Command):
to_install = self.plugin_data['pkgs_info']['to_install']
to_remove = self.plugin_data['pkgs_info']['to_remove']
to_upgrade = self.plugin_data['pkgs_info']['to_upgrade']
+ to_reinstall = self.plugin_data['pkgs_info']['to_reinstall']
# Modules to enable
self._process_entities(entities=[modules_to_enable], op=module_base.enable, entity_name='Module stream')
@@ -181,6 +182,8 @@ class RhelUpgradeCommand(dnf.cli.Command):
self._process_entities(entities=to_install, op=self.base.install, entity_name='Package')
# Packages to be upgraded
self._process_entities(entities=to_upgrade, op=self.base.upgrade, entity_name='Package')
+ # Packages to be reinstalled
+ self._process_entities(entities=to_reinstall, op=self.base.reinstall, entity_name='Package')
self.base.distro_sync()
diff --git a/repos/system_upgrade/common/libraries/config/version.py b/repos/system_upgrade/common/libraries/config/version.py
index 03f3cd4..e460729 100644
--- a/repos/system_upgrade/common/libraries/config/version.py
+++ b/repos/system_upgrade/common/libraries/config/version.py
@@ -13,8 +13,8 @@ OP_MAP = {
_SUPPORTED_VERSIONS = {
# Note: 'rhel-alt' is detected when on 'rhel' with kernel 4.x
- '7': {'rhel': ['7.9'], 'rhel-alt': ['7.6'], 'rhel-saphana': ['7.9']},
- '8': {'rhel': ['8.5', '8.6']},
+ '7': {'rhel': ['7.9'], 'rhel-alt': ['7.6'], 'rhel-saphana': ['7.9'], 'centos': ['7.9'], 'eurolinux': ['7.9'], 'ol': ['7.9'], 'cloudlinux': ['7.9'], 'scientific': ['7.9']},
+ '8': {'rhel': ['8.5', '8.6', '8.7', '8.8'], 'centos': ['8.5'], 'almalinux': ['8.6', '8.7', '8.8'], 'eurolinux': ['8.6', '8.7', '8.8'], 'ol': ['8.6', '8.7', '8.8'], 'rocky': ['8.6', '8.7', '8.8']},
}
diff --git a/repos/system_upgrade/common/libraries/dnfconfig.py b/repos/system_upgrade/common/libraries/dnfconfig.py
index 2125f6d..f1dbf70 100644
--- a/repos/system_upgrade/common/libraries/dnfconfig.py
+++ b/repos/system_upgrade/common/libraries/dnfconfig.py
@@ -114,3 +114,30 @@ def exclude_leapp_rpms(context):
"""
to_exclude = list(set(_get_excluded_pkgs(context) + get_leapp_packages()))
_set_excluded_pkgs(context, to_exclude)
+
+
+def enable_repository(context, reponame):
+ _set_repository_state(context, reponame, "enabled")
+
+
+def disable_repository(context, reponame):
+ _set_repository_state(context, reponame, "disabled")
+
+
+def _set_repository_state(context, repo_id, new_state):
+ """
+ Set the Yum repository with the provided ID as enabled or disabled.
+ """
+ if new_state == "enabled":
+ cmd_flag = '--set-enabled'
+ elif new_state == "disabled":
+ cmd_flag = '--set-disabled'
+
+ cmd = ['dnf', 'config-manager', cmd_flag, repo_id]
+
+ try:
+ context.call(cmd)
+ except CalledProcessError:
+ api.current_logger().error('Cannot set the dnf configuration')
+ raise
+ api.current_logger().debug('Repository {} has been {}'.format(repo_id, new_state))
diff --git a/repos/system_upgrade/common/libraries/dnfplugin.py b/repos/system_upgrade/common/libraries/dnfplugin.py
index 4010e9f..f095575 100644
--- a/repos/system_upgrade/common/libraries/dnfplugin.py
+++ b/repos/system_upgrade/common/libraries/dnfplugin.py
@@ -4,6 +4,8 @@ import json
import os
import shutil
+import six
+
from leapp.exceptions import StopActorExecutionError
from leapp.libraries.common import dnfconfig, guards, mounting, overlaygen, rhsm, utils
from leapp.libraries.common.config.version import get_target_major_version, get_target_version
@@ -85,6 +87,7 @@ def build_plugin_data(target_repoids, debug, test, tasks, on_aws):
'to_install': tasks.to_install,
'to_remove': tasks.to_remove,
'to_upgrade': tasks.to_upgrade,
+ 'to_reinstall': tasks.to_reinstall,
'modules_to_enable': ['{}:{}'.format(m.name, m.stream) for m in tasks.modules_to_enable],
},
'dnf_conf': {
@@ -213,10 +216,17 @@ def _transaction(context, stage, target_repoids, tasks, plugin_info, test=False,
message='Failed to execute dnf. Reason: {}'.format(str(e))
)
except CalledProcessError as e:
+ err_stdout = e.stdout
+ err_stderr = e.stderr
+ if six.PY2:
+ err_stdout = e.stdout.encode('utf-8', 'xmlcharrefreplace')
+ err_stderr = e.stderr.encode('utf-8', 'xmlcharrefreplace')
+
api.current_logger().error('DNF execution failed: ')
raise StopActorExecutionError(
message='DNF execution failed with non zero exit code.\nSTDOUT:\n{stdout}\nSTDERR:\n{stderr}'.format(
- stdout=e.stdout, stderr=e.stderr)
+ stdout=err_stdout, stderr=err_stderr
+ )
)
finally:
if stage == 'check':
@@ -241,7 +251,14 @@ def apply_workarounds(context=None):
for workaround in api.consume(DNFWorkaround):
try:
api.show_message('Applying transaction workaround - {}'.format(workaround.display_name))
- context.call(['/bin/bash', '-c', workaround.script_path])
+ if workaround.script_args:
+ cmd_str = '{script} {args}'.format(
+ script=workaround.script_path,
+ args=' '.join(workaround.script_args)
+ )
+ else:
+ cmd_str = workaround.script_path
+ context.call(['/bin/bash', '-c', cmd_str])
except (OSError, CalledProcessError) as e:
raise StopActorExecutionError(
message=('Failed to exceute script to apply transaction workaround {display_name}.'
diff --git a/repos/system_upgrade/common/libraries/fetch.py b/repos/system_upgrade/common/libraries/fetch.py
index 1c58148..37313b6 100644
--- a/repos/system_upgrade/common/libraries/fetch.py
+++ b/repos/system_upgrade/common/libraries/fetch.py
@@ -73,7 +73,7 @@ def read_or_fetch(filename, directory="/etc/leapp/files", service=None, allow_em
data = f.read()
if not allow_empty and not data:
_raise_error(local_path, "File {lp} exists but is empty".format(lp=local_path))
- logger.warning("File {lp} successfully read ({l} bytes)".format(lp=local_path, l=len(data)))
+ logger.debug("File {lp} successfully read ({l} bytes)".format(lp=local_path, l=len(data)))
return data
except EnvironmentError:
_raise_error(local_path, "File {lp} exists but couldn't be read".format(lp=local_path))
diff --git a/repos/system_upgrade/common/libraries/overlaygen.py b/repos/system_upgrade/common/libraries/overlaygen.py
index 6a2f5aa..51030fd 100644
--- a/repos/system_upgrade/common/libraries/overlaygen.py
+++ b/repos/system_upgrade/common/libraries/overlaygen.py
@@ -14,11 +14,15 @@ OVERLAY_DO_NOT_MOUNT = ('tmpfs', 'devpts', 'sysfs', 'proc', 'cramfs', 'sysv', 'v
MountPoints = namedtuple('MountPoints', ['fs_file', 'fs_vfstype'])
-def _ensure_enough_diskimage_space(space_needed, directory):
+def _ensure_enough_diskimage_space(space_needed, directory, xfs_mountpoint_count):
stat = os.statvfs(directory)
if (stat.f_frsize * stat.f_bavail) < (space_needed * 1024 * 1024):
message = ('Not enough space available for creating required disk images in {directory}. ' +
'Needed: {space_needed} MiB').format(space_needed=space_needed, directory=directory)
+ # An arbitrary cutoff, but "how many XFS mountpoints is too much" is subjective.
+ if xfs_mountpoint_count > 10:
+ message += (". Hint: there are {} XFS mountpoints with ftype=0 on the system. Space "
+ "required is calculated according to that amount".format(xfs_mountpoint_count))
api.current_logger().error(message)
raise StopActorExecutionError(message)
@@ -53,13 +57,14 @@ def _prepare_required_mounts(scratch_dir, mounts_dir, mount_points, xfs_info):
if not xfs_info.mountpoints_without_ftype:
return result
- space_needed = _overlay_disk_size() * len(xfs_info.mountpoints_without_ftype)
+ xfs_noftype_mounts = len(xfs_info.mountpoints_without_ftype)
+ space_needed = _overlay_disk_size() * xfs_noftype_mounts
disk_images_directory = os.path.join(scratch_dir, 'diskimages')
# Ensure we cleanup old disk images before we check for space contraints.
run(['rm', '-rf', disk_images_directory])
_create_diskimages_dir(scratch_dir, disk_images_directory)
- _ensure_enough_diskimage_space(space_needed, scratch_dir)
+ _ensure_enough_diskimage_space(space_needed, scratch_dir, xfs_noftype_mounts)
mount_names = [mount_point.fs_file for mount_point in mount_points]
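The space check above multiplies the per-image overlay size by the number of XFS mountpoints without ftype and compares it against `statvfs` free space. A self-contained sketch of that comparison (directory and sizes here are stand-ins):

```python
# Sketch of the free-space comparison in _ensure_enough_diskimage_space:
# statvfs free bytes vs. the MiB needed for one disk image per
# XFS mountpoint with ftype=0.
import os

def enough_space(directory, space_needed_mib):
    stat = os.statvfs(directory)
    free_bytes = stat.f_frsize * stat.f_bavail
    return free_bytes >= space_needed_mib * 1024 * 1024

# e.g. 3 mountpoints without ftype, 2048 MiB per overlay disk image
print(enough_space('/tmp', 3 * 2048))
```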
diff --git a/repos/system_upgrade/common/libraries/repofileutils.py b/repos/system_upgrade/common/libraries/repofileutils.py
index a3f111b..26e4d3e 100644
--- a/repos/system_upgrade/common/libraries/repofileutils.py
+++ b/repos/system_upgrade/common/libraries/repofileutils.py
@@ -26,6 +26,18 @@ def _parse_repository(repoid, repo_data):
return RepositoryData(**prepared)
+def _prepare_config(repodata, config_parser):
+ for repo in repodata.data:
+ config_parser.add_section(repo.repoid)
+
+        repo_enabled = '1' if repo.enabled else '0'
+ config_parser.set(repo.repoid, 'name', repo.name)
+ config_parser.set(repo.repoid, 'baseurl', repo.baseurl)
+ config_parser.set(repo.repoid, 'metalink', repo.metalink)
+ config_parser.set(repo.repoid, 'mirrorlist', repo.mirrorlist)
+ config_parser.set(repo.repoid, 'enabled', repo_enabled)
+
+
def parse_repofile(repofile):
"""
Parse the given repo file.
@@ -42,6 +54,21 @@ def parse_repofile(repofile):
return RepositoryFile(file=repofile, data=data)
+def save_repofile(repodata, repofile_path):
+ """
+ Save the given repository data to file.
+
+ :param repodata: Repository data to save
+ :type repodata: RepositoryFile
+ :param repofile_path: Path to the repo file
+ :type repofile_path: str
+ """
+ with open(repofile_path, mode='w') as fp:
+ cp = utils.create_parser()
+ _prepare_config(repodata, cp)
+ cp.write(fp)
+
+
def get_repodirs():
"""
Return all directories yum scans for repository files, if they exist.
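The `save_repofile` addition above funnels repository data through a ConfigParser and writes it out. The same round-trip can be sketched with only the stdlib (the `RepoData` tuple is a stand-in for leapp's `RepositoryData` model):

```python
# Sketch of the save_repofile flow using plain stdlib configparser.
# RepoData is an illustrative stand-in, not leapp's model class.
import io
from collections import namedtuple

try:
    import configparser  # Python 3
except ImportError:
    import ConfigParser as configparser  # Python 2

RepoData = namedtuple('RepoData', 'repoid name baseurl enabled')

def write_repos(repos, fp):
    cp = configparser.ConfigParser()
    for repo in repos:
        cp.add_section(repo.repoid)
        cp.set(repo.repoid, 'name', repo.name)
        cp.set(repo.repoid, 'baseurl', repo.baseurl)
        # configparser option values must be strings on Python 3,
        # so mirror dnf's 1/0 convention as text
        cp.set(repo.repoid, 'enabled', '1' if repo.enabled else '0')
    cp.write(fp)

buf = io.StringIO()
write_repos([RepoData('baseos', 'BaseOS', 'http://example.com/baseos', True)], buf)
print(buf.getvalue())
```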
diff --git a/repos/system_upgrade/common/libraries/repomaputils.py b/repos/system_upgrade/common/libraries/repomaputils.py
new file mode 100644
index 0000000..5c41620
--- /dev/null
+++ b/repos/system_upgrade/common/libraries/repomaputils.py
@@ -0,0 +1,147 @@
+import json
+from collections import defaultdict
+
+from leapp.exceptions import StopActorExecutionError
+from leapp.libraries.common.fetch import read_or_fetch
+from leapp.models import PESIDRepositoryEntry, RepoMapEntry
+
+
+def inhibit_upgrade(msg):
+ raise StopActorExecutionError(
+ msg,
+ details={'hint': ('Read documentation at the following link for more'
+ ' information about how to retrieve the valid file:'
+ ' https://access.redhat.com/articles/3664871')})
+
+
+def read_repofile(repofile, directory="/etc/leapp/files"):
+    # NOTE: what about catching the StopActorExecution error when the file cannot
+    # be obtained -> then check whether the old_repomap file exists and in such a case
+    # inform the user they have to provide the new repomap.json file (we have the
+ # warning now only which could be potentially overlooked)
+ try:
+ return json.loads(read_or_fetch(repofile, directory))
+ except ValueError:
+ # The data does not contain a valid json
+ inhibit_upgrade('The repository mapping file is invalid: file does not contain a valid JSON object.')
+ return None # Avoids inconsistent-return-statements warning
+
+
+class RepoMapData(object):
+ VERSION_FORMAT = '1.0.0'
+
+ def __init__(self):
+ self.repositories = []
+ self.mapping = {}
+
+ def add_repository(self, data, pesid):
+ """
+ Add new PESIDRepositoryEntry with given pesid from the provided dictionary.
+
+ :param data: A dict containing the data of the added repository. The dictionary structure corresponds
+ to the repositories entries in the repository mapping JSON schema.
+ :type data: Dict[str, str]
+ :param pesid: PES id of the repository family that the newly added repository belongs to.
+ :type pesid: str
+ """
+ self.repositories.append(PESIDRepositoryEntry(
+ repoid=data['repoid'],
+ channel=data['channel'],
+ rhui=data.get('rhui', ''),
+ repo_type=data['repo_type'],
+ arch=data['arch'],
+ major_version=data['major_version'],
+ pesid=pesid
+ ))
+
+ def get_repositories(self, valid_major_versions):
+ """
+ Return the list of PESIDRepositoryEntry object matching the specified major versions.
+ """
+ return [repo for repo in self.repositories if repo.major_version in valid_major_versions]
+
+ def get_version_repoids(self, major_version):
+ """
+ Return the list of repository ID strings for repositories matching the specified major version.
+ """
+ return [repo.repoid for repo in self.repositories if repo.major_version == major_version]
+
+ def add_mapping(self, source_major_version, target_major_version, source_pesid, target_pesid):
+ """
+ Add a new mapping entry that is mapping the source pesid to the destination pesid(s),
+ relevant in an IPU from the supplied source major version to the supplied target
+ major version.
+
+ :param str source_major_version: Specifies the major version of the source system
+ for which the added mapping applies.
+ :param str target_major_version: Specifies the major version of the target system
+ for which the added mapping applies.
+ :param str source_pesid: PESID of the source repository.
+ :param Union[str|List[str]] target_pesid: A single target PESID or a list of target
+ PESIDs of the added mapping.
+ """
+        # NOTE: it could be simpler, but I prefer to be sure the input data
+ # contains just one map per source PESID.
+ key = '{}:{}'.format(source_major_version, target_major_version)
+ rmap = self.mapping.get(key, defaultdict(set))
+ self.mapping[key] = rmap
+ if isinstance(target_pesid, list):
+ rmap[source_pesid].update(target_pesid)
+ else:
+ rmap[source_pesid].add(target_pesid)
+
+ def get_mappings(self, src_major_version, dst_major_version):
+ """
+ Return the list of RepoMapEntry objects for the specified upgrade path.
+
+ IOW, the whole mapping for specified IPU.
+ """
+ key = '{}:{}'.format(src_major_version, dst_major_version)
+ rmap = self.mapping.get(key, None)
+ if not rmap:
+ return None
+ map_list = []
+ for src_pesid in sorted(rmap.keys()):
+ map_list.append(RepoMapEntry(source=src_pesid, target=sorted(rmap[src_pesid])))
+ return map_list
+
+ @staticmethod
+ def load_from_dict(data):
+ if data['version_format'] != RepoMapData.VERSION_FORMAT:
+ raise ValueError(
+                'The obtained repomap data has an unsupported format version.'
+                ' Got {}, required {}'
+ .format(data['version_format'], RepoMapData.VERSION_FORMAT)
+ )
+
+ repomap = RepoMapData()
+
+        # Load repositories
+ existing_pesids = set()
+ for repo_family in data['repositories']:
+ existing_pesids.add(repo_family['pesid'])
+ for repo in repo_family['entries']:
+ repomap.add_repository(repo, repo_family['pesid'])
+
+ # Load mappings
+ for mapping in data['mapping']:
+ for entry in mapping['entries']:
+ if not isinstance(entry['target'], list):
+ raise ValueError(
+ 'The target field of a mapping entry is not a list: {}'
+ .format(entry)
+ )
+
+ for pesid in [entry['source']] + entry['target']:
+ if pesid not in existing_pesids:
+ raise ValueError(
+ 'The {} pesid is not related to any repository.'
+ .format(pesid)
+ )
+ repomap.add_mapping(
+ source_major_version=mapping['source_major_version'],
+ target_major_version=mapping['target_major_version'],
+ source_pesid=entry['source'],
+ target_pesid=entry['target'],
+ )
+ return repomap
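`RepoMapData` keys its mapping by `source:target` major versions and collapses each source PESID's targets into a set. That behaviour can be exercised standalone; the class below is a simplified stand-in (no leapp models), with invented PESIDs:

```python
# Simplified stand-in for RepoMapData.add_mapping/get_mappings:
# mapping entries are keyed by 'src:dst' major versions and each
# source PESID maps to a set of target PESIDs.
from collections import defaultdict

class MiniRepoMap(object):
    def __init__(self):
        self.mapping = {}

    def add_mapping(self, src, dst, source_pesid, target_pesid):
        key = '{}:{}'.format(src, dst)
        rmap = self.mapping.setdefault(key, defaultdict(set))
        if isinstance(target_pesid, list):
            rmap[source_pesid].update(target_pesid)
        else:
            rmap[source_pesid].add(target_pesid)

    def get_mappings(self, src, dst):
        rmap = self.mapping.get('{}:{}'.format(src, dst))
        if not rmap:
            return None
        return [(pesid, sorted(rmap[pesid])) for pesid in sorted(rmap)]

m = MiniRepoMap()
m.add_mapping('7', '8', 'rhel7-base', ['rhel8-baseos', 'rhel8-appstream'])
m.add_mapping('7', '8', 'rhel7-base', 'rhel8-baseos')  # duplicate collapses
print(m.get_mappings('7', '8'))
```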
diff --git a/repos/system_upgrade/common/libraries/rhsm.py b/repos/system_upgrade/common/libraries/rhsm.py
index b7e4b21..dc038bf 100644
--- a/repos/system_upgrade/common/libraries/rhsm.py
+++ b/repos/system_upgrade/common/libraries/rhsm.py
@@ -92,7 +92,7 @@ def _handle_rhsm_exceptions(hint=None):
def skip_rhsm():
"""Check whether we should skip RHSM related code."""
- return get_env('LEAPP_NO_RHSM', '0') == '1'
+ return True
def with_rhsm(f):
diff --git a/repos/system_upgrade/common/libraries/rpms.py b/repos/system_upgrade/common/libraries/rpms.py
index 86767c7..18fd63b 100644
--- a/repos/system_upgrade/common/libraries/rpms.py
+++ b/repos/system_upgrade/common/libraries/rpms.py
@@ -21,7 +21,10 @@ def get_installed_rpms():
def create_lookup(model, field, keys, context=stdlib.api):
"""
- Create a lookup set from one of the model fields.
+ Create a lookup list from one of the model fields.
+    Returns a list of key tuples instead of a set, so the data can
+    be consumed later in a structured manner.
+    See package_data_for for an example.
:param model: model class
:param field: model field, its value will be taken for lookup data
@@ -30,27 +33,57 @@ def create_lookup(model, field, keys, context=stdlib.api):
"""
data = getattr(next((m for m in context.consume(model)), model()), field)
try:
- return {tuple(getattr(obj, key) for key in keys) for obj in data} if data else set()
+ return [tuple(getattr(obj, key) for key in keys) for obj in data] if data else list()
except TypeError:
# data is not iterable, not lookup can be built
stdlib.api.current_logger().error(
"{model}.{field}.{keys} is not iterable, can't build lookup".format(
model=model, field=field, keys=keys))
- return set()
+ return list()
-def has_package(model, package_name, arch=None, context=stdlib.api):
+def has_package(model, package_name, arch=None, version=None, release=None, context=stdlib.api):
"""
Expects a model InstalledRedHatSignedRPM or InstalledUnsignedRPM.
Can be useful in cases like a quick item presence check, ex. check in actor that
- a certain package is installed.
-
+    a certain package is installed. Returns a boolean.
:param model: model class
:param package_name: package to be checked
:param arch: filter by architecture. None means all arches.
+ :param version: filter by version. None means all versions.
+ :param release: filter by release. None means all releases.
"""
if not (isinstance(model, type) and issubclass(model, InstalledRPM)):
return False
- keys = ('name',) if not arch else ('name', 'arch')
+ keys = ['name']
+ if arch:
+ keys.append('arch')
+ if version:
+ keys.append('version')
+ if release:
+ keys.append('release')
+ attributes = [package_name]
+ attributes += [attr for attr in (arch, version, release) if attr is not None]
rpm_lookup = create_lookup(model, field='items', keys=keys, context=context)
- return (package_name, arch) in rpm_lookup if arch else (package_name,) in rpm_lookup
+ return tuple(attributes) in rpm_lookup
+
+
+def package_data_for(model, package_name, context=stdlib.api):
+ """
+ Expects a model InstalledRedHatSignedRPM or InstalledUnsignedRPM.
+    Useful when we want to know that a package is installed
+    and then act on its data.
+    Returns a dict with the name, arch, version and release
+    of the given RPM, if it is installed.
+
+    :param model: model class
+    :param package_name: package to be checked
+    :param context: message context to read the installed RPM data from
+ """
+ if not (isinstance(model, type) and issubclass(model, InstalledRPM)):
+ return list()
+
+ lookup_keys = ['name', 'arch', 'version', 'release']
+    for (name, arch, version, release) in create_lookup(model, field='items', keys=lookup_keys, context=context):
+        if package_name == name:
+            return {'name': name, 'arch': arch, 'version': version, 'release': release}
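The extended `has_package` builds its lookup key from whichever filters are supplied, so a partial key like `(name, version)` matches regardless of arch and release. A sketch of that filter-to-key pairing over plain tuples (the installed-RPM data below is invented):

```python
# Sketch of the optional-filter matching used by the patched has_package.
# The installed tuples are illustrative data, not real RPM output.
def rpm_installed(installed, name, arch=None, version=None, release=None):
    """installed: iterable of (name, arch, version, release) tuples."""
    wanted = [('name', name)]
    if arch:
        wanted.append(('arch', arch))
    if version:
        wanted.append(('version', version))
    if release:
        wanted.append(('release', release))
    fields = ('name', 'arch', 'version', 'release')
    for pkg in installed:
        record = dict(zip(fields, pkg))
        if all(record[key] == value for key, value in wanted):
            return True
    return False

rpms = [('bash', 'x86_64', '4.2.46', '34.el7'),
        ('openssh-server', 'x86_64', '7.4p1', '22.el7')]
print(rpm_installed(rpms, 'bash'))                  # → True
print(rpm_installed(rpms, 'bash', version='5.0'))   # → False
```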
diff --git a/repos/system_upgrade/common/libraries/utils.py b/repos/system_upgrade/common/libraries/utils.py
index 6793de6..d201677 100644
--- a/repos/system_upgrade/common/libraries/utils.py
+++ b/repos/system_upgrade/common/libraries/utils.py
@@ -43,6 +43,14 @@ def parse_config(cfg=None, strict=True):
return parser
+def create_parser(strict=True):
+ if six.PY3:
+ parser = six.moves.configparser.ConfigParser(strict=strict) # pylint: disable=unexpected-keyword-arg
+ else:
+ parser = six.moves.configparser.ConfigParser()
+ return parser
+
+
def makedirs(path, mode=0o777, exists_ok=True):
mounting._makedirs(path=path, mode=mode, exists_ok=exists_ok)
diff --git a/repos/system_upgrade/common/models/activevendorlist.py b/repos/system_upgrade/common/models/activevendorlist.py
new file mode 100644
index 0000000..de4056f
--- /dev/null
+++ b/repos/system_upgrade/common/models/activevendorlist.py
@@ -0,0 +1,7 @@
+from leapp.models import Model, fields
+from leapp.topics import VendorTopic
+
+
+class ActiveVendorList(Model):
+ topic = VendorTopic
+ data = fields.List(fields.String())
diff --git a/repos/system_upgrade/common/models/dnfworkaround.py b/repos/system_upgrade/common/models/dnfworkaround.py
index c921c5f..4a813dc 100644
--- a/repos/system_upgrade/common/models/dnfworkaround.py
+++ b/repos/system_upgrade/common/models/dnfworkaround.py
@@ -15,6 +15,20 @@ class DNFWorkaround(Model):
topic = SystemInfoTopic
script_path = fields.String()
- """ Absolute path to a bash script to execute """
+ """
+ Absolute path to a bash script to execute
+ """
+
+ script_args = fields.List(fields.String(), default=[])
+ """
+ Arguments with which the script should be executed
+
+ In case that an argument contains a whitespace or an escapable character,
+    the argument must already be escaped correctly, e.g.
+    `script_args = ['-i', 'my\\ string']`
+ """
+
display_name = fields.String()
- """ Name to display for this script when executed """
+ """
+ Name to display for this script when executed
+ """
diff --git a/repos/system_upgrade/common/models/installedrpm.py b/repos/system_upgrade/common/models/installedrpm.py
index 28b0aba..e53ab93 100644
--- a/repos/system_upgrade/common/models/installedrpm.py
+++ b/repos/system_upgrade/common/models/installedrpm.py
@@ -27,3 +27,8 @@ class InstalledRedHatSignedRPM(InstalledRPM):
class InstalledUnsignedRPM(InstalledRPM):
pass
+
+
+class PreRemovedRpmPackages(InstalledRPM):
+ # Do we want to install the package again when upgrading?
+ install = fields.Boolean(default=True)
diff --git a/repos/system_upgrade/common/models/repositoriesmap.py b/repos/system_upgrade/common/models/repositoriesmap.py
index c187333..ea6a75d 100644
--- a/repos/system_upgrade/common/models/repositoriesmap.py
+++ b/repos/system_upgrade/common/models/repositoriesmap.py
@@ -92,3 +92,4 @@ class RepositoriesMapping(Model):
mapping = fields.List(fields.Model(RepoMapEntry), default=[])
repositories = fields.List(fields.Model(PESIDRepositoryEntry), default=[])
+ vendor = fields.Nullable(fields.String())
diff --git a/repos/system_upgrade/common/models/rpmtransactiontasks.py b/repos/system_upgrade/common/models/rpmtransactiontasks.py
index 7e2870d..05d4e94 100644
--- a/repos/system_upgrade/common/models/rpmtransactiontasks.py
+++ b/repos/system_upgrade/common/models/rpmtransactiontasks.py
@@ -10,6 +10,7 @@ class RpmTransactionTasks(Model):
to_keep = fields.List(fields.String(), default=[])
to_remove = fields.List(fields.String(), default=[])
to_upgrade = fields.List(fields.String(), default=[])
+ to_reinstall = fields.List(fields.String(), default=[])
modules_to_enable = fields.List(fields.Model(Module), default=[])
modules_to_reset = fields.List(fields.Model(Module), default=[])
diff --git a/repos/system_upgrade/common/models/targetrepositories.py b/repos/system_upgrade/common/models/targetrepositories.py
index 3604772..33f5dc8 100644
--- a/repos/system_upgrade/common/models/targetrepositories.py
+++ b/repos/system_upgrade/common/models/targetrepositories.py
@@ -21,6 +21,12 @@ class CustomTargetRepository(TargetRepositoryBase):
enabled = fields.Boolean(default=True)
+class VendorCustomTargetRepositoryList(Model):
+ topic = TransactionTopic
+ vendor = fields.String()
+ repos = fields.List(fields.Model(CustomTargetRepository))
+
+
class TargetRepositories(Model):
topic = TransactionTopic
rhel_repos = fields.List(fields.Model(RHELTargetRepository))
diff --git a/repos/system_upgrade/common/models/vendorsignatures.py b/repos/system_upgrade/common/models/vendorsignatures.py
new file mode 100644
index 0000000..f456aec
--- /dev/null
+++ b/repos/system_upgrade/common/models/vendorsignatures.py
@@ -0,0 +1,8 @@
+from leapp.models import Model, fields
+from leapp.topics import VendorTopic
+
+
+class VendorSignatures(Model):
+ topic = VendorTopic
+ vendor = fields.String()
+ sigs = fields.List(fields.String())
diff --git a/repos/system_upgrade/common/models/vendorsourcerepos.py b/repos/system_upgrade/common/models/vendorsourcerepos.py
new file mode 100644
index 0000000..b7a219b
--- /dev/null
+++ b/repos/system_upgrade/common/models/vendorsourcerepos.py
@@ -0,0 +1,12 @@
+from leapp.models import Model, fields
+from leapp.topics import VendorTopic
+
+
+class VendorSourceRepos(Model):
+ """
+ This model contains the data on all source repositories associated with a specific vendor.
+ Its data is used to determine whether the vendor should be included into the upgrade process.
+ """
+ topic = VendorTopic
+ vendor = fields.String()
+ source_repoids = fields.List(fields.String())
diff --git a/repos/system_upgrade/common/tools/removerpmgpgkeys b/repos/system_upgrade/common/tools/removerpmgpgkeys
new file mode 100755
index 0000000..afe1906
--- /dev/null
+++ b/repos/system_upgrade/common/tools/removerpmgpgkeys
@@ -0,0 +1,13 @@
+#!/usr/bin/sh
+
+exit_code=0
+
+for key in "$@"; do
+ echo >&2 "Info: Removing RPM GPG key: $key"
+ rpm --erase "$key" || {
+ exit_code=1
+ echo >&2 "Error: Failed to remove RPM GPG key: $key"
+ }
+done
+
+exit $exit_code
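The `removerpmgpgkeys` tool above keeps going after a failed removal and aggregates failures into a single exit code. The same pattern can be exercised with a stub in place of `rpm --erase` (key names here are made up):

```shell
#!/bin/sh
# Sketch of the exit-code aggregation in removerpmgpgkeys,
# with a stub remove_key standing in for `rpm --erase`.
remove_key() {
    # stub: succeed only for keys containing "ok"
    case "$1" in *ok*) return 0 ;; *) return 1 ;; esac
}

exit_code=0
for key in gpg-ok-1 gpg-bad gpg-ok-2; do
    remove_key "$key" || {
        exit_code=1
        echo >&2 "Error: failed to remove: $key"
    }
done
echo "exit_code=$exit_code"
```

Note the loop does not stop on the first failure, so every removable key is still removed while the caller is informed that at least one failed.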
diff --git a/repos/system_upgrade/common/topics/vendortopic.py b/repos/system_upgrade/common/topics/vendortopic.py
new file mode 100644
index 0000000..014b7af
--- /dev/null
+++ b/repos/system_upgrade/common/topics/vendortopic.py
@@ -0,0 +1,5 @@
+from leapp.topics import Topic
+
+
+class VendorTopic(Topic):
+ name = 'vendor_topic'
diff --git a/repos/system_upgrade/el7toel8/actors/checkleftoverpackages/actor.py b/repos/system_upgrade/el7toel8/actors/checkleftoverpackages/actor.py
index 0c53950..33d7c1f 100644
--- a/repos/system_upgrade/el7toel8/actors/checkleftoverpackages/actor.py
+++ b/repos/system_upgrade/el7toel8/actors/checkleftoverpackages/actor.py
@@ -1,8 +1,24 @@
from leapp.actors import Actor
from leapp.libraries.common.rpms import get_installed_rpms
-from leapp.models import LeftoverPackages, TransactionCompleted, InstalledUnsignedRPM, RPM
+from leapp.models import (
+ LeftoverPackages,
+ TransactionCompleted,
+ InstalledUnsignedRPM,
+ RPM,
+)
from leapp.tags import RPMUpgradePhaseTag, IPUWorkflowTag
+LEAPP_PACKAGES = [
+ "leapp",
+ "leapp-repository",
+ "snactor",
+ "leapp-repository-deps-el8",
+ "leapp-deps-el8",
+ "python2-leapp",
+]
+
+CPANEL_SUFFIX = "cpanel-"
+
class CheckLeftoverPackages(Actor):
"""
@@ -11,36 +27,50 @@ class CheckLeftoverPackages(Actor):
Actor produces message containing these packages. Message is empty if there are no el7 package left.
"""
- name = 'check_leftover_packages'
+ name = "check_leftover_packages"
consumes = (TransactionCompleted, InstalledUnsignedRPM)
produces = (LeftoverPackages,)
tags = (RPMUpgradePhaseTag, IPUWorkflowTag)
+ def skip_leftover_pkg(self, name, unsigned_set):
+ # Packages like these are expected to be not updated.
+ is_unsigned = name in unsigned_set
+ # Packages like these are updated outside of Leapp.
+ is_external = name.startswith(CPANEL_SUFFIX)
+
+ return is_unsigned or is_external
+
def process(self):
- LEAPP_PACKAGES = ['leapp', 'leapp-repository', 'snactor', 'leapp-repository-deps-el8', 'leapp-deps-el8',
- 'python2-leapp']
installed_rpms = get_installed_rpms()
if not installed_rpms:
return
to_remove = LeftoverPackages()
- unsigned = [pkg.name for pkg in next(self.consume(InstalledUnsignedRPM), InstalledUnsignedRPM()).items]
+ unsigned = [
+ pkg.name
+ for pkg in next(
+ self.consume(InstalledUnsignedRPM), InstalledUnsignedRPM()
+ ).items
+ ]
+ unsigned_set = set(unsigned + LEAPP_PACKAGES)
for rpm in installed_rpms:
rpm = rpm.strip()
if not rpm:
continue
- name, version, release, epoch, packager, arch, pgpsig = rpm.split('|')
-
- if 'el7' in release and name not in set(unsigned + LEAPP_PACKAGES):
- to_remove.items.append(RPM(
- name=name,
- version=version,
- epoch=epoch,
- packager=packager,
- arch=arch,
- release=release,
- pgpsig=pgpsig
- ))
+ name, version, release, epoch, packager, arch, pgpsig = rpm.split("|")
+
+ if "el7" in release and not self.skip_leftover_pkg(name, unsigned_set):
+ to_remove.items.append(
+ RPM(
+ name=name,
+ version=version,
+ epoch=epoch,
+ packager=packager,
+ arch=arch,
+ release=release,
+ pgpsig=pgpsig,
+ )
+ )
self.produce(to_remove)
diff --git a/repos/system_upgrade/el7toel8/actors/networkmanagerupdateconnections/actor.py b/repos/system_upgrade/el7toel8/actors/networkmanagerupdateconnections/actor.py
index 69ca0f0..a7d7db1 100644
--- a/repos/system_upgrade/el7toel8/actors/networkmanagerupdateconnections/actor.py
+++ b/repos/system_upgrade/el7toel8/actors/networkmanagerupdateconnections/actor.py
@@ -2,6 +2,7 @@ from leapp.actors import Actor
from leapp.libraries.stdlib import CalledProcessError, run
from leapp.models import NetworkManagerConfig
from leapp.tags import FirstBootPhaseTag, IPUWorkflowTag
+from leapp import reporting
class NetworkManagerUpdateConnections(Actor):
@@ -26,9 +27,24 @@ class NetworkManagerUpdateConnections(Actor):
return
try:
- r = run(['/usr/bin/python3', 'tools/nm-update-client-ids.py'])['stdout']
- self.log.info('Updated client-ids: {}'.format(r))
- except (OSError, CalledProcessError) as e:
- self.log.warning('Error calling nm-update-client-ids script: {}'.format(e))
+ r = run(['/usr/bin/python3', 'tools/nm-update-client-ids.py'])
+
+ self.log.info('Updated client-ids: {}'.format(r['stdout']))
+ except OSError as e:
+ self.log.warning('OSError calling nm-update-client-ids script: {}'.format(e))
+ except CalledProcessError as e:
+ self.log.warning('CalledProcessError calling nm-update-client-ids script: {}'.format(e))
+ if e.exit_code == 79:
+ title = 'NetworkManager connection update failed - PyGObject bindings for NetworkManager not found.'
+ summary = 'When using dhcp=dhclient on Red Hat Enterprise Linux 7, a non-hexadecimal ' \
+ 'client-id (a string) is sent on the wire as is. On Red Hat Enterprise Linux 8, a zero ' \
+ 'byte is prepended to string-only client-ids. If you wish to preserve the RHEL 7 behaviour, ' \
+ 'you may want to convert your client-ids to hexadecimal form manually.'
+ reporting.create_report([
+ reporting.Title(title),
+ reporting.Summary(summary),
+ reporting.Severity(reporting.Severity.MEDIUM),
+ reporting.Tags([reporting.Tags.NETWORK])
+ ])
break
diff --git a/repos/system_upgrade/el7toel8/actors/networkmanagerupdateconnections/tools/nm-update-client-ids.py b/repos/system_upgrade/el7toel8/actors/networkmanagerupdateconnections/tools/nm-update-client-ids.py
index 923bf80..9972204 100755
--- a/repos/system_upgrade/el7toel8/actors/networkmanagerupdateconnections/tools/nm-update-client-ids.py
+++ b/repos/system_upgrade/el7toel8/actors/networkmanagerupdateconnections/tools/nm-update-client-ids.py
@@ -3,12 +3,26 @@ from __future__ import print_function
import sys
import gi
-gi.require_version('NM', '1.0')
+
+try:
+ gi.require_version("NM", "1.0")
+except ValueError:
+ # If we're missing NetworkManager-libnm, the script won't function.
+ print(
+ "PyGObject bindings for NetworkManager not found - do you have NetworkManager-libnm installed?"
+ )
+ print(
+        "If you have dhcp=dhclient, you may need to convert your string-formatted client IDs to hexadecimal "
+        "to preserve the format they're sent on the wire with. Otherwise, they will now have a zero byte "
+        "prepended while being sent."
+ )
+ sys.exit(79)
+
from gi.repository import NM # noqa: E402; pylint: disable=wrong-import-position
def is_hexstring(s):
- arr = s.split(':')
+ arr = s.split(":")
for a in arr:
if len(a) != 1 and len(a) != 2:
return False
@@ -21,8 +35,8 @@ def is_hexstring(s):
client = NM.Client.new(None)
if not client:
- print('Cannot create NM client instance')
- sys.exit(0)
+ print("Cannot create NM client instance")
+ sys.exit(79)
processed = 0
changed = 0
@@ -35,15 +49,20 @@ for c in client.get_connections():
client_id = s_ip4.get_dhcp_client_id()
if client_id is not None:
if not is_hexstring(client_id):
- new_client_id = ':'.join(hex(ord(x))[2:] for x in client_id)
+ new_client_id = ":".join(hex(ord(x))[2:] for x in client_id)
s_ip4.set_property(NM.SETTING_IP4_CONFIG_DHCP_CLIENT_ID, new_client_id)
success = c.commit_changes(True, None)
if success:
changed += 1
else:
errors += 1
- print('Connection {}: \'{}\' -> \'{}\' ({})'.format(c.get_uuid(),
- client_id, new_client_id,
- 'OK' if success else 'FAIL'))
+ print(
+ "Connection {}: '{}' -> '{}' ({})".format(
+ c.get_uuid(),
+ client_id,
+ new_client_id,
+ "OK" if success else "FAIL",
+ )
+ )
print("{} processed, {} changed, {} errors".format(processed, changed, errors))
diff --git a/repos/system_upgrade/el7toel8/actors/opensshpermitrootlogincheck/actor.py b/repos/system_upgrade/el7toel8/actors/opensshpermitrootlogincheck/actor.py
index f13a767..2e3412d 100644
--- a/repos/system_upgrade/el7toel8/actors/opensshpermitrootlogincheck/actor.py
+++ b/repos/system_upgrade/el7toel8/actors/opensshpermitrootlogincheck/actor.py
@@ -1,7 +1,7 @@
from leapp import reporting
from leapp.actors import Actor
from leapp.exceptions import StopActorExecutionError
-from leapp.libraries.actor.opensshpermitrootlogincheck import semantics_changes
+from leapp.libraries.actor.opensshpermitrootlogincheck import semantics_changes, add_permitrootlogin_conf
from leapp.libraries.stdlib import api
from leapp.models import OpenSshConfig, Report
from leapp.reporting import create_report
@@ -39,28 +39,32 @@ class OpenSshPermitRootLoginCheck(Actor):
resources = [
reporting.RelatedResource('package', 'openssh-server'),
- reporting.RelatedResource('file', '/etc/ssh/sshd_config')
+ reporting.RelatedResource('file', '/etc/ssh/sshd_config'),
+ reporting.RelatedResource('file', '/etc/ssh/sshd_config.leapp_backup')
]
- # When the configuration does not contain the PermitRootLogin directive and
- # the configuration file was locally modified, it will not get updated by
- # RPM and the user might be locked away from the server. Warn the user here.
- if not config.permit_root_login and config.modified:
+ if not config.permit_root_login:
+ add_permitrootlogin_conf()
create_report([
- reporting.Title('Possible problems with remote login using root account'),
+ reporting.Title('SSH configuration automatically modified to permit root login'),
reporting.Summary(
- 'OpenSSH configuration file does not explicitly state '
- 'the option PermitRootLogin in sshd_config file, '
- 'which will default in RHEL8 to "prohibit-password".'
+ 'Your OpenSSH configuration file does not explicitly state '
+ 'the option PermitRootLogin in sshd_config file. '
+ 'Its default is "yes" in RHEL7, but will change in '
+ 'RHEL8 to "prohibit-password", which may affect your ability '
+ 'to log onto this machine after the upgrade. '
+                    'To prevent this from occurring, the PermitRootLogin option '
+                    'has been explicitly set to "yes" to preserve the default behaviour '
+ 'after migration. '
+ 'The original configuration file has been backed up to '
+ '/etc/ssh/sshd_config.leapp_backup'
),
- reporting.Severity(reporting.Severity.HIGH),
+ reporting.Severity(reporting.Severity.MEDIUM),
reporting.Tags(COMMON_REPORT_TAGS),
reporting.Remediation(
- hint='If you depend on remote root logins using '
- 'passwords, consider setting up a different '
- 'user for remote administration or adding '
- '"PermitRootLogin yes" to sshd_config.'
- ),
- reporting.Flags([reporting.Flags.INHIBITOR])
+ hint='If you would prefer to configure the root login policy yourself, '
+ 'consider setting the PermitRootLogin option '
+ 'in sshd_config explicitly.'
+ )
] + resources)
# Check if there is at least one PermitRootLogin other than "no"
@@ -68,7 +72,7 @@ class OpenSshPermitRootLoginCheck(Actor):
# This usually means some more complicated setup depending on the
# default value being globally "yes" and being overwritten by this
# match block
- if semantics_changes(config):
+ elif semantics_changes(config):
create_report([
reporting.Title('OpenSSH configured to allow root login'),
reporting.Summary(
@@ -76,7 +80,7 @@ class OpenSshPermitRootLoginCheck(Actor):
'blocks, but not explicitly enabled in global or '
'"Match all" context. This update changes the '
'default to disable root logins using paswords '
- 'so your server migth get inaccessible.'
+ 'so your server might become inaccessible.'
),
reporting.Severity(reporting.Severity.HIGH),
reporting.Tags(COMMON_REPORT_TAGS),
diff --git a/repos/system_upgrade/el7toel8/actors/opensshpermitrootlogincheck/libraries/opensshpermitrootlogincheck.py b/repos/system_upgrade/el7toel8/actors/opensshpermitrootlogincheck/libraries/opensshpermitrootlogincheck.py
index 0cb9081..7a962b7 100644
--- a/repos/system_upgrade/el7toel8/actors/opensshpermitrootlogincheck/libraries/opensshpermitrootlogincheck.py
+++ b/repos/system_upgrade/el7toel8/actors/opensshpermitrootlogincheck/libraries/opensshpermitrootlogincheck.py
@@ -1,3 +1,5 @@
+import errno
+from leapp.libraries.stdlib import api
def semantics_changes(config):
@@ -13,3 +15,30 @@ def semantics_changes(config):
globally_enabled = True
return not globally_enabled and in_match_disabled
+
+
+def add_permitrootlogin_conf():
+ CONFIG = '/etc/ssh/sshd_config'
+ CONFIG_BACKUP = '/etc/ssh/sshd_config.leapp_backup'
+ try:
+ with open(CONFIG, 'r') as fd:
+ sshd_config = fd.readlines()
+
+ permit_autoconf = [
+ "# Automatically added by Leapp to preserve RHEL7 default\n",
+ "# behaviour after migration.\n",
+ "# Placed on top of the file to avoid being included into Match blocks.\n",
+ "PermitRootLogin yes\n",
+ "\n",
+ ]
+ permit_autoconf.extend(sshd_config)
+ with open(CONFIG, 'w') as fd:
+ fd.writelines(permit_autoconf)
+ with open(CONFIG_BACKUP, 'w') as fd:
+ fd.writelines(sshd_config)
+
+ except IOError as err:
+ if err.errno != errno.ENOENT:
+ error = 'Failed to open sshd_config: {}'.format(str(err))
+ api.current_logger().error(error)
+ return
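The hunk above prepends the new directives instead of appending them; the reason can be sketched standalone (the helper name below is illustrative, not part of the actor): sshd takes the first value set for a keyword, and any line placed after a `Match` block belongs to that block, so a global directive must precede all `Match` sections.

```python
def prepend_permit_root_login(lines):
    """Return sshd_config lines with an explicit global PermitRootLogin on top."""
    header = [
        "# Placed ahead of any Match block so the directive stays global.\n",
        "PermitRootLogin yes\n",
        "\n",
    ]
    return header + lines

original = ["Match User deploy\n", "    PermitRootLogin no\n"]
patched = prepend_permit_root_login(original)
# The global directive now precedes the Match block, so it is not
# swallowed into the per-user scope.
assert patched.index("PermitRootLogin yes\n") < patched.index("Match User deploy\n")
```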
diff --git a/repos/system_upgrade/el7toel8/actors/updateyumvars/actor.py b/repos/system_upgrade/el7toel8/actors/updateyumvars/actor.py
new file mode 100644
index 0000000..6252fba
--- /dev/null
+++ b/repos/system_upgrade/el7toel8/actors/updateyumvars/actor.py
@@ -0,0 +1,18 @@
+from leapp.actors import Actor
+from leapp.libraries.actor import updateyumvars
+from leapp.tags import ThirdPartyApplicationsPhaseTag, IPUWorkflowTag
+
+
+class UpdateYumVars(Actor):
+ """
+ Update the files corresponding to the current major
+ OS version in the /etc/yum/vars folder.
+ """
+
+ name = 'update_yum_vars'
+ consumes = ()
+ produces = ()
+ tags = (ThirdPartyApplicationsPhaseTag, IPUWorkflowTag)
+
+ def process(self):
+ updateyumvars.vars_update()
diff --git a/repos/system_upgrade/el7toel8/actors/updateyumvars/libraries/updateyumvars.py b/repos/system_upgrade/el7toel8/actors/updateyumvars/libraries/updateyumvars.py
new file mode 100644
index 0000000..b77f784
--- /dev/null
+++ b/repos/system_upgrade/el7toel8/actors/updateyumvars/libraries/updateyumvars.py
@@ -0,0 +1,23 @@
+import os
+
+from leapp.libraries.stdlib import api
+
+VAR_FOLDER = "/etc/yum/vars"
+
+
+def vars_update():
+ """ Iterate through and modify the variables. """
+ if not os.path.isdir(VAR_FOLDER):
+ api.current_logger().debug(
+ "The {} directory doesn't exist. Nothing to do.".format(VAR_FOLDER)
+ )
+ return
+
+ for varfile_name in os.listdir(VAR_FOLDER):
+ # cp_centos_major_version contains the current OS' major version.
+ if varfile_name == 'cp_centos_major_version':
+ varfile_path = os.path.join(VAR_FOLDER, varfile_name)
+
+ with open(varfile_path, 'w') as varfile:
+ # Overwrite the value from outdated "7".
+ varfile.write('8')
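The core step of this new actor can be exercised in isolation (a temporary directory stands in for `/etc/yum/vars` in this sketch, so nothing on the real system is touched):

```python
import os
import tempfile

# Simulate the source system: the yum variable holds the old major version.
var_folder = tempfile.mkdtemp()
varfile_path = os.path.join(var_folder, 'cp_centos_major_version')
with open(varfile_path, 'w') as varfile:
    varfile.write('7')

# Mirror the actor's loop: find the variable file and overwrite its value.
for varfile_name in os.listdir(var_folder):
    if varfile_name == 'cp_centos_major_version':
        with open(os.path.join(var_folder, varfile_name), 'w') as varfile:
            varfile.write('8')

with open(varfile_path) as varfile:
    assert varfile.read() == '8'
```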
diff --git a/repos/system_upgrade/wp-toolkit/.leapp/info b/repos/system_upgrade/wp-toolkit/.leapp/info
new file mode 100644
index 0000000..e4059e3
--- /dev/null
+++ b/repos/system_upgrade/wp-toolkit/.leapp/info
@@ -0,0 +1 @@
+{"name": "wp-toolkit", "id": "ae31666a-37b8-435c-a071-a3d28342099b", "repos": ["644900a5-c347-43a3-bfab-f448f46d9647"]}
\ No newline at end of file
diff --git a/repos/system_upgrade/wp-toolkit/.leapp/leapp.conf b/repos/system_upgrade/wp-toolkit/.leapp/leapp.conf
new file mode 100644
index 0000000..b459134
--- /dev/null
+++ b/repos/system_upgrade/wp-toolkit/.leapp/leapp.conf
@@ -0,0 +1,6 @@
+
+[repositories]
+repo_path=${repository:root_dir}
+
+[database]
+path=${repository:state_dir}/leapp.db
diff --git a/repos/system_upgrade/wp-toolkit/actors/setwptoolkityumvariable/actor.py b/repos/system_upgrade/wp-toolkit/actors/setwptoolkityumvariable/actor.py
new file mode 100644
index 0000000..f386358
--- /dev/null
+++ b/repos/system_upgrade/wp-toolkit/actors/setwptoolkityumvariable/actor.py
@@ -0,0 +1,65 @@
+from leapp.actors import Actor
+from leapp.models import ActiveVendorList, CopyFile, TargetUserSpacePreupgradeTasks, WpToolkit
+from leapp.libraries.stdlib import api
+from leapp.tags import TargetTransactionFactsPhaseTag, IPUWorkflowTag
+
+VENDOR_NAME = 'wp-toolkit'
+SUPPORTED_VARIANTS = ['cpanel', ]
+
+# Is the vendors.d path the best place to create this file?
+src_path = '/etc/leapp/files/vendors.d/wp-toolkit.var'
+dst_path = '/etc/dnf/vars/wptkversion'
+
+
+class SetWpToolkitYumVariable(Actor):
+ """
+ Records the current WP Toolkit version into a DNF variable file so that the
+ precise version requested is reinstalled, and forwards the request to copy
+ this data into the upgrading environment using a
+ :class:`TargetUserSpacePreupgradeTasks`.
+ """
+
+ name = 'set_wp_toolkit_yum_variable'
+ consumes = (ActiveVendorList, WpToolkit)
+ produces = (TargetUserSpacePreupgradeTasks,)
+ tags = (TargetTransactionFactsPhaseTag.Before, IPUWorkflowTag)
+
+ def _do_cpanel(self, version):
+
+ files_to_copy = []
+ if version is None:
+ version = 'latest'
+
+ try:
+ with open(src_path, 'w') as var_file:
+ var_file.write(version)
+
+ files_to_copy.append(CopyFile(src=src_path, dst=dst_path))
+ msg = 'Requesting leapp to copy {} into the upgrade environment as {}'.format(src_path, dst_path)
+ api.current_logger().debug(msg)
+
+ except OSError as e:
+ api.current_logger().error('Cannot write to {}: {}'.format(e.filename, e.strerror))
+
+ return TargetUserSpacePreupgradeTasks(copy_files=files_to_copy)
+
+ def process(self):
+
+ active_vendors = []
+ for vendor_list in api.consume(ActiveVendorList):
+ active_vendors.extend(vendor_list.data)
+
+ if VENDOR_NAME in active_vendors:
+ wptk_data = next(api.consume(WpToolkit), WpToolkit())
+
+ preupgrade_task = None
+ if wptk_data.variant == 'cpanel':
+ preupgrade_task = self._do_cpanel(wptk_data.version)
+ else:
+ api.current_logger().warn('Could not recognize a supported environment for WP Toolkit.')
+
+ if preupgrade_task is not None:
+ api.produce(preupgrade_task)
+
+ else:
+ api.current_logger().info('{} not an active vendor: skipping actor'.format(VENDOR_NAME))
diff --git a/repos/system_upgrade/wp-toolkit/actors/updatewptoolkitrepos/actor.py b/repos/system_upgrade/wp-toolkit/actors/updatewptoolkitrepos/actor.py
new file mode 100644
index 0000000..f1c6839
--- /dev/null
+++ b/repos/system_upgrade/wp-toolkit/actors/updatewptoolkitrepos/actor.py
@@ -0,0 +1,49 @@
+import os
+import shutil
+
+from leapp.actors import Actor
+from leapp.libraries.stdlib import api, run
+from leapp.models import ActiveVendorList, WpToolkit
+from leapp.tags import IPUWorkflowTag, FirstBootPhaseTag
+
+VENDOR_NAME = 'wp-toolkit'
+
+VENDORS_DIR = '/etc/leapp/files/vendors.d'
+REPO_DIR = '/etc/yum.repos.d'
+
+class UpdateWpToolkitRepos(Actor):
+ """
+ Replaces the WP Toolkit's old repo file from the CentOS 7 version with one appropriate for the new OS.
+ """
+
+ name = 'update_wp_toolkit_repos'
+ consumes = (ActiveVendorList, WpToolkit)
+ produces = ()
+ tags = (IPUWorkflowTag, FirstBootPhaseTag)
+
+ def process(self):
+
+ active_vendors = []
+ for vendor_list in api.consume(ActiveVendorList):
+ active_vendors.extend(vendor_list.data)
+
+ if VENDOR_NAME in active_vendors:
+
+ wptk_data = next(api.consume(WpToolkit), WpToolkit())
+
+ src_file = api.get_file_path('{}-{}.el8.repo'.format(VENDOR_NAME, wptk_data.variant))
+ dst_file = '{}/{}-{}.repo'.format(REPO_DIR, VENDOR_NAME, wptk_data.variant)
+
+ try:
+ os.rename(dst_file, dst_file + '.bak')
+ except OSError as e:
+ api.current_logger().warn('Could not rename {} to {}: {}'.format(e.filename, e.filename2, e.strerror))
+
+ api.current_logger().info('Updating WPTK package repository file at {} using {}'.format(dst_file, src_file))
+
+ try:
+ shutil.copy(src_file, dst_file)
+ except OSError as e:
+ api.current_logger().error('Could not update WPTK package repository file {}: {}'.format(e.filename2, e.strerror))
+ else:
+ api.current_logger().info('{} not an active vendor: skipping actor'.format(VENDOR_NAME))
diff --git a/repos/system_upgrade/wp-toolkit/actors/wptoolkitfacts/actor.py b/repos/system_upgrade/wp-toolkit/actors/wptoolkitfacts/actor.py
new file mode 100644
index 0000000..a2925dd
--- /dev/null
+++ b/repos/system_upgrade/wp-toolkit/actors/wptoolkitfacts/actor.py
@@ -0,0 +1,55 @@
+from leapp.actors import Actor
+from leapp.libraries.stdlib import api
+from leapp.models import ActiveVendorList, WpToolkit, VendorSourceRepos, InstalledRPM
+from leapp.tags import IPUWorkflowTag, FactsPhaseTag
+from leapp.libraries.common.rpms import package_data_for
+
+VENDOR_NAME = 'wp-toolkit'
+SUPPORTED_VARIANTS = ['cpanel', ]
+
+
+class WpToolkitFacts(Actor):
+ """
+ Find out whether a supported WP Toolkit repository is present and whether the appropriate package is installed.
+ """
+
+ name = 'wp_toolkit_facts'
+ consumes = (ActiveVendorList, VendorSourceRepos, InstalledRPM)
+ produces = (WpToolkit,)
+ tags = (IPUWorkflowTag, FactsPhaseTag)
+
+ def process(self):
+
+ active_vendors = []
+ for vendor_list in api.consume(ActiveVendorList):
+ active_vendors.extend(vendor_list.data)
+
+ if VENDOR_NAME in active_vendors:
+ api.current_logger().info('Vendor {} is active. Looking for information...'.format(VENDOR_NAME))
+
+ repo_list = []
+ for src_info in api.consume(VendorSourceRepos):
+ if src_info.vendor == VENDOR_NAME:
+ repo_list = src_info.source_repoids
+ break
+
+ variant = None
+ version = None
+ for maybe_variant in SUPPORTED_VARIANTS:
+ if '{}-{}'.format(VENDOR_NAME, maybe_variant) in repo_list:
+ variant = maybe_variant
+ api.current_logger().info('Found WP Toolkit variant {}'.format(variant))
+
+ pkgData = package_data_for(InstalledRPM, u'wp-toolkit-{}'.format(variant))
+ # name, arch, version, release
+ if pkgData:
+ version = pkgData['version']
+
+ break
+
+ api.current_logger().debug('Did not find WP Toolkit variant {}'.format(maybe_variant))
+
+ api.produce(WpToolkit(variant=variant, version=version))
+
+ else:
+ api.current_logger().info('{} not an active vendor: skipping actor'.format(VENDOR_NAME))
diff --git a/repos/system_upgrade/wp-toolkit/actors/wptoolkitfacts/tests/test_wptoolkitfacts.py b/repos/system_upgrade/wp-toolkit/actors/wptoolkitfacts/tests/test_wptoolkitfacts.py
new file mode 100644
index 0000000..551c2af
--- /dev/null
+++ b/repos/system_upgrade/wp-toolkit/actors/wptoolkitfacts/tests/test_wptoolkitfacts.py
@@ -0,0 +1,38 @@
+# XXX TODO this copies a lot from satellite_upgrade_facts.py, should probably make a fixture
+# for fake_package at the least?
+
+from leapp.models import InstalledRPM, RPM, ActiveVendorList, VendorSourceRepos, WpToolkit
+
+RH_PACKAGER = 'Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>'
+
+
+def fake_package(pkg_name, version):
+ return RPM(name=pkg_name, version=version, release='1.sm01', epoch='1', packager=RH_PACKAGER, arch='noarch',
+ pgpsig='RSA/SHA256, Mon 01 Jan 1970 00:00:00 AM -03, Key ID 199e2f91fd431d51')
+
+
+BOGUS_RPM = fake_package('bogus-bogus', '1.0')
+WPTOOLKIT_RPM = fake_package('wp-toolkit-cpanel', '0.1')
+
+
+def test_no_wptoolkit_vendor_present(current_actor_context):
+ current_actor_context.feed(ActiveVendorList(data=list(["jello"])), InstalledRPM(items=[]))
+ current_actor_context.run()
+ message = current_actor_context.consume(WpToolkit)
+ assert not message
+
+
+def test_no_wptoolkit_rpm_present(current_actor_context):
+ current_actor_context.feed(ActiveVendorList(data=list(['wp-toolkit'])), InstalledRPM(items=[]))
+ current_actor_context.run()
+ message = current_actor_context.consume(WpToolkit)[0]
+ assert message.variant is None
+ assert message.version is None
+
+
+def test_wptoolkit_rpm_present(current_actor_context):
+ current_actor_context.feed(ActiveVendorList(data=list(['wp-toolkit'])), VendorSourceRepos(vendor='wp-toolkit', source_repoids=list(['wp-toolkit-cpanel'])), InstalledRPM(items=[BOGUS_RPM, WPTOOLKIT_RPM]))
+ current_actor_context.run()
+ message = current_actor_context.consume(WpToolkit)[0]
+ assert message.variant == 'cpanel'
+ assert message.version == '0.1'
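The TODO at the top of this test module suggests extracting `fake_package` into a shared fixture; a minimal standalone sketch of such a factory (plain dicts stand in for the leapp `RPM` model here, so this is illustrative only, not the project's API):

```python
def make_fake_package(pkg_name, version, **overrides):
    """Build a fake package record with sensible test defaults."""
    pkg = {
        'name': pkg_name,
        'version': version,
        'release': '1.sm01',
        'epoch': '1',
        'arch': 'noarch',
    }
    pkg.update(overrides)  # let individual tests override any field
    return pkg

wptk = make_fake_package('wp-toolkit-cpanel', '0.1')
assert wptk['name'] == 'wp-toolkit-cpanel'
assert wptk['version'] == '0.1'
# Overrides replace the defaults without repeating the boilerplate.
assert make_fake_package('bogus', '1.0', arch='x86_64')['arch'] == 'x86_64'
```

A shared `pytest` fixture wrapping this factory would let both this module and `satellite_upgrade_facts.py` drop their duplicated helpers.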
diff --git a/repos/system_upgrade/wp-toolkit/files/wp-toolkit-cpanel.el8.repo b/repos/system_upgrade/wp-toolkit/files/wp-toolkit-cpanel.el8.repo
new file mode 100644
index 0000000..adfd7b6
--- /dev/null
+++ b/repos/system_upgrade/wp-toolkit/files/wp-toolkit-cpanel.el8.repo
@@ -0,0 +1,11 @@
+[wp-toolkit-cpanel]
+name=WP Toolkit for cPanel
+baseurl=https://wp-toolkit.plesk.com/cPanel/CentOS-8-x86_64/latest/wp-toolkit/
+enabled=1
+gpgcheck=1
+
+[wp-toolkit-thirdparties]
+name=WP Toolkit third parties
+baseurl=https://wp-toolkit.plesk.com/cPanel/CentOS-8-x86_64/latest/thirdparty/
+enabled=1
+gpgcheck=1
diff --git a/repos/system_upgrade/wp-toolkit/models/wptoolkit.py b/repos/system_upgrade/wp-toolkit/models/wptoolkit.py
new file mode 100644
index 0000000..9df3c0d
--- /dev/null
+++ b/repos/system_upgrade/wp-toolkit/models/wptoolkit.py
@@ -0,0 +1,23 @@
+from leapp.models import Model, fields
+from leapp.topics import SystemFactsTopic
+
+
+class WpToolkit(Model):
+ """
+ Records information about presence and versioning of WP Toolkit package management resources on the source system.
+ """
+ topic = SystemFactsTopic
+
+ """
+ States which supported "variant" of WP Toolkit seems available to the package manager.
+
+ Currently, only `cpanel` is supported.
+ """
+ variant = fields.Nullable(fields.String())
+
+ """
+ States which version of the WP Toolkit package for the given variant is installed.
+
+ If no package is installed, this will be `None`.
+ """
+ version = fields.Nullable(fields.String())
diff --git a/requirements.txt b/requirements.txt
index ac6bf9b..f69f981 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -11,5 +11,6 @@ distro==1.5.0
ipaddress==1.0.23
git+https://github.com/oamg/leapp
requests
+raven
# pinning a py27 troublemaking transitive dependency
lazy-object-proxy==1.5.2; python_version < '3'