forked from rpms/cloud-init

import cloud-init-21.1-19.el9

commit 6c3556215f

.cloud-init.metadata (new file, 1 line)
@@ -0,0 +1 @@
2ae378aa2ae23b34b0ff123623ba5e2fbdc4928d SOURCES/cloud-init-21.1.tar.gz

.gitignore (new file, 1 line, vendored)
@@ -0,0 +1 @@
SOURCES/cloud-init-21.1.tar.gz

SOURCES/0001-Add-initial-redhat-setup.patch (new file, 604 lines)
@@ -0,0 +1,604 @@
| From 4b84d29211b7b2121afe9045c71ded5381536d8b Mon Sep 17 00:00:00 2001 | ||||
| From: Eduardo Otubo <otubo@redhat.com> | ||||
| Date: Fri, 7 May 2021 13:36:03 +0200 | ||||
| Subject: Add initial redhat setup | ||||
| 
 | ||||
| Merged patches (RHEL-9/21.1): | ||||
| - 5688a1d0 Removing python-nose and python-tox as dependency
 | ||||
| - 237d57f9 Removing mock dependency
 | ||||
| - d1c2f496 Removing python-jsonschema dependency
 | ||||
| - 0d1cd14c Don't override default network configuration
 | ||||
| 
 | ||||
| Merged patches (21.1): | ||||
| - 915d30ad Change gating file to correct rhel version
 | ||||
| - 311f318d Removing net-tools dependency
 | ||||
| - 74731806 Adding man pages to Red Hat spec file
 | ||||
| - 758d333d Removing blocking test from yaml configuration file
 | ||||
| - c7e7c59c Changing permission of cloud-init-generator to 755
 | ||||
| - 8b85abbb Installing man pages in the correct place with correct permissions
 | ||||
| - c6808d8d Fix unit failure of cloud-final.service if NetworkManager was not present.
 | ||||
| - 11866ef6 Report full specific version with "cloud-init --version"
 | ||||
| 
 | ||||
| Rebase notes (18.5): | ||||
| - added bash_completition file
 | ||||
| - added cloud-id file
 | ||||
| 
 | ||||
| Merged patches (20.3): | ||||
| - 01900d0 changing ds-identify patch from /usr/lib to /usr/libexec
 | ||||
| - 7f47ca3 Render the generator from template instead of cp
 | ||||
| 
 | ||||
| Merged patches (19.4): | ||||
| - 4ab5a61 Fix for network configuration not persisting after reboot
 | ||||
| - 84cf125 Removing cloud-user from wheel
 | ||||
| - 31290ab Adding gating tests for Azure, ESXi and AWS
 | ||||
| 
 | ||||
| Merged patches (18.5): | ||||
| - 2d6b469 add power-state-change module to cloud_final_modules
 | ||||
| - 764159f Adding systemd mount options to wait for cloud-init
 | ||||
| - da4d99e Adding disk_setup to rhel/cloud.cfg
 | ||||
| - f5c6832 Enable cloud-init by default on vmware
 | ||||
| 
 | ||||
| Conflicts: | ||||
| cloudinit/config/cc_chef.py: | ||||
|  - Updated header documentation text | ||||
|  - Replacing double quotes by simple quotes | ||||
| 
 | ||||
| setup.py: | ||||
|  - Adding missing cmdclass info | ||||
| 
 | ||||
| Signed-off-by: Eduardo Otubo <otubo@redhat.com> | ||||
| 
 | ||||
| Changes: | ||||
| - move redhat to .distro to use new build script structure
 | ||||
| - Fixing changelog for RHEL 9
 | ||||
| 
 | ||||
| Merged patches (21.1): | ||||
| - 69bd7f71 DataSourceAzure.py: use hostnamectl to set hostname
 | ||||
| - 0407867e Remove race condition between cloud-init and NetworkManager
 | ||||
| 
 | ||||
| Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com> | ||||
| ---
 | ||||
|  .distro/.gitignore                    |   1 + | ||||
|  .distro/Makefile                      |  74 +++++ | ||||
|  .distro/Makefile.common               |  30 ++ | ||||
|  .distro/cloud-init-tmpfiles.conf      |   1 + | ||||
|  .distro/cloud-init.spec.template      | 383 ++++++++++++++++++++++++++ | ||||
|  .distro/gating.yaml                   |   8 + | ||||
|  .distro/rpmbuild/BUILD/.gitignore     |   3 + | ||||
|  .distro/rpmbuild/RPMS/.gitignore      |   3 + | ||||
|  .distro/rpmbuild/SOURCES/.gitignore   |   3 + | ||||
|  .distro/rpmbuild/SPECS/.gitignore     |   3 + | ||||
|  .distro/rpmbuild/SRPMS/.gitignore     |   3 + | ||||
|  .distro/scripts/frh.py                |  27 ++ | ||||
|  .distro/scripts/git-backport-diff     | 327 ++++++++++++++++++++++ | ||||
|  .distro/scripts/git-compile-check     | 215 +++++++++++++++ | ||||
|  .distro/scripts/process-patches.sh    |  88 ++++++ | ||||
|  .distro/scripts/tarball_checksum.sh   |   3 + | ||||
|  .gitignore                            |   1 + | ||||
|  cloudinit/config/cc_chef.py           |  67 ++++- | ||||
|  cloudinit/settings.py                 |   7 +- | ||||
|  cloudinit/sources/DataSourceAzure.py  |   2 +- | ||||
|  requirements.txt                      |   3 - | ||||
|  rhel/README.rhel                      |   5 + | ||||
|  rhel/cloud-init-tmpfiles.conf         |   1 + | ||||
|  rhel/cloud.cfg                        |  69 +++++ | ||||
|  rhel/systemd/cloud-config.service     |  18 ++ | ||||
|  rhel/systemd/cloud-config.target      |  11 + | ||||
|  rhel/systemd/cloud-final.service      |  24 ++ | ||||
|  rhel/systemd/cloud-init-local.service |  31 +++ | ||||
|  rhel/systemd/cloud-init.service       |  26 ++ | ||||
|  rhel/systemd/cloud-init.target        |   7 + | ||||
|  setup.py                              |  23 +- | ||||
|  tools/read-version                    |  28 +- | ||||
|  32 files changed, 1441 insertions(+), 54 deletions(-) | ||||
|  create mode 100644 .distro/.gitignore | ||||
|  create mode 100644 .distro/Makefile | ||||
|  create mode 100644 .distro/Makefile.common | ||||
|  create mode 100644 .distro/cloud-init-tmpfiles.conf | ||||
|  create mode 100644 .distro/cloud-init.spec.template | ||||
|  create mode 100644 .distro/gating.yaml | ||||
|  create mode 100644 .distro/rpmbuild/BUILD/.gitignore | ||||
|  create mode 100644 .distro/rpmbuild/RPMS/.gitignore | ||||
|  create mode 100644 .distro/rpmbuild/SOURCES/.gitignore | ||||
|  create mode 100644 .distro/rpmbuild/SPECS/.gitignore | ||||
|  create mode 100644 .distro/rpmbuild/SRPMS/.gitignore | ||||
|  create mode 100755 .distro/scripts/frh.py | ||||
|  create mode 100755 .distro/scripts/git-backport-diff | ||||
|  create mode 100755 .distro/scripts/git-compile-check | ||||
|  create mode 100755 .distro/scripts/process-patches.sh | ||||
|  create mode 100755 .distro/scripts/tarball_checksum.sh | ||||
|  create mode 100644 rhel/README.rhel | ||||
|  create mode 100644 rhel/cloud-init-tmpfiles.conf | ||||
|  create mode 100644 rhel/cloud.cfg | ||||
|  create mode 100644 rhel/systemd/cloud-config.service | ||||
|  create mode 100644 rhel/systemd/cloud-config.target | ||||
|  create mode 100644 rhel/systemd/cloud-final.service | ||||
|  create mode 100644 rhel/systemd/cloud-init-local.service | ||||
|  create mode 100644 rhel/systemd/cloud-init.service | ||||
|  create mode 100644 rhel/systemd/cloud-init.target | ||||
| 
 | ||||
| diff --git a/cloudinit/config/cc_chef.py b/cloudinit/config/cc_chef.py
 | ||||
| index aaf71366..97ef649a 100644
 | ||||
| --- a/cloudinit/config/cc_chef.py
 | ||||
| +++ b/cloudinit/config/cc_chef.py
 | ||||
| @@ -6,7 +6,70 @@
 | ||||
|  # | ||||
|  # This file is part of cloud-init. See LICENSE file for license information. | ||||
|   | ||||
| -"""Chef: module that configures, starts and installs chef."""
 | ||||
| +"""
 | ||||
| +Chef
 | ||||
| +----
 | ||||
| +**Summary:** module that configures, starts and installs chef.
 | ||||
| +
 | ||||
| +This module enables chef to be installed (from packages or
 | ||||
| +from gems, or from omnibus). Before this occurs chef configurations are
 | ||||
| +written to disk (validation.pem, client.pem, firstboot.json, client.rb),
 | ||||
| +and needed chef folders/directories are created (/etc/chef and /var/log/chef
 | ||||
| +and so-on). Then once installing proceeds correctly if configured chef will
 | ||||
| +be started (in daemon mode or in non-daemon mode) and then once that has
 | ||||
| +finished (if ran in non-daemon mode this will be when chef finishes
 | ||||
| +converging, if ran in daemon mode then no further actions are possible since
 | ||||
| +chef will have forked into its own process) then a post run function can
 | ||||
| +run that can do finishing activities (such as removing the validation pem
 | ||||
| +file).
 | ||||
| +
 | ||||
| +**Internal name:** ``cc_chef``
 | ||||
| +
 | ||||
| +**Module frequency:** per always
 | ||||
| +
 | ||||
| +**Supported distros:** all
 | ||||
| +
 | ||||
| +**Config keys**::
 | ||||
| +
 | ||||
| +    chef:
 | ||||
| +       directories: (defaulting to /etc/chef, /var/log/chef, /var/lib/chef,
 | ||||
| +                     /var/cache/chef, /var/backups/chef, /run/chef)
 | ||||
| +       validation_cert: (optional string to be written to file validation_key)
 | ||||
| +                        special value 'system' means set use existing file
 | ||||
| +       validation_key: (optional the path for validation_cert. default
 | ||||
| +                        /etc/chef/validation.pem)
 | ||||
| +       firstboot_path: (path to write run_list and initial_attributes keys that
 | ||||
| +                        should also be present in this configuration, defaults
 | ||||
| +                        to /etc/chef/firstboot.json)
 | ||||
| +       exec: boolean to run or not run chef (defaults to false, unless
 | ||||
| +                                             a gem installed is requested
 | ||||
| +                                             where this will then default
 | ||||
| +                                             to true)
 | ||||
| +
 | ||||
| +    chef.rb template keys (if falsey, then will be skipped and not
 | ||||
| +                           written to /etc/chef/client.rb)
 | ||||
| +
 | ||||
| +    chef:
 | ||||
| +      client_key:
 | ||||
| +      encrypted_data_bag_secret:
 | ||||
| +      environment:
 | ||||
| +      file_backup_path:
 | ||||
| +      file_cache_path:
 | ||||
| +      json_attribs:
 | ||||
| +      log_level:
 | ||||
| +      log_location:
 | ||||
| +      node_name:
 | ||||
| +      omnibus_url:
 | ||||
| +      omnibus_url_retries:
 | ||||
| +      omnibus_version:
 | ||||
| +      pid_file:
 | ||||
| +      server_url:
 | ||||
| +      show_time:
 | ||||
| +      ssl_verify_mode:
 | ||||
| +      validation_cert:
 | ||||
| +      validation_key:
 | ||||
| +      validation_name:
 | ||||
| +"""
 | ||||
|   | ||||
|  import itertools | ||||
|  import json | ||||
| @@ -31,7 +94,7 @@ CHEF_DIRS = tuple([
 | ||||
|      '/var/lib/chef', | ||||
|      '/var/cache/chef', | ||||
|      '/var/backups/chef', | ||||
| -    '/var/run/chef',
 | ||||
| +    '/run/chef',
 | ||||
|  ]) | ||||
|  REQUIRED_CHEF_DIRS = tuple([ | ||||
|      '/etc/chef', | ||||
| diff --git a/cloudinit/settings.py b/cloudinit/settings.py
 | ||||
| index 91e1bfe7..e690c0fd 100644
 | ||||
| --- a/cloudinit/settings.py
 | ||||
| +++ b/cloudinit/settings.py
 | ||||
| @@ -47,13 +47,16 @@ CFG_BUILTIN = {
 | ||||
|      ], | ||||
|      'def_log_file': '/var/log/cloud-init.log', | ||||
|      'log_cfgs': [], | ||||
| -    'syslog_fix_perms': ['syslog:adm', 'root:adm', 'root:wheel', 'root:root'],
 | ||||
| +    'mount_default_fields': [None, None, 'auto', 'defaults,nofail', '0', '2'],
 | ||||
| +    'ssh_deletekeys': False,
 | ||||
| +    'ssh_genkeytypes': [],
 | ||||
| +    'syslog_fix_perms': [],
 | ||||
|      'system_info': { | ||||
|          'paths': { | ||||
|              'cloud_dir': '/var/lib/cloud', | ||||
|              'templates_dir': '/etc/cloud/templates/', | ||||
|          }, | ||||
| -        'distro': 'ubuntu',
 | ||||
| +        'distro': 'rhel',
 | ||||
|          'network': {'renderers': None}, | ||||
|      }, | ||||
|      'vendor_data': {'enabled': True, 'prefix': []}, | ||||
| diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
 | ||||
| index cee630f7..553b5a7e 100755
 | ||||
| --- a/cloudinit/sources/DataSourceAzure.py
 | ||||
| +++ b/cloudinit/sources/DataSourceAzure.py
 | ||||
| @@ -296,7 +296,7 @@ def get_hostname(hostname_command='hostname'):
 | ||||
|   | ||||
|   | ||||
|  def set_hostname(hostname, hostname_command='hostname'): | ||||
| -    subp.subp([hostname_command, hostname])
 | ||||
| +    util.subp(['hostnamectl', 'set-hostname', str(hostname)])
 | ||||
|   | ||||
|   | ||||
|  @azure_ds_telemetry_reporter | ||||
| diff --git a/requirements.txt b/requirements.txt
 | ||||
| index 5817da3b..5b8becd7 100644
 | ||||
| --- a/requirements.txt
 | ||||
| +++ b/requirements.txt
 | ||||
| @@ -29,6 +29,3 @@ requests
 | ||||
|   | ||||
|  # For patching pieces of cloud-config together | ||||
|  jsonpatch | ||||
| -
 | ||||
| -# For validating cloud-config sections per schema definitions
 | ||||
| -jsonschema
 | ||||
| diff --git a/rhel/README.rhel b/rhel/README.rhel
 | ||||
| new file mode 100644 | ||||
| index 00000000..aa29630d
 | ||||
| --- /dev/null
 | ||||
| +++ b/rhel/README.rhel
 | ||||
| @@ -0,0 +1,5 @@
 | ||||
| +The following cloud-init modules are currently unsupported on this OS:
 | ||||
| + - apt_update_upgrade ('apt_update', 'apt_upgrade', 'apt_mirror', 'apt_preserve_sources_list', 'apt_old_mirror', 'apt_sources', 'debconf_selections', 'packages' options)
 | ||||
| + - byobu ('byobu_by_default' option)
 | ||||
| + - chef
 | ||||
| + - grub_dpkg
 | ||||
| diff --git a/rhel/cloud-init-tmpfiles.conf b/rhel/cloud-init-tmpfiles.conf
 | ||||
| new file mode 100644 | ||||
| index 00000000..0c6d2a3b
 | ||||
| --- /dev/null
 | ||||
| +++ b/rhel/cloud-init-tmpfiles.conf
 | ||||
| @@ -0,0 +1 @@
 | ||||
| +d /run/cloud-init 0700 root root - -
 | ||||
| diff --git a/rhel/cloud.cfg b/rhel/cloud.cfg
 | ||||
| new file mode 100644 | ||||
| index 00000000..9ecba215
 | ||||
| --- /dev/null
 | ||||
| +++ b/rhel/cloud.cfg
 | ||||
| @@ -0,0 +1,69 @@
 | ||||
| +users:
 | ||||
| + - default
 | ||||
| +
 | ||||
| +disable_root: 1
 | ||||
| +ssh_pwauth:   0
 | ||||
| +
 | ||||
| +mount_default_fields: [~, ~, 'auto', 'defaults,nofail,x-systemd.requires=cloud-init.service', '0', '2']
 | ||||
| +resize_rootfs_tmp: /dev
 | ||||
| +ssh_deletekeys:   1
 | ||||
| +ssh_genkeytypes:  ~
 | ||||
| +syslog_fix_perms: ~
 | ||||
| +disable_vmware_customization: false
 | ||||
| +
 | ||||
| +cloud_init_modules:
 | ||||
| + - disk_setup
 | ||||
| + - migrator
 | ||||
| + - bootcmd
 | ||||
| + - write-files
 | ||||
| + - growpart
 | ||||
| + - resizefs
 | ||||
| + - set_hostname
 | ||||
| + - update_hostname
 | ||||
| + - update_etc_hosts
 | ||||
| + - rsyslog
 | ||||
| + - users-groups
 | ||||
| + - ssh
 | ||||
| +
 | ||||
| +cloud_config_modules:
 | ||||
| + - mounts
 | ||||
| + - locale
 | ||||
| + - set-passwords
 | ||||
| + - rh_subscription
 | ||||
| + - yum-add-repo
 | ||||
| + - package-update-upgrade-install
 | ||||
| + - timezone
 | ||||
| + - puppet
 | ||||
| + - chef
 | ||||
| + - salt-minion
 | ||||
| + - mcollective
 | ||||
| + - disable-ec2-metadata
 | ||||
| + - runcmd
 | ||||
| +
 | ||||
| +cloud_final_modules:
 | ||||
| + - rightscale_userdata
 | ||||
| + - scripts-per-once
 | ||||
| + - scripts-per-boot
 | ||||
| + - scripts-per-instance
 | ||||
| + - scripts-user
 | ||||
| + - ssh-authkey-fingerprints
 | ||||
| + - keys-to-console
 | ||||
| + - phone-home
 | ||||
| + - final-message
 | ||||
| + - power-state-change
 | ||||
| +
 | ||||
| +system_info:
 | ||||
| +  default_user:
 | ||||
| +    name: cloud-user
 | ||||
| +    lock_passwd: true
 | ||||
| +    gecos: Cloud User
 | ||||
| +    groups: [adm, systemd-journal]
 | ||||
| +    sudo: ["ALL=(ALL) NOPASSWD:ALL"]
 | ||||
| +    shell: /bin/bash
 | ||||
| +  distro: rhel
 | ||||
| +  paths:
 | ||||
| +    cloud_dir: /var/lib/cloud
 | ||||
| +    templates_dir: /etc/cloud/templates
 | ||||
| +  ssh_svcname: sshd
 | ||||
| +
 | ||||
| +# vim:syntax=yaml
 | ||||
| diff --git a/rhel/systemd/cloud-config.service b/rhel/systemd/cloud-config.service
 | ||||
| new file mode 100644 | ||||
| index 00000000..f3dcd4be
 | ||||
| --- /dev/null
 | ||||
| +++ b/rhel/systemd/cloud-config.service
 | ||||
| @@ -0,0 +1,18 @@
 | ||||
| +[Unit]
 | ||||
| +Description=Apply the settings specified in cloud-config
 | ||||
| +After=network-online.target cloud-config.target
 | ||||
| +Wants=network-online.target cloud-config.target
 | ||||
| +ConditionPathExists=!/etc/cloud/cloud-init.disabled
 | ||||
| +ConditionKernelCommandLine=!cloud-init=disabled
 | ||||
| +
 | ||||
| +[Service]
 | ||||
| +Type=oneshot
 | ||||
| +ExecStart=/usr/bin/cloud-init modules --mode=config
 | ||||
| +RemainAfterExit=yes
 | ||||
| +TimeoutSec=0
 | ||||
| +
 | ||||
| +# Output needs to appear in instance console output
 | ||||
| +StandardOutput=journal+console
 | ||||
| +
 | ||||
| +[Install]
 | ||||
| +WantedBy=cloud-init.target
 | ||||
| diff --git a/rhel/systemd/cloud-config.target b/rhel/systemd/cloud-config.target
 | ||||
| new file mode 100644 | ||||
| index 00000000..ae9b7d02
 | ||||
| --- /dev/null
 | ||||
| +++ b/rhel/systemd/cloud-config.target
 | ||||
| @@ -0,0 +1,11 @@
 | ||||
| +# cloud-init normally emits a "cloud-config" upstart event to inform third
 | ||||
| +# parties that cloud-config is available, which does us no good when we're
 | ||||
| +# using systemd.  cloud-config.target serves as this synchronization point
 | ||||
| +# instead.  Services that would "start on cloud-config" with upstart can
 | ||||
| +# instead use "After=cloud-config.target" and "Wants=cloud-config.target"
 | ||||
| +# as appropriate.
 | ||||
| +
 | ||||
| +[Unit]
 | ||||
| +Description=Cloud-config availability
 | ||||
| +Wants=cloud-init-local.service cloud-init.service
 | ||||
| +After=cloud-init-local.service cloud-init.service
 | ||||
| diff --git a/rhel/systemd/cloud-final.service b/rhel/systemd/cloud-final.service
 | ||||
| new file mode 100644 | ||||
| index 00000000..e281c0cf
 | ||||
| --- /dev/null
 | ||||
| +++ b/rhel/systemd/cloud-final.service
 | ||||
| @@ -0,0 +1,24 @@
 | ||||
| +[Unit]
 | ||||
| +Description=Execute cloud user/final scripts
 | ||||
| +After=network-online.target cloud-config.service rc-local.service
 | ||||
| +Wants=network-online.target cloud-config.service
 | ||||
| +ConditionPathExists=!/etc/cloud/cloud-init.disabled
 | ||||
| +ConditionKernelCommandLine=!cloud-init=disabled
 | ||||
| +
 | ||||
| +[Service]
 | ||||
| +Type=oneshot
 | ||||
| +ExecStart=/usr/bin/cloud-init modules --mode=final
 | ||||
| +RemainAfterExit=yes
 | ||||
| +TimeoutSec=0
 | ||||
| +KillMode=process
 | ||||
| +# Restart NetworkManager if it is present and running.
 | ||||
| +ExecStartPost=/bin/sh -c 'u=NetworkManager.service; \
 | ||||
| + out=$(systemctl show --property=SubState $u) || exit; \
 | ||||
| + [ "$out" = "SubState=running" ] || exit 0; \
 | ||||
| + systemctl reload-or-try-restart $u'
 | ||||
| +
 | ||||
| +# Output needs to appear in instance console output
 | ||||
| +StandardOutput=journal+console
 | ||||
| +
 | ||||
| +[Install]
 | ||||
| +WantedBy=cloud-init.target
 | ||||
| diff --git a/rhel/systemd/cloud-init-local.service b/rhel/systemd/cloud-init-local.service
 | ||||
| new file mode 100644 | ||||
| index 00000000..8f9f6c9f
 | ||||
| --- /dev/null
 | ||||
| +++ b/rhel/systemd/cloud-init-local.service
 | ||||
| @@ -0,0 +1,31 @@
 | ||||
| +[Unit]
 | ||||
| +Description=Initial cloud-init job (pre-networking)
 | ||||
| +DefaultDependencies=no
 | ||||
| +Wants=network-pre.target
 | ||||
| +After=systemd-remount-fs.service
 | ||||
| +Requires=dbus.socket
 | ||||
| +After=dbus.socket
 | ||||
| +Before=NetworkManager.service network.service
 | ||||
| +Before=network-pre.target
 | ||||
| +Before=shutdown.target
 | ||||
| +Before=firewalld.target
 | ||||
| +Conflicts=shutdown.target
 | ||||
| +RequiresMountsFor=/var/lib/cloud
 | ||||
| +ConditionPathExists=!/etc/cloud/cloud-init.disabled
 | ||||
| +ConditionKernelCommandLine=!cloud-init=disabled
 | ||||
| +
 | ||||
| +[Service]
 | ||||
| +Type=oneshot
 | ||||
| +ExecStartPre=/bin/mkdir -p /run/cloud-init
 | ||||
| +ExecStartPre=/sbin/restorecon /run/cloud-init
 | ||||
| +ExecStartPre=/usr/bin/touch /run/cloud-init/enabled
 | ||||
| +ExecStart=/usr/bin/cloud-init init --local
 | ||||
| +ExecStart=/bin/touch /run/cloud-init/network-config-ready
 | ||||
| +RemainAfterExit=yes
 | ||||
| +TimeoutSec=0
 | ||||
| +
 | ||||
| +# Output needs to appear in instance console output
 | ||||
| +StandardOutput=journal+console
 | ||||
| +
 | ||||
| +[Install]
 | ||||
| +WantedBy=cloud-init.target
 | ||||
| diff --git a/rhel/systemd/cloud-init.service b/rhel/systemd/cloud-init.service
 | ||||
| new file mode 100644 | ||||
| index 00000000..0b3d796d
 | ||||
| --- /dev/null
 | ||||
| +++ b/rhel/systemd/cloud-init.service
 | ||||
| @@ -0,0 +1,26 @@
 | ||||
| +[Unit]
 | ||||
| +Description=Initial cloud-init job (metadata service crawler)
 | ||||
| +Wants=cloud-init-local.service
 | ||||
| +Wants=sshd-keygen.service
 | ||||
| +Wants=sshd.service
 | ||||
| +After=cloud-init-local.service
 | ||||
| +After=NetworkManager.service network.service
 | ||||
| +After=NetworkManager-wait-online.service
 | ||||
| +Before=network-online.target
 | ||||
| +Before=sshd-keygen.service
 | ||||
| +Before=sshd.service
 | ||||
| +Before=systemd-user-sessions.service
 | ||||
| +ConditionPathExists=!/etc/cloud/cloud-init.disabled
 | ||||
| +ConditionKernelCommandLine=!cloud-init=disabled
 | ||||
| +
 | ||||
| +[Service]
 | ||||
| +Type=oneshot
 | ||||
| +ExecStart=/usr/bin/cloud-init init
 | ||||
| +RemainAfterExit=yes
 | ||||
| +TimeoutSec=0
 | ||||
| +
 | ||||
| +# Output needs to appear in instance console output
 | ||||
| +StandardOutput=journal+console
 | ||||
| +
 | ||||
| +[Install]
 | ||||
| +WantedBy=cloud-init.target
 | ||||
| diff --git a/rhel/systemd/cloud-init.target b/rhel/systemd/cloud-init.target
 | ||||
| new file mode 100644 | ||||
| index 00000000..083c3b6f
 | ||||
| --- /dev/null
 | ||||
| +++ b/rhel/systemd/cloud-init.target
 | ||||
| @@ -0,0 +1,7 @@
 | ||||
| +# cloud-init target is enabled by cloud-init-generator
 | ||||
| +# To disable it you can either:
 | ||||
| +#  a.) boot with kernel cmdline of 'cloud-init=disabled'
 | ||||
| +#  b.) touch a file /etc/cloud/cloud-init.disabled
 | ||||
| +[Unit]
 | ||||
| +Description=Cloud-init target
 | ||||
| +After=multi-user.target
 | ||||
| diff --git a/setup.py b/setup.py
 | ||||
| index cbacf48e..d5cd01a4 100755
 | ||||
| --- a/setup.py
 | ||||
| +++ b/setup.py
 | ||||
| @@ -125,14 +125,6 @@ INITSYS_FILES = {
 | ||||
|      'sysvinit_deb': [f for f in glob('sysvinit/debian/*') if is_f(f)], | ||||
|      'sysvinit_openrc': [f for f in glob('sysvinit/gentoo/*') if is_f(f)], | ||||
|      'sysvinit_suse': [f for f in glob('sysvinit/suse/*') if is_f(f)], | ||||
| -    'systemd': [render_tmpl(f)
 | ||||
| -                for f in (glob('systemd/*.tmpl') +
 | ||||
| -                          glob('systemd/*.service') +
 | ||||
| -                          glob('systemd/*.target'))
 | ||||
| -                if (is_f(f) and not is_generator(f))],
 | ||||
| -    'systemd.generators': [
 | ||||
| -        render_tmpl(f, mode=0o755)
 | ||||
| -        for f in glob('systemd/*') if is_f(f) and is_generator(f)],
 | ||||
|      'upstart': [f for f in glob('upstart/*') if is_f(f)], | ||||
|  } | ||||
|  INITSYS_ROOTS = { | ||||
| @@ -142,9 +134,6 @@ INITSYS_ROOTS = {
 | ||||
|      'sysvinit_deb': 'etc/init.d', | ||||
|      'sysvinit_openrc': 'etc/init.d', | ||||
|      'sysvinit_suse': 'etc/init.d', | ||||
| -    'systemd': pkg_config_read('systemd', 'systemdsystemunitdir'),
 | ||||
| -    'systemd.generators': pkg_config_read('systemd',
 | ||||
| -                                          'systemdsystemgeneratordir'),
 | ||||
|      'upstart': 'etc/init/', | ||||
|  } | ||||
|  INITSYS_TYPES = sorted([f.partition(".")[0] for f in INITSYS_ROOTS.keys()]) | ||||
| @@ -245,14 +234,11 @@ if not in_virtualenv():
 | ||||
|          INITSYS_ROOTS[k] = "/" + INITSYS_ROOTS[k] | ||||
|   | ||||
|  data_files = [ | ||||
| -    (ETC + '/cloud', [render_tmpl("config/cloud.cfg.tmpl")]),
 | ||||
| +    (ETC + '/bash_completion.d', ['bash_completion/cloud-init']),
 | ||||
|      (ETC + '/cloud/cloud.cfg.d', glob('config/cloud.cfg.d/*')), | ||||
|      (ETC + '/cloud/templates', glob('templates/*')), | ||||
| -    (USR_LIB_EXEC + '/cloud-init', ['tools/ds-identify',
 | ||||
| -                                    'tools/uncloud-init',
 | ||||
| +    (USR_LIB_EXEC + '/cloud-init', ['tools/uncloud-init',
 | ||||
|                                      'tools/write-ssh-key-fingerprints']), | ||||
| -    (USR + '/share/bash-completion/completions',
 | ||||
| -     ['bash_completion/cloud-init']),
 | ||||
|      (USR + '/share/doc/cloud-init', [f for f in glob('doc/*') if is_f(f)]), | ||||
|      (USR + '/share/doc/cloud-init/examples', | ||||
|          [f for f in glob('doc/examples/*') if is_f(f)]), | ||||
| @@ -263,8 +249,7 @@ if not platform.system().endswith('BSD'):
 | ||||
|      data_files.extend([ | ||||
|          (ETC + '/NetworkManager/dispatcher.d/', | ||||
|           ['tools/hook-network-manager']), | ||||
| -        (ETC + '/dhcp/dhclient-exit-hooks.d/', ['tools/hook-dhclient']),
 | ||||
| -        (LIB + '/udev/rules.d', [f for f in glob('udev/*.rules')])
 | ||||
| +        ('/usr/lib/udev/rules.d', [f for f in glob('udev/*.rules')])
 | ||||
|      ]) | ||||
|  # Use a subclass for install that handles | ||||
|  # adding on the right init system configuration files | ||||
| @@ -286,8 +271,6 @@ setuptools.setup(
 | ||||
|      scripts=['tools/cloud-init-per'], | ||||
|      license='Dual-licensed under GPLv3 or Apache 2.0', | ||||
|      data_files=data_files, | ||||
| -    install_requires=requirements,
 | ||||
| -    cmdclass=cmdclass,
 | ||||
|      entry_points={ | ||||
|          'console_scripts': [ | ||||
|              'cloud-init = cloudinit.cmd.main:main', | ||||
| diff --git a/tools/read-version b/tools/read-version
 | ||||
| index 02c90643..79755f78 100755
 | ||||
| --- a/tools/read-version
 | ||||
| +++ b/tools/read-version
 | ||||
| @@ -71,32 +71,8 @@ version_long = None
 | ||||
|  is_release_branch_ci = ( | ||||
|      os.environ.get("TRAVIS_PULL_REQUEST_BRANCH", "").startswith("upstream/") | ||||
|  ) | ||||
| -if is_gitdir(_tdir) and which("git") and not is_release_branch_ci:
 | ||||
| -    flags = []
 | ||||
| -    if use_tags:
 | ||||
| -        flags = ['--tags']
 | ||||
| -    cmd = ['git', 'describe', '--abbrev=8', '--match=[0-9]*'] + flags
 | ||||
| -
 | ||||
| -    try:
 | ||||
| -        version = tiny_p(cmd).strip()
 | ||||
| -    except RuntimeError:
 | ||||
| -        version = None
 | ||||
| -
 | ||||
| -    if version is None or not version.startswith(src_version):
 | ||||
| -        sys.stderr.write("git describe version (%s) differs from "
 | ||||
| -                         "cloudinit.version (%s)\n" % (version, src_version))
 | ||||
| -        sys.stderr.write(
 | ||||
| -            "Please get the latest upstream tags.\n"
 | ||||
| -            "As an example, this can be done with the following:\n"
 | ||||
| -            "$ git remote add upstream https://git.launchpad.net/cloud-init\n"
 | ||||
| -            "$ git fetch upstream --tags\n"
 | ||||
| -        )
 | ||||
| -        sys.exit(1)
 | ||||
| -
 | ||||
| -    version_long = tiny_p(cmd + ["--long"]).strip()
 | ||||
| -else:
 | ||||
| -    version = src_version
 | ||||
| -    version_long = None
 | ||||
| +version = src_version
 | ||||
| +version_long = None
 | ||||
|   | ||||
|  # version is X.Y.Z[+xxx.gHASH] | ||||
|  # version_long is None or X.Y.Z-xxx-gHASH | ||||
--
2.27.0

@@ -0,0 +1,283 @@
| From 3f895d7236fab4f12482435829b530022a2205ec Mon Sep 17 00:00:00 2001 | ||||
| From: Eduardo Otubo <otubo@redhat.com> | ||||
| Date: Fri, 7 May 2021 13:36:06 +0200 | ||||
| Subject: Do not write NM_CONTROLLED=no in generated interface config  files | ||||
| 
 | ||||
| Conflicts 20.3: | ||||
|  - Not appplying patch on cloudinit/net/sysconfig.py since it now has a | ||||
| mechanism to identify if cloud-init is running on RHEL, having the | ||||
| correct settings for NM_CONTROLLED. | ||||
| 
 | ||||
| Merged patches (21.1): | ||||
| - ecbace48 sysconfig: Don't write BOOTPROTO=dhcp for ipv6 dhcp
 | ||||
| - a1a00383 include 'NOZEROCONF=yes' in /etc/sysconfig/network
 | ||||
| X-downstream-only: true | ||||
| Signed-off-by: Eduardo Otubo <otubo@redhat.com> | ||||
| Signed-off-by: Ryan McCabe <rmccabe@redhat.com> | ||||
| ---
 | ||||
|  cloudinit/net/sysconfig.py  | 13 +++++++++++-- | ||||
|  tests/unittests/test_net.py | 28 ---------------------------- | ||||
|  2 files changed, 11 insertions(+), 30 deletions(-) | ||||
| 
 | ||||
| diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
 | ||||
| index 99a4bae4..d5440998 100644
 | ||||
| --- a/cloudinit/net/sysconfig.py
 | ||||
| +++ b/cloudinit/net/sysconfig.py
 | ||||
| @@ -289,7 +289,7 @@ class Renderer(renderer.Renderer):
 | ||||
|      #                                         details about this) | ||||
|   | ||||
|      iface_defaults = { | ||||
| -        'rhel': {'ONBOOT': True, 'USERCTL': False, 'NM_CONTROLLED': False,
 | ||||
| +        'rhel': {'ONBOOT': True, 'USERCTL': False,
 | ||||
|                   'BOOTPROTO': 'none'}, | ||||
|          'suse': {'BOOTPROTO': 'static', 'STARTMODE': 'auto'}, | ||||
|      } | ||||
| @@ -925,7 +925,16 @@ class Renderer(renderer.Renderer):
 | ||||
|          # Distros configuring /etc/sysconfig/network as a file e.g. Centos | ||||
|          if sysconfig_path.endswith('network'): | ||||
|              util.ensure_dir(os.path.dirname(sysconfig_path)) | ||||
| -            netcfg = [_make_header(), 'NETWORKING=yes']
 | ||||
| +            netcfg = []
 | ||||
| +            for line in util.load_file(sysconfig_path, quiet=True).split('\n'):
 | ||||
| +                if 'cloud-init' in line:
 | ||||
| +                    break
 | ||||
| +                if not line.startswith(('NETWORKING=',
 | ||||
| +                                        'IPV6_AUTOCONF=',
 | ||||
| +                                        'NETWORKING_IPV6=')):
 | ||||
| +                    netcfg.append(line)
 | ||||
| +            # Now generate the cloud-init portion of sysconfig/network
 | ||||
| +            netcfg.extend([_make_header(), 'NETWORKING=yes'])
 | ||||
|              if network_state.use_ipv6: | ||||
|                  netcfg.append('NETWORKING_IPV6=yes') | ||||
|                  netcfg.append('IPV6_AUTOCONF=no') | ||||
| diff --git a/tests/unittests/test_net.py b/tests/unittests/test_net.py
 | ||||
| index 38d934d4..c67b5fcc 100644
 | ||||
| --- a/tests/unittests/test_net.py
 | ||||
| +++ b/tests/unittests/test_net.py
 | ||||
| @@ -535,7 +535,6 @@ GATEWAY=172.19.3.254
 | ||||
|  HWADDR=fa:16:3e:ed:9a:59 | ||||
|  IPADDR=172.19.1.34 | ||||
|  NETMASK=255.255.252.0 | ||||
| -NM_CONTROLLED=no
 | ||||
|  ONBOOT=yes | ||||
|  TYPE=Ethernet | ||||
|  USERCTL=no | ||||
| @@ -633,7 +632,6 @@ IPADDR=172.19.1.34
 | ||||
|  IPADDR1=10.0.0.10 | ||||
|  NETMASK=255.255.252.0 | ||||
|  NETMASK1=255.255.255.0 | ||||
| -NM_CONTROLLED=no
 | ||||
|  ONBOOT=yes | ||||
|  TYPE=Ethernet | ||||
|  USERCTL=no | ||||
| @@ -756,7 +754,6 @@ IPV6_AUTOCONF=no
 | ||||
|  IPV6_DEFAULTGW=2001:DB8::1 | ||||
|  IPV6_FORCE_ACCEPT_RA=no | ||||
|  NETMASK=255.255.252.0 | ||||
| -NM_CONTROLLED=no
 | ||||
|  ONBOOT=yes | ||||
|  TYPE=Ethernet | ||||
|  USERCTL=no | ||||
| @@ -884,7 +881,6 @@ NETWORK_CONFIGS = {
 | ||||
|                  BOOTPROTO=none | ||||
|                  DEVICE=eth1 | ||||
|                  HWADDR=cf:d6:af:48:e8:80 | ||||
| -                NM_CONTROLLED=no
 | ||||
|                  ONBOOT=yes | ||||
|                  TYPE=Ethernet | ||||
|                  USERCTL=no"""), | ||||
| @@ -901,7 +897,6 @@ NETWORK_CONFIGS = {
 | ||||
|                  IPADDR=192.168.21.3 | ||||
|                  NETMASK=255.255.255.0 | ||||
|                  METRIC=10000 | ||||
| -                NM_CONTROLLED=no
 | ||||
|                  ONBOOT=yes | ||||
|                  TYPE=Ethernet | ||||
|                  USERCTL=no"""), | ||||
| @@ -1032,7 +1027,6 @@ NETWORK_CONFIGS = {
 | ||||
|                  IPV6_AUTOCONF=no | ||||
|                  IPV6_FORCE_ACCEPT_RA=no | ||||
|                  NETMASK=255.255.255.0 | ||||
| -                NM_CONTROLLED=no
 | ||||
|                  ONBOOT=yes | ||||
|                  TYPE=Ethernet | ||||
|                  USERCTL=no | ||||
| @@ -1737,7 +1731,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
 | ||||
|                  DHCPV6C=yes | ||||
|                  IPV6INIT=yes | ||||
|                  MACADDR=aa:bb:cc:dd:ee:ff | ||||
| -                NM_CONTROLLED=no
 | ||||
|                  ONBOOT=yes | ||||
|                  TYPE=Bond | ||||
|                  USERCTL=no"""), | ||||
| @@ -1745,7 +1738,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
 | ||||
|                  BOOTPROTO=dhcp | ||||
|                  DEVICE=bond0.200 | ||||
|                  DHCLIENT_SET_DEFAULT_ROUTE=no | ||||
| -                NM_CONTROLLED=no
 | ||||
|                  ONBOOT=yes | ||||
|                  PHYSDEV=bond0 | ||||
|                  USERCTL=no | ||||
| @@ -1763,7 +1755,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
 | ||||
|                  IPV6_DEFAULTGW=2001:4800:78ff:1b::1 | ||||
|                  MACADDR=bb:bb:bb:bb:bb:aa | ||||
|                  NETMASK=255.255.255.0 | ||||
| -                NM_CONTROLLED=no
 | ||||
|                  ONBOOT=yes | ||||
|                  PRIO=22 | ||||
|                  STP=no | ||||
| @@ -1773,7 +1764,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
 | ||||
|                  BOOTPROTO=none | ||||
|                  DEVICE=eth0 | ||||
|                  HWADDR=c0:d6:9f:2c:e8:80 | ||||
| -                NM_CONTROLLED=no
 | ||||
|                  ONBOOT=yes | ||||
|                  TYPE=Ethernet | ||||
|                  USERCTL=no"""), | ||||
| @@ -1790,7 +1780,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
 | ||||
|                  MTU=1500 | ||||
|                  NETMASK=255.255.255.0 | ||||
|                  NETMASK1=255.255.255.0 | ||||
| -                NM_CONTROLLED=no
 | ||||
|                  ONBOOT=yes | ||||
|                  PHYSDEV=eth0 | ||||
|                  USERCTL=no | ||||
| @@ -1800,7 +1789,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
 | ||||
|                  DEVICE=eth1 | ||||
|                  HWADDR=aa:d6:9f:2c:e8:80 | ||||
|                  MASTER=bond0 | ||||
| -                NM_CONTROLLED=no
 | ||||
|                  ONBOOT=yes | ||||
|                  SLAVE=yes | ||||
|                  TYPE=Ethernet | ||||
| @@ -1810,7 +1798,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
 | ||||
|                  DEVICE=eth2 | ||||
|                  HWADDR=c0:bb:9f:2c:e8:80 | ||||
|                  MASTER=bond0 | ||||
| -                NM_CONTROLLED=no
 | ||||
|                  ONBOOT=yes | ||||
|                  SLAVE=yes | ||||
|                  TYPE=Ethernet | ||||
| @@ -1820,7 +1807,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
 | ||||
|                  BRIDGE=br0 | ||||
|                  DEVICE=eth3 | ||||
|                  HWADDR=66:bb:9f:2c:e8:80 | ||||
| -                NM_CONTROLLED=no
 | ||||
|                  ONBOOT=yes | ||||
|                  TYPE=Ethernet | ||||
|                  USERCTL=no"""), | ||||
| @@ -1829,7 +1815,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
 | ||||
|                  BRIDGE=br0 | ||||
|                  DEVICE=eth4 | ||||
|                  HWADDR=98:bb:9f:2c:e8:80 | ||||
| -                NM_CONTROLLED=no
 | ||||
|                  ONBOOT=yes | ||||
|                  TYPE=Ethernet | ||||
|                  USERCTL=no"""), | ||||
| @@ -1838,7 +1823,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
 | ||||
|                  DEVICE=eth5 | ||||
|                  DHCLIENT_SET_DEFAULT_ROUTE=no | ||||
|                  HWADDR=98:bb:9f:2c:e8:8a | ||||
| -                NM_CONTROLLED=no
 | ||||
|                  ONBOOT=no | ||||
|                  TYPE=Ethernet | ||||
|                  USERCTL=no"""), | ||||
| @@ -2294,7 +2278,6 @@ iface bond0 inet6 static
 | ||||
|          MTU=9000 | ||||
|          NETMASK=255.255.255.0 | ||||
|          NETMASK1=255.255.255.0 | ||||
| -        NM_CONTROLLED=no
 | ||||
|          ONBOOT=yes | ||||
|          TYPE=Bond | ||||
|          USERCTL=no | ||||
| @@ -2304,7 +2287,6 @@ iface bond0 inet6 static
 | ||||
|          DEVICE=bond0s0 | ||||
|          HWADDR=aa:bb:cc:dd:e8:00 | ||||
|          MASTER=bond0 | ||||
| -        NM_CONTROLLED=no
 | ||||
|          ONBOOT=yes | ||||
|          SLAVE=yes | ||||
|          TYPE=Ethernet | ||||
| @@ -2326,7 +2308,6 @@ iface bond0 inet6 static
 | ||||
|          DEVICE=bond0s1 | ||||
|          HWADDR=aa:bb:cc:dd:e8:01 | ||||
|          MASTER=bond0 | ||||
| -        NM_CONTROLLED=no
 | ||||
|          ONBOOT=yes | ||||
|          SLAVE=yes | ||||
|          TYPE=Ethernet | ||||
| @@ -2383,7 +2364,6 @@ iface bond0 inet6 static
 | ||||
|                  BOOTPROTO=none | ||||
|                  DEVICE=en0 | ||||
|                  HWADDR=aa:bb:cc:dd:e8:00 | ||||
| -                NM_CONTROLLED=no
 | ||||
|                  ONBOOT=yes | ||||
|                  TYPE=Ethernet | ||||
|                  USERCTL=no"""), | ||||
| @@ -2402,7 +2382,6 @@ iface bond0 inet6 static
 | ||||
|                  MTU=2222 | ||||
|                  NETMASK=255.255.255.0 | ||||
|                  NETMASK1=255.255.255.0 | ||||
| -                NM_CONTROLLED=no
 | ||||
|                  ONBOOT=yes | ||||
|                  PHYSDEV=en0 | ||||
|                  USERCTL=no | ||||
| @@ -2467,7 +2446,6 @@ iface bond0 inet6 static
 | ||||
|                  DEVICE=br0 | ||||
|                  IPADDR=192.168.2.2 | ||||
|                  NETMASK=255.255.255.0 | ||||
| -                NM_CONTROLLED=no
 | ||||
|                  ONBOOT=yes | ||||
|                  PRIO=22 | ||||
|                  STP=no | ||||
| @@ -2591,7 +2569,6 @@ iface bond0 inet6 static
 | ||||
|                  HWADDR=52:54:00:12:34:00 | ||||
|                  IPADDR=192.168.1.2 | ||||
|                  NETMASK=255.255.255.0 | ||||
| -                NM_CONTROLLED=no
 | ||||
|                  ONBOOT=no | ||||
|                  TYPE=Ethernet | ||||
|                  USERCTL=no | ||||
| @@ -2601,7 +2578,6 @@ iface bond0 inet6 static
 | ||||
|                  DEVICE=eth1 | ||||
|                  HWADDR=52:54:00:12:34:aa | ||||
|                  MTU=1480 | ||||
| -                NM_CONTROLLED=no
 | ||||
|                  ONBOOT=yes | ||||
|                  TYPE=Ethernet | ||||
|                  USERCTL=no | ||||
| @@ -2610,7 +2586,6 @@ iface bond0 inet6 static
 | ||||
|                  BOOTPROTO=none | ||||
|                  DEVICE=eth2 | ||||
|                  HWADDR=52:54:00:12:34:ff | ||||
| -                NM_CONTROLLED=no
 | ||||
|                  ONBOOT=no | ||||
|                  TYPE=Ethernet | ||||
|                  USERCTL=no | ||||
| @@ -3027,7 +3002,6 @@ class TestRhelSysConfigRendering(CiTestCase):
 | ||||
|  BOOTPROTO=dhcp | ||||
|  DEVICE=eth1000 | ||||
|  HWADDR=07-1c-c6-75-a4-be | ||||
| -NM_CONTROLLED=no
 | ||||
|  ONBOOT=yes | ||||
|  TYPE=Ethernet | ||||
|  USERCTL=no | ||||
| @@ -3148,7 +3122,6 @@ GATEWAY=10.0.2.2
 | ||||
|  HWADDR=52:54:00:12:34:00 | ||||
|  IPADDR=10.0.2.15 | ||||
|  NETMASK=255.255.255.0 | ||||
| -NM_CONTROLLED=no
 | ||||
|  ONBOOT=yes | ||||
|  TYPE=Ethernet | ||||
|  USERCTL=no | ||||
| @@ -3218,7 +3191,6 @@ USERCTL=no
 | ||||
|  # | ||||
|  BOOTPROTO=dhcp | ||||
|  DEVICE=eth0 | ||||
| -NM_CONTROLLED=no
 | ||||
|  ONBOOT=yes | ||||
|  TYPE=Ethernet | ||||
|  USERCTL=no | ||||
--
2.27.0

SOURCES/0003-limit-permissions-on-def_log_file.patch (new file, 69 lines)
@@ -0,0 +1,69 @@
| From 680ebcb46d1db6f02f2b21c158b4a9af2d789ba3 Mon Sep 17 00:00:00 2001 | ||||
| From: Eduardo Otubo <otubo@redhat.com> | ||||
| Date: Fri, 7 May 2021 13:36:08 +0200 | ||||
| Subject: limit permissions on def_log_file | ||||
| 
 | ||||
| This sets a default mode of 0600 on def_log_file, and makes this | ||||
| configurable via the def_log_file_mode option in cloud.cfg. | ||||
| 
 | ||||
| LP: #1541196 | ||||
| Resolves: rhbz#1424612 | ||||
| X-approved-upstream: true | ||||
| 
 | ||||
| Conflicts 21.1: | ||||
|     cloudinit/stages.py: adjusting call of ensure_file() to use more | ||||
| recent version | ||||
| 
 | ||||
| Signed-off-by: Eduardo Otubo <otubo@redhat.com> | ||||
| ---
 | ||||
|  cloudinit/settings.py         | 1 + | ||||
|  cloudinit/stages.py           | 1 + | ||||
|  doc/examples/cloud-config.txt | 4 ++++ | ||||
|  3 files changed, 6 insertions(+) | ||||
| 
 | ||||
| diff --git a/cloudinit/settings.py b/cloudinit/settings.py
 | ||||
| index e690c0fd..43a1490c 100644
 | ||||
| --- a/cloudinit/settings.py
 | ||||
| +++ b/cloudinit/settings.py
 | ||||
| @@ -46,6 +46,7 @@ CFG_BUILTIN = {
 | ||||
|          'None', | ||||
|      ], | ||||
|      'def_log_file': '/var/log/cloud-init.log', | ||||
| +    'def_log_file_mode': 0o600,
 | ||||
|      'log_cfgs': [], | ||||
|      'mount_default_fields': [None, None, 'auto', 'defaults,nofail', '0', '2'], | ||||
|      'ssh_deletekeys': False, | ||||
| diff --git a/cloudinit/stages.py b/cloudinit/stages.py
 | ||||
| index 3ef4491c..83e25dd1 100644
 | ||||
| --- a/cloudinit/stages.py
 | ||||
| +++ b/cloudinit/stages.py
 | ||||
| @@ -147,6 +147,7 @@ class Init(object):
 | ||||
|      def _initialize_filesystem(self): | ||||
|          util.ensure_dirs(self._initial_subdirs()) | ||||
|          log_file = util.get_cfg_option_str(self.cfg, 'def_log_file') | ||||
| +        log_file_mode = util.get_cfg_option_int(self.cfg, 'def_log_file_mode')
 | ||||
|          if log_file: | ||||
|              util.ensure_file(log_file, preserve_mode=True) | ||||
|              perms = self.cfg.get('syslog_fix_perms') | ||||
| diff --git a/doc/examples/cloud-config.txt b/doc/examples/cloud-config.txt
 | ||||
| index de9a0f87..bb33ad45 100644
 | ||||
| --- a/doc/examples/cloud-config.txt
 | ||||
| +++ b/doc/examples/cloud-config.txt
 | ||||
| @@ -414,10 +414,14 @@ timezone: US/Eastern
 | ||||
|  # if syslog_fix_perms is a list, it will iterate through and use the | ||||
|  # first pair that does not raise error. | ||||
|  # | ||||
| +# 'def_log_file' will be created with mode 'def_log_file_mode', which
 | ||||
| +# is specified as a numeric value and defaults to 0600.
 | ||||
| +#
 | ||||
|  # the default values are '/var/log/cloud-init.log' and 'syslog:adm' | ||||
|  # the value of 'def_log_file' should match what is configured in logging | ||||
|  # if either is empty, then no change of ownership will be done | ||||
|  def_log_file: /var/log/my-logging-file.log | ||||
| +def_log_file_mode: 0600
 | ||||
|  syslog_fix_perms: syslog:root | ||||
|   | ||||
|  # you can set passwords for a user or multiple users | ||||
--
2.27.0
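As a quick illustration of the def_log_file_mode option introduced by the patch above, the following standalone sketch shows the behaviour the commit message describes: the log file is created with a configurable numeric mode that defaults to 0600. This is not cloud-init's implementation, and the log path used here is a placeholder for the example.

# Illustrative sketch only: apply a configurable "def_log_file_mode" when
# creating the log file, defaulting to 0o600 as in the patch above.
import os

cfg = {
    "def_log_file": "/tmp/cloud-init-example.log",  # placeholder path for the example
    "def_log_file_mode": 0o600,                     # settable in cloud.cfg per the patch
}

log_file = cfg.get("def_log_file")
log_file_mode = int(cfg.get("def_log_file_mode", 0o600))

if log_file:
    # Create the file if it does not exist and restrict its permissions.
    fd = os.open(log_file, os.O_CREAT | os.O_APPEND, log_file_mode)
    os.close(fd)
    os.chmod(log_file, log_file_mode)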
@@ -0,0 +1,43 @@
| From 244a3f9059fc95a5e644bd7868aed8060d9edc61 Mon Sep 17 00:00:00 2001 | ||||
| From: Eduardo Otubo <otubo@redhat.com> | ||||
| Date: Fri, 4 Feb 2022 16:04:31 +0100 | ||||
| Subject: [PATCH] Add _netdev option to mount Azure ephemeral disk (#1213) | ||||
| 
 | ||||
| RH-Author: Eduardo Otubo <otubo@redhat.com> | ||||
| RH-MergeRequest: 19: Add _netdev option to mount Azure ephemeral disk (#1213) | ||||
| RH-Commit: [1/1] e44291a50634594b8a0505cab3415d5c58cc34c4 (otubo/cloud-init-src) | ||||
| RH-Bugzilla: 1998445 | ||||
| RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com> | ||||
| RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com> | ||||
| RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| 
 | ||||
| The ephemeral disk depends on a functional network to be mounted. Even | ||||
| though it depends on cloud-init.service, sometimes an ordering cycle is | ||||
| noticed on the instance. If the option "_netdev" is added the problem is | ||||
| gone. | ||||
| 
 | ||||
| rhbz: #1998445 | ||||
| 
 | ||||
| Signed-off-by: Eduardo Otubo otubo@redhat.com | ||||
| ---
 | ||||
|  cloudinit/config/cc_mounts.py | 4 +++- | ||||
|  1 file changed, 3 insertions(+), 1 deletion(-) | ||||
| 
 | ||||
| diff --git a/cloudinit/config/cc_mounts.py b/cloudinit/config/cc_mounts.py
 | ||||
| index c22d1698..5125f17c 100644
 | ||||
| --- a/cloudinit/config/cc_mounts.py
 | ||||
| +++ b/cloudinit/config/cc_mounts.py
 | ||||
| @@ -362,7 +362,9 @@ def handle(_name, cfg, cloud, log, _args):
 | ||||
|      def_mnt_opts = "defaults,nobootwait" | ||||
|      uses_systemd = cloud.distro.uses_systemd() | ||||
|      if uses_systemd: | ||||
| -        def_mnt_opts = "defaults,nofail,x-systemd.requires=cloud-init.service"
 | ||||
| +        def_mnt_opts = (
 | ||||
| +            "defaults,nofail, x-systemd.requires=cloud-init.service, _netdev"
 | ||||
| +        )
 | ||||
|   | ||||
|      defvals = [None, None, "auto", def_mnt_opts, "0", "2"] | ||||
|      defvals = cfg.get("mount_default_fields", defvals) | ||||
--
2.27.0
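For context on the option string changed above: on a systemd system these defaults become the fourth field of the generated fstab entry, and _netdev is what makes the mount wait for the network. The sketch below only assembles such an entry for illustration; the device path and mount point are placeholders, not values cloud-init computes, and the exact option string cloud-init writes is the one shown in the patch.

# Illustrative sketch: build an fstab line whose default mount options include
# _netdev so the mount is ordered after the network is up.
def_mnt_opts = "defaults,nofail,x-systemd.requires=cloud-init.service,_netdev"

device = "/dev/disk/azure/resource-part1"   # placeholder device path
mount_point = "/mnt"                        # placeholder mount point
fs_type = "auto"

fstab_line = "\t".join([device, mount_point, fs_type, def_mnt_opts, "0", "2"])
print(fstab_line)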
							
								
								
									
SOURCES/ci-Add-flexibility-to-IMDS-api-version-793.patch (new file, 295 lines)
@@ -0,0 +1,295 @@
| From f844e9c263e59a623ca8c647bd87bf4f91374d54 Mon Sep 17 00:00:00 2001 | ||||
| From: Thomas Stringer <thstring@microsoft.com> | ||||
| Date: Wed, 3 Mar 2021 11:07:43 -0500 | ||||
| Subject: [PATCH 1/7] Add flexibility to IMDS api-version (#793) | ||||
| 
 | ||||
| RH-Author: Eduardo Otubo <otubo@redhat.com> | ||||
| RH-MergeRequest: 18: Add support for userdata on Azure from IMDS | ||||
| RH-Commit: [1/7] 99a3db20e3f277a2f12ea21e937e06939434a2ca (otubo/cloud-init-src) | ||||
| RH-Bugzilla: 2042351 | ||||
| RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com> | ||||
| RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| 
 | ||||
| Add flexibility to IMDS api-version by having both a desired IMDS | ||||
| api-version and a minimum api-version. The desired api-version will | ||||
| be used first, and if that fails it will fall back to the minimum | ||||
| api-version. | ||||
| ---
 | ||||
|  cloudinit/sources/DataSourceAzure.py          | 113 ++++++++++++++---- | ||||
|  tests/unittests/test_datasource/test_azure.py |  42 ++++++- | ||||
|  2 files changed, 129 insertions(+), 26 deletions(-) | ||||
| 
 | ||||
| diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
 | ||||
| index 553b5a7e..de1452ce 100755
 | ||||
| --- a/cloudinit/sources/DataSourceAzure.py
 | ||||
| +++ b/cloudinit/sources/DataSourceAzure.py
 | ||||
| @@ -78,17 +78,15 @@ AGENT_SEED_DIR = '/var/lib/waagent'
 | ||||
|  # In the event where the IMDS primary server is not | ||||
|  # available, it takes 1s to fallback to the secondary one | ||||
|  IMDS_TIMEOUT_IN_SECONDS = 2 | ||||
| -IMDS_URL = "http://169.254.169.254/metadata/"
 | ||||
| -IMDS_VER = "2019-06-01"
 | ||||
| -IMDS_VER_PARAM = "api-version={}".format(IMDS_VER)
 | ||||
| +IMDS_URL = "http://169.254.169.254/metadata"
 | ||||
| +IMDS_VER_MIN = "2019-06-01"
 | ||||
| +IMDS_VER_WANT = "2020-09-01"
 | ||||
|   | ||||
|   | ||||
|  class metadata_type(Enum): | ||||
| -    compute = "{}instance?{}".format(IMDS_URL, IMDS_VER_PARAM)
 | ||||
| -    network = "{}instance/network?{}".format(IMDS_URL,
 | ||||
| -                                             IMDS_VER_PARAM)
 | ||||
| -    reprovisiondata = "{}reprovisiondata?{}".format(IMDS_URL,
 | ||||
| -                                                    IMDS_VER_PARAM)
 | ||||
| +    compute = "{}/instance".format(IMDS_URL)
 | ||||
| +    network = "{}/instance/network".format(IMDS_URL)
 | ||||
| +    reprovisiondata = "{}/reprovisiondata".format(IMDS_URL)
 | ||||
|   | ||||
|   | ||||
|  PLATFORM_ENTROPY_SOURCE = "/sys/firmware/acpi/tables/OEM0" | ||||
| @@ -349,6 +347,8 @@ class DataSourceAzure(sources.DataSource):
 | ||||
|          self.update_events['network'].add(EventType.BOOT) | ||||
|          self._ephemeral_dhcp_ctx = None | ||||
|   | ||||
| +        self.failed_desired_api_version = False
 | ||||
| +
 | ||||
|      def __str__(self): | ||||
|          root = sources.DataSource.__str__(self) | ||||
|          return "%s [seed=%s]" % (root, self.seed) | ||||
| @@ -520,8 +520,10 @@ class DataSourceAzure(sources.DataSource):
 | ||||
|                      self._wait_for_all_nics_ready() | ||||
|                  ret = self._reprovision() | ||||
|   | ||||
| -            imds_md = get_metadata_from_imds(
 | ||||
| -                self.fallback_interface, retries=10)
 | ||||
| +            imds_md = self.get_imds_data_with_api_fallback(
 | ||||
| +                self.fallback_interface,
 | ||||
| +                retries=10
 | ||||
| +            )
 | ||||
|              (md, userdata_raw, cfg, files) = ret | ||||
|              self.seed = cdev | ||||
|              crawled_data.update({ | ||||
| @@ -652,6 +654,57 @@ class DataSourceAzure(sources.DataSource):
 | ||||
|              self.ds_cfg['data_dir'], crawled_data['files'], dirmode=0o700) | ||||
|          return True | ||||
|   | ||||
| +    @azure_ds_telemetry_reporter
 | ||||
| +    def get_imds_data_with_api_fallback(
 | ||||
| +            self,
 | ||||
| +            fallback_nic,
 | ||||
| +            retries,
 | ||||
| +            md_type=metadata_type.compute):
 | ||||
| +        """
 | ||||
| +        Wrapper for get_metadata_from_imds so that we can have flexibility
 | ||||
| +        in which IMDS api-version we use. If a particular instance of IMDS
 | ||||
| +        does not have the api version that is desired, we want to make
 | ||||
| +        this fault tolerant and fall back to a good known minimum api
 | ||||
| +        version.
 | ||||
| +        """
 | ||||
| +
 | ||||
| +        if not self.failed_desired_api_version:
 | ||||
| +            for _ in range(retries):
 | ||||
| +                try:
 | ||||
| +                    LOG.info(
 | ||||
| +                        "Attempting IMDS api-version: %s",
 | ||||
| +                        IMDS_VER_WANT
 | ||||
| +                    )
 | ||||
| +                    return get_metadata_from_imds(
 | ||||
| +                        fallback_nic=fallback_nic,
 | ||||
| +                        retries=0,
 | ||||
| +                        md_type=md_type,
 | ||||
| +                        api_version=IMDS_VER_WANT
 | ||||
| +                    )
 | ||||
| +                except UrlError as err:
 | ||||
| +                    LOG.info(
 | ||||
| +                        "UrlError with IMDS api-version: %s",
 | ||||
| +                        IMDS_VER_WANT
 | ||||
| +                    )
 | ||||
| +                    if err.code == 400:
 | ||||
| +                        log_msg = "Fall back to IMDS api-version: {}".format(
 | ||||
| +                            IMDS_VER_MIN
 | ||||
| +                        )
 | ||||
| +                        report_diagnostic_event(
 | ||||
| +                            log_msg,
 | ||||
| +                            logger_func=LOG.info
 | ||||
| +                        )
 | ||||
| +                        self.failed_desired_api_version = True
 | ||||
| +                        break
 | ||||
| +
 | ||||
| +        LOG.info("Using IMDS api-version: %s", IMDS_VER_MIN)
 | ||||
| +        return get_metadata_from_imds(
 | ||||
| +            fallback_nic=fallback_nic,
 | ||||
| +            retries=retries,
 | ||||
| +            md_type=md_type,
 | ||||
| +            api_version=IMDS_VER_MIN
 | ||||
| +        )
 | ||||
| +
 | ||||
|      def device_name_to_device(self, name): | ||||
|          return self.ds_cfg['disk_aliases'].get(name) | ||||
|   | ||||
| @@ -880,10 +933,11 @@ class DataSourceAzure(sources.DataSource):
 | ||||
|          # primary nic is being attached first helps here. Otherwise each nic | ||||
|          # could add several seconds of delay. | ||||
|          try: | ||||
| -            imds_md = get_metadata_from_imds(
 | ||||
| +            imds_md = self.get_imds_data_with_api_fallback(
 | ||||
|                  ifname, | ||||
|                  5, | ||||
| -                metadata_type.network)
 | ||||
| +                metadata_type.network
 | ||||
| +            )
 | ||||
|          except Exception as e: | ||||
|              LOG.warning( | ||||
|                  "Failed to get network metadata using nic %s. Attempt to " | ||||
| @@ -1017,7 +1071,10 @@ class DataSourceAzure(sources.DataSource):
 | ||||
|      def _poll_imds(self): | ||||
|          """Poll IMDS for the new provisioning data until we get a valid | ||||
|          response. Then return the returned JSON object.""" | ||||
| -        url = metadata_type.reprovisiondata.value
 | ||||
| +        url = "{}?api-version={}".format(
 | ||||
| +            metadata_type.reprovisiondata.value,
 | ||||
| +            IMDS_VER_MIN
 | ||||
| +        )
 | ||||
|          headers = {"Metadata": "true"} | ||||
|          nl_sock = None | ||||
|          report_ready = bool(not os.path.isfile(REPORTED_READY_MARKER_FILE)) | ||||
| @@ -2059,7 +2116,8 @@ def _generate_network_config_from_fallback_config() -> dict:
 | ||||
|  @azure_ds_telemetry_reporter | ||||
|  def get_metadata_from_imds(fallback_nic, | ||||
|                             retries, | ||||
| -                           md_type=metadata_type.compute):
 | ||||
| +                           md_type=metadata_type.compute,
 | ||||
| +                           api_version=IMDS_VER_MIN):
 | ||||
|      """Query Azure's instance metadata service, returning a dictionary. | ||||
|   | ||||
|      If network is not up, setup ephemeral dhcp on fallback_nic to talk to the | ||||
| @@ -2069,13 +2127,16 @@ def get_metadata_from_imds(fallback_nic,
 | ||||
|      @param fallback_nic: String. The name of the nic which requires active | ||||
|          network in order to query IMDS. | ||||
|      @param retries: The number of retries of the IMDS_URL. | ||||
| +    @param md_type: Metadata type for IMDS request.
 | ||||
| +    @param api_version: IMDS api-version to use in the request.
 | ||||
|   | ||||
|      @return: A dict of instance metadata containing compute and network | ||||
|          info. | ||||
|      """ | ||||
|      kwargs = {'logfunc': LOG.debug, | ||||
|                'msg': 'Crawl of Azure Instance Metadata Service (IMDS)', | ||||
| -              'func': _get_metadata_from_imds, 'args': (retries, md_type,)}
 | ||||
| +              'func': _get_metadata_from_imds,
 | ||||
| +              'args': (retries, md_type, api_version,)}
 | ||||
|      if net.is_up(fallback_nic): | ||||
|          return util.log_time(**kwargs) | ||||
|      else: | ||||
| @@ -2091,20 +2152,26 @@ def get_metadata_from_imds(fallback_nic,
 | ||||
|   | ||||
|   | ||||
|  @azure_ds_telemetry_reporter | ||||
| -def _get_metadata_from_imds(retries, md_type=metadata_type.compute):
 | ||||
| -
 | ||||
| -    url = md_type.value
 | ||||
| +def _get_metadata_from_imds(
 | ||||
| +        retries,
 | ||||
| +        md_type=metadata_type.compute,
 | ||||
| +        api_version=IMDS_VER_MIN):
 | ||||
| +    url = "{}?api-version={}".format(md_type.value, api_version)
 | ||||
|      headers = {"Metadata": "true"} | ||||
|      try: | ||||
|          response = readurl( | ||||
|              url, timeout=IMDS_TIMEOUT_IN_SECONDS, headers=headers, | ||||
|              retries=retries, exception_cb=retry_on_url_exc) | ||||
|      except Exception as e: | ||||
| -        report_diagnostic_event(
 | ||||
| -            'Ignoring IMDS instance metadata. '
 | ||||
| -            'Get metadata from IMDS failed: %s' % e,
 | ||||
| -            logger_func=LOG.warning)
 | ||||
| -        return {}
 | ||||
| +        # pylint:disable=no-member
 | ||||
| +        if isinstance(e, UrlError) and e.code == 400:
 | ||||
| +            raise
 | ||||
| +        else:
 | ||||
| +            report_diagnostic_event(
 | ||||
| +                'Ignoring IMDS instance metadata. '
 | ||||
| +                'Get metadata from IMDS failed: %s' % e,
 | ||||
| +                logger_func=LOG.warning)
 | ||||
| +            return {}
 | ||||
|      try: | ||||
|          from json.decoder import JSONDecodeError | ||||
|          json_decode_error = JSONDecodeError | ||||
| diff --git a/tests/unittests/test_datasource/test_azure.py b/tests/unittests/test_datasource/test_azure.py
 | ||||
| index f597c723..dedebeb1 100644
 | ||||
| --- a/tests/unittests/test_datasource/test_azure.py
 | ||||
| +++ b/tests/unittests/test_datasource/test_azure.py
 | ||||
| @@ -408,7 +408,9 @@ class TestGetMetadataFromIMDS(HttprettyTestCase):
 | ||||
|   | ||||
|      def setUp(self): | ||||
|          super(TestGetMetadataFromIMDS, self).setUp() | ||||
| -        self.network_md_url = dsaz.IMDS_URL + "instance?api-version=2019-06-01"
 | ||||
| +        self.network_md_url = "{}/instance?api-version=2019-06-01".format(
 | ||||
| +            dsaz.IMDS_URL
 | ||||
| +        )
 | ||||
|   | ||||
|      @mock.patch(MOCKPATH + 'readurl') | ||||
|      @mock.patch(MOCKPATH + 'EphemeralDHCPv4', autospec=True) | ||||
| @@ -518,7 +520,7 @@ class TestGetMetadataFromIMDS(HttprettyTestCase):
 | ||||
|          """Return empty dict when IMDS network metadata is absent.""" | ||||
|          httpretty.register_uri( | ||||
|              httpretty.GET, | ||||
| -            dsaz.IMDS_URL + 'instance?api-version=2017-12-01',
 | ||||
| +            dsaz.IMDS_URL + '/instance?api-version=2017-12-01',
 | ||||
|              body={}, status=404) | ||||
|   | ||||
|          m_net_is_up.return_value = True  # skips dhcp | ||||
| @@ -1877,6 +1879,40 @@ scbus-1 on xpt0 bus 0
 | ||||
|          ssh_keys = dsrc.get_public_ssh_keys() | ||||
|          self.assertEqual(ssh_keys, ['key2']) | ||||
|   | ||||
| +    @mock.patch(MOCKPATH + 'get_metadata_from_imds')
 | ||||
| +    def test_imds_api_version_wanted_nonexistent(
 | ||||
| +            self,
 | ||||
| +            m_get_metadata_from_imds):
 | ||||
| +        def get_metadata_from_imds_side_eff(*args, **kwargs):
 | ||||
| +            if kwargs['api_version'] == dsaz.IMDS_VER_WANT:
 | ||||
| +                raise url_helper.UrlError("No IMDS version", code=400)
 | ||||
| +            return NETWORK_METADATA
 | ||||
| +        m_get_metadata_from_imds.side_effect = get_metadata_from_imds_side_eff
 | ||||
| +        sys_cfg = {'datasource': {'Azure': {'apply_network_config': True}}}
 | ||||
| +        odata = {'HostName': "myhost", 'UserName': "myuser"}
 | ||||
| +        data = {
 | ||||
| +            'ovfcontent': construct_valid_ovf_env(data=odata),
 | ||||
| +            'sys_cfg': sys_cfg
 | ||||
| +        }
 | ||||
| +        dsrc = self._get_ds(data)
 | ||||
| +        dsrc.get_data()
 | ||||
| +        self.assertIsNotNone(dsrc.metadata)
 | ||||
| +        self.assertTrue(dsrc.failed_desired_api_version)
 | ||||
| +
 | ||||
| +    @mock.patch(
 | ||||
| +        MOCKPATH + 'get_metadata_from_imds', return_value=NETWORK_METADATA)
 | ||||
| +    def test_imds_api_version_wanted_exists(self, m_get_metadata_from_imds):
 | ||||
| +        sys_cfg = {'datasource': {'Azure': {'apply_network_config': True}}}
 | ||||
| +        odata = {'HostName': "myhost", 'UserName': "myuser"}
 | ||||
| +        data = {
 | ||||
| +            'ovfcontent': construct_valid_ovf_env(data=odata),
 | ||||
| +            'sys_cfg': sys_cfg
 | ||||
| +        }
 | ||||
| +        dsrc = self._get_ds(data)
 | ||||
| +        dsrc.get_data()
 | ||||
| +        self.assertIsNotNone(dsrc.metadata)
 | ||||
| +        self.assertFalse(dsrc.failed_desired_api_version)
 | ||||
| +
 | ||||
|   | ||||
|  class TestAzureBounce(CiTestCase): | ||||
|   | ||||
| @@ -2657,7 +2693,7 @@ class TestPreprovisioningHotAttachNics(CiTestCase):
 | ||||
|      @mock.patch(MOCKPATH + 'DataSourceAzure.wait_for_link_up') | ||||
|      @mock.patch('cloudinit.sources.helpers.netlink.wait_for_nic_attach_event') | ||||
|      @mock.patch('cloudinit.sources.net.find_fallback_nic') | ||||
| -    @mock.patch(MOCKPATH + 'get_metadata_from_imds')
 | ||||
| +    @mock.patch(MOCKPATH + 'DataSourceAzure.get_imds_data_with_api_fallback')
 | ||||
|      @mock.patch(MOCKPATH + 'EphemeralDHCPv4') | ||||
|      @mock.patch(MOCKPATH + 'DataSourceAzure._wait_for_nic_detach') | ||||
|      @mock.patch('os.path.isfile') | ||||
| -- 
 | ||||
| 2.27.0 | ||||
| 
 | ||||
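The patch above wires an IMDS api-version fallback into the Azure datasource: it first asks IMDS for the newer api-version and, if the service answers 400 (version not supported), records the failure and falls back to the minimum version. A minimal standalone sketch of that pattern, with illustrative helper names and version strings rather than the patched cloud-init code:

    # Sketch only: try the preferred IMDS api-version first, fall back to
    # the guaranteed-available one on HTTP 400 ("api-version not supported").
    import urllib.error
    import urllib.request

    IMDS_VER_MIN = "2019-06-01"    # illustrative minimum api-version
    IMDS_VER_WANT = "2020-09-01"   # illustrative preferred api-version

    def fetch_imds(url, api_version, timeout=2):
        req = urllib.request.Request(
            "{}?api-version={}".format(url, api_version),
            headers={"Metadata": "true"})
        return urllib.request.urlopen(req, timeout=timeout).read()

    def fetch_imds_with_fallback(url):
        try:
            return fetch_imds(url, IMDS_VER_WANT)
        except urllib.error.HTTPError as err:
            if err.code != 400:
                raise
        # Preferred version rejected; use the minimum api-version instead.
        return fetch_imds(url, IMDS_VER_MIN)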
| @ -0,0 +1,38 @@ | ||||
| From b9c6c6c88d16685475bb9c8f0de3c765bd5303fa Mon Sep 17 00:00:00 2001 | ||||
| From: Eduardo Otubo <otubo@redhat.com> | ||||
| Date: Thu, 17 Feb 2022 15:01:41 +0100 | ||||
| Subject: [PATCH 2/3] Adding _netdev to the default mount configuration | ||||
| 
 | ||||
| RH-Author: Eduardo Otubo <otubo@redhat.com> | ||||
| RH-MergeRequest: 21: Adding _netdev to the default mount configuration | ||||
| RH-Commit: [1/1] 250860a24db396a5088d207d6526a0028ac73eb3 (otubo/cloud-init-src) | ||||
| RH-Bugzilla: 1998445 | ||||
| RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com> | ||||
| RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| 
 | ||||
| Add the _netdev option to RHEL's default mount configuration as well. | ||||
| 
 | ||||
| rhbz: 1998445 | ||||
| x-downstream-only: yes | ||||
| 
 | ||||
| Signed-off-by: Eduardo Otubo <otubo@redhat.com> | ||||
| ---
 | ||||
|  rhel/cloud.cfg | 2 +- | ||||
|  1 file changed, 1 insertion(+), 1 deletion(-) | ||||
| 
 | ||||
| diff --git a/rhel/cloud.cfg b/rhel/cloud.cfg
 | ||||
| index cbee197a..75d5c84b 100644
 | ||||
| --- a/rhel/cloud.cfg
 | ||||
| +++ b/rhel/cloud.cfg
 | ||||
| @@ -4,7 +4,7 @@ users:
 | ||||
|  disable_root: 1 | ||||
|  ssh_pwauth:   0 | ||||
|   | ||||
| -mount_default_fields: [~, ~, 'auto', 'defaults,nofail,x-systemd.requires=cloud-init.service', '0', '2']
 | ||||
| +mount_default_fields: [~, ~, 'auto', 'defaults,nofail,x-systemd.requires=cloud-init.service,_netdev', '0', '2']
 | ||||
|  resize_rootfs_tmp: /dev | ||||
|  ssh_deletekeys:   1 | ||||
|  ssh_genkeytypes:  ['rsa', 'ecdsa', 'ed25519'] | ||||
| -- 
 | ||||
| 2.27.0 | ||||
| 
 | ||||
| @ -0,0 +1,397 @@ | ||||
| From 68f058e8d20a499f74bc78af8e0c6a90ca57ae20 Mon Sep 17 00:00:00 2001 | ||||
| From: Thomas Stringer <thstring@microsoft.com> | ||||
| Date: Mon, 26 Apr 2021 09:41:38 -0400 | ||||
| Subject: [PATCH 5/7] Azure: Retrieve username and hostname from IMDS (#865) | ||||
| 
 | ||||
| RH-Author: Eduardo Otubo <otubo@redhat.com> | ||||
| RH-MergeRequest: 18: Add support for userdata on Azure from IMDS | ||||
| RH-Commit: [5/7] 6a768d31e63e5f00dae0fad2712a7618d62b0879 (otubo/cloud-init-src) | ||||
| RH-Bugzilla: 2042351 | ||||
| RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com> | ||||
| RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| 
 | ||||
| This change allows us to retrieve the username and hostname from | ||||
| IMDS instead of having to rely on the mounted OVF. | ||||
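A rough sketch of pulling those osProfile values out of the IMDS compute metadata; the dictionary shape follows the tests below, and the helper name is illustrative:

    # Illustrative extraction of osProfile fields from an IMDS metadata dict.
    def _os_profile_field(imds_md, field):
        try:
            return imds_md["compute"]["osProfile"][field]
        except KeyError:
            return None

    imds_md = {
        "compute": {
            "osProfile": {
                "adminUsername": "username1",
                "computerName": "hostname1",
                "disablePasswordAuthentication": "true",
            }
        }
    }
    print(_os_profile_field(imds_md, "adminUsername"))  # -> username1
    print(_os_profile_field(imds_md, "computerName"))   # -> hostname1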
| ---
 | ||||
|  cloudinit/sources/DataSourceAzure.py          | 149 ++++++++++++++---- | ||||
|  tests/unittests/test_datasource/test_azure.py |  87 +++++++++- | ||||
|  2 files changed, 205 insertions(+), 31 deletions(-) | ||||
| 
 | ||||
| diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
 | ||||
| index 39e67c4f..6d7954ee 100755
 | ||||
| --- a/cloudinit/sources/DataSourceAzure.py
 | ||||
| +++ b/cloudinit/sources/DataSourceAzure.py
 | ||||
| @@ -5,6 +5,7 @@
 | ||||
|  # This file is part of cloud-init. See LICENSE file for license information. | ||||
|   | ||||
|  import base64 | ||||
| +from collections import namedtuple
 | ||||
|  import contextlib | ||||
|  import crypt | ||||
|  from functools import partial | ||||
| @@ -25,6 +26,7 @@ from cloudinit.net import device_driver
 | ||||
|  from cloudinit.net.dhcp import EphemeralDHCPv4 | ||||
|  from cloudinit import sources | ||||
|  from cloudinit.sources.helpers import netlink | ||||
| +from cloudinit import ssh_util
 | ||||
|  from cloudinit import subp | ||||
|  from cloudinit.url_helper import UrlError, readurl, retry_on_url_exc | ||||
|  from cloudinit import util | ||||
| @@ -80,7 +82,12 @@ AGENT_SEED_DIR = '/var/lib/waagent'
 | ||||
|  IMDS_TIMEOUT_IN_SECONDS = 2 | ||||
|  IMDS_URL = "http://169.254.169.254/metadata" | ||||
|  IMDS_VER_MIN = "2019-06-01" | ||||
| -IMDS_VER_WANT = "2020-09-01"
 | ||||
| +IMDS_VER_WANT = "2020-10-01"
 | ||||
| +
 | ||||
| +
 | ||||
| +# This holds SSH key data including if the source was
 | ||||
| +# from IMDS, as well as the SSH key data itself.
 | ||||
| +SSHKeys = namedtuple("SSHKeys", ("keys_from_imds", "ssh_keys"))
 | ||||
|   | ||||
|   | ||||
|  class metadata_type(Enum): | ||||
| @@ -391,6 +398,8 @@ class DataSourceAzure(sources.DataSource):
 | ||||
|          """Return the subplatform metadata source details.""" | ||||
|          if self.seed.startswith('/dev'): | ||||
|              subplatform_type = 'config-disk' | ||||
| +        elif self.seed.lower() == 'imds':
 | ||||
| +            subplatform_type = 'imds'
 | ||||
|          else: | ||||
|              subplatform_type = 'seed-dir' | ||||
|          return '%s (%s)' % (subplatform_type, self.seed) | ||||
| @@ -433,9 +442,11 @@ class DataSourceAzure(sources.DataSource):
 | ||||
|   | ||||
|          found = None | ||||
|          reprovision = False | ||||
| +        ovf_is_accessible = True
 | ||||
|          reprovision_after_nic_attach = False | ||||
|          for cdev in candidates: | ||||
|              try: | ||||
| +                LOG.debug("cdev: %s", cdev)
 | ||||
|                  if cdev == "IMDS": | ||||
|                      ret = None | ||||
|                      reprovision = True | ||||
| @@ -462,8 +473,18 @@ class DataSourceAzure(sources.DataSource):
 | ||||
|                  raise sources.InvalidMetaDataException(msg) | ||||
|              except util.MountFailedError: | ||||
|                  report_diagnostic_event( | ||||
| -                    '%s was not mountable' % cdev, logger_func=LOG.warning)
 | ||||
| -                continue
 | ||||
| +                    '%s was not mountable' % cdev, logger_func=LOG.debug)
 | ||||
| +                cdev = 'IMDS'
 | ||||
| +                ovf_is_accessible = False
 | ||||
| +                empty_md = {'local-hostname': ''}
 | ||||
| +                empty_cfg = dict(
 | ||||
| +                    system_info=dict(
 | ||||
| +                        default_user=dict(
 | ||||
| +                            name=''
 | ||||
| +                        )
 | ||||
| +                    )
 | ||||
| +                )
 | ||||
| +                ret = (empty_md, '', empty_cfg, {})
 | ||||
|   | ||||
|              report_diagnostic_event("Found provisioning metadata in %s" % cdev, | ||||
|                                      logger_func=LOG.debug) | ||||
| @@ -490,6 +511,10 @@ class DataSourceAzure(sources.DataSource):
 | ||||
|                  self.fallback_interface, | ||||
|                  retries=10 | ||||
|              ) | ||||
| +            if not imds_md and not ovf_is_accessible:
 | ||||
| +                msg = 'No OVF or IMDS available'
 | ||||
| +                report_diagnostic_event(msg)
 | ||||
| +                raise sources.InvalidMetaDataException(msg)
 | ||||
|              (md, userdata_raw, cfg, files) = ret | ||||
|              self.seed = cdev | ||||
|              crawled_data.update({ | ||||
| @@ -498,6 +523,21 @@ class DataSourceAzure(sources.DataSource):
 | ||||
|                  'metadata': util.mergemanydict( | ||||
|                      [md, {'imds': imds_md}]), | ||||
|                  'userdata_raw': userdata_raw}) | ||||
| +            imds_username = _username_from_imds(imds_md)
 | ||||
| +            imds_hostname = _hostname_from_imds(imds_md)
 | ||||
| +            imds_disable_password = _disable_password_from_imds(imds_md)
 | ||||
| +            if imds_username:
 | ||||
| +                LOG.debug('Username retrieved from IMDS: %s', imds_username)
 | ||||
| +                cfg['system_info']['default_user']['name'] = imds_username
 | ||||
| +            if imds_hostname:
 | ||||
| +                LOG.debug('Hostname retrieved from IMDS: %s', imds_hostname)
 | ||||
| +                crawled_data['metadata']['local-hostname'] = imds_hostname
 | ||||
| +            if imds_disable_password:
 | ||||
| +                LOG.debug(
 | ||||
| +                    'Disable password retrieved from IMDS: %s',
 | ||||
| +                    imds_disable_password
 | ||||
| +                )
 | ||||
| +                crawled_data['metadata']['disable_password'] = imds_disable_password  # noqa: E501
 | ||||
|              found = cdev | ||||
|   | ||||
|              report_diagnostic_event( | ||||
| @@ -676,6 +716,13 @@ class DataSourceAzure(sources.DataSource):
 | ||||
|   | ||||
|      @azure_ds_telemetry_reporter | ||||
|      def get_public_ssh_keys(self): | ||||
| +        """
 | ||||
| +        Retrieve public SSH keys.
 | ||||
| +        """
 | ||||
| +
 | ||||
| +        return self._get_public_ssh_keys_and_source().ssh_keys
 | ||||
| +
 | ||||
| +    def _get_public_ssh_keys_and_source(self):
 | ||||
|          """ | ||||
|          Try to get the ssh keys from IMDS first, and if that fails | ||||
|          (i.e. IMDS is unavailable) then fallback to getting the ssh | ||||
| @@ -685,30 +732,50 @@ class DataSourceAzure(sources.DataSource):
 | ||||
|          advantage, so this is a strong preference. But we must keep | ||||
|          OVF as a second option for environments that don't have IMDS. | ||||
|          """ | ||||
| +
 | ||||
|          LOG.debug('Retrieving public SSH keys') | ||||
|          ssh_keys = [] | ||||
| +        keys_from_imds = True
 | ||||
| +        LOG.debug('Attempting to get SSH keys from IMDS')
 | ||||
|          try: | ||||
| -            raise KeyError(
 | ||||
| -                "Not using public SSH keys from IMDS"
 | ||||
| -            )
 | ||||
| -            # pylint:disable=unreachable
 | ||||
|              ssh_keys = [ | ||||
|                  public_key['keyData'] | ||||
|                  for public_key | ||||
|                  in self.metadata['imds']['compute']['publicKeys'] | ||||
|              ] | ||||
| -            LOG.debug('Retrieved SSH keys from IMDS')
 | ||||
| +            for key in ssh_keys:
 | ||||
| +                if not _key_is_openssh_formatted(key=key):
 | ||||
| +                    keys_from_imds = False
 | ||||
| +                    break
 | ||||
| +
 | ||||
| +            if not keys_from_imds:
 | ||||
| +                log_msg = 'Keys not in OpenSSH format, using OVF'
 | ||||
| +            else:
 | ||||
| +                log_msg = 'Retrieved {} keys from IMDS'.format(
 | ||||
| +                    len(ssh_keys)
 | ||||
| +                    if ssh_keys is not None
 | ||||
| +                    else 0
 | ||||
| +                )
 | ||||
|          except KeyError: | ||||
|              log_msg = 'Unable to get keys from IMDS, falling back to OVF' | ||||
| +            keys_from_imds = False
 | ||||
| +        finally:
 | ||||
|              report_diagnostic_event(log_msg, logger_func=LOG.debug) | ||||
| +
 | ||||
| +        if not keys_from_imds:
 | ||||
| +            LOG.debug('Attempting to get SSH keys from OVF')
 | ||||
|              try: | ||||
|                  ssh_keys = self.metadata['public-keys'] | ||||
| -                LOG.debug('Retrieved keys from OVF')
 | ||||
| +                log_msg = 'Retrieved {} keys from OVF'.format(len(ssh_keys))
 | ||||
|              except KeyError: | ||||
|                  log_msg = 'No keys available from OVF' | ||||
| +            finally:
 | ||||
|                  report_diagnostic_event(log_msg, logger_func=LOG.debug) | ||||
|   | ||||
| -        return ssh_keys
 | ||||
| +        return SSHKeys(
 | ||||
| +            keys_from_imds=keys_from_imds,
 | ||||
| +            ssh_keys=ssh_keys
 | ||||
| +        )
 | ||||
|   | ||||
|      def get_config_obj(self): | ||||
|          return self.cfg | ||||
| @@ -1325,30 +1392,21 @@ class DataSourceAzure(sources.DataSource):
 | ||||
|          self.bounce_network_with_azure_hostname() | ||||
|   | ||||
|          pubkey_info = None | ||||
| -        try:
 | ||||
| -            raise KeyError(
 | ||||
| -                "Not using public SSH keys from IMDS"
 | ||||
| -            )
 | ||||
| -            # pylint:disable=unreachable
 | ||||
| -            public_keys = self.metadata['imds']['compute']['publicKeys']
 | ||||
| -            LOG.debug(
 | ||||
| -                'Successfully retrieved %s key(s) from IMDS',
 | ||||
| -                len(public_keys)
 | ||||
| -                if public_keys is not None
 | ||||
| +        ssh_keys_and_source = self._get_public_ssh_keys_and_source()
 | ||||
| +
 | ||||
| +        if not ssh_keys_and_source.keys_from_imds:
 | ||||
| +            pubkey_info = self.cfg.get('_pubkeys', None)
 | ||||
| +            log_msg = 'Retrieved {} fingerprints from OVF'.format(
 | ||||
| +                len(pubkey_info)
 | ||||
| +                if pubkey_info is not None
 | ||||
|                  else 0 | ||||
|              ) | ||||
| -        except KeyError:
 | ||||
| -            LOG.debug(
 | ||||
| -                'Unable to retrieve SSH keys from IMDS during '
 | ||||
| -                'negotiation, falling back to OVF'
 | ||||
| -            )
 | ||||
| -            pubkey_info = self.cfg.get('_pubkeys', None)
 | ||||
| +            report_diagnostic_event(log_msg, logger_func=LOG.debug)
 | ||||
|   | ||||
|          metadata_func = partial(get_metadata_from_fabric, | ||||
|                                  fallback_lease_file=self. | ||||
|                                  dhclient_lease_file, | ||||
| -                                pubkey_info=pubkey_info,
 | ||||
| -                                iso_dev=self.iso_dev)
 | ||||
| +                                pubkey_info=pubkey_info)
 | ||||
|   | ||||
|          LOG.debug("negotiating with fabric via agent command %s", | ||||
|                    self.ds_cfg['agent_command']) | ||||
| @@ -1404,6 +1462,41 @@ class DataSourceAzure(sources.DataSource):
 | ||||
|          return self.metadata.get('imds', {}).get('compute', {}).get('location') | ||||
|   | ||||
|   | ||||
| +def _username_from_imds(imds_data):
 | ||||
| +    try:
 | ||||
| +        return imds_data['compute']['osProfile']['adminUsername']
 | ||||
| +    except KeyError:
 | ||||
| +        return None
 | ||||
| +
 | ||||
| +
 | ||||
| +def _hostname_from_imds(imds_data):
 | ||||
| +    try:
 | ||||
| +        return imds_data['compute']['osProfile']['computerName']
 | ||||
| +    except KeyError:
 | ||||
| +        return None
 | ||||
| +
 | ||||
| +
 | ||||
| +def _disable_password_from_imds(imds_data):
 | ||||
| +    try:
 | ||||
| +        return imds_data['compute']['osProfile']['disablePasswordAuthentication'] == 'true'  # noqa: E501
 | ||||
| +    except KeyError:
 | ||||
| +        return None
 | ||||
| +
 | ||||
| +
 | ||||
| +def _key_is_openssh_formatted(key):
 | ||||
| +    """
 | ||||
| +    Validate whether or not the key is OpenSSH-formatted.
 | ||||
| +    """
 | ||||
| +
 | ||||
| +    parser = ssh_util.AuthKeyLineParser()
 | ||||
| +    try:
 | ||||
| +        akl = parser.parse(key)
 | ||||
| +    except TypeError:
 | ||||
| +        return False
 | ||||
| +
 | ||||
| +    return akl.keytype is not None
 | ||||
| +
 | ||||
| +
 | ||||
|  def _partitions_on_device(devpath, maxnum=16): | ||||
|      # return a list of tuples (ptnum, path) for each part on devpath | ||||
|      for suff in ("-part", "p", ""): | ||||
| diff --git a/tests/unittests/test_datasource/test_azure.py b/tests/unittests/test_datasource/test_azure.py
 | ||||
| index 320fa857..d9817d84 100644
 | ||||
| --- a/tests/unittests/test_datasource/test_azure.py
 | ||||
| +++ b/tests/unittests/test_datasource/test_azure.py
 | ||||
| @@ -108,7 +108,7 @@ NETWORK_METADATA = {
 | ||||
|          "zone": "", | ||||
|          "publicKeys": [ | ||||
|              { | ||||
| -                "keyData": "key1",
 | ||||
| +                "keyData": "ssh-rsa key1",
 | ||||
|                  "path": "path1" | ||||
|              } | ||||
|          ] | ||||
| @@ -1761,8 +1761,29 @@ scbus-1 on xpt0 bus 0
 | ||||
|          dsrc.get_data() | ||||
|          dsrc.setup(True) | ||||
|          ssh_keys = dsrc.get_public_ssh_keys() | ||||
| -        # Temporarily alter this test so that SSH public keys
 | ||||
| -        # from IMDS are *not* going to be in use to fix a regression.
 | ||||
| +        self.assertEqual(ssh_keys, ["ssh-rsa key1"])
 | ||||
| +        self.assertEqual(m_parse_certificates.call_count, 0)
 | ||||
| +
 | ||||
| +    @mock.patch(
 | ||||
| +        'cloudinit.sources.helpers.azure.OpenSSLManager.parse_certificates')
 | ||||
| +    @mock.patch(MOCKPATH + 'get_metadata_from_imds')
 | ||||
| +    def test_get_public_ssh_keys_with_no_openssh_format(
 | ||||
| +            self,
 | ||||
| +            m_get_metadata_from_imds,
 | ||||
| +            m_parse_certificates):
 | ||||
| +        imds_data = copy.deepcopy(NETWORK_METADATA)
 | ||||
| +        imds_data['compute']['publicKeys'][0]['keyData'] = 'no-openssh-format'
 | ||||
| +        m_get_metadata_from_imds.return_value = imds_data
 | ||||
| +        sys_cfg = {'datasource': {'Azure': {'apply_network_config': True}}}
 | ||||
| +        odata = {'HostName': "myhost", 'UserName': "myuser"}
 | ||||
| +        data = {
 | ||||
| +            'ovfcontent': construct_valid_ovf_env(data=odata),
 | ||||
| +            'sys_cfg': sys_cfg
 | ||||
| +        }
 | ||||
| +        dsrc = self._get_ds(data)
 | ||||
| +        dsrc.get_data()
 | ||||
| +        dsrc.setup(True)
 | ||||
| +        ssh_keys = dsrc.get_public_ssh_keys()
 | ||||
|          self.assertEqual(ssh_keys, []) | ||||
|          self.assertEqual(m_parse_certificates.call_count, 0) | ||||
|   | ||||
| @@ -1818,6 +1839,66 @@ scbus-1 on xpt0 bus 0
 | ||||
|          self.assertIsNotNone(dsrc.metadata) | ||||
|          self.assertFalse(dsrc.failed_desired_api_version) | ||||
|   | ||||
| +    @mock.patch(MOCKPATH + 'get_metadata_from_imds')
 | ||||
| +    def test_hostname_from_imds(self, m_get_metadata_from_imds):
 | ||||
| +        sys_cfg = {'datasource': {'Azure': {'apply_network_config': True}}}
 | ||||
| +        odata = {'HostName': "myhost", 'UserName': "myuser"}
 | ||||
| +        data = {
 | ||||
| +            'ovfcontent': construct_valid_ovf_env(data=odata),
 | ||||
| +            'sys_cfg': sys_cfg
 | ||||
| +        }
 | ||||
| +        imds_data_with_os_profile = copy.deepcopy(NETWORK_METADATA)
 | ||||
| +        imds_data_with_os_profile["compute"]["osProfile"] = dict(
 | ||||
| +            adminUsername="username1",
 | ||||
| +            computerName="hostname1",
 | ||||
| +            disablePasswordAuthentication="true"
 | ||||
| +        )
 | ||||
| +        m_get_metadata_from_imds.return_value = imds_data_with_os_profile
 | ||||
| +        dsrc = self._get_ds(data)
 | ||||
| +        dsrc.get_data()
 | ||||
| +        self.assertEqual(dsrc.metadata["local-hostname"], "hostname1")
 | ||||
| +
 | ||||
| +    @mock.patch(MOCKPATH + 'get_metadata_from_imds')
 | ||||
| +    def test_username_from_imds(self, m_get_metadata_from_imds):
 | ||||
| +        sys_cfg = {'datasource': {'Azure': {'apply_network_config': True}}}
 | ||||
| +        odata = {'HostName': "myhost", 'UserName': "myuser"}
 | ||||
| +        data = {
 | ||||
| +            'ovfcontent': construct_valid_ovf_env(data=odata),
 | ||||
| +            'sys_cfg': sys_cfg
 | ||||
| +        }
 | ||||
| +        imds_data_with_os_profile = copy.deepcopy(NETWORK_METADATA)
 | ||||
| +        imds_data_with_os_profile["compute"]["osProfile"] = dict(
 | ||||
| +            adminUsername="username1",
 | ||||
| +            computerName="hostname1",
 | ||||
| +            disablePasswordAuthentication="true"
 | ||||
| +        )
 | ||||
| +        m_get_metadata_from_imds.return_value = imds_data_with_os_profile
 | ||||
| +        dsrc = self._get_ds(data)
 | ||||
| +        dsrc.get_data()
 | ||||
| +        self.assertEqual(
 | ||||
| +            dsrc.cfg["system_info"]["default_user"]["name"],
 | ||||
| +            "username1"
 | ||||
| +        )
 | ||||
| +
 | ||||
| +    @mock.patch(MOCKPATH + 'get_metadata_from_imds')
 | ||||
| +    def test_disable_password_from_imds(self, m_get_metadata_from_imds):
 | ||||
| +        sys_cfg = {'datasource': {'Azure': {'apply_network_config': True}}}
 | ||||
| +        odata = {'HostName': "myhost", 'UserName': "myuser"}
 | ||||
| +        data = {
 | ||||
| +            'ovfcontent': construct_valid_ovf_env(data=odata),
 | ||||
| +            'sys_cfg': sys_cfg
 | ||||
| +        }
 | ||||
| +        imds_data_with_os_profile = copy.deepcopy(NETWORK_METADATA)
 | ||||
| +        imds_data_with_os_profile["compute"]["osProfile"] = dict(
 | ||||
| +            adminUsername="username1",
 | ||||
| +            computerName="hostname1",
 | ||||
| +            disablePasswordAuthentication="true"
 | ||||
| +        )
 | ||||
| +        m_get_metadata_from_imds.return_value = imds_data_with_os_profile
 | ||||
| +        dsrc = self._get_ds(data)
 | ||||
| +        dsrc.get_data()
 | ||||
| +        self.assertTrue(dsrc.metadata["disable_password"])
 | ||||
| +
 | ||||
|   | ||||
|  class TestAzureBounce(CiTestCase): | ||||
|   | ||||
| -- 
 | ||||
| 2.27.0 | ||||
| 
 | ||||
| @ -0,0 +1,315 @@ | ||||
| From 816fe5c2e6d5dcc68f292092b00b2acfbc4c8e88 Mon Sep 17 00:00:00 2001 | ||||
| From: aswinrajamannar <39812128+aswinrajamannar@users.noreply.github.com> | ||||
| Date: Mon, 26 Apr 2021 07:28:39 -0700 | ||||
| Subject: [PATCH 6/7] Azure: Retry net metadata during nic attach for | ||||
|  non-timeout errs (#878) | ||||
| 
 | ||||
| RH-Author: Eduardo Otubo <otubo@redhat.com> | ||||
| RH-MergeRequest: 18: Add support for userdata on Azure from IMDS | ||||
| RH-Commit: [6/7] 794cd340644260bb43a7c8582a8067f403b9842d (otubo/cloud-init-src) | ||||
| RH-Bugzilla: 2042351 | ||||
| RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com> | ||||
| RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| 
 | ||||
| When network interfaces are hot-attached to the VM, attempting to get | ||||
| network metadata might return 410 (or 500, 503 etc) because the info | ||||
| is not yet available. In those cases, we retry getting the metadata | ||||
| before giving up. The only case where we stop retrying and move on to | ||||
| wait for more nic attach events is when the call times out despite the | ||||
| retries, which means the interface is most likely not the primary | ||||
| interface. | ||||
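A condensed sketch of that retry policy as an exception callback, assuming the requests library is installed; the two-argument callback shape mirrors the hunk below, everything else is illustrative:

    # Retry indefinitely on non-timeout errors (410/500/503 etc.), but give
    # up after a bounded number of timeouts, since repeated timeouts suggest
    # the nic is not the primary one.
    import requests

    def make_network_metadata_exc_cb(max_timeouts=10):
        state = {"timeouts": 0}

        def exc_cb(msg, exc):
            """Return True to keep retrying, False to give up."""
            cause = getattr(exc, "cause", None)
            if cause is not None and isinstance(cause, requests.Timeout):
                state["timeouts"] += 1
                return state["timeouts"] <= max_timeouts
            # Any other failure: IMDS is just temporarily unavailable.
            return True

        return exc_cb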
| ---
 | ||||
|  cloudinit/sources/DataSourceAzure.py          | 65 +++++++++++-- | ||||
|  tests/unittests/test_datasource/test_azure.py | 95 ++++++++++++++++--- | ||||
|  2 files changed, 140 insertions(+), 20 deletions(-) | ||||
| 
 | ||||
| diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
 | ||||
| index 6d7954ee..d0be6d84 100755
 | ||||
| --- a/cloudinit/sources/DataSourceAzure.py
 | ||||
| +++ b/cloudinit/sources/DataSourceAzure.py
 | ||||
| @@ -17,6 +17,7 @@ from time import sleep
 | ||||
|  from xml.dom import minidom | ||||
|  import xml.etree.ElementTree as ET | ||||
|  from enum import Enum | ||||
| +import requests
 | ||||
|   | ||||
|  from cloudinit import dmi | ||||
|  from cloudinit import log as logging | ||||
| @@ -665,7 +666,9 @@ class DataSourceAzure(sources.DataSource):
 | ||||
|              self, | ||||
|              fallback_nic, | ||||
|              retries, | ||||
| -            md_type=metadata_type.compute):
 | ||||
| +            md_type=metadata_type.compute,
 | ||||
| +            exc_cb=retry_on_url_exc,
 | ||||
| +            infinite=False):
 | ||||
|          """ | ||||
|          Wrapper for get_metadata_from_imds so that we can have flexibility | ||||
|          in which IMDS api-version we use. If a particular instance of IMDS | ||||
| @@ -685,7 +688,8 @@ class DataSourceAzure(sources.DataSource):
 | ||||
|                          fallback_nic=fallback_nic, | ||||
|                          retries=0, | ||||
|                          md_type=md_type, | ||||
| -                        api_version=IMDS_VER_WANT
 | ||||
| +                        api_version=IMDS_VER_WANT,
 | ||||
| +                        exc_cb=exc_cb
 | ||||
|                      ) | ||||
|                  except UrlError as err: | ||||
|                      LOG.info( | ||||
| @@ -708,7 +712,9 @@ class DataSourceAzure(sources.DataSource):
 | ||||
|              fallback_nic=fallback_nic, | ||||
|              retries=retries, | ||||
|              md_type=md_type, | ||||
| -            api_version=IMDS_VER_MIN
 | ||||
| +            api_version=IMDS_VER_MIN,
 | ||||
| +            exc_cb=exc_cb,
 | ||||
| +            infinite=infinite
 | ||||
|          ) | ||||
|   | ||||
|      def device_name_to_device(self, name): | ||||
| @@ -938,6 +944,9 @@ class DataSourceAzure(sources.DataSource):
 | ||||
|          is_primary = False | ||||
|          expected_nic_count = -1 | ||||
|          imds_md = None | ||||
| +        metadata_poll_count = 0
 | ||||
| +        metadata_logging_threshold = 1
 | ||||
| +        metadata_timeout_count = 0
 | ||||
|   | ||||
|          # For now, only a VM's primary NIC can contact IMDS and WireServer. If | ||||
|          # DHCP fails for a NIC, we have no mechanism to determine if the NIC is | ||||
| @@ -962,14 +971,48 @@ class DataSourceAzure(sources.DataSource):
 | ||||
|                                      % (ifname, e), logger_func=LOG.error) | ||||
|              raise | ||||
|   | ||||
| +        # Retry polling network metadata for a limited duration only when the
 | ||||
| +        # calls fail due to timeout. This is because the platform drops packets
 | ||||
| +        # going towards IMDS when it is not a primary nic. If the calls fail
 | ||||
| +        # due to other issues like 410, 503 etc, then it means we are primary
 | ||||
| +        # but IMDS service is unavailable at the moment. Retry indefinitely in
 | ||||
| +        # those cases since we cannot move on without the network metadata.
 | ||||
| +        def network_metadata_exc_cb(msg, exc):
 | ||||
| +            nonlocal metadata_timeout_count, metadata_poll_count
 | ||||
| +            nonlocal metadata_logging_threshold
 | ||||
| +
 | ||||
| +            metadata_poll_count = metadata_poll_count + 1
 | ||||
| +
 | ||||
| +            # Log when needed but back off exponentially to avoid exploding
 | ||||
| +            # the log file.
 | ||||
| +            if metadata_poll_count >= metadata_logging_threshold:
 | ||||
| +                metadata_logging_threshold *= 2
 | ||||
| +                report_diagnostic_event(
 | ||||
| +                    "Ran into exception when attempting to reach %s "
 | ||||
| +                    "after %d polls." % (msg, metadata_poll_count),
 | ||||
| +                    logger_func=LOG.error)
 | ||||
| +
 | ||||
| +                if isinstance(exc, UrlError):
 | ||||
| +                    report_diagnostic_event("poll IMDS with %s failed. "
 | ||||
| +                                            "Exception: %s and code: %s" %
 | ||||
| +                                            (msg, exc.cause, exc.code),
 | ||||
| +                                            logger_func=LOG.error)
 | ||||
| +
 | ||||
| +            if exc.cause and isinstance(exc.cause, requests.Timeout):
 | ||||
| +                metadata_timeout_count = metadata_timeout_count + 1
 | ||||
| +                return (metadata_timeout_count <= 10)
 | ||||
| +            return True
 | ||||
| +
 | ||||
|          # Primary nic detection will be optimized in the future. The fact that | ||||
|          # primary nic is being attached first helps here. Otherwise each nic | ||||
|          # could add several seconds of delay. | ||||
|          try: | ||||
|              imds_md = self.get_imds_data_with_api_fallback( | ||||
|                  ifname, | ||||
| -                5,
 | ||||
| -                metadata_type.network
 | ||||
| +                0,
 | ||||
| +                metadata_type.network,
 | ||||
| +                network_metadata_exc_cb,
 | ||||
| +                True
 | ||||
|              ) | ||||
|          except Exception as e: | ||||
|              LOG.warning( | ||||
| @@ -2139,7 +2182,9 @@ def _generate_network_config_from_fallback_config() -> dict:
 | ||||
|  def get_metadata_from_imds(fallback_nic, | ||||
|                             retries, | ||||
|                             md_type=metadata_type.compute, | ||||
| -                           api_version=IMDS_VER_MIN):
 | ||||
| +                           api_version=IMDS_VER_MIN,
 | ||||
| +                           exc_cb=retry_on_url_exc,
 | ||||
| +                           infinite=False):
 | ||||
|      """Query Azure's instance metadata service, returning a dictionary. | ||||
|   | ||||
|      If network is not up, setup ephemeral dhcp on fallback_nic to talk to the | ||||
| @@ -2158,7 +2203,7 @@ def get_metadata_from_imds(fallback_nic,
 | ||||
|      kwargs = {'logfunc': LOG.debug, | ||||
|                'msg': 'Crawl of Azure Instance Metadata Service (IMDS)', | ||||
|                'func': _get_metadata_from_imds, | ||||
| -              'args': (retries, md_type, api_version,)}
 | ||||
| +              'args': (retries, exc_cb, md_type, api_version, infinite)}
 | ||||
|      if net.is_up(fallback_nic): | ||||
|          return util.log_time(**kwargs) | ||||
|      else: | ||||
| @@ -2176,14 +2221,16 @@ def get_metadata_from_imds(fallback_nic,
 | ||||
|  @azure_ds_telemetry_reporter | ||||
|  def _get_metadata_from_imds( | ||||
|          retries, | ||||
| +        exc_cb,
 | ||||
|          md_type=metadata_type.compute, | ||||
| -        api_version=IMDS_VER_MIN):
 | ||||
| +        api_version=IMDS_VER_MIN,
 | ||||
| +        infinite=False):
 | ||||
|      url = "{}?api-version={}".format(md_type.value, api_version) | ||||
|      headers = {"Metadata": "true"} | ||||
|      try: | ||||
|          response = readurl( | ||||
|              url, timeout=IMDS_TIMEOUT_IN_SECONDS, headers=headers, | ||||
| -            retries=retries, exception_cb=retry_on_url_exc)
 | ||||
| +            retries=retries, exception_cb=exc_cb, infinite=infinite)
 | ||||
|      except Exception as e: | ||||
|          # pylint:disable=no-member | ||||
|          if isinstance(e, UrlError) and e.code == 400: | ||||
| diff --git a/tests/unittests/test_datasource/test_azure.py b/tests/unittests/test_datasource/test_azure.py
 | ||||
| index d9817d84..c4a8e08d 100644
 | ||||
| --- a/tests/unittests/test_datasource/test_azure.py
 | ||||
| +++ b/tests/unittests/test_datasource/test_azure.py
 | ||||
| @@ -448,7 +448,7 @@ class TestGetMetadataFromIMDS(HttprettyTestCase):
 | ||||
|              "http://169.254.169.254/metadata/instance?api-version=" | ||||
|              "2019-06-01", exception_cb=mock.ANY, | ||||
|              headers=mock.ANY, retries=mock.ANY, | ||||
| -            timeout=mock.ANY)
 | ||||
| +            timeout=mock.ANY, infinite=False)
 | ||||
|   | ||||
|      @mock.patch(MOCKPATH + 'readurl', autospec=True) | ||||
|      @mock.patch(MOCKPATH + 'EphemeralDHCPv4') | ||||
| @@ -467,7 +467,7 @@ class TestGetMetadataFromIMDS(HttprettyTestCase):
 | ||||
|              "http://169.254.169.254/metadata/instance/network?api-version=" | ||||
|              "2019-06-01", exception_cb=mock.ANY, | ||||
|              headers=mock.ANY, retries=mock.ANY, | ||||
| -            timeout=mock.ANY)
 | ||||
| +            timeout=mock.ANY, infinite=False)
 | ||||
|   | ||||
|      @mock.patch(MOCKPATH + 'readurl', autospec=True) | ||||
|      @mock.patch(MOCKPATH + 'EphemeralDHCPv4') | ||||
| @@ -486,7 +486,7 @@ class TestGetMetadataFromIMDS(HttprettyTestCase):
 | ||||
|              "http://169.254.169.254/metadata/instance?api-version=" | ||||
|              "2019-06-01", exception_cb=mock.ANY, | ||||
|              headers=mock.ANY, retries=mock.ANY, | ||||
| -            timeout=mock.ANY)
 | ||||
| +            timeout=mock.ANY, infinite=False)
 | ||||
|   | ||||
|      @mock.patch(MOCKPATH + 'readurl', autospec=True) | ||||
|      @mock.patch(MOCKPATH + 'EphemeralDHCPv4WithReporting', autospec=True) | ||||
| @@ -511,7 +511,7 @@ class TestGetMetadataFromIMDS(HttprettyTestCase):
 | ||||
|          m_readurl.assert_called_with( | ||||
|              self.network_md_url, exception_cb=mock.ANY, | ||||
|              headers={'Metadata': 'true'}, retries=2, | ||||
| -            timeout=dsaz.IMDS_TIMEOUT_IN_SECONDS)
 | ||||
| +            timeout=dsaz.IMDS_TIMEOUT_IN_SECONDS, infinite=False)
 | ||||
|   | ||||
|      @mock.patch('cloudinit.url_helper.time.sleep') | ||||
|      @mock.patch(MOCKPATH + 'net.is_up', autospec=True) | ||||
| @@ -2694,15 +2694,22 @@ class TestPreprovisioningHotAttachNics(CiTestCase):
 | ||||
|   | ||||
|          def nic_attach_ret(nl_sock, nics_found): | ||||
|              nonlocal m_attach_call_count | ||||
| -            if m_attach_call_count == 0:
 | ||||
| -                m_attach_call_count = m_attach_call_count + 1
 | ||||
| +            m_attach_call_count = m_attach_call_count + 1
 | ||||
| +            if m_attach_call_count == 1:
 | ||||
|                  return "eth0" | ||||
| -            return "eth1"
 | ||||
| +            elif m_attach_call_count == 2:
 | ||||
| +                return "eth1"
 | ||||
| +            raise RuntimeError("Must have found primary nic by now.")
 | ||||
| +
 | ||||
| +        # Simulate two NICs by adding the same one twice.
 | ||||
| +        md = {
 | ||||
| +            "interface": [
 | ||||
| +                IMDS_NETWORK_METADATA['interface'][0],
 | ||||
| +                IMDS_NETWORK_METADATA['interface'][0]
 | ||||
| +            ]
 | ||||
| +        }
 | ||||
|   | ||||
| -        def network_metadata_ret(ifname, retries, type):
 | ||||
| -            # Simulate two NICs by adding the same one twice.
 | ||||
| -            md = IMDS_NETWORK_METADATA
 | ||||
| -            md['interface'].append(md['interface'][0])
 | ||||
| +        def network_metadata_ret(ifname, retries, type, exc_cb, infinite):
 | ||||
|              if ifname == "eth0": | ||||
|                  return md | ||||
|              raise requests.Timeout('Fake connection timeout') | ||||
| @@ -2724,6 +2731,72 @@ class TestPreprovisioningHotAttachNics(CiTestCase):
 | ||||
|          self.assertEqual(1, m_imds.call_count) | ||||
|          self.assertEqual(2, m_link_up.call_count) | ||||
|   | ||||
| +    @mock.patch(MOCKPATH + 'DataSourceAzure.get_imds_data_with_api_fallback')
 | ||||
| +    @mock.patch(MOCKPATH + 'EphemeralDHCPv4')
 | ||||
| +    def test_check_if_nic_is_primary_retries_on_failures(
 | ||||
| +            self, m_dhcpv4, m_imds):
 | ||||
| +        """Retry polling for network metadata on all failures except timeout"""
 | ||||
| +        dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths)
 | ||||
| +        lease = {
 | ||||
| +            'interface': 'eth9', 'fixed-address': '192.168.2.9',
 | ||||
| +            'routers': '192.168.2.1', 'subnet-mask': '255.255.255.0',
 | ||||
| +            'unknown-245': '624c3620'}
 | ||||
| +
 | ||||
| +        eth0Retries = []
 | ||||
| +        eth1Retries = []
 | ||||
| +        # Simulate two NICs by adding the same one twice.
 | ||||
| +        md = {
 | ||||
| +            "interface": [
 | ||||
| +                IMDS_NETWORK_METADATA['interface'][0],
 | ||||
| +                IMDS_NETWORK_METADATA['interface'][0]
 | ||||
| +            ]
 | ||||
| +        }
 | ||||
| +
 | ||||
| +        def network_metadata_ret(ifname, retries, type, exc_cb, infinite):
 | ||||
| +            nonlocal eth0Retries, eth1Retries
 | ||||
| +
 | ||||
| +            # Simulate readurl functionality with retries and
 | ||||
| +            # exception callbacks so that the callback logic can be
 | ||||
| +            # validated.
 | ||||
| +            if ifname == "eth0":
 | ||||
| +                cause = requests.HTTPError()
 | ||||
| +                for _ in range(0, 15):
 | ||||
| +                    error = url_helper.UrlError(cause=cause, code=410)
 | ||||
| +                    eth0Retries.append(exc_cb("No goal state.", error))
 | ||||
| +            else:
 | ||||
| +                cause = requests.Timeout('Fake connection timeout')
 | ||||
| +                for _ in range(0, 10):
 | ||||
| +                    error = url_helper.UrlError(cause=cause)
 | ||||
| +                    eth1Retries.append(exc_cb("Connection timeout", error))
 | ||||
| +                # Should stop retrying after 10 retries
 | ||||
| +                eth1Retries.append(exc_cb("Connection timeout", error))
 | ||||
| +                raise cause
 | ||||
| +            return md
 | ||||
| +
 | ||||
| +        m_imds.side_effect = network_metadata_ret
 | ||||
| +
 | ||||
| +        dhcp_ctx = mock.MagicMock(lease=lease)
 | ||||
| +        dhcp_ctx.obtain_lease.return_value = lease
 | ||||
| +        m_dhcpv4.return_value = dhcp_ctx
 | ||||
| +
 | ||||
| +        is_primary, expected_nic_count = dsa._check_if_nic_is_primary("eth0")
 | ||||
| +        self.assertEqual(True, is_primary)
 | ||||
| +        self.assertEqual(2, expected_nic_count)
 | ||||
| +
 | ||||
| +        # All Eth0 errors are non-timeout errors. So we should have been
 | ||||
| +        # retrying indefinitely until success.
 | ||||
| +        for i in eth0Retries:
 | ||||
| +            self.assertTrue(i)
 | ||||
| +
 | ||||
| +        is_primary, expected_nic_count = dsa._check_if_nic_is_primary("eth1")
 | ||||
| +        self.assertEqual(False, is_primary)
 | ||||
| +
 | ||||
| +        # All Eth1 errors are timeout errors. Retry happens for a max of 10 and
 | ||||
| +        # then we should have moved on assuming it is not the primary nic.
 | ||||
| +        for i in range(0, 10):
 | ||||
| +            self.assertTrue(eth1Retries[i])
 | ||||
| +        self.assertFalse(eth1Retries[10])
 | ||||
| +
 | ||||
|      @mock.patch('cloudinit.distros.networking.LinuxNetworking.try_set_link_up') | ||||
|      def test_wait_for_link_up_returns_if_already_up( | ||||
|              self, m_is_link_up): | ||||
| -- 
 | ||||
| 2.27.0 | ||||
| 
 | ||||
| @ -0,0 +1,129 @@ | ||||
| From 0def71378dc7abf682727c600b696f7313cdcf60 Mon Sep 17 00:00:00 2001 | ||||
| From: Anh Vo <anhvo@microsoft.com> | ||||
| Date: Tue, 27 Apr 2021 13:40:59 -0400 | ||||
| Subject: [PATCH 7/7] Azure: adding support for consuming userdata from IMDS | ||||
|  (#884) | ||||
| 
 | ||||
| RH-Author: Eduardo Otubo <otubo@redhat.com> | ||||
| RH-MergeRequest: 18: Add support for userdata on Azure from IMDS | ||||
| RH-Commit: [7/7] 1e7ab925162ed9ef2c9b5b9f5c6d5e6ec6e623dd (otubo/cloud-init-src) | ||||
| RH-Bugzilla: 2042351 | ||||
| RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com> | ||||
| RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
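The hunk below prefers custom data from the OVF and otherwise decodes the base64-encoded userData field from IMDS. A minimal sketch of that decode step; the field names come from the hunk, the helper itself is illustrative:

    # Decode IMDS-provided userdata, which is base64 encoded and may
    # contain whitespace; return None when absent or undecodable.
    import base64

    def decode_imds_userdata(imds_md):
        userdata_b64 = imds_md.get("compute", {}).get("userData")
        if not userdata_b64:
            return None
        try:
            return base64.b64decode("".join(userdata_b64.split()))
        except Exception:
            return None  # the real code logs a diagnostic warning instead

    encoded = base64.b64encode(b"hello").decode()
    print(decode_imds_userdata({"compute": {"userData": encoded}}))  # b'hello'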
| ---
 | ||||
|  cloudinit/sources/DataSourceAzure.py          | 23 ++++++++- | ||||
|  tests/unittests/test_datasource/test_azure.py | 50 +++++++++++++++++++ | ||||
|  2 files changed, 72 insertions(+), 1 deletion(-) | ||||
| 
 | ||||
| diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
 | ||||
| index d0be6d84..a66f023d 100755
 | ||||
| --- a/cloudinit/sources/DataSourceAzure.py
 | ||||
| +++ b/cloudinit/sources/DataSourceAzure.py
 | ||||
| @@ -83,7 +83,7 @@ AGENT_SEED_DIR = '/var/lib/waagent'
 | ||||
|  IMDS_TIMEOUT_IN_SECONDS = 2 | ||||
|  IMDS_URL = "http://169.254.169.254/metadata" | ||||
|  IMDS_VER_MIN = "2019-06-01" | ||||
| -IMDS_VER_WANT = "2020-10-01"
 | ||||
| +IMDS_VER_WANT = "2021-01-01"
 | ||||
|   | ||||
|   | ||||
|  # This holds SSH key data including if the source was | ||||
| @@ -539,6 +539,20 @@ class DataSourceAzure(sources.DataSource):
 | ||||
|                      imds_disable_password | ||||
|                  ) | ||||
|                  crawled_data['metadata']['disable_password'] = imds_disable_password  # noqa: E501 | ||||
| +
 | ||||
| +            # only use userdata from imds if OVF did not provide custom data
 | ||||
| +            # userdata provided by IMDS is always base64 encoded
 | ||||
| +            if not userdata_raw:
 | ||||
| +                imds_userdata = _userdata_from_imds(imds_md)
 | ||||
| +                if imds_userdata:
 | ||||
| +                    LOG.debug("Retrieved userdata from IMDS")
 | ||||
| +                    try:
 | ||||
| +                        crawled_data['userdata_raw'] = base64.b64decode(
 | ||||
| +                            ''.join(imds_userdata.split()))
 | ||||
| +                    except Exception:
 | ||||
| +                        report_diagnostic_event(
 | ||||
| +                            "Bad userdata in IMDS",
 | ||||
| +                            logger_func=LOG.warning)
 | ||||
|              found = cdev | ||||
|   | ||||
|              report_diagnostic_event( | ||||
| @@ -1512,6 +1526,13 @@ def _username_from_imds(imds_data):
 | ||||
|          return None | ||||
|   | ||||
|   | ||||
| +def _userdata_from_imds(imds_data):
 | ||||
| +    try:
 | ||||
| +        return imds_data['compute']['userData']
 | ||||
| +    except KeyError:
 | ||||
| +        return None
 | ||||
| +
 | ||||
| +
 | ||||
|  def _hostname_from_imds(imds_data): | ||||
|      try: | ||||
|          return imds_data['compute']['osProfile']['computerName'] | ||||
| diff --git a/tests/unittests/test_datasource/test_azure.py b/tests/unittests/test_datasource/test_azure.py
 | ||||
| index c4a8e08d..f8433690 100644
 | ||||
| --- a/tests/unittests/test_datasource/test_azure.py
 | ||||
| +++ b/tests/unittests/test_datasource/test_azure.py
 | ||||
| @@ -1899,6 +1899,56 @@ scbus-1 on xpt0 bus 0
 | ||||
|          dsrc.get_data() | ||||
|          self.assertTrue(dsrc.metadata["disable_password"]) | ||||
|   | ||||
| +    @mock.patch(MOCKPATH + 'get_metadata_from_imds')
 | ||||
| +    def test_userdata_from_imds(self, m_get_metadata_from_imds):
 | ||||
| +        sys_cfg = {'datasource': {'Azure': {'apply_network_config': True}}}
 | ||||
| +        odata = {'HostName': "myhost", 'UserName': "myuser"}
 | ||||
| +        data = {
 | ||||
| +            'ovfcontent': construct_valid_ovf_env(data=odata),
 | ||||
| +            'sys_cfg': sys_cfg
 | ||||
| +        }
 | ||||
| +        userdata = "userdataImds"
 | ||||
| +        imds_data = copy.deepcopy(NETWORK_METADATA)
 | ||||
| +        imds_data["compute"]["osProfile"] = dict(
 | ||||
| +            adminUsername="username1",
 | ||||
| +            computerName="hostname1",
 | ||||
| +            disablePasswordAuthentication="true",
 | ||||
| +        )
 | ||||
| +        imds_data["compute"]["userData"] = b64e(userdata)
 | ||||
| +        m_get_metadata_from_imds.return_value = imds_data
 | ||||
| +        dsrc = self._get_ds(data)
 | ||||
| +        ret = dsrc.get_data()
 | ||||
| +        self.assertTrue(ret)
 | ||||
| +        self.assertEqual(dsrc.userdata_raw, userdata.encode('utf-8'))
 | ||||
| +
 | ||||
| +    @mock.patch(MOCKPATH + 'get_metadata_from_imds')
 | ||||
| +    def test_userdata_from_imds_with_customdata_from_OVF(
 | ||||
| +            self, m_get_metadata_from_imds):
 | ||||
| +        userdataOVF = "userdataOVF"
 | ||||
| +        odata = {
 | ||||
| +            'HostName': "myhost", 'UserName': "myuser",
 | ||||
| +            'UserData': {'text': b64e(userdataOVF), 'encoding': 'base64'}
 | ||||
| +        }
 | ||||
| +        sys_cfg = {'datasource': {'Azure': {'apply_network_config': True}}}
 | ||||
| +        data = {
 | ||||
| +            'ovfcontent': construct_valid_ovf_env(data=odata),
 | ||||
| +            'sys_cfg': sys_cfg
 | ||||
| +        }
 | ||||
| +
 | ||||
| +        userdataImds = "userdataImds"
 | ||||
| +        imds_data = copy.deepcopy(NETWORK_METADATA)
 | ||||
| +        imds_data["compute"]["osProfile"] = dict(
 | ||||
| +            adminUsername="username1",
 | ||||
| +            computerName="hostname1",
 | ||||
| +            disablePasswordAuthentication="true",
 | ||||
| +        )
 | ||||
| +        imds_data["compute"]["userData"] = b64e(userdataImds)
 | ||||
| +        m_get_metadata_from_imds.return_value = imds_data
 | ||||
| +        dsrc = self._get_ds(data)
 | ||||
| +        ret = dsrc.get_data()
 | ||||
| +        self.assertTrue(ret)
 | ||||
| +        self.assertEqual(dsrc.userdata_raw, userdataOVF.encode('utf-8'))
 | ||||
| +
 | ||||
|   | ||||
|  class TestAzureBounce(CiTestCase): | ||||
|   | ||||
| -- 
 | ||||
| 2.27.0 | ||||
| 
 | ||||
| @ -0,0 +1,177 @@ | ||||
| From 2ece71923a37a5e1107c80f091a1cc620943fbf2 Mon Sep 17 00:00:00 2001 | ||||
| From: Anh Vo <anhvo@microsoft.com> | ||||
| Date: Fri, 23 Apr 2021 10:18:05 -0400 | ||||
| Subject: [PATCH 4/7] Azure: eject the provisioning iso before reporting ready | ||||
|  (#861) | ||||
| 
 | ||||
| RH-Author: Eduardo Otubo <otubo@redhat.com> | ||||
| RH-MergeRequest: 18: Add support for userdata on Azure from IMDS | ||||
| RH-Commit: [4/7] 63e379a4406530c0c15c733f8eee35421079508b (otubo/cloud-init-src) | ||||
| RH-Bugzilla: 2042351 | ||||
| RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com> | ||||
| RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| 
 | ||||
| Due to Hyper-V implementation details, ejecting the provisioning iso is more | ||||
| efficient when performed from within the guest, so the code attempts a | ||||
| best-effort ejection. A failed ejection does not prevent reporting ready: if | ||||
| the in-guest ejection succeeds, the later ejection from the platform is a | ||||
| no-op, and if it fails, the platform will still eject the iso itself. | ||||
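Best-effort ejection amounts to running the eject and never letting a failure propagate. A sketch using subprocess directly (the patched code goes through cloud-init's subp wrapper and assumes an eject binary is present):

    # Best-effort eject: try, log, and swallow any failure so that
    # reporting ready can still proceed.
    import logging
    import subprocess

    LOG = logging.getLogger(__name__)

    def eject_iso(iso_dev):
        try:
            subprocess.run(["eject", iso_dev], check=True)
        except Exception as e:
            LOG.debug("Failed ejecting the provisioning iso: %s", e)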
| ---
 | ||||
|  cloudinit/sources/DataSourceAzure.py          | 22 +++++++++++++++--- | ||||
|  cloudinit/sources/helpers/azure.py            | 23 ++++++++++++++++--- | ||||
|  .../test_datasource/test_azure_helper.py      | 13 +++++++++-- | ||||
|  3 files changed, 50 insertions(+), 8 deletions(-) | ||||
| 
 | ||||
| diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
 | ||||
| index 020b7006..39e67c4f 100755
 | ||||
| --- a/cloudinit/sources/DataSourceAzure.py
 | ||||
| +++ b/cloudinit/sources/DataSourceAzure.py
 | ||||
| @@ -332,6 +332,7 @@ class DataSourceAzure(sources.DataSource):
 | ||||
|      dsname = 'Azure' | ||||
|      _negotiated = False | ||||
|      _metadata_imds = sources.UNSET | ||||
| +    _ci_pkl_version = 1
 | ||||
|   | ||||
|      def __init__(self, sys_cfg, distro, paths): | ||||
|          sources.DataSource.__init__(self, sys_cfg, distro, paths) | ||||
| @@ -346,8 +347,13 @@ class DataSourceAzure(sources.DataSource):
 | ||||
|          # Regenerate network config new_instance boot and every boot | ||||
|          self.update_events['network'].add(EventType.BOOT) | ||||
|          self._ephemeral_dhcp_ctx = None | ||||
| -
 | ||||
|          self.failed_desired_api_version = False | ||||
| +        self.iso_dev = None
 | ||||
| +
 | ||||
| +    def _unpickle(self, ci_pkl_version: int) -> None:
 | ||||
| +        super()._unpickle(ci_pkl_version)
 | ||||
| +        if "iso_dev" not in self.__dict__:
 | ||||
| +            self.iso_dev = None
 | ||||
|   | ||||
|      def __str__(self): | ||||
|          root = sources.DataSource.__str__(self) | ||||
| @@ -459,6 +465,13 @@ class DataSourceAzure(sources.DataSource):
 | ||||
|                      '%s was not mountable' % cdev, logger_func=LOG.warning) | ||||
|                  continue | ||||
|   | ||||
| +            report_diagnostic_event("Found provisioning metadata in %s" % cdev,
 | ||||
| +                                    logger_func=LOG.debug)
 | ||||
| +
 | ||||
| +            # save the iso device for ejection before reporting ready
 | ||||
| +            if cdev.startswith("/dev"):
 | ||||
| +                self.iso_dev = cdev
 | ||||
| +
 | ||||
|              perform_reprovision = reprovision or self._should_reprovision(ret) | ||||
|              perform_reprovision_after_nic_attach = ( | ||||
|                  reprovision_after_nic_attach or | ||||
| @@ -1226,7 +1239,9 @@ class DataSourceAzure(sources.DataSource):
 | ||||
|          @return: The success status of sending the ready signal. | ||||
|          """ | ||||
|          try: | ||||
| -            get_metadata_from_fabric(None, lease['unknown-245'])
 | ||||
| +            get_metadata_from_fabric(fallback_lease_file=None,
 | ||||
| +                                     dhcp_opts=lease['unknown-245'],
 | ||||
| +                                     iso_dev=self.iso_dev)
 | ||||
|              return True | ||||
|          except Exception as e: | ||||
|              report_diagnostic_event( | ||||
| @@ -1332,7 +1347,8 @@ class DataSourceAzure(sources.DataSource):
 | ||||
|          metadata_func = partial(get_metadata_from_fabric, | ||||
|                                  fallback_lease_file=self. | ||||
|                                  dhclient_lease_file, | ||||
| -                                pubkey_info=pubkey_info)
 | ||||
| +                                pubkey_info=pubkey_info,
 | ||||
| +                                iso_dev=self.iso_dev)
 | ||||
|   | ||||
|          LOG.debug("negotiating with fabric via agent command %s", | ||||
|                    self.ds_cfg['agent_command']) | ||||
| diff --git a/cloudinit/sources/helpers/azure.py b/cloudinit/sources/helpers/azure.py
 | ||||
| index 03e7156b..ad476076 100755
 | ||||
| --- a/cloudinit/sources/helpers/azure.py
 | ||||
| +++ b/cloudinit/sources/helpers/azure.py
 | ||||
| @@ -865,7 +865,19 @@ class WALinuxAgentShim:
 | ||||
|          return endpoint_ip_address | ||||
|   | ||||
|      @azure_ds_telemetry_reporter | ||||
| -    def register_with_azure_and_fetch_data(self, pubkey_info=None) -> dict:
 | ||||
| +    def eject_iso(self, iso_dev) -> None:
 | ||||
| +        try:
 | ||||
| +            LOG.debug("Ejecting the provisioning iso")
 | ||||
| +            subp.subp(['eject', iso_dev])
 | ||||
| +        except Exception as e:
 | ||||
| +            report_diagnostic_event(
 | ||||
| +                "Failed ejecting the provisioning iso: %s" % e,
 | ||||
| +                logger_func=LOG.debug)
 | ||||
| +
 | ||||
| +    @azure_ds_telemetry_reporter
 | ||||
| +    def register_with_azure_and_fetch_data(self,
 | ||||
| +                                           pubkey_info=None,
 | ||||
| +                                           iso_dev=None) -> dict:
 | ||||
|          """Gets the VM's GoalState from Azure, uses the GoalState information | ||||
|          to report ready/send the ready signal/provisioning complete signal to | ||||
|          Azure, and then uses pubkey_info to filter and obtain the user's | ||||
| @@ -891,6 +903,10 @@ class WALinuxAgentShim:
 | ||||
|              ssh_keys = self._get_user_pubkeys(goal_state, pubkey_info) | ||||
|          health_reporter = GoalStateHealthReporter( | ||||
|              goal_state, self.azure_endpoint_client, self.endpoint) | ||||
| +
 | ||||
| +        if iso_dev is not None:
 | ||||
| +            self.eject_iso(iso_dev)
 | ||||
| +
 | ||||
|          health_reporter.send_ready_signal() | ||||
|          return {'public-keys': ssh_keys} | ||||
|   | ||||
| @@ -1046,11 +1062,12 @@ class WALinuxAgentShim:
 | ||||
|   | ||||
|  @azure_ds_telemetry_reporter | ||||
|  def get_metadata_from_fabric(fallback_lease_file=None, dhcp_opts=None, | ||||
| -                             pubkey_info=None):
 | ||||
| +                             pubkey_info=None, iso_dev=None):
 | ||||
|      shim = WALinuxAgentShim(fallback_lease_file=fallback_lease_file, | ||||
|                              dhcp_options=dhcp_opts) | ||||
|      try: | ||||
| -        return shim.register_with_azure_and_fetch_data(pubkey_info=pubkey_info)
 | ||||
| +        return shim.register_with_azure_and_fetch_data(
 | ||||
| +            pubkey_info=pubkey_info, iso_dev=iso_dev)
 | ||||
|      finally: | ||||
|          shim.clean_up() | ||||
|   | ||||
| diff --git a/tests/unittests/test_datasource/test_azure_helper.py b/tests/unittests/test_datasource/test_azure_helper.py
 | ||||
| index 63482c6c..552c7905 100644
 | ||||
| --- a/tests/unittests/test_datasource/test_azure_helper.py
 | ||||
| +++ b/tests/unittests/test_datasource/test_azure_helper.py
 | ||||
| @@ -1009,6 +1009,14 @@ class TestWALinuxAgentShim(CiTestCase):
 | ||||
|          self.GoalState.return_value.container_id = self.test_container_id | ||||
|          self.GoalState.return_value.instance_id = self.test_instance_id | ||||
|   | ||||
| +    def test_eject_iso_is_called(self):
 | ||||
| +        shim = wa_shim()
 | ||||
| +        with mock.patch.object(
 | ||||
| +            shim, 'eject_iso', autospec=True
 | ||||
| +        ) as m_eject_iso:
 | ||||
| +            shim.register_with_azure_and_fetch_data(iso_dev="/dev/sr0")
 | ||||
| +            m_eject_iso.assert_called_once_with("/dev/sr0")
 | ||||
| +
 | ||||
|      def test_http_client_does_not_use_certificate_for_report_ready(self): | ||||
|          shim = wa_shim() | ||||
|          shim.register_with_azure_and_fetch_data() | ||||
| @@ -1283,13 +1291,14 @@ class TestGetMetadataGoalStateXMLAndReportReadyToFabric(CiTestCase):
 | ||||
|   | ||||
|      def test_calls_shim_register_with_azure_and_fetch_data(self): | ||||
|          m_pubkey_info = mock.MagicMock() | ||||
| -        azure_helper.get_metadata_from_fabric(pubkey_info=m_pubkey_info)
 | ||||
| +        azure_helper.get_metadata_from_fabric(
 | ||||
| +            pubkey_info=m_pubkey_info, iso_dev="/dev/sr0")
 | ||||
|          self.assertEqual( | ||||
|              1, | ||||
|              self.m_shim.return_value | ||||
|                  .register_with_azure_and_fetch_data.call_count) | ||||
|          self.assertEqual( | ||||
| -            mock.call(pubkey_info=m_pubkey_info),
 | ||||
| +            mock.call(iso_dev="/dev/sr0", pubkey_info=m_pubkey_info),
 | ||||
|              self.m_shim.return_value | ||||
|                  .register_with_azure_and_fetch_data.call_args) | ||||
|   | ||||
| -- 
 | ||||
| 2.27.0 | ||||
| 
 | ||||
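The hunks above thread a new iso_dev argument from DataSourceAzure down to WALinuxAgentShim so the provisioning ISO can be ejected just before the ready signal is sent. A minimal sketch of the resulting call shape, assuming the keyword names shown in the diff; the wrapper function and the /dev/sr0 value are illustrative only:

    from cloudinit.sources.helpers.azure import get_metadata_from_fabric

    def negotiate_with_fabric(lease, iso_dev="/dev/sr0"):
        # iso_dev is forwarded to register_with_azure_and_fetch_data(),
        # which ejects the device right before send_ready_signal().
        return get_metadata_from_fabric(
            fallback_lease_file=None,
            dhcp_opts=lease['unknown-245'],
            iso_dev=iso_dev,
        )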
| @ -0,0 +1,90 @@ | ||||
| From 3ee42e6e6ca51b3fd0b6461f707d62c89d54e227 Mon Sep 17 00:00:00 2001 | ||||
| From: Johnson Shi <Johnson.Shi@microsoft.com> | ||||
| Date: Thu, 25 Mar 2021 07:20:10 -0700 | ||||
| Subject: [PATCH 2/7] Azure helper: Ensure Azure http handler sleeps between | ||||
|  retries (#842) | ||||
| 
 | ||||
| RH-Author: Eduardo Otubo <otubo@redhat.com> | ||||
| RH-MergeRequest: 18: Add support for userdata on Azure from IMDS | ||||
| RH-Commit: [2/7] 65672cdfe2265f32e6d3c440ba5a8accafdb6ca6 (otubo/cloud-init-src) | ||||
| RH-Bugzilla: 2042351 | ||||
| RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com> | ||||
| RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| 
 | ||||
| Ensure that the Azure helper's http handler sleeps a fixed duration | ||||
| between retry failure attempts. The http handler will sleep a fixed | ||||
| duration between failed attempts regardless of whether the attempt | ||||
| failed due to (1) request timing out or (2) instant failure (no | ||||
| timeout). | ||||
| 
 | ||||
| Due to certain platform issues, the http request to the Azure endpoint | ||||
| may instantly fail without reaching the http timeout duration. Without | ||||
| sleeping a fixed duration in between retry attempts, the http handler | ||||
| will loop through the max retry attempts quickly. This causes the | ||||
| communication between cloud-init and the Azure platform to be less | ||||
| resilient due to the short total duration if there is no sleep in | ||||
| between retries. | ||||
| ---
 | ||||
|  cloudinit/sources/helpers/azure.py                   |  2 ++ | ||||
|  tests/unittests/test_datasource/test_azure_helper.py | 11 +++++++++-- | ||||
|  2 files changed, 11 insertions(+), 2 deletions(-) | ||||
| 
 | ||||
| diff --git a/cloudinit/sources/helpers/azure.py b/cloudinit/sources/helpers/azure.py
 | ||||
| index d3055d08..03e7156b 100755
 | ||||
| --- a/cloudinit/sources/helpers/azure.py
 | ||||
| +++ b/cloudinit/sources/helpers/azure.py
 | ||||
| @@ -303,6 +303,7 @@ def http_with_retries(url, **kwargs) -> str:
 | ||||
|   | ||||
|      max_readurl_attempts = 240 | ||||
|      default_readurl_timeout = 5 | ||||
| +    sleep_duration_between_retries = 5
 | ||||
|      periodic_logging_attempts = 12 | ||||
|   | ||||
|      if 'timeout' not in kwargs: | ||||
| @@ -338,6 +339,7 @@ def http_with_retries(url, **kwargs) -> str:
 | ||||
|                      'attempt %d with exception: %s' % | ||||
|                      (url, attempt, e), | ||||
|                      logger_func=LOG.debug) | ||||
| +            time.sleep(sleep_duration_between_retries)
 | ||||
|   | ||||
|      raise exc | ||||
|   | ||||
| diff --git a/tests/unittests/test_datasource/test_azure_helper.py b/tests/unittests/test_datasource/test_azure_helper.py
 | ||||
| index b8899807..63482c6c 100644
 | ||||
| --- a/tests/unittests/test_datasource/test_azure_helper.py
 | ||||
| +++ b/tests/unittests/test_datasource/test_azure_helper.py
 | ||||
| @@ -384,6 +384,7 @@ class TestAzureHelperHttpWithRetries(CiTestCase):
 | ||||
|   | ||||
|      max_readurl_attempts = 240 | ||||
|      default_readurl_timeout = 5 | ||||
| +    sleep_duration_between_retries = 5
 | ||||
|      periodic_logging_attempts = 12 | ||||
|   | ||||
|      def setUp(self): | ||||
| @@ -394,8 +395,8 @@ class TestAzureHelperHttpWithRetries(CiTestCase):
 | ||||
|          self.m_readurl = patches.enter_context( | ||||
|              mock.patch.object( | ||||
|                  azure_helper.url_helper, 'readurl', mock.MagicMock())) | ||||
| -        patches.enter_context(
 | ||||
| -            mock.patch.object(azure_helper.time, 'sleep', mock.MagicMock()))
 | ||||
| +        self.m_sleep = patches.enter_context(
 | ||||
| +            mock.patch.object(azure_helper.time, 'sleep', autospec=True))
 | ||||
|   | ||||
|      def test_http_with_retries(self): | ||||
|          self.m_readurl.return_value = 'TestResp' | ||||
| @@ -438,6 +439,12 @@ class TestAzureHelperHttpWithRetries(CiTestCase):
 | ||||
|              self.m_readurl.call_count, | ||||
|              self.periodic_logging_attempts + 1) | ||||
|   | ||||
| +        # Ensure that cloud-init did sleep between each failed request
 | ||||
| +        self.assertEqual(
 | ||||
| +            self.m_sleep.call_count,
 | ||||
| +            self.periodic_logging_attempts)
 | ||||
| +        self.m_sleep.assert_called_with(self.sleep_duration_between_retries)
 | ||||
| +
 | ||||
|      def test_http_with_retries_long_delay_logs_periodic_failure_msg(self): | ||||
|          self.m_readurl.side_effect = \ | ||||
|              [SentinelException] * self.periodic_logging_attempts + \ | ||||
| -- 
 | ||||
| 2.27.0 | ||||
| 
 | ||||
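The change in the patch above amounts to sleeping a fixed interval between failed attempts, whether a request timed out or failed instantly. A standalone sketch of that retry loop, reusing the constants the patch adds (an illustration, not cloud-init's http_with_retries()):

    import time
    import urllib.request

    def fetch_with_retries(url, max_attempts=240, timeout=5, sleep_between=5):
        last_exc = None
        for attempt in range(1, max_attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.read()
            except Exception as exc:  # timeout and instant failure alike
                last_exc = exc
                # The fixed pause keeps the overall retry window long even
                # when requests fail immediately instead of timing out.
                time.sleep(sleep_between)
        raise last_exc

Without the sleep, 240 instant failures would exhaust the retry budget in well under a second, which is exactly the loss of resilience the commit message describes.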
							
								
								
									
SOURCES/0001-Add-initial-redhat-setup.patch continues above; next file:

SOURCES/ci-Change-netifaces-dependency-to-0.10.4-965.patch  (47 lines, Normal file)
							| @ -0,0 +1,47 @@ | ||||
| From 18138313e009a08592fe79c5e66d6eba8f027f19 Mon Sep 17 00:00:00 2001 | ||||
| From: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| Date: Fri, 14 Jan 2022 16:49:57 +0100 | ||||
| Subject: [PATCH 2/5] Change netifaces dependency to 0.10.4 (#965) | ||||
| 
 | ||||
| RH-Author: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| RH-MergeRequest: 17: Datasource for VMware | ||||
| RH-Commit: [2/5] 8688e8b955a7ee15cf66de0b2a242c7c418b7630 (eesposit/cloud-init-centos-) | ||||
| RH-Bugzilla: 2040090 | ||||
| RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com> | ||||
| RH-Acked-by: Eduardo Otubo <otubo@redhat.com> | ||||
| 
 | ||||
| commit b9d308b4d61d22bacc05bcae59819755975631f8 | ||||
| Author: Andrew Kutz <101085+akutz@users.noreply.github.com> | ||||
| Date:   Tue Aug 10 15:10:44 2021 -0500 | ||||
| 
 | ||||
|     Change netifaces dependency to 0.10.4 (#965) | ||||
| 
 | ||||
|     Change netifaces dependency to 0.10.4 | ||||
| 
 | ||||
|     Currently versions Ubuntu <=20.10 use netifaces 0.10.4 By requiring | ||||
|     netifaces 0.10.9, the VMware datasource omitted itself from cloud-init | ||||
|     on Ubuntu <=20.10. | ||||
| 
 | ||||
|     This patch changes the netifaces dependency to 0.10.4. While it is true | ||||
|     there are patches to netifaces post 0.10.4 that are desirable, testing | ||||
|     against the most common network configuration was performed to verify | ||||
|     the VMware datasource will still function with netifaces 0.10.4. | ||||
| 
 | ||||
| Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| ---
 | ||||
|  requirements.txt | 2 +- | ||||
|  1 file changed, 1 insertion(+), 1 deletion(-) | ||||
| 
 | ||||
| diff --git a/requirements.txt b/requirements.txt
 | ||||
| index 41d01d62..c4adc455 100644
 | ||||
| --- a/requirements.txt
 | ||||
| +++ b/requirements.txt
 | ||||
| @@ -40,4 +40,4 @@ jsonschema
 | ||||
|  # and still participate in instance-data by gathering the network in detail at | ||||
|  # runtime and merge that information into the metadata and repersist that to | ||||
|  # disk. | ||||
| -netifaces>=0.10.9
 | ||||
| +netifaces>=0.10.4
 | ||||
| -- 
 | ||||
| 2.27.0 | ||||
| 
 | ||||
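The relaxation above works because the VMware datasource only relies on netifaces calls that already existed in 0.10.4. A hedged example of that style of usage (illustrative; not the datasource's actual code):

    import netifaces

    def ipv4_addresses_by_interface():
        # interfaces() and ifaddresses() are available in netifaces 0.10.4,
        # so nothing here needs the newer 0.10.9 release.
        result = {}
        for iface in netifaces.interfaces():
            inet = netifaces.ifaddresses(iface).get(netifaces.AF_INET, [])
            if inet:
                result[iface] = inet[0].get('addr')
        return result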
							
								
								
									
SOURCES/ci-Datasource-for-VMware-953.patch  (2201 lines, Normal file; file diff suppressed because it is too large)

SOURCES/ci-Fix-IPv6-netmask-format-for-sysconfig-1215.patch  (474 lines, Normal file)
							| @ -0,0 +1,474 @@ | ||||
| From 290353d6df0b3bbbbcfa4f949f943388939ebc12 Mon Sep 17 00:00:00 2001 | ||||
| From: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| Date: Fri, 11 Feb 2022 14:57:40 +0100 | ||||
| Subject: [PATCH 1/3] Fix IPv6 netmask format for sysconfig (#1215) | ||||
| 
 | ||||
| RH-Author: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| RH-MergeRequest: 20: Fix IPv6 netmask format for sysconfig (#1215) | ||||
| RH-Commit: [1/1] 2eb7ac7c85e82c14f9a95b9baf1482ac987b1084 (eesposit/cloud-init-centos-) | ||||
| RH-Bugzilla: 2053546 | ||||
| RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com> | ||||
| RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com> | ||||
| 
 | ||||
| commit b97a30f0a05c1dea918c46ca9c05c869d15fe2d5 | ||||
| Author: Harald <hjensas@redhat.com> | ||||
| Date:   Tue Feb 8 15:49:00 2022 +0100 | ||||
| 
 | ||||
|     Fix IPv6 netmask format for sysconfig (#1215) | ||||
| 
 | ||||
|     This change converts the IPv6 netmask from the network_data.json[1] | ||||
|     format to the CIDR style, <IPv6_addr>/<prefix>. | ||||
| 
 | ||||
|     Using an IPv6 address like ffff:ffff:ffff:ffff:: does not work with | ||||
|     NetworkManager, nor networkscripts. | ||||
| 
 | ||||
|     NetworkManager will ignore the route, logging: | ||||
|       ifcfg-rh: ignoring invalid route at \ | ||||
|         "::/:: via fd00:fd00:fd00:2::fffe dev $DEV" \ | ||||
|         (/etc/sysconfig/network-scripts/route6-$DEV:3): \ | ||||
|         Argument for "::/::" is not ADDR/PREFIX format | ||||
| 
 | ||||
|     Similarly if using networkscripts, ip route fail with error: | ||||
|       Error: inet6 prefix is expected rather than \ | ||||
|         "fd00:fd00:fd00::/ffff:ffff:ffff:ffff::". | ||||
| 
 | ||||
|     Also a bit of refactoring ... | ||||
| 
 | ||||
|     cloudinit.net.sysconfig.Route.to_string: | ||||
|     * Move a couple of lines around to reduce repeated code. | ||||
|     * if "ADDRESS" not in key -> continute, so that the | ||||
|       code block following it can be de-indented. | ||||
|     cloudinit.net.network_state: | ||||
|     * Refactors the ipv4_mask_to_net_prefix, ipv6_mask_to_net_prefix | ||||
|       removes mask_to_net_prefix methods. Utilize ipaddress library to | ||||
|       do some of the heavy lifting. | ||||
| 
 | ||||
|     LP: #1959148 | ||||
| 
 | ||||
| Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| ---
 | ||||
|  cloudinit/net/__init__.py                     |   7 +- | ||||
|  cloudinit/net/network_state.py                | 103 +++++++----------- | ||||
|  cloudinit/net/sysconfig.py                    |  91 ++++++++++------ | ||||
|  cloudinit/sources/DataSourceOpenNebula.py     |   2 +- | ||||
|  .../sources/helpers/vmware/imc/config_nic.py  |   4 +- | ||||
|  tests/unittests/test_net.py                   |  78 ++++++++++++- | ||||
|  6 files changed, 176 insertions(+), 109 deletions(-) | ||||
| 
 | ||||
| diff --git a/cloudinit/net/__init__.py b/cloudinit/net/__init__.py
 | ||||
| index 4bdc1bda..91cb0627 100644
 | ||||
| --- a/cloudinit/net/__init__.py
 | ||||
| +++ b/cloudinit/net/__init__.py
 | ||||
| @@ -13,7 +13,7 @@ import re
 | ||||
|   | ||||
|  from cloudinit import subp | ||||
|  from cloudinit import util | ||||
| -from cloudinit.net.network_state import mask_to_net_prefix
 | ||||
| +from cloudinit.net.network_state import ipv4_mask_to_net_prefix
 | ||||
|  from cloudinit.url_helper import UrlError, readurl | ||||
|   | ||||
|  LOG = logging.getLogger(__name__) | ||||
| @@ -986,10 +986,11 @@ class EphemeralIPv4Network(object):
 | ||||
|                  'Cannot init network on {0} with {1}/{2} and bcast {3}'.format( | ||||
|                      interface, ip, prefix_or_mask, broadcast)) | ||||
|          try: | ||||
| -            self.prefix = mask_to_net_prefix(prefix_or_mask)
 | ||||
| +            self.prefix = ipv4_mask_to_net_prefix(prefix_or_mask)
 | ||||
|          except ValueError as e: | ||||
|              raise ValueError( | ||||
| -                'Cannot setup network: {0}'.format(e)
 | ||||
| +                "Cannot setup network, invalid prefix or "
 | ||||
| +                "netmask: {0}".format(e)
 | ||||
|              ) from e | ||||
|   | ||||
|          self.connectivity_url = connectivity_url | ||||
| diff --git a/cloudinit/net/network_state.py b/cloudinit/net/network_state.py
 | ||||
| index e8bf9e39..2768ef94 100644
 | ||||
| --- a/cloudinit/net/network_state.py
 | ||||
| +++ b/cloudinit/net/network_state.py
 | ||||
| @@ -6,6 +6,7 @@
 | ||||
|   | ||||
|  import copy | ||||
|  import functools | ||||
| +import ipaddress
 | ||||
|  import logging | ||||
|  import socket | ||||
|  import struct | ||||
| @@ -872,12 +873,18 @@ def _normalize_net_keys(network, address_keys=()):
 | ||||
|          try: | ||||
|              prefix = int(maybe_prefix) | ||||
|          except ValueError: | ||||
| -            # this supports input of <address>/255.255.255.0
 | ||||
| -            prefix = mask_to_net_prefix(maybe_prefix)
 | ||||
| -    elif netmask:
 | ||||
| -        prefix = mask_to_net_prefix(netmask)
 | ||||
| -    elif 'prefix' in net:
 | ||||
| -        prefix = int(net['prefix'])
 | ||||
| +            if ipv6:
 | ||||
| +                # this supports input of ffff:ffff:ffff::
 | ||||
| +                prefix = ipv6_mask_to_net_prefix(maybe_prefix)
 | ||||
| +            else:
 | ||||
| +                # this supports input of 255.255.255.0
 | ||||
| +                prefix = ipv4_mask_to_net_prefix(maybe_prefix)
 | ||||
| +    elif netmask and not ipv6:
 | ||||
| +        prefix = ipv4_mask_to_net_prefix(netmask)
 | ||||
| +    elif netmask and ipv6:
 | ||||
| +        prefix = ipv6_mask_to_net_prefix(netmask)
 | ||||
| +    elif "prefix" in net:
 | ||||
| +        prefix = int(net["prefix"])
 | ||||
|      else: | ||||
|          prefix = 64 if ipv6 else 24 | ||||
|   | ||||
| @@ -972,72 +979,42 @@ def ipv4_mask_to_net_prefix(mask):
 | ||||
|         str(24)         => 24 | ||||
|         "24"            => 24 | ||||
|      """ | ||||
| -    if isinstance(mask, int):
 | ||||
| -        return mask
 | ||||
| -    if isinstance(mask, str):
 | ||||
| -        try:
 | ||||
| -            return int(mask)
 | ||||
| -        except ValueError:
 | ||||
| -            pass
 | ||||
| -    else:
 | ||||
| -        raise TypeError("mask '%s' is not a string or int")
 | ||||
| -
 | ||||
| -    if '.' not in mask:
 | ||||
| -        raise ValueError("netmask '%s' does not contain a '.'" % mask)
 | ||||
| -
 | ||||
| -    toks = mask.split(".")
 | ||||
| -    if len(toks) != 4:
 | ||||
| -        raise ValueError("netmask '%s' had only %d parts" % (mask, len(toks)))
 | ||||
| -
 | ||||
| -    return sum([bin(int(x)).count('1') for x in toks])
 | ||||
| +    return ipaddress.ip_network(f"0.0.0.0/{mask}").prefixlen
 | ||||
|   | ||||
|   | ||||
|  def ipv6_mask_to_net_prefix(mask): | ||||
|      """Convert an ipv6 netmask (very uncommon) or prefix (64) to prefix. | ||||
|   | ||||
| -    If 'mask' is an integer or string representation of one then
 | ||||
| -    int(mask) will be returned.
 | ||||
| +    If the input is already an integer or a string representation of
 | ||||
| +    an integer, then int(mask) will be returned.
 | ||||
| +       "ffff:ffff:ffff::"  => 48
 | ||||
| +       "48"                => 48
 | ||||
|      """ | ||||
| -
 | ||||
| -    if isinstance(mask, int):
 | ||||
| -        return mask
 | ||||
| -    if isinstance(mask, str):
 | ||||
| -        try:
 | ||||
| -            return int(mask)
 | ||||
| -        except ValueError:
 | ||||
| -            pass
 | ||||
| -    else:
 | ||||
| -        raise TypeError("mask '%s' is not a string or int")
 | ||||
| -
 | ||||
| -    if ':' not in mask:
 | ||||
| -        raise ValueError("mask '%s' does not have a ':'")
 | ||||
| -
 | ||||
| -    bitCount = [0, 0x8000, 0xc000, 0xe000, 0xf000, 0xf800, 0xfc00, 0xfe00,
 | ||||
| -                0xff00, 0xff80, 0xffc0, 0xffe0, 0xfff0, 0xfff8, 0xfffc,
 | ||||
| -                0xfffe, 0xffff]
 | ||||
| -    prefix = 0
 | ||||
| -    for word in mask.split(':'):
 | ||||
| -        if not word or int(word, 16) == 0:
 | ||||
| -            break
 | ||||
| -        prefix += bitCount.index(int(word, 16))
 | ||||
| -
 | ||||
| -    return prefix
 | ||||
| -
 | ||||
| -
 | ||||
| -def mask_to_net_prefix(mask):
 | ||||
| -    """Return the network prefix for the netmask provided.
 | ||||
| -
 | ||||
| -    Supports ipv4 or ipv6 netmasks."""
 | ||||
|      try: | ||||
| -        # if 'mask' is a prefix that is an integer.
 | ||||
| -        # then just return it.
 | ||||
| -        return int(mask)
 | ||||
| +        # In the case the mask is already a prefix
 | ||||
| +        prefixlen = ipaddress.ip_network(f"::/{mask}").prefixlen
 | ||||
| +        return prefixlen
 | ||||
|      except ValueError: | ||||
| +        # ValueError means mask is an IPv6 address representation and need
 | ||||
| +        # conversion.
 | ||||
|          pass | ||||
| -    if is_ipv6_addr(mask):
 | ||||
| -        return ipv6_mask_to_net_prefix(mask)
 | ||||
| -    else:
 | ||||
| -        return ipv4_mask_to_net_prefix(mask)
 | ||||
| +
 | ||||
| +    netmask = ipaddress.ip_address(mask)
 | ||||
| +    mask_int = int(netmask)
 | ||||
| +    # If the mask is all zeroes, just return it
 | ||||
| +    if mask_int == 0:
 | ||||
| +        return mask_int
 | ||||
| +
 | ||||
| +    trailing_zeroes = min(
 | ||||
| +        ipaddress.IPV6LENGTH, (~mask_int & (mask_int - 1)).bit_length()
 | ||||
| +    )
 | ||||
| +    leading_ones = mask_int >> trailing_zeroes
 | ||||
| +    prefixlen = ipaddress.IPV6LENGTH - trailing_zeroes
 | ||||
| +    all_ones = (1 << prefixlen) - 1
 | ||||
| +    if leading_ones != all_ones:
 | ||||
| +        raise ValueError("Invalid network mask '%s'" % mask)
 | ||||
| +
 | ||||
| +    return prefixlen
 | ||||
|   | ||||
|   | ||||
|  def mask_and_ipv4_to_bcast_addr(mask, ip): | ||||
| diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
 | ||||
| index d5440998..7ecbe1c3 100644
 | ||||
| --- a/cloudinit/net/sysconfig.py
 | ||||
| +++ b/cloudinit/net/sysconfig.py
 | ||||
| @@ -12,6 +12,7 @@ from cloudinit import util
 | ||||
|  from cloudinit import subp | ||||
|  from cloudinit.distros.parsers import networkmanager_conf | ||||
|  from cloudinit.distros.parsers import resolv_conf | ||||
| +from cloudinit.net import network_state
 | ||||
|   | ||||
|  from . import renderer | ||||
|  from .network_state import ( | ||||
| @@ -171,43 +172,61 @@ class Route(ConfigMap):
 | ||||
|          # (because Route can contain a mix of IPv4 and IPv6) | ||||
|          reindex = -1 | ||||
|          for key in sorted(self._conf.keys()): | ||||
| -            if 'ADDRESS' in key:
 | ||||
| -                index = key.replace('ADDRESS', '')
 | ||||
| -                address_value = str(self._conf[key])
 | ||||
| -                # only accept combinations:
 | ||||
| -                # if proto ipv6 only display ipv6 routes
 | ||||
| -                # if proto ipv4 only display ipv4 routes
 | ||||
| -                # do not add ipv6 routes if proto is ipv4
 | ||||
| -                # do not add ipv4 routes if proto is ipv6
 | ||||
| -                # (this array will contain a mix of ipv4 and ipv6)
 | ||||
| -                if proto == "ipv4" and not self.is_ipv6_route(address_value):
 | ||||
| -                    netmask_value = str(self._conf['NETMASK' + index])
 | ||||
| -                    gateway_value = str(self._conf['GATEWAY' + index])
 | ||||
| -                    # increase IPv4 index
 | ||||
| -                    reindex = reindex + 1
 | ||||
| -                    buf.write("%s=%s\n" % ('ADDRESS' + str(reindex),
 | ||||
| -                                           _quote_value(address_value)))
 | ||||
| -                    buf.write("%s=%s\n" % ('GATEWAY' + str(reindex),
 | ||||
| -                                           _quote_value(gateway_value)))
 | ||||
| -                    buf.write("%s=%s\n" % ('NETMASK' + str(reindex),
 | ||||
| -                                           _quote_value(netmask_value)))
 | ||||
| -                    metric_key = 'METRIC' + index
 | ||||
| -                    if metric_key in self._conf:
 | ||||
| -                        metric_value = str(self._conf['METRIC' + index])
 | ||||
| -                        buf.write("%s=%s\n" % ('METRIC' + str(reindex),
 | ||||
| -                                               _quote_value(metric_value)))
 | ||||
| -                elif proto == "ipv6" and self.is_ipv6_route(address_value):
 | ||||
| -                    netmask_value = str(self._conf['NETMASK' + index])
 | ||||
| -                    gateway_value = str(self._conf['GATEWAY' + index])
 | ||||
| -                    metric_value = (
 | ||||
| -                        'metric ' + str(self._conf['METRIC' + index])
 | ||||
| -                        if 'METRIC' + index in self._conf else '')
 | ||||
| +            if "ADDRESS" not in key:
 | ||||
| +                continue
 | ||||
| +
 | ||||
| +            index = key.replace("ADDRESS", "")
 | ||||
| +            address_value = str(self._conf[key])
 | ||||
| +            netmask_value = str(self._conf["NETMASK" + index])
 | ||||
| +            gateway_value = str(self._conf["GATEWAY" + index])
 | ||||
| +
 | ||||
| +            # only accept combinations:
 | ||||
| +            # if proto ipv6 only display ipv6 routes
 | ||||
| +            # if proto ipv4 only display ipv4 routes
 | ||||
| +            # do not add ipv6 routes if proto is ipv4
 | ||||
| +            # do not add ipv4 routes if proto is ipv6
 | ||||
| +            # (this array will contain a mix of ipv4 and ipv6)
 | ||||
| +            if proto == "ipv4" and not self.is_ipv6_route(address_value):
 | ||||
| +                # increase IPv4 index
 | ||||
| +                reindex = reindex + 1
 | ||||
| +                buf.write(
 | ||||
| +                    "%s=%s\n"
 | ||||
| +                    % ("ADDRESS" + str(reindex), _quote_value(address_value))
 | ||||
| +                )
 | ||||
| +                buf.write(
 | ||||
| +                    "%s=%s\n"
 | ||||
| +                    % ("GATEWAY" + str(reindex), _quote_value(gateway_value))
 | ||||
| +                )
 | ||||
| +                buf.write(
 | ||||
| +                    "%s=%s\n"
 | ||||
| +                    % ("NETMASK" + str(reindex), _quote_value(netmask_value))
 | ||||
| +                )
 | ||||
| +                metric_key = "METRIC" + index
 | ||||
| +                if metric_key in self._conf:
 | ||||
| +                    metric_value = str(self._conf["METRIC" + index])
 | ||||
|                      buf.write( | ||||
| -                        "%s/%s via %s %s dev %s\n" % (address_value,
 | ||||
| -                                                      netmask_value,
 | ||||
| -                                                      gateway_value,
 | ||||
| -                                                      metric_value,
 | ||||
| -                                                      self._route_name))
 | ||||
| +                        "%s=%s\n"
 | ||||
| +                        % ("METRIC" + str(reindex), _quote_value(metric_value))
 | ||||
| +                    )
 | ||||
| +            elif proto == "ipv6" and self.is_ipv6_route(address_value):
 | ||||
| +                prefix_value = network_state.ipv6_mask_to_net_prefix(
 | ||||
| +                    netmask_value
 | ||||
| +                )
 | ||||
| +                metric_value = (
 | ||||
| +                    "metric " + str(self._conf["METRIC" + index])
 | ||||
| +                    if "METRIC" + index in self._conf
 | ||||
| +                    else ""
 | ||||
| +                )
 | ||||
| +                buf.write(
 | ||||
| +                    "%s/%s via %s %s dev %s\n"
 | ||||
| +                    % (
 | ||||
| +                        address_value,
 | ||||
| +                        prefix_value,
 | ||||
| +                        gateway_value,
 | ||||
| +                        metric_value,
 | ||||
| +                        self._route_name,
 | ||||
| +                    )
 | ||||
| +                )
 | ||||
|   | ||||
|          return buf.getvalue() | ||||
|   | ||||
| diff --git a/cloudinit/sources/DataSourceOpenNebula.py b/cloudinit/sources/DataSourceOpenNebula.py
 | ||||
| index 730ec586..e7980ab1 100644
 | ||||
| --- a/cloudinit/sources/DataSourceOpenNebula.py
 | ||||
| +++ b/cloudinit/sources/DataSourceOpenNebula.py
 | ||||
| @@ -233,7 +233,7 @@ class OpenNebulaNetwork(object):
 | ||||
|              # Set IPv4 address | ||||
|              devconf['addresses'] = [] | ||||
|              mask = self.get_mask(c_dev) | ||||
| -            prefix = str(net.mask_to_net_prefix(mask))
 | ||||
| +            prefix = str(net.ipv4_mask_to_net_prefix(mask))
 | ||||
|              devconf['addresses'].append( | ||||
|                  self.get_ip(c_dev, mac) + '/' + prefix) | ||||
|   | ||||
| diff --git a/cloudinit/sources/helpers/vmware/imc/config_nic.py b/cloudinit/sources/helpers/vmware/imc/config_nic.py
 | ||||
| index 9cd2c0c0..3a45c67e 100644
 | ||||
| --- a/cloudinit/sources/helpers/vmware/imc/config_nic.py
 | ||||
| +++ b/cloudinit/sources/helpers/vmware/imc/config_nic.py
 | ||||
| @@ -9,7 +9,7 @@ import logging
 | ||||
|  import os | ||||
|  import re | ||||
|   | ||||
| -from cloudinit.net.network_state import mask_to_net_prefix
 | ||||
| +from cloudinit.net.network_state import ipv4_mask_to_net_prefix
 | ||||
|  from cloudinit import subp | ||||
|  from cloudinit import util | ||||
|   | ||||
| @@ -180,7 +180,7 @@ class NicConfigurator(object):
 | ||||
|          """ | ||||
|          route_list = [] | ||||
|   | ||||
| -        cidr = mask_to_net_prefix(netmask)
 | ||||
| +        cidr = ipv4_mask_to_net_prefix(netmask)
 | ||||
|   | ||||
|          for gateway in gateways: | ||||
|              destination = "%s/%d" % (gen_subnet(gateway, netmask), cidr) | ||||
| diff --git a/tests/unittests/test_net.py b/tests/unittests/test_net.py
 | ||||
| index c67b5fcc..0bc547af 100644
 | ||||
| --- a/tests/unittests/test_net.py
 | ||||
| +++ b/tests/unittests/test_net.py
 | ||||
| @@ -2025,10 +2025,10 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
 | ||||
|                      routes: | ||||
|                          - gateway: 2001:67c:1562:1 | ||||
|                            network: 2001:67c:1 | ||||
| -                          netmask: ffff:ffff:0
 | ||||
| +                          netmask: "ffff:ffff::"
 | ||||
|                          - gateway: 3001:67c:1562:1 | ||||
|                            network: 3001:67c:1 | ||||
| -                          netmask: ffff:ffff:0
 | ||||
| +                          netmask: "ffff:ffff::"
 | ||||
|                            metric: 10000 | ||||
|              """), | ||||
|          'expected_netplan': textwrap.dedent(""" | ||||
| @@ -2295,8 +2295,8 @@ iface bond0 inet6 static
 | ||||
|              'route6-bond0': textwrap.dedent("""\ | ||||
|          # Created by cloud-init on instance boot automatically, do not edit. | ||||
|          # | ||||
| -        2001:67c:1/ffff:ffff:0 via 2001:67c:1562:1  dev bond0
 | ||||
| -        3001:67c:1/ffff:ffff:0 via 3001:67c:1562:1 metric 10000 dev bond0
 | ||||
| +        2001:67c:1/32 via 2001:67c:1562:1  dev bond0
 | ||||
| +        3001:67c:1/32 via 3001:67c:1562:1 metric 10000 dev bond0
 | ||||
|              """), | ||||
|              'route-bond0': textwrap.dedent("""\ | ||||
|          ADDRESS0=10.1.3.0 | ||||
| @@ -3084,6 +3084,76 @@ USERCTL=no
 | ||||
|              renderer.render_network_state(ns, target=render_dir) | ||||
|          self.assertEqual([], os.listdir(render_dir)) | ||||
|   | ||||
| +    def test_invalid_network_mask_ipv6(self):
 | ||||
| +        net_json = {
 | ||||
| +            "services": [{"type": "dns", "address": "172.19.0.12"}],
 | ||||
| +            "networks": [
 | ||||
| +                {
 | ||||
| +                    "network_id": "public-ipv6",
 | ||||
| +                    "type": "ipv6",
 | ||||
| +                    "netmask": "",
 | ||||
| +                    "link": "tap1a81968a-79",
 | ||||
| +                    "routes": [
 | ||||
| +                        {
 | ||||
| +                            "gateway": "2001:DB8::1",
 | ||||
| +                            "netmask": "ff:ff:ff:ff::",
 | ||||
| +                            "network": "2001:DB8:1::1",
 | ||||
| +                        },
 | ||||
| +                    ],
 | ||||
| +                    "ip_address": "2001:DB8::10",
 | ||||
| +                    "id": "network1",
 | ||||
| +                }
 | ||||
| +            ],
 | ||||
| +            "links": [
 | ||||
| +                {
 | ||||
| +                    "ethernet_mac_address": "fa:16:3e:ed:9a:59",
 | ||||
| +                    "mtu": None,
 | ||||
| +                    "type": "bridge",
 | ||||
| +                    "id": "tap1a81968a-79",
 | ||||
| +                    "vif_id": "1a81968a-797a-400f-8a80-567f997eb93f",
 | ||||
| +                },
 | ||||
| +            ],
 | ||||
| +        }
 | ||||
| +        macs = {"fa:16:3e:ed:9a:59": "eth0"}
 | ||||
| +        network_cfg = openstack.convert_net_json(net_json, known_macs=macs)
 | ||||
| +        with self.assertRaises(ValueError):
 | ||||
| +            network_state.parse_net_config_data(network_cfg, skip_broken=False)
 | ||||
| +
 | ||||
| +    def test_invalid_network_mask_ipv4(self):
 | ||||
| +        net_json = {
 | ||||
| +            "services": [{"type": "dns", "address": "172.19.0.12"}],
 | ||||
| +            "networks": [
 | ||||
| +                {
 | ||||
| +                    "network_id": "public-ipv4",
 | ||||
| +                    "type": "ipv4",
 | ||||
| +                    "netmask": "",
 | ||||
| +                    "link": "tap1a81968a-79",
 | ||||
| +                    "routes": [
 | ||||
| +                        {
 | ||||
| +                            "gateway": "172.20.0.1",
 | ||||
| +                            "netmask": "255.234.255.0",
 | ||||
| +                            "network": "172.19.0.0",
 | ||||
| +                        },
 | ||||
| +                    ],
 | ||||
| +                    "ip_address": "172.20.0.10",
 | ||||
| +                    "id": "network1",
 | ||||
| +                }
 | ||||
| +            ],
 | ||||
| +            "links": [
 | ||||
| +                {
 | ||||
| +                    "ethernet_mac_address": "fa:16:3e:ed:9a:59",
 | ||||
| +                    "mtu": None,
 | ||||
| +                    "type": "bridge",
 | ||||
| +                    "id": "tap1a81968a-79",
 | ||||
| +                    "vif_id": "1a81968a-797a-400f-8a80-567f997eb93f",
 | ||||
| +                },
 | ||||
| +            ],
 | ||||
| +        }
 | ||||
| +        macs = {"fa:16:3e:ed:9a:59": "eth0"}
 | ||||
| +        network_cfg = openstack.convert_net_json(net_json, known_macs=macs)
 | ||||
| +        with self.assertRaises(ValueError):
 | ||||
| +            network_state.parse_net_config_data(network_cfg, skip_broken=False)
 | ||||
| +
 | ||||
|      def test_openstack_rendering_samples(self): | ||||
|          for os_sample in OS_SAMPLES: | ||||
|              render_dir = self.tmp_dir() | ||||
| -- 
 | ||||
| 2.27.0 | ||||
| 
 | ||||
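The heart of the fix above is turning an expanded IPv6 netmask such as ffff:ffff:: into a prefix length so route6 files render as <address>/<prefix>, which both NetworkManager and the legacy network scripts accept. The patch leans on the ipaddress module for IPv4 ("0.0.0.0/<mask>") but converts IPv6 masks by hand; a standalone sketch of that conversion, not the helper the patch adds:

    import ipaddress

    def ipv6_mask_to_prefix(mask: str) -> int:
        # Count the contiguous leading one-bits of the 128-bit mask.
        mask_int = int(ipaddress.IPv6Address(mask))
        prefix = 0
        while mask_int & (1 << 127):
            prefix += 1
            mask_int = (mask_int << 1) & ((1 << 128) - 1)
        if mask_int:  # a zero bit was followed by more one-bits
            raise ValueError("Invalid network mask '%s'" % mask)
        return prefix

    assert ipv6_mask_to_prefix("ffff:ffff::") == 32
    assert ipaddress.ip_network("0.0.0.0/255.255.255.0").prefixlen == 24

With that conversion in place, the updated unit tests expect routes such as 2001:67c:1/32 via 2001:67c:1562:1 instead of the old ADDR/ffff:ffff:0 form.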
| @ -0,0 +1,262 @@ | ||||
| From 5bfe2ee2b063d87e6fd255d6c5e63123aa3f6de0 Mon Sep 17 00:00:00 2001 | ||||
| From: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| Date: Sat, 21 Aug 2021 13:55:53 +0200 | ||||
| Subject: [PATCH] Fix home permissions modified by ssh module (SC-338) (#984) | ||||
| 
 | ||||
| RH-Author: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| RH-MergeRequest: 9: Fix home permissions modified by ssh module | ||||
| RH-Commit: [1/1] ab55db88aa1bf2f77acaca5e76ffabbab72b1fb2 (eesposit/cloud-init-centos-) | ||||
| RH-Bugzilla: 1995843 | ||||
| RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com> | ||||
| RH-Acked-by: Eduardo Otubo <otubo@redhat.com> | ||||
| 
 | ||||
| TESTED: By me and QA | ||||
| BREW: 39178085 | ||||
| 
 | ||||
| Fix home permissions modified by ssh module (SC-338) (#984) | ||||
| 
 | ||||
| commit 7d3f5d750f6111c2716143364ea33486df67c927 | ||||
| Author: James Falcon <therealfalcon@gmail.com> | ||||
| Date:   Fri Aug 20 17:09:49 2021 -0500 | ||||
| 
 | ||||
|     Fix home permissions modified by ssh module (SC-338) (#984) | ||||
| 
 | ||||
|     Fix home permissions modified by ssh module | ||||
| 
 | ||||
|     In #956, we updated the file and directory permissions for keys not in | ||||
|     the user's home directory. We also unintentionally modified the | ||||
|     permissions within the home directory as well. These should not change, | ||||
|     and this commit changes that back. | ||||
| 
 | ||||
|     LP: #1940233 | ||||
| 
 | ||||
| Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| ---
 | ||||
|  cloudinit/ssh_util.py                         |  35 ++++- | ||||
|  .../modules/test_ssh_keysfile.py              | 132 +++++++++++++++--- | ||||
|  2 files changed, 146 insertions(+), 21 deletions(-) | ||||
| 
 | ||||
| diff --git a/cloudinit/ssh_util.py b/cloudinit/ssh_util.py
 | ||||
| index b8a3c8f7..9ccadf09 100644
 | ||||
| --- a/cloudinit/ssh_util.py
 | ||||
| +++ b/cloudinit/ssh_util.py
 | ||||
| @@ -321,23 +321,48 @@ def check_create_path(username, filename, strictmodes):
 | ||||
|          home_folder = os.path.dirname(user_pwent.pw_dir) | ||||
|          for directory in directories: | ||||
|              parent_folder += "/" + directory | ||||
| -            if home_folder.startswith(parent_folder):
 | ||||
| +
 | ||||
| +            # security check, disallow symlinks in the AuthorizedKeysFile path.
 | ||||
| +            if os.path.islink(parent_folder):
 | ||||
| +                LOG.debug(
 | ||||
| +                    "Invalid directory. Symlink exists in path: %s",
 | ||||
| +                    parent_folder)
 | ||||
| +                return False
 | ||||
| +
 | ||||
| +            if os.path.isfile(parent_folder):
 | ||||
| +                LOG.debug(
 | ||||
| +                    "Invalid directory. File exists in path: %s",
 | ||||
| +                    parent_folder)
 | ||||
| +                return False
 | ||||
| +
 | ||||
| +            if (home_folder.startswith(parent_folder) or
 | ||||
| +                    parent_folder == user_pwent.pw_dir):
 | ||||
|                  continue | ||||
|   | ||||
| -            if not os.path.isdir(parent_folder):
 | ||||
| +            if not os.path.exists(parent_folder):
 | ||||
|                  # directory does not exist, and permission so far are good: | ||||
|                  # create the directory, and make it accessible by everyone | ||||
|                  # but owned by root, as it might be used by many users. | ||||
|                  with util.SeLinuxGuard(parent_folder): | ||||
| -                    os.makedirs(parent_folder, mode=0o755, exist_ok=True)
 | ||||
| -                    util.chownbyid(parent_folder, root_pwent.pw_uid,
 | ||||
| -                                   root_pwent.pw_gid)
 | ||||
| +                    mode = 0o755
 | ||||
| +                    uid = root_pwent.pw_uid
 | ||||
| +                    gid = root_pwent.pw_gid
 | ||||
| +                    if parent_folder.startswith(user_pwent.pw_dir):
 | ||||
| +                        mode = 0o700
 | ||||
| +                        uid = user_pwent.pw_uid
 | ||||
| +                        gid = user_pwent.pw_gid
 | ||||
| +                    os.makedirs(parent_folder, mode=mode, exist_ok=True)
 | ||||
| +                    util.chownbyid(parent_folder, uid, gid)
 | ||||
|   | ||||
|              permissions = check_permissions(username, parent_folder, | ||||
|                                              filename, False, strictmodes) | ||||
|              if not permissions: | ||||
|                  return False | ||||
|   | ||||
| +        if os.path.islink(filename) or os.path.isdir(filename):
 | ||||
| +            LOG.debug("%s is not a file!", filename)
 | ||||
| +            return False
 | ||||
| +
 | ||||
|          # check the file | ||||
|          if not os.path.exists(filename): | ||||
|              # if file does not exist: we need to create it, since the | ||||
| diff --git a/tests/integration_tests/modules/test_ssh_keysfile.py b/tests/integration_tests/modules/test_ssh_keysfile.py
 | ||||
| index f82d7649..3159feb9 100644
 | ||||
| --- a/tests/integration_tests/modules/test_ssh_keysfile.py
 | ||||
| +++ b/tests/integration_tests/modules/test_ssh_keysfile.py
 | ||||
| @@ -10,10 +10,10 @@ TEST_USER1_KEYS = get_test_rsa_keypair('test1')
 | ||||
|  TEST_USER2_KEYS = get_test_rsa_keypair('test2') | ||||
|  TEST_DEFAULT_KEYS = get_test_rsa_keypair('test3') | ||||
|   | ||||
| -USERDATA = """\
 | ||||
| +_USERDATA = """\
 | ||||
|  #cloud-config | ||||
|  bootcmd: | ||||
| - - sed -i 's;#AuthorizedKeysFile.*;AuthorizedKeysFile /etc/ssh/authorized_keys %h/.ssh/authorized_keys2;' /etc/ssh/sshd_config
 | ||||
| + - {bootcmd}
 | ||||
|  ssh_authorized_keys: | ||||
|   - {default} | ||||
|  users: | ||||
| @@ -24,27 +24,17 @@ users:
 | ||||
|  - name: test_user2 | ||||
|    ssh_authorized_keys: | ||||
|      - {user2} | ||||
| -""".format(  # noqa: E501
 | ||||
| +""".format(
 | ||||
| +    bootcmd='{bootcmd}',
 | ||||
|      default=TEST_DEFAULT_KEYS.public_key, | ||||
|      user1=TEST_USER1_KEYS.public_key, | ||||
|      user2=TEST_USER2_KEYS.public_key, | ||||
|  ) | ||||
|   | ||||
|   | ||||
| -@pytest.mark.ubuntu
 | ||||
| -@pytest.mark.user_data(USERDATA)
 | ||||
| -def test_authorized_keys(client: IntegrationInstance):
 | ||||
| -    expected_keys = [
 | ||||
| -        ('test_user1', '/home/test_user1/.ssh/authorized_keys2',
 | ||||
| -         TEST_USER1_KEYS),
 | ||||
| -        ('test_user2', '/home/test_user2/.ssh/authorized_keys2',
 | ||||
| -         TEST_USER2_KEYS),
 | ||||
| -        ('ubuntu', '/home/ubuntu/.ssh/authorized_keys2',
 | ||||
| -         TEST_DEFAULT_KEYS),
 | ||||
| -        ('root', '/root/.ssh/authorized_keys2', TEST_DEFAULT_KEYS),
 | ||||
| -    ]
 | ||||
| -
 | ||||
| +def common_verify(client, expected_keys):
 | ||||
|      for user, filename, keys in expected_keys: | ||||
| +        # Ensure key is in the key file
 | ||||
|          contents = client.read_from_file(filename) | ||||
|          if user in ['ubuntu', 'root']: | ||||
|              # Our personal public key gets added by pycloudlib | ||||
| @@ -83,3 +73,113 @@ def test_authorized_keys(client: IntegrationInstance):
 | ||||
|                      look_for_keys=False, | ||||
|                      allow_agent=False, | ||||
|                  ) | ||||
| +
 | ||||
| +        # Ensure we haven't messed with any /home permissions
 | ||||
| +        # See LP: #1940233
 | ||||
| +        home_dir = '/home/{}'.format(user)
 | ||||
| +        home_perms = '755'
 | ||||
| +        if user == 'root':
 | ||||
| +            home_dir = '/root'
 | ||||
| +            home_perms = '700'
 | ||||
| +        assert '{} {}'.format(user, home_perms) == client.execute(
 | ||||
| +            'stat -c "%U %a" {}'.format(home_dir)
 | ||||
| +        )
 | ||||
| +        if client.execute("test -d {}/.ssh".format(home_dir)).ok:
 | ||||
| +            assert '{} 700'.format(user) == client.execute(
 | ||||
| +                'stat -c "%U %a" {}/.ssh'.format(home_dir)
 | ||||
| +            )
 | ||||
| +        assert '{} 600'.format(user) == client.execute(
 | ||||
| +            'stat -c "%U %a" {}'.format(filename)
 | ||||
| +        )
 | ||||
| +
 | ||||
| +        # Also ensure ssh-keygen works as expected
 | ||||
| +        client.execute('mkdir {}/.ssh'.format(home_dir))
 | ||||
| +        assert client.execute(
 | ||||
| +            "ssh-keygen -b 2048 -t rsa -f {}/.ssh/id_rsa -q -N ''".format(
 | ||||
| +                home_dir)
 | ||||
| +        ).ok
 | ||||
| +        assert client.execute('test -f {}/.ssh/id_rsa'.format(home_dir))
 | ||||
| +        assert client.execute('test -f {}/.ssh/id_rsa.pub'.format(home_dir))
 | ||||
| +
 | ||||
| +    assert 'root 755' == client.execute('stat -c "%U %a" /home')
 | ||||
| +
 | ||||
| +
 | ||||
| +DEFAULT_KEYS_USERDATA = _USERDATA.format(bootcmd='""')
 | ||||
| +
 | ||||
| +
 | ||||
| +@pytest.mark.ubuntu
 | ||||
| +@pytest.mark.user_data(DEFAULT_KEYS_USERDATA)
 | ||||
| +def test_authorized_keys_default(client: IntegrationInstance):
 | ||||
| +    expected_keys = [
 | ||||
| +        ('test_user1', '/home/test_user1/.ssh/authorized_keys',
 | ||||
| +         TEST_USER1_KEYS),
 | ||||
| +        ('test_user2', '/home/test_user2/.ssh/authorized_keys',
 | ||||
| +         TEST_USER2_KEYS),
 | ||||
| +        ('ubuntu', '/home/ubuntu/.ssh/authorized_keys',
 | ||||
| +         TEST_DEFAULT_KEYS),
 | ||||
| +        ('root', '/root/.ssh/authorized_keys', TEST_DEFAULT_KEYS),
 | ||||
| +    ]
 | ||||
| +    common_verify(client, expected_keys)
 | ||||
| +
 | ||||
| +
 | ||||
| +AUTHORIZED_KEYS2_USERDATA = _USERDATA.format(bootcmd=(
 | ||||
| +    "sed -i 's;#AuthorizedKeysFile.*;AuthorizedKeysFile "
 | ||||
| +    "/etc/ssh/authorized_keys %h/.ssh/authorized_keys2;' "
 | ||||
| +    "/etc/ssh/sshd_config"))
 | ||||
| +
 | ||||
| +
 | ||||
| +@pytest.mark.ubuntu
 | ||||
| +@pytest.mark.user_data(AUTHORIZED_KEYS2_USERDATA)
 | ||||
| +def test_authorized_keys2(client: IntegrationInstance):
 | ||||
| +    expected_keys = [
 | ||||
| +        ('test_user1', '/home/test_user1/.ssh/authorized_keys2',
 | ||||
| +         TEST_USER1_KEYS),
 | ||||
| +        ('test_user2', '/home/test_user2/.ssh/authorized_keys2',
 | ||||
| +         TEST_USER2_KEYS),
 | ||||
| +        ('ubuntu', '/home/ubuntu/.ssh/authorized_keys2',
 | ||||
| +         TEST_DEFAULT_KEYS),
 | ||||
| +        ('root', '/root/.ssh/authorized_keys2', TEST_DEFAULT_KEYS),
 | ||||
| +    ]
 | ||||
| +    common_verify(client, expected_keys)
 | ||||
| +
 | ||||
| +
 | ||||
| +NESTED_KEYS_USERDATA = _USERDATA.format(bootcmd=(
 | ||||
| +    "sed -i 's;#AuthorizedKeysFile.*;AuthorizedKeysFile "
 | ||||
| +    "/etc/ssh/authorized_keys %h/foo/bar/ssh/keys;' "
 | ||||
| +    "/etc/ssh/sshd_config"))
 | ||||
| +
 | ||||
| +
 | ||||
| +@pytest.mark.ubuntu
 | ||||
| +@pytest.mark.user_data(NESTED_KEYS_USERDATA)
 | ||||
| +def test_nested_keys(client: IntegrationInstance):
 | ||||
| +    expected_keys = [
 | ||||
| +        ('test_user1', '/home/test_user1/foo/bar/ssh/keys',
 | ||||
| +         TEST_USER1_KEYS),
 | ||||
| +        ('test_user2', '/home/test_user2/foo/bar/ssh/keys',
 | ||||
| +         TEST_USER2_KEYS),
 | ||||
| +        ('ubuntu', '/home/ubuntu/foo/bar/ssh/keys',
 | ||||
| +         TEST_DEFAULT_KEYS),
 | ||||
| +        ('root', '/root/foo/bar/ssh/keys', TEST_DEFAULT_KEYS),
 | ||||
| +    ]
 | ||||
| +    common_verify(client, expected_keys)
 | ||||
| +
 | ||||
| +
 | ||||
| +EXTERNAL_KEYS_USERDATA = _USERDATA.format(bootcmd=(
 | ||||
| +    "sed -i 's;#AuthorizedKeysFile.*;AuthorizedKeysFile "
 | ||||
| +    "/etc/ssh/authorized_keys /etc/ssh/authorized_keys/%u/keys;' "
 | ||||
| +    "/etc/ssh/sshd_config"))
 | ||||
| +
 | ||||
| +
 | ||||
| +@pytest.mark.ubuntu
 | ||||
| +@pytest.mark.user_data(EXTERNAL_KEYS_USERDATA)
 | ||||
| +def test_external_keys(client: IntegrationInstance):
 | ||||
| +    expected_keys = [
 | ||||
| +        ('test_user1', '/etc/ssh/authorized_keys/test_user1/keys',
 | ||||
| +         TEST_USER1_KEYS),
 | ||||
| +        ('test_user2', '/etc/ssh/authorized_keys/test_user2/keys',
 | ||||
| +         TEST_USER2_KEYS),
 | ||||
| +        ('ubuntu', '/etc/ssh/authorized_keys/ubuntu/keys',
 | ||||
| +         TEST_DEFAULT_KEYS),
 | ||||
| +        ('root', '/etc/ssh/authorized_keys/root/keys', TEST_DEFAULT_KEYS),
 | ||||
| +    ]
 | ||||
| +    common_verify(client, expected_keys)
 | ||||
| -- 
 | ||||
| 2.27.0 | ||||
| 
 | ||||
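The behavioural point of the patch above is that directories created on the way to an AuthorizedKeysFile get different ownership and modes depending on where they sit: private (0700, user-owned) inside $HOME, shared (0755, root-owned) outside it, with the existing /home permissions left untouched. A simplified sketch of that decision using only the standard library (the real helper also walks each path component and rejects symlinks and regular files):

    import os
    import pwd

    def create_keys_dir(path: str, username: str) -> None:
        user = pwd.getpwnam(username)
        root = pwd.getpwnam('root')
        if path.startswith(user.pw_dir):
            # Inside $HOME: keep it private to the user.
            mode, uid, gid = 0o700, user.pw_uid, user.pw_gid
        else:
            # Outside $HOME (e.g. /etc/ssh/...): root-owned, world-readable.
            mode, uid, gid = 0o755, root.pw_uid, root.pw_gid
        os.makedirs(path, mode=mode, exist_ok=True)
        os.chown(path, uid, gid)  # needs root, as cloud-init has at boot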
| @ -0,0 +1,103 @@ | ||||
| From 83394f05a01b5e1f8e520213537558c1cb5d9051 Mon Sep 17 00:00:00 2001 | ||||
| From: Eduardo Otubo <otubo@redhat.com> | ||||
| Date: Thu, 1 Jul 2021 12:01:34 +0200 | ||||
| Subject: [PATCH] Fix requiring device-number on EC2 derivatives (#836) | ||||
| 
 | ||||
| RH-Author: Eduardo Otubo <otubo@redhat.com> | ||||
| RH-MergeRequest: 3: Fix requiring device-number on EC2 derivatives (#836) | ||||
| RH-Commit: [1/1] a0b7af14a2bc6480bb844a496007737b8807f666 (otubo/cloud-init-src) | ||||
| RH-Bugzilla: 1943511 | ||||
| RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com> | ||||
| 
 | ||||
| commit 9bd19645a61586b82e86db6f518dd05c3363b17f | ||||
| Author: James Falcon <TheRealFalcon@users.noreply.github.com> | ||||
| Date:   Mon Mar 8 14:09:47 2021 -0600 | ||||
| 
 | ||||
|     Fix requiring device-number on EC2 derivatives (#836) | ||||
| 
 | ||||
|     #342 (70dbccbb) introduced the ability to determine route-metrics based on | ||||
|     the `device-number` provided by the EC2 IMDS. Not all datasources that | ||||
|     subclass EC2 will have this attribute, so allow the old behavior if | ||||
|     `device-number` is not present. | ||||
| 
 | ||||
|     LP: #1917875 | ||||
| 
 | ||||
| Signed-off-by: Eduardo Otubo <otubo@redhat.com> | ||||
| Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com> | ||||
| ---
 | ||||
|  cloudinit/sources/DataSourceEc2.py            |  3 +- | ||||
|  .../unittests/test_datasource/test_aliyun.py  | 30 +++++++++++++++++++ | ||||
|  2 files changed, 32 insertions(+), 1 deletion(-) | ||||
| 
 | ||||
| diff --git a/cloudinit/sources/DataSourceEc2.py b/cloudinit/sources/DataSourceEc2.py
 | ||||
| index 1930a509..a2105dc7 100644
 | ||||
| --- a/cloudinit/sources/DataSourceEc2.py
 | ||||
| +++ b/cloudinit/sources/DataSourceEc2.py
 | ||||
| @@ -765,13 +765,14 @@ def convert_ec2_metadata_network_config(
 | ||||
|          netcfg['ethernets'][nic_name] = dev_config | ||||
|          return netcfg | ||||
|      # Apply network config for all nics and any secondary IPv4/v6 addresses | ||||
| +    nic_idx = 0
 | ||||
|      for mac, nic_name in sorted(macs_to_nics.items()): | ||||
|          nic_metadata = macs_metadata.get(mac) | ||||
|          if not nic_metadata: | ||||
|              continue  # Not a physical nic represented in metadata | ||||
|          # device-number is zero-indexed, we want it 1-indexed for the | ||||
|          # multiplication on the following line | ||||
| -        nic_idx = int(nic_metadata['device-number']) + 1
 | ||||
| +        nic_idx = int(nic_metadata.get('device-number', nic_idx)) + 1
 | ||||
|          dhcp_override = {'route-metric': nic_idx * 100} | ||||
|          dev_config = {'dhcp4': True, 'dhcp4-overrides': dhcp_override, | ||||
|                        'dhcp6': False, | ||||
| diff --git a/tests/unittests/test_datasource/test_aliyun.py b/tests/unittests/test_datasource/test_aliyun.py
 | ||||
| index eb2828d5..cab1ac2b 100644
 | ||||
| --- a/tests/unittests/test_datasource/test_aliyun.py
 | ||||
| +++ b/tests/unittests/test_datasource/test_aliyun.py
 | ||||
| @@ -7,6 +7,7 @@ from unittest import mock
 | ||||
|   | ||||
|  from cloudinit import helpers | ||||
|  from cloudinit.sources import DataSourceAliYun as ay | ||||
| +from cloudinit.sources.DataSourceEc2 import convert_ec2_metadata_network_config
 | ||||
|  from cloudinit.tests import helpers as test_helpers | ||||
|   | ||||
|  DEFAULT_METADATA = { | ||||
| @@ -183,6 +184,35 @@ class TestAliYunDatasource(test_helpers.HttprettyTestCase):
 | ||||
|          self.assertEqual(ay.parse_public_keys(public_keys), | ||||
|                           public_keys['key-pair-0']['openssh-key']) | ||||
|   | ||||
| +    def test_route_metric_calculated_without_device_number(self):
 | ||||
| +        """Test that route-metric code works without `device-number`
 | ||||
| +
 | ||||
| +        `device-number` is part of EC2 metadata, but not supported on aliyun.
 | ||||
| +        Attempting to access it will raise a KeyError.
 | ||||
| +
 | ||||
| +        LP: #1917875
 | ||||
| +        """
 | ||||
| +        netcfg = convert_ec2_metadata_network_config(
 | ||||
| +            {"interfaces": {"macs": {
 | ||||
| +                "06:17:04:d7:26:09": {
 | ||||
| +                    "interface-id": "eni-e44ef49e",
 | ||||
| +                },
 | ||||
| +                "06:17:04:d7:26:08": {
 | ||||
| +                    "interface-id": "eni-e44ef49f",
 | ||||
| +                }
 | ||||
| +            }}},
 | ||||
| +            macs_to_nics={
 | ||||
| +                '06:17:04:d7:26:09': 'eth0',
 | ||||
| +                '06:17:04:d7:26:08': 'eth1',
 | ||||
| +            }
 | ||||
| +        )
 | ||||
| +
 | ||||
| +        met0 = netcfg['ethernets']['eth0']['dhcp4-overrides']['route-metric']
 | ||||
| +        met1 = netcfg['ethernets']['eth1']['dhcp4-overrides']['route-metric']
 | ||||
| +
 | ||||
| +        # route-metric numbers should be 100 apart
 | ||||
| +        assert 100 == abs(met0 - met1)
 | ||||
| +
 | ||||
|   | ||||
|  class TestIsAliYun(test_helpers.CiTestCase): | ||||
|      ALIYUN_PRODUCT = 'Alibaba Cloud ECS' | ||||
| -- 
 | ||||
| 2.27.0 | ||||
| 
 | ||||
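The one-line change above replaces a direct nic_metadata['device-number'] lookup with a .get() that falls back to a running index, so EC2-derived clouds such as Aliyun, which omit the key, still get deterministic route metrics spaced 100 apart. A small standalone sketch of that metric assignment (not the full convert_ec2_metadata_network_config()):

    def route_metrics(macs_to_nics, macs_metadata):
        metrics = {}
        nic_idx = 0
        for mac, nic_name in sorted(macs_to_nics.items()):
            nic_metadata = macs_metadata.get(mac, {})
            # Fall back to the previous index when 'device-number' is absent.
            nic_idx = int(nic_metadata.get('device-number', nic_idx)) + 1
            metrics[nic_name] = nic_idx * 100
        return metrics

    # Without 'device-number', the two NICs still end up 100 apart:
    assert route_metrics(
        {'06:17:04:d7:26:09': 'eth0', '06:17:04:d7:26:08': 'eth1'},
        {'06:17:04:d7:26:09': {}, '06:17:04:d7:26:08': {}},
    ) == {'eth1': 100, 'eth0': 200}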
| @ -0,0 +1,104 @@ | ||||
| From e6412be62079bbec5d67d178711ea42f21cafab8 Mon Sep 17 00:00:00 2001 | ||||
| From: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| Date: Tue, 12 Oct 2021 16:35:00 +0200 | ||||
| Subject: [PATCH 1/2] Inhibit sshd-keygen@.service if cloud-init is active | ||||
|  (#1028) | ||||
| 
 | ||||
| RH-Author: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| RH-MergeRequest: 11: Add drop-in to prevent race with sshd-keygen service | ||||
| RH-Commit: [1/2] 77ba3f167e71c43847aa5b38e1833d84568ed5a7 (eesposit/cloud-init-centos-) | ||||
| RH-Bugzilla: 2002492 | ||||
| RH-Acked-by: Eduardo Otubo <otubo@redhat.com> | ||||
| RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com> | ||||
| 
 | ||||
| TESTED: by me and QA | ||||
| BREW: 40286693 | ||||
| 
 | ||||
| commit 02c71f097bca455a0f87d3e0a2af4d04b1cbd727 | ||||
| Author: Ryan Harper <ryan.harper@canonical.com> | ||||
| Date:   Tue Oct 12 09:31:36 2021 -0500 | ||||
| 
 | ||||
|     Inhibit sshd-keygen@.service if cloud-init is active (#1028) | ||||
| 
 | ||||
|     In some cloud-init enabled images the sshd-keygen@.service | ||||
|     may race with cloud-init and prevent ssh host keys from being | ||||
|     generated or generating host keys twice slowing boot and  consuming | ||||
|     additional entropy during boot.  This drop-in unit adds a condition to | ||||
|     the sshd-keygen@.service which prevents running if cloud-init is active. | ||||
| 
 | ||||
| Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| 
 | ||||
| Conflicts: minor conflict in setup.py (line 253), where we still use | ||||
| "/usr/lib/" instead of LIB | ||||
| ---
 | ||||
|  packages/redhat/cloud-init.spec.in                    | 1 + | ||||
|  packages/suse/cloud-init.spec.in                      | 1 + | ||||
|  setup.py                                              | 5 ++++- | ||||
|  systemd/disable-sshd-keygen-if-cloud-init-active.conf | 8 ++++++++ | ||||
|  4 files changed, 14 insertions(+), 1 deletion(-) | ||||
|  create mode 100644 systemd/disable-sshd-keygen-if-cloud-init-active.conf | ||||
| 
 | ||||
| diff --git a/packages/redhat/cloud-init.spec.in b/packages/redhat/cloud-init.spec.in
 | ||||
| index 16138012..1491822b 100644
 | ||||
| --- a/packages/redhat/cloud-init.spec.in
 | ||||
| +++ b/packages/redhat/cloud-init.spec.in
 | ||||
| @@ -175,6 +175,7 @@ fi
 | ||||
|   | ||||
|  %if "%{init_system}" == "systemd" | ||||
|  /usr/lib/systemd/system-generators/cloud-init-generator | ||||
| +%{_sysconfdir}/systemd/system/sshd-keygen@.service.d/disable-sshd-keygen-if-cloud-init-active.conf
 | ||||
|  %{_unitdir}/cloud-* | ||||
|  %else | ||||
|  %attr(0755, root, root) %{_initddir}/cloud-config | ||||
| diff --git a/packages/suse/cloud-init.spec.in b/packages/suse/cloud-init.spec.in
 | ||||
| index 004b875f..da8107b4 100644
 | ||||
| --- a/packages/suse/cloud-init.spec.in
 | ||||
| +++ b/packages/suse/cloud-init.spec.in
 | ||||
| @@ -126,6 +126,7 @@ version_pys=$(cd "%{buildroot}" && find . -name version.py -type f)
 | ||||
|   | ||||
|  %{_sysconfdir}/dhcp/dhclient-exit-hooks.d/hook-dhclient | ||||
|  %{_sysconfdir}/NetworkManager/dispatcher.d/hook-network-manager | ||||
| +%{_sysconfdir}/systemd/system/sshd-keygen@.service.d/disable-sshd-keygen-if-cloud-init-active.conf
 | ||||
|   | ||||
|  # Python code is here... | ||||
|  %{python_sitelib}/* | ||||
| diff --git a/setup.py b/setup.py
 | ||||
| index d5cd01a4..ec03fa27 100755
 | ||||
| --- a/setup.py
 | ||||
| +++ b/setup.py
 | ||||
| @@ -38,6 +38,7 @@ def is_generator(p):
 | ||||
|  def pkg_config_read(library, var): | ||||
|      fallbacks = { | ||||
|          'systemd': { | ||||
| +            'systemdsystemconfdir': '/etc/systemd/system',
 | ||||
|              'systemdsystemunitdir': '/lib/systemd/system', | ||||
|              'systemdsystemgeneratordir': '/lib/systemd/system-generators', | ||||
|          } | ||||
| @@ -249,7 +250,9 @@ if not platform.system().endswith('BSD'):
 | ||||
|      data_files.extend([ | ||||
|          (ETC + '/NetworkManager/dispatcher.d/', | ||||
|           ['tools/hook-network-manager']), | ||||
| -        ('/usr/lib/udev/rules.d', [f for f in glob('udev/*.rules')])
 | ||||
| +        ('/usr/lib/udev/rules.d', [f for f in glob('udev/*.rules')]),
 | ||||
| +        (ETC + '/systemd/system/sshd-keygen@.service.d/',
 | ||||
| +         ['systemd/disable-sshd-keygen-if-cloud-init-active.conf']),
 | ||||
|      ]) | ||||
|  # Use a subclass for install that handles | ||||
|  # adding on the right init system configuration files | ||||
| diff --git a/systemd/disable-sshd-keygen-if-cloud-init-active.conf b/systemd/disable-sshd-keygen-if-cloud-init-active.conf
 | ||||
| new file mode 100644 | ||||
| index 00000000..71e35876
 | ||||
| --- /dev/null
 | ||||
| +++ b/systemd/disable-sshd-keygen-if-cloud-init-active.conf
 | ||||
| @@ -0,0 +1,8 @@
 | ||||
| +# In some cloud-init enabled images the sshd-keygen template service may race
 | ||||
| +# with cloud-init during boot causing issues with host key generation.  This
 | ||||
| +# drop-in config adds a condition to sshd-keygen@.service if it exists and
 | ||||
| +# prevents the sshd-keygen units from running *if* cloud-init is going to run.
 | ||||
| +#
 | ||||
| +[Unit]
 | ||||
| +ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target
 | ||||
| +EOF
 | ||||
| -- 
 | ||||
| 2.27.0 | ||||
| 
 | ||||
							
								
								
									
47 SOURCES/ci-Revert-unnecesary-lcase-in-ds-identify-978.patch Normal file
							| @ -0,0 +1,47 @@ | ||||
| From 0aba80bf749458960945acf106833b098c3c5c97 Mon Sep 17 00:00:00 2001 | ||||
| From: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| Date: Fri, 14 Jan 2022 16:50:44 +0100 | ||||
| Subject: [PATCH 4/5] Revert unnecesary lcase in ds-identify (#978) | ||||
| 
 | ||||
| RH-Author: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| RH-MergeRequest: 17: Datasource for VMware | ||||
| RH-Commit: [4/5] 334aae223b966173238a905150cf7bc07829c255 (eesposit/cloud-init-centos-) | ||||
| RH-Bugzilla: 2040090 | ||||
| RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com> | ||||
| RH-Acked-by: Eduardo Otubo <otubo@redhat.com> | ||||
| 
 | ||||
| commit f516a7d37c1654addc02485e681b4358d7e7c0db | ||||
| Author: Andrew Kutz <101085+akutz@users.noreply.github.com> | ||||
| Date:   Fri Aug 13 14:30:55 2021 -0500 | ||||
| 
 | ||||
|     Revert unnecesary lcase in ds-identify (#978) | ||||
| 
 | ||||
|     This patch reverts an unnecessary lcase optimization in the | ||||
|     ds-identify script. systemd documents that the values produced by | ||||
|     the systemd-detect-virt command are lower case, and the mapping | ||||
|     table used by the FreeBSD check is also lower-case. | ||||
| 
 | ||||
|     The optimization added two new forked processes, needlessly | ||||
|     causing overhead. | ||||
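For context, a quick Python check (my illustration, not part of the patch) of the point above: systemd-detect-virt already emits lower-case identifiers, so no tr pipeline, and therefore no extra forks, is needed.

    import subprocess

    # systemd-detect-virt prints identifiers such as "kvm", "vmware" or "none",
    # already lower-case, so the value can be used as-is.
    result = subprocess.run(["systemd-detect-virt"], capture_output=True, text=True)
    print(result.stdout.strip() or "none")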
| 
 | ||||
| Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| ---
 | ||||
|  tools/ds-identify | 2 +- | ||||
|  1 file changed, 1 insertion(+), 1 deletion(-) | ||||
| 
 | ||||
| diff --git a/tools/ds-identify b/tools/ds-identify
 | ||||
| index 0e12298f..7b782462 100755
 | ||||
| --- a/tools/ds-identify
 | ||||
| +++ b/tools/ds-identify
 | ||||
| @@ -449,7 +449,7 @@ detect_virt() {
 | ||||
|  read_virt() { | ||||
|      cached "$DI_VIRT" && return 0 | ||||
|      detect_virt | ||||
| -    DI_VIRT="$(echo "${_RET}" | tr '[:upper:]' '[:lower:]')"
 | ||||
| +    DI_VIRT="${_RET}"
 | ||||
|  } | ||||
|   | ||||
|  is_container() { | ||||
| -- 
 | ||||
| 2.27.0 | ||||
| 
 | ||||
| @ -0,0 +1,46 @@ | ||||
| From cf7b45eaa070061615ad26f6754f7d2b39e7de76 Mon Sep 17 00:00:00 2001 | ||||
| From: Eduardo Otubo <otubo@redhat.com> | ||||
| Date: Thu, 17 Feb 2022 15:32:35 +0100 | ||||
| Subject: [PATCH 3/3] Setting highest autoconnect priority for network-scripts | ||||
| 
 | ||||
| RH-Author: Eduardo Otubo <otubo@redhat.com> | ||||
| RH-MergeRequest: 22: Setting highest autoconnect priority for network-scripts | ||||
| RH-Commit: [1/1] 34f1d62f8934a983a124df95b861a1e448681d3b (otubo/cloud-init-src) | ||||
| RH-Bugzilla: 2036060 | ||||
| RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com> | ||||
| RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| 
 | ||||
| Set the highest autoconnect priority for network-scripts, which are | ||||
| loaded by the NetworkManager ifcfg-rh plugin.  Note that keyfile is the | ||||
| only and default plugin on RHEL 9; by setting the highest autoconnect | ||||
| priority for network-scripts, NetworkManager will activate | ||||
| network-scripts rather than keyfile.  Network-scripts path: | ||||
| 
 | ||||
| Since this is a blocking issue, we decided to have this one-liner | ||||
| downstream-only patch so we can move forward and add better | ||||
| NetworkManager support later in the release. | ||||
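A rough sketch of what the added default means for a rendered ifcfg file; the helper below is hypothetical and only mimics the flat KEY=value layout of sysconfig files, it is not cloud-init's actual Renderer.

    # Hypothetical rendering helper, for illustration only.
    defaults = {"ONBOOT": True, "USERCTL": False,
                "BOOTPROTO": "none", "AUTOCONNECT_PRIORITY": 999}

    def to_ifcfg(values):
        # ifcfg files are flat KEY=value lines; booleans become yes/no
        def fmt(value):
            if isinstance(value, bool):
                return "yes" if value else "no"
            return str(value)
        return "\n".join("%s=%s" % (k, fmt(v)) for k, v in values.items())

    print(to_ifcfg(defaults))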
| 
 | ||||
| rhbz: 2036060 | ||||
| x-downstream-only: yes | ||||
| 
 | ||||
| Signed-off-by: Eduardo Otubo <otubo@redhat.com> | ||||
| ---
 | ||||
|  cloudinit/net/sysconfig.py | 2 +- | ||||
|  1 file changed, 1 insertion(+), 1 deletion(-) | ||||
| 
 | ||||
| diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
 | ||||
| index 7ecbe1c3..c7ca7c56 100644
 | ||||
| --- a/cloudinit/net/sysconfig.py
 | ||||
| +++ b/cloudinit/net/sysconfig.py
 | ||||
| @@ -309,7 +309,7 @@ class Renderer(renderer.Renderer):
 | ||||
|   | ||||
|      iface_defaults = { | ||||
|          'rhel': {'ONBOOT': True, 'USERCTL': False, | ||||
| -                 'BOOTPROTO': 'none'},
 | ||||
| +                 'BOOTPROTO': 'none', "AUTOCONNECT_PRIORITY": 999},
 | ||||
|          'suse': {'BOOTPROTO': 'static', 'STARTMODE': 'auto'}, | ||||
|      } | ||||
|   | ||||
| -- 
 | ||||
| 2.27.0 | ||||
| 
 | ||||
										
											
File diff suppressed because it is too large
97 SOURCES/ci-Update-dscheck_VMware-s-rpctool-check-970.patch Normal file
							| @ -0,0 +1,97 @@ | ||||
| From f284c2925b7076b81afb9207161f01718ba70951 Mon Sep 17 00:00:00 2001 | ||||
| From: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| Date: Fri, 14 Jan 2022 16:50:18 +0100 | ||||
| Subject: [PATCH 3/5] Update dscheck_VMware's rpctool check (#970) | ||||
| 
 | ||||
| RH-Author: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| RH-MergeRequest: 17: Datasource for VMware | ||||
| RH-Commit: [3/5] 0739bc18b46b8877fb3825d13f7cda57acda2dde (eesposit/cloud-init-centos-) | ||||
| RH-Bugzilla: 2040090 | ||||
| RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com> | ||||
| RH-Acked-by: Eduardo Otubo <otubo@redhat.com> | ||||
| 
 | ||||
| commit 7781dec3306e9467f216cfcb36b7e10a8b38547a | ||||
| Author: Shreenidhi Shedi <53473811+sshedi@users.noreply.github.com> | ||||
| Date:   Fri Aug 13 00:40:39 2021 +0530 | ||||
| 
 | ||||
|     Update dscheck_VMware's rpctool check (#970) | ||||
| 
 | ||||
|     This patch updates the dscheck_VMware function's use of "vmware-rpctool" | ||||
|     when checking to see if a "guestinfo" property is set. | ||||
|     Because a successful exit code can occur even if there is an empty | ||||
|     string returned, it is possible that the VMware datasource will be | ||||
|     loaded as a false-positive. This patch ensures that in addition to | ||||
|     validating the exit code, the emitted output is also examined to ensure | ||||
|     a non-empty value is returned by rpctool before returning "${DS_FOUND}" | ||||
|     from "dscheck_VMware()". | ||||
| 
 | ||||
| Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| ---
 | ||||
|  tools/ds-identify | 15 +++++++++------ | ||||
|  1 file changed, 9 insertions(+), 6 deletions(-) | ||||
| 
 | ||||
| diff --git a/tools/ds-identify b/tools/ds-identify
 | ||||
| index c01eae3d..0e12298f 100755
 | ||||
| --- a/tools/ds-identify
 | ||||
| +++ b/tools/ds-identify
 | ||||
| @@ -141,6 +141,7 @@ error() {
 | ||||
|      debug 0 "$@" | ||||
|      stderr "$@" | ||||
|  } | ||||
| +
 | ||||
|  warn() { | ||||
|      set -- "WARN:" "$@" | ||||
|      debug 0 "$@" | ||||
| @@ -344,7 +345,6 @@ geom_label_status_as() {
 | ||||
|      return $ret | ||||
|  } | ||||
|   | ||||
| -
 | ||||
|  read_fs_info_freebsd() { | ||||
|      local oifs="$IFS" line="" delim="," | ||||
|      local ret=0 labels="" dev="" label="" ftype="" isodevs="" | ||||
| @@ -404,7 +404,6 @@ cached() {
 | ||||
|      [ -n "$1" ] && _RET="$1" && return || return 1 | ||||
|  } | ||||
|   | ||||
| -
 | ||||
|  detect_virt() { | ||||
|      local virt="${UNAVAILABLE}" r="" out="" | ||||
|      if [ -d /run/systemd ]; then | ||||
| @@ -450,7 +449,7 @@ detect_virt() {
 | ||||
|  read_virt() { | ||||
|      cached "$DI_VIRT" && return 0 | ||||
|      detect_virt | ||||
| -    DI_VIRT=${_RET}
 | ||||
| +    DI_VIRT="$(echo "${_RET}" | tr '[:upper:]' '[:lower:]')"
 | ||||
|  } | ||||
|   | ||||
|  is_container() { | ||||
| @@ -1370,16 +1369,20 @@ vmware_has_rpctool() {
 | ||||
|      command -v vmware-rpctool >/dev/null 2>&1 | ||||
|  } | ||||
|   | ||||
| +vmware_rpctool_guestinfo() {
 | ||||
| +    vmware-rpctool "info-get guestinfo.${1}" 2>/dev/null | grep "[[:alnum:]]"
 | ||||
| +}
 | ||||
| +
 | ||||
|  vmware_rpctool_guestinfo_metadata() { | ||||
| -    vmware-rpctool "info-get guestinfo.metadata"
 | ||||
| +    vmware_rpctool_guestinfo "metadata"
 | ||||
|  } | ||||
|   | ||||
|  vmware_rpctool_guestinfo_userdata() { | ||||
| -    vmware-rpctool "info-get guestinfo.userdata"
 | ||||
| +    vmware_rpctool_guestinfo "userdata"
 | ||||
|  } | ||||
|   | ||||
|  vmware_rpctool_guestinfo_vendordata() { | ||||
| -    vmware-rpctool "info-get guestinfo.vendordata"
 | ||||
| +    vmware_rpctool_guestinfo "vendordata"
 | ||||
|  } | ||||
|   | ||||
|  dscheck_VMware() { | ||||
| -- 
 | ||||
| 2.27.0 | ||||
| 
 | ||||
| @ -0,0 +1,470 @@ | ||||
| From 9ccb738cf078555b68122b1fc745a45fe952c439 Mon Sep 17 00:00:00 2001 | ||||
| From: Anh Vo <anhvo@microsoft.com> | ||||
| Date: Tue, 13 Apr 2021 17:39:39 -0400 | ||||
| Subject: [PATCH 3/7] azure: Removing ability to invoke walinuxagent (#799) | ||||
| 
 | ||||
| RH-Author: Eduardo Otubo <otubo@redhat.com> | ||||
| RH-MergeRequest: 18: Add support for userdata on Azure from IMDS | ||||
| RH-Commit: [3/7] 7431b912e3df7ea384820f45e0230b47ab54643c (otubo/cloud-init-src) | ||||
| RH-Bugzilla: 2042351 | ||||
| RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com> | ||||
| RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| 
 | ||||
| Invoking walinuxagent from within cloud-init is no longer | ||||
| supported/necessary | ||||
| ---
 | ||||
|  cloudinit/sources/DataSourceAzure.py          | 137 ++++-------------- | ||||
|  doc/rtd/topics/datasources/azure.rst          |  62 ++------ | ||||
|  tests/unittests/test_datasource/test_azure.py |  97 ------------- | ||||
|  3 files changed, 35 insertions(+), 261 deletions(-) | ||||
| 
 | ||||
| diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
 | ||||
| index de1452ce..020b7006 100755
 | ||||
| --- a/cloudinit/sources/DataSourceAzure.py
 | ||||
| +++ b/cloudinit/sources/DataSourceAzure.py
 | ||||
| @@ -381,53 +381,6 @@ class DataSourceAzure(sources.DataSource):
 | ||||
|                      util.logexc(LOG, "handling set_hostname failed") | ||||
|          return False | ||||
|   | ||||
| -    @azure_ds_telemetry_reporter
 | ||||
| -    def get_metadata_from_agent(self):
 | ||||
| -        temp_hostname = self.metadata.get('local-hostname')
 | ||||
| -        agent_cmd = self.ds_cfg['agent_command']
 | ||||
| -        LOG.debug("Getting metadata via agent.  hostname=%s cmd=%s",
 | ||||
| -                  temp_hostname, agent_cmd)
 | ||||
| -
 | ||||
| -        self.bounce_network_with_azure_hostname()
 | ||||
| -
 | ||||
| -        try:
 | ||||
| -            invoke_agent(agent_cmd)
 | ||||
| -        except subp.ProcessExecutionError:
 | ||||
| -            # claim the datasource even if the command failed
 | ||||
| -            util.logexc(LOG, "agent command '%s' failed.",
 | ||||
| -                        self.ds_cfg['agent_command'])
 | ||||
| -
 | ||||
| -        ddir = self.ds_cfg['data_dir']
 | ||||
| -
 | ||||
| -        fp_files = []
 | ||||
| -        key_value = None
 | ||||
| -        for pk in self.cfg.get('_pubkeys', []):
 | ||||
| -            if pk.get('value', None):
 | ||||
| -                key_value = pk['value']
 | ||||
| -                LOG.debug("SSH authentication: using value from fabric")
 | ||||
| -            else:
 | ||||
| -                bname = str(pk['fingerprint'] + ".crt")
 | ||||
| -                fp_files += [os.path.join(ddir, bname)]
 | ||||
| -                LOG.debug("SSH authentication: "
 | ||||
| -                          "using fingerprint from fabric")
 | ||||
| -
 | ||||
| -        with events.ReportEventStack(
 | ||||
| -                name="waiting-for-ssh-public-key",
 | ||||
| -                description="wait for agents to retrieve SSH keys",
 | ||||
| -                parent=azure_ds_reporter):
 | ||||
| -            # wait very long for public SSH keys to arrive
 | ||||
| -            # https://bugs.launchpad.net/cloud-init/+bug/1717611
 | ||||
| -            missing = util.log_time(logfunc=LOG.debug,
 | ||||
| -                                    msg="waiting for SSH public key files",
 | ||||
| -                                    func=util.wait_for_files,
 | ||||
| -                                    args=(fp_files, 900))
 | ||||
| -            if len(missing):
 | ||||
| -                LOG.warning("Did not find files, but going on: %s", missing)
 | ||||
| -
 | ||||
| -        metadata = {}
 | ||||
| -        metadata['public-keys'] = key_value or pubkeys_from_crt_files(fp_files)
 | ||||
| -        return metadata
 | ||||
| -
 | ||||
|      def _get_subplatform(self): | ||||
|          """Return the subplatform metadata source details.""" | ||||
|          if self.seed.startswith('/dev'): | ||||
| @@ -1354,35 +1307,32 @@ class DataSourceAzure(sources.DataSource):
 | ||||
|             On failure, returns False. | ||||
|          """ | ||||
|   | ||||
| -        if self.ds_cfg['agent_command'] == AGENT_START_BUILTIN:
 | ||||
| -            self.bounce_network_with_azure_hostname()
 | ||||
| +        self.bounce_network_with_azure_hostname()
 | ||||
|   | ||||
| -            pubkey_info = None
 | ||||
| -            try:
 | ||||
| -                raise KeyError(
 | ||||
| -                    "Not using public SSH keys from IMDS"
 | ||||
| -                )
 | ||||
| -                # pylint:disable=unreachable
 | ||||
| -                public_keys = self.metadata['imds']['compute']['publicKeys']
 | ||||
| -                LOG.debug(
 | ||||
| -                    'Successfully retrieved %s key(s) from IMDS',
 | ||||
| -                    len(public_keys)
 | ||||
| -                    if public_keys is not None
 | ||||
| -                    else 0
 | ||||
| -                )
 | ||||
| -            except KeyError:
 | ||||
| -                LOG.debug(
 | ||||
| -                    'Unable to retrieve SSH keys from IMDS during '
 | ||||
| -                    'negotiation, falling back to OVF'
 | ||||
| -                )
 | ||||
| -                pubkey_info = self.cfg.get('_pubkeys', None)
 | ||||
| -
 | ||||
| -            metadata_func = partial(get_metadata_from_fabric,
 | ||||
| -                                    fallback_lease_file=self.
 | ||||
| -                                    dhclient_lease_file,
 | ||||
| -                                    pubkey_info=pubkey_info)
 | ||||
| -        else:
 | ||||
| -            metadata_func = self.get_metadata_from_agent
 | ||||
| +        pubkey_info = None
 | ||||
| +        try:
 | ||||
| +            raise KeyError(
 | ||||
| +                "Not using public SSH keys from IMDS"
 | ||||
| +            )
 | ||||
| +            # pylint:disable=unreachable
 | ||||
| +            public_keys = self.metadata['imds']['compute']['publicKeys']
 | ||||
| +            LOG.debug(
 | ||||
| +                'Successfully retrieved %s key(s) from IMDS',
 | ||||
| +                len(public_keys)
 | ||||
| +                if public_keys is not None
 | ||||
| +                else 0
 | ||||
| +            )
 | ||||
| +        except KeyError:
 | ||||
| +            LOG.debug(
 | ||||
| +                'Unable to retrieve SSH keys from IMDS during '
 | ||||
| +                'negotiation, falling back to OVF'
 | ||||
| +            )
 | ||||
| +            pubkey_info = self.cfg.get('_pubkeys', None)
 | ||||
| +
 | ||||
| +        metadata_func = partial(get_metadata_from_fabric,
 | ||||
| +                                fallback_lease_file=self.
 | ||||
| +                                dhclient_lease_file,
 | ||||
| +                                pubkey_info=pubkey_info)
 | ||||
|   | ||||
|          LOG.debug("negotiating with fabric via agent command %s", | ||||
|                    self.ds_cfg['agent_command']) | ||||
| @@ -1617,33 +1567,6 @@ def perform_hostname_bounce(hostname, cfg, prev_hostname):
 | ||||
|      return True | ||||
|   | ||||
|   | ||||
| -@azure_ds_telemetry_reporter
 | ||||
| -def crtfile_to_pubkey(fname, data=None):
 | ||||
| -    pipeline = ('openssl x509 -noout -pubkey < "$0" |'
 | ||||
| -                'ssh-keygen -i -m PKCS8 -f /dev/stdin')
 | ||||
| -    (out, _err) = subp.subp(['sh', '-c', pipeline, fname],
 | ||||
| -                            capture=True, data=data)
 | ||||
| -    return out.rstrip()
 | ||||
| -
 | ||||
| -
 | ||||
| -@azure_ds_telemetry_reporter
 | ||||
| -def pubkeys_from_crt_files(flist):
 | ||||
| -    pubkeys = []
 | ||||
| -    errors = []
 | ||||
| -    for fname in flist:
 | ||||
| -        try:
 | ||||
| -            pubkeys.append(crtfile_to_pubkey(fname))
 | ||||
| -        except subp.ProcessExecutionError:
 | ||||
| -            errors.append(fname)
 | ||||
| -
 | ||||
| -    if errors:
 | ||||
| -        report_diagnostic_event(
 | ||||
| -            "failed to convert the crt files to pubkey: %s" % errors,
 | ||||
| -            logger_func=LOG.warning)
 | ||||
| -
 | ||||
| -    return pubkeys
 | ||||
| -
 | ||||
| -
 | ||||
|  @azure_ds_telemetry_reporter | ||||
|  def write_files(datadir, files, dirmode=None): | ||||
|   | ||||
| @@ -1672,16 +1595,6 @@ def write_files(datadir, files, dirmode=None):
 | ||||
|          util.write_file(filename=fname, content=content, mode=0o600) | ||||
|   | ||||
|   | ||||
| -@azure_ds_telemetry_reporter
 | ||||
| -def invoke_agent(cmd):
 | ||||
| -    # this is a function itself to simplify patching it for test
 | ||||
| -    if cmd:
 | ||||
| -        LOG.debug("invoking agent: %s", cmd)
 | ||||
| -        subp.subp(cmd, shell=(not isinstance(cmd, list)))
 | ||||
| -    else:
 | ||||
| -        LOG.debug("not invoking agent")
 | ||||
| -
 | ||||
| -
 | ||||
|  def find_child(node, filter_func): | ||||
|      ret = [] | ||||
|      if not node.hasChildNodes(): | ||||
| diff --git a/doc/rtd/topics/datasources/azure.rst b/doc/rtd/topics/datasources/azure.rst
 | ||||
| index e04c3a33..ad9f2236 100644
 | ||||
| --- a/doc/rtd/topics/datasources/azure.rst
 | ||||
| +++ b/doc/rtd/topics/datasources/azure.rst
 | ||||
| @@ -5,28 +5,6 @@ Azure
 | ||||
|   | ||||
|  This datasource finds metadata and user-data from the Azure cloud platform. | ||||
|   | ||||
| -walinuxagent
 | ||||
| -------------
 | ||||
| -walinuxagent has several functions within images.  For cloud-init
 | ||||
| -specifically, the relevant functionality it performs is to register the
 | ||||
| -instance with the Azure cloud platform at boot so networking will be
 | ||||
| -permitted.  For more information about the other functionality of
 | ||||
| -walinuxagent, see `Azure's documentation
 | ||||
| -<https://github.com/Azure/WALinuxAgent#introduction>`_ for more details.
 | ||||
| -(Note, however, that only one of walinuxagent's provisioning and cloud-init
 | ||||
| -should be used to perform instance customisation.)
 | ||||
| -
 | ||||
| -If you are configuring walinuxagent yourself, you will want to ensure that you
 | ||||
| -have `Provisioning.UseCloudInit
 | ||||
| -<https://github.com/Azure/WALinuxAgent#provisioningusecloudinit>`_ set to
 | ||||
| -``y``.
 | ||||
| -
 | ||||
| -
 | ||||
| -Builtin Agent
 | ||||
| --------------
 | ||||
| -An alternative to using walinuxagent to register to the Azure cloud platform
 | ||||
| -is to use the ``__builtin__`` agent command.  This section contains more
 | ||||
| -background on what that code path does, and how to enable it.
 | ||||
|   | ||||
|  The Azure cloud platform provides initial data to an instance via an attached | ||||
|  CD formatted in UDF.  That CD contains a 'ovf-env.xml' file that provides some | ||||
| @@ -41,16 +19,6 @@ by calling a script in /etc/dhcp/dhclient-exit-hooks or a file in
 | ||||
|  'dhclient_hook' of cloud-init itself. This sub-command will write the client | ||||
|  information in json format to /run/cloud-init/dhclient.hook/<interface>.json. | ||||
|   | ||||
| -In order for cloud-init to leverage this method to find the endpoint, the
 | ||||
| -cloud.cfg file must contain:
 | ||||
| -
 | ||||
| -.. sourcecode:: yaml
 | ||||
| -
 | ||||
| -  datasource:
 | ||||
| -    Azure:
 | ||||
| -      set_hostname: False
 | ||||
| -      agent_command: __builtin__
 | ||||
| -
 | ||||
|  If those files are not available, the fallback is to check the leases file | ||||
|  for the endpoint server (again option 245). | ||||
|   | ||||
| @@ -83,9 +51,6 @@ configuration (in ``/etc/cloud/cloud.cfg`` or ``/etc/cloud/cloud.cfg.d/``).
 | ||||
|   | ||||
|  The settings that may be configured are: | ||||
|   | ||||
| - * **agent_command**: Either __builtin__ (default) or a command to run to getcw
 | ||||
| -   metadata. If __builtin__, get metadata from walinuxagent. Otherwise run the
 | ||||
| -   provided command to obtain metadata.
 | ||||
|   * **apply_network_config**: Boolean set to True to use network configuration | ||||
|     described by Azure's IMDS endpoint instead of fallback network config of | ||||
|     dhcp on eth0. Default is True. For Ubuntu 16.04 or earlier, default is | ||||
| @@ -121,7 +86,6 @@ An example configuration with the default values is provided below:
 | ||||
|   | ||||
|    datasource: | ||||
|      Azure: | ||||
| -      agent_command: __builtin__
 | ||||
|        apply_network_config: true | ||||
|        data_dir: /var/lib/waagent | ||||
|        dhclient_lease_file: /var/lib/dhcp/dhclient.eth0.leases | ||||
| @@ -144,9 +108,7 @@ child of the ``LinuxProvisioningConfigurationSet`` (a sibling to ``UserName``)
 | ||||
|  If both ``UserData`` and ``CustomData`` are provided behavior is undefined on | ||||
|  which will be selected. | ||||
|   | ||||
| -In the example below, user-data provided is 'this is my userdata', and the
 | ||||
| -datasource config provided is ``{"agent_command": ["start", "walinuxagent"]}``.
 | ||||
| -That agent command will take affect as if it were specified in system config.
 | ||||
| +In the example below, user-data provided is 'this is my userdata'
 | ||||
|   | ||||
|  Example: | ||||
|   | ||||
| @@ -184,20 +146,16 @@ The hostname is provided to the instance in the ovf-env.xml file as
 | ||||
|  Whatever value the instance provides in its dhcp request will resolve in the | ||||
|  domain returned in the 'search' request. | ||||
|   | ||||
| -The interesting issue is that a generic image will already have a hostname
 | ||||
| -configured.  The ubuntu cloud images have 'ubuntu' as the hostname of the
 | ||||
| -system, and the initial dhcp request on eth0 is not guaranteed to occur after
 | ||||
| -the datasource code has been run.  So, on first boot, that initial value will
 | ||||
| -be sent in the dhcp request and *that* value will resolve.
 | ||||
| -
 | ||||
| -In order to make the ``HostName`` provided in the ovf-env.xml resolve, a
 | ||||
| -dhcp request must be made with the new value.  Walinuxagent (in its current
 | ||||
| -version) handles this by polling the state of hostname and bouncing ('``ifdown
 | ||||
| -eth0; ifup eth0``' the network interface if it sees that a change has been
 | ||||
| -made.
 | ||||
| +A generic image will already have a hostname configured.  The ubuntu
 | ||||
| +cloud images have 'ubuntu' as the hostname of the system, and the
 | ||||
| +initial dhcp request on eth0 is not guaranteed to occur after the
 | ||||
| +datasource code has been run.  So, on first boot, that initial value
 | ||||
| +will be sent in the dhcp request and *that* value will resolve.
 | ||||
|   | ||||
| -cloud-init handles this by setting the hostname in the DataSource's 'get_data'
 | ||||
| -method via '``hostname $HostName``', and then bouncing the interface.  This
 | ||||
| +In order to make the ``HostName`` provided in the ovf-env.xml resolve,
 | ||||
| +a dhcp request must be made with the new value. cloud-init handles
 | ||||
| +this by setting the hostname in the DataSource's 'get_data' method via
 | ||||
| +'``hostname $HostName``', and then bouncing the interface.  This
 | ||||
|  behavior can be configured or disabled in the datasource config.  See | ||||
|  'Configuration' above. | ||||
|   | ||||
| diff --git a/tests/unittests/test_datasource/test_azure.py b/tests/unittests/test_datasource/test_azure.py
 | ||||
| index dedebeb1..320fa857 100644
 | ||||
| --- a/tests/unittests/test_datasource/test_azure.py
 | ||||
| +++ b/tests/unittests/test_datasource/test_azure.py
 | ||||
| @@ -638,17 +638,10 @@ scbus-1 on xpt0 bus 0
 | ||||
|          def dsdevs(): | ||||
|              return data.get('dsdevs', []) | ||||
|   | ||||
| -        def _invoke_agent(cmd):
 | ||||
| -            data['agent_invoked'] = cmd
 | ||||
| -
 | ||||
|          def _wait_for_files(flist, _maxwait=None, _naplen=None): | ||||
|              data['waited'] = flist | ||||
|              return [] | ||||
|   | ||||
| -        def _pubkeys_from_crt_files(flist):
 | ||||
| -            data['pubkey_files'] = flist
 | ||||
| -            return ["pubkey_from: %s" % f for f in flist]
 | ||||
| -
 | ||||
|          if data.get('ovfcontent') is not None: | ||||
|              populate_dir(os.path.join(self.paths.seed_dir, "azure"), | ||||
|                           {'ovf-env.xml': data['ovfcontent']}) | ||||
| @@ -675,8 +668,6 @@ scbus-1 on xpt0 bus 0
 | ||||
|   | ||||
|          self.apply_patches([ | ||||
|              (dsaz, 'list_possible_azure_ds_devs', dsdevs), | ||||
| -            (dsaz, 'invoke_agent', _invoke_agent),
 | ||||
| -            (dsaz, 'pubkeys_from_crt_files', _pubkeys_from_crt_files),
 | ||||
|              (dsaz, 'perform_hostname_bounce', mock.MagicMock()), | ||||
|              (dsaz, 'get_hostname', mock.MagicMock()), | ||||
|              (dsaz, 'set_hostname', mock.MagicMock()), | ||||
| @@ -765,7 +756,6 @@ scbus-1 on xpt0 bus 0
 | ||||
|              ret = dsrc.get_data() | ||||
|              self.m_is_platform_viable.assert_called_with(dsrc.seed_dir) | ||||
|              self.assertFalse(ret) | ||||
| -            self.assertNotIn('agent_invoked', data)
 | ||||
|              # Assert that for non viable platforms, | ||||
|              # there is no communication with the Azure datasource. | ||||
|              self.assertEqual( | ||||
| @@ -789,7 +779,6 @@ scbus-1 on xpt0 bus 0
 | ||||
|              ret = dsrc.get_data() | ||||
|              self.m_is_platform_viable.assert_called_with(dsrc.seed_dir) | ||||
|              self.assertFalse(ret) | ||||
| -            self.assertNotIn('agent_invoked', data)
 | ||||
|              self.assertEqual( | ||||
|                  1, | ||||
|                  m_report_failure.call_count) | ||||
| @@ -806,7 +795,6 @@ scbus-1 on xpt0 bus 0
 | ||||
|                  1, | ||||
|                  m_crawl_metadata.call_count) | ||||
|              self.assertFalse(ret) | ||||
| -            self.assertNotIn('agent_invoked', data)
 | ||||
|   | ||||
|      def test_crawl_metadata_exception_should_report_failure_with_msg(self): | ||||
|          data = {} | ||||
| @@ -1086,21 +1074,6 @@ scbus-1 on xpt0 bus 0
 | ||||
|          self.assertTrue(os.path.isdir(self.waagent_d)) | ||||
|          self.assertEqual(stat.S_IMODE(os.stat(self.waagent_d).st_mode), 0o700) | ||||
|   | ||||
| -    def test_user_cfg_set_agent_command_plain(self):
 | ||||
| -        # set dscfg in via plaintext
 | ||||
| -        # we must have friendly-to-xml formatted plaintext in yaml_cfg
 | ||||
| -        # not all plaintext is expected to work.
 | ||||
| -        yaml_cfg = "{agent_command: my_command}\n"
 | ||||
| -        cfg = yaml.safe_load(yaml_cfg)
 | ||||
| -        odata = {'HostName': "myhost", 'UserName': "myuser",
 | ||||
| -                 'dscfg': {'text': yaml_cfg, 'encoding': 'plain'}}
 | ||||
| -        data = {'ovfcontent': construct_valid_ovf_env(data=odata)}
 | ||||
| -
 | ||||
| -        dsrc = self._get_ds(data)
 | ||||
| -        ret = self._get_and_setup(dsrc)
 | ||||
| -        self.assertTrue(ret)
 | ||||
| -        self.assertEqual(data['agent_invoked'], cfg['agent_command'])
 | ||||
| -
 | ||||
|      @mock.patch('cloudinit.sources.DataSourceAzure.device_driver', | ||||
|                  return_value=None) | ||||
|      def test_network_config_set_from_imds(self, m_driver): | ||||
| @@ -1205,29 +1178,6 @@ scbus-1 on xpt0 bus 0
 | ||||
|          dsrc.get_data() | ||||
|          self.assertEqual('eastus2', dsrc.region) | ||||
|   | ||||
| -    def test_user_cfg_set_agent_command(self):
 | ||||
| -        # set dscfg in via base64 encoded yaml
 | ||||
| -        cfg = {'agent_command': "my_command"}
 | ||||
| -        odata = {'HostName': "myhost", 'UserName': "myuser",
 | ||||
| -                 'dscfg': {'text': b64e(yaml.dump(cfg)),
 | ||||
| -                           'encoding': 'base64'}}
 | ||||
| -        data = {'ovfcontent': construct_valid_ovf_env(data=odata)}
 | ||||
| -
 | ||||
| -        dsrc = self._get_ds(data)
 | ||||
| -        ret = self._get_and_setup(dsrc)
 | ||||
| -        self.assertTrue(ret)
 | ||||
| -        self.assertEqual(data['agent_invoked'], cfg['agent_command'])
 | ||||
| -
 | ||||
| -    def test_sys_cfg_set_agent_command(self):
 | ||||
| -        sys_cfg = {'datasource': {'Azure': {'agent_command': '_COMMAND'}}}
 | ||||
| -        data = {'ovfcontent': construct_valid_ovf_env(data={}),
 | ||||
| -                'sys_cfg': sys_cfg}
 | ||||
| -
 | ||||
| -        dsrc = self._get_ds(data)
 | ||||
| -        ret = self._get_and_setup(dsrc)
 | ||||
| -        self.assertTrue(ret)
 | ||||
| -        self.assertEqual(data['agent_invoked'], '_COMMAND')
 | ||||
| -
 | ||||
|      def test_sys_cfg_set_never_destroy_ntfs(self): | ||||
|          sys_cfg = {'datasource': {'Azure': { | ||||
|              'never_destroy_ntfs': 'user-supplied-value'}}} | ||||
| @@ -1311,51 +1261,6 @@ scbus-1 on xpt0 bus 0
 | ||||
|          self.assertTrue(ret) | ||||
|          self.assertEqual(dsrc.userdata_raw, mydata.encode('utf-8')) | ||||
|   | ||||
| -    def test_cfg_has_pubkeys_fingerprint(self):
 | ||||
| -        odata = {'HostName': "myhost", 'UserName': "myuser"}
 | ||||
| -        mypklist = [{'fingerprint': 'fp1', 'path': 'path1', 'value': ''}]
 | ||||
| -        pubkeys = [(x['fingerprint'], x['path'], x['value']) for x in mypklist]
 | ||||
| -        data = {'ovfcontent': construct_valid_ovf_env(data=odata,
 | ||||
| -                                                      pubkeys=pubkeys)}
 | ||||
| -
 | ||||
| -        dsrc = self._get_ds(data, agent_command=['not', '__builtin__'])
 | ||||
| -        ret = self._get_and_setup(dsrc)
 | ||||
| -        self.assertTrue(ret)
 | ||||
| -        for mypk in mypklist:
 | ||||
| -            self.assertIn(mypk, dsrc.cfg['_pubkeys'])
 | ||||
| -            self.assertIn('pubkey_from', dsrc.metadata['public-keys'][-1])
 | ||||
| -
 | ||||
| -    def test_cfg_has_pubkeys_value(self):
 | ||||
| -        # make sure that provided key is used over fingerprint
 | ||||
| -        odata = {'HostName': "myhost", 'UserName': "myuser"}
 | ||||
| -        mypklist = [{'fingerprint': 'fp1', 'path': 'path1', 'value': 'value1'}]
 | ||||
| -        pubkeys = [(x['fingerprint'], x['path'], x['value']) for x in mypklist]
 | ||||
| -        data = {'ovfcontent': construct_valid_ovf_env(data=odata,
 | ||||
| -                                                      pubkeys=pubkeys)}
 | ||||
| -
 | ||||
| -        dsrc = self._get_ds(data, agent_command=['not', '__builtin__'])
 | ||||
| -        ret = self._get_and_setup(dsrc)
 | ||||
| -        self.assertTrue(ret)
 | ||||
| -
 | ||||
| -        for mypk in mypklist:
 | ||||
| -            self.assertIn(mypk, dsrc.cfg['_pubkeys'])
 | ||||
| -            self.assertIn(mypk['value'], dsrc.metadata['public-keys'])
 | ||||
| -
 | ||||
| -    def test_cfg_has_no_fingerprint_has_value(self):
 | ||||
| -        # test value is used when fingerprint not provided
 | ||||
| -        odata = {'HostName': "myhost", 'UserName': "myuser"}
 | ||||
| -        mypklist = [{'fingerprint': None, 'path': 'path1', 'value': 'value1'}]
 | ||||
| -        pubkeys = [(x['fingerprint'], x['path'], x['value']) for x in mypklist]
 | ||||
| -        data = {'ovfcontent': construct_valid_ovf_env(data=odata,
 | ||||
| -                                                      pubkeys=pubkeys)}
 | ||||
| -
 | ||||
| -        dsrc = self._get_ds(data, agent_command=['not', '__builtin__'])
 | ||||
| -        ret = self._get_and_setup(dsrc)
 | ||||
| -        self.assertTrue(ret)
 | ||||
| -
 | ||||
| -        for mypk in mypklist:
 | ||||
| -            self.assertIn(mypk['value'], dsrc.metadata['public-keys'])
 | ||||
| -
 | ||||
|      def test_default_ephemeral_configs_ephemeral_exists(self): | ||||
|          # make sure the ephemeral configs are correct if disk present | ||||
|          odata = {} | ||||
| @@ -1919,8 +1824,6 @@ class TestAzureBounce(CiTestCase):
 | ||||
|      with_logs = True | ||||
|   | ||||
|      def mock_out_azure_moving_parts(self): | ||||
| -        self.patches.enter_context(
 | ||||
| -            mock.patch.object(dsaz, 'invoke_agent'))
 | ||||
|          self.patches.enter_context( | ||||
|              mock.patch.object(dsaz.util, 'wait_for_files')) | ||||
|          self.patches.enter_context( | ||||
| -- 
 | ||||
| 2.27.0 | ||||
| 
 | ||||
| @ -0,0 +1,97 @@ | ||||
| From 2a6b3b5afb20a7856ad81b3ec3da621571c3bec3 Mon Sep 17 00:00:00 2001 | ||||
| From: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| Date: Wed, 20 Oct 2021 10:41:36 +0200 | ||||
| Subject: [PATCH] cc_ssh.py: fix private key group owner and permissions | ||||
|  (#1070) | ||||
| 
 | ||||
| RH-Author: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| RH-MergeRequest: 12: cc_ssh.py: fix private key group owner and permissions (#1070) | ||||
| RH-Commit: [1/1] b2dc9cfd18ac0a8e1e22a37b1585d22dbde11536 (eesposit/cloud-init-centos-) | ||||
| RH-Bugzilla: 2015974 | ||||
| RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com> | ||||
| RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com> | ||||
| 
 | ||||
| commit ee296ced9c0a61b1484d850b807c601bcd670ec1 | ||||
| Author: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| Date:   Tue Oct 19 21:32:10 2021 +0200 | ||||
| 
 | ||||
|     cc_ssh.py: fix private key group owner and permissions (#1070) | ||||
| 
 | ||||
|     When default host keys are created by sshd-keygen (/etc/ssh/ssh_host_*_key) | ||||
|     in RHEL/CentOS/Fedora, the sshd-keygen script performs the following: | ||||
| 
 | ||||
|     // create new keys | ||||
|     if ! $KEYGEN -q -t $KEYTYPE -f $KEY -C '' -N '' >&/dev/null; then | ||||
|             exit 1 | ||||
|     fi | ||||
| 
 | ||||
|     // sanitize permissions | ||||
|     /usr/bin/chgrp ssh_keys $KEY | ||||
|     /usr/bin/chmod 640 $KEY | ||||
|     /usr/bin/chmod 644 $KEY.pub | ||||
|     Note that the group ssh_keys exists only in RHEL/CentOS/Fedora. | ||||
| 
 | ||||
|     Now that we disable sshd-keygen to allow only cloud-init to create | ||||
|     them, we miss the "sanitize permissions" part, where we set the group | ||||
|     owner as ssh_keys and the private key mode to 640. | ||||
| 
 | ||||
|     According to https://bugzilla.redhat.com/show_bug.cgi?id=2013644#c8, failing | ||||
|     to set group ownership and permissions like openssh does makes the RHEL openscap | ||||
|     tool generate an error. | ||||
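A condensed Python sketch of the "sanitize permissions" step this patch restores (the actual change is in cc_ssh.py below; the ssh_keys group only exists on RHEL/CentOS/Fedora):

    import grp
    import os

    def sanitize_host_key(keyfile):
        try:
            gid = grp.getgrnam("ssh_keys").gr_gid
        except KeyError:
            return  # group absent on non-RHEL systems; leave defaults alone
        os.chown(keyfile, -1, gid)           # keep owner, set group to ssh_keys
        os.chmod(keyfile, 0o640)             # private key readable by root:ssh_keys
        os.chmod(keyfile + ".pub", 0o644)    # public key stays world-readable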
| 
 | ||||
|     Signed-off-by: Emanuele Giuseppe Esposito eesposit@redhat.com | ||||
| 
 | ||||
|     RHBZ: 2013644 | ||||
| 
 | ||||
| Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| ---
 | ||||
|  cloudinit/config/cc_ssh.py |  7 +++++++ | ||||
|  cloudinit/util.py          | 14 ++++++++++++++ | ||||
|  2 files changed, 21 insertions(+) | ||||
| 
 | ||||
| diff --git a/cloudinit/config/cc_ssh.py b/cloudinit/config/cc_ssh.py
 | ||||
| index 05a16dbc..4e986c55 100755
 | ||||
| --- a/cloudinit/config/cc_ssh.py
 | ||||
| +++ b/cloudinit/config/cc_ssh.py
 | ||||
| @@ -240,6 +240,13 @@ def handle(_name, cfg, cloud, log, _args):
 | ||||
|                  try: | ||||
|                      out, err = subp.subp(cmd, capture=True, env=lang_c) | ||||
|                      sys.stdout.write(util.decode_binary(out)) | ||||
| +
 | ||||
| +                    gid = util.get_group_id("ssh_keys")
 | ||||
| +                    if gid != -1:
 | ||||
| +                        # perform same "sanitize permissions" as sshd-keygen
 | ||||
| +                        os.chown(keyfile, -1, gid)
 | ||||
| +                        os.chmod(keyfile, 0o640)
 | ||||
| +                        os.chmod(keyfile + ".pub", 0o644)
 | ||||
|                  except subp.ProcessExecutionError as e: | ||||
|                      err = util.decode_binary(e.stderr).lower() | ||||
|                      if (e.exit_code == 1 and | ||||
| diff --git a/cloudinit/util.py b/cloudinit/util.py
 | ||||
| index 343976ad..fe37ae89 100644
 | ||||
| --- a/cloudinit/util.py
 | ||||
| +++ b/cloudinit/util.py
 | ||||
| @@ -1831,6 +1831,20 @@ def chmod(path, mode):
 | ||||
|              os.chmod(path, real_mode) | ||||
|   | ||||
|   | ||||
| +def get_group_id(grp_name: str) -> int:
 | ||||
| +    """
 | ||||
| +    Returns the group id of a group name, or -1 if no group exists
 | ||||
| +
 | ||||
| +    @param grp_name: the name of the group
 | ||||
| +    """
 | ||||
| +    gid = -1
 | ||||
| +    try:
 | ||||
| +        gid = grp.getgrnam(grp_name).gr_gid
 | ||||
| +    except KeyError:
 | ||||
| +        LOG.debug("Group %s is not a valid group name", grp_name)
 | ||||
| +    return gid
 | ||||
| +
 | ||||
| +
 | ||||
|  def get_permissions(path: str) -> int: | ||||
|      """ | ||||
|      Returns the octal permissions of the file/folder pointed by the path, | ||||
| -- 
 | ||||
| 2.27.0 | ||||
| 
 | ||||
| @ -0,0 +1,87 @@ | ||||
| From e0eca40388080dabf6598c0d9653ea50ae10c984 Mon Sep 17 00:00:00 2001 | ||||
| From: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| Date: Tue, 7 Dec 2021 10:04:43 +0100 | ||||
| Subject: [PATCH] cloudinit/net: handle two different routes for the same ip | ||||
|  (#1124) | ||||
| 
 | ||||
| RH-Author: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| RH-MergeRequest: 15: cloudinit/net: handle two different routes for the same ip (#1124) | ||||
| RH-Commit: [1/1] b623a76ccd642e22e8d9c4aebc26f0b0cec8118b (eesposit/cloud-init-centos-) | ||||
| RH-Bugzilla: 2028031 | ||||
| RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com> | ||||
| RH-Acked-by: Eduardo Otubo <otubo@redhat.com> | ||||
| 
 | ||||
| commit 0e25076b34fa995161b83996e866c0974cee431f | ||||
| Author: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| Date:   Mon Dec 6 18:34:26 2021 +0100 | ||||
| 
 | ||||
|     cloudinit/net: handle two different routes for the same ip (#1124) | ||||
| 
 | ||||
|     If we set a dhcp server side like this: | ||||
|     $ cat /var/tmp/cloud-init/cloud-init-dhcp-f0rie5tm/dhcp.leases | ||||
|     lease { | ||||
|     ... | ||||
|     option classless-static-routes 31.169.254.169.254 0.0.0.0,31.169.254.169.254 | ||||
|         10.112.143.127,22.10.112.140 0.0.0.0,0 10.112.140.1; | ||||
|     ... | ||||
|     } | ||||
|     cloud-init fails to configure the routes via 'ip route add' because there are | ||||
|     two different routes for 169.254.169.254: | ||||
| 
 | ||||
|     $ ip -4 route add 192.168.1.1/32 via 0.0.0.0 dev eth0 | ||||
|     $ ip -4 route add 192.168.1.1/32 via 10.112.140.248 dev eth0 | ||||
| 
 | ||||
|     But NetworkManager can handle such a scenario successfully as it uses "ip route append". | ||||
|     So change cloud-init to also use "ip route append" to fix the issue: | ||||
| 
 | ||||
|     $ ip -4 route append 192.168.1.1/32 via 0.0.0.0 dev eth0 | ||||
|     $ ip -4 route append 192.168.1.1/32 via 10.112.140.248 dev eth0 | ||||
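Illustrative only: the same two-route case driven from Python with "append", which tolerates a second route to the same prefix where "add" would fail:

    import subprocess

    routes = [("169.254.169.254/32", "0.0.0.0"),
              ("169.254.169.254/32", "10.112.140.248")]
    for prefix, via in routes:
        # "append" succeeds even when a route for this prefix already exists
        subprocess.run(["ip", "-4", "route", "append", prefix,
                        "via", via, "dev", "eth0"], check=True)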
| 
 | ||||
|     Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| 
 | ||||
|     RHBZ: #2003231 | ||||
| 
 | ||||
| Conflicts: | ||||
|     cloudinit/net/tests/test_init.py: a mock call in | ||||
|     test_ephemeral_ipv4_network_with_rfc3442_static_routes is not | ||||
|     present downstream. | ||||
| 
 | ||||
| Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| ---
 | ||||
|  cloudinit/net/__init__.py        | 2 +- | ||||
|  cloudinit/net/tests/test_init.py | 4 ++-- | ||||
|  2 files changed, 3 insertions(+), 3 deletions(-) | ||||
| 
 | ||||
| diff --git a/cloudinit/net/__init__.py b/cloudinit/net/__init__.py
 | ||||
| index de65e7af..4bdc1bda 100644
 | ||||
| --- a/cloudinit/net/__init__.py
 | ||||
| +++ b/cloudinit/net/__init__.py
 | ||||
| @@ -1076,7 +1076,7 @@ class EphemeralIPv4Network(object):
 | ||||
|              if gateway != "0.0.0.0/0": | ||||
|                  via_arg = ['via', gateway] | ||||
|              subp.subp( | ||||
| -                ['ip', '-4', 'route', 'add', net_address] + via_arg +
 | ||||
| +                ['ip', '-4', 'route', 'append', net_address] + via_arg +
 | ||||
|                  ['dev', self.interface], capture=True) | ||||
|              self.cleanup_cmds.insert( | ||||
|                  0, ['ip', '-4', 'route', 'del', net_address] + via_arg + | ||||
| diff --git a/cloudinit/net/tests/test_init.py b/cloudinit/net/tests/test_init.py
 | ||||
| index 0535387a..6754df8d 100644
 | ||||
| --- a/cloudinit/net/tests/test_init.py
 | ||||
| +++ b/cloudinit/net/tests/test_init.py
 | ||||
| @@ -715,10 +715,10 @@ class TestEphemeralIPV4Network(CiTestCase):
 | ||||
|                  ['ip', '-family', 'inet', 'link', 'set', 'dev', 'eth0', 'up'], | ||||
|                  capture=True), | ||||
|              mock.call( | ||||
| -                ['ip', '-4', 'route', 'add', '169.254.169.254/32',
 | ||||
| +                ['ip', '-4', 'route', 'append', '169.254.169.254/32',
 | ||||
|                   'via', '192.168.2.1', 'dev', 'eth0'], capture=True), | ||||
|              mock.call( | ||||
| -                ['ip', '-4', 'route', 'add', '0.0.0.0/0',
 | ||||
| +                ['ip', '-4', 'route', 'append', '0.0.0.0/0',
 | ||||
|                   'via', '192.168.2.1', 'dev', 'eth0'], capture=True)] | ||||
|          expected_teardown_calls = [ | ||||
|              mock.call( | ||||
| -- 
 | ||||
| 2.27.0 | ||||
| 
 | ||||
| @ -0,0 +1,174 @@ | ||||
| From 83f3d481c5f0d962bff5bacfd2c323529754869e Mon Sep 17 00:00:00 2001 | ||||
| From: Amy Chen <xiachen@redhat.com> | ||||
| Date: Thu, 2 Dec 2021 18:11:08 +0800 | ||||
| Subject: [PATCH] fix error on upgrade caused by new vendordata2 attributes | ||||
| 
 | ||||
| RH-Author: xiachen <None> | ||||
| RH-MergeRequest: 14: fix error on upgrade caused by new vendordata2 attributes | ||||
| RH-Commit: [1/1] ef14db399cd1fe6e4ba847d98acee15fef8021de (xiachen/cloud-init-centos) | ||||
| RH-Bugzilla: 2028381 | ||||
| RH-Acked-by: Eduardo Otubo <otubo@redhat.com> | ||||
| RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com> | ||||
| RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| 
 | ||||
| commit d132356cc361abef2d90d4073438f3ab759d5964 | ||||
| Author: James Falcon <TheRealFalcon@users.noreply.github.com> | ||||
| Date:   Mon Apr 19 11:31:28 2021 -0500 | ||||
| 
 | ||||
|     fix error on upgrade caused by new vendordata2 attributes (#869) | ||||
| 
 | ||||
|     In #777, we added 'vendordata2' and 'vendordata2_raw' attributes to | ||||
|     the DataSource class, but didn't use the upgrade framework to deal | ||||
|     with an unpickle after upgrade. This commit adds the necessary | ||||
|     upgrade code. | ||||
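For orientation, a sketch (not the actual class) of what the upgrade hook boils down to: backfilling attributes that are missing when an object pickled by an older cloud-init is loaded.

    class DataSourceLike:
        # Called by the pickle-upgrade framework after loading an old cache.
        def _unpickle(self, ci_pkl_version: int) -> None:
            for attr in ("vendordata2", "vendordata2_raw"):
                if not hasattr(self, attr):
                    setattr(self, attr, None)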
| 
 | ||||
|     Additionally, added a smaller-scope upgrade test to our integration | ||||
|     tests that will be run on every CI run so we catch these issues | ||||
|     immediately in the future. | ||||
| 
 | ||||
|     LP: #1922739 | ||||
| 
 | ||||
| Signed-off-by: Amy Chen <xiachen@redhat.com> | ||||
| ---
 | ||||
|  cloudinit/sources/__init__.py           | 12 +++++++++++- | ||||
|  cloudinit/tests/test_upgrade.py         |  4 ++++ | ||||
|  tests/integration_tests/clouds.py       |  4 ++-- | ||||
|  tests/integration_tests/test_upgrade.py | 25 ++++++++++++++++++++++++- | ||||
|  4 files changed, 41 insertions(+), 4 deletions(-) | ||||
| 
 | ||||
| diff --git a/cloudinit/sources/__init__.py b/cloudinit/sources/__init__.py
 | ||||
| index 1ad1880d..7d74f8d9 100644
 | ||||
| --- a/cloudinit/sources/__init__.py
 | ||||
| +++ b/cloudinit/sources/__init__.py
 | ||||
| @@ -24,6 +24,7 @@ from cloudinit import util
 | ||||
|  from cloudinit.atomic_helper import write_json | ||||
|  from cloudinit.event import EventType | ||||
|  from cloudinit.filters import launch_index | ||||
| +from cloudinit.persistence import CloudInitPickleMixin
 | ||||
|  from cloudinit.reporting import events | ||||
|   | ||||
|  DSMODE_DISABLED = "disabled" | ||||
| @@ -134,7 +135,7 @@ URLParams = namedtuple(
 | ||||
|      'URLParms', ['max_wait_seconds', 'timeout_seconds', 'num_retries']) | ||||
|   | ||||
|   | ||||
| -class DataSource(metaclass=abc.ABCMeta):
 | ||||
| +class DataSource(CloudInitPickleMixin, metaclass=abc.ABCMeta):
 | ||||
|   | ||||
|      dsmode = DSMODE_NETWORK | ||||
|      default_locale = 'en_US.UTF-8' | ||||
| @@ -196,6 +197,8 @@ class DataSource(metaclass=abc.ABCMeta):
 | ||||
|      # non-root users | ||||
|      sensitive_metadata_keys = ('merged_cfg', 'security-credentials',) | ||||
|   | ||||
| +    _ci_pkl_version = 1
 | ||||
| +
 | ||||
|      def __init__(self, sys_cfg, distro, paths, ud_proc=None): | ||||
|          self.sys_cfg = sys_cfg | ||||
|          self.distro = distro | ||||
| @@ -218,6 +221,13 @@ class DataSource(metaclass=abc.ABCMeta):
 | ||||
|          else: | ||||
|              self.ud_proc = ud_proc | ||||
|   | ||||
| +    def _unpickle(self, ci_pkl_version: int) -> None:
 | ||||
| +        """Perform deserialization fixes for Paths."""
 | ||||
| +        if not hasattr(self, 'vendordata2'):
 | ||||
| +            self.vendordata2 = None
 | ||||
| +        if not hasattr(self, 'vendordata2_raw'):
 | ||||
| +            self.vendordata2_raw = None
 | ||||
| +
 | ||||
|      def __str__(self): | ||||
|          return type_utils.obj_name(self) | ||||
|   | ||||
| diff --git a/cloudinit/tests/test_upgrade.py b/cloudinit/tests/test_upgrade.py
 | ||||
| index f79a2536..71cea616 100644
 | ||||
| --- a/cloudinit/tests/test_upgrade.py
 | ||||
| +++ b/cloudinit/tests/test_upgrade.py
 | ||||
| @@ -43,3 +43,7 @@ class TestUpgrade:
 | ||||
|      def test_blacklist_drivers_set_on_networking(self, previous_obj_pkl): | ||||
|          """We always expect Networking.blacklist_drivers to be initialised.""" | ||||
|          assert previous_obj_pkl.distro.networking.blacklist_drivers is None | ||||
| +
 | ||||
| +    def test_vendordata_exists(self, previous_obj_pkl):
 | ||||
| +        assert previous_obj_pkl.vendordata2 is None
 | ||||
| +        assert previous_obj_pkl.vendordata2_raw is None
 | ||||
| \ No newline at end of file | ||||
| diff --git a/tests/integration_tests/clouds.py b/tests/integration_tests/clouds.py
 | ||||
| index 9527a413..1d0b9d83 100644
 | ||||
| --- a/tests/integration_tests/clouds.py
 | ||||
| +++ b/tests/integration_tests/clouds.py
 | ||||
| @@ -100,14 +100,14 @@ class IntegrationCloud(ABC):
 | ||||
|              # Even if we're using the default key, it may still have a | ||||
|              # different name in the clouds, so we need to set it separately. | ||||
|              self.cloud_instance.key_pair.name = settings.KEYPAIR_NAME | ||||
| -        self._released_image_id = self._get_initial_image()
 | ||||
| +        self.released_image_id = self._get_initial_image()
 | ||||
|          self.snapshot_id = None | ||||
|   | ||||
|      @property | ||||
|      def image_id(self): | ||||
|          if self.snapshot_id: | ||||
|              return self.snapshot_id | ||||
| -        return self._released_image_id
 | ||||
| +        return self.released_image_id
 | ||||
|   | ||||
|      def emit_settings_to_log(self) -> None: | ||||
|          log.info( | ||||
| diff --git a/tests/integration_tests/test_upgrade.py b/tests/integration_tests/test_upgrade.py
 | ||||
| index c20cb3c1..48e0691b 100644
 | ||||
| --- a/tests/integration_tests/test_upgrade.py
 | ||||
| +++ b/tests/integration_tests/test_upgrade.py
 | ||||
| @@ -1,4 +1,5 @@
 | ||||
|  import logging | ||||
| +import os
 | ||||
|  import pytest | ||||
|  import time | ||||
|  from pathlib import Path | ||||
| @@ -8,6 +9,8 @@ from tests.integration_tests.conftest import (
 | ||||
|      get_validated_source, | ||||
|      session_start_time, | ||||
|  ) | ||||
| +from tests.integration_tests.instances import CloudInitSource
 | ||||
| +
 | ||||
|   | ||||
|  log = logging.getLogger('integration_testing') | ||||
|   | ||||
| @@ -63,7 +66,7 @@ def test_upgrade(session_cloud: IntegrationCloud):
 | ||||
|          return  # type checking doesn't understand that skip raises | ||||
|   | ||||
|      launch_kwargs = { | ||||
| -        'image_id': session_cloud._get_initial_image(),
 | ||||
| +        'image_id': session_cloud.released_image_id,
 | ||||
|      } | ||||
|   | ||||
|      image = ImageSpecification.from_os_image() | ||||
| @@ -93,6 +96,26 @@ def test_upgrade(session_cloud: IntegrationCloud):
 | ||||
|          instance.install_new_cloud_init(source, take_snapshot=False) | ||||
|          instance.execute('hostname something-else') | ||||
|          _restart(instance) | ||||
| +        assert instance.execute('cloud-init status --wait --long').ok
 | ||||
|          _output_to_compare(instance, after_path, netcfg_path) | ||||
|   | ||||
|      log.info('Wrote upgrade test logs to %s and %s', before_path, after_path) | ||||
| +
 | ||||
| +
 | ||||
| +@pytest.mark.ci
 | ||||
| +@pytest.mark.ubuntu
 | ||||
| +def test_upgrade_package(session_cloud: IntegrationCloud):
 | ||||
| +    if get_validated_source(session_cloud) != CloudInitSource.DEB_PACKAGE:
 | ||||
| +        not_run_message = 'Test only supports upgrading to build deb'
 | ||||
| +        if os.environ.get('TRAVIS'):
 | ||||
| +            # If this isn't running on CI, we should know
 | ||||
| +            pytest.fail(not_run_message)
 | ||||
| +        else:
 | ||||
| +            pytest.skip(not_run_message)
 | ||||
| +
 | ||||
| +    launch_kwargs = {'image_id': session_cloud.released_image_id}
 | ||||
| +
 | ||||
| +    with session_cloud.launch(launch_kwargs=launch_kwargs) as instance:
 | ||||
| +        instance.install_deb()
 | ||||
| +        instance.restart()
 | ||||
| +        assert instance.execute('cloud-init status --wait --long').ok
 | ||||
| -- 
 | ||||
| 2.27.0 | ||||
| 
 | ||||
| @ -0,0 +1,45 @@ | ||||
| From ec9c280ad24900ad078a0f371fa8b4f5f407ee90 Mon Sep 17 00:00:00 2001 | ||||
| From: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| Date: Tue, 26 Oct 2021 21:52:45 +0200 | ||||
| Subject: [PATCH] remove unnecessary EOF string in | ||||
|  disable-sshd-keygen-if-cloud-init-active.conf (#1075) | ||||
| 
 | ||||
| RH-Author: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| RH-MergeRequest: 13: remove unnecessary EOF string in disable-sshd-keygen-if-cloud-init-active.conf (#1075) | ||||
| RH-Commit: [1/1] 4c01a4bb86a73df3212bb4cf0388b2df707eddc4 (eesposit/cloud-init-centos-) | ||||
| RH-Bugzilla: 2016305 | ||||
| RH-Acked-by: Eduardo Otubo <otubo@redhat.com> | ||||
| RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com> | ||||
| 
 | ||||
| commit a8380a125d40ff0ae88f2ba25a518346f2063a1a | ||||
| Author: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| Date:   Tue Oct 26 16:15:47 2021 +0200 | ||||
| 
 | ||||
|     remove unnecessary EOF string in disable-sshd-keygen-if-cloud-init-active.conf (#1075) | ||||
| 
 | ||||
|     Running 'systemd-analyze verify cloud-init-local.service' | ||||
|     triggers the following warning: | ||||
| 
 | ||||
|     disable-sshd-keygen-if-cloud-init-active.conf:8: Missing '=', ignoring line. | ||||
| 
 | ||||
|     The string "EOF" is probably a typo, so remove it. | ||||
| 
 | ||||
|     Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| 
 | ||||
| Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| ---
 | ||||
|  systemd/disable-sshd-keygen-if-cloud-init-active.conf | 1 - | ||||
|  1 file changed, 1 deletion(-) | ||||
| 
 | ||||
| diff --git a/systemd/disable-sshd-keygen-if-cloud-init-active.conf b/systemd/disable-sshd-keygen-if-cloud-init-active.conf
 | ||||
| index 71e35876..1a5d7a5a 100644
 | ||||
| --- a/systemd/disable-sshd-keygen-if-cloud-init-active.conf
 | ||||
| +++ b/systemd/disable-sshd-keygen-if-cloud-init-active.conf
 | ||||
| @@ -5,4 +5,3 @@
 | ||||
|  # | ||||
|  [Unit] | ||||
|  ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target | ||||
| -EOF
 | ||||
| -- 
 | ||||
| 2.27.0 | ||||
| 
 | ||||
| @ -0,0 +1,65 @@ | ||||
| From 5069e58c009bc8c689f00de35391ae6d860197a4 Mon Sep 17 00:00:00 2001 | ||||
| From: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| Date: Thu, 20 May 2021 08:53:55 +0200 | ||||
| Subject: [PATCH 1/2] rhel/cloud.cfg: remove ssh_genkeytypes in settings.py and | ||||
|  set in cloud.cfg | ||||
| 
 | ||||
| RH-Author: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| RH-MergeRequest: 16: rhel/cloud.cfg: remove ssh_genkeytypes in settings.py and set in cloud.cfg | ||||
| RH-Commit: [1/1] 67a4904f4d7918be4c9b3c3dbf340b3ecb9e8786 | ||||
| RH-Bugzilla: 1970909 | ||||
| RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com> | ||||
| RH-Acked-by: Eduardo Otubo <otubo@redhat.com> | ||||
| RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com> | ||||
| 
 | ||||
| Currently genkeytypes in cloud.cfg is set to None, so together with | ||||
| ssh_deletekeys=1, cloud-init on first boot will just delete the existing | ||||
| keys and not generate new ones. | ||||
| 
 | ||||
| Just removing that property in cloud.cfg is not enough, because | ||||
| settings.py provides another empty default value that will be used | ||||
| instead, resulting in no keys being generated even when the property is not defined. | ||||
| 
 | ||||
| Removing genkeytypes also in settings.py will default to GENERATE_KEY_NAMES, | ||||
| but since we want only 'rsa', 'ecdsa' and 'ed25519', add back genkeytypes in | ||||
| cloud.cfg with the above defaults. | ||||
| 
 | ||||
| Also remove ssh_deletekeys in settings.py as we always need it set | ||||
| to 1 (and it also defaults to 1). | ||||
| 
 | ||||
| Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com> | ||||
| ---
 | ||||
|  cloudinit/settings.py | 2 -- | ||||
|  rhel/cloud.cfg        | 2 +- | ||||
|  2 files changed, 1 insertion(+), 3 deletions(-) | ||||
| 
 | ||||
| diff --git a/cloudinit/settings.py b/cloudinit/settings.py
 | ||||
| index 43a1490c..2acf2615 100644
 | ||||
| --- a/cloudinit/settings.py
 | ||||
| +++ b/cloudinit/settings.py
 | ||||
| @@ -49,8 +49,6 @@ CFG_BUILTIN = {
 | ||||
|      'def_log_file_mode': 0o600, | ||||
|      'log_cfgs': [], | ||||
|      'mount_default_fields': [None, None, 'auto', 'defaults,nofail', '0', '2'], | ||||
| -    'ssh_deletekeys': False,
 | ||||
| -    'ssh_genkeytypes': [],
 | ||||
|      'syslog_fix_perms': [], | ||||
|      'system_info': { | ||||
|          'paths': { | ||||
| diff --git a/rhel/cloud.cfg b/rhel/cloud.cfg
 | ||||
| index 9ecba215..cbee197a 100644
 | ||||
| --- a/rhel/cloud.cfg
 | ||||
| +++ b/rhel/cloud.cfg
 | ||||
| @@ -7,7 +7,7 @@ ssh_pwauth:   0
 | ||||
|  mount_default_fields: [~, ~, 'auto', 'defaults,nofail,x-systemd.requires=cloud-init.service', '0', '2'] | ||||
|  resize_rootfs_tmp: /dev | ||||
|  ssh_deletekeys:   1 | ||||
| -ssh_genkeytypes:  ~
 | ||||
| +ssh_genkeytypes:  ['rsa', 'ecdsa', 'ed25519']
 | ||||
|  syslog_fix_perms: ~ | ||||
|  disable_vmware_customization: false | ||||
|   | ||||
| -- 
 | ||||
| 2.27.0 | ||||
| 
 | ||||
| @ -0,0 +1,651 @@ | ||||
| From 857009723f14e9ad2f5f4c8614d72982b00ec27d Mon Sep 17 00:00:00 2001 | ||||
| From: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| Date: Mon, 12 Jul 2021 21:47:37 +0200 | ||||
| Subject: [PATCH 2/2] ssh-util: allow cloudinit to merge all ssh keys into a | ||||
|  custom user file, defined in AuthorizedKeysFile (#937) | ||||
| 
 | ||||
| RH-Author: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| RH-MergeRequest: 5: ssh-util: allow cloudinit to merge all ssh keys into a custom user file, defined in AuthorizedKeysFile (#937) | ||||
| RH-Commit: [1/1] 3ed352e47c34e2ed2a1f9f5d68bc8b8f9a1365a6 (eesposit/cloud-init-centos-) | ||||
| RH-Bugzilla: 1979099 | ||||
| RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com> | ||||
| RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com> | ||||
| 
 | ||||
| Conflicts: the upstream patch modifies tests/integration_tests/util.py, which is | ||||
| not present in RHEL. | ||||
| 
 | ||||
| commit 9b52405c6f0de5e00d5ee9c1d13540425d8f6bf5 | ||||
| Author: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| Date:   Mon Jul 12 20:21:02 2021 +0200 | ||||
| 
 | ||||
|     ssh-util: allow cloudinit to merge all ssh keys into a custom user file, defined in AuthorizedKeysFile (#937) | ||||
| 
 | ||||
|     This patch aims to fix LP: #1911680 by analyzing the files provided | ||||
|     in sshd_config and merging all keys into a user-specific file. It also | ||||
|     introduces additional tests to cover this specific case. | ||||
| 
 | ||||
|     The file is picked by analyzing the path given in AuthorizedKeysFile. | ||||
| 
 | ||||
|     If it points inside the current user's home folder (the path is /home/user/*), it | ||||
|     means it is a user-specific file, so we can copy all user keys there. | ||||
|     If it contains %u or %h, it means that there will be a specific | ||||
|     authorized_keys file for each user, so we can copy all user keys there. | ||||
|     If no path points to a user-specific file, for example when only | ||||
|     /etc/ssh/authorized_keys is given, default to ~/.ssh/authorized_keys. | ||||
|     Note that if there is more than one user-specific file, the last | ||||
|     one will be picked. | ||||
| 
 | ||||
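A self-contained Python sketch of the selection rule described above
(pick_user_keyfile is a hypothetical helper written for illustration; the
actual change is in extract_authorized_keys in the diff below):

    import os

    def pick_user_keyfile(authorizedkeysfile_value, home_dir, username):
        """Return the user-specific file implied by an AuthorizedKeysFile value."""
        default = os.path.join(home_dir, '.ssh', 'authorized_keys')
        chosen = default
        for entry in authorizedkeysfile_value.split():
            rendered = entry.replace('%h', home_dir).replace('%u', username)
            if not rendered.startswith('/'):
                # sshd treats relative paths as relative to the user's home.
                rendered = os.path.join(home_dir, rendered)
            if '%u' in entry or '%h' in entry or rendered.startswith(home_dir + '/'):
                chosen = rendered  # the last user-specific entry wins
        return chosen

    print(pick_user_keyfile('/etc/ssh/authorized_keys %h/.ssh/authorized_keys2',
                            '/home/bobby', 'bobby'))
    # -> /home/bobby/.ssh/authorized_keys2
    print(pick_user_keyfile('/etc/ssh/authorized_keys', '/home/bobby', 'bobby'))
    # -> /home/bobby/.ssh/authorized_keys (the default)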
|     Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
|     Co-authored-by: James Falcon <therealfalcon@gmail.com> | ||||
| 
 | ||||
|     LP: #1911680 | ||||
|     RHBZ:1862967 | ||||
| 
 | ||||
| Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com> | ||||
| ---
 | ||||
|  cloudinit/ssh_util.py                         |  22 +- | ||||
|  .../assets/keys/id_rsa.test1                  |  38 +++ | ||||
|  .../assets/keys/id_rsa.test1.pub              |   1 + | ||||
|  .../assets/keys/id_rsa.test2                  |  38 +++ | ||||
|  .../assets/keys/id_rsa.test2.pub              |   1 + | ||||
|  .../assets/keys/id_rsa.test3                  |  38 +++ | ||||
|  .../assets/keys/id_rsa.test3.pub              |   1 + | ||||
|  .../modules/test_ssh_keysfile.py              |  85 ++++++ | ||||
|  tests/unittests/test_sshutil.py               | 246 +++++++++++++++++- | ||||
|  9 files changed, 456 insertions(+), 14 deletions(-) | ||||
|  create mode 100644 tests/integration_tests/assets/keys/id_rsa.test1 | ||||
|  create mode 100644 tests/integration_tests/assets/keys/id_rsa.test1.pub | ||||
|  create mode 100644 tests/integration_tests/assets/keys/id_rsa.test2 | ||||
|  create mode 100644 tests/integration_tests/assets/keys/id_rsa.test2.pub | ||||
|  create mode 100644 tests/integration_tests/assets/keys/id_rsa.test3 | ||||
|  create mode 100644 tests/integration_tests/assets/keys/id_rsa.test3.pub | ||||
|  create mode 100644 tests/integration_tests/modules/test_ssh_keysfile.py | ||||
| 
 | ||||
| diff --git a/cloudinit/ssh_util.py b/cloudinit/ssh_util.py
 | ||||
| index c08042d6..89057262 100644
 | ||||
| --- a/cloudinit/ssh_util.py
 | ||||
| +++ b/cloudinit/ssh_util.py
 | ||||
| @@ -252,13 +252,15 @@ def render_authorizedkeysfile_paths(value, homedir, username):
 | ||||
|  def extract_authorized_keys(username, sshd_cfg_file=DEF_SSHD_CFG): | ||||
|      (ssh_dir, pw_ent) = users_ssh_info(username) | ||||
|      default_authorizedkeys_file = os.path.join(ssh_dir, 'authorized_keys') | ||||
| +    user_authorizedkeys_file = default_authorizedkeys_file
 | ||||
|      auth_key_fns = [] | ||||
|      with util.SeLinuxGuard(ssh_dir, recursive=True): | ||||
|          try: | ||||
|              ssh_cfg = parse_ssh_config_map(sshd_cfg_file) | ||||
| +            key_paths = ssh_cfg.get("authorizedkeysfile",
 | ||||
| +                                    "%h/.ssh/authorized_keys")
 | ||||
|              auth_key_fns = render_authorizedkeysfile_paths( | ||||
| -                ssh_cfg.get("authorizedkeysfile", "%h/.ssh/authorized_keys"),
 | ||||
| -                pw_ent.pw_dir, username)
 | ||||
| +                key_paths, pw_ent.pw_dir, username)
 | ||||
|   | ||||
|          except (IOError, OSError): | ||||
|              # Give up and use a default key filename | ||||
| @@ -267,8 +269,22 @@ def extract_authorized_keys(username, sshd_cfg_file=DEF_SSHD_CFG):
 | ||||
|                          "config from %r, using 'AuthorizedKeysFile' file " | ||||
|                          "%r instead", DEF_SSHD_CFG, auth_key_fns[0]) | ||||
|   | ||||
| +    # check if one of the keys is the user's one
 | ||||
| +    for key_path, auth_key_fn in zip(key_paths.split(), auth_key_fns):
 | ||||
| +        if any([
 | ||||
| +            '%u' in key_path,
 | ||||
| +            '%h' in key_path,
 | ||||
| +            auth_key_fn.startswith('{}/'.format(pw_ent.pw_dir))
 | ||||
| +        ]):
 | ||||
| +            user_authorizedkeys_file = auth_key_fn
 | ||||
| +
 | ||||
| +    if user_authorizedkeys_file != default_authorizedkeys_file:
 | ||||
| +        LOG.debug(
 | ||||
| +            "AuthorizedKeysFile has an user-specific authorized_keys, "
 | ||||
| +            "using %s", user_authorizedkeys_file)
 | ||||
| +
 | ||||
|      # always store all the keys in the user's private file | ||||
| -    return (default_authorizedkeys_file, parse_authorized_keys(auth_key_fns))
 | ||||
| +    return (user_authorizedkeys_file, parse_authorized_keys(auth_key_fns))
 | ||||
|   | ||||
|   | ||||
|  def setup_user_keys(keys, username, options=None): | ||||
| diff --git a/tests/integration_tests/assets/keys/id_rsa.test1 b/tests/integration_tests/assets/keys/id_rsa.test1
 | ||||
| new file mode 100644 | ||||
| index 00000000..bd4c822e
 | ||||
| --- /dev/null
 | ||||
| +++ b/tests/integration_tests/assets/keys/id_rsa.test1
 | ||||
| @@ -0,0 +1,38 @@
 | ||||
| +-----BEGIN OPENSSH PRIVATE KEY-----
 | ||||
| +b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn
 | ||||
| +NhAAAAAwEAAQAAAYEAtRlG96aJ23URvAgO/bBsuLl+lquc350aSwV98/i8vlvOn5GVcHye
 | ||||
| +t/rXQg4lZ4s0owG3kWyQFY8nvTk+G+UNU8fN0anAzBDi+4MzsejkF9scjTMFmXVrIpICqV
 | ||||
| +3bYQNjPv6r+ubQdkD01du3eB9t5/zl84gtshp0hBdofyz8u1/A25s7fVU67GyI7PdKvaS+
 | ||||
| +yvJSInZnb2e9VQzfJC+qAnN7gUZatBKjdgUtJeiUUeDaVnaS17b0aoT9iBO0sIcQtOTBlY
 | ||||
| +lCjFt1TAMLZ64Hj3SfGZB7Yj0Z+LzFB2IWX1zzsjI68YkYPKOSL/NYhQU9e55kJQ7WnngN
 | ||||
| +HY/2n/A7dNKSFDmgM5c9IWgeZ7fjpsfIYAoJ/CAxFIND+PEHd1gCS6xoEhaUVyh5WH/Xkw
 | ||||
| +Kv1nx4AiZ2BFCE+75kySRLZUJ+5y0r3DU5ktMXeURzVIP7pu0R8DCul+GU+M/+THyWtAEO
 | ||||
| +geaNJ6fYpo2ipDhbmTYt3kk2lMIapRxGBFs+37sdAAAFgGGJssNhibLDAAAAB3NzaC1yc2
 | ||||
| +EAAAGBALUZRvemidt1EbwIDv2wbLi5fparnN+dGksFffP4vL5bzp+RlXB8nrf610IOJWeL
 | ||||
| +NKMBt5FskBWPJ705PhvlDVPHzdGpwMwQ4vuDM7Ho5BfbHI0zBZl1ayKSAqld22EDYz7+q/
 | ||||
| +rm0HZA9NXbt3gfbef85fOILbIadIQXaH8s/LtfwNubO31VOuxsiOz3Sr2kvsryUiJ2Z29n
 | ||||
| +vVUM3yQvqgJze4FGWrQSo3YFLSXolFHg2lZ2kte29GqE/YgTtLCHELTkwZWJQoxbdUwDC2
 | ||||
| +euB490nxmQe2I9Gfi8xQdiFl9c87IyOvGJGDyjki/zWIUFPXueZCUO1p54DR2P9p/wO3TS
 | ||||
| +khQ5oDOXPSFoHme346bHyGAKCfwgMRSDQ/jxB3dYAkusaBIWlFcoeVh/15MCr9Z8eAImdg
 | ||||
| +RQhPu+ZMkkS2VCfuctK9w1OZLTF3lEc1SD+6btEfAwrpfhlPjP/kx8lrQBDoHmjSen2KaN
 | ||||
| +oqQ4W5k2Ld5JNpTCGqUcRgRbPt+7HQAAAAMBAAEAAAGBAJJCTOd70AC2ptEGbR0EHHqADT
 | ||||
| +Wgefy7A94tHFEqxTy0JscGq/uCGimaY7kMdbcPXT59B4VieWeAC2cuUPP0ZHQSfS5ke7oT
 | ||||
| +tU3N47U+0uBVbNS4rUAH7bOo2o9wptnOA5x/z+O+AARRZ6tEXQOd1oSy4gByLf2Wkh2QTi
 | ||||
| +vP6Hln1vlFgKEzcXg6G8fN3MYWxKRhWmZM3DLERMvorlqqSBLcs5VvfZfLKcsKWTExioAq
 | ||||
| +KgwEjYm8T9+rcpsw1xBus3j9k7wCI1Sus6PCDjq0pcYKLMYM7p8ygnU2tRYrOztdIxgWRA
 | ||||
| +w/1oenm1Mqq2tV5xJcBCwCLOGe6SFwkIRywOYc57j5McH98Xhhg9cViyyBdXy/baF0mro+
 | ||||
| +qPhOsWDxqwD4VKZ9UmQ6O8kPNKcc7QcIpFJhcO0g9zbp/MT0KueaWYrTKs8y4lUkTT7Xz6
 | ||||
| ++MzlR122/JwlAbBo6Y2kWtB+y+XwBZ0BfyJsm2czDhKm7OI5KfuBNhq0tFfKwOlYBq4QAA
 | ||||
| +AMAyvUof1R8LLISkdO3EFTKn5RGNkPPoBJmGs6LwvU7NSjjLj/wPQe4jsIBc585tvbrddp
 | ||||
| +60h72HgkZ5tqOfdeBYOKqX0qQQBHUEvI6M+NeQTQRev8bCHMLXQ21vzpClnrwNzlja359E
 | ||||
| +uTRfiPRwIlyPLhOUiClBDSAnBI9h82Hkk3zzsQ/xGfsPB7iOjRbW69bMRSVCRpeweCVmWC
 | ||||
| +77DTsEOq69V2TdljhQNIXE5OcOWonIlfgPiI74cdd+dLhzc/AAAADBAO1/JXd2kYiRyNkZ
 | ||||
| +aXTLcwiSgBQIYbobqVP3OEtTclr0P1JAvby3Y4cCaEhkenx+fBqgXAku5lKM+U1Q9AEsMk
 | ||||
| +cjIhaDpb43rU7GPjMn4zHwgGsEKd5pC1yIQ2PlK+cHanAdsDjIg+6RR+fuvid/mBeBOYXb
 | ||||
| +Py0sa3HyekLJmCdx4UEyNASoiNaGFLQVAqo+RACsXy6VMxFH5dqDYlvwrfUQLwxJmse9Vb
 | ||||
| +GEuuPAsklNugZqssC2XOIujFVUpslduQAAAMEAwzVHQVtsc3icCSzEAARpDTUdTbI29OhB
 | ||||
| +/FMBnjzS9/3SWfLuBOSm9heNCHs2jdGNb8cPdKZuY7S9Fx6KuVUPyTbSSYkjj0F4fTeC9g
 | ||||
| +0ym4p4UWYdF67WSWwLORkaG8K0d+G/CXkz8hvKUg6gcZWKBHAE1ROrHu1nsc8v7mkiKq4I
 | ||||
| +bnTw5Q9TgjbWcQWtgPq0wXyyl/K8S1SFdkMCTOHDD0RQ+jTV2WNGVwFTodIRHenX+Rw2g4
 | ||||
| +CHbTWbsFrHR1qFAAAACmphbWVzQG5ld3Q=
 | ||||
| +-----END OPENSSH PRIVATE KEY-----
 | ||||
| diff --git a/tests/integration_tests/assets/keys/id_rsa.test1.pub b/tests/integration_tests/assets/keys/id_rsa.test1.pub
 | ||||
| new file mode 100644 | ||||
| index 00000000..3d2e26e1
 | ||||
| --- /dev/null
 | ||||
| +++ b/tests/integration_tests/assets/keys/id_rsa.test1.pub
 | ||||
| @@ -0,0 +1 @@
 | ||||
| +ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1GUb3ponbdRG8CA79sGy4uX6Wq5zfnRpLBX3z+Ly+W86fkZVwfJ63+tdCDiVnizSjAbeRbJAVjye9OT4b5Q1Tx83RqcDMEOL7gzOx6OQX2xyNMwWZdWsikgKpXdthA2M+/qv65tB2QPTV27d4H23n/OXziC2yGnSEF2h/LPy7X8Dbmzt9VTrsbIjs90q9pL7K8lIidmdvZ71VDN8kL6oCc3uBRlq0EqN2BS0l6JRR4NpWdpLXtvRqhP2IE7SwhxC05MGViUKMW3VMAwtnrgePdJ8ZkHtiPRn4vMUHYhZfXPOyMjrxiRg8o5Iv81iFBT17nmQlDtaeeA0dj/af8Dt00pIUOaAzlz0haB5nt+Omx8hgCgn8IDEUg0P48Qd3WAJLrGgSFpRXKHlYf9eTAq/WfHgCJnYEUIT7vmTJJEtlQn7nLSvcNTmS0xd5RHNUg/um7RHwMK6X4ZT4z/5MfJa0AQ6B5o0np9imjaKkOFuZNi3eSTaUwhqlHEYEWz7fux0= test1@host
 | ||||
| diff --git a/tests/integration_tests/assets/keys/id_rsa.test2 b/tests/integration_tests/assets/keys/id_rsa.test2
 | ||||
| new file mode 100644 | ||||
| index 00000000..5854d901
 | ||||
| --- /dev/null
 | ||||
| +++ b/tests/integration_tests/assets/keys/id_rsa.test2
 | ||||
| @@ -0,0 +1,38 @@
 | ||||
| +-----BEGIN OPENSSH PRIVATE KEY-----
 | ||||
| +b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn
 | ||||
| +NhAAAAAwEAAQAAAYEAvK50D2PWOc4ikyHVRJS6tDhqzjL5cKiivID4p1X8BYCVw83XAEGO
 | ||||
| +LnItUyVXHNADlh6fpVq1NY6A2JVtygoPF6ZFx8ph7IWMmnhDdnxLLyGsbhd1M1tiXJD/R+
 | ||||
| +3WnGHRJ4PKrQavMLgqHRrieV3QVVfjFSeo6jX/4TruP6ZmvITMZWJrXaGphxJ/pPykEdkO
 | ||||
| +i8AmKU9FNviojyPS2nNtj9B/635IdgWvrd7Vf5Ycsw9MR55LWSidwa856RH62Yl6LpEGTH
 | ||||
| +m1lJiMk1u88JPSqvohhaUkLKkFpcQwcB0m76W1KOyllJsmX8bNXrlZsI+WiiYI7Xl5vQm2
 | ||||
| +17DEuNeavtPAtDMxu8HmTg2UJ55Naxehbfe2lx2k5kYGGw3i1O1OVN2pZ2/OB71LucYd/5
 | ||||
| +qxPaz03wswcGOJYGPkNc40vdES/Scc7Yt8HsnZuzqkyOgzn0HiUCzoYUYLYTpLf+yGmwxS
 | ||||
| +yAEY056aOfkCsboKHOKiOmlJxNaZZFQkX1evep4DAAAFgC7HMbUuxzG1AAAAB3NzaC1yc2
 | ||||
| +EAAAGBALyudA9j1jnOIpMh1USUurQ4as4y+XCooryA+KdV/AWAlcPN1wBBji5yLVMlVxzQ
 | ||||
| +A5Yen6VatTWOgNiVbcoKDxemRcfKYeyFjJp4Q3Z8Sy8hrG4XdTNbYlyQ/0ft1pxh0SeDyq
 | ||||
| +0GrzC4Kh0a4nld0FVX4xUnqOo1/+E67j+mZryEzGVia12hqYcSf6T8pBHZDovAJilPRTb4
 | ||||
| +qI8j0tpzbY/Qf+t+SHYFr63e1X+WHLMPTEeeS1koncGvOekR+tmJei6RBkx5tZSYjJNbvP
 | ||||
| +CT0qr6IYWlJCypBaXEMHAdJu+ltSjspZSbJl/GzV65WbCPloomCO15eb0JttewxLjXmr7T
 | ||||
| +wLQzMbvB5k4NlCeeTWsXoW33tpcdpOZGBhsN4tTtTlTdqWdvzge9S7nGHf+asT2s9N8LMH
 | ||||
| +BjiWBj5DXONL3REv0nHO2LfB7J2bs6pMjoM59B4lAs6GFGC2E6S3/shpsMUsgBGNOemjn5
 | ||||
| +ArG6ChziojppScTWmWRUJF9Xr3qeAwAAAAMBAAEAAAGASj/kkEHbhbfmxzujL2/P4Sfqb+
 | ||||
| +aDXqAeGkwujbs6h/fH99vC5ejmSMTJrVSeaUo6fxLiBDIj6UWA0rpLEBzRP59BCpRL4MXV
 | ||||
| +RNxav/+9nniD4Hb+ug0WMhMlQmsH71ZW9lPYqCpfOq7ec8GmqdgPKeaCCEspH7HMVhfYtd
 | ||||
| +eHylwAC02lrpz1l5/h900sS5G9NaWR3uPA+xbzThDs4uZVkSidjlCNt1QZhDSSk7jA5n34
 | ||||
| +qJ5UTGu9WQDZqyxWKND+RIyQuFAPGQyoyCC1FayHO2sEhT5qHuumL14Mn81XpzoXFoKyql
 | ||||
| +rhBDe+pHhKArBYt92Evch0k1ABKblFxtxLXcvk4Fs7pHi+8k4+Cnazej2kcsu1kURlMZJB
 | ||||
| +w2QT/8BV4uImbH05LtyscQuwGzpIoxqrnHrvg5VbohStmhoOjYybzqqW3/M0qhkn5JgTiy
 | ||||
| +dJcHRJisRnAcmbmEchYtLDi6RW1e022H4I9AFXQqyr5HylBq6ugtWcFCsrcX8ibZ8xAAAA
 | ||||
| +wQCAOPgwae6yZLkrYzRfbxZtGKNmhpI0EtNSDCHYuQQapFZJe7EFENs/VAaIiiut0yajGj
 | ||||
| +c3aoKcwGIoT8TUM8E3GSNW6+WidUOC7H6W+/6N2OYZHRBACGz820xO+UBCl2oSk+dLBlfr
 | ||||
| +IQzBGUWn5uVYCs0/2nxfCdFyHtMK8dMF/ypbdG+o1rXz5y9b7PVG6Mn+o1Rjsdkq7VERmy
 | ||||
| +Pukd8hwATOIJqoKl3TuFyBeYFLqe+0e7uTeswQFw17PF31VjAAAADBAOpJRQb8c6qWqsvv
 | ||||
| +vkve0uMuL0DfWW0G6+SxjPLcV6aTWL5xu0Grd8uBxDkkHU/CDrAwpchXyuLsvbw21Eje/u
 | ||||
| +U5k9nLEscWZwcX7odxlK+EfAY2Bf5+Hd9bH5HMzTRJH8KkWK1EppOLPyiDxz4LZGzPLVyv
 | ||||
| +/1PgSuvXkSWk1KIE4SvSemyxGX2tPVI6uO+URqevfnPOS1tMB7BMQlgkR6eh4bugx9UYx9
 | ||||
| +mwlXonNa4dN0iQxZ7N4rKFBbT/uyB2bQAAAMEAzisnkD8k9Tn8uyhxpWLHwb03X4ZUUHDV
 | ||||
| +zu15e4a8dZ+mM8nHO986913Xz5JujlJKkGwFTvgWkIiR2zqTEauZHARH7gANpaweTm6lPd
 | ||||
| +E4p2S0M3ulY7xtp9lCFIrDhMPPkGq8SFZB6qhgucHcZSRLq6ZDou3S2IdNOzDTpBtkhRCS
 | ||||
| +0zFcdTLh3zZweoy8HGbW36bwB6s1CIL76Pd4F64i0Ms9CCCU6b+E5ArFhYQIsXiDbgHWbD
 | ||||
| +tZRSm2GEgnDGAvAAAACmphbWVzQG5ld3Q=
 | ||||
| +-----END OPENSSH PRIVATE KEY-----
 | ||||
| diff --git a/tests/integration_tests/assets/keys/id_rsa.test2.pub b/tests/integration_tests/assets/keys/id_rsa.test2.pub
 | ||||
| new file mode 100644 | ||||
| index 00000000..f3831a57
 | ||||
| --- /dev/null
 | ||||
| +++ b/tests/integration_tests/assets/keys/id_rsa.test2.pub
 | ||||
| @@ -0,0 +1 @@
 | ||||
| +ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8rnQPY9Y5ziKTIdVElLq0OGrOMvlwqKK8gPinVfwFgJXDzdcAQY4uci1TJVcc0AOWHp+lWrU1joDYlW3KCg8XpkXHymHshYyaeEN2fEsvIaxuF3UzW2JckP9H7dacYdEng8qtBq8wuCodGuJ5XdBVV+MVJ6jqNf/hOu4/pma8hMxlYmtdoamHEn+k/KQR2Q6LwCYpT0U2+KiPI9Lac22P0H/rfkh2Ba+t3tV/lhyzD0xHnktZKJ3BrznpEfrZiXoukQZMebWUmIyTW7zwk9Kq+iGFpSQsqQWlxDBwHSbvpbUo7KWUmyZfxs1euVmwj5aKJgjteXm9CbbXsMS415q+08C0MzG7weZODZQnnk1rF6Ft97aXHaTmRgYbDeLU7U5U3alnb84HvUu5xh3/mrE9rPTfCzBwY4lgY+Q1zjS90RL9Jxzti3weydm7OqTI6DOfQeJQLOhhRgthOkt/7IabDFLIARjTnpo5+QKxugoc4qI6aUnE1plkVCRfV696ngM= test2@host
 | ||||
| diff --git a/tests/integration_tests/assets/keys/id_rsa.test3 b/tests/integration_tests/assets/keys/id_rsa.test3
 | ||||
| new file mode 100644 | ||||
| index 00000000..2596c762
 | ||||
| --- /dev/null
 | ||||
| +++ b/tests/integration_tests/assets/keys/id_rsa.test3
 | ||||
| @@ -0,0 +1,38 @@
 | ||||
| +-----BEGIN OPENSSH PRIVATE KEY-----
 | ||||
| +b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn
 | ||||
| +NhAAAAAwEAAQAAAYEApPG4MdkYQKD57/qreFrh9GRC22y66qZOWZWRjC887rrbvBzO69hV
 | ||||
| +yJpTIXleJEvpWiHYcjMR5G6NNFsnNtZ4fxDqmSc4vcFj53JsE/XNqLKq6psXadCb5vkNpG
 | ||||
| +bxA+Z5bJlzJ969PgJIIEbgc86sei4kgR2MuPWqtZbY5GkpNCTqWuLYeFK+14oFruA2nyWH
 | ||||
| +9MOIRDHK/d597psHy+LTMtymO7ZPhO571abKw6jvvwiSeDxVE9kV7KAQIuM9/S3gftvgQQ
 | ||||
| +ron3GL34pgmIabdSGdbfHqGDooryJhlbquJZELBN236KgRNTCAjVvUzjjQr1eRP3xssGwV
 | ||||
| +O6ECBGCQLl/aYogAgtwnwj9iXqtfiLK3EwlgjquU4+JQ0CVtLhG3gIZB+qoMThco0pmHTr
 | ||||
| +jtfQCwrztsBBFunSa2/CstuV1mQ5O5ZrZ6ACo9yPRBNkns6+CiKdtMtCtzi3k2RDz9jpYm
 | ||||
| +Pcak03Lr7IkdC1Tp6+jA+//yPHSO1o4CqW89IQzNAAAFgEUd7lZFHe5WAAAAB3NzaC1yc2
 | ||||
| +EAAAGBAKTxuDHZGECg+e/6q3ha4fRkQttsuuqmTlmVkYwvPO6627wczuvYVciaUyF5XiRL
 | ||||
| +6Voh2HIzEeRujTRbJzbWeH8Q6pknOL3BY+dybBP1zaiyquqbF2nQm+b5DaRm8QPmeWyZcy
 | ||||
| +fevT4CSCBG4HPOrHouJIEdjLj1qrWW2ORpKTQk6lri2HhSvteKBa7gNp8lh/TDiEQxyv3e
 | ||||
| +fe6bB8vi0zLcpju2T4Tue9WmysOo778Ikng8VRPZFeygECLjPf0t4H7b4EEK6J9xi9+KYJ
 | ||||
| +iGm3UhnW3x6hg6KK8iYZW6riWRCwTdt+ioETUwgI1b1M440K9XkT98bLBsFTuhAgRgkC5f
 | ||||
| +2mKIAILcJ8I/Yl6rX4iytxMJYI6rlOPiUNAlbS4Rt4CGQfqqDE4XKNKZh0647X0AsK87bA
 | ||||
| +QRbp0mtvwrLbldZkOTuWa2egAqPcj0QTZJ7OvgoinbTLQrc4t5NkQ8/Y6WJj3GpNNy6+yJ
 | ||||
| +HQtU6evowPv/8jx0jtaOAqlvPSEMzQAAAAMBAAEAAAGAGaqbdPZJNdVWzyb8g6/wtSzc0n
 | ||||
| +Qq6dSTIJGLonq/So69HpqFAGIbhymsger24UMGvsXBfpO/1wH06w68HWZmPa+OMeLOi4iK
 | ||||
| +WTuO4dQ/+l5DBlq32/lgKSLcIpb6LhcxEdsW9j9Mx1dnjc45owun/yMq/wRwH1/q/nLIsV
 | ||||
| +JD3R9ZcGcYNDD8DWIm3D17gmw+qbG7hJES+0oh4n0xS2KyZpm7LFOEMDVEA8z+hE/HbryQ
 | ||||
| +vjD1NC91n+qQWD1wKfN3WZDRwip3z1I5VHMpvXrA/spHpa9gzHK5qXNmZSz3/dfA1zHjCR
 | ||||
| +2dHjJnrIUH8nyPfw8t+COC+sQBL3Nr0KUWEFPRM08cOcQm4ctzg17aDIZBONjlZGKlReR8
 | ||||
| +1zfAw84Q70q2spLWLBLXSFblHkaOfijEbejIbaz2UUEQT27WD7RHAORdQlkx7eitk66T9d
 | ||||
| +DzIq/cpYhm5Fs8KZsh3PLldp9nsHbD2Oa9J9LJyI4ryuIW0mVwRdvPSiiYi3K+mDCpAAAA
 | ||||
| +wBe+ugEEJ+V7orb1f4Zez0Bd4FNkEc52WZL4CWbaCtM+ZBg5KnQ6xW14JdC8IS9cNi/I5P
 | ||||
| +yLsBvG4bWPLGgQruuKY6oLueD6BFnKjqF6ACUCiSQldh4BAW1nYc2U48+FFvo3ZQyudFSy
 | ||||
| +QEFlhHmcaNMDo0AIJY5Xnq2BG3nEX7AqdtZ8hhenHwLCRQJatDwSYBHDpSDdh9vpTnGp/2
 | ||||
| +0jBz25Ko4UANzvSAc3sA4yN3jfpoM366TgdNf8x3g1v7yljQAAAMEA0HSQjzH5nhEwB58k
 | ||||
| +mYYxnBYp1wb86zIuVhAyjZaeinvBQSTmLow8sXIHcCVuD3CgBezlU2SX5d9YuvRU9rcthi
 | ||||
| +uzn4wWnbnzYy4SwzkMJXchUAkumFVD8Hq5TNPh2Z+033rLLE08EhYypSeVpuzdpFoStaS9
 | ||||
| +3DUZA2bR/zLZI9MOVZRUcYImNegqIjOYHY8Sbj3/0QPV6+WpUJFMPvvedWhfaOsRMTA6nr
 | ||||
| +VLG4pxkrieVl0UtuRGbzD/exXhXVi7AAAAwQDKkJj4ez/+KZFYlZQKiV0BrfUFcgS6ElFM
 | ||||
| +2CZIEagCtu8eedrwkNqx2FUX33uxdvUTr4c9I3NvWeEEGTB9pgD4lh1x/nxfuhyGXtimFM
 | ||||
| +GnznGV9oyz0DmKlKiKSEGwWf5G+/NiiCwwVJ7wsQQm7TqNtkQ9b8MhWWXC7xlXKUs7dmTa
 | ||||
| +e8AqAndCCMEnbS1UQFO/R5PNcZXkFWDggLQ/eWRYKlrXgdnUgH6h0saOcViKpNJBUXb3+x
 | ||||
| +eauhOY52PS/BcAAAAKamFtZXNAbmV3dAE=
 | ||||
| +-----END OPENSSH PRIVATE KEY-----
 | ||||
| diff --git a/tests/integration_tests/assets/keys/id_rsa.test3.pub b/tests/integration_tests/assets/keys/id_rsa.test3.pub
 | ||||
| new file mode 100644 | ||||
| index 00000000..057db632
 | ||||
| --- /dev/null
 | ||||
| +++ b/tests/integration_tests/assets/keys/id_rsa.test3.pub
 | ||||
| @@ -0,0 +1 @@
 | ||||
| +ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCk8bgx2RhAoPnv+qt4WuH0ZELbbLrqpk5ZlZGMLzzuutu8HM7r2FXImlMheV4kS+laIdhyMxHkbo00Wyc21nh/EOqZJzi9wWPncmwT9c2osqrqmxdp0Jvm+Q2kZvED5nlsmXMn3r0+AkggRuBzzqx6LiSBHYy49aq1ltjkaSk0JOpa4th4Ur7XigWu4DafJYf0w4hEMcr93n3umwfL4tMy3KY7tk+E7nvVpsrDqO+/CJJ4PFUT2RXsoBAi4z39LeB+2+BBCuifcYvfimCYhpt1IZ1t8eoYOiivImGVuq4lkQsE3bfoqBE1MICNW9TOONCvV5E/fGywbBU7oQIEYJAuX9piiACC3CfCP2Jeq1+IsrcTCWCOq5Tj4lDQJW0uEbeAhkH6qgxOFyjSmYdOuO19ALCvO2wEEW6dJrb8Ky25XWZDk7lmtnoAKj3I9EE2Sezr4KIp20y0K3OLeTZEPP2OliY9xqTTcuvsiR0LVOnr6MD7//I8dI7WjgKpbz0hDM0= test3@host
 | ||||
| diff --git a/tests/integration_tests/modules/test_ssh_keysfile.py b/tests/integration_tests/modules/test_ssh_keysfile.py
 | ||||
| new file mode 100644 | ||||
| index 00000000..f82d7649
 | ||||
| --- /dev/null
 | ||||
| +++ b/tests/integration_tests/modules/test_ssh_keysfile.py
 | ||||
| @@ -0,0 +1,85 @@
 | ||||
| +import paramiko
 | ||||
| +import pytest
 | ||||
| +from io import StringIO
 | ||||
| +from paramiko.ssh_exception import SSHException
 | ||||
| +
 | ||||
| +from tests.integration_tests.instances import IntegrationInstance
 | ||||
| +from tests.integration_tests.util import get_test_rsa_keypair
 | ||||
| +
 | ||||
| +TEST_USER1_KEYS = get_test_rsa_keypair('test1')
 | ||||
| +TEST_USER2_KEYS = get_test_rsa_keypair('test2')
 | ||||
| +TEST_DEFAULT_KEYS = get_test_rsa_keypair('test3')
 | ||||
| +
 | ||||
| +USERDATA = """\
 | ||||
| +#cloud-config
 | ||||
| +bootcmd:
 | ||||
| + - sed -i 's;#AuthorizedKeysFile.*;AuthorizedKeysFile /etc/ssh/authorized_keys %h/.ssh/authorized_keys2;' /etc/ssh/sshd_config
 | ||||
| +ssh_authorized_keys:
 | ||||
| + - {default}
 | ||||
| +users:
 | ||||
| +- default
 | ||||
| +- name: test_user1
 | ||||
| +  ssh_authorized_keys:
 | ||||
| +    - {user1}
 | ||||
| +- name: test_user2
 | ||||
| +  ssh_authorized_keys:
 | ||||
| +    - {user2}
 | ||||
| +""".format(  # noqa: E501
 | ||||
| +    default=TEST_DEFAULT_KEYS.public_key,
 | ||||
| +    user1=TEST_USER1_KEYS.public_key,
 | ||||
| +    user2=TEST_USER2_KEYS.public_key,
 | ||||
| +)
 | ||||
| +
 | ||||
| +
 | ||||
| +@pytest.mark.ubuntu
 | ||||
| +@pytest.mark.user_data(USERDATA)
 | ||||
| +def test_authorized_keys(client: IntegrationInstance):
 | ||||
| +    expected_keys = [
 | ||||
| +        ('test_user1', '/home/test_user1/.ssh/authorized_keys2',
 | ||||
| +         TEST_USER1_KEYS),
 | ||||
| +        ('test_user2', '/home/test_user2/.ssh/authorized_keys2',
 | ||||
| +         TEST_USER2_KEYS),
 | ||||
| +        ('ubuntu', '/home/ubuntu/.ssh/authorized_keys2',
 | ||||
| +         TEST_DEFAULT_KEYS),
 | ||||
| +        ('root', '/root/.ssh/authorized_keys2', TEST_DEFAULT_KEYS),
 | ||||
| +    ]
 | ||||
| +
 | ||||
| +    for user, filename, keys in expected_keys:
 | ||||
| +        contents = client.read_from_file(filename)
 | ||||
| +        if user in ['ubuntu', 'root']:
 | ||||
| +            # Our personal public key gets added by pycloudlib
 | ||||
| +            lines = contents.split('\n')
 | ||||
| +            assert len(lines) == 2
 | ||||
| +            assert keys.public_key.strip() in contents
 | ||||
| +        else:
 | ||||
| +            assert contents.strip() == keys.public_key.strip()
 | ||||
| +
 | ||||
| +        # Ensure we can actually connect
 | ||||
| +        ssh = paramiko.SSHClient()
 | ||||
| +        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
 | ||||
| +        paramiko_key = paramiko.RSAKey.from_private_key(StringIO(
 | ||||
| +            keys.private_key))
 | ||||
| +
 | ||||
| +        # Will fail with AuthenticationException if
 | ||||
| +        # we cannot connect
 | ||||
| +        ssh.connect(
 | ||||
| +            client.instance.ip,
 | ||||
| +            username=user,
 | ||||
| +            pkey=paramiko_key,
 | ||||
| +            look_for_keys=False,
 | ||||
| +            allow_agent=False,
 | ||||
| +        )
 | ||||
| +
 | ||||
| +        # Ensure other uses can't connect using our key
 | ||||
| +        other_users = [u[0] for u in expected_keys if u[2] != keys]
 | ||||
| +        for other_user in other_users:
 | ||||
| +            with pytest.raises(SSHException):
 | ||||
| +                print('trying to connect as {} with key from {}'.format(
 | ||||
| +                    other_user, user))
 | ||||
| +                ssh.connect(
 | ||||
| +                    client.instance.ip,
 | ||||
| +                    username=other_user,
 | ||||
| +                    pkey=paramiko_key,
 | ||||
| +                    look_for_keys=False,
 | ||||
| +                    allow_agent=False,
 | ||||
| +                )
 | ||||
| diff --git a/tests/unittests/test_sshutil.py b/tests/unittests/test_sshutil.py
 | ||||
| index fd1d1bac..bcb8044f 100644
 | ||||
| --- a/tests/unittests/test_sshutil.py
 | ||||
| +++ b/tests/unittests/test_sshutil.py
 | ||||
| @@ -570,20 +570,33 @@ class TestBasicAuthorizedKeyParse(test_helpers.CiTestCase):
 | ||||
|              ssh_util.render_authorizedkeysfile_paths( | ||||
|                  "%h/.keys", "/homedirs/bobby", "bobby")) | ||||
|   | ||||
| +    def test_all(self):
 | ||||
| +        self.assertEqual(
 | ||||
| +            ["/homedirs/bobby/.keys", "/homedirs/bobby/.secret/keys",
 | ||||
| +             "/keys/path1", "/opt/bobby/keys"],
 | ||||
| +            ssh_util.render_authorizedkeysfile_paths(
 | ||||
| +                "%h/.keys .secret/keys /keys/path1 /opt/%u/keys",
 | ||||
| +                "/homedirs/bobby", "bobby"))
 | ||||
| +
 | ||||
|   | ||||
|  class TestMultipleSshAuthorizedKeysFile(test_helpers.CiTestCase): | ||||
|   | ||||
|      @patch("cloudinit.ssh_util.pwd.getpwnam") | ||||
|      def test_multiple_authorizedkeys_file_order1(self, m_getpwnam): | ||||
| -        fpw = FakePwEnt(pw_name='bobby', pw_dir='/home2/bobby')
 | ||||
| +        fpw = FakePwEnt(pw_name='bobby', pw_dir='/tmp/home2/bobby')
 | ||||
|          m_getpwnam.return_value = fpw | ||||
| -        authorized_keys = self.tmp_path('authorized_keys')
 | ||||
| +        user_ssh_folder = "%s/.ssh" % fpw.pw_dir
 | ||||
| +
 | ||||
| +        # /tmp/home2/bobby/.ssh/authorized_keys = rsa
 | ||||
| +        authorized_keys = self.tmp_path('authorized_keys', dir=user_ssh_folder)
 | ||||
|          util.write_file(authorized_keys, VALID_CONTENT['rsa']) | ||||
|   | ||||
| -        user_keys = self.tmp_path('user_keys')
 | ||||
| +        # /tmp/home2/bobby/.ssh/user_keys = dsa
 | ||||
| +        user_keys = self.tmp_path('user_keys', dir=user_ssh_folder)
 | ||||
|          util.write_file(user_keys, VALID_CONTENT['dsa']) | ||||
|   | ||||
| -        sshd_config = self.tmp_path('sshd_config')
 | ||||
| +        # /tmp/sshd_config
 | ||||
| +        sshd_config = self.tmp_path('sshd_config', dir="/tmp")
 | ||||
|          util.write_file( | ||||
|              sshd_config, | ||||
|              "AuthorizedKeysFile %s %s" % (authorized_keys, user_keys) | ||||
| @@ -593,33 +606,244 @@ class TestMultipleSshAuthorizedKeysFile(test_helpers.CiTestCase):
 | ||||
|              fpw.pw_name, sshd_config) | ||||
|          content = ssh_util.update_authorized_keys(auth_key_entries, []) | ||||
|   | ||||
| -        self.assertEqual("%s/.ssh/authorized_keys" % fpw.pw_dir, auth_key_fn)
 | ||||
| +        self.assertEqual(user_keys, auth_key_fn)
 | ||||
|          self.assertTrue(VALID_CONTENT['rsa'] in content) | ||||
|          self.assertTrue(VALID_CONTENT['dsa'] in content) | ||||
|   | ||||
|      @patch("cloudinit.ssh_util.pwd.getpwnam") | ||||
|      def test_multiple_authorizedkeys_file_order2(self, m_getpwnam): | ||||
| -        fpw = FakePwEnt(pw_name='suzie', pw_dir='/home/suzie')
 | ||||
| +        fpw = FakePwEnt(pw_name='suzie', pw_dir='/tmp/home/suzie')
 | ||||
|          m_getpwnam.return_value = fpw | ||||
| -        authorized_keys = self.tmp_path('authorized_keys')
 | ||||
| +        user_ssh_folder = "%s/.ssh" % fpw.pw_dir
 | ||||
| +
 | ||||
| +        # /tmp/home/suzie/.ssh/authorized_keys = rsa
 | ||||
| +        authorized_keys = self.tmp_path('authorized_keys', dir=user_ssh_folder)
 | ||||
|          util.write_file(authorized_keys, VALID_CONTENT['rsa']) | ||||
|   | ||||
| -        user_keys = self.tmp_path('user_keys')
 | ||||
| +        # /tmp/home/suzie/.ssh/user_keys = dsa
 | ||||
| +        user_keys = self.tmp_path('user_keys', dir=user_ssh_folder)
 | ||||
|          util.write_file(user_keys, VALID_CONTENT['dsa']) | ||||
|   | ||||
| -        sshd_config = self.tmp_path('sshd_config')
 | ||||
| +        # /tmp/sshd_config
 | ||||
| +        sshd_config = self.tmp_path('sshd_config', dir="/tmp")
 | ||||
|          util.write_file( | ||||
|              sshd_config, | ||||
| -            "AuthorizedKeysFile %s %s" % (authorized_keys, user_keys)
 | ||||
| +            "AuthorizedKeysFile %s %s" % (user_keys, authorized_keys)
 | ||||
|          ) | ||||
|   | ||||
|          (auth_key_fn, auth_key_entries) = ssh_util.extract_authorized_keys( | ||||
| -            fpw.pw_name, sshd_config
 | ||||
| +            fpw.pw_name, sshd_config)
 | ||||
| +        content = ssh_util.update_authorized_keys(auth_key_entries, [])
 | ||||
| +
 | ||||
| +        self.assertEqual(authorized_keys, auth_key_fn)
 | ||||
| +        self.assertTrue(VALID_CONTENT['rsa'] in content)
 | ||||
| +        self.assertTrue(VALID_CONTENT['dsa'] in content)
 | ||||
| +
 | ||||
| +    @patch("cloudinit.ssh_util.pwd.getpwnam")
 | ||||
| +    def test_multiple_authorizedkeys_file_local_global(self, m_getpwnam):
 | ||||
| +        fpw = FakePwEnt(pw_name='bobby', pw_dir='/tmp/home2/bobby')
 | ||||
| +        m_getpwnam.return_value = fpw
 | ||||
| +        user_ssh_folder = "%s/.ssh" % fpw.pw_dir
 | ||||
| +
 | ||||
| +        # /tmp/home2/bobby/.ssh/authorized_keys = rsa
 | ||||
| +        authorized_keys = self.tmp_path('authorized_keys', dir=user_ssh_folder)
 | ||||
| +        util.write_file(authorized_keys, VALID_CONTENT['rsa'])
 | ||||
| +
 | ||||
| +        # /tmp/home2/bobby/.ssh/user_keys = dsa
 | ||||
| +        user_keys = self.tmp_path('user_keys', dir=user_ssh_folder)
 | ||||
| +        util.write_file(user_keys, VALID_CONTENT['dsa'])
 | ||||
| +
 | ||||
| +        # /tmp/etc/ssh/authorized_keys = ecdsa
 | ||||
| +        authorized_keys_global = self.tmp_path('etc/ssh/authorized_keys',
 | ||||
| +                                               dir="/tmp")
 | ||||
| +        util.write_file(authorized_keys_global, VALID_CONTENT['ecdsa'])
 | ||||
| +
 | ||||
| +        # /tmp/sshd_config
 | ||||
| +        sshd_config = self.tmp_path('sshd_config', dir="/tmp")
 | ||||
| +        util.write_file(
 | ||||
| +            sshd_config,
 | ||||
| +            "AuthorizedKeysFile %s %s %s" % (authorized_keys_global,
 | ||||
| +                                             user_keys, authorized_keys)
 | ||||
| +        )
 | ||||
| +
 | ||||
| +        (auth_key_fn, auth_key_entries) = ssh_util.extract_authorized_keys(
 | ||||
| +            fpw.pw_name, sshd_config)
 | ||||
| +        content = ssh_util.update_authorized_keys(auth_key_entries, [])
 | ||||
| +
 | ||||
| +        self.assertEqual(authorized_keys, auth_key_fn)
 | ||||
| +        self.assertTrue(VALID_CONTENT['rsa'] in content)
 | ||||
| +        self.assertTrue(VALID_CONTENT['ecdsa'] in content)
 | ||||
| +        self.assertTrue(VALID_CONTENT['dsa'] in content)
 | ||||
| +
 | ||||
| +    @patch("cloudinit.ssh_util.pwd.getpwnam")
 | ||||
| +    def test_multiple_authorizedkeys_file_local_global2(self, m_getpwnam):
 | ||||
| +        fpw = FakePwEnt(pw_name='bobby', pw_dir='/tmp/home2/bobby')
 | ||||
| +        m_getpwnam.return_value = fpw
 | ||||
| +        user_ssh_folder = "%s/.ssh" % fpw.pw_dir
 | ||||
| +
 | ||||
| +        # /tmp/home2/bobby/.ssh/authorized_keys2 = rsa
 | ||||
| +        authorized_keys = self.tmp_path('authorized_keys2',
 | ||||
| +                                        dir=user_ssh_folder)
 | ||||
| +        util.write_file(authorized_keys, VALID_CONTENT['rsa'])
 | ||||
| +
 | ||||
| +        # /tmp/home2/bobby/.ssh/user_keys3 = dsa
 | ||||
| +        user_keys = self.tmp_path('user_keys3', dir=user_ssh_folder)
 | ||||
| +        util.write_file(user_keys, VALID_CONTENT['dsa'])
 | ||||
| +
 | ||||
| +        # /tmp/etc/ssh/authorized_keys = ecdsa
 | ||||
| +        authorized_keys_global = self.tmp_path('etc/ssh/authorized_keys',
 | ||||
| +                                               dir="/tmp")
 | ||||
| +        util.write_file(authorized_keys_global, VALID_CONTENT['ecdsa'])
 | ||||
| +
 | ||||
| +        # /tmp/sshd_config
 | ||||
| +        sshd_config = self.tmp_path('sshd_config', dir="/tmp")
 | ||||
| +        util.write_file(
 | ||||
| +            sshd_config,
 | ||||
| +            "AuthorizedKeysFile %s %s %s" % (authorized_keys_global,
 | ||||
| +                                             authorized_keys, user_keys)
 | ||||
| +        )
 | ||||
| +
 | ||||
| +        (auth_key_fn, auth_key_entries) = ssh_util.extract_authorized_keys(
 | ||||
| +            fpw.pw_name, sshd_config)
 | ||||
| +        content = ssh_util.update_authorized_keys(auth_key_entries, [])
 | ||||
| +
 | ||||
| +        self.assertEqual(user_keys, auth_key_fn)
 | ||||
| +        self.assertTrue(VALID_CONTENT['rsa'] in content)
 | ||||
| +        self.assertTrue(VALID_CONTENT['ecdsa'] in content)
 | ||||
| +        self.assertTrue(VALID_CONTENT['dsa'] in content)
 | ||||
| +
 | ||||
| +    @patch("cloudinit.ssh_util.pwd.getpwnam")
 | ||||
| +    def test_multiple_authorizedkeys_file_global(self, m_getpwnam):
 | ||||
| +        fpw = FakePwEnt(pw_name='bobby', pw_dir='/tmp/home2/bobby')
 | ||||
| +        m_getpwnam.return_value = fpw
 | ||||
| +
 | ||||
| +        # /tmp/etc/ssh/authorized_keys = rsa
 | ||||
| +        authorized_keys_global = self.tmp_path('etc/ssh/authorized_keys',
 | ||||
| +                                               dir="/tmp")
 | ||||
| +        util.write_file(authorized_keys_global, VALID_CONTENT['rsa'])
 | ||||
| +
 | ||||
| +        # /tmp/sshd_config
 | ||||
| +        sshd_config = self.tmp_path('sshd_config')
 | ||||
| +        util.write_file(
 | ||||
| +            sshd_config,
 | ||||
| +            "AuthorizedKeysFile %s" % (authorized_keys_global)
 | ||||
|          ) | ||||
| +
 | ||||
| +        (auth_key_fn, auth_key_entries) = ssh_util.extract_authorized_keys(
 | ||||
| +            fpw.pw_name, sshd_config)
 | ||||
|          content = ssh_util.update_authorized_keys(auth_key_entries, []) | ||||
|   | ||||
|          self.assertEqual("%s/.ssh/authorized_keys" % fpw.pw_dir, auth_key_fn) | ||||
|          self.assertTrue(VALID_CONTENT['rsa'] in content) | ||||
| +
 | ||||
| +    @patch("cloudinit.ssh_util.pwd.getpwnam")
 | ||||
| +    def test_multiple_authorizedkeys_file_multiuser(self, m_getpwnam):
 | ||||
| +        fpw = FakePwEnt(pw_name='bobby', pw_dir='/tmp/home2/bobby')
 | ||||
| +        m_getpwnam.return_value = fpw
 | ||||
| +        user_ssh_folder = "%s/.ssh" % fpw.pw_dir
 | ||||
| +        # /tmp/home2/bobby/.ssh/authorized_keys2 = rsa
 | ||||
| +        authorized_keys = self.tmp_path('authorized_keys2',
 | ||||
| +                                        dir=user_ssh_folder)
 | ||||
| +        util.write_file(authorized_keys, VALID_CONTENT['rsa'])
 | ||||
| +        # /tmp/home2/bobby/.ssh/user_keys3 = dsa
 | ||||
| +        user_keys = self.tmp_path('user_keys3', dir=user_ssh_folder)
 | ||||
| +        util.write_file(user_keys, VALID_CONTENT['dsa'])
 | ||||
| +
 | ||||
| +        fpw2 = FakePwEnt(pw_name='suzie', pw_dir='/tmp/home/suzie')
 | ||||
| +        user_ssh_folder = "%s/.ssh" % fpw2.pw_dir
 | ||||
| +        # /tmp/home/suzie/.ssh/authorized_keys2 = ssh-xmss@openssh.com
 | ||||
| +        authorized_keys2 = self.tmp_path('authorized_keys2',
 | ||||
| +                                         dir=user_ssh_folder)
 | ||||
| +        util.write_file(authorized_keys2,
 | ||||
| +                        VALID_CONTENT['ssh-xmss@openssh.com'])
 | ||||
| +
 | ||||
| +        # /tmp/etc/ssh/authorized_keys = ecdsa
 | ||||
| +        authorized_keys_global = self.tmp_path('etc/ssh/authorized_keys2',
 | ||||
| +                                               dir="/tmp")
 | ||||
| +        util.write_file(authorized_keys_global, VALID_CONTENT['ecdsa'])
 | ||||
| +
 | ||||
| +        # /tmp/sshd_config
 | ||||
| +        sshd_config = self.tmp_path('sshd_config', dir="/tmp")
 | ||||
| +        util.write_file(
 | ||||
| +            sshd_config,
 | ||||
| +            "AuthorizedKeysFile %s %%h/.ssh/authorized_keys2 %s" %
 | ||||
| +            (authorized_keys_global, user_keys)
 | ||||
| +        )
 | ||||
| +
 | ||||
| +        # process first user
 | ||||
| +        (auth_key_fn, auth_key_entries) = ssh_util.extract_authorized_keys(
 | ||||
| +            fpw.pw_name, sshd_config)
 | ||||
| +        content = ssh_util.update_authorized_keys(auth_key_entries, [])
 | ||||
| +
 | ||||
| +        self.assertEqual(user_keys, auth_key_fn)
 | ||||
| +        self.assertTrue(VALID_CONTENT['rsa'] in content)
 | ||||
| +        self.assertTrue(VALID_CONTENT['ecdsa'] in content)
 | ||||
| +        self.assertTrue(VALID_CONTENT['dsa'] in content)
 | ||||
| +        self.assertFalse(VALID_CONTENT['ssh-xmss@openssh.com'] in content)
 | ||||
| +
 | ||||
| +        m_getpwnam.return_value = fpw2
 | ||||
| +        # process second user
 | ||||
| +        (auth_key_fn, auth_key_entries) = ssh_util.extract_authorized_keys(
 | ||||
| +            fpw2.pw_name, sshd_config)
 | ||||
| +        content = ssh_util.update_authorized_keys(auth_key_entries, [])
 | ||||
| +
 | ||||
| +        self.assertEqual(authorized_keys2, auth_key_fn)
 | ||||
| +        self.assertTrue(VALID_CONTENT['ssh-xmss@openssh.com'] in content)
 | ||||
| +        self.assertTrue(VALID_CONTENT['ecdsa'] in content)
 | ||||
| +        self.assertTrue(VALID_CONTENT['dsa'] in content)
 | ||||
| +        self.assertFalse(VALID_CONTENT['rsa'] in content)
 | ||||
| +
 | ||||
| +    @patch("cloudinit.ssh_util.pwd.getpwnam")
 | ||||
| +    def test_multiple_authorizedkeys_file_multiuser2(self, m_getpwnam):
 | ||||
| +        fpw = FakePwEnt(pw_name='bobby', pw_dir='/tmp/home/bobby')
 | ||||
| +        m_getpwnam.return_value = fpw
 | ||||
| +        user_ssh_folder = "%s/.ssh" % fpw.pw_dir
 | ||||
| +        # /tmp/home/bobby/.ssh/authorized_keys2 = rsa
 | ||||
| +        authorized_keys = self.tmp_path('authorized_keys2',
 | ||||
| +                                        dir=user_ssh_folder)
 | ||||
| +        util.write_file(authorized_keys, VALID_CONTENT['rsa'])
 | ||||
| +        # /tmp/home/bobby/.ssh/user_keys3 = dsa
 | ||||
| +        user_keys = self.tmp_path('user_keys3', dir=user_ssh_folder)
 | ||||
| +        util.write_file(user_keys, VALID_CONTENT['dsa'])
 | ||||
| +
 | ||||
| +        fpw2 = FakePwEnt(pw_name='badguy', pw_dir='/tmp/home/badguy')
 | ||||
| +        user_ssh_folder = "%s/.ssh" % fpw2.pw_dir
 | ||||
| +        # /tmp/home/badguy/home/bobby = ""
 | ||||
| +        authorized_keys2 = self.tmp_path('home/bobby', dir="/tmp/home/badguy")
 | ||||
| +
 | ||||
| +        # /tmp/etc/ssh/authorized_keys = ecdsa
 | ||||
| +        authorized_keys_global = self.tmp_path('etc/ssh/authorized_keys2',
 | ||||
| +                                               dir="/tmp")
 | ||||
| +        util.write_file(authorized_keys_global, VALID_CONTENT['ecdsa'])
 | ||||
| +
 | ||||
| +        # /tmp/sshd_config
 | ||||
| +        sshd_config = self.tmp_path('sshd_config', dir="/tmp")
 | ||||
| +        util.write_file(
 | ||||
| +            sshd_config,
 | ||||
| +            "AuthorizedKeysFile %s %%h/.ssh/authorized_keys2 %s %s" %
 | ||||
| +            (authorized_keys_global, user_keys, authorized_keys2)
 | ||||
| +        )
 | ||||
| +
 | ||||
| +        # process first user
 | ||||
| +        (auth_key_fn, auth_key_entries) = ssh_util.extract_authorized_keys(
 | ||||
| +            fpw.pw_name, sshd_config)
 | ||||
| +        content = ssh_util.update_authorized_keys(auth_key_entries, [])
 | ||||
| +
 | ||||
| +        self.assertEqual(user_keys, auth_key_fn)
 | ||||
| +        self.assertTrue(VALID_CONTENT['rsa'] in content)
 | ||||
| +        self.assertTrue(VALID_CONTENT['ecdsa'] in content)
 | ||||
| +        self.assertTrue(VALID_CONTENT['dsa'] in content)
 | ||||
| +
 | ||||
| +        m_getpwnam.return_value = fpw2
 | ||||
| +        # process second user
 | ||||
| +        (auth_key_fn, auth_key_entries) = ssh_util.extract_authorized_keys(
 | ||||
| +            fpw2.pw_name, sshd_config)
 | ||||
| +        content = ssh_util.update_authorized_keys(auth_key_entries, [])
 | ||||
| +
 | ||||
| +        # badguy should not take the key from the other user!
 | ||||
| +        self.assertEqual(authorized_keys2, auth_key_fn)
 | ||||
| +        self.assertTrue(VALID_CONTENT['ecdsa'] in content)
 | ||||
|          self.assertTrue(VALID_CONTENT['dsa'] in content) | ||||
| +        self.assertFalse(VALID_CONTENT['rsa'] in content)
 | ||||
|   | ||||
|  # vi: ts=4 expandtab | ||||
| -- 
 | ||||
| 2.27.0 | ||||
| 
 | ||||
| @ -0,0 +1,86 @@ | ||||
| From ce346f6057377c7bb9b89703fb8855ccf4947a61 Mon Sep 17 00:00:00 2001 | ||||
| From: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| Date: Wed, 8 Sep 2021 16:08:12 +0200 | ||||
| Subject: [PATCH] ssh_utils.py: ignore when sshd_config options are not | ||||
|  key/value pairs | ||||
| 
 | ||||
| RH-Author: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| RH-MergeRequest: 10: ssh_utils.py: ignore when sshd_config options are not key/value pairs | ||||
| RH-Commit: [1/1] 546081571e8b6b1415aae1a04660137070532fae (eesposit/cloud-init-centos-) | ||||
| RH-Bugzilla: 2002302 | ||||
| RH-Acked-by: Eduardo Otubo <otubo@redhat.com> | ||||
| RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com> | ||||
| RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com> | ||||
| 
 | ||||
| TESTED: by me | ||||
| BREW: 39622506 | ||||
| 
 | ||||
| commit 2ce857248162957a785af61c135ca8433fdbbcde | ||||
| Author: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| Date:   Wed Sep 8 02:08:36 2021 +0200 | ||||
| 
 | ||||
|     ssh_utils.py: ignore when sshd_config options are not key/value pairs (#1007) | ||||
| 
 | ||||
|     As described in LP: #1845552, | ||||
|     in cloudinit/ssh_util.py, in parse_ssh_config_lines(), we attempt to | ||||
|     parse each line of sshd_config. This function expects each line to | ||||
|     be in one of the following forms: | ||||
| 
 | ||||
|         # comment | ||||
|         key value | ||||
|         key=value | ||||
| 
 | ||||
|     However, options like DenyGroups and DenyUsers are specified to | ||||
|     *optionally* accept values in sshd_config. | ||||
|     Cloud-init should comply with this and skip the option if a value | ||||
|     is not provided. | ||||
| 
 | ||||
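A standalone Python sketch of the tolerant parsing described above (the real
change is the nested split in parse_ssh_config_lines in the diff below;
parse_options is a hypothetical name used only for this illustration):

    def parse_options(lines):
        parsed = []
        for line in lines:
            line = line.strip()
            if not line or line.startswith('#'):
                continue                           # blank line or comment
            try:
                key, val = line.split(None, 1)     # "key value"
            except ValueError:
                try:
                    key, val = line.split('=', 1)  # "key=value"
                except ValueError:
                    continue                       # bare option such as "DenyUsers": skip it
            parsed.append((key, val.strip()))
        return parsed

    print(parse_options(['PasswordAuthentication no', 'DenyUsers', 'Port=22']))
    # -> [('PasswordAuthentication', 'no'), ('Port', '22')]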
|     Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| 
 | ||||
| Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| ---
 | ||||
|  cloudinit/ssh_util.py           | 8 +++++++- | ||||
|  tests/unittests/test_sshutil.py | 8 ++++++++ | ||||
|  2 files changed, 15 insertions(+), 1 deletion(-) | ||||
| 
 | ||||
| diff --git a/cloudinit/ssh_util.py b/cloudinit/ssh_util.py
 | ||||
| index 9ccadf09..33679dcc 100644
 | ||||
| --- a/cloudinit/ssh_util.py
 | ||||
| +++ b/cloudinit/ssh_util.py
 | ||||
| @@ -484,7 +484,13 @@ def parse_ssh_config_lines(lines):
 | ||||
|          try: | ||||
|              key, val = line.split(None, 1) | ||||
|          except ValueError: | ||||
| -            key, val = line.split('=', 1)
 | ||||
| +            try:
 | ||||
| +                key, val = line.split('=', 1)
 | ||||
| +            except ValueError:
 | ||||
| +                LOG.debug(
 | ||||
| +                    "sshd_config: option \"%s\" has no key/value pair,"
 | ||||
| +                    " skipping it", line)
 | ||||
| +                continue
 | ||||
|          ret.append(SshdConfigLine(line, key, val)) | ||||
|      return ret | ||||
|   | ||||
| diff --git a/tests/unittests/test_sshutil.py b/tests/unittests/test_sshutil.py
 | ||||
| index a66788bf..08e20050 100644
 | ||||
| --- a/tests/unittests/test_sshutil.py
 | ||||
| +++ b/tests/unittests/test_sshutil.py
 | ||||
| @@ -525,6 +525,14 @@ class TestUpdateSshConfigLines(test_helpers.CiTestCase):
 | ||||
|          self.assertEqual([self.pwauth], result) | ||||
|          self.check_line(lines[-1], self.pwauth, "no") | ||||
|   | ||||
| +    def test_option_without_value(self):
 | ||||
| +        """Implementation only accepts key-value pairs."""
 | ||||
| +        extended_exlines = self.exlines.copy()
 | ||||
| +        denyusers_opt = "DenyUsers"
 | ||||
| +        extended_exlines.append(denyusers_opt)
 | ||||
| +        lines = ssh_util.parse_ssh_config_lines(list(extended_exlines))
 | ||||
| +        self.assertNotIn(denyusers_opt, str(lines))
 | ||||
| +
 | ||||
|      def test_single_option_updated(self): | ||||
|          """A single update should have change made and line updated.""" | ||||
|          opt, val = ("UsePAM", "no") | ||||
| -- 
 | ||||
| 2.27.0 | ||||
| 
 | ||||
| @ -0,0 +1,371 @@ | ||||
| From f9564bd4477782e8cffe4be1d3c31c0338fb03b1 Mon Sep 17 00:00:00 2001 | ||||
| From: Eduardo Otubo <otubo@redhat.com> | ||||
| Date: Mon, 5 Jul 2021 14:07:21 +0200 | ||||
| Subject: [PATCH 1/2] write passwords only to serial console, lock down | ||||
|  cloud-init-output.log (#847) | ||||
| 
 | ||||
| RH-Author: Eduardo Otubo <otubo@redhat.com> | ||||
| RH-MergeRequest: 4: write passwords only to serial console, lock down cloud-init-output.log (#847) | ||||
| RH-Commit: [1/1] 7543b3458c01ea988e987336d84510157c00390d (otubo/cloud-init-src) | ||||
| RH-Bugzilla: 1945892 | ||||
| RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> | ||||
| RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com> | ||||
| RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com> | ||||
| 
 | ||||
| commit b794d426b9ab43ea9d6371477466070d86e10668 | ||||
| Author: Daniel Watkins <oddbloke@ubuntu.com> | ||||
| Date:   Fri Mar 19 10:06:42 2021 -0400 | ||||
| 
 | ||||
|     write passwords only to serial console, lock down cloud-init-output.log (#847) | ||||
| 
 | ||||
|     Prior to this commit, when a user specified configuration which would | ||||
|     generate random passwords for users, cloud-init would cause those | ||||
|     passwords to be written to the serial console by emitting them on | ||||
|     stderr.  In the default configuration, any stdout or stderr emitted by | ||||
|     cloud-init is also written to `/var/log/cloud-init-output.log`.  This | ||||
|     file is world-readable, meaning that those randomly-generated passwords | ||||
|     were available to be read by any user with access to the system.  This | ||||
|     presents an obvious security issue. | ||||
| 
 | ||||
|     This commit responds to this issue in two ways: | ||||
| 
 | ||||
|     * We address the direct issue by moving from writing the passwords to | ||||
|       sys.stderr to writing them directly to /dev/console (via | ||||
|       util.multi_log); this means that the passwords will never end up in | ||||
|       cloud-init-output.log | ||||
|     * To avoid future issues like this, we also modify the logging code so | ||||
|       that any files created in a log sink subprocess will only be | ||||
|       owner/group readable and, if it exists, will be owned by the adm | ||||
|       group.  This results in `/var/log/cloud-init-output.log` no longer | ||||
|       being world-readable, meaning that if there are other parts of the | ||||
|       codebase that are emitting sensitive data intended for the serial | ||||
|       console, that data is no longer available to all users of the system. | ||||
| 
 | ||||
|     LP: #1918303 | ||||
| 
 | ||||
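A hedged Python sketch of the second point above, i.e. locking down the file
created by the log-writing child process (the tee command and the log path
are invented for the example; cloud-init's own code is in redirect_output in
the diff below):

    import grp
    import os
    import subprocess

    def harden_output_process():
        # Runs in the child via preexec_fn: created files are not
        # world-readable and, when an adm group exists, are group-owned by it.
        os.umask(0o037)
        try:
            os.setgid(grp.getgrnam('adm').gr_gid)
        except (KeyError, PermissionError):
            # No adm group, or not running as root (cloud-init itself runs as
            # root and only needs to handle the missing-group case).
            pass

    proc = subprocess.Popen('tee -a /tmp/example-output.log', shell=True,
                            stdin=subprocess.PIPE,
                            preexec_fn=harden_output_process)
    proc.communicate(b'redirected output\n')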
| Signed-off-by: Eduardo Otubo <otubo@redhat.com> | ||||
| Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com> | ||||
| ---
 | ||||
|  cloudinit/config/cc_set_passwords.py          |  5 +- | ||||
|  cloudinit/config/tests/test_set_passwords.py  | 40 +++++++++---- | ||||
|  cloudinit/tests/test_util.py                  | 56 +++++++++++++++++++ | ||||
|  cloudinit/util.py                             | 38 +++++++++++-- | ||||
|  .../modules/test_set_password.py              | 24 ++++++++ | ||||
|  tests/integration_tests/test_logging.py       | 22 ++++++++ | ||||
|  tests/unittests/test_util.py                  |  4 ++ | ||||
|  7 files changed, 173 insertions(+), 16 deletions(-) | ||||
|  create mode 100644 tests/integration_tests/test_logging.py | ||||
| 
 | ||||
| diff --git a/cloudinit/config/cc_set_passwords.py b/cloudinit/config/cc_set_passwords.py
 | ||||
| index d6b5682d..433de751 100755
 | ||||
| --- a/cloudinit/config/cc_set_passwords.py
 | ||||
| +++ b/cloudinit/config/cc_set_passwords.py
 | ||||
| @@ -78,7 +78,6 @@ password.
 | ||||
|  """ | ||||
|   | ||||
|  import re | ||||
| -import sys
 | ||||
|   | ||||
|  from cloudinit.distros import ug_util | ||||
|  from cloudinit import log as logging | ||||
| @@ -214,7 +213,9 @@ def handle(_name, cfg, cloud, log, args):
 | ||||
|          if len(randlist): | ||||
|              blurb = ("Set the following 'random' passwords\n", | ||||
|                       '\n'.join(randlist)) | ||||
| -            sys.stderr.write("%s\n%s\n" % blurb)
 | ||||
| +            util.multi_log(
 | ||||
| +                "%s\n%s\n" % blurb, stderr=False, fallback_to_stdout=False
 | ||||
| +            )
 | ||||
|   | ||||
|          if expire: | ||||
|              expired_users = [] | ||||
| diff --git a/cloudinit/config/tests/test_set_passwords.py b/cloudinit/config/tests/test_set_passwords.py
 | ||||
| index daa1ef51..bbe2ee8f 100644
 | ||||
| --- a/cloudinit/config/tests/test_set_passwords.py
 | ||||
| +++ b/cloudinit/config/tests/test_set_passwords.py
 | ||||
| @@ -74,10 +74,6 @@ class TestSetPasswordsHandle(CiTestCase):
 | ||||
|   | ||||
|      with_logs = True | ||||
|   | ||||
| -    def setUp(self):
 | ||||
| -        super(TestSetPasswordsHandle, self).setUp()
 | ||||
| -        self.add_patch('cloudinit.config.cc_set_passwords.sys.stderr', 'm_err')
 | ||||
| -
 | ||||
|      def test_handle_on_empty_config(self, *args): | ||||
|          """handle logs that no password has changed when config is empty.""" | ||||
|          cloud = self.tmp_cloud(distro='ubuntu') | ||||
| @@ -129,10 +125,12 @@ class TestSetPasswordsHandle(CiTestCase):
 | ||||
|              mock.call(['pw', 'usermod', 'ubuntu', '-p', '01-Jan-1970'])], | ||||
|              m_subp.call_args_list) | ||||
|   | ||||
| +    @mock.patch(MODPATH + "util.multi_log")
 | ||||
|      @mock.patch(MODPATH + "util.is_BSD") | ||||
|      @mock.patch(MODPATH + "subp.subp") | ||||
| -    def test_handle_on_chpasswd_list_creates_random_passwords(self, m_subp,
 | ||||
| -                                                              m_is_bsd):
 | ||||
| +    def test_handle_on_chpasswd_list_creates_random_passwords(
 | ||||
| +        self, m_subp, m_is_bsd, m_multi_log
 | ||||
| +    ):
 | ||||
|          """handle parses command set random passwords.""" | ||||
|          m_is_bsd.return_value = False | ||||
|          cloud = self.tmp_cloud(distro='ubuntu') | ||||
| @@ -146,10 +144,32 @@ class TestSetPasswordsHandle(CiTestCase):
 | ||||
|          self.assertIn( | ||||
|              'DEBUG: Handling input for chpasswd as list.', | ||||
|              self.logs.getvalue()) | ||||
| -        self.assertNotEqual(
 | ||||
| -            [mock.call(['chpasswd'],
 | ||||
| -             '\n'.join(valid_random_pwds) + '\n')],
 | ||||
| -            m_subp.call_args_list)
 | ||||
| +
 | ||||
| +        self.assertEqual(1, m_subp.call_count)
 | ||||
| +        args, _kwargs = m_subp.call_args
 | ||||
| +        self.assertEqual(["chpasswd"], args[0])
 | ||||
| +
 | ||||
| +        stdin = args[1]
 | ||||
| +        user_pass = {
 | ||||
| +            user: password
 | ||||
| +            for user, password
 | ||||
| +            in (line.split(":") for line in stdin.splitlines())
 | ||||
| +        }
 | ||||
| +
 | ||||
| +        self.assertEqual(1, m_multi_log.call_count)
 | ||||
| +        self.assertEqual(
 | ||||
| +            mock.call(mock.ANY, stderr=False, fallback_to_stdout=False),
 | ||||
| +            m_multi_log.call_args
 | ||||
| +        )
 | ||||
| +
 | ||||
| +        self.assertEqual(set(["root", "ubuntu"]), set(user_pass.keys()))
 | ||||
| +        written_lines = m_multi_log.call_args[0][0].splitlines()
 | ||||
| +        for password in user_pass.values():
 | ||||
| +            for line in written_lines:
 | ||||
| +                if password in line:
 | ||||
| +                    break
 | ||||
| +            else:
 | ||||
| +                self.fail("Password not emitted to console")
 | ||||
|   | ||||
|   | ||||
|  # vi: ts=4 expandtab | ||||
| diff --git a/cloudinit/tests/test_util.py b/cloudinit/tests/test_util.py
 | ||||
| index b7a302f1..e811917e 100644
 | ||||
| --- a/cloudinit/tests/test_util.py
 | ||||
| +++ b/cloudinit/tests/test_util.py
 | ||||
| @@ -851,4 +851,60 @@ class TestEnsureFile:
 | ||||
|          assert "ab" == kwargs["omode"] | ||||
|   | ||||
|   | ||||
| +@mock.patch("cloudinit.util.grp.getgrnam")
 | ||||
| +@mock.patch("cloudinit.util.os.setgid")
 | ||||
| +@mock.patch("cloudinit.util.os.umask")
 | ||||
| +class TestRedirectOutputPreexecFn:
 | ||||
| +    """This tests specifically the preexec_fn used in redirect_output."""
 | ||||
| +
 | ||||
| +    @pytest.fixture(params=["outfmt", "errfmt"])
 | ||||
| +    def preexec_fn(self, request):
 | ||||
| +        """A fixture to gather the preexec_fn used by redirect_output.
 | ||||
| +
 | ||||
| +        This enables simpler direct testing of it, and parameterises any tests
 | ||||
| +        using it to cover both the stdout and stderr code paths.
 | ||||
| +        """
 | ||||
| +        test_string = "| piped output to invoke subprocess"
 | ||||
| +        if request.param == "outfmt":
 | ||||
| +            args = (test_string, None)
 | ||||
| +        elif request.param == "errfmt":
 | ||||
| +            args = (None, test_string)
 | ||||
| +        with mock.patch("cloudinit.util.subprocess.Popen") as m_popen:
 | ||||
| +            util.redirect_output(*args)
 | ||||
| +
 | ||||
| +        assert 1 == m_popen.call_count
 | ||||
| +        _args, kwargs = m_popen.call_args
 | ||||
| +        assert "preexec_fn" in kwargs, "preexec_fn not passed to Popen"
 | ||||
| +        return kwargs["preexec_fn"]
 | ||||
| +
 | ||||
| +    def test_preexec_fn_sets_umask(
 | ||||
| +        self, m_os_umask, _m_setgid, _m_getgrnam, preexec_fn
 | ||||
| +    ):
 | ||||
| +        """preexec_fn should set a mask that avoids world-readable files."""
 | ||||
| +        preexec_fn()
 | ||||
| +
 | ||||
| +        assert [mock.call(0o037)] == m_os_umask.call_args_list
 | ||||
| +
 | ||||
| +    def test_preexec_fn_sets_group_id_if_adm_group_present(
 | ||||
| +        self, _m_os_umask, m_setgid, m_getgrnam, preexec_fn
 | ||||
| +    ):
 | ||||
| +        """We should setgrp to adm if present, so files are owned by them."""
 | ||||
| +        fake_group = mock.Mock(gr_gid=mock.sentinel.gr_gid)
 | ||||
| +        m_getgrnam.return_value = fake_group
 | ||||
| +
 | ||||
| +        preexec_fn()
 | ||||
| +
 | ||||
| +        assert [mock.call("adm")] == m_getgrnam.call_args_list
 | ||||
| +        assert [mock.call(mock.sentinel.gr_gid)] == m_setgid.call_args_list
 | ||||
| +
 | ||||
| +    def test_preexec_fn_handles_absent_adm_group_gracefully(
 | ||||
| +        self, _m_os_umask, m_setgid, m_getgrnam, preexec_fn
 | ||||
| +    ):
 | ||||
| +        """We should handle an absent adm group gracefully."""
 | ||||
| +        m_getgrnam.side_effect = KeyError("getgrnam(): name not found: 'adm'")
 | ||||
| +
 | ||||
| +        preexec_fn()
 | ||||
| +
 | ||||
| +        assert 0 == m_setgid.call_count
 | ||||
| +
 | ||||
|  # vi: ts=4 expandtab | ||||
| diff --git a/cloudinit/util.py b/cloudinit/util.py
 | ||||
| index 769f3425..4e0a72db 100644
 | ||||
| --- a/cloudinit/util.py
 | ||||
| +++ b/cloudinit/util.py
 | ||||
| @@ -359,7 +359,7 @@ def find_modules(root_dir):
 | ||||
|   | ||||
|   | ||||
|  def multi_log(text, console=True, stderr=True, | ||||
| -              log=None, log_level=logging.DEBUG):
 | ||||
| +              log=None, log_level=logging.DEBUG, fallback_to_stdout=True):
 | ||||
|      if stderr: | ||||
|          sys.stderr.write(text) | ||||
|      if console: | ||||
| @@ -368,7 +368,7 @@ def multi_log(text, console=True, stderr=True,
 | ||||
|              with open(conpath, 'w') as wfh: | ||||
|                  wfh.write(text) | ||||
|                  wfh.flush() | ||||
| -        else:
 | ||||
| +        elif fallback_to_stdout:
 | ||||
|              # A container may lack /dev/console (arguably a container bug).  If | ||||
|              # it does not exist, then write output to stdout.  this will result | ||||
|              # in duplicate stderr and stdout messages if stderr was True. | ||||
| @@ -623,6 +623,26 @@ def redirect_output(outfmt, errfmt, o_out=None, o_err=None):
 | ||||
|      if not o_err: | ||||
|          o_err = sys.stderr | ||||
|   | ||||
| +    # pylint: disable=subprocess-popen-preexec-fn
 | ||||
| +    def set_subprocess_umask_and_gid():
 | ||||
| +        """Reconfigure umask and group ID to create output files securely.
 | ||||
| +
 | ||||
| +        This is passed to subprocess.Popen as preexec_fn, so it is executed in
 | ||||
| +        the context of the newly-created process.  It:
 | ||||
| +
 | ||||
| +        * sets the umask of the process so created files aren't world-readable
 | ||||
| +        * if an adm group exists in the system, sets that as the process' GID
 | ||||
| +          (so that the created file(s) are owned by root:adm)
 | ||||
| +        """
 | ||||
| +        os.umask(0o037)
 | ||||
| +        try:
 | ||||
| +            group_id = grp.getgrnam("adm").gr_gid
 | ||||
| +        except KeyError:
 | ||||
| +            # No adm group, don't set a group
 | ||||
| +            pass
 | ||||
| +        else:
 | ||||
| +            os.setgid(group_id)
 | ||||
| +
 | ||||
|      if outfmt: | ||||
|          LOG.debug("Redirecting %s to %s", o_out, outfmt) | ||||
|          (mode, arg) = outfmt.split(" ", 1) | ||||
| @@ -632,7 +652,12 @@ def redirect_output(outfmt, errfmt, o_out=None, o_err=None):
 | ||||
|                  owith = "wb" | ||||
|              new_fp = open(arg, owith) | ||||
|          elif mode == "|": | ||||
| -            proc = subprocess.Popen(arg, shell=True, stdin=subprocess.PIPE)
 | ||||
| +            proc = subprocess.Popen(
 | ||||
| +                arg,
 | ||||
| +                shell=True,
 | ||||
| +                stdin=subprocess.PIPE,
 | ||||
| +                preexec_fn=set_subprocess_umask_and_gid,
 | ||||
| +            )
 | ||||
|              new_fp = proc.stdin | ||||
|          else: | ||||
|              raise TypeError("Invalid type for output format: %s" % outfmt) | ||||
| @@ -654,7 +679,12 @@ def redirect_output(outfmt, errfmt, o_out=None, o_err=None):
 | ||||
|                  owith = "wb" | ||||
|              new_fp = open(arg, owith) | ||||
|          elif mode == "|": | ||||
| -            proc = subprocess.Popen(arg, shell=True, stdin=subprocess.PIPE)
 | ||||
| +            proc = subprocess.Popen(
 | ||||
| +                arg,
 | ||||
| +                shell=True,
 | ||||
| +                stdin=subprocess.PIPE,
 | ||||
| +                preexec_fn=set_subprocess_umask_and_gid,
 | ||||
| +            )
 | ||||
|              new_fp = proc.stdin | ||||
|          else: | ||||
|              raise TypeError("Invalid type for error format: %s" % errfmt) | ||||
| diff --git a/tests/integration_tests/modules/test_set_password.py b/tests/integration_tests/modules/test_set_password.py
 | ||||
| index b13f76fb..d7cf91a5 100644
 | ||||
| --- a/tests/integration_tests/modules/test_set_password.py
 | ||||
| +++ b/tests/integration_tests/modules/test_set_password.py
 | ||||
| @@ -116,6 +116,30 @@ class Mixin:
 | ||||
|          # Which are not the same | ||||
|          assert shadow_users["harry"] != shadow_users["dick"] | ||||
|   | ||||
| +    def test_random_passwords_not_stored_in_cloud_init_output_log(
 | ||||
| +        self, class_client
 | ||||
| +    ):
 | ||||
| +        """We should not emit passwords to the in-instance log file.
 | ||||
| +
 | ||||
| +        LP: #1918303
 | ||||
| +        """
 | ||||
| +        cloud_init_output = class_client.read_from_file(
 | ||||
| +            "/var/log/cloud-init-output.log"
 | ||||
| +        )
 | ||||
| +        assert "dick:" not in cloud_init_output
 | ||||
| +        assert "harry:" not in cloud_init_output
 | ||||
| +
 | ||||
| +    def test_random_passwords_emitted_to_serial_console(self, class_client):
 | ||||
| +        """We should emit passwords to the serial console. (LP: #1918303)"""
 | ||||
| +        try:
 | ||||
| +            console_log = class_client.instance.console_log()
 | ||||
| +        except NotImplementedError:
 | ||||
| +            # Assume that an exception here means that we can't use the console
 | ||||
| +            # log
 | ||||
| +            pytest.skip("NotImplementedError when requesting console log")
 | ||||
| +        assert "dick:" in console_log
 | ||||
| +        assert "harry:" in console_log
 | ||||
| +
 | ||||
|      def test_explicit_password_set_correctly(self, class_client): | ||||
|          """Test that an explicitly-specified password is set correctly.""" | ||||
|          shadow_users, _ = self._fetch_and_parse_etc_shadow(class_client) | ||||
| diff --git a/tests/integration_tests/test_logging.py b/tests/integration_tests/test_logging.py
 | ||||
| new file mode 100644 | ||||
| index 00000000..b31a0434
 | ||||
| --- /dev/null
 | ||||
| +++ b/tests/integration_tests/test_logging.py
 | ||||
| @@ -0,0 +1,22 @@
 | ||||
| +"""Integration tests relating to cloud-init's logging."""
 | ||||
| +
 | ||||
| +
 | ||||
| +class TestVarLogCloudInitOutput:
 | ||||
| +    """Integration tests relating to /var/log/cloud-init-output.log."""
 | ||||
| +
 | ||||
| +    def test_var_log_cloud_init_output_not_world_readable(self, client):
 | ||||
| +        """
 | ||||
| +        The log can contain sensitive data, it shouldn't be world-readable.
 | ||||
| +
 | ||||
| +        LP: #1918303
 | ||||
| +        """
 | ||||
| +        # Check the file exists
 | ||||
| +        assert client.execute("test -f /var/log/cloud-init-output.log").ok
 | ||||
| +
 | ||||
| +        # Check its permissions are as we expect
 | ||||
| +        perms, user, group = client.execute(
 | ||||
| +            "stat -c %a:%U:%G /var/log/cloud-init-output.log"
 | ||||
| +        ).split(":")
 | ||||
| +        assert "640" == perms
 | ||||
| +        assert "root" == user
 | ||||
| +        assert "adm" == group
 | ||||
| diff --git a/tests/unittests/test_util.py b/tests/unittests/test_util.py
 | ||||
| index 857629f1..e5292001 100644
 | ||||
| --- a/tests/unittests/test_util.py
 | ||||
| +++ b/tests/unittests/test_util.py
 | ||||
| @@ -572,6 +572,10 @@ class TestMultiLog(helpers.FilesystemMockingTestCase):
 | ||||
|          util.multi_log(logged_string) | ||||
|          self.assertEqual(logged_string, self.stdout.getvalue()) | ||||
|   | ||||
| +    def test_logs_dont_go_to_stdout_if_fallback_to_stdout_is_false(self):
 | ||||
| +        util.multi_log('something', fallback_to_stdout=False)
 | ||||
| +        self.assertEqual('', self.stdout.getvalue())
 | ||||
| +
 | ||||
|      def test_logs_go_to_log_if_given(self): | ||||
|          log = mock.MagicMock() | ||||
|          logged_string = 'something very important' | ||||
| -- 
 | ||||
| 2.27.0 | ||||
| 
 | ||||
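For reference, a minimal standalone sketch of the Popen preexec_fn technique applied in the hunk above: before the piped shell starts, the child process tightens its umask and, if an adm group exists and privileges allow, switches its GID so the files it creates end up root:adm and not world-readable. The tee command and log path below are illustrative placeholders, not taken from cloud-init.

import grp
import os
import subprocess

def _drop_world_readability():
    # Runs in the child between fork() and exec().
    os.umask(0o037)                       # created files: at most rw-r-----
    try:
        adm_gid = grp.getgrnam("adm").gr_gid
    except KeyError:
        return                            # no adm group on this system
    try:
        os.setgid(adm_gid)                # needs root; output becomes root:adm
    except PermissionError:
        pass                              # unprivileged demo run: keep current GID

proc = subprocess.Popen(
    "tee -a /tmp/example-output.log",     # stand-in for cloud-init's output pipeline
    shell=True,
    stdin=subprocess.PIPE,
    preexec_fn=_drop_world_readability,
)
proc.communicate(b"hello\n")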
							
								
								
									
								1	SOURCES/cloud-init-tmpfiles.conf	Normal file
							| @ -0,0 +1 @@ | ||||
| d /run/cloud-init 0700 root root - - | ||||
							
								
								
									
								563	SPECS/cloud-init.spec	Normal file
							| @ -0,0 +1,563 @@ | ||||
| Name:           cloud-init | ||||
| Version:        21.1 | ||||
| Release:        19%{?dist} | ||||
| Summary:        Cloud instance init scripts | ||||
| License:        ASL 2.0 or GPLv3 | ||||
| URL:            http://launchpad.net/cloud-init | ||||
| Source0:        https://launchpad.net/cloud-init/trunk/%{version}/+download/%{name}-%{version}.tar.gz | ||||
| Source1:        cloud-init-tmpfiles.conf | ||||
| 
 | ||||
| Patch0001: 0001-Add-initial-redhat-setup.patch | ||||
| Patch0002: 0002-Do-not-write-NM_CONTROLLED-no-in-generated-interface.patch | ||||
| Patch0003: 0003-limit-permissions-on-def_log_file.patch | ||||
| # For bz#1970909 - [cloud-init] From RHEL 82+ cloud-init no longer displays sshd keys fingerprints from instance launched from a backup image[rhel-9] | ||||
| Patch4: ci-rhel-cloud.cfg-remove-ssh_genkeytypes-in-settings.py.patch | ||||
| # For bz#1943511 - [Aliyun][RHEL9.0][cloud-init] cloud-init service failed to start with Alibaba instance | ||||
| Patch5: ci-Fix-requiring-device-number-on-EC2-derivatives-836.patch | ||||
| # For bz#1945892 - CVE-2021-3429 cloud-init: randomly generated passwords logged in clear-text to world-readable file [rhel-9.0] | ||||
| Patch6: ci-write-passwords-only-to-serial-console-lock-down-clo.patch | ||||
| # For bz#1979099 - [cloud-init]Customize ssh AuthorizedKeysFile causes login failure[RHEL-9.0] | ||||
| Patch7: ci-ssh-util-allow-cloudinit-to-merge-all-ssh-keys-into-.patch | ||||
| # For bz#1979099 - [cloud-init]Customize ssh AuthorizedKeysFile causes login failure[RHEL-9.0] | ||||
| Patch8: ci-Stop-copying-ssh-system-keys-and-check-folder-permis.patch | ||||
| # For bz#1995843 - [cloudinit]  Fix home permissions modified by ssh module | ||||
| Patch9: ci-Fix-home-permissions-modified-by-ssh-module-SC-338-9.patch | ||||
| # For bz#2002302 - cloud-init fails with ValueError: need more than 1 value to unpack[rhel-9] | ||||
| Patch10: ci-ssh_utils.py-ignore-when-sshd_config-options-are-not.patch | ||||
| # For bz#2002492 - util.py[WARNING]: Failed generating key type rsa to file /etc/ssh/ssh_host_rsa_key | ||||
| Patch11: ci-Inhibit-sshd-keygen-.service-if-cloud-init-is-active.patch | ||||
| # For bz#2015974 - cloud-init fails to set host key permissions correctly | ||||
| Patch12: ci-cc_ssh.py-fix-private-key-group-owner-and-permission.patch | ||||
| # For bz#2016305 - disable-sshd-keygen-if-cloud-init-active.conf:8: Missing '=', ignoring line | ||||
| Patch13: ci-remove-unnecessary-EOF-string-in-disable-sshd-keygen.patch | ||||
| # For bz#2028381 - cloud-init.service fails to start after package update | ||||
| Patch14: ci-fix-error-on-upgrade-caused-by-new-vendordata2-attri.patch | ||||
| # For bz#2028031 - [RHEL-9] Above 19.2 of cloud-init fails to configure routes when configuring static and default routes to the same destination IP | ||||
| Patch15: ci-cloudinit-net-handle-two-different-routes-for-the-sa.patch | ||||
| # For bz#2040090 - [cloud-init][RHEL9] Support for cloud-init datasource 'cloud-init-vmware-guestinfo' | ||||
| Patch16: ci-Datasource-for-VMware-953.patch | ||||
| # For bz#2040090 - [cloud-init][RHEL9] Support for cloud-init datasource 'cloud-init-vmware-guestinfo' | ||||
| Patch17: ci-Change-netifaces-dependency-to-0.10.4-965.patch | ||||
| # For bz#2040090 - [cloud-init][RHEL9] Support for cloud-init datasource 'cloud-init-vmware-guestinfo' | ||||
| Patch18: ci-Update-dscheck_VMware-s-rpctool-check-970.patch | ||||
| # For bz#2040090 - [cloud-init][RHEL9] Support for cloud-init datasource 'cloud-init-vmware-guestinfo' | ||||
| Patch19: ci-Revert-unnecesary-lcase-in-ds-identify-978.patch | ||||
| # For bz#2042351 - [RHEL-9] Support for provisioning Azure VM with userdata | ||||
| Patch20: ci-Add-flexibility-to-IMDS-api-version-793.patch | ||||
| # For bz#2042351 - [RHEL-9] Support for provisioning Azure VM with userdata | ||||
| Patch21: ci-Azure-helper-Ensure-Azure-http-handler-sleeps-betwee.patch | ||||
| # For bz#2042351 - [RHEL-9] Support for provisioning Azure VM with userdata | ||||
| Patch22: ci-azure-Removing-ability-to-invoke-walinuxagent-799.patch | ||||
| # For bz#2042351 - [RHEL-9] Support for provisioning Azure VM with userdata | ||||
| Patch23: ci-Azure-eject-the-provisioning-iso-before-reporting-re.patch | ||||
| # For bz#2042351 - [RHEL-9] Support for provisioning Azure VM with userdata | ||||
| Patch24: ci-Azure-Retrieve-username-and-hostname-from-IMDS-865.patch | ||||
| # For bz#2042351 - [RHEL-9] Support for provisioning Azure VM with userdata | ||||
| Patch25: ci-Azure-Retry-net-metadata-during-nic-attach-for-non-t.patch | ||||
| # For bz#2042351 - [RHEL-9] Support for provisioning Azure VM with userdata | ||||
| Patch26: ci-Azure-adding-support-for-consuming-userdata-from-IMD.patch | ||||
| # For bz#1998445 - [Azure][RHEL-9] ordering cycle exists after reboot | ||||
| Patch27: ci-Add-_netdev-option-to-mount-Azure-ephemeral-disk-121.patch | ||||
| # For bz#2053546 - cloud-init writes route6-$DEVICE config with a HEX netmask. ip route does not like : Error: inet6 prefix is expected rather than "fd00:fd00:fd00::/ffff:ffff:ffff:ffff::". | ||||
| Patch28: ci-Fix-IPv6-netmask-format-for-sysconfig-1215.patch | ||||
| # For bz#1998445 - [Azure][RHEL-9] ordering cycle exists after reboot | ||||
| Patch29: ci-Adding-_netdev-to-the-default-mount-configuration.patch | ||||
| # For bz#2036060 - [cloud-init][ESXi][RHEL-9] Failed to config static IP according to VMware Customization Config File | ||||
| Patch30: ci-Setting-highest-autoconnect-priority-for-network-scr.patch | ||||
| 
 | ||||
| # Source-git patches | ||||
| 
 | ||||
| BuildArch:      noarch | ||||
| 
 | ||||
| BuildRequires:  pkgconfig(systemd) | ||||
| BuildRequires:  python3-devel | ||||
| BuildRequires:  python3-setuptools | ||||
| BuildRequires:  systemd | ||||
| 
 | ||||
| # For tests | ||||
| BuildRequires:  iproute | ||||
| BuildRequires:  python3-configobj | ||||
| # https://bugzilla.redhat.com/show_bug.cgi?id=1695953 | ||||
| BuildRequires:  python3-distro | ||||
| # https://bugzilla.redhat.com/show_bug.cgi?id=1417029 | ||||
| BuildRequires:  python3-httpretty >= 0.8.14-2 | ||||
| BuildRequires:  python3-jinja2 | ||||
| BuildRequires:  python3-jsonpatch | ||||
| BuildRequires:  python3-oauthlib | ||||
| BuildRequires:  python3-prettytable | ||||
| BuildRequires:  python3-pyserial | ||||
| BuildRequires:  python3-PyYAML | ||||
| BuildRequires:  python3-requests | ||||
| BuildRequires:  python3-six | ||||
| # dnf is needed to make cc_ntp unit tests work | ||||
| # https://bugs.launchpad.net/cloud-init/+bug/1721573 | ||||
| BuildRequires:  /usr/bin/dnf | ||||
| 
 | ||||
| Requires:       e2fsprogs | ||||
| Requires:       iproute | ||||
| Requires:       libselinux-python3 | ||||
| Requires:       policycoreutils-python3 | ||||
| Requires:       procps | ||||
| Requires:       python3-configobj | ||||
| # https://bugzilla.redhat.com/show_bug.cgi?id=1695953 | ||||
| Requires:       python3-distro | ||||
| Requires:       python3-jinja2 | ||||
| Requires:       python3-jsonpatch | ||||
| Requires:       python3-oauthlib | ||||
| Requires:       python3-prettytable | ||||
| Requires:       python3-pyserial | ||||
| Requires:       python3-PyYAML | ||||
| Requires:       python3-requests | ||||
| Requires:       python3-six | ||||
| Requires:       shadow-utils | ||||
| Requires:       util-linux | ||||
| Requires:       xfsprogs | ||||
| Requires:       dhcp-client | ||||
| # https://bugzilla.redhat.com/show_bug.cgi?id=2032524 | ||||
| Requires:       gdisk | ||||
| Requires:       openssl | ||||
| Requires:       python3-netifaces | ||||
| 
 | ||||
| %{?systemd_requires} | ||||
| 
 | ||||
| %description | ||||
| Cloud-init is a set of init scripts for cloud instances.  Cloud instances | ||||
| need special scripts to run during initialization to retrieve and install | ||||
| ssh keys and to let the user run various scripts. | ||||
| 
 | ||||
| 
 | ||||
| %prep | ||||
| %autosetup -p1 | ||||
| 
 | ||||
| # Change shebangs | ||||
| sed -i -e 's|#!/usr/bin/env python|#!/usr/bin/env python3|' \ | ||||
|        -e 's|#!/usr/bin/python|#!/usr/bin/python3|' tools/* cloudinit/ssh_util.py | ||||
| 
 | ||||
| %build | ||||
| %py3_build | ||||
| 
 | ||||
| 
 | ||||
| %install | ||||
| %py3_install -- | ||||
| 
 | ||||
| %if 0%{?fedora} | ||||
| python3 tools/render-cloudcfg --variant fedora > $RPM_BUILD_ROOT/%{_sysconfdir}/cloud/cloud.cfg | ||||
| %elif 0%{?rhel} | ||||
| cp -p rhel/cloud.cfg $RPM_BUILD_ROOT/%{_sysconfdir}/cloud/cloud.cfg | ||||
| %endif | ||||
| 
 | ||||
| sed -i "s,@@PACKAGED_VERSION@@,%{version}-%{release}," $RPM_BUILD_ROOT/%{python3_sitelib}/cloudinit/version.py | ||||
| 
 | ||||
| mkdir -p $RPM_BUILD_ROOT/var/lib/cloud | ||||
| 
 | ||||
| # /run/cloud-init needs a tmpfiles.d entry | ||||
| mkdir -p $RPM_BUILD_ROOT/run/cloud-init | ||||
| mkdir -p $RPM_BUILD_ROOT/%{_tmpfilesdir} | ||||
| cp -p %{SOURCE1} $RPM_BUILD_ROOT/%{_tmpfilesdir}/%{name}.conf | ||||
| 
 | ||||
| # We supply our own config file since our software differs from Ubuntu's. | ||||
| cp -p rhel/cloud.cfg $RPM_BUILD_ROOT/%{_sysconfdir}/cloud/cloud.cfg | ||||
| 
 | ||||
| mkdir -p $RPM_BUILD_ROOT/%{_sysconfdir}/rsyslog.d | ||||
| cp -p tools/21-cloudinit.conf $RPM_BUILD_ROOT/%{_sysconfdir}/rsyslog.d/21-cloudinit.conf | ||||
| 
 | ||||
| # Make installed NetworkManager hook name less generic | ||||
| mv $RPM_BUILD_ROOT/etc/NetworkManager/dispatcher.d/hook-network-manager \ | ||||
|    $RPM_BUILD_ROOT/etc/NetworkManager/dispatcher.d/cloud-init-azure-hook | ||||
| 
 | ||||
| # Install our own systemd units (rhbz#1440831) | ||||
| mkdir -p $RPM_BUILD_ROOT%{_unitdir} | ||||
| cp rhel/systemd/* $RPM_BUILD_ROOT%{_unitdir}/ | ||||
| 
 | ||||
| [ ! -d $RPM_BUILD_ROOT%{_systemdgeneratordir} ] && mkdir -p $RPM_BUILD_ROOT%{_systemdgeneratordir} | ||||
| python3 tools/render-cloudcfg --variant rhel systemd/cloud-init-generator.tmpl > $RPM_BUILD_ROOT%{_systemdgeneratordir}/cloud-init-generator | ||||
| chmod 755 $RPM_BUILD_ROOT%{_systemdgeneratordir}/cloud-init-generator | ||||
| 
 | ||||
| [ ! -d $RPM_BUILD_ROOT/usr/lib/%{name} ] && mkdir -p $RPM_BUILD_ROOT/usr/lib/%{name} | ||||
| cp -p tools/ds-identify $RPM_BUILD_ROOT%{_libexecdir}/%{name}/ds-identify | ||||
| 
 | ||||
| # installing man pages | ||||
| mkdir -p ${RPM_BUILD_ROOT}%{_mandir}/man1/ | ||||
| for man in cloud-id.1 cloud-init.1 cloud-init-per.1; do | ||||
|     install -c -m 0644 doc/man/${man} ${RPM_BUILD_ROOT}%{_mandir}/man1/${man} | ||||
|     chmod -x ${RPM_BUILD_ROOT}%{_mandir}/man1/* | ||||
| done | ||||
| 
 | ||||
| %clean | ||||
| rm -rf $RPM_BUILD_ROOT | ||||
| 
 | ||||
| 
 | ||||
| %post | ||||
| if [ $1 -eq 1 ] ; then | ||||
|     # Initial installation | ||||
|     # Enabled by default per "runs once then goes away" exception | ||||
|     /bin/systemctl enable cloud-config.service     >/dev/null 2>&1 || : | ||||
|     /bin/systemctl enable cloud-final.service      >/dev/null 2>&1 || : | ||||
|     /bin/systemctl enable cloud-init.service       >/dev/null 2>&1 || : | ||||
|     /bin/systemctl enable cloud-init-local.service >/dev/null 2>&1 || : | ||||
|     /bin/systemctl enable cloud-init.target        >/dev/null 2>&1 || : | ||||
| elif [ $1 -eq 2 ]; then | ||||
|     # Upgrade. If the upgrade is from a version older than 0.7.9-8, | ||||
|     # there will be stale systemd config | ||||
|     /bin/systemctl is-enabled cloud-config.service >/dev/null 2>&1 && | ||||
|       /bin/systemctl reenable cloud-config.service >/dev/null 2>&1 || : | ||||
| 
 | ||||
|     /bin/systemctl is-enabled cloud-final.service >/dev/null 2>&1 && | ||||
|       /bin/systemctl reenable cloud-final.service >/dev/null 2>&1 || : | ||||
| 
 | ||||
|     /bin/systemctl is-enabled cloud-init.service >/dev/null 2>&1 && | ||||
|       /bin/systemctl reenable cloud-init.service >/dev/null 2>&1 || : | ||||
| 
 | ||||
|     /bin/systemctl is-enabled cloud-init-local.service >/dev/null 2>&1 && | ||||
|       /bin/systemctl reenable cloud-init-local.service >/dev/null 2>&1 || : | ||||
| 
 | ||||
|     /bin/systemctl is-enabled cloud-init.target >/dev/null 2>&1 && | ||||
|       /bin/systemctl reenable cloud-init.target >/dev/null 2>&1 || : | ||||
| fi | ||||
| 
 | ||||
| %preun | ||||
| if [ $1 -eq 0 ] ; then | ||||
|     # Package removal, not upgrade | ||||
|     /bin/systemctl --no-reload disable cloud-config.service >/dev/null 2>&1 || : | ||||
|     /bin/systemctl --no-reload disable cloud-final.service  >/dev/null 2>&1 || : | ||||
|     /bin/systemctl --no-reload disable cloud-init.service   >/dev/null 2>&1 || : | ||||
|     /bin/systemctl --no-reload disable cloud-init-local.service >/dev/null 2>&1 || : | ||||
|     /bin/systemctl --no-reload disable cloud-init.target     >/dev/null 2>&1 || : | ||||
|     # One-shot services -> no need to stop | ||||
| fi | ||||
| 
 | ||||
| %postun | ||||
| %systemd_postun cloud-config.service cloud-config.target cloud-final.service cloud-init.service cloud-init.target cloud-init-local.service | ||||
| 
 | ||||
| 
 | ||||
| %files | ||||
| %license LICENSE | ||||
| %doc ChangeLog rhel/README.rhel | ||||
| %config(noreplace) %{_sysconfdir}/cloud/cloud.cfg | ||||
| %dir               %{_sysconfdir}/cloud/cloud.cfg.d | ||||
| %config(noreplace) %{_sysconfdir}/cloud/cloud.cfg.d/*.cfg | ||||
| %doc               %{_sysconfdir}/cloud/cloud.cfg.d/README | ||||
| %dir               %{_sysconfdir}/cloud/templates | ||||
| %config(noreplace) %{_sysconfdir}/cloud/templates/* | ||||
| %{_unitdir}/cloud-config.service | ||||
| %{_unitdir}/cloud-config.target | ||||
| %{_unitdir}/cloud-final.service | ||||
| %{_unitdir}/cloud-init-local.service | ||||
| %{_unitdir}/cloud-init.service | ||||
| %{_unitdir}/cloud-init.target | ||||
| %{_tmpfilesdir}/%{name}.conf | ||||
| %{python3_sitelib}/* | ||||
| %{_libexecdir}/%{name} | ||||
| %{_bindir}/cloud-init* | ||||
| %doc %{_datadir}/doc/%{name} | ||||
| %{_mandir}/man1/* | ||||
| %dir %verify(not mode) /run/cloud-init | ||||
| %dir /var/lib/cloud | ||||
| /etc/NetworkManager/dispatcher.d/cloud-init-azure-hook | ||||
| %{_udevrulesdir}/66-azure-ephemeral.rules | ||||
| %{_sysconfdir}/bash_completion.d/cloud-init | ||||
| %{_bindir}/cloud-id | ||||
| %{_libexecdir}/%{name}/ds-identify | ||||
| %{_systemdgeneratordir}/cloud-init-generator | ||||
| %{_sysconfdir}/systemd/system/sshd-keygen@.service.d/disable-sshd-keygen-if-cloud-init-active.conf | ||||
| 
 | ||||
| %dir %{_sysconfdir}/rsyslog.d | ||||
| %config(noreplace) %{_sysconfdir}/rsyslog.d/21-cloudinit.conf | ||||
| 
 | ||||
| %changelog | ||||
| * Fri Feb 25 2022 Miroslav Rezanina <mrezanin@redhat.com> - 21.1-19 | ||||
| - ci-Fix-IPv6-netmask-format-for-sysconfig-1215.patch [bz#2053546] | ||||
| - ci-Adding-_netdev-to-the-default-mount-configuration.patch [bz#1998445] | ||||
| - ci-Setting-highest-autoconnect-priority-for-network-scr.patch [bz#2036060] | ||||
| - Resolves: bz#2053546 | ||||
|   (cloud-init writes route6-$DEVICE config with a HEX netmask. ip route does not like : Error: inet6 prefix is expected rather than "fd00:fd00:fd00::/ffff:ffff:ffff:ffff::".) | ||||
| - Resolves: bz#1998445 | ||||
|   ([Azure][RHEL-9] ordering cycle exists after reboot) | ||||
| - Resolves: bz#2036060 | ||||
|   ([cloud-init][ESXi][RHEL-9] Failed to config static IP according to VMware Customization Config File) | ||||
| 
 | ||||
| * Fri Feb 11 2022 Miroslav Rezanina <mrezanin@redhat.com> - 21.1-18 | ||||
| - ci-Add-_netdev-option-to-mount-Azure-ephemeral-disk-121.patch [bz#1998445] | ||||
| - Resolves: bz#1998445 | ||||
|   ([Azure][RHEL-9] ordering cycle exists after reboot) | ||||
| 
 | ||||
| * Mon Feb 07 2022 Miroslav Rezanina <mrezanin@redhat.com> - 21.1-17 | ||||
| - ci-Add-flexibility-to-IMDS-api-version-793.patch [bz#2042351] | ||||
| - ci-Azure-helper-Ensure-Azure-http-handler-sleeps-betwee.patch [bz#2042351] | ||||
| - ci-azure-Removing-ability-to-invoke-walinuxagent-799.patch [bz#2042351] | ||||
| - ci-Azure-eject-the-provisioning-iso-before-reporting-re.patch [bz#2042351] | ||||
| - ci-Azure-Retrieve-username-and-hostname-from-IMDS-865.patch [bz#2042351] | ||||
| - ci-Azure-Retry-net-metadata-during-nic-attach-for-non-t.patch [bz#2042351] | ||||
| - ci-Azure-adding-support-for-consuming-userdata-from-IMD.patch [bz#2042351] | ||||
| - Resolves: bz#2042351 | ||||
|   ([RHEL-9] Support for provisioning Azure VM with userdata) | ||||
| 
 | ||||
| * Fri Jan 21 2022 Miroslav Rezanina <mrezanin@redhat.com> - 21.1-16 | ||||
| - ci-Datasource-for-VMware-953.patch [bz#2040090] | ||||
| - ci-Change-netifaces-dependency-to-0.10.4-965.patch [bz#2040090] | ||||
| - ci-Update-dscheck_VMware-s-rpctool-check-970.patch [bz#2040090] | ||||
| - ci-Revert-unnecesary-lcase-in-ds-identify-978.patch [bz#2040090] | ||||
| - ci-Add-netifaces-package-as-a-Requires-in-cloud-init.sp.patch [bz#2040090] | ||||
| - Resolves: bz#2040090 | ||||
|   ([cloud-init][RHEL9] Support for cloud-init datasource 'cloud-init-vmware-guestinfo') | ||||
| 
 | ||||
| * Thu Jan 13 2022 Miroslav Rezanina <mrezanin@redhat.com> - 21.1-15 | ||||
| - ci-Add-gdisk-and-openssl-as-deps-to-fix-UEFI-Azure-init.patch [bz#2032524] | ||||
| - Resolves: bz#2032524 | ||||
|   ([RHEL9] [Azure] cloud-init fails to configure the system) | ||||
| 
 | ||||
| * Tue Dec 14 2021 Miroslav Rezanina <mrezanin@redhat.com> - 21.1-14 | ||||
| - ci-cloudinit-net-handle-two-different-routes-for-the-sa.patch [bz#2028031] | ||||
| - Resolves: bz#2028031 | ||||
|   ([RHEL-9] Above 19.2 of cloud-init fails to configure routes when configuring static and default routes to the same destination IP) | ||||
| 
 | ||||
| * Mon Dec 06 2021 Miroslav Rezanina <mrezanin@redhat.com> - 21.1-13 | ||||
| - ci-fix-error-on-upgrade-caused-by-new-vendordata2-attri.patch [bz#2028381] | ||||
| - Resolves: bz#2028381 | ||||
|   (cloud-init.service fails to start after package update) | ||||
| 
 | ||||
| * Mon Nov 01 2021 Miroslav Rezanina <mrezanin@redhat.com> - 21.1-12 | ||||
| - ci-remove-unnecessary-EOF-string-in-disable-sshd-keygen.patch [bz#2016305] | ||||
| - Resolves: bz#2016305 | ||||
|   (disable-sshd-keygen-if-cloud-init-active.conf:8: Missing '=', ignoring line) | ||||
| 
 | ||||
| * Tue Oct 26 2021 Miroslav Rezanina <mrezanin@redhat.com> - 21.1-11 | ||||
| - ci-cc_ssh.py-fix-private-key-group-owner-and-permission.patch [bz#2015974] | ||||
| - Resolves: bz#2015974 | ||||
|   (cloud-init fails to set host key permissions correctly) | ||||
| 
 | ||||
| * Mon Oct 18 2021 Miroslav Rezanina <mrezanin@redhat.com> - 21.1-10 | ||||
| - ci-Inhibit-sshd-keygen-.service-if-cloud-init-is-active.patch [bz#2002492] | ||||
| - ci-add-the-drop-in-also-in-the-files-section-of-cloud-i.patch [bz#2002492] | ||||
| - Resolves: bz#2002492 | ||||
|   (util.py[WARNING]: Failed generating key type rsa to file /etc/ssh/ssh_host_rsa_key) | ||||
| 
 | ||||
| * Fri Sep 10 2021 Miroslav Rezanina <mrezanin@redhat.com> - 21.1-9 | ||||
| - ci-ssh_utils.py-ignore-when-sshd_config-options-are-not.patch [bz#2002302] | ||||
| - Resolves: bz#2002302 | ||||
|   (cloud-init fails with ValueError: need more than 1 value to unpack[rhel-9]) | ||||
| 
 | ||||
| * Fri Sep 03 2021 Miroslav Rezanina <mrezanin@redhat.com> - 21.1-8 | ||||
| - ci-Fix-home-permissions-modified-by-ssh-module-SC-338-9.patch [bz#1995843] | ||||
| - Resolves: bz#1995843 | ||||
|   ([cloudinit]  Fix home permissions modified by ssh module) | ||||
| 
 | ||||
| * Mon Aug 16 2021 Miroslav Rezanina <mrezanin@redhat.com> - 21.1-7 | ||||
| - ci-Stop-copying-ssh-system-keys-and-check-folder-permis.patch [bz#1979099] | ||||
| - ci-Report-full-specific-version-with-cloud-init-version.patch [bz#1971002] | ||||
| - Resolves: bz#1979099 | ||||
|   ([cloud-init]Customize ssh AuthorizedKeysFile causes login failure[RHEL-9.0]) | ||||
| - Resolves: bz#1971002 | ||||
|   (cloud-init should report full specific full version with "cloud-init --version" [rhel-9]) | ||||
| 
 | ||||
| * Mon Aug 09 2021 Mohan Boddu <mboddu@redhat.com> - 21.1-6 | ||||
| - Rebuilt for IMA sigs, glibc 2.34, aarch64 flags | ||||
|   Related: rhbz#1991688 | ||||
| 
 | ||||
| * Fri Aug 06 2021 Miroslav Rezanina <mrezanin@redhat.com> - 21.1-5 | ||||
| - ci-Add-dhcp-client-as-a-dependency.patch [bz#1964900] | ||||
| - Resolves: bz#1964900 | ||||
|   ([Azure][RHEL-9] cloud-init must require dhcp-client on Azure) | ||||
| 
 | ||||
| * Thu Jul 15 2021 Miroslav Rezanina <mrezanin@redhat.com> - 21.1-4 | ||||
| - ci-write-passwords-only-to-serial-console-lock-down-clo.patch [bz#1945892] | ||||
| - ci-ssh-util-allow-cloudinit-to-merge-all-ssh-keys-into-.patch [bz#1979099] | ||||
| - Resolves: bz#1945892 | ||||
|   (CVE-2021-3429 cloud-init: randomly generated passwords logged in clear-text to world-readable file [rhel-9.0]) | ||||
| - Resolves: bz#1979099 | ||||
|   ([cloud-init]Customize ssh AuthorizedKeysFile causes login failure[RHEL-9.0]) | ||||
| 
 | ||||
| * Fri Jul 02 2021 Miroslav Rezanina <mrezanin@redhat.com> - 21.1-3 | ||||
| - ci-Fix-requiring-device-number-on-EC2-derivatives-836.patch [bz#1943511] | ||||
| - Resolves: bz#1943511 | ||||
|   ([Aliyun][RHEL9.0][cloud-init] cloud-init service failed to start with Alibaba instance) | ||||
| 
 | ||||
| * Mon Jun 21 2021 Miroslav Rezanina <mrezanin@redhat.com> - 21.1-2 | ||||
| - ci-rhel-cloud.cfg-remove-ssh_genkeytypes-in-settings.py.patch [bz#1970909] | ||||
| - ci-Use-_systemdgeneratordir-macro-for-cloud-init-genera.patch [bz#1971480] | ||||
| - Resolves: bz#1970909 | ||||
|   ([cloud-init] From RHEL 82+ cloud-init no longer displays sshd keys fingerprints from instance launched from a backup image[rhel-9]) | ||||
| - Resolves: bz#1971480 | ||||
|   (Use systemdgenerators macro in spec file) | ||||
| 
 | ||||
| * Thu Jun 10 2021 Miroslav Rezanina <mrezanin@redhat.com> - 21.1-1 | ||||
| - Rebase to 21.1 [bz#1958209] | ||||
| - Resolves: bz#1958209 | ||||
|   ([RHEL-9.0] Rebase cloud-init to 21.1) | ||||
| 
 | ||||
| * Wed Apr 21 2021 Miroslav Rezanina <mrezanin@redhat.com> - 20.4-5 | ||||
| - Removing python-mock dependency | ||||
| - Resolves: bz#1922323 | ||||
| 
 | ||||
| * Thu Apr 15 2021 Mohan Boddu <mboddu@redhat.com> - 20.4-4 | ||||
| - Rebuilt for RHEL 9 BETA on Apr 15th 2021. Related: rhbz#1947937 | ||||
| 
 | ||||
| * Wed Apr 07 2021 Miroslav Rezanina <mrezanin@redhat.com> - 20.4-3.el9 | ||||
| - ci-Removing-python-nose-and-python-tox-as-dependency.patch [bz#1916777 bz#1918892] | ||||
| - Resolves: bz#1916777 | ||||
|   (cloud-init requires python-nose) | ||||
| - Resolves: bz#1918892 | ||||
|   (cloud-init requires tox) | ||||
| 
 | ||||
| * Tue Jan 26 2021 Fedora Release Engineering <releng@fedoraproject.org> - 20.4-2 | ||||
| - Rebuilt for https://fedoraproject.org/wiki/Fedora_34_Mass_Rebuild | ||||
| 
 | ||||
| * Thu Dec 03 2020 Eduardo Otubo <otubo@redhat.com> - 20.4-2 | ||||
| - Updated to 20.4 [bz#1902250] | ||||
| 
 | ||||
| * Mon Sep 07 2020 Eduardo Otubo <otubo@redhat.com> - 19.4-7 | ||||
| - Fix execution fail with backtrace | ||||
| 
 | ||||
| * Mon Sep 07 2020 Eduardo Otubo <otubo@redhat.com> - 19.4-6 | ||||
| - Adding missing patches to spec file | ||||
| 
 | ||||
| * Mon Jul 27 2020 Fedora Release Engineering <releng@fedoraproject.org> - 19.4-5 | ||||
| - Rebuilt for https://fedoraproject.org/wiki/Fedora_33_Mass_Rebuild | ||||
| 
 | ||||
| * Mon May 25 2020 Miro Hrončok <mhroncok@redhat.com> - 19.4-4 | ||||
| - Rebuilt for Python 3.9 | ||||
| 
 | ||||
| * Tue Apr 14 2020 Eduardo Otubo <otubo@redhat.com> - 19.4-3 | ||||
| - Fix BZ#1798729 - CVE-2020-8632 cloud-init: Too short random password length | ||||
|   in cc_set_password in config/cc_set_passwords.py | ||||
| - Fix BZ#1798732 - CVE-2020-8631 cloud-init: Use of random.choice when | ||||
|   generating random password | ||||
| 
 | ||||
| * Sun Feb 23 2020 Dusty Mabe <dusty@dustymabe.com> - 19.4-2 | ||||
| - Fix sed substitutions for unittest2 and assertItemsEqual | ||||
| - Fix failing unittests by including `BuildRequires: passwd` | ||||
|     - The unittests started failing because of upstream commit | ||||
|       7c07af2 where cloud-init can now support using `usermod` to | ||||
|       lock an account if `passwd` isn't installed. Since `passwd` | ||||
|       wasn't installed in our mock buildroot it was choosing to | ||||
|       use `usermod` and the unittests were failing. See: | ||||
|       https://github.com/canonical/cloud-init/commit/7c07af2 | ||||
| - Add missing files to package | ||||
|     - /usr/bin/cloud-id | ||||
|     - /usr/share/bash-completion/completions/cloud-init | ||||
| 
 | ||||
| * Fri Feb 14 2020 Eduardo Otubo <otubo@redhat.com> - 19.4-1 | ||||
| - Updated to 19.4 | ||||
| - Rebasing the Fedora specific patches but removing patches that don't apply anymore | ||||
| 
 | ||||
| * Tue Jan 28 2020 Fedora Release Engineering <releng@fedoraproject.org> - 17.1-15 | ||||
| - Rebuilt for https://fedoraproject.org/wiki/Fedora_32_Mass_Rebuild | ||||
| 
 | ||||
| * Fri Nov 08 2019 Miro Hrončok <mhroncok@redhat.com> - 17.1-14 | ||||
| - Drop unneeded build dependency on python3-unittest2 | ||||
| 
 | ||||
| * Thu Oct 03 2019 Miro Hrončok <mhroncok@redhat.com> - 17.1-13 | ||||
| - Rebuilt for Python 3.8.0rc1 (#1748018) | ||||
| 
 | ||||
| * Sun Aug 18 2019 Miro Hrončok <mhroncok@redhat.com> - 17.1-12 | ||||
| - Rebuilt for Python 3.8 | ||||
| 
 | ||||
| * Wed Jul 24 2019 Fedora Release Engineering <releng@fedoraproject.org> - 17.1-11 | ||||
| - Rebuilt for https://fedoraproject.org/wiki/Fedora_31_Mass_Rebuild | ||||
| 
 | ||||
| * Tue Apr 23 2019 Björn Esser <besser82@fedoraproject.org> - 17.1-10 | ||||
| - Add patch to replace platform.dist() [RH:1695953] | ||||
| - Add (Build)Requires: python3-distro | ||||
| 
 | ||||
| * Tue Apr 23 2019 Björn Esser <besser82@fedoraproject.org> - 17.1-9 | ||||
| - Fix %%systemd_postun macro [RH:1695953] | ||||
| - Add patch to fix failing test for EPOCHREALTIME bash env [RH:1695953] | ||||
| 
 | ||||
| * Thu Jan 31 2019 Fedora Release Engineering <releng@fedoraproject.org> - 17.1-8 | ||||
| - Rebuilt for https://fedoraproject.org/wiki/Fedora_30_Mass_Rebuild | ||||
| 
 | ||||
| * Thu Jul 12 2018 Fedora Release Engineering <releng@fedoraproject.org> - 17.1-7 | ||||
| - Rebuilt for https://fedoraproject.org/wiki/Fedora_29_Mass_Rebuild | ||||
| 
 | ||||
| * Mon Jun 18 2018 Miro Hrončok <mhroncok@redhat.com> - 17.1-6 | ||||
| - Rebuilt for Python 3.7 | ||||
| 
 | ||||
| * Sat Apr 21 2018 Lars Kellogg-Stedman <lars@redhat.com> - 17.1-5 | ||||
| - Enable dhcp on EC2 interfaces with only local ipv4 addresses [RH:1569321] | ||||
|   (cherry pick upstream commit eb292c1) | ||||
| 
 | ||||
| * Mon Mar 26 2018 Patrick Uiterwijk <puiterwijk@redhat.com> - 17.1-4 | ||||
| - Make sure the patch does not add infinitely many entries | ||||
| 
 | ||||
| * Mon Mar 26 2018 Patrick Uiterwijk <puiterwijk@redhat.com> - 17.1-3 | ||||
| - Add patch to retain old values of /etc/sysconfig/network | ||||
| 
 | ||||
| * Wed Feb 07 2018 Fedora Release Engineering <releng@fedoraproject.org> - 17.1-2 | ||||
| - Rebuilt for https://fedoraproject.org/wiki/Fedora_28_Mass_Rebuild | ||||
| 
 | ||||
| * Wed Oct  4 2017 Garrett Holmstrom <gholms@fedoraproject.org> - 17.1-1 | ||||
| - Updated to 17.1 | ||||
| 
 | ||||
| * Tue Sep 26 2017 Ryan McCabe <rmccabe@redhat.com> 0.7.9-10 | ||||
| - AliCloud: Add support for the Alibaba Cloud datasource (rhbz#1482547) | ||||
| 
 | ||||
| * Thu Jun 22 2017 Lars Kellogg-Stedman <lars@redhat.com> 0.7.9-9 | ||||
| - RHEL/CentOS: Fix default routes for IPv4/IPv6 configuration. (rhbz#1438082) | ||||
| - azure: ensure that networkmanager hook script runs (rhbz#1440831 rhbz#1460206) | ||||
| - Fix ipv6 subnet detection (rhbz#1438082) | ||||
| 
 | ||||
| * Tue May 23 2017 Lars Kellogg-Stedman <lars@redhat.com> 0.7.9-8 | ||||
| - Update patches | ||||
| 
 | ||||
| * Mon May 22 2017 Lars Kellogg-Stedman <lars@redhat.com> 0.7.9-7 | ||||
| - Add missing sysconfig unit test data (rhbz#1438082) | ||||
| - Fix dual stack IPv4/IPv6 configuration for RHEL (rhbz#1438082) | ||||
| - sysconfig: Raise ValueError when multiple default gateways are present. (rhbz#1438082) | ||||
| - Bounce network interface for Azure when using the built-in path. (rhbz#1434109) | ||||
| - Do not write NM_CONTROLLED=no in generated interface config files (rhbz#1385172) | ||||
| 
 | ||||
| * Wed May 10 2017 Lars Kellogg-Stedman <lars@redhat.com> 0.7.9-6 | ||||
| - add power-state-change module to cloud_final_modules (rhbz#1252477) | ||||
| - remove 'tee' command from logging configuration (rhbz#1424612) | ||||
| - limit permissions on def_log_file (rhbz#1424612) | ||||
| - Bounce network interface for Azure when using the built-in path. (rhbz#1434109) | ||||
| - OpenStack: add 'dvs' to the list of physical link types. (rhbz#1442783) | ||||
| 
 | ||||
| * Wed May 10 2017 Lars Kellogg-Stedman <lars@redhat.com> 0.7.9-5 | ||||
| - systemd: replace generator with unit conditionals (rhbz#1440831) | ||||
| 
 | ||||
| * Thu Apr 13 2017 Charalampos Stratakis <cstratak@redhat.com> 0.7.9-4 | ||||
| - Import to RHEL 7 | ||||
| Resolves: rhbz#1427280 | ||||
| 
 | ||||
| * Tue Mar 07 2017 Lars Kellogg-Stedman <lars@redhat.com> 0.7.9-3 | ||||
| - fixes for network config generation | ||||
| - avoid dependency cycle at boot (rhbz#1420946) | ||||
| 
 | ||||
| * Tue Jan 17 2017 Lars Kellogg-Stedman <lars@redhat.com> 0.7.9-2 | ||||
| - use timeout from datasource config in openstack get_data (rhbz#1408589) | ||||
| 
 | ||||
| * Thu Dec 01 2016 Lars Kellogg-Stedman <lars@redhat.com> - 0.7.9-1 | ||||
| - Rebased on upstream 0.7.9. | ||||
| - Remove dependency on run-parts | ||||
| 
 | ||||
| * Wed Jan 06 2016 Lars Kellogg-Stedman <lars@redhat.com> - 0.7.6-8 | ||||
| - make rh_subscription plugin do nothing in the absence of a valid | ||||
|   configuration [RH:1295953] | ||||
| - move rh_subscription module to cloud_config stage | ||||
| 
 | ||||
| * Wed Jan 06 2016 Lars Kellogg-Stedman <lars@redhat.com> - 0.7.6-7 | ||||
| - correct permissions on /etc/ssh/sshd_config [RH:1296191] | ||||
| 
 | ||||
| * Thu Sep 03 2015 Lars Kellogg-Stedman <lars@redhat.com> - 0.7.6-6 | ||||
| - rebuild for ppc64le | ||||
| 
 | ||||
| * Tue Jul 07 2015 Lars Kellogg-Stedman <lars@redhat.com> - 0.7.6-5 | ||||
| - bump revision for new build | ||||
| 
 | ||||
| * Tue Jul 07 2015 Lars Kellogg-Stedman <lars@redhat.com> - 0.7.6-4 | ||||
| - ensure rh_subscription plugin is enabled by default | ||||
| 
 | ||||
| * Wed Apr 29 2015 Lars Kellogg-Stedman <lars@redhat.com> - 0.7.6-3 | ||||
| - added dependency on python-jinja2 [RH:1215913] | ||||
| - added rhn_subscription plugin [RH:1227393] | ||||
| - require pyserial to support smartos data source [RH:1226187] | ||||
| 
 | ||||
| * Fri Jan 16 2015 Lars Kellogg-Stedman <lars@redhat.com> - 0.7.6-2 | ||||
| - Rebased RHEL version to Fedora rawhide | ||||
| - Backported fix for https://bugs.launchpad.net/cloud-init/+bug/1246485 | ||||
| - Backported fix for https://bugs.launchpad.net/cloud-init/+bug/1411829 | ||||
| 
 | ||||
| * Fri Nov 14 2014 Colin Walters <walters@redhat.com> - 0.7.6-1 | ||||
| - New upstream version [RH:974327] | ||||
| - Drop python-cheetah dependency (same as above bug) | ||||