Compare commits

...

5 Commits

Author SHA1 Message Date
eabdullin 976e92ccc1 Sync with stable 2024-01-15 12:07:54 +03:00
CentOS Sources 2ceec41e2c import cloud-init-22.1-8.el8 2023-05-16 07:06:51 +00:00
CentOS Sources c3ed77caa4 import cloud-init-22.1-6.el8_7.2 2023-02-21 09:41:11 +00:00
CentOS Sources 8706acdb01 import cloud-init-22.1-5.el8 2022-11-08 11:53:07 +00:00
CentOS Sources 5b0b552e23 import cloud-init-21.1-15.el8_6.3 2022-06-28 11:00:34 +00:00
60 changed files with 5720 additions and 10426 deletions

View File

@ -1 +1 @@
2ae378aa2ae23b34b0ff123623ba5e2fbdc4928d SOURCES/cloud-init-21.1.tar.gz
d34297c11997da2f026a5518f92539f7fb135cc2 SOURCES/cloud-init-23.1.1.tar.gz

.gitignore
View File

@ -1 +1 @@
SOURCES/cloud-init-21.1.tar.gz
SOURCES/cloud-init-23.1.1.tar.gz

View File

@ -1,561 +0,0 @@
From 074cb9b011623849cfa95c1d7cc813bb28f03ff0 Mon Sep 17 00:00:00 2001
From: Eduardo Otubo <otubo@redhat.com>
Date: Fri, 7 May 2021 13:36:03 +0200
Subject: Add initial redhat setup
Merged patches (21.1):
- 915d30ad Change gating file to correct rhel version
- 311f318d Removing net-tools dependency
- 74731806 Adding man pages to Red Hat spec file
- 758d333d Removing blocking test from yaml configuration file
- c7e7c59c Changing permission of cloud-init-generator to 755
- 8b85abbb Installing man pages in the correct place with correct permissions
- c6808d8d Fix unit failure of cloud-final.service if NetworkManager was not present.
- 11866ef6 Report full specific version with "cloud-init --version"
Rebase notes (18.5):
- added bash_completion file
- added cloud-id file
Merged patches (20.3):
- 01900d0 changing ds-identify patch from /usr/lib to /usr/libexec
- 7f47ca3 Render the generator from template instead of cp
Merged patches (19.4):
- 4ab5a61 Fix for network configuration not persisting after reboot
- 84cf125 Removing cloud-user from wheel
- 31290ab Adding gating tests for Azure, ESXi and AWS
Merged patches (18.5):
- 2d6b469 add power-state-change module to cloud_final_modules
- 764159f Adding systemd mount options to wait for cloud-init
- da4d99e Adding disk_setup to rhel/cloud.cfg
- f5c6832 Enable cloud-init by default on vmware
Conflicts:
cloudinit/config/cc_chef.py:
- Updated header documentation text
- Replacing double quotes by simple quotes
setup.py:
- Adding missing cmdclass info
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
---
.gitignore | 1 +
cloudinit/config/cc_chef.py | 67 +++-
cloudinit/settings.py | 7 +-
redhat/.gitignore | 1 +
redhat/Makefile | 71 ++++
redhat/Makefile.common | 37 ++
redhat/cloud-init-tmpfiles.conf | 1 +
redhat/cloud-init.spec.template | 530 ++++++++++++++++++++++++++
redhat/gating.yaml | 8 +
redhat/rpmbuild/BUILD/.gitignore | 3 +
redhat/rpmbuild/RPMS/.gitignore | 3 +
redhat/rpmbuild/SOURCES/.gitignore | 3 +
redhat/rpmbuild/SPECS/.gitignore | 3 +
redhat/rpmbuild/SRPMS/.gitignore | 3 +
redhat/scripts/frh.py | 27 ++
redhat/scripts/git-backport-diff | 327 ++++++++++++++++
redhat/scripts/git-compile-check | 215 +++++++++++
redhat/scripts/process-patches.sh | 77 ++++
redhat/scripts/tarball_checksum.sh | 3 +
rhel/README.rhel | 5 +
rhel/cloud-init-tmpfiles.conf | 1 +
rhel/cloud.cfg | 69 ++++
rhel/systemd/cloud-config.service | 18 +
rhel/systemd/cloud-config.target | 11 +
rhel/systemd/cloud-final.service | 24 ++
rhel/systemd/cloud-init-local.service | 31 ++
rhel/systemd/cloud-init.service | 25 ++
rhel/systemd/cloud-init.target | 7 +
setup.py | 23 +-
tools/read-version | 28 +-
30 files changed, 1579 insertions(+), 50 deletions(-)
create mode 100644 redhat/.gitignore
create mode 100644 redhat/Makefile
create mode 100644 redhat/Makefile.common
create mode 100644 redhat/cloud-init-tmpfiles.conf
create mode 100644 redhat/cloud-init.spec.template
create mode 100644 redhat/gating.yaml
create mode 100644 redhat/rpmbuild/BUILD/.gitignore
create mode 100644 redhat/rpmbuild/RPMS/.gitignore
create mode 100644 redhat/rpmbuild/SOURCES/.gitignore
create mode 100644 redhat/rpmbuild/SPECS/.gitignore
create mode 100644 redhat/rpmbuild/SRPMS/.gitignore
create mode 100755 redhat/scripts/frh.py
create mode 100755 redhat/scripts/git-backport-diff
create mode 100755 redhat/scripts/git-compile-check
create mode 100755 redhat/scripts/process-patches.sh
create mode 100755 redhat/scripts/tarball_checksum.sh
create mode 100644 rhel/README.rhel
create mode 100644 rhel/cloud-init-tmpfiles.conf
create mode 100644 rhel/cloud.cfg
create mode 100644 rhel/systemd/cloud-config.service
create mode 100644 rhel/systemd/cloud-config.target
create mode 100644 rhel/systemd/cloud-final.service
create mode 100644 rhel/systemd/cloud-init-local.service
create mode 100644 rhel/systemd/cloud-init.service
create mode 100644 rhel/systemd/cloud-init.target
diff --git a/cloudinit/config/cc_chef.py b/cloudinit/config/cc_chef.py
index aaf71366..97ef649a 100644
--- a/cloudinit/config/cc_chef.py
+++ b/cloudinit/config/cc_chef.py
@@ -6,7 +6,70 @@
#
# This file is part of cloud-init. See LICENSE file for license information.
-"""Chef: module that configures, starts and installs chef."""
+"""
+Chef
+----
+**Summary:** module that configures, starts and installs chef.
+
+This module enables chef to be installed (from packages or
+from gems, or from omnibus). Before this occurs chef configurations are
+written to disk (validation.pem, client.pem, firstboot.json, client.rb),
+and needed chef folders/directories are created (/etc/chef and /var/log/chef
+and so-on). Then once installing proceeds correctly if configured chef will
+be started (in daemon mode or in non-daemon mode) and then once that has
+finished (if ran in non-daemon mode this will be when chef finishes
+converging, if ran in daemon mode then no further actions are possible since
+chef will have forked into its own process) then a post run function can
+run that can do finishing activities (such as removing the validation pem
+file).
+
+**Internal name:** ``cc_chef``
+
+**Module frequency:** per always
+
+**Supported distros:** all
+
+**Config keys**::
+
+ chef:
+ directories: (defaulting to /etc/chef, /var/log/chef, /var/lib/chef,
+ /var/cache/chef, /var/backups/chef, /run/chef)
+ validation_cert: (optional string to be written to file validation_key)
+ special value 'system' means set use existing file
+ validation_key: (optional the path for validation_cert. default
+ /etc/chef/validation.pem)
+ firstboot_path: (path to write run_list and initial_attributes keys that
+ should also be present in this configuration, defaults
+ to /etc/chef/firstboot.json)
+ exec: boolean to run or not run chef (defaults to false, unless
+ a gem installed is requested
+ where this will then default
+ to true)
+
+ chef.rb template keys (if falsey, then will be skipped and not
+ written to /etc/chef/client.rb)
+
+ chef:
+ client_key:
+ encrypted_data_bag_secret:
+ environment:
+ file_backup_path:
+ file_cache_path:
+ json_attribs:
+ log_level:
+ log_location:
+ node_name:
+ omnibus_url:
+ omnibus_url_retries:
+ omnibus_version:
+ pid_file:
+ server_url:
+ show_time:
+ ssl_verify_mode:
+ validation_cert:
+ validation_key:
+ validation_name:
+"""
import itertools
import json
@@ -31,7 +94,7 @@ CHEF_DIRS = tuple([
'/var/lib/chef',
'/var/cache/chef',
'/var/backups/chef',
- '/var/run/chef',
+ '/run/chef',
])
REQUIRED_CHEF_DIRS = tuple([
'/etc/chef',
diff --git a/cloudinit/settings.py b/cloudinit/settings.py
index 91e1bfe7..e690c0fd 100644
--- a/cloudinit/settings.py
+++ b/cloudinit/settings.py
@@ -47,13 +47,16 @@ CFG_BUILTIN = {
],
'def_log_file': '/var/log/cloud-init.log',
'log_cfgs': [],
- 'syslog_fix_perms': ['syslog:adm', 'root:adm', 'root:wheel', 'root:root'],
+ 'mount_default_fields': [None, None, 'auto', 'defaults,nofail', '0', '2'],
+ 'ssh_deletekeys': False,
+ 'ssh_genkeytypes': [],
+ 'syslog_fix_perms': [],
'system_info': {
'paths': {
'cloud_dir': '/var/lib/cloud',
'templates_dir': '/etc/cloud/templates/',
},
- 'distro': 'ubuntu',
+ 'distro': 'rhel',
'network': {'renderers': None},
},
'vendor_data': {'enabled': True, 'prefix': []},
diff --git a/rhel/README.rhel b/rhel/README.rhel
new file mode 100644
index 00000000..aa29630d
--- /dev/null
+++ b/rhel/README.rhel
@@ -0,0 +1,5 @@
+The following cloud-init modules are currently unsupported on this OS:
+ - apt_update_upgrade ('apt_update', 'apt_upgrade', 'apt_mirror', 'apt_preserve_sources_list', 'apt_old_mirror', 'apt_sources', 'debconf_selections', 'packages' options)
+ - byobu ('byobu_by_default' option)
+ - chef
+ - grub_dpkg
diff --git a/rhel/cloud-init-tmpfiles.conf b/rhel/cloud-init-tmpfiles.conf
new file mode 100644
index 00000000..0c6d2a3b
--- /dev/null
+++ b/rhel/cloud-init-tmpfiles.conf
@@ -0,0 +1 @@
+d /run/cloud-init 0700 root root - -
diff --git a/rhel/cloud.cfg b/rhel/cloud.cfg
new file mode 100644
index 00000000..82e8bf62
--- /dev/null
+++ b/rhel/cloud.cfg
@@ -0,0 +1,69 @@
+users:
+ - default
+
+disable_root: 1
+ssh_pwauth: 0
+
+mount_default_fields: [~, ~, 'auto', 'defaults,nofail,x-systemd.requires=cloud-init.service', '0', '2']
+resize_rootfs_tmp: /dev
+ssh_deletekeys: 0
+ssh_genkeytypes: ~
+syslog_fix_perms: ~
+disable_vmware_customization: false
+
+cloud_init_modules:
+ - disk_setup
+ - migrator
+ - bootcmd
+ - write-files
+ - growpart
+ - resizefs
+ - set_hostname
+ - update_hostname
+ - update_etc_hosts
+ - rsyslog
+ - users-groups
+ - ssh
+
+cloud_config_modules:
+ - mounts
+ - locale
+ - set-passwords
+ - rh_subscription
+ - yum-add-repo
+ - package-update-upgrade-install
+ - timezone
+ - puppet
+ - chef
+ - salt-minion
+ - mcollective
+ - disable-ec2-metadata
+ - runcmd
+
+cloud_final_modules:
+ - rightscale_userdata
+ - scripts-per-once
+ - scripts-per-boot
+ - scripts-per-instance
+ - scripts-user
+ - ssh-authkey-fingerprints
+ - keys-to-console
+ - phone-home
+ - final-message
+ - power-state-change
+
+system_info:
+ default_user:
+ name: cloud-user
+ lock_passwd: true
+ gecos: Cloud User
+ groups: [adm, systemd-journal]
+ sudo: ["ALL=(ALL) NOPASSWD:ALL"]
+ shell: /bin/bash
+ distro: rhel
+ paths:
+ cloud_dir: /var/lib/cloud
+ templates_dir: /etc/cloud/templates
+ ssh_svcname: sshd
+
+# vim:syntax=yaml
diff --git a/rhel/systemd/cloud-config.service b/rhel/systemd/cloud-config.service
new file mode 100644
index 00000000..f3dcd4be
--- /dev/null
+++ b/rhel/systemd/cloud-config.service
@@ -0,0 +1,18 @@
+[Unit]
+Description=Apply the settings specified in cloud-config
+After=network-online.target cloud-config.target
+Wants=network-online.target cloud-config.target
+ConditionPathExists=!/etc/cloud/cloud-init.disabled
+ConditionKernelCommandLine=!cloud-init=disabled
+
+[Service]
+Type=oneshot
+ExecStart=/usr/bin/cloud-init modules --mode=config
+RemainAfterExit=yes
+TimeoutSec=0
+
+# Output needs to appear in instance console output
+StandardOutput=journal+console
+
+[Install]
+WantedBy=cloud-init.target
diff --git a/rhel/systemd/cloud-config.target b/rhel/systemd/cloud-config.target
new file mode 100644
index 00000000..ae9b7d02
--- /dev/null
+++ b/rhel/systemd/cloud-config.target
@@ -0,0 +1,11 @@
+# cloud-init normally emits a "cloud-config" upstart event to inform third
+# parties that cloud-config is available, which does us no good when we're
+# using systemd. cloud-config.target serves as this synchronization point
+# instead. Services that would "start on cloud-config" with upstart can
+# instead use "After=cloud-config.target" and "Wants=cloud-config.target"
+# as appropriate.
+
+[Unit]
+Description=Cloud-config availability
+Wants=cloud-init-local.service cloud-init.service
+After=cloud-init-local.service cloud-init.service
diff --git a/rhel/systemd/cloud-final.service b/rhel/systemd/cloud-final.service
new file mode 100644
index 00000000..e281c0cf
--- /dev/null
+++ b/rhel/systemd/cloud-final.service
@@ -0,0 +1,24 @@
+[Unit]
+Description=Execute cloud user/final scripts
+After=network-online.target cloud-config.service rc-local.service
+Wants=network-online.target cloud-config.service
+ConditionPathExists=!/etc/cloud/cloud-init.disabled
+ConditionKernelCommandLine=!cloud-init=disabled
+
+[Service]
+Type=oneshot
+ExecStart=/usr/bin/cloud-init modules --mode=final
+RemainAfterExit=yes
+TimeoutSec=0
+KillMode=process
+# Restart NetworkManager if it is present and running.
+ExecStartPost=/bin/sh -c 'u=NetworkManager.service; \
+ out=$(systemctl show --property=SubState $u) || exit; \
+ [ "$out" = "SubState=running" ] || exit 0; \
+ systemctl reload-or-try-restart $u'
+
+# Output needs to appear in instance console output
+StandardOutput=journal+console
+
+[Install]
+WantedBy=cloud-init.target
diff --git a/rhel/systemd/cloud-init-local.service b/rhel/systemd/cloud-init-local.service
new file mode 100644
index 00000000..8f9f6c9f
--- /dev/null
+++ b/rhel/systemd/cloud-init-local.service
@@ -0,0 +1,31 @@
+[Unit]
+Description=Initial cloud-init job (pre-networking)
+DefaultDependencies=no
+Wants=network-pre.target
+After=systemd-remount-fs.service
+Requires=dbus.socket
+After=dbus.socket
+Before=NetworkManager.service network.service
+Before=network-pre.target
+Before=shutdown.target
+Before=firewalld.target
+Conflicts=shutdown.target
+RequiresMountsFor=/var/lib/cloud
+ConditionPathExists=!/etc/cloud/cloud-init.disabled
+ConditionKernelCommandLine=!cloud-init=disabled
+
+[Service]
+Type=oneshot
+ExecStartPre=/bin/mkdir -p /run/cloud-init
+ExecStartPre=/sbin/restorecon /run/cloud-init
+ExecStartPre=/usr/bin/touch /run/cloud-init/enabled
+ExecStart=/usr/bin/cloud-init init --local
+ExecStart=/bin/touch /run/cloud-init/network-config-ready
+RemainAfterExit=yes
+TimeoutSec=0
+
+# Output needs to appear in instance console output
+StandardOutput=journal+console
+
+[Install]
+WantedBy=cloud-init.target
diff --git a/rhel/systemd/cloud-init.service b/rhel/systemd/cloud-init.service
new file mode 100644
index 00000000..d0023a05
--- /dev/null
+++ b/rhel/systemd/cloud-init.service
@@ -0,0 +1,25 @@
+[Unit]
+Description=Initial cloud-init job (metadata service crawler)
+Wants=cloud-init-local.service
+Wants=sshd-keygen.service
+Wants=sshd.service
+After=cloud-init-local.service
+After=NetworkManager.service network.service
+Before=network-online.target
+Before=sshd-keygen.service
+Before=sshd.service
+Before=systemd-user-sessions.service
+ConditionPathExists=!/etc/cloud/cloud-init.disabled
+ConditionKernelCommandLine=!cloud-init=disabled
+
+[Service]
+Type=oneshot
+ExecStart=/usr/bin/cloud-init init
+RemainAfterExit=yes
+TimeoutSec=0
+
+# Output needs to appear in instance console output
+StandardOutput=journal+console
+
+[Install]
+WantedBy=cloud-init.target
diff --git a/rhel/systemd/cloud-init.target b/rhel/systemd/cloud-init.target
new file mode 100644
index 00000000..083c3b6f
--- /dev/null
+++ b/rhel/systemd/cloud-init.target
@@ -0,0 +1,7 @@
+# cloud-init target is enabled by cloud-init-generator
+# To disable it you can either:
+# a.) boot with kernel cmdline of 'cloud-init=disabled'
+# b.) touch a file /etc/cloud/cloud-init.disabled
+[Unit]
+Description=Cloud-init target
+After=multi-user.target
diff --git a/setup.py b/setup.py
index cbacf48e..d5cd01a4 100755
--- a/setup.py
+++ b/setup.py
@@ -125,14 +125,6 @@ INITSYS_FILES = {
'sysvinit_deb': [f for f in glob('sysvinit/debian/*') if is_f(f)],
'sysvinit_openrc': [f for f in glob('sysvinit/gentoo/*') if is_f(f)],
'sysvinit_suse': [f for f in glob('sysvinit/suse/*') if is_f(f)],
- 'systemd': [render_tmpl(f)
- for f in (glob('systemd/*.tmpl') +
- glob('systemd/*.service') +
- glob('systemd/*.target'))
- if (is_f(f) and not is_generator(f))],
- 'systemd.generators': [
- render_tmpl(f, mode=0o755)
- for f in glob('systemd/*') if is_f(f) and is_generator(f)],
'upstart': [f for f in glob('upstart/*') if is_f(f)],
}
INITSYS_ROOTS = {
@@ -142,9 +134,6 @@ INITSYS_ROOTS = {
'sysvinit_deb': 'etc/init.d',
'sysvinit_openrc': 'etc/init.d',
'sysvinit_suse': 'etc/init.d',
- 'systemd': pkg_config_read('systemd', 'systemdsystemunitdir'),
- 'systemd.generators': pkg_config_read('systemd',
- 'systemdsystemgeneratordir'),
'upstart': 'etc/init/',
}
INITSYS_TYPES = sorted([f.partition(".")[0] for f in INITSYS_ROOTS.keys()])
@@ -245,14 +234,11 @@ if not in_virtualenv():
INITSYS_ROOTS[k] = "/" + INITSYS_ROOTS[k]
data_files = [
- (ETC + '/cloud', [render_tmpl("config/cloud.cfg.tmpl")]),
+ (ETC + '/bash_completion.d', ['bash_completion/cloud-init']),
(ETC + '/cloud/cloud.cfg.d', glob('config/cloud.cfg.d/*')),
(ETC + '/cloud/templates', glob('templates/*')),
- (USR_LIB_EXEC + '/cloud-init', ['tools/ds-identify',
- 'tools/uncloud-init',
+ (USR_LIB_EXEC + '/cloud-init', ['tools/uncloud-init',
'tools/write-ssh-key-fingerprints']),
- (USR + '/share/bash-completion/completions',
- ['bash_completion/cloud-init']),
(USR + '/share/doc/cloud-init', [f for f in glob('doc/*') if is_f(f)]),
(USR + '/share/doc/cloud-init/examples',
[f for f in glob('doc/examples/*') if is_f(f)]),
@@ -263,8 +249,7 @@ if not platform.system().endswith('BSD'):
data_files.extend([
(ETC + '/NetworkManager/dispatcher.d/',
['tools/hook-network-manager']),
- (ETC + '/dhcp/dhclient-exit-hooks.d/', ['tools/hook-dhclient']),
- (LIB + '/udev/rules.d', [f for f in glob('udev/*.rules')])
+ ('/usr/lib/udev/rules.d', [f for f in glob('udev/*.rules')])
])
# Use a subclass for install that handles
# adding on the right init system configuration files
@@ -286,8 +271,6 @@ setuptools.setup(
scripts=['tools/cloud-init-per'],
license='Dual-licensed under GPLv3 or Apache 2.0',
data_files=data_files,
- install_requires=requirements,
- cmdclass=cmdclass,
entry_points={
'console_scripts': [
'cloud-init = cloudinit.cmd.main:main',
diff --git a/tools/read-version b/tools/read-version
index 02c90643..79755f78 100755
--- a/tools/read-version
+++ b/tools/read-version
@@ -71,32 +71,8 @@ version_long = None
is_release_branch_ci = (
os.environ.get("TRAVIS_PULL_REQUEST_BRANCH", "").startswith("upstream/")
)
-if is_gitdir(_tdir) and which("git") and not is_release_branch_ci:
- flags = []
- if use_tags:
- flags = ['--tags']
- cmd = ['git', 'describe', '--abbrev=8', '--match=[0-9]*'] + flags
-
- try:
- version = tiny_p(cmd).strip()
- except RuntimeError:
- version = None
-
- if version is None or not version.startswith(src_version):
- sys.stderr.write("git describe version (%s) differs from "
- "cloudinit.version (%s)\n" % (version, src_version))
- sys.stderr.write(
- "Please get the latest upstream tags.\n"
- "As an example, this can be done with the following:\n"
- "$ git remote add upstream https://git.launchpad.net/cloud-init\n"
- "$ git fetch upstream --tags\n"
- )
- sys.exit(1)
-
- version_long = tiny_p(cmd + ["--long"]).strip()
-else:
- version = src_version
- version_long = None
+version = src_version
+version_long = None
# version is X.Y.Z[+xxx.gHASH]
# version_long is None or X.Y.Z-xxx-gHASH
--
2.27.0
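
As context for the settings.py hunk above: cloud-init's built-in defaults are the lowest-precedence configuration layer, so the RHEL values patched in here can still be overridden by the shipped rhel/cloud.cfg. A minimal, self-contained Python sketch of that interaction (not cloud-init's real config merger; both dicts are abridged):

    # Built-in defaults from the settings.py hunk above (abridged).
    builtin = {
        "distro": "rhel",
        "ssh_deletekeys": False,
        "ssh_genkeytypes": [],
        "syslog_fix_perms": [],
    }
    # Values shipped in rhel/cloud.cfg from this same patch (abridged).
    cloud_cfg = {
        "ssh_deletekeys": 0,
        "resize_rootfs_tmp": "/dev",
        "disable_vmware_customization": False,
    }
    # File-level config wins over built-ins; a plain dict merge models that here.
    effective = {**builtin, **cloud_cfg}
    print(effective["distro"], effective["ssh_deletekeys"])  # rhel 0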

View File

@ -1,4 +1,4 @@
From 472c2b5d4342b6ab6ce1584dc39bed0e6c1ca2e7 Mon Sep 17 00:00:00 2001
From 04847980754f9d5c4f5363f4bb637d1e95470fa9 Mon Sep 17 00:00:00 2001
From: Eduardo Otubo <otubo@redhat.com>
Date: Fri, 7 May 2021 13:36:06 +0200
Subject: Do not write NM_CONTROLLED=no in generated interface config files
@ -11,29 +11,30 @@ correct settings for NM_CONTROLLED.
X-downstream-only: true
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Signed-off-by: Ryan McCabe <rmccabe@redhat.com>
(cherry picked from commit e0dc628ac553072891fa6607dc91b652efd99be2)
Signed-off-by: Ani Sinha <anisinha@redhat.com>
---
cloudinit/net/sysconfig.py | 2 +-
cloudinit/net/sysconfig.py | 1 -
tests/unittests/test_net.py | 28 ----------------------------
2 files changed, 1 insertion(+), 29 deletions(-)
2 files changed, 29 deletions(-)
diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
index 99a4bae4..3d276666 100644
index d4daa78f..1d3d83dc 100644
--- a/cloudinit/net/sysconfig.py
+++ b/cloudinit/net/sysconfig.py
@@ -289,7 +289,7 @@ class Renderer(renderer.Renderer):
# details about this)
iface_defaults = {
- 'rhel': {'ONBOOT': True, 'USERCTL': False, 'NM_CONTROLLED': False,
+ 'rhel': {'ONBOOT': True, 'USERCTL': False,
'BOOTPROTO': 'none'},
'suse': {'BOOTPROTO': 'static', 'STARTMODE': 'auto'},
}
@@ -316,7 +316,6 @@ class Renderer(renderer.Renderer):
"rhel": {
"ONBOOT": True,
"USERCTL": False,
- "NM_CONTROLLED": False,
"BOOTPROTO": "none",
},
"suse": {"BOOTPROTO": "static", "STARTMODE": "auto"},
diff --git a/tests/unittests/test_net.py b/tests/unittests/test_net.py
index 38d934d4..c67b5fcc 100644
index 056aaeb6..0f523ff8 100644
--- a/tests/unittests/test_net.py
+++ b/tests/unittests/test_net.py
@@ -535,7 +535,6 @@ GATEWAY=172.19.3.254
@@ -585,7 +585,6 @@ GATEWAY=172.19.3.254
HWADDR=fa:16:3e:ed:9a:59
IPADDR=172.19.1.34
NETMASK=255.255.252.0
@ -41,7 +42,7 @@ index 38d934d4..c67b5fcc 100644
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
@@ -633,7 +632,6 @@ IPADDR=172.19.1.34
@@ -749,7 +748,6 @@ IPADDR=172.19.1.34
IPADDR1=10.0.0.10
NETMASK=255.255.252.0
NETMASK1=255.255.255.0
@ -49,7 +50,7 @@ index 38d934d4..c67b5fcc 100644
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
@@ -756,7 +754,6 @@ IPV6_AUTOCONF=no
@@ -911,7 +909,6 @@ IPV6_AUTOCONF=no
IPV6_DEFAULTGW=2001:DB8::1
IPV6_FORCE_ACCEPT_RA=no
NETMASK=255.255.252.0
@ -57,23 +58,23 @@ index 38d934d4..c67b5fcc 100644
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
@@ -884,7 +881,6 @@ NETWORK_CONFIGS = {
@@ -1090,7 +1087,6 @@ NETWORK_CONFIGS = {
BOOTPROTO=none
DEVICE=eth1
HWADDR=cf:d6:af:48:e8:80
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no"""),
@@ -901,7 +897,6 @@ NETWORK_CONFIGS = {
USERCTL=no"""
@@ -1109,7 +1105,6 @@ NETWORK_CONFIGS = {
IPADDR=192.168.21.3
NETMASK=255.255.255.0
METRIC=10000
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no"""),
@@ -1032,7 +1027,6 @@ NETWORK_CONFIGS = {
USERCTL=no"""
@@ -1353,7 +1348,6 @@ NETWORK_CONFIGS = {
IPV6_AUTOCONF=no
IPV6_FORCE_ACCEPT_RA=no
NETMASK=255.255.255.0
@ -81,15 +82,15 @@ index 38d934d4..c67b5fcc 100644
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
@@ -1737,7 +1731,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
@@ -2377,7 +2371,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
DHCPV6C=yes
IPV6INIT=yes
MACADDR=aa:bb:cc:dd:ee:ff
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Bond
USERCTL=no"""),
@@ -1745,7 +1738,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
USERCTL=no"""
@@ -2387,7 +2380,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
BOOTPROTO=dhcp
DEVICE=bond0.200
DHCLIENT_SET_DEFAULT_ROUTE=no
@ -97,7 +98,7 @@ index 38d934d4..c67b5fcc 100644
ONBOOT=yes
PHYSDEV=bond0
USERCTL=no
@@ -1763,7 +1755,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
@@ -2407,7 +2399,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
IPV6_DEFAULTGW=2001:4800:78ff:1b::1
MACADDR=bb:bb:bb:bb:bb:aa
NETMASK=255.255.255.0
@ -105,15 +106,15 @@ index 38d934d4..c67b5fcc 100644
ONBOOT=yes
PRIO=22
STP=no
@@ -1773,7 +1764,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
@@ -2419,7 +2410,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
BOOTPROTO=none
DEVICE=eth0
HWADDR=c0:d6:9f:2c:e8:80
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no"""),
@@ -1790,7 +1780,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
USERCTL=no"""
@@ -2438,7 +2428,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
MTU=1500
NETMASK=255.255.255.0
NETMASK1=255.255.255.0
@ -121,7 +122,7 @@ index 38d934d4..c67b5fcc 100644
ONBOOT=yes
PHYSDEV=eth0
USERCTL=no
@@ -1800,7 +1789,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
@@ -2450,7 +2439,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
DEVICE=eth1
HWADDR=aa:d6:9f:2c:e8:80
MASTER=bond0
@ -129,7 +130,7 @@ index 38d934d4..c67b5fcc 100644
ONBOOT=yes
SLAVE=yes
TYPE=Ethernet
@@ -1810,7 +1798,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
@@ -2462,7 +2450,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
DEVICE=eth2
HWADDR=c0:bb:9f:2c:e8:80
MASTER=bond0
@ -137,31 +138,31 @@ index 38d934d4..c67b5fcc 100644
ONBOOT=yes
SLAVE=yes
TYPE=Ethernet
@@ -1820,7 +1807,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
@@ -2474,7 +2461,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
BRIDGE=br0
DEVICE=eth3
HWADDR=66:bb:9f:2c:e8:80
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no"""),
@@ -1829,7 +1815,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
USERCTL=no"""
@@ -2485,7 +2471,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
BRIDGE=br0
DEVICE=eth4
HWADDR=98:bb:9f:2c:e8:80
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no"""),
@@ -1838,7 +1823,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
USERCTL=no"""
@@ -2496,7 +2481,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
DEVICE=eth5
DHCLIENT_SET_DEFAULT_ROUTE=no
HWADDR=98:bb:9f:2c:e8:8a
- NM_CONTROLLED=no
ONBOOT=no
TYPE=Ethernet
USERCTL=no"""),
@@ -2294,7 +2278,6 @@ iface bond0 inet6 static
USERCTL=no"""
@@ -3220,7 +3204,6 @@ iface bond0 inet6 static
MTU=9000
NETMASK=255.255.255.0
NETMASK1=255.255.255.0
@ -169,7 +170,7 @@ index 38d934d4..c67b5fcc 100644
ONBOOT=yes
TYPE=Bond
USERCTL=no
@@ -2304,7 +2287,6 @@ iface bond0 inet6 static
@@ -3232,7 +3215,6 @@ iface bond0 inet6 static
DEVICE=bond0s0
HWADDR=aa:bb:cc:dd:e8:00
MASTER=bond0
@ -177,7 +178,7 @@ index 38d934d4..c67b5fcc 100644
ONBOOT=yes
SLAVE=yes
TYPE=Ethernet
@@ -2326,7 +2308,6 @@ iface bond0 inet6 static
@@ -3260,7 +3242,6 @@ iface bond0 inet6 static
DEVICE=bond0s1
HWADDR=aa:bb:cc:dd:e8:01
MASTER=bond0
@ -185,15 +186,15 @@ index 38d934d4..c67b5fcc 100644
ONBOOT=yes
SLAVE=yes
TYPE=Ethernet
@@ -2383,7 +2364,6 @@ iface bond0 inet6 static
@@ -3406,7 +3387,6 @@ iface bond0 inet6 static
BOOTPROTO=none
DEVICE=en0
HWADDR=aa:bb:cc:dd:e8:00
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no"""),
@@ -2402,7 +2382,6 @@ iface bond0 inet6 static
USERCTL=no"""
@@ -3427,7 +3407,6 @@ iface bond0 inet6 static
MTU=2222
NETMASK=255.255.255.0
NETMASK1=255.255.255.0
@ -201,7 +202,7 @@ index 38d934d4..c67b5fcc 100644
ONBOOT=yes
PHYSDEV=en0
USERCTL=no
@@ -2467,7 +2446,6 @@ iface bond0 inet6 static
@@ -3553,7 +3532,6 @@ iface bond0 inet6 static
DEVICE=br0
IPADDR=192.168.2.2
NETMASK=255.255.255.0
@ -209,7 +210,7 @@ index 38d934d4..c67b5fcc 100644
ONBOOT=yes
PRIO=22
STP=no
@@ -2591,7 +2569,6 @@ iface bond0 inet6 static
@@ -3769,7 +3747,6 @@ iface bond0 inet6 static
HWADDR=52:54:00:12:34:00
IPADDR=192.168.1.2
NETMASK=255.255.255.0
@ -217,7 +218,7 @@ index 38d934d4..c67b5fcc 100644
ONBOOT=no
TYPE=Ethernet
USERCTL=no
@@ -2601,7 +2578,6 @@ iface bond0 inet6 static
@@ -3781,7 +3758,6 @@ iface bond0 inet6 static
DEVICE=eth1
HWADDR=52:54:00:12:34:aa
MTU=1480
@ -225,7 +226,7 @@ index 38d934d4..c67b5fcc 100644
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
@@ -2610,7 +2586,6 @@ iface bond0 inet6 static
@@ -3792,7 +3768,6 @@ iface bond0 inet6 static
BOOTPROTO=none
DEVICE=eth2
HWADDR=52:54:00:12:34:ff
@ -233,7 +234,7 @@ index 38d934d4..c67b5fcc 100644
ONBOOT=no
TYPE=Ethernet
USERCTL=no
@@ -3027,7 +3002,6 @@ class TestRhelSysConfigRendering(CiTestCase):
@@ -4469,7 +4444,6 @@ class TestRhelSysConfigRendering(CiTestCase):
BOOTPROTO=dhcp
DEVICE=eth1000
HWADDR=07-1c-c6-75-a4-be
@ -241,7 +242,7 @@ index 38d934d4..c67b5fcc 100644
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
@@ -3148,7 +3122,6 @@ GATEWAY=10.0.2.2
@@ -4681,7 +4655,6 @@ GATEWAY=10.0.2.2
HWADDR=52:54:00:12:34:00
IPADDR=10.0.2.15
NETMASK=255.255.255.0
@ -249,7 +250,7 @@ index 38d934d4..c67b5fcc 100644
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
@@ -3218,7 +3191,6 @@ USERCTL=no
@@ -4751,7 +4724,6 @@ USERCTL=no
#
BOOTPROTO=dhcp
DEVICE=eth0
@ -258,5 +259,5 @@ index 38d934d4..c67b5fcc 100644
TYPE=Ethernet
USERCTL=no
--
2.27.0
2.37.3
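
For reference, a rough standalone sketch of what dropping NM_CONTROLLED=no from the rhel defaults means for a rendered ifcfg file (this is not cloudinit.net.sysconfig.Renderer, only an illustration of the resulting key set):

    # Per-distro defaults after this patch: no NM_CONTROLLED entry, so the
    # generated file no longer opts the device out of NetworkManager control.
    iface_defaults = {"ONBOOT": True, "USERCTL": False, "BOOTPROTO": "none"}

    def render_ifcfg(device, extra):
        keys = {**iface_defaults, **extra, "DEVICE": device}
        as_str = {True: "yes", False: "no"}
        return "\n".join(f"{k}={as_str.get(v, v)}" for k, v in sorted(keys.items()))

    print(render_ifcfg("eth0", {"HWADDR": "52:54:00:12:34:00"}))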

View File

@ -1,4 +1,4 @@
From 6134624f10ef56534e37624adc12f11b09910591 Mon Sep 17 00:00:00 2001
From 1308991156950833f62ec1464b1aef3673864c02 Mon Sep 17 00:00:00 2001
From: Eduardo Otubo <otubo@redhat.com>
Date: Fri, 7 May 2021 13:36:08 +0200
Subject: limit permissions on def_log_file
@ -15,6 +15,8 @@ Conflicts 21.1:
recent version
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
(cherry picked from commit cb7b35ca10c82c9725c3527e3ec5fb8cb7c61bc0)
Signed-off-by: Ani Sinha <anisinha@redhat.com>
---
cloudinit/settings.py | 1 +
cloudinit/stages.py | 1 +
@ -22,34 +24,34 @@ Signed-off-by: Eduardo Otubo <otubo@redhat.com>
3 files changed, 6 insertions(+)
diff --git a/cloudinit/settings.py b/cloudinit/settings.py
index e690c0fd..43a1490c 100644
index 8684d003..681ea771 100644
--- a/cloudinit/settings.py
+++ b/cloudinit/settings.py
@@ -46,6 +46,7 @@ CFG_BUILTIN = {
'None',
@@ -52,6 +52,7 @@ CFG_BUILTIN = {
"None",
],
'def_log_file': '/var/log/cloud-init.log',
+ 'def_log_file_mode': 0o600,
'log_cfgs': [],
'mount_default_fields': [None, None, 'auto', 'defaults,nofail', '0', '2'],
'ssh_deletekeys': False,
"def_log_file": "/var/log/cloud-init.log",
+ "def_log_file_mode": 0o600,
"log_cfgs": [],
"syslog_fix_perms": ["syslog:adm", "root:adm", "root:wheel", "root:root"],
"system_info": {
diff --git a/cloudinit/stages.py b/cloudinit/stages.py
index 3ef4491c..83e25dd1 100644
index 9494a0bf..a624a6fb 100644
--- a/cloudinit/stages.py
+++ b/cloudinit/stages.py
@@ -147,6 +147,7 @@ class Init(object):
@@ -202,6 +202,7 @@ class Init:
def _initialize_filesystem(self):
util.ensure_dirs(self._initial_subdirs())
log_file = util.get_cfg_option_str(self.cfg, 'def_log_file')
+ log_file_mode = util.get_cfg_option_int(self.cfg, 'def_log_file_mode')
log_file = util.get_cfg_option_str(self.cfg, "def_log_file")
+ log_file_mode = util.get_cfg_option_int(self.cfg, "def_log_file_mode")
if log_file:
util.ensure_file(log_file, preserve_mode=True)
perms = self.cfg.get('syslog_fix_perms')
util.ensure_file(log_file, mode=0o640, preserve_mode=True)
perms = self.cfg.get("syslog_fix_perms")
diff --git a/doc/examples/cloud-config.txt b/doc/examples/cloud-config.txt
index de9a0f87..bb33ad45 100644
index 15d788f3..b6d16c9c 100644
--- a/doc/examples/cloud-config.txt
+++ b/doc/examples/cloud-config.txt
@@ -414,10 +414,14 @@ timezone: US/Eastern
@@ -383,10 +383,14 @@ timezone: US/Eastern
# if syslog_fix_perms is a list, it will iterate through and use the
# first pair that does not raise error.
#
@ -65,5 +67,5 @@ index de9a0f87..bb33ad45 100644
# you can set passwords for a user or multiple users
--
2.27.0
2.37.3
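
A standalone illustration of the knob this patch adds: def_log_file_mode carries an octal mode that the init stage applies to the log file. A temporary file stands in for /var/log/cloud-init.log so the sketch runs unprivileged:

    import os
    import tempfile

    cfg = {"def_log_file_mode": 0o600}  # built-in default from the hunk above
    fd, log_file = tempfile.mkstemp(prefix="cloud-init-example-")
    os.close(fd)
    os.chmod(log_file, cfg["def_log_file_mode"])
    print(oct(os.stat(log_file).st_mode & 0o777))  # 0o600
    os.remove(log_file)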

View File

@ -1,4 +1,4 @@
From dfea0490b899804761fbd7aa23822783d7c36ec5 Mon Sep 17 00:00:00 2001
From 06b2d8279628eb5d0ec36c6b5493346d6cf9a752 Mon Sep 17 00:00:00 2001
From: Eduardo Otubo <otubo@redhat.com>
Date: Fri, 7 May 2021 13:36:13 +0200
Subject: include 'NOZEROCONF=yes' in /etc/sysconfig/network
@ -21,45 +21,34 @@ Resolves: rhbz#1653131
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
(cherry picked from commit ffa647e83efd4293bd027e9e390274aad8a12d94)
Signed-off-by: Ani Sinha <anisinha@redhat.com>
---
cloudinit/net/sysconfig.py | 11 ++++++++++-
tests/unittests/test_net.py | 1 -
2 files changed, 10 insertions(+), 2 deletions(-)
cloudinit/net/sysconfig.py | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
index 3d276666..d5440998 100644
index 1d3d83dc..9abe2279 100644
--- a/cloudinit/net/sysconfig.py
+++ b/cloudinit/net/sysconfig.py
@@ -925,7 +925,16 @@ class Renderer(renderer.Renderer):
@@ -1018,7 +1018,16 @@ class Renderer(renderer.Renderer):
# Distros configuring /etc/sysconfig/network as a file e.g. Centos
if sysconfig_path.endswith('network'):
if sysconfig_path.endswith("network"):
util.ensure_dir(os.path.dirname(sysconfig_path))
- netcfg = [_make_header(), 'NETWORKING=yes']
- netcfg = [_make_header(), "NETWORKING=yes"]
+ netcfg = []
+ for line in util.load_file(sysconfig_path, quiet=True).split('\n'):
+ if 'cloud-init' in line:
+ for line in util.load_file(sysconfig_path, quiet=True).split("\n"):
+ if "cloud-init" in line:
+ break
+ if not line.startswith(('NETWORKING=',
+ 'IPV6_AUTOCONF=',
+ 'NETWORKING_IPV6=')):
+ if not line.startswith(("NETWORKING=",
+ "IPV6_AUTOCONF=",
+ "NETWORKING_IPV6=")):
+ netcfg.append(line)
+ # Now generate the cloud-init portion of sysconfig/network
+ netcfg.extend([_make_header(), 'NETWORKING=yes'])
+ netcfg.extend([_make_header(), "NETWORKING=yes"])
if network_state.use_ipv6:
netcfg.append('NETWORKING_IPV6=yes')
netcfg.append('IPV6_AUTOCONF=no')
diff --git a/tests/unittests/test_net.py b/tests/unittests/test_net.py
index 4ea0e597..c67b5fcc 100644
--- a/tests/unittests/test_net.py
+++ b/tests/unittests/test_net.py
@@ -1729,7 +1729,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
BOOTPROTO=none
DEVICE=bond0
DHCPV6C=yes
- IPV6_AUTOCONF=no
IPV6INIT=yes
MACADDR=aa:bb:cc:dd:ee:ff
ONBOOT=yes
netcfg.append("NETWORKING_IPV6=yes")
netcfg.append("IPV6_AUTOCONF=no")
--
2.27.0
2.37.3
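
The rewrite loop added above can be exercised on its own. This standalone sketch shows locally added lines such as NOZEROCONF=yes surviving while the previous cloud-init block and the managed NETWORKING*/IPV6_AUTOCONF lines are regenerated (the header string is only a stand-in for _make_header()):

    existing = "NOZEROCONF=yes\n# Created by cloud-init\nNETWORKING=yes\n"
    header = "# Created by cloud-init"  # stand-in for _make_header()

    netcfg = []
    for line in existing.split("\n"):
        if "cloud-init" in line:
            break
        if not line.startswith(("NETWORKING=", "IPV6_AUTOCONF=", "NETWORKING_IPV6=")):
            netcfg.append(line)
    netcfg.extend([header, "NETWORKING=yes"])
    print("\n".join(netcfg))  # NOZEROCONF=yes is preserved ahead of the new block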

View File

@ -1,36 +0,0 @@
From 699d37a6ff3e343e214943794aac09e4156c2b2b Mon Sep 17 00:00:00 2001
From: Eduardo Otubo <otubo@redhat.com>
Date: Fri, 7 May 2021 13:36:10 +0200
Subject: sysconfig: Don't write BOOTPROTO=dhcp for ipv6 dhcp
Don't write BOOTPROTO=dhcp for ipv6 dhcp, as BOOTPROTO applies
only to ipv4. Explicitly write IPV6_AUTOCONF=no for dhcp on ipv6.
X-downstream-only: yes
Resolves: rhbz#1519271
Signed-off-by: Ryan McCabe <rmccabe@redhat.com>
Merged patches (19.4):
- 6444df4 sysconfig: Don't disable IPV6_AUTOCONF
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
---
tests/unittests/test_net.py | 1 +
1 file changed, 1 insertion(+)
diff --git a/tests/unittests/test_net.py b/tests/unittests/test_net.py
index c67b5fcc..4ea0e597 100644
--- a/tests/unittests/test_net.py
+++ b/tests/unittests/test_net.py
@@ -1729,6 +1729,7 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
BOOTPROTO=none
DEVICE=bond0
DHCPV6C=yes
+ IPV6_AUTOCONF=no
IPV6INIT=yes
MACADDR=aa:bb:cc:dd:ee:ff
ONBOOT=yes
--
2.27.0
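
To make the rationale above concrete, these are the ifcfg keys the remaining test expectation checks for a DHCPv6 interface: IPv6 DHCP is signalled through DHCPV6C/IPV6INIT while BOOTPROTO stays an IPv4-only setting (plain illustration, not renderer code):

    dhcp6_keys = {
        "DEVICE": "bond0",
        "BOOTPROTO": "none",     # BOOTPROTO applies to IPv4 only
        "DHCPV6C": "yes",        # request addresses over DHCPv6
        "IPV6INIT": "yes",
        "IPV6_AUTOCONF": "no",   # the line this patch adds back to the test
    }
    print("\n".join(f"{k}={v}" for k, v in dhcp6_keys.items()))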

View File

@ -1,57 +0,0 @@
From ccc75c1be3ae08d813193071c798fc905b5c03e5 Mon Sep 17 00:00:00 2001
From: Eduardo Otubo <otubo@redhat.com>
Date: Fri, 7 May 2021 13:36:12 +0200
Subject: DataSourceAzure.py: use hostnamectl to set hostname
RH-Author: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-id: <20180417130754.12918-3-vkuznets@redhat.com>
Patchwork-id: 79659
O-Subject: [RHEL7.6/7.5.z cloud-init PATCH 2/2] DataSourceAzure.py: use hostnamectl to set hostname
Bugzilla: 1568717
RH-Acked-by: Eduardo Otubo <otubo@redhat.com>
RH-Acked-by: Mohammed Gamal <mgamal@redhat.com>
RH-Acked-by: Cathy Avery <cavery@redhat.com>
The right way to set hostname in RHEL7 is:
$ hostnamectl set-hostname HOSTNAME
DataSourceAzure, however, uses:
$ hostname HOSTSNAME
instead and this causes problems. We can't simply change
'BUILTIN_DS_CONFIG' in DataSourceAzure.py as 'hostname' is being used
for both getting and setting the hostname.
Long term, this should be fixed in a different way. Cloud-init
has distro-specific hostname setting/getting (see
cloudinit/distros/rhel.py) and DataSourceAzure.py needs to be switched
to use these.
Resolves: rhbz#1434109
X-downstream-only: yes
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
---
cloudinit/sources/DataSourceAzure.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
index cee630f7..553b5a7e 100755
--- a/cloudinit/sources/DataSourceAzure.py
+++ b/cloudinit/sources/DataSourceAzure.py
@@ -296,7 +296,7 @@ def get_hostname(hostname_command='hostname'):
def set_hostname(hostname, hostname_command='hostname'):
- subp.subp([hostname_command, hostname])
+ util.subp(['hostnamectl', 'set-hostname', str(hostname)])
@azure_ds_telemetry_reporter
--
2.27.0
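
A stdlib-only sketch of the substitution described above (illustrative helpers, not the DataSourceAzure code; actually running them needs the respective binaries and root):

    import subprocess

    def set_hostname_via_hostname(name: str) -> None:
        # Upstream behaviour: only the transient kernel hostname changes.
        subprocess.run(["hostname", name], check=True)

    def set_hostname_via_hostnamectl(name: str) -> None:
        # Downstream behaviour: systemd-hostnamed applies the hostname properly.
        subprocess.run(["hostnamectl", "set-hostname", str(name)], check=True)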

View File

@ -0,0 +1,95 @@
From 0616dbd3f523395b619960b67b3b65c2f0ea15f4 Mon Sep 17 00:00:00 2001
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Date: Fri, 10 Mar 2023 11:51:48 +0100
Subject: Manual revert "Use Network-Manager and Netplan as default renderers
for RHEL and Fedora (#1465)"
This reverts changes done in commit 7703aa98b.
Done by hand because the doc file affected by that commit has changed.
X-downstream-only: true
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
cloudinit/net/renderers.py | 1 -
config/cloud.cfg.tmpl | 3 ---
doc/rtd/reference/network-config.rst | 16 ++--------------
3 files changed, 2 insertions(+), 18 deletions(-)
diff --git a/cloudinit/net/renderers.py b/cloudinit/net/renderers.py
index fcf7feba..b241683f 100644
--- a/cloudinit/net/renderers.py
+++ b/cloudinit/net/renderers.py
@@ -30,7 +30,6 @@ DEFAULT_PRIORITY = [
"eni",
"sysconfig",
"netplan",
- "network-manager",
"freebsd",
"netbsd",
"openbsd",
diff --git a/config/cloud.cfg.tmpl b/config/cloud.cfg.tmpl
index 7238c102..12f32c51 100644
--- a/config/cloud.cfg.tmpl
+++ b/config/cloud.cfg.tmpl
@@ -381,9 +381,6 @@ system_info:
{% elif variant in ["dragonfly"] %}
network:
renderers: ['freebsd']
-{% elif variant in ["fedora"] or is_rhel %}
- network:
- renderers: ['netplan', 'network-manager', 'networkd', 'sysconfig', 'eni']
{% elif variant == "openmandriva" %}
network:
renderers: ['network-manager', 'networkd']
diff --git a/doc/rtd/reference/network-config.rst b/doc/rtd/reference/network-config.rst
index ea331f1c..bc52afa5 100644
--- a/doc/rtd/reference/network-config.rst
+++ b/doc/rtd/reference/network-config.rst
@@ -176,16 +176,6 @@ this state, ``cloud-init`` delegates rendering of the configuration to
distro-supported formats. The following ``renderers`` are supported in
``cloud-init``:
-NetworkManager
---------------
-
-`NetworkManager`_ is the standard Linux network configuration tool suite. It
-supports a wide range of networking setups. Configuration is typically stored
-in :file:`/etc/NetworkManager`.
-
-It is the default for a number of Linux distributions; notably Fedora,
-CentOS/RHEL, and their derivatives.
-
ENI
---
@@ -223,7 +213,6 @@ preference) is as follows:
- ENI
- Sysconfig
- Netplan
-- NetworkManager
- FreeBSD
- NetBSD
- OpenBSD
@@ -234,7 +223,6 @@ preference) is as follows:
- **ENI**: using ``ifup``, ``ifdown`` to manage device setup/teardown
- **Netplan**: using ``netplan apply`` to manage device setup/teardown
-- **NetworkManager**: using ``nmcli`` to manage device setup/teardown
- **Networkd**: using ``ip`` to manage device setup/teardown
When applying the policy, ``cloud-init`` checks if the current instance has the
@@ -244,8 +232,8 @@ supplying an updated configuration in cloud-config. ::
system_info:
network:
- renderers: ['netplan', 'network-manager', 'eni', 'sysconfig', 'freebsd', 'netbsd', 'openbsd']
- activators: ['eni', 'netplan', 'network-manager', 'networkd']
+ renderers: ['netplan', 'eni', 'sysconfig', 'freebsd', 'netbsd', 'openbsd']
+ activators: ['eni', 'netplan', 'networkd']
Network configuration tools
===========================
--
2.37.3
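
Renderer selection walks DEFAULT_PRIORITY and takes the first renderer whose availability check passes, which is why removing "network-manager" from the list matters. A minimal sketch of that selection rule (the availability set below is a hypothetical RHEL host, not real detection code):

    default_priority = ["eni", "sysconfig", "netplan", "freebsd", "netbsd", "openbsd"]
    available = {"sysconfig", "netplan"}  # hypothetical host; eni is not usable here

    selected = next(name for name in default_priority if name in available)
    print(selected)  # sysconfig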

File diff suppressed because it is too large.

View File

@ -1,148 +0,0 @@
From 24894dcf45a307f44e29dc5d5b2d864b75fd982c Mon Sep 17 00:00:00 2001
From: Eduardo Otubo <otubo@redhat.com>
Date: Fri, 7 May 2021 13:36:14 +0200
Subject: Remove race condition between cloud-init and NetworkManager
Message-id: <20200302104635.11648-1-otubo@redhat.com>
Patchwork-id: 94098
O-Subject: [RHEL-7.9/RHEL-8.2.0 cloud-init PATCH] Remove race condition between cloud-init and NetworkManager
Bugzilla: 1807797
RH-Acked-by: Cathy Avery <cavery@redhat.com>
RH-Acked-by: Mohammed Gamal <mgamal@redhat.com>
BZ: 1748015
BRANCH: rhel7/master-18.5
BREW: 26924611
BZ: 1807797
BRANCH: rhel820/master-18.5
BREW: 26924957
cloud-init service is set to start before NetworkManager service starts,
but this does not avoid a race condition between them. NetworkManager
starts before cloud-init can write `dns=none' to the file:
/etc/NetworkManager/conf.d/99-cloud-init.conf. This way NetworkManager
doesn't read the configuration and erases all resolv.conf values upon
shutdown. On the next reboot neither cloud-init or NetworkManager will
write anything to resolv.conf, leaving it blank.
This patch introduces a NM reload (try-restart) at the end of cloud-init
start up so it won't erase resolv.conf upon first shutdown.
x-downstream-only: yes
resolves: rhbz#1748015, rhbz#1807797 and rhbz#1804780
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
This commit is a squash and also includes the following commits:
commit 316a17b7c02a87fa9b2981535be0b20d165adc46
Author: Eduardo Otubo <otubo@redhat.com>
Date: Mon Jun 1 11:58:06 2020 +0200
Make cloud-init.service execute after network is up
RH-Author: Eduardo Otubo <otubo@redhat.com>
Message-id: <20200526090804.2047-1-otubo@redhat.com>
Patchwork-id: 96809
O-Subject: [RHEL-8.2.1 cloud-init PATCH] Make cloud-init.service execute after network is up
Bugzilla: 1803928
RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
cloud-init.service needs to wait until network is fully up before
continuing executing and configuring its service.
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
x-downstream-only: yes
Resolves: rhbz#1831646
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
commit 0422ba0e773d1a8257a3f2bf3db05f3bc7917eb7
Author: Eduardo Otubo <otubo@redhat.com>
Date: Thu May 28 08:44:08 2020 +0200
Remove race condition between cloud-init and NetworkManager
RH-Author: Eduardo Otubo <otubo@redhat.com>
Message-id: <20200327121911.17699-1-otubo@redhat.com>
Patchwork-id: 94453
O-Subject: [RHEL-7.9/RHEL-8.2.0 cloud-init PATCHv2] Remove race condition between cloud-init and NetworkManager
Bugzilla: 1840648
RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Acked-by: Cathy Avery <cavery@redhat.com>
cloud-init service is set to start before NetworkManager service starts,
but this does not avoid a race condition between them. NetworkManager
starts before cloud-init can write `dns=none' to the file:
/etc/NetworkManager/conf.d/99-cloud-init.conf. This way NetworkManager
doesn't read the configuration and erases all resolv.conf values upon
shutdown. On the next reboot neither cloud-init or NetworkManager will
write anything to resolv.conf, leaving it blank.
This patch introduces a NM reload (try-reload-or-restart) at the end of cloud-init
start up so it won't erase resolv.conf upon first shutdown.
x-downstream-only: yes
Signed-off-by: Eduardo Otubo otubo@redhat.com
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
commit e0b48a936433faea7f56dbc29dda35acf7d375f7
Author: Eduardo Otubo <otubo@redhat.com>
Date: Thu May 28 08:44:06 2020 +0200
Enable ssh_deletekeys by default
RH-Author: Eduardo Otubo <otubo@redhat.com>
Message-id: <20200317091705.15715-1-otubo@redhat.com>
Patchwork-id: 94365
O-Subject: [RHEL-7.9/RHEL-8.2.0 cloud-init PATCH] Enable ssh_deletekeys by default
Bugzilla: 1814152
RH-Acked-by: Mohammed Gamal <mgamal@redhat.com>
RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com>
The configuration option ssh_deletekeys will trigger the generation
of new ssh keys for every new instance deployed.
x-downstream-only: yes
resolves: rhbz#1814152
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
---
rhel/cloud.cfg | 2 +-
rhel/systemd/cloud-init.service | 1 +
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/rhel/cloud.cfg b/rhel/cloud.cfg
index 82e8bf62..9ecba215 100644
--- a/rhel/cloud.cfg
+++ b/rhel/cloud.cfg
@@ -6,7 +6,7 @@ ssh_pwauth: 0
mount_default_fields: [~, ~, 'auto', 'defaults,nofail,x-systemd.requires=cloud-init.service', '0', '2']
resize_rootfs_tmp: /dev
-ssh_deletekeys: 0
+ssh_deletekeys: 1
ssh_genkeytypes: ~
syslog_fix_perms: ~
disable_vmware_customization: false
diff --git a/rhel/systemd/cloud-init.service b/rhel/systemd/cloud-init.service
index d0023a05..0b3d796d 100644
--- a/rhel/systemd/cloud-init.service
+++ b/rhel/systemd/cloud-init.service
@@ -5,6 +5,7 @@ Wants=sshd-keygen.service
Wants=sshd.service
After=cloud-init-local.service
After=NetworkManager.service network.service
+After=NetworkManager-wait-online.service
Before=network-online.target
Before=sshd-keygen.service
Before=sshd.service
--
2.27.0
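
For context on the race described above, this standalone sketch writes the drop-in that cloud-init uses to keep NetworkManager away from resolv.conf. It targets a temporary directory instead of /etc/NetworkManager/conf.d, and the [main] section name is standard NetworkManager.conf syntax rather than something taken from this patch:

    import pathlib
    import tempfile

    conf_dir = pathlib.Path(tempfile.mkdtemp())
    conf = conf_dir / "99-cloud-init.conf"
    conf.write_text("# Created by cloud-init\n[main]\ndns=none\n")
    print(conf.read_text())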

View File

@ -0,0 +1,47 @@
From d0c97b400552489ed39ef44fed0889111e528bca Mon Sep 17 00:00:00 2001
From: Ani Sinha <anisinha@redhat.com>
Date: Tue, 11 Apr 2023 04:20:00 -0400
Subject: settings.py: update settings for rhel
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Please see commit 5e1e568d7085fd4443
(" Add initial redhat setup")
from rhel8.8.0 branch for settings.py. Applying the same for the rebased
cloud-init.
X-downstream-only: true
Signed-off-by: Ani Sinha <anisinha@redhat.com>
---
cloudinit/settings.py | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/cloudinit/settings.py b/cloudinit/settings.py
index 681ea771..88aac6be 100644
--- a/cloudinit/settings.py
+++ b/cloudinit/settings.py
@@ -54,13 +54,16 @@ CFG_BUILTIN = {
"def_log_file": "/var/log/cloud-init.log",
"def_log_file_mode": 0o600,
"log_cfgs": [],
- "syslog_fix_perms": ["syslog:adm", "root:adm", "root:wheel", "root:root"],
+ "syslog_fix_perms": [],
+ "mount_default_fields": [None, None, "auto", "defaults,nofail", "0", "2"],
+ "ssh_deletekeys": False,
+ "ssh_genkeytypes": [],
"system_info": {
"paths": {
"cloud_dir": "/var/lib/cloud",
"templates_dir": "/etc/cloud/templates/",
},
- "distro": "ubuntu",
+ "distro": "rhel",
"network": {"renderers": None},
},
"vendor_data": {"enabled": True, "prefix": []},
--
2.37.3

View File

@ -1,496 +0,0 @@
From b48dda73da94782d7ab0c455fa382d3a5ef3c419 Mon Sep 17 00:00:00 2001
From: Daniel Watkins <oddbloke@ubuntu.com>
Date: Mon, 8 Mar 2021 12:50:57 -0500
Subject: net: exclude OVS internal interfaces in get_interfaces (#829)
`get_interfaces` is used to in two ways, broadly: firstly, to determine
the available interfaces when converting cloud network configuration
formats to cloud-init's network configuration formats; and, secondly, to
ensure that any interfaces which are specified in network configuration
are (a) available, and (b) named correctly. The first of these is
unaffected by this commit, as no clouds support Open vSwitch
configuration in their network configuration formats.
For the second, we check that MAC addresses of physical devices are
unique. In some OVS configurations, there are OVS-created devices which
have duplicate MAC addresses, either with each other or with physical
devices. As these interfaces are created by OVS, we can be confident
that (a) they will be available when appropriate, and (b) that OVS will
name them correctly. As such, this commit excludes any OVS-internal
interfaces from the set of interfaces returned by `get_interfaces`.
LP: #1912844
---
cloudinit/net/__init__.py | 62 +++++++++
cloudinit/net/tests/test_init.py | 119 ++++++++++++++++++
.../sources/helpers/tests/test_openstack.py | 5 +
cloudinit/sources/tests/test_oracle.py | 4 +
.../integration_tests/bugs/test_lp1912844.py | 103 +++++++++++++++
.../test_datasource/test_configdrive.py | 8 ++
tests/unittests/test_net.py | 20 +++
7 files changed, 321 insertions(+)
create mode 100644 tests/integration_tests/bugs/test_lp1912844.py
diff --git a/cloudinit/net/__init__.py b/cloudinit/net/__init__.py
index de65e7af..385b7bcc 100644
--- a/cloudinit/net/__init__.py
+++ b/cloudinit/net/__init__.py
@@ -6,6 +6,7 @@
# This file is part of cloud-init. See LICENSE file for license information.
import errno
+import functools
import ipaddress
import logging
import os
@@ -19,6 +20,19 @@ from cloudinit.url_helper import UrlError, readurl
LOG = logging.getLogger(__name__)
SYS_CLASS_NET = "/sys/class/net/"
DEFAULT_PRIMARY_INTERFACE = 'eth0'
+OVS_INTERNAL_INTERFACE_LOOKUP_CMD = [
+ "ovs-vsctl",
+ "--format",
+ "csv",
+ "--no-headings",
+ "--timeout",
+ "10",
+ "--columns",
+ "name",
+ "find",
+ "interface",
+ "type=internal",
+]
def natural_sort_key(s, _nsre=re.compile('([0-9]+)')):
@@ -133,6 +147,52 @@ def master_is_openvswitch(devname):
return os.path.exists(ovs_path)
+@functools.lru_cache(maxsize=None)
+def openvswitch_is_installed() -> bool:
+ """Return a bool indicating if Open vSwitch is installed in the system."""
+ ret = bool(subp.which("ovs-vsctl"))
+ if not ret:
+ LOG.debug(
+ "ovs-vsctl not in PATH; not detecting Open vSwitch interfaces"
+ )
+ return ret
+
+
+@functools.lru_cache(maxsize=None)
+def get_ovs_internal_interfaces() -> list:
+ """Return a list of the names of OVS internal interfaces on the system.
+
+ These will all be strings, and are used to exclude OVS-specific interface
+ from cloud-init's network configuration handling.
+ """
+ try:
+ out, _err = subp.subp(OVS_INTERNAL_INTERFACE_LOOKUP_CMD)
+ except subp.ProcessExecutionError as exc:
+ if "database connection failed" in exc.stderr:
+ LOG.info(
+ "Open vSwitch is not yet up; no interfaces will be detected as"
+ " OVS-internal"
+ )
+ return []
+ raise
+ else:
+ return out.splitlines()
+
+
+def is_openvswitch_internal_interface(devname: str) -> bool:
+ """Returns True if this is an OVS internal interface.
+
+ If OVS is not installed or not yet running, this will return False.
+ """
+ if not openvswitch_is_installed():
+ return False
+ ovs_bridges = get_ovs_internal_interfaces()
+ if devname in ovs_bridges:
+ LOG.debug("Detected %s as an OVS interface", devname)
+ return True
+ return False
+
+
def is_netfailover(devname, driver=None):
""" netfailover driver uses 3 nics, master, primary and standby.
this returns True if the device is either the primary or standby
@@ -884,6 +944,8 @@ def get_interfaces(blacklist_drivers=None) -> list:
# skip nics that have no mac (00:00....)
if name != 'lo' and mac == zero_mac[:len(mac)]:
continue
+ if is_openvswitch_internal_interface(name):
+ continue
# skip nics that have drivers blacklisted
driver = device_driver(name)
if driver in blacklist_drivers:
diff --git a/cloudinit/net/tests/test_init.py b/cloudinit/net/tests/test_init.py
index 0535387a..946f8ee2 100644
--- a/cloudinit/net/tests/test_init.py
+++ b/cloudinit/net/tests/test_init.py
@@ -391,6 +391,10 @@ class TestGetDeviceList(CiTestCase):
self.assertCountEqual(['eth0', 'eth1'], net.get_devicelist())
+@mock.patch(
+ "cloudinit.net.is_openvswitch_internal_interface",
+ mock.Mock(return_value=False),
+)
class TestGetInterfaceMAC(CiTestCase):
def setUp(self):
@@ -1224,6 +1228,121 @@ class TestNetFailOver(CiTestCase):
self.assertFalse(net.is_netfailover(devname, driver))
+class TestOpenvswitchIsInstalled:
+ """Test cloudinit.net.openvswitch_is_installed.
+
+ Uses the ``clear_lru_cache`` local autouse fixture to allow us to test
+ despite the ``lru_cache`` decorator on the unit under test.
+ """
+
+ @pytest.fixture(autouse=True)
+ def clear_lru_cache(self):
+ net.openvswitch_is_installed.cache_clear()
+
+ @pytest.mark.parametrize(
+ "expected,which_return", [(True, "/some/path"), (False, None)]
+ )
+ @mock.patch("cloudinit.net.subp.which")
+ def test_mirrors_which_result(self, m_which, expected, which_return):
+ m_which.return_value = which_return
+ assert expected == net.openvswitch_is_installed()
+
+ @mock.patch("cloudinit.net.subp.which")
+ def test_only_calls_which_once(self, m_which):
+ net.openvswitch_is_installed()
+ net.openvswitch_is_installed()
+ assert 1 == m_which.call_count
+
+
+@mock.patch("cloudinit.net.subp.subp", return_value=("", ""))
+class TestGetOVSInternalInterfaces:
+ """Test cloudinit.net.get_ovs_internal_interfaces.
+
+ Uses the ``clear_lru_cache`` local autouse fixture to allow us to test
+ despite the ``lru_cache`` decorator on the unit under test.
+ """
+ @pytest.fixture(autouse=True)
+ def clear_lru_cache(self):
+ net.get_ovs_internal_interfaces.cache_clear()
+
+ def test_command_used(self, m_subp):
+ """Test we use the correct command when we call subp"""
+ net.get_ovs_internal_interfaces()
+
+ assert [
+ mock.call(net.OVS_INTERNAL_INTERFACE_LOOKUP_CMD)
+ ] == m_subp.call_args_list
+
+ def test_subp_contents_split_and_returned(self, m_subp):
+ """Test that the command output is appropriately mangled."""
+ stdout = "iface1\niface2\niface3\n"
+ m_subp.return_value = (stdout, "")
+
+ assert [
+ "iface1",
+ "iface2",
+ "iface3",
+ ] == net.get_ovs_internal_interfaces()
+
+ def test_database_connection_error_handled_gracefully(self, m_subp):
+ """Test that the error indicating OVS is down is handled gracefully."""
+ m_subp.side_effect = ProcessExecutionError(
+ stderr="database connection failed"
+ )
+
+ assert [] == net.get_ovs_internal_interfaces()
+
+ def test_other_errors_raised(self, m_subp):
+ """Test that only database connection errors are handled."""
+ m_subp.side_effect = ProcessExecutionError()
+
+ with pytest.raises(ProcessExecutionError):
+ net.get_ovs_internal_interfaces()
+
+ def test_only_runs_once(self, m_subp):
+ """Test that we cache the value."""
+ net.get_ovs_internal_interfaces()
+ net.get_ovs_internal_interfaces()
+
+ assert 1 == m_subp.call_count
+
+
+@mock.patch("cloudinit.net.get_ovs_internal_interfaces")
+@mock.patch("cloudinit.net.openvswitch_is_installed")
+class TestIsOpenVSwitchInternalInterface:
+ def test_false_if_ovs_not_installed(
+ self, m_openvswitch_is_installed, _m_get_ovs_internal_interfaces
+ ):
+ """Test that OVS' absence returns False."""
+ m_openvswitch_is_installed.return_value = False
+
+ assert not net.is_openvswitch_internal_interface("devname")
+
+ @pytest.mark.parametrize(
+ "detected_interfaces,devname,expected_return",
+ [
+ ([], "devname", False),
+ (["notdevname"], "devname", False),
+ (["devname"], "devname", True),
+ (["some", "other", "devices", "and", "ours"], "ours", True),
+ ],
+ )
+ def test_return_value_based_on_detected_interfaces(
+ self,
+ m_openvswitch_is_installed,
+ m_get_ovs_internal_interfaces,
+ detected_interfaces,
+ devname,
+ expected_return,
+ ):
+ """Test that the detected interfaces are used correctly."""
+ m_openvswitch_is_installed.return_value = True
+ m_get_ovs_internal_interfaces.return_value = detected_interfaces
+ assert expected_return == net.is_openvswitch_internal_interface(
+ devname
+ )
+
+
class TestIsIpAddress:
"""Tests for net.is_ip_address.
diff --git a/cloudinit/sources/helpers/tests/test_openstack.py b/cloudinit/sources/helpers/tests/test_openstack.py
index 2bde1e3f..95fb9743 100644
--- a/cloudinit/sources/helpers/tests/test_openstack.py
+++ b/cloudinit/sources/helpers/tests/test_openstack.py
@@ -1,10 +1,15 @@
# This file is part of cloud-init. See LICENSE file for license information.
# ./cloudinit/sources/helpers/tests/test_openstack.py
+from unittest import mock
from cloudinit.sources.helpers import openstack
from cloudinit.tests import helpers as test_helpers
+@mock.patch(
+ "cloudinit.net.is_openvswitch_internal_interface",
+ mock.Mock(return_value=False)
+)
class TestConvertNetJson(test_helpers.CiTestCase):
def test_phy_types(self):
diff --git a/cloudinit/sources/tests/test_oracle.py b/cloudinit/sources/tests/test_oracle.py
index a7bbdfd9..dcf33b9b 100644
--- a/cloudinit/sources/tests/test_oracle.py
+++ b/cloudinit/sources/tests/test_oracle.py
@@ -173,6 +173,10 @@ class TestIsPlatformViable(test_helpers.CiTestCase):
m_read_dmi_data.assert_has_calls([mock.call('chassis-asset-tag')])
+@mock.patch(
+ "cloudinit.net.is_openvswitch_internal_interface",
+ mock.Mock(return_value=False)
+)
class TestNetworkConfigFromOpcImds:
def test_no_secondary_nics_does_not_mutate_input(self, oracle_ds):
oracle_ds._vnics_data = [{}]
diff --git a/tests/integration_tests/bugs/test_lp1912844.py b/tests/integration_tests/bugs/test_lp1912844.py
new file mode 100644
index 00000000..efafae50
--- /dev/null
+++ b/tests/integration_tests/bugs/test_lp1912844.py
@@ -0,0 +1,103 @@
+"""Integration test for LP: #1912844
+
+cloud-init should ignore OVS-internal interfaces when performing its own
+interface determination: these interfaces are handled fully by OVS, so
+cloud-init should never need to touch them.
+
+This test is a semi-synthetic reproducer for the bug. It uses a similar
+network configuration, tweaked slightly to DHCP in a way that will succeed even
+on "failed" boots. The exact bug doesn't reproduce with the NoCloud
+datasource, because it runs at init-local time (whereas the MAAS datasource,
+from the report, runs only at init (network) time): this means that the
+networking code runs before OVS creates its interfaces (which happens after
+init-local but, of course, before networking is up), and so doesn't generate
+the traceback that they cause. We work around this by calling
+``get_interfaces_by_mac` directly in the test code.
+"""
+import pytest
+
+from tests.integration_tests import random_mac_address
+
+MAC_ADDRESS = random_mac_address()
+
+NETWORK_CONFIG = """\
+bonds:
+ bond0:
+ interfaces:
+ - enp5s0
+ macaddress: {0}
+ mtu: 1500
+bridges:
+ ovs-br:
+ interfaces:
+ - bond0
+ macaddress: {0}
+ mtu: 1500
+ openvswitch: {{}}
+ dhcp4: true
+ethernets:
+ enp5s0:
+ mtu: 1500
+ set-name: enp5s0
+ match:
+ macaddress: {0}
+version: 2
+vlans:
+ ovs-br.100:
+ id: 100
+ link: ovs-br
+ mtu: 1500
+ ovs-br.200:
+ id: 200
+ link: ovs-br
+ mtu: 1500
+""".format(MAC_ADDRESS)
+
+
+SETUP_USER_DATA = """\
+#cloud-config
+packages:
+- openvswitch-switch
+"""
+
+
+@pytest.fixture
+def ovs_enabled_session_cloud(session_cloud):
+ """A session_cloud wrapper, to use an OVS-enabled image for tests.
+
+ This implementation is complicated by wanting to use ``session_cloud``'s
+ snapshot cleanup/retention logic, to avoid having to reimplement that here.
+ """
+ old_snapshot_id = session_cloud.snapshot_id
+ with session_cloud.launch(
+ user_data=SETUP_USER_DATA,
+ ) as instance:
+ instance.instance.clean()
+ session_cloud.snapshot_id = instance.snapshot()
+
+ yield session_cloud
+
+ try:
+ session_cloud.delete_snapshot()
+ finally:
+ session_cloud.snapshot_id = old_snapshot_id
+
+
+@pytest.mark.lxd_vm
+def test_get_interfaces_by_mac_doesnt_traceback(ovs_enabled_session_cloud):
+ """Launch our OVS-enabled image and confirm the bug doesn't reproduce."""
+ launch_kwargs = {
+ "config_dict": {
+ "user.network-config": NETWORK_CONFIG,
+ "volatile.eth0.hwaddr": MAC_ADDRESS,
+ },
+ }
+ with ovs_enabled_session_cloud.launch(
+ launch_kwargs=launch_kwargs,
+ ) as client:
+ result = client.execute(
+ "python3 -c"
+ "'from cloudinit.net import get_interfaces_by_mac;"
+ "get_interfaces_by_mac()'"
+ )
+ assert result.ok
diff --git a/tests/unittests/test_datasource/test_configdrive.py b/tests/unittests/test_datasource/test_configdrive.py
index 6f830cc6..2e2b7847 100644
--- a/tests/unittests/test_datasource/test_configdrive.py
+++ b/tests/unittests/test_datasource/test_configdrive.py
@@ -494,6 +494,10 @@ class TestConfigDriveDataSource(CiTestCase):
self.assertEqual('config-disk (/dev/anything)', cfg_ds.subplatform)
+@mock.patch(
+ "cloudinit.net.is_openvswitch_internal_interface",
+ mock.Mock(return_value=False)
+)
class TestNetJson(CiTestCase):
def setUp(self):
super(TestNetJson, self).setUp()
@@ -654,6 +658,10 @@ class TestNetJson(CiTestCase):
self.assertEqual(out_data, conv_data)
+@mock.patch(
+ "cloudinit.net.is_openvswitch_internal_interface",
+ mock.Mock(return_value=False)
+)
class TestConvertNetworkData(CiTestCase):
with_logs = True
diff --git a/tests/unittests/test_net.py b/tests/unittests/test_net.py
index c67b5fcc..14d3462f 100644
--- a/tests/unittests/test_net.py
+++ b/tests/unittests/test_net.py
@@ -2908,6 +2908,10 @@ iface eth1 inet dhcp
self.assertEqual(0, mock_settle.call_count)
+@mock.patch(
+ "cloudinit.net.is_openvswitch_internal_interface",
+ mock.Mock(return_value=False)
+)
class TestRhelSysConfigRendering(CiTestCase):
with_logs = True
@@ -3592,6 +3596,10 @@ USERCTL=no
expected, self._render_and_read(network_config=v2data))
+@mock.patch(
+ "cloudinit.net.is_openvswitch_internal_interface",
+ mock.Mock(return_value=False)
+)
class TestOpenSuseSysConfigRendering(CiTestCase):
with_logs = True
@@ -5009,6 +5017,10 @@ class TestNetRenderers(CiTestCase):
self.assertTrue(result)
+@mock.patch(
+ "cloudinit.net.is_openvswitch_internal_interface",
+ mock.Mock(return_value=False)
+)
class TestGetInterfaces(CiTestCase):
_data = {'bonds': ['bond1'],
'bridges': ['bridge1'],
@@ -5158,6 +5170,10 @@ class TestInterfaceHasOwnMac(CiTestCase):
self.assertFalse(interface_has_own_mac("eth0"))
+@mock.patch(
+ "cloudinit.net.is_openvswitch_internal_interface",
+ mock.Mock(return_value=False)
+)
class TestGetInterfacesByMac(CiTestCase):
_data = {'bonds': ['bond1'],
'bridges': ['bridge1'],
@@ -5314,6 +5330,10 @@ class TestInterfacesSorting(CiTestCase):
['enp0s3', 'enp0s8', 'enp0s13', 'enp1s2', 'enp2s0', 'enp2s3'])
+@mock.patch(
+ "cloudinit.net.is_openvswitch_internal_interface",
+ mock.Mock(return_value=False)
+)
class TestGetIBHwaddrsByInterface(CiTestCase):
_ib_addr = '80:00:00:28:fe:80:00:00:00:00:00:00:00:11:22:03:00:33:44:56'
--
2.27.0
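For context, a minimal standalone sketch of the filtering behaviour this patch exercises: OVS-internal interfaces are skipped before the MAC-to-name map is built. The helper callables (list_interfaces, read_mac, ovs_internal_interfaces) are hypothetical stand-ins for the real sysfs and ovs-vsctl lookups, not the cloud-init implementation.

def get_interfaces_by_mac(list_interfaces, read_mac, ovs_internal_interfaces):
    """Map MAC -> device name, ignoring OVS-internal interfaces."""
    macs = {}
    ovs_internal = set(ovs_internal_interfaces())
    for devname in list_interfaces():
        if devname in ovs_internal:
            # OVS manages these itself; cloud-init must not touch them.
            continue
        mac = read_mac(devname)
        if mac in macs:
            raise RuntimeError(
                "duplicate mac found: %s (%s and %s)" % (mac, devname, macs[mac]))
        macs[mac] = devname
    return macs

# Example with fake data: the OVS interfaces share eth0's MAC but are skipped.
if __name__ == "__main__":
    print(get_interfaces_by_mac(
        list_interfaces=lambda: ["eth0", "ovs-system", "ovs-br"],
        read_mac=lambda dev: {"eth0": "aa:bb", "ovs-system": "aa:bb",
                              "ovs-br": "aa:bb"}[dev],
        ovs_internal_interfaces=lambda: ["ovs-system", "ovs-br"],
    ))  # {'aa:bb': 'eth0'}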

View File

@ -1,87 +0,0 @@
From bec5fb60ffae3d1137c7261e5571c2751c5dda25 Mon Sep 17 00:00:00 2001
From: James Falcon <TheRealFalcon@users.noreply.github.com>
Date: Mon, 8 Mar 2021 14:09:47 -0600
Subject: Fix requiring device-number on EC2 derivatives (#836)
#342 (70dbccbb) introduced the ability to determine route-metrics based on
the `device-number` provided by the EC2 IMDS. Not all datasources that
subclass EC2 will have this attribute, so allow the old behavior if
`device-number` is not present.
LP: #1917875
---
cloudinit/sources/DataSourceEc2.py | 3 +-
.../unittests/test_datasource/test_aliyun.py | 30 +++++++++++++++++++
2 files changed, 32 insertions(+), 1 deletion(-)
diff --git a/cloudinit/sources/DataSourceEc2.py b/cloudinit/sources/DataSourceEc2.py
index 1930a509..a2105dc7 100644
--- a/cloudinit/sources/DataSourceEc2.py
+++ b/cloudinit/sources/DataSourceEc2.py
@@ -765,13 +765,14 @@ def convert_ec2_metadata_network_config(
netcfg['ethernets'][nic_name] = dev_config
return netcfg
# Apply network config for all nics and any secondary IPv4/v6 addresses
+ nic_idx = 0
for mac, nic_name in sorted(macs_to_nics.items()):
nic_metadata = macs_metadata.get(mac)
if not nic_metadata:
continue # Not a physical nic represented in metadata
# device-number is zero-indexed, we want it 1-indexed for the
# multiplication on the following line
- nic_idx = int(nic_metadata['device-number']) + 1
+ nic_idx = int(nic_metadata.get('device-number', nic_idx)) + 1
dhcp_override = {'route-metric': nic_idx * 100}
dev_config = {'dhcp4': True, 'dhcp4-overrides': dhcp_override,
'dhcp6': False,
diff --git a/tests/unittests/test_datasource/test_aliyun.py b/tests/unittests/test_datasource/test_aliyun.py
index eb2828d5..cab1ac2b 100644
--- a/tests/unittests/test_datasource/test_aliyun.py
+++ b/tests/unittests/test_datasource/test_aliyun.py
@@ -7,6 +7,7 @@ from unittest import mock
from cloudinit import helpers
from cloudinit.sources import DataSourceAliYun as ay
+from cloudinit.sources.DataSourceEc2 import convert_ec2_metadata_network_config
from cloudinit.tests import helpers as test_helpers
DEFAULT_METADATA = {
@@ -183,6 +184,35 @@ class TestAliYunDatasource(test_helpers.HttprettyTestCase):
self.assertEqual(ay.parse_public_keys(public_keys),
public_keys['key-pair-0']['openssh-key'])
+ def test_route_metric_calculated_without_device_number(self):
+ """Test that route-metric code works without `device-number`
+
+ `device-number` is part of EC2 metadata, but not supported on aliyun.
+ Attempting to access it will raise a KeyError.
+
+ LP: #1917875
+ """
+ netcfg = convert_ec2_metadata_network_config(
+ {"interfaces": {"macs": {
+ "06:17:04:d7:26:09": {
+ "interface-id": "eni-e44ef49e",
+ },
+ "06:17:04:d7:26:08": {
+ "interface-id": "eni-e44ef49f",
+ }
+ }}},
+ macs_to_nics={
+ '06:17:04:d7:26:09': 'eth0',
+ '06:17:04:d7:26:08': 'eth1',
+ }
+ )
+
+ met0 = netcfg['ethernets']['eth0']['dhcp4-overrides']['route-metric']
+ met1 = netcfg['ethernets']['eth1']['dhcp4-overrides']['route-metric']
+
+ # route-metric numbers should be 100 apart
+ assert 100 == abs(met0 - met1)
+
class TestIsAliYun(test_helpers.CiTestCase):
ALIYUN_PRODUCT = 'Alibaba Cloud ECS'
--
2.27.0
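A minimal sketch of the fallback this patch describes, assuming a plain dict of IMDS-style NIC metadata: use device-number when present, otherwise fall back to the running enumeration index so route metrics still end up 100 apart. This is an illustration, not the DataSourceEc2 code.

def route_metrics(macs_to_nics, macs_metadata):
    """Return a route-metric per NIC, tolerating missing device-number."""
    metrics = {}
    nic_idx = 0
    for mac, nic_name in sorted(macs_to_nics.items()):
        nic_metadata = macs_metadata.get(mac, {})
        # device-number is zero-indexed; fall back to enumeration order.
        nic_idx = int(nic_metadata.get("device-number", nic_idx)) + 1
        metrics[nic_name] = nic_idx * 100
    return metrics

# Aliyun-style metadata without device-number still yields distinct metrics:
print(route_metrics(
    {"06:17:04:d7:26:09": "eth0", "06:17:04:d7:26:08": "eth1"},
    {"06:17:04:d7:26:09": {}, "06:17:04:d7:26:08": {}},
))  # {'eth1': 100, 'eth0': 200}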

View File

@ -1,295 +0,0 @@
From 2a2a5cdec0de0b96d503f9357c1641043574f90a Mon Sep 17 00:00:00 2001
From: Thomas Stringer <thstring@microsoft.com>
Date: Wed, 3 Mar 2021 11:07:43 -0500
Subject: [PATCH 1/7] Add flexibility to IMDS api-version (#793)
RH-Author: Eduardo Otubo <otubo@redhat.com>
RH-MergeRequest: 45: Add support for userdata on Azure from IMDS
RH-Commit: [1/7] 9aa42581c4ff175fb6f8f4a78d94cac9c9971062
RH-Bugzilla: 2023940
RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com>
Add flexibility to IMDS api-version by having both a desired IMDS
api-version and a minimum api-version. The desired api-version will
be used first, and if that fails it will fall back to the minimum
api-version.
---
cloudinit/sources/DataSourceAzure.py | 113 ++++++++++++++----
tests/unittests/test_datasource/test_azure.py | 42 ++++++-
2 files changed, 129 insertions(+), 26 deletions(-)
diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
index 553b5a7e..de1452ce 100755
--- a/cloudinit/sources/DataSourceAzure.py
+++ b/cloudinit/sources/DataSourceAzure.py
@@ -78,17 +78,15 @@ AGENT_SEED_DIR = '/var/lib/waagent'
# In the event where the IMDS primary server is not
# available, it takes 1s to fallback to the secondary one
IMDS_TIMEOUT_IN_SECONDS = 2
-IMDS_URL = "http://169.254.169.254/metadata/"
-IMDS_VER = "2019-06-01"
-IMDS_VER_PARAM = "api-version={}".format(IMDS_VER)
+IMDS_URL = "http://169.254.169.254/metadata"
+IMDS_VER_MIN = "2019-06-01"
+IMDS_VER_WANT = "2020-09-01"
class metadata_type(Enum):
- compute = "{}instance?{}".format(IMDS_URL, IMDS_VER_PARAM)
- network = "{}instance/network?{}".format(IMDS_URL,
- IMDS_VER_PARAM)
- reprovisiondata = "{}reprovisiondata?{}".format(IMDS_URL,
- IMDS_VER_PARAM)
+ compute = "{}/instance".format(IMDS_URL)
+ network = "{}/instance/network".format(IMDS_URL)
+ reprovisiondata = "{}/reprovisiondata".format(IMDS_URL)
PLATFORM_ENTROPY_SOURCE = "/sys/firmware/acpi/tables/OEM0"
@@ -349,6 +347,8 @@ class DataSourceAzure(sources.DataSource):
self.update_events['network'].add(EventType.BOOT)
self._ephemeral_dhcp_ctx = None
+ self.failed_desired_api_version = False
+
def __str__(self):
root = sources.DataSource.__str__(self)
return "%s [seed=%s]" % (root, self.seed)
@@ -520,8 +520,10 @@ class DataSourceAzure(sources.DataSource):
self._wait_for_all_nics_ready()
ret = self._reprovision()
- imds_md = get_metadata_from_imds(
- self.fallback_interface, retries=10)
+ imds_md = self.get_imds_data_with_api_fallback(
+ self.fallback_interface,
+ retries=10
+ )
(md, userdata_raw, cfg, files) = ret
self.seed = cdev
crawled_data.update({
@@ -652,6 +654,57 @@ class DataSourceAzure(sources.DataSource):
self.ds_cfg['data_dir'], crawled_data['files'], dirmode=0o700)
return True
+ @azure_ds_telemetry_reporter
+ def get_imds_data_with_api_fallback(
+ self,
+ fallback_nic,
+ retries,
+ md_type=metadata_type.compute):
+ """
+ Wrapper for get_metadata_from_imds so that we can have flexibility
+ in which IMDS api-version we use. If a particular instance of IMDS
+ does not have the api version that is desired, we want to make
+ this fault tolerant and fall back to a known-good minimum api
+ version.
+ """
+
+ if not self.failed_desired_api_version:
+ for _ in range(retries):
+ try:
+ LOG.info(
+ "Attempting IMDS api-version: %s",
+ IMDS_VER_WANT
+ )
+ return get_metadata_from_imds(
+ fallback_nic=fallback_nic,
+ retries=0,
+ md_type=md_type,
+ api_version=IMDS_VER_WANT
+ )
+ except UrlError as err:
+ LOG.info(
+ "UrlError with IMDS api-version: %s",
+ IMDS_VER_WANT
+ )
+ if err.code == 400:
+ log_msg = "Fall back to IMDS api-version: {}".format(
+ IMDS_VER_MIN
+ )
+ report_diagnostic_event(
+ log_msg,
+ logger_func=LOG.info
+ )
+ self.failed_desired_api_version = True
+ break
+
+ LOG.info("Using IMDS api-version: %s", IMDS_VER_MIN)
+ return get_metadata_from_imds(
+ fallback_nic=fallback_nic,
+ retries=retries,
+ md_type=md_type,
+ api_version=IMDS_VER_MIN
+ )
+
def device_name_to_device(self, name):
return self.ds_cfg['disk_aliases'].get(name)
@@ -880,10 +933,11 @@ class DataSourceAzure(sources.DataSource):
# primary nic is being attached first helps here. Otherwise each nic
# could add several seconds of delay.
try:
- imds_md = get_metadata_from_imds(
+ imds_md = self.get_imds_data_with_api_fallback(
ifname,
5,
- metadata_type.network)
+ metadata_type.network
+ )
except Exception as e:
LOG.warning(
"Failed to get network metadata using nic %s. Attempt to "
@@ -1017,7 +1071,10 @@ class DataSourceAzure(sources.DataSource):
def _poll_imds(self):
"""Poll IMDS for the new provisioning data until we get a valid
response. Then return the returned JSON object."""
- url = metadata_type.reprovisiondata.value
+ url = "{}?api-version={}".format(
+ metadata_type.reprovisiondata.value,
+ IMDS_VER_MIN
+ )
headers = {"Metadata": "true"}
nl_sock = None
report_ready = bool(not os.path.isfile(REPORTED_READY_MARKER_FILE))
@@ -2059,7 +2116,8 @@ def _generate_network_config_from_fallback_config() -> dict:
@azure_ds_telemetry_reporter
def get_metadata_from_imds(fallback_nic,
retries,
- md_type=metadata_type.compute):
+ md_type=metadata_type.compute,
+ api_version=IMDS_VER_MIN):
"""Query Azure's instance metadata service, returning a dictionary.
If network is not up, setup ephemeral dhcp on fallback_nic to talk to the
@@ -2069,13 +2127,16 @@ def get_metadata_from_imds(fallback_nic,
@param fallback_nic: String. The name of the nic which requires active
network in order to query IMDS.
@param retries: The number of retries of the IMDS_URL.
+ @param md_type: Metadata type for IMDS request.
+ @param api_version: IMDS api-version to use in the request.
@return: A dict of instance metadata containing compute and network
info.
"""
kwargs = {'logfunc': LOG.debug,
'msg': 'Crawl of Azure Instance Metadata Service (IMDS)',
- 'func': _get_metadata_from_imds, 'args': (retries, md_type,)}
+ 'func': _get_metadata_from_imds,
+ 'args': (retries, md_type, api_version,)}
if net.is_up(fallback_nic):
return util.log_time(**kwargs)
else:
@@ -2091,20 +2152,26 @@ def get_metadata_from_imds(fallback_nic,
@azure_ds_telemetry_reporter
-def _get_metadata_from_imds(retries, md_type=metadata_type.compute):
-
- url = md_type.value
+def _get_metadata_from_imds(
+ retries,
+ md_type=metadata_type.compute,
+ api_version=IMDS_VER_MIN):
+ url = "{}?api-version={}".format(md_type.value, api_version)
headers = {"Metadata": "true"}
try:
response = readurl(
url, timeout=IMDS_TIMEOUT_IN_SECONDS, headers=headers,
retries=retries, exception_cb=retry_on_url_exc)
except Exception as e:
- report_diagnostic_event(
- 'Ignoring IMDS instance metadata. '
- 'Get metadata from IMDS failed: %s' % e,
- logger_func=LOG.warning)
- return {}
+ # pylint:disable=no-member
+ if isinstance(e, UrlError) and e.code == 400:
+ raise
+ else:
+ report_diagnostic_event(
+ 'Ignoring IMDS instance metadata. '
+ 'Get metadata from IMDS failed: %s' % e,
+ logger_func=LOG.warning)
+ return {}
try:
from json.decoder import JSONDecodeError
json_decode_error = JSONDecodeError
diff --git a/tests/unittests/test_datasource/test_azure.py b/tests/unittests/test_datasource/test_azure.py
index f597c723..dedebeb1 100644
--- a/tests/unittests/test_datasource/test_azure.py
+++ b/tests/unittests/test_datasource/test_azure.py
@@ -408,7 +408,9 @@ class TestGetMetadataFromIMDS(HttprettyTestCase):
def setUp(self):
super(TestGetMetadataFromIMDS, self).setUp()
- self.network_md_url = dsaz.IMDS_URL + "instance?api-version=2019-06-01"
+ self.network_md_url = "{}/instance?api-version=2019-06-01".format(
+ dsaz.IMDS_URL
+ )
@mock.patch(MOCKPATH + 'readurl')
@mock.patch(MOCKPATH + 'EphemeralDHCPv4', autospec=True)
@@ -518,7 +520,7 @@ class TestGetMetadataFromIMDS(HttprettyTestCase):
"""Return empty dict when IMDS network metadata is absent."""
httpretty.register_uri(
httpretty.GET,
- dsaz.IMDS_URL + 'instance?api-version=2017-12-01',
+ dsaz.IMDS_URL + '/instance?api-version=2017-12-01',
body={}, status=404)
m_net_is_up.return_value = True # skips dhcp
@@ -1877,6 +1879,40 @@ scbus-1 on xpt0 bus 0
ssh_keys = dsrc.get_public_ssh_keys()
self.assertEqual(ssh_keys, ['key2'])
+ @mock.patch(MOCKPATH + 'get_metadata_from_imds')
+ def test_imds_api_version_wanted_nonexistent(
+ self,
+ m_get_metadata_from_imds):
+ def get_metadata_from_imds_side_eff(*args, **kwargs):
+ if kwargs['api_version'] == dsaz.IMDS_VER_WANT:
+ raise url_helper.UrlError("No IMDS version", code=400)
+ return NETWORK_METADATA
+ m_get_metadata_from_imds.side_effect = get_metadata_from_imds_side_eff
+ sys_cfg = {'datasource': {'Azure': {'apply_network_config': True}}}
+ odata = {'HostName': "myhost", 'UserName': "myuser"}
+ data = {
+ 'ovfcontent': construct_valid_ovf_env(data=odata),
+ 'sys_cfg': sys_cfg
+ }
+ dsrc = self._get_ds(data)
+ dsrc.get_data()
+ self.assertIsNotNone(dsrc.metadata)
+ self.assertTrue(dsrc.failed_desired_api_version)
+
+ @mock.patch(
+ MOCKPATH + 'get_metadata_from_imds', return_value=NETWORK_METADATA)
+ def test_imds_api_version_wanted_exists(self, m_get_metadata_from_imds):
+ sys_cfg = {'datasource': {'Azure': {'apply_network_config': True}}}
+ odata = {'HostName': "myhost", 'UserName': "myuser"}
+ data = {
+ 'ovfcontent': construct_valid_ovf_env(data=odata),
+ 'sys_cfg': sys_cfg
+ }
+ dsrc = self._get_ds(data)
+ dsrc.get_data()
+ self.assertIsNotNone(dsrc.metadata)
+ self.assertFalse(dsrc.failed_desired_api_version)
+
class TestAzureBounce(CiTestCase):
@@ -2657,7 +2693,7 @@ class TestPreprovisioningHotAttachNics(CiTestCase):
@mock.patch(MOCKPATH + 'DataSourceAzure.wait_for_link_up')
@mock.patch('cloudinit.sources.helpers.netlink.wait_for_nic_attach_event')
@mock.patch('cloudinit.sources.net.find_fallback_nic')
- @mock.patch(MOCKPATH + 'get_metadata_from_imds')
+ @mock.patch(MOCKPATH + 'DataSourceAzure.get_imds_data_with_api_fallback')
@mock.patch(MOCKPATH + 'EphemeralDHCPv4')
@mock.patch(MOCKPATH + 'DataSourceAzure._wait_for_nic_detach')
@mock.patch('os.path.isfile')
--
2.27.0
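A minimal sketch of the desired/minimum api-version fallback described above, with a hypothetical fetch(api_version) callable standing in for the real IMDS request; a ValueError plays the role of the HTTP 400 that marks the wanted version as unsupported.

IMDS_VER_MIN = "2019-06-01"
IMDS_VER_WANT = "2020-09-01"

class ImdsClient:
    def __init__(self, fetch):
        self._fetch = fetch
        self._wanted_version_failed = False

    def get_metadata(self):
        if not self._wanted_version_failed:
            try:
                return self._fetch(IMDS_VER_WANT)
            except ValueError:
                # Stand-in for the 400 case: remember the failure so later
                # calls skip the wanted version and go straight to the minimum.
                self._wanted_version_failed = True
        return self._fetch(IMDS_VER_MIN)

def fake_fetch(api_version):
    if api_version != IMDS_VER_MIN:
        raise ValueError("400: unsupported api-version %s" % api_version)
    return {"compute": {}, "api-version": api_version}

client = ImdsClient(fake_fetch)
print(client.get_metadata()["api-version"])  # 2019-06-01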

View File

@ -1,62 +0,0 @@
From f73d2460e5ad205a1cd2d74a73c2d1308265d9f9 Mon Sep 17 00:00:00 2001
From: Miroslav Rezanina <mrezanin@redhat.com>
Date: Wed, 18 May 2022 05:23:48 -0400
Subject: [PATCH] Add \r\n check for SSH keys in Azure (#889)
RH-Author: Miroslav Rezanina <mrezanin@redhat.com>
RH-MergeRequest: 64: Properly handle \r\n in SSH keys in Azure
RH-Commit: [1/1] c0868258fd63f6c531acd8da81e0494a8412d5ea (mrezanin/src_rhel_cloud-init)
RH-Bugzilla: 2088028
RH-Acked-by: xiachen <xiachen@redhat.com>
RH-Acked-by: Eduardo Otubo <otubo@redhat.com>
RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
See https://bugs.launchpad.net/cloud-init/+bug/1910835
(cherry picked from commit f17f78fa9d28e62793a5f2c7109fc29eeffb0c89)
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
---
cloudinit/sources/DataSourceAzure.py | 3 +++
tests/unittests/test_datasource/test_azure.py | 12 ++++++++++++
2 files changed, 15 insertions(+)
diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
index a66f023d..247284ad 100755
--- a/cloudinit/sources/DataSourceAzure.py
+++ b/cloudinit/sources/DataSourceAzure.py
@@ -1551,6 +1551,9 @@ def _key_is_openssh_formatted(key):
"""
Validate whether or not the key is OpenSSH-formatted.
"""
+ # See https://bugs.launchpad.net/cloud-init/+bug/1910835
+ if '\r\n' in key.strip():
+ return False
parser = ssh_util.AuthKeyLineParser()
try:
diff --git a/tests/unittests/test_datasource/test_azure.py b/tests/unittests/test_datasource/test_azure.py
index f8433690..742d1faa 100644
--- a/tests/unittests/test_datasource/test_azure.py
+++ b/tests/unittests/test_datasource/test_azure.py
@@ -1764,6 +1764,18 @@ scbus-1 on xpt0 bus 0
self.assertEqual(ssh_keys, ["ssh-rsa key1"])
self.assertEqual(m_parse_certificates.call_count, 0)
+ def test_key_without_crlf_valid(self):
+ test_key = 'ssh-rsa somerandomkeystuff some comment'
+ assert True is dsaz._key_is_openssh_formatted(test_key)
+
+ def test_key_with_crlf_invalid(self):
+ test_key = 'ssh-rsa someran\r\ndomkeystuff some comment'
+ assert False is dsaz._key_is_openssh_formatted(test_key)
+
+ def test_key_endswith_crlf_valid(self):
+ test_key = 'ssh-rsa somerandomkeystuff some comment\r\n'
+ assert True is dsaz._key_is_openssh_formatted(test_key)
+
@mock.patch(
'cloudinit.sources.helpers.azure.OpenSSLManager.parse_certificates')
@mock.patch(MOCKPATH + 'get_metadata_from_imds')
--
2.31.1
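A minimal standalone sketch of the added check: reject any key with an interior CR/LF before attempting OpenSSH parsing. The trivial parse step below is a placeholder for ssh_util.AuthKeyLineParser, not the real parser.

def key_is_openssh_formatted(key):
    """Reject keys containing an embedded CR/LF, then do a cheap format check."""
    if "\r\n" in key.strip():
        return False
    parts = key.split()
    return len(parts) >= 2 and parts[0].startswith("ssh-")

print(key_is_openssh_formatted("ssh-rsa somerandomkeystuff some comment"))      # True
print(key_is_openssh_formatted("ssh-rsa someran\r\ndomkeystuff some comment"))  # False
print(key_is_openssh_formatted("ssh-rsa somerandomkeystuff some comment\r\n"))  # True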

View File

@ -1,397 +0,0 @@
From 3ec4ddbc595c5fe781b3dc501631d23569849818 Mon Sep 17 00:00:00 2001
From: Thomas Stringer <thstring@microsoft.com>
Date: Mon, 26 Apr 2021 09:41:38 -0400
Subject: [PATCH 5/7] Azure: Retrieve username and hostname from IMDS (#865)
RH-Author: Eduardo Otubo <otubo@redhat.com>
RH-MergeRequest: 45: Add support for userdata on Azure from IMDS
RH-Commit: [5/7] 6fab7ef28c7fd340bda4f82dbf828f10716cb3f1
RH-Bugzilla: 2023940
RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com>
This change allows us to retrieve the username and hostname from
IMDS instead of having to rely on the mounted OVF.
---
cloudinit/sources/DataSourceAzure.py | 149 ++++++++++++++----
tests/unittests/test_datasource/test_azure.py | 87 +++++++++-
2 files changed, 205 insertions(+), 31 deletions(-)
diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
index 39e67c4f..6d7954ee 100755
--- a/cloudinit/sources/DataSourceAzure.py
+++ b/cloudinit/sources/DataSourceAzure.py
@@ -5,6 +5,7 @@
# This file is part of cloud-init. See LICENSE file for license information.
import base64
+from collections import namedtuple
import contextlib
import crypt
from functools import partial
@@ -25,6 +26,7 @@ from cloudinit.net import device_driver
from cloudinit.net.dhcp import EphemeralDHCPv4
from cloudinit import sources
from cloudinit.sources.helpers import netlink
+from cloudinit import ssh_util
from cloudinit import subp
from cloudinit.url_helper import UrlError, readurl, retry_on_url_exc
from cloudinit import util
@@ -80,7 +82,12 @@ AGENT_SEED_DIR = '/var/lib/waagent'
IMDS_TIMEOUT_IN_SECONDS = 2
IMDS_URL = "http://169.254.169.254/metadata"
IMDS_VER_MIN = "2019-06-01"
-IMDS_VER_WANT = "2020-09-01"
+IMDS_VER_WANT = "2020-10-01"
+
+
+# This holds SSH key data including if the source was
+# from IMDS, as well as the SSH key data itself.
+SSHKeys = namedtuple("SSHKeys", ("keys_from_imds", "ssh_keys"))
class metadata_type(Enum):
@@ -391,6 +398,8 @@ class DataSourceAzure(sources.DataSource):
"""Return the subplatform metadata source details."""
if self.seed.startswith('/dev'):
subplatform_type = 'config-disk'
+ elif self.seed.lower() == 'imds':
+ subplatform_type = 'imds'
else:
subplatform_type = 'seed-dir'
return '%s (%s)' % (subplatform_type, self.seed)
@@ -433,9 +442,11 @@ class DataSourceAzure(sources.DataSource):
found = None
reprovision = False
+ ovf_is_accessible = True
reprovision_after_nic_attach = False
for cdev in candidates:
try:
+ LOG.debug("cdev: %s", cdev)
if cdev == "IMDS":
ret = None
reprovision = True
@@ -462,8 +473,18 @@ class DataSourceAzure(sources.DataSource):
raise sources.InvalidMetaDataException(msg)
except util.MountFailedError:
report_diagnostic_event(
- '%s was not mountable' % cdev, logger_func=LOG.warning)
- continue
+ '%s was not mountable' % cdev, logger_func=LOG.debug)
+ cdev = 'IMDS'
+ ovf_is_accessible = False
+ empty_md = {'local-hostname': ''}
+ empty_cfg = dict(
+ system_info=dict(
+ default_user=dict(
+ name=''
+ )
+ )
+ )
+ ret = (empty_md, '', empty_cfg, {})
report_diagnostic_event("Found provisioning metadata in %s" % cdev,
logger_func=LOG.debug)
@@ -490,6 +511,10 @@ class DataSourceAzure(sources.DataSource):
self.fallback_interface,
retries=10
)
+ if not imds_md and not ovf_is_accessible:
+ msg = 'No OVF or IMDS available'
+ report_diagnostic_event(msg)
+ raise sources.InvalidMetaDataException(msg)
(md, userdata_raw, cfg, files) = ret
self.seed = cdev
crawled_data.update({
@@ -498,6 +523,21 @@ class DataSourceAzure(sources.DataSource):
'metadata': util.mergemanydict(
[md, {'imds': imds_md}]),
'userdata_raw': userdata_raw})
+ imds_username = _username_from_imds(imds_md)
+ imds_hostname = _hostname_from_imds(imds_md)
+ imds_disable_password = _disable_password_from_imds(imds_md)
+ if imds_username:
+ LOG.debug('Username retrieved from IMDS: %s', imds_username)
+ cfg['system_info']['default_user']['name'] = imds_username
+ if imds_hostname:
+ LOG.debug('Hostname retrieved from IMDS: %s', imds_hostname)
+ crawled_data['metadata']['local-hostname'] = imds_hostname
+ if imds_disable_password:
+ LOG.debug(
+ 'Disable password retrieved from IMDS: %s',
+ imds_disable_password
+ )
+ crawled_data['metadata']['disable_password'] = imds_disable_password # noqa: E501
found = cdev
report_diagnostic_event(
@@ -676,6 +716,13 @@ class DataSourceAzure(sources.DataSource):
@azure_ds_telemetry_reporter
def get_public_ssh_keys(self):
+ """
+ Retrieve public SSH keys.
+ """
+
+ return self._get_public_ssh_keys_and_source().ssh_keys
+
+ def _get_public_ssh_keys_and_source(self):
"""
Try to get the ssh keys from IMDS first, and if that fails
(i.e. IMDS is unavailable) then fallback to getting the ssh
@@ -685,30 +732,50 @@ class DataSourceAzure(sources.DataSource):
advantage, so this is a strong preference. But we must keep
OVF as a second option for environments that don't have IMDS.
"""
+
LOG.debug('Retrieving public SSH keys')
ssh_keys = []
+ keys_from_imds = True
+ LOG.debug('Attempting to get SSH keys from IMDS')
try:
- raise KeyError(
- "Not using public SSH keys from IMDS"
- )
- # pylint:disable=unreachable
ssh_keys = [
public_key['keyData']
for public_key
in self.metadata['imds']['compute']['publicKeys']
]
- LOG.debug('Retrieved SSH keys from IMDS')
+ for key in ssh_keys:
+ if not _key_is_openssh_formatted(key=key):
+ keys_from_imds = False
+ break
+
+ if not keys_from_imds:
+ log_msg = 'Keys not in OpenSSH format, using OVF'
+ else:
+ log_msg = 'Retrieved {} keys from IMDS'.format(
+ len(ssh_keys)
+ if ssh_keys is not None
+ else 0
+ )
except KeyError:
log_msg = 'Unable to get keys from IMDS, falling back to OVF'
+ keys_from_imds = False
+ finally:
report_diagnostic_event(log_msg, logger_func=LOG.debug)
+
+ if not keys_from_imds:
+ LOG.debug('Attempting to get SSH keys from OVF')
try:
ssh_keys = self.metadata['public-keys']
- LOG.debug('Retrieved keys from OVF')
+ log_msg = 'Retrieved {} keys from OVF'.format(len(ssh_keys))
except KeyError:
log_msg = 'No keys available from OVF'
+ finally:
report_diagnostic_event(log_msg, logger_func=LOG.debug)
- return ssh_keys
+ return SSHKeys(
+ keys_from_imds=keys_from_imds,
+ ssh_keys=ssh_keys
+ )
def get_config_obj(self):
return self.cfg
@@ -1325,30 +1392,21 @@ class DataSourceAzure(sources.DataSource):
self.bounce_network_with_azure_hostname()
pubkey_info = None
- try:
- raise KeyError(
- "Not using public SSH keys from IMDS"
- )
- # pylint:disable=unreachable
- public_keys = self.metadata['imds']['compute']['publicKeys']
- LOG.debug(
- 'Successfully retrieved %s key(s) from IMDS',
- len(public_keys)
- if public_keys is not None
+ ssh_keys_and_source = self._get_public_ssh_keys_and_source()
+
+ if not ssh_keys_and_source.keys_from_imds:
+ pubkey_info = self.cfg.get('_pubkeys', None)
+ log_msg = 'Retrieved {} fingerprints from OVF'.format(
+ len(pubkey_info)
+ if pubkey_info is not None
else 0
)
- except KeyError:
- LOG.debug(
- 'Unable to retrieve SSH keys from IMDS during '
- 'negotiation, falling back to OVF'
- )
- pubkey_info = self.cfg.get('_pubkeys', None)
+ report_diagnostic_event(log_msg, logger_func=LOG.debug)
metadata_func = partial(get_metadata_from_fabric,
fallback_lease_file=self.
dhclient_lease_file,
- pubkey_info=pubkey_info,
- iso_dev=self.iso_dev)
+ pubkey_info=pubkey_info)
LOG.debug("negotiating with fabric via agent command %s",
self.ds_cfg['agent_command'])
@@ -1404,6 +1462,41 @@ class DataSourceAzure(sources.DataSource):
return self.metadata.get('imds', {}).get('compute', {}).get('location')
+def _username_from_imds(imds_data):
+ try:
+ return imds_data['compute']['osProfile']['adminUsername']
+ except KeyError:
+ return None
+
+
+def _hostname_from_imds(imds_data):
+ try:
+ return imds_data['compute']['osProfile']['computerName']
+ except KeyError:
+ return None
+
+
+def _disable_password_from_imds(imds_data):
+ try:
+ return imds_data['compute']['osProfile']['disablePasswordAuthentication'] == 'true' # noqa: E501
+ except KeyError:
+ return None
+
+
+def _key_is_openssh_formatted(key):
+ """
+ Validate whether or not the key is OpenSSH-formatted.
+ """
+
+ parser = ssh_util.AuthKeyLineParser()
+ try:
+ akl = parser.parse(key)
+ except TypeError:
+ return False
+
+ return akl.keytype is not None
+
+
def _partitions_on_device(devpath, maxnum=16):
# return a list of tuples (ptnum, path) for each part on devpath
for suff in ("-part", "p", ""):
diff --git a/tests/unittests/test_datasource/test_azure.py b/tests/unittests/test_datasource/test_azure.py
index 320fa857..d9817d84 100644
--- a/tests/unittests/test_datasource/test_azure.py
+++ b/tests/unittests/test_datasource/test_azure.py
@@ -108,7 +108,7 @@ NETWORK_METADATA = {
"zone": "",
"publicKeys": [
{
- "keyData": "key1",
+ "keyData": "ssh-rsa key1",
"path": "path1"
}
]
@@ -1761,8 +1761,29 @@ scbus-1 on xpt0 bus 0
dsrc.get_data()
dsrc.setup(True)
ssh_keys = dsrc.get_public_ssh_keys()
- # Temporarily alter this test so that SSH public keys
- # from IMDS are *not* going to be in use to fix a regression.
+ self.assertEqual(ssh_keys, ["ssh-rsa key1"])
+ self.assertEqual(m_parse_certificates.call_count, 0)
+
+ @mock.patch(
+ 'cloudinit.sources.helpers.azure.OpenSSLManager.parse_certificates')
+ @mock.patch(MOCKPATH + 'get_metadata_from_imds')
+ def test_get_public_ssh_keys_with_no_openssh_format(
+ self,
+ m_get_metadata_from_imds,
+ m_parse_certificates):
+ imds_data = copy.deepcopy(NETWORK_METADATA)
+ imds_data['compute']['publicKeys'][0]['keyData'] = 'no-openssh-format'
+ m_get_metadata_from_imds.return_value = imds_data
+ sys_cfg = {'datasource': {'Azure': {'apply_network_config': True}}}
+ odata = {'HostName': "myhost", 'UserName': "myuser"}
+ data = {
+ 'ovfcontent': construct_valid_ovf_env(data=odata),
+ 'sys_cfg': sys_cfg
+ }
+ dsrc = self._get_ds(data)
+ dsrc.get_data()
+ dsrc.setup(True)
+ ssh_keys = dsrc.get_public_ssh_keys()
self.assertEqual(ssh_keys, [])
self.assertEqual(m_parse_certificates.call_count, 0)
@@ -1818,6 +1839,66 @@ scbus-1 on xpt0 bus 0
self.assertIsNotNone(dsrc.metadata)
self.assertFalse(dsrc.failed_desired_api_version)
+ @mock.patch(MOCKPATH + 'get_metadata_from_imds')
+ def test_hostname_from_imds(self, m_get_metadata_from_imds):
+ sys_cfg = {'datasource': {'Azure': {'apply_network_config': True}}}
+ odata = {'HostName': "myhost", 'UserName': "myuser"}
+ data = {
+ 'ovfcontent': construct_valid_ovf_env(data=odata),
+ 'sys_cfg': sys_cfg
+ }
+ imds_data_with_os_profile = copy.deepcopy(NETWORK_METADATA)
+ imds_data_with_os_profile["compute"]["osProfile"] = dict(
+ adminUsername="username1",
+ computerName="hostname1",
+ disablePasswordAuthentication="true"
+ )
+ m_get_metadata_from_imds.return_value = imds_data_with_os_profile
+ dsrc = self._get_ds(data)
+ dsrc.get_data()
+ self.assertEqual(dsrc.metadata["local-hostname"], "hostname1")
+
+ @mock.patch(MOCKPATH + 'get_metadata_from_imds')
+ def test_username_from_imds(self, m_get_metadata_from_imds):
+ sys_cfg = {'datasource': {'Azure': {'apply_network_config': True}}}
+ odata = {'HostName': "myhost", 'UserName': "myuser"}
+ data = {
+ 'ovfcontent': construct_valid_ovf_env(data=odata),
+ 'sys_cfg': sys_cfg
+ }
+ imds_data_with_os_profile = copy.deepcopy(NETWORK_METADATA)
+ imds_data_with_os_profile["compute"]["osProfile"] = dict(
+ adminUsername="username1",
+ computerName="hostname1",
+ disablePasswordAuthentication="true"
+ )
+ m_get_metadata_from_imds.return_value = imds_data_with_os_profile
+ dsrc = self._get_ds(data)
+ dsrc.get_data()
+ self.assertEqual(
+ dsrc.cfg["system_info"]["default_user"]["name"],
+ "username1"
+ )
+
+ @mock.patch(MOCKPATH + 'get_metadata_from_imds')
+ def test_disable_password_from_imds(self, m_get_metadata_from_imds):
+ sys_cfg = {'datasource': {'Azure': {'apply_network_config': True}}}
+ odata = {'HostName': "myhost", 'UserName': "myuser"}
+ data = {
+ 'ovfcontent': construct_valid_ovf_env(data=odata),
+ 'sys_cfg': sys_cfg
+ }
+ imds_data_with_os_profile = copy.deepcopy(NETWORK_METADATA)
+ imds_data_with_os_profile["compute"]["osProfile"] = dict(
+ adminUsername="username1",
+ computerName="hostname1",
+ disablePasswordAuthentication="true"
+ )
+ m_get_metadata_from_imds.return_value = imds_data_with_os_profile
+ dsrc = self._get_ds(data)
+ dsrc.get_data()
+ self.assertTrue(dsrc.metadata["disable_password"])
+
class TestAzureBounce(CiTestCase):
--
2.27.0
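A minimal sketch of the osProfile extraction this patch introduces, shown as standalone helpers: return the IMDS value when present and None otherwise, so callers can fall back to the OVF-provided username and hostname.

def username_from_imds(imds_data):
    try:
        return imds_data["compute"]["osProfile"]["adminUsername"]
    except KeyError:
        return None

def hostname_from_imds(imds_data):
    try:
        return imds_data["compute"]["osProfile"]["computerName"]
    except KeyError:
        return None

imds_md = {"compute": {"osProfile": {"adminUsername": "username1",
                                     "computerName": "hostname1"}}}
print(username_from_imds(imds_md), hostname_from_imds(imds_md))  # username1 hostname1
print(username_from_imds({"compute": {}}))                       # None -> use OVF value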

View File

@ -1,315 +0,0 @@
From ca5b83cee7b45bf56eec258db739cb5fe51b3231 Mon Sep 17 00:00:00 2001
From: aswinrajamannar <39812128+aswinrajamannar@users.noreply.github.com>
Date: Mon, 26 Apr 2021 07:28:39 -0700
Subject: [PATCH 6/7] Azure: Retry net metadata during nic attach for
non-timeout errs (#878)
RH-Author: Eduardo Otubo <otubo@redhat.com>
RH-MergeRequest: 45: Add support for userdata on Azure from IMDS
RH-Commit: [6/7] 4e6e44f017d5ffcb72ac8959a94f80c71fef9560
RH-Bugzilla: 2023940
RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com>
When network interfaces are hot-attached to the VM, attempting to get
network metadata might return 410 (or 500, 503 etc) because the info
is not yet available. In those cases, we retry getting the metadata
before giving up. The only case where we move on and wait for more nic
attach events is when the call times out despite retries, which means the
interface is likely not a primary interface.
---
cloudinit/sources/DataSourceAzure.py | 65 +++++++++++--
tests/unittests/test_datasource/test_azure.py | 95 ++++++++++++++++---
2 files changed, 140 insertions(+), 20 deletions(-)
diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
index 6d7954ee..d0be6d84 100755
--- a/cloudinit/sources/DataSourceAzure.py
+++ b/cloudinit/sources/DataSourceAzure.py
@@ -17,6 +17,7 @@ from time import sleep
from xml.dom import minidom
import xml.etree.ElementTree as ET
from enum import Enum
+import requests
from cloudinit import dmi
from cloudinit import log as logging
@@ -665,7 +666,9 @@ class DataSourceAzure(sources.DataSource):
self,
fallback_nic,
retries,
- md_type=metadata_type.compute):
+ md_type=metadata_type.compute,
+ exc_cb=retry_on_url_exc,
+ infinite=False):
"""
Wrapper for get_metadata_from_imds so that we can have flexibility
in which IMDS api-version we use. If a particular instance of IMDS
@@ -685,7 +688,8 @@ class DataSourceAzure(sources.DataSource):
fallback_nic=fallback_nic,
retries=0,
md_type=md_type,
- api_version=IMDS_VER_WANT
+ api_version=IMDS_VER_WANT,
+ exc_cb=exc_cb
)
except UrlError as err:
LOG.info(
@@ -708,7 +712,9 @@ class DataSourceAzure(sources.DataSource):
fallback_nic=fallback_nic,
retries=retries,
md_type=md_type,
- api_version=IMDS_VER_MIN
+ api_version=IMDS_VER_MIN,
+ exc_cb=exc_cb,
+ infinite=infinite
)
def device_name_to_device(self, name):
@@ -938,6 +944,9 @@ class DataSourceAzure(sources.DataSource):
is_primary = False
expected_nic_count = -1
imds_md = None
+ metadata_poll_count = 0
+ metadata_logging_threshold = 1
+ metadata_timeout_count = 0
# For now, only a VM's primary NIC can contact IMDS and WireServer. If
# DHCP fails for a NIC, we have no mechanism to determine if the NIC is
@@ -962,14 +971,48 @@ class DataSourceAzure(sources.DataSource):
% (ifname, e), logger_func=LOG.error)
raise
+ # Retry polling network metadata for a limited duration only when the
+ # calls fail due to timeout. This is because the platform drops packets
+ # going towards IMDS when it is not a primary nic. If the calls fail
+ # due to other issues like 410, 503 etc, then it means we are primary
+ # but IMDS service is unavailable at the moment. Retry indefinitely in
+ # those cases since we cannot move on without the network metadata.
+ def network_metadata_exc_cb(msg, exc):
+ nonlocal metadata_timeout_count, metadata_poll_count
+ nonlocal metadata_logging_threshold
+
+ metadata_poll_count = metadata_poll_count + 1
+
+ # Log when needed but back off exponentially to avoid exploding
+ # the log file.
+ if metadata_poll_count >= metadata_logging_threshold:
+ metadata_logging_threshold *= 2
+ report_diagnostic_event(
+ "Ran into exception when attempting to reach %s "
+ "after %d polls." % (msg, metadata_poll_count),
+ logger_func=LOG.error)
+
+ if isinstance(exc, UrlError):
+ report_diagnostic_event("poll IMDS with %s failed. "
+ "Exception: %s and code: %s" %
+ (msg, exc.cause, exc.code),
+ logger_func=LOG.error)
+
+ if exc.cause and isinstance(exc.cause, requests.Timeout):
+ metadata_timeout_count = metadata_timeout_count + 1
+ return (metadata_timeout_count <= 10)
+ return True
+
# Primary nic detection will be optimized in the future. The fact that
# primary nic is being attached first helps here. Otherwise each nic
# could add several seconds of delay.
try:
imds_md = self.get_imds_data_with_api_fallback(
ifname,
- 5,
- metadata_type.network
+ 0,
+ metadata_type.network,
+ network_metadata_exc_cb,
+ True
)
except Exception as e:
LOG.warning(
@@ -2139,7 +2182,9 @@ def _generate_network_config_from_fallback_config() -> dict:
def get_metadata_from_imds(fallback_nic,
retries,
md_type=metadata_type.compute,
- api_version=IMDS_VER_MIN):
+ api_version=IMDS_VER_MIN,
+ exc_cb=retry_on_url_exc,
+ infinite=False):
"""Query Azure's instance metadata service, returning a dictionary.
If network is not up, setup ephemeral dhcp on fallback_nic to talk to the
@@ -2158,7 +2203,7 @@ def get_metadata_from_imds(fallback_nic,
kwargs = {'logfunc': LOG.debug,
'msg': 'Crawl of Azure Instance Metadata Service (IMDS)',
'func': _get_metadata_from_imds,
- 'args': (retries, md_type, api_version,)}
+ 'args': (retries, exc_cb, md_type, api_version, infinite)}
if net.is_up(fallback_nic):
return util.log_time(**kwargs)
else:
@@ -2176,14 +2221,16 @@ def get_metadata_from_imds(fallback_nic,
@azure_ds_telemetry_reporter
def _get_metadata_from_imds(
retries,
+ exc_cb,
md_type=metadata_type.compute,
- api_version=IMDS_VER_MIN):
+ api_version=IMDS_VER_MIN,
+ infinite=False):
url = "{}?api-version={}".format(md_type.value, api_version)
headers = {"Metadata": "true"}
try:
response = readurl(
url, timeout=IMDS_TIMEOUT_IN_SECONDS, headers=headers,
- retries=retries, exception_cb=retry_on_url_exc)
+ retries=retries, exception_cb=exc_cb, infinite=infinite)
except Exception as e:
# pylint:disable=no-member
if isinstance(e, UrlError) and e.code == 400:
diff --git a/tests/unittests/test_datasource/test_azure.py b/tests/unittests/test_datasource/test_azure.py
index d9817d84..c4a8e08d 100644
--- a/tests/unittests/test_datasource/test_azure.py
+++ b/tests/unittests/test_datasource/test_azure.py
@@ -448,7 +448,7 @@ class TestGetMetadataFromIMDS(HttprettyTestCase):
"http://169.254.169.254/metadata/instance?api-version="
"2019-06-01", exception_cb=mock.ANY,
headers=mock.ANY, retries=mock.ANY,
- timeout=mock.ANY)
+ timeout=mock.ANY, infinite=False)
@mock.patch(MOCKPATH + 'readurl', autospec=True)
@mock.patch(MOCKPATH + 'EphemeralDHCPv4')
@@ -467,7 +467,7 @@ class TestGetMetadataFromIMDS(HttprettyTestCase):
"http://169.254.169.254/metadata/instance/network?api-version="
"2019-06-01", exception_cb=mock.ANY,
headers=mock.ANY, retries=mock.ANY,
- timeout=mock.ANY)
+ timeout=mock.ANY, infinite=False)
@mock.patch(MOCKPATH + 'readurl', autospec=True)
@mock.patch(MOCKPATH + 'EphemeralDHCPv4')
@@ -486,7 +486,7 @@ class TestGetMetadataFromIMDS(HttprettyTestCase):
"http://169.254.169.254/metadata/instance?api-version="
"2019-06-01", exception_cb=mock.ANY,
headers=mock.ANY, retries=mock.ANY,
- timeout=mock.ANY)
+ timeout=mock.ANY, infinite=False)
@mock.patch(MOCKPATH + 'readurl', autospec=True)
@mock.patch(MOCKPATH + 'EphemeralDHCPv4WithReporting', autospec=True)
@@ -511,7 +511,7 @@ class TestGetMetadataFromIMDS(HttprettyTestCase):
m_readurl.assert_called_with(
self.network_md_url, exception_cb=mock.ANY,
headers={'Metadata': 'true'}, retries=2,
- timeout=dsaz.IMDS_TIMEOUT_IN_SECONDS)
+ timeout=dsaz.IMDS_TIMEOUT_IN_SECONDS, infinite=False)
@mock.patch('cloudinit.url_helper.time.sleep')
@mock.patch(MOCKPATH + 'net.is_up', autospec=True)
@@ -2694,15 +2694,22 @@ class TestPreprovisioningHotAttachNics(CiTestCase):
def nic_attach_ret(nl_sock, nics_found):
nonlocal m_attach_call_count
- if m_attach_call_count == 0:
- m_attach_call_count = m_attach_call_count + 1
+ m_attach_call_count = m_attach_call_count + 1
+ if m_attach_call_count == 1:
return "eth0"
- return "eth1"
+ elif m_attach_call_count == 2:
+ return "eth1"
+ raise RuntimeError("Must have found primary nic by now.")
+
+ # Simulate two NICs by adding the same one twice.
+ md = {
+ "interface": [
+ IMDS_NETWORK_METADATA['interface'][0],
+ IMDS_NETWORK_METADATA['interface'][0]
+ ]
+ }
- def network_metadata_ret(ifname, retries, type):
- # Simulate two NICs by adding the same one twice.
- md = IMDS_NETWORK_METADATA
- md['interface'].append(md['interface'][0])
+ def network_metadata_ret(ifname, retries, type, exc_cb, infinite):
if ifname == "eth0":
return md
raise requests.Timeout('Fake connection timeout')
@@ -2724,6 +2731,72 @@ class TestPreprovisioningHotAttachNics(CiTestCase):
self.assertEqual(1, m_imds.call_count)
self.assertEqual(2, m_link_up.call_count)
+ @mock.patch(MOCKPATH + 'DataSourceAzure.get_imds_data_with_api_fallback')
+ @mock.patch(MOCKPATH + 'EphemeralDHCPv4')
+ def test_check_if_nic_is_primary_retries_on_failures(
+ self, m_dhcpv4, m_imds):
+ """Retry polling for network metadata on all failures except timeout"""
+ dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths)
+ lease = {
+ 'interface': 'eth9', 'fixed-address': '192.168.2.9',
+ 'routers': '192.168.2.1', 'subnet-mask': '255.255.255.0',
+ 'unknown-245': '624c3620'}
+
+ eth0Retries = []
+ eth1Retries = []
+ # Simulate two NICs by adding the same one twice.
+ md = {
+ "interface": [
+ IMDS_NETWORK_METADATA['interface'][0],
+ IMDS_NETWORK_METADATA['interface'][0]
+ ]
+ }
+
+ def network_metadata_ret(ifname, retries, type, exc_cb, infinite):
+ nonlocal eth0Retries, eth1Retries
+
+ # Simulate readurl functionality with retries and
+ # exception callbacks so that the callback logic can be
+ # validated.
+ if ifname == "eth0":
+ cause = requests.HTTPError()
+ for _ in range(0, 15):
+ error = url_helper.UrlError(cause=cause, code=410)
+ eth0Retries.append(exc_cb("No goal state.", error))
+ else:
+ cause = requests.Timeout('Fake connection timeout')
+ for _ in range(0, 10):
+ error = url_helper.UrlError(cause=cause)
+ eth1Retries.append(exc_cb("Connection timeout", error))
+ # Should stop retrying after 10 retries
+ eth1Retries.append(exc_cb("Connection timeout", error))
+ raise cause
+ return md
+
+ m_imds.side_effect = network_metadata_ret
+
+ dhcp_ctx = mock.MagicMock(lease=lease)
+ dhcp_ctx.obtain_lease.return_value = lease
+ m_dhcpv4.return_value = dhcp_ctx
+
+ is_primary, expected_nic_count = dsa._check_if_nic_is_primary("eth0")
+ self.assertEqual(True, is_primary)
+ self.assertEqual(2, expected_nic_count)
+
+ # All Eth0 errors are non-timeout errors. So we should have been
+ # retrying indefinitely until success.
+ for i in eth0Retries:
+ self.assertTrue(i)
+
+ is_primary, expected_nic_count = dsa._check_if_nic_is_primary("eth1")
+ self.assertEqual(False, is_primary)
+
+ # All Eth1 errors are timeout errors. Retry happens for a max of 10 and
+ # then we should have moved on assuming it is not the primary nic.
+ for i in range(0, 10):
+ self.assertTrue(eth1Retries[i])
+ self.assertFalse(eth1Retries[10])
+
@mock.patch('cloudinit.distros.networking.LinuxNetworking.try_set_link_up')
def test_wait_for_link_up_returns_if_already_up(
self, m_is_link_up):
--
2.27.0
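A minimal sketch of the retry policy described above, with the exception callback reduced to a boolean is_timeout flag (a hypothetical simplification of the UrlError/requests.Timeout inspection): non-timeout errors retry indefinitely, timeouts give up after a bounded count.

def make_network_metadata_exc_cb(max_timeouts=10):
    """Build a readurl-style exception callback with per-call timeout state."""
    state = {"timeouts": 0}

    def exc_cb(msg, is_timeout):
        if is_timeout:
            state["timeouts"] += 1
            # Likely not the primary NIC: stop after max_timeouts timeouts.
            return state["timeouts"] <= max_timeouts
        # 410/500/503 etc.: IMDS reachable but data not ready; keep retrying.
        return True

    return exc_cb

cb = make_network_metadata_exc_cb(max_timeouts=2)
print(cb("poll", False))  # True  -> non-timeout error, retry
print(cb("poll", True))   # True  -> first timeout, still retry
print(cb("poll", True))   # True  -> second timeout, still retry
print(cb("poll", True))   # False -> give up, probably not the primary NIC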

View File

@ -1,129 +0,0 @@
From c0df7233fa99d4191b5d4142e209e7465d8db5f6 Mon Sep 17 00:00:00 2001
From: Anh Vo <anhvo@microsoft.com>
Date: Tue, 27 Apr 2021 13:40:59 -0400
Subject: [PATCH 7/7] Azure: adding support for consuming userdata from IMDS
(#884)
RH-Author: Eduardo Otubo <otubo@redhat.com>
RH-MergeRequest: 45: Add support for userdata on Azure from IMDS
RH-Commit: [7/7] 32f840412da1a0f49b9ab5ba1d6f1bcb1bfacc16
RH-Bugzilla: 2023940
RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com>
---
cloudinit/sources/DataSourceAzure.py | 23 ++++++++-
tests/unittests/test_datasource/test_azure.py | 50 +++++++++++++++++++
2 files changed, 72 insertions(+), 1 deletion(-)
diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
index d0be6d84..a66f023d 100755
--- a/cloudinit/sources/DataSourceAzure.py
+++ b/cloudinit/sources/DataSourceAzure.py
@@ -83,7 +83,7 @@ AGENT_SEED_DIR = '/var/lib/waagent'
IMDS_TIMEOUT_IN_SECONDS = 2
IMDS_URL = "http://169.254.169.254/metadata"
IMDS_VER_MIN = "2019-06-01"
-IMDS_VER_WANT = "2020-10-01"
+IMDS_VER_WANT = "2021-01-01"
# This holds SSH key data including if the source was
@@ -539,6 +539,20 @@ class DataSourceAzure(sources.DataSource):
imds_disable_password
)
crawled_data['metadata']['disable_password'] = imds_disable_password # noqa: E501
+
+ # only use userdata from imds if OVF did not provide custom data
+ # userdata provided by IMDS is always base64 encoded
+ if not userdata_raw:
+ imds_userdata = _userdata_from_imds(imds_md)
+ if imds_userdata:
+ LOG.debug("Retrieved userdata from IMDS")
+ try:
+ crawled_data['userdata_raw'] = base64.b64decode(
+ ''.join(imds_userdata.split()))
+ except Exception:
+ report_diagnostic_event(
+ "Bad userdata in IMDS",
+ logger_func=LOG.warning)
found = cdev
report_diagnostic_event(
@@ -1512,6 +1526,13 @@ def _username_from_imds(imds_data):
return None
+def _userdata_from_imds(imds_data):
+ try:
+ return imds_data['compute']['userData']
+ except KeyError:
+ return None
+
+
def _hostname_from_imds(imds_data):
try:
return imds_data['compute']['osProfile']['computerName']
diff --git a/tests/unittests/test_datasource/test_azure.py b/tests/unittests/test_datasource/test_azure.py
index c4a8e08d..f8433690 100644
--- a/tests/unittests/test_datasource/test_azure.py
+++ b/tests/unittests/test_datasource/test_azure.py
@@ -1899,6 +1899,56 @@ scbus-1 on xpt0 bus 0
dsrc.get_data()
self.assertTrue(dsrc.metadata["disable_password"])
+ @mock.patch(MOCKPATH + 'get_metadata_from_imds')
+ def test_userdata_from_imds(self, m_get_metadata_from_imds):
+ sys_cfg = {'datasource': {'Azure': {'apply_network_config': True}}}
+ odata = {'HostName': "myhost", 'UserName': "myuser"}
+ data = {
+ 'ovfcontent': construct_valid_ovf_env(data=odata),
+ 'sys_cfg': sys_cfg
+ }
+ userdata = "userdataImds"
+ imds_data = copy.deepcopy(NETWORK_METADATA)
+ imds_data["compute"]["osProfile"] = dict(
+ adminUsername="username1",
+ computerName="hostname1",
+ disablePasswordAuthentication="true",
+ )
+ imds_data["compute"]["userData"] = b64e(userdata)
+ m_get_metadata_from_imds.return_value = imds_data
+ dsrc = self._get_ds(data)
+ ret = dsrc.get_data()
+ self.assertTrue(ret)
+ self.assertEqual(dsrc.userdata_raw, userdata.encode('utf-8'))
+
+ @mock.patch(MOCKPATH + 'get_metadata_from_imds')
+ def test_userdata_from_imds_with_customdata_from_OVF(
+ self, m_get_metadata_from_imds):
+ userdataOVF = "userdataOVF"
+ odata = {
+ 'HostName': "myhost", 'UserName': "myuser",
+ 'UserData': {'text': b64e(userdataOVF), 'encoding': 'base64'}
+ }
+ sys_cfg = {'datasource': {'Azure': {'apply_network_config': True}}}
+ data = {
+ 'ovfcontent': construct_valid_ovf_env(data=odata),
+ 'sys_cfg': sys_cfg
+ }
+
+ userdataImds = "userdataImds"
+ imds_data = copy.deepcopy(NETWORK_METADATA)
+ imds_data["compute"]["osProfile"] = dict(
+ adminUsername="username1",
+ computerName="hostname1",
+ disablePasswordAuthentication="true",
+ )
+ imds_data["compute"]["userData"] = b64e(userdataImds)
+ m_get_metadata_from_imds.return_value = imds_data
+ dsrc = self._get_ds(data)
+ ret = dsrc.get_data()
+ self.assertTrue(ret)
+ self.assertEqual(dsrc.userdata_raw, userdataOVF.encode('utf-8'))
+
class TestAzureBounce(CiTestCase):
--
2.27.0
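A minimal sketch of the precedence rule described above, assuming a plain dict for the IMDS metadata: OVF custom data wins, otherwise the whitespace-tolerant base64 userData from IMDS is decoded, and malformed data is ignored rather than failing the boot.

import base64

def pick_userdata(ovf_userdata, imds_md):
    if ovf_userdata:
        # Custom data from the OVF always takes precedence.
        return ovf_userdata
    imds_userdata = imds_md.get("compute", {}).get("userData")
    if not imds_userdata:
        return None
    try:
        # IMDS userData is base64; strip any embedded whitespace first.
        return base64.b64decode("".join(imds_userdata.split()))
    except Exception:
        # Bad userdata in IMDS: ignore it rather than fail provisioning.
        return None

imds_md = {"compute": {"userData": base64.b64encode(b"userdataImds").decode()}}
print(pick_userdata(None, imds_md))            # b'userdataImds'
print(pick_userdata(b"userdataOVF", imds_md))  # b'userdataOVF'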

View File

@ -1,177 +0,0 @@
From 01489fb91f64f6137ddf88c39feabe4296f3a156 Mon Sep 17 00:00:00 2001
From: Anh Vo <anhvo@microsoft.com>
Date: Fri, 23 Apr 2021 10:18:05 -0400
Subject: [PATCH 4/7] Azure: eject the provisioning iso before reporting ready
(#861)
RH-Author: Eduardo Otubo <otubo@redhat.com>
RH-MergeRequest: 45: Add support for userdata on Azure from IMDS
RH-Commit: [4/7] ba830546a62ac5bea33b91d133d364a897b9f6c0
RH-Bugzilla: 2023940
RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com>
Due to hyper-v implementations, iso ejection is more efficient if performed
from within the guest. The code will attempt to perform a best-effort ejection.
Failure during ejection will not prevent reporting ready from happening. If iso
ejection is successful, later iso ejection from the platform will be a no-op.
In the event the iso ejection from the guest fails, iso ejection will still happen at
the platform level.
---
cloudinit/sources/DataSourceAzure.py | 22 +++++++++++++++---
cloudinit/sources/helpers/azure.py | 23 ++++++++++++++++---
.../test_datasource/test_azure_helper.py | 13 +++++++++--
3 files changed, 50 insertions(+), 8 deletions(-)
diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
index 020b7006..39e67c4f 100755
--- a/cloudinit/sources/DataSourceAzure.py
+++ b/cloudinit/sources/DataSourceAzure.py
@@ -332,6 +332,7 @@ class DataSourceAzure(sources.DataSource):
dsname = 'Azure'
_negotiated = False
_metadata_imds = sources.UNSET
+ _ci_pkl_version = 1
def __init__(self, sys_cfg, distro, paths):
sources.DataSource.__init__(self, sys_cfg, distro, paths)
@@ -346,8 +347,13 @@ class DataSourceAzure(sources.DataSource):
# Regenerate network config new_instance boot and every boot
self.update_events['network'].add(EventType.BOOT)
self._ephemeral_dhcp_ctx = None
-
self.failed_desired_api_version = False
+ self.iso_dev = None
+
+ def _unpickle(self, ci_pkl_version: int) -> None:
+ super()._unpickle(ci_pkl_version)
+ if "iso_dev" not in self.__dict__:
+ self.iso_dev = None
def __str__(self):
root = sources.DataSource.__str__(self)
@@ -459,6 +465,13 @@ class DataSourceAzure(sources.DataSource):
'%s was not mountable' % cdev, logger_func=LOG.warning)
continue
+ report_diagnostic_event("Found provisioning metadata in %s" % cdev,
+ logger_func=LOG.debug)
+
+ # save the iso device for ejection before reporting ready
+ if cdev.startswith("/dev"):
+ self.iso_dev = cdev
+
perform_reprovision = reprovision or self._should_reprovision(ret)
perform_reprovision_after_nic_attach = (
reprovision_after_nic_attach or
@@ -1226,7 +1239,9 @@ class DataSourceAzure(sources.DataSource):
@return: The success status of sending the ready signal.
"""
try:
- get_metadata_from_fabric(None, lease['unknown-245'])
+ get_metadata_from_fabric(fallback_lease_file=None,
+ dhcp_opts=lease['unknown-245'],
+ iso_dev=self.iso_dev)
return True
except Exception as e:
report_diagnostic_event(
@@ -1332,7 +1347,8 @@ class DataSourceAzure(sources.DataSource):
metadata_func = partial(get_metadata_from_fabric,
fallback_lease_file=self.
dhclient_lease_file,
- pubkey_info=pubkey_info)
+ pubkey_info=pubkey_info,
+ iso_dev=self.iso_dev)
LOG.debug("negotiating with fabric via agent command %s",
self.ds_cfg['agent_command'])
diff --git a/cloudinit/sources/helpers/azure.py b/cloudinit/sources/helpers/azure.py
index 03e7156b..ad476076 100755
--- a/cloudinit/sources/helpers/azure.py
+++ b/cloudinit/sources/helpers/azure.py
@@ -865,7 +865,19 @@ class WALinuxAgentShim:
return endpoint_ip_address
@azure_ds_telemetry_reporter
- def register_with_azure_and_fetch_data(self, pubkey_info=None) -> dict:
+ def eject_iso(self, iso_dev) -> None:
+ try:
+ LOG.debug("Ejecting the provisioning iso")
+ subp.subp(['eject', iso_dev])
+ except Exception as e:
+ report_diagnostic_event(
+ "Failed ejecting the provisioning iso: %s" % e,
+ logger_func=LOG.debug)
+
+ @azure_ds_telemetry_reporter
+ def register_with_azure_and_fetch_data(self,
+ pubkey_info=None,
+ iso_dev=None) -> dict:
"""Gets the VM's GoalState from Azure, uses the GoalState information
to report ready/send the ready signal/provisioning complete signal to
Azure, and then uses pubkey_info to filter and obtain the user's
@@ -891,6 +903,10 @@ class WALinuxAgentShim:
ssh_keys = self._get_user_pubkeys(goal_state, pubkey_info)
health_reporter = GoalStateHealthReporter(
goal_state, self.azure_endpoint_client, self.endpoint)
+
+ if iso_dev is not None:
+ self.eject_iso(iso_dev)
+
health_reporter.send_ready_signal()
return {'public-keys': ssh_keys}
@@ -1046,11 +1062,12 @@ class WALinuxAgentShim:
@azure_ds_telemetry_reporter
def get_metadata_from_fabric(fallback_lease_file=None, dhcp_opts=None,
- pubkey_info=None):
+ pubkey_info=None, iso_dev=None):
shim = WALinuxAgentShim(fallback_lease_file=fallback_lease_file,
dhcp_options=dhcp_opts)
try:
- return shim.register_with_azure_and_fetch_data(pubkey_info=pubkey_info)
+ return shim.register_with_azure_and_fetch_data(
+ pubkey_info=pubkey_info, iso_dev=iso_dev)
finally:
shim.clean_up()
diff --git a/tests/unittests/test_datasource/test_azure_helper.py b/tests/unittests/test_datasource/test_azure_helper.py
index 63482c6c..552c7905 100644
--- a/tests/unittests/test_datasource/test_azure_helper.py
+++ b/tests/unittests/test_datasource/test_azure_helper.py
@@ -1009,6 +1009,14 @@ class TestWALinuxAgentShim(CiTestCase):
self.GoalState.return_value.container_id = self.test_container_id
self.GoalState.return_value.instance_id = self.test_instance_id
+ def test_eject_iso_is_called(self):
+ shim = wa_shim()
+ with mock.patch.object(
+ shim, 'eject_iso', autospec=True
+ ) as m_eject_iso:
+ shim.register_with_azure_and_fetch_data(iso_dev="/dev/sr0")
+ m_eject_iso.assert_called_once_with("/dev/sr0")
+
def test_http_client_does_not_use_certificate_for_report_ready(self):
shim = wa_shim()
shim.register_with_azure_and_fetch_data()
@@ -1283,13 +1291,14 @@ class TestGetMetadataGoalStateXMLAndReportReadyToFabric(CiTestCase):
def test_calls_shim_register_with_azure_and_fetch_data(self):
m_pubkey_info = mock.MagicMock()
- azure_helper.get_metadata_from_fabric(pubkey_info=m_pubkey_info)
+ azure_helper.get_metadata_from_fabric(
+ pubkey_info=m_pubkey_info, iso_dev="/dev/sr0")
self.assertEqual(
1,
self.m_shim.return_value
.register_with_azure_and_fetch_data.call_count)
self.assertEqual(
- mock.call(pubkey_info=m_pubkey_info),
+ mock.call(iso_dev="/dev/sr0", pubkey_info=m_pubkey_info),
self.m_shim.return_value
.register_with_azure_and_fetch_data.call_args)
--
2.27.0
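A minimal sketch of the best-effort ejection described above: a failed eject is logged and swallowed so the ready signal is still sent, leaving platform-level ejection as the fallback. The report_ready wrapper and logging setup here are illustrative, not the WALinuxAgentShim code.

import logging
import subprocess

LOG = logging.getLogger(__name__)

def eject_iso(iso_dev):
    try:
        LOG.debug("Ejecting the provisioning iso %s", iso_dev)
        subprocess.run(["eject", iso_dev], check=True)
    except Exception as e:
        # Best effort only: the platform will eject the iso later if this fails.
        LOG.debug("Failed ejecting the provisioning iso: %s", e)

def report_ready(iso_dev=None, send_ready_signal=lambda: print("ready sent")):
    if iso_dev is not None:
        eject_iso(iso_dev)
    # Reporting ready proceeds regardless of the ejection outcome.
    send_ready_signal()

report_ready(iso_dev="/dev/sr0")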

View File

@ -1,90 +0,0 @@
From f11bbe7f04a48eebcb446e283820d7592f76cf86 Mon Sep 17 00:00:00 2001
From: Johnson Shi <Johnson.Shi@microsoft.com>
Date: Thu, 25 Mar 2021 07:20:10 -0700
Subject: [PATCH 2/7] Azure helper: Ensure Azure http handler sleeps between
retries (#842)
RH-Author: Eduardo Otubo <otubo@redhat.com>
RH-MergeRequest: 45: Add support for userdata on Azure from IMDS
RH-Commit: [2/7] e8f8bb658b629a8444bd2ba19f109952acf33311
RH-Bugzilla: 2023940
RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com>
Ensure that the Azure helper's http handler sleeps a fixed duration
between retry failure attempts. The http handler will sleep a fixed
duration between failed attempts regardless of whether the attempt
failed due to (1) request timing out or (2) instant failure (no
timeout).
Due to certain platform issues, the http request to the Azure endpoint
may instantly fail without reaching the http timeout duration. Without
sleeping a fixed duration in between retry attempts, the http handler
will loop through the max retry attempts quickly. This causes the
communication between cloud-init and the Azure platform to be less
resilient due to the short total duration if there is no sleep in
between retries.
---
cloudinit/sources/helpers/azure.py | 2 ++
tests/unittests/test_datasource/test_azure_helper.py | 11 +++++++++--
2 files changed, 11 insertions(+), 2 deletions(-)
diff --git a/cloudinit/sources/helpers/azure.py b/cloudinit/sources/helpers/azure.py
index d3055d08..03e7156b 100755
--- a/cloudinit/sources/helpers/azure.py
+++ b/cloudinit/sources/helpers/azure.py
@@ -303,6 +303,7 @@ def http_with_retries(url, **kwargs) -> str:
max_readurl_attempts = 240
default_readurl_timeout = 5
+ sleep_duration_between_retries = 5
periodic_logging_attempts = 12
if 'timeout' not in kwargs:
@@ -338,6 +339,7 @@ def http_with_retries(url, **kwargs) -> str:
'attempt %d with exception: %s' %
(url, attempt, e),
logger_func=LOG.debug)
+ time.sleep(sleep_duration_between_retries)
raise exc
diff --git a/tests/unittests/test_datasource/test_azure_helper.py b/tests/unittests/test_datasource/test_azure_helper.py
index b8899807..63482c6c 100644
--- a/tests/unittests/test_datasource/test_azure_helper.py
+++ b/tests/unittests/test_datasource/test_azure_helper.py
@@ -384,6 +384,7 @@ class TestAzureHelperHttpWithRetries(CiTestCase):
max_readurl_attempts = 240
default_readurl_timeout = 5
+ sleep_duration_between_retries = 5
periodic_logging_attempts = 12
def setUp(self):
@@ -394,8 +395,8 @@ class TestAzureHelperHttpWithRetries(CiTestCase):
self.m_readurl = patches.enter_context(
mock.patch.object(
azure_helper.url_helper, 'readurl', mock.MagicMock()))
- patches.enter_context(
- mock.patch.object(azure_helper.time, 'sleep', mock.MagicMock()))
+ self.m_sleep = patches.enter_context(
+ mock.patch.object(azure_helper.time, 'sleep', autospec=True))
def test_http_with_retries(self):
self.m_readurl.return_value = 'TestResp'
@@ -438,6 +439,12 @@ class TestAzureHelperHttpWithRetries(CiTestCase):
self.m_readurl.call_count,
self.periodic_logging_attempts + 1)
+ # Ensure that cloud-init did sleep between each failed request
+ self.assertEqual(
+ self.m_sleep.call_count,
+ self.periodic_logging_attempts)
+ self.m_sleep.assert_called_with(self.sleep_duration_between_retries)
+
def test_http_with_retries_long_delay_logs_periodic_failure_msg(self):
self.m_readurl.side_effect = \
[SentinelException] * self.periodic_logging_attempts + \
--
2.27.0
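For context on the fixed-sleep retry behaviour the patch above describes, here is a minimal sketch of such a loop; the helper name, defaults, and the fetch callable are illustrative only, not the actual cloud-init implementation:

import time

def fetch_with_retries(fetch, max_attempts=240, sleep_s=5):
    # Sketch of a fixed-sleep retry loop: sleep the same duration after
    # every failed attempt, whether it timed out or failed instantly,
    # so the total retry window stays long enough to ride out platform
    # hiccups. `fetch` is a hypothetical zero-argument callable standing
    # in for the real readurl() request.
    last_exc = None
    for attempt in range(1, max_attempts + 1):
        try:
            return fetch()
        except Exception as exc:  # the real handler narrows this to URL errors
            last_exc = exc
            time.sleep(sleep_s)
    raise last_exc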

View File

@ -1,47 +0,0 @@
From c3d41dc6b18df0d74f569b1a0ba43c8118437948 Mon Sep 17 00:00:00 2001
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Date: Fri, 14 Jan 2022 16:40:24 +0100
Subject: [PATCH 3/6] Change netifaces dependency to 0.10.4 (#965)
RH-Author: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RH-MergeRequest: 44: Datasource for VMware
RH-Commit: [3/6] d25d68427ab8b86ee1521c66483e9300e8fcc735
RH-Bugzilla: 2026587
RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com>
RH-Acked-by: Eduardo Otubo <otubo@redhat.com>
commit b9d308b4d61d22bacc05bcae59819755975631f8
Author: Andrew Kutz <101085+akutz@users.noreply.github.com>
Date: Tue Aug 10 15:10:44 2021 -0500
Change netifaces dependency to 0.10.4 (#965)
Change netifaces dependency to 0.10.4
Ubuntu releases <=20.10 currently ship netifaces 0.10.4. By requiring
netifaces 0.10.9, the VMware datasource omitted itself from cloud-init
on Ubuntu <=20.10.
This patch changes the netifaces dependency to 0.10.4. While it is true
there are patches to netifaces post 0.10.4 that are desirable, testing
against the most common network configuration was performed to verify
the VMware datasource will still function with netifaces 0.10.4.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
requirements.txt | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/requirements.txt b/requirements.txt
index 41d01d62..c4adc455 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -40,4 +40,4 @@ jsonschema
# and still participate in instance-data by gathering the network in detail at
# runtime and merge that information into the metadata and repersist that to
# disk.
-netifaces>=0.10.9
+netifaces>=0.10.4
--
2.27.0

View File

@ -0,0 +1,72 @@
From ca6f3397e1ebdb48f5b85c5cf262356480991430 Mon Sep 17 00:00:00 2001
From: PengpengSun <40026211+PengpengSun@users.noreply.github.com>
Date: Tue, 25 Jul 2023 05:21:46 +0800
Subject: [PATCH] DS VMware: modify a few log level (#4284)
RH-Author: Ani Sinha <None>
RH-MergeRequest: 106: DS VMware: modify a few log level (#4284)
RH-Bugzilla: 2223810
RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RH-Acked-by: Camilla Conte <cconte@redhat.com>
RH-Commit: [1/1] 1741098157b12b28be03ecdb041fa1f78d7ac042 (anisinha/rhel-cloud-init)
Multiple IP addresses are a common scenario on modern Linux, so log
such cases at debug level.
(cherry picked from commit 4a6a9d3f6c8fe213c51f6c1336f1dd378bf4bdca)
Signed-off-by: Ani Sinha <anisinha@redhat.com>
---
cloudinit/sources/DataSourceVMware.py | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/cloudinit/sources/DataSourceVMware.py b/cloudinit/sources/DataSourceVMware.py
index 07a80222..bc3b5a5f 100644
--- a/cloudinit/sources/DataSourceVMware.py
+++ b/cloudinit/sources/DataSourceVMware.py
@@ -1,6 +1,6 @@
# Cloud-Init DataSource for VMware
#
-# Copyright (c) 2018-2022 VMware, Inc. All Rights Reserved.
+# Copyright (c) 2018-2023 VMware, Inc. All Rights Reserved.
#
# Authors: Anish Swaminathan <anishs@vmware.com>
# Andrew Kutz <akutz@vmware.com>
@@ -719,7 +719,7 @@ def get_default_ip_addrs():
af_inet4 = addr4_fams.get(netifaces.AF_INET)
if af_inet4:
if len(af_inet4) > 1:
- LOG.warning(
+ LOG.debug(
"device %s has more than one ipv4 address: %s",
dev4,
af_inet4,
@@ -737,7 +737,7 @@ def get_default_ip_addrs():
af_inet6 = addr6_fams.get(netifaces.AF_INET6)
if af_inet6:
if len(af_inet6) > 1:
- LOG.warning(
+ LOG.debug(
"device %s has more than one ipv6 address: %s",
dev6,
af_inet6,
@@ -752,7 +752,7 @@ def get_default_ip_addrs():
af_inet6 = addr4_fams.get(netifaces.AF_INET6)
if af_inet6:
if len(af_inet6) > 1:
- LOG.warning(
+ LOG.debug(
"device %s has more than one ipv6 address: %s",
dev4,
af_inet6,
@@ -767,7 +767,7 @@ def get_default_ip_addrs():
af_inet4 = addr6_fams.get(netifaces.AF_INET)
if af_inet4:
if len(af_inet4) > 1:
- LOG.warning(
+ LOG.debug(
"device %s has more than one ipv4 address: %s",
dev6,
af_inet4,
--
2.41.0
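As a rough illustration of the per-device lookup the data source performs (only netifaces.ifaddresses() and the address-family constants come from the real library; the helper name and the interface name in the usage comment are hypothetical):

import netifaces

def addrs_for(dev, family=netifaces.AF_INET):
    # Return all addresses of the given family on one device; when more
    # than one is present, the patch above now logs it at debug level
    # instead of warning, since multiple addresses are a normal setup.
    fams = netifaces.ifaddresses(dev)
    return [entry["addr"] for entry in fams.get(family, [])]

# Example (interface name is illustrative):
# print(addrs_for("eth0"))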

File diff suppressed because it is too large

View File

@ -1,180 +0,0 @@
From b226448134b5182ba685702e7b7a486db772d956 Mon Sep 17 00:00:00 2001
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Date: Fri, 4 Mar 2022 11:21:16 +0100
Subject: [PATCH 1/2] - Detect a Python version change and clear the cache
(#857)
RH-Author: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RH-MergeRequest: 54: - Detect a Python version change and clear the cache (#857)
RH-Commit: [1/2] c562cd802eabae9dc14079de0b26d471d2229ca8
RH-Bugzilla: 1935826
RH-Acked-by: Eduardo Otubo <otubo@redhat.com>
RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com>
RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com>
commit 78e89b03ecb29e7df3181b1219a0b5f44b9d7532
Author: Robert Schweikert <rjschwei@suse.com>
Date: Thu Jul 1 12:35:40 2021 -0400
- Detect a Python version change and clear the cache (#857)
summary: Clear cache when a Python version change is detected
When a distribution gets updated, it is possible that the Python version
changes. Python makes no guarantee that pickle is consistent across
versions; as such, we need to purge the cache and start over.
Co-authored-by: James Falcon <therealfalcon@gmail.com>
Conflicts:
tests/integration_tests/util.py: test is not present downstream
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
cloudinit/cmd/main.py | 30 ++++++++++
cloudinit/cmd/tests/test_main.py | 2 +
.../assets/test_version_change.pkl | Bin 0 -> 21 bytes
.../modules/test_ssh_auth_key_fingerprints.py | 2 +-
.../modules/test_version_change.py | 56 ++++++++++++++++++
5 files changed, 89 insertions(+), 1 deletion(-)
create mode 100644 tests/integration_tests/assets/test_version_change.pkl
create mode 100644 tests/integration_tests/modules/test_version_change.py
diff --git a/cloudinit/cmd/main.py b/cloudinit/cmd/main.py
index baf1381f..21213a4a 100644
--- a/cloudinit/cmd/main.py
+++ b/cloudinit/cmd/main.py
@@ -210,6 +210,35 @@ def attempt_cmdline_url(path, network=True, cmdline=None):
(cmdline_name, url, path))
+def purge_cache_on_python_version_change(init):
+ """Purge the cache if python version changed on us.
+
+ There could be changes not represented in our cache (obj.pkl) after we
+ upgrade to a new version of python, so at that point clear the cache
+ """
+ current_python_version = '%d.%d' % (
+ sys.version_info.major, sys.version_info.minor
+ )
+ python_version_path = os.path.join(
+ init.paths.get_cpath('data'), 'python-version'
+ )
+ if os.path.exists(python_version_path):
+ cached_python_version = open(python_version_path).read()
+ # The Python version has changed out from under us, anything that was
+ # pickled previously is likely useless due to API changes.
+ if cached_python_version != current_python_version:
+ LOG.debug('Python version change detected. Purging cache')
+ init.purge_cache(True)
+ util.write_file(python_version_path, current_python_version)
+ else:
+ if os.path.exists(init.paths.get_ipath_cur('obj_pkl')):
+ LOG.info(
+ 'Writing python-version file. '
+ 'Cache compatibility status is currently unknown.'
+ )
+ util.write_file(python_version_path, current_python_version)
+
+
def main_init(name, args):
deps = [sources.DEP_FILESYSTEM, sources.DEP_NETWORK]
if args.local:
@@ -276,6 +305,7 @@ def main_init(name, args):
util.logexc(LOG, "Failed to initialize, likely bad things to come!")
# Stage 4
path_helper = init.paths
+ purge_cache_on_python_version_change(init)
mode = sources.DSMODE_LOCAL if args.local else sources.DSMODE_NETWORK
if mode == sources.DSMODE_NETWORK:
diff --git a/cloudinit/cmd/tests/test_main.py b/cloudinit/cmd/tests/test_main.py
index 78b27441..1f5975b0 100644
--- a/cloudinit/cmd/tests/test_main.py
+++ b/cloudinit/cmd/tests/test_main.py
@@ -17,6 +17,8 @@ myargs = namedtuple('MyArgs', 'debug files force local reporter subcommand')
class TestMain(FilesystemMockingTestCase):
+ with_logs = True
+ allowed_subp = False
def setUp(self):
super(TestMain, self).setUp()
diff --git a/tests/integration_tests/modules/test_ssh_auth_key_fingerprints.py b/tests/integration_tests/modules/test_ssh_auth_key_fingerprints.py
index b9b0d85e..e1946cb1 100644
--- a/tests/integration_tests/modules/test_ssh_auth_key_fingerprints.py
+++ b/tests/integration_tests/modules/test_ssh_auth_key_fingerprints.py
@@ -18,7 +18,7 @@ USER_DATA_SSH_AUTHKEY_DISABLE = """\
no_ssh_fingerprints: true
"""
-USER_DATA_SSH_AUTHKEY_ENABLE="""\
+USER_DATA_SSH_AUTHKEY_ENABLE = """\
#cloud-config
ssh_genkeytypes:
- ecdsa
diff --git a/tests/integration_tests/modules/test_version_change.py b/tests/integration_tests/modules/test_version_change.py
new file mode 100644
index 00000000..4e9ab63f
--- /dev/null
+++ b/tests/integration_tests/modules/test_version_change.py
@@ -0,0 +1,56 @@
+from pathlib import Path
+
+from tests.integration_tests.instances import IntegrationInstance
+from tests.integration_tests.util import ASSETS_DIR
+
+
+PICKLE_PATH = Path('/var/lib/cloud/instance/obj.pkl')
+TEST_PICKLE = ASSETS_DIR / 'test_version_change.pkl'
+
+
+def _assert_no_pickle_problems(log):
+ assert 'Failed loading pickled blob' not in log
+ assert 'Traceback' not in log
+ assert 'WARN' not in log
+
+
+def test_reboot_without_version_change(client: IntegrationInstance):
+ log = client.read_from_file('/var/log/cloud-init.log')
+ assert 'Python version change detected' not in log
+ assert 'Cache compatibility status is currently unknown.' not in log
+ _assert_no_pickle_problems(log)
+
+ client.restart()
+ log = client.read_from_file('/var/log/cloud-init.log')
+ assert 'Python version change detected' not in log
+ assert 'Could not determine Python version used to write cache' not in log
+ _assert_no_pickle_problems(log)
+
+ # Now ensure that loading a bad pickle gives us problems
+ client.push_file(TEST_PICKLE, PICKLE_PATH)
+ client.restart()
+ log = client.read_from_file('/var/log/cloud-init.log')
+ assert 'Failed loading pickled blob from {}'.format(PICKLE_PATH) in log
+
+
+def test_cache_purged_on_version_change(client: IntegrationInstance):
+ # Start by pushing the invalid pickle so we'll hit an error if the
+ # cache didn't actually get purged
+ client.push_file(TEST_PICKLE, PICKLE_PATH)
+ client.execute("echo '1.0' > /var/lib/cloud/data/python-version")
+ client.restart()
+ log = client.read_from_file('/var/log/cloud-init.log')
+ assert 'Python version change detected. Purging cache' in log
+ _assert_no_pickle_problems(log)
+
+
+def test_log_message_on_missing_version_file(client: IntegrationInstance):
+ # Start by pushing a pickle so we can see the log message
+ client.push_file(TEST_PICKLE, PICKLE_PATH)
+ client.execute("rm /var/lib/cloud/data/python-version")
+ client.restart()
+ log = client.read_from_file('/var/log/cloud-init.log')
+ assert (
+ 'Writing python-version file. '
+ 'Cache compatibility status is currently unknown.'
+ ) in log
--
2.31.1
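A minimal sketch of the version check this patch introduces, assuming the marker path used by the integration tests above; the helper name is illustrative:

import os
import sys

def python_version_changed(marker="/var/lib/cloud/data/python-version"):
    # Compare the cached "major.minor" string with the running
    # interpreter; a mismatch is what triggers the cache purge.
    current = "%d.%d" % (sys.version_info.major, sys.version_info.minor)
    if not os.path.exists(marker):
        return False  # unknown; the real code writes the marker in this case
    with open(marker) as fh:
        return fh.read().strip() != current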

View File

@ -0,0 +1,120 @@
From 285d8d8005db06ea86afc042bc2eec07bf3c6fab Mon Sep 17 00:00:00 2001
From: James Falcon <james.falcon@canonical.com>
Date: Thu, 23 Mar 2023 10:21:56 -0500
Subject: [PATCH 1/2] Don't change permissions of netrules target (#2076)
RH-Author: Ani Sinha <None>
RH-MergeRequest: 98: Don't change permissions of netrules target (#2076)
RH-Bugzilla: 2182947
RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com>
RH-Commit: [1/1] 37fa74519da67b383de87b41108561b09d7b9210 (anisinha/rhel-cloud-init)
Set permissions if file doesn't exist. Leave them if it does.
LP: #2011783
Co-authored-by: Chad Smith <chad.smith@canonical.com>
(cherry picked from commit 56c88cafd1b3606e814069a79f4ec265fc427c87)
Signed-off-by: Ani Sinha <anisinha@redhat.com>
---
cloudinit/net/eni.py | 4 +++-
cloudinit/net/sysconfig.py | 7 ++++++-
tests/unittests/distros/test_netconfig.py | 20 ++++++++++++++++++--
3 files changed, 27 insertions(+), 4 deletions(-)
diff --git a/cloudinit/net/eni.py b/cloudinit/net/eni.py
index 53bd35ca..1de3bec2 100644
--- a/cloudinit/net/eni.py
+++ b/cloudinit/net/eni.py
@@ -576,7 +576,9 @@ class Renderer(renderer.Renderer):
netrules = subp.target_path(target, self.netrules_path)
util.ensure_dir(os.path.dirname(netrules))
util.write_file(
- netrules, self._render_persistent_net(network_state)
+ netrules,
+ content=self._render_persistent_net(network_state),
+ preserve_mode=True,
)
diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
index db084e07..da6d11b3 100644
--- a/cloudinit/net/sysconfig.py
+++ b/cloudinit/net/sysconfig.py
@@ -1033,7 +1033,12 @@ class Renderer(renderer.Renderer):
if self.netrules_path:
netrules_content = self._render_persistent_net(network_state)
netrules_path = subp.target_path(target, self.netrules_path)
- util.write_file(netrules_path, netrules_content, file_mode)
+ util.write_file(
+ netrules_path,
+ content=netrules_content,
+ mode=file_mode,
+ preserve_mode=True,
+ )
if available_nm(target=target):
enable_ifcfg_rh(subp.target_path(target, path=NM_CFG_FILE))
diff --git a/tests/unittests/distros/test_netconfig.py b/tests/unittests/distros/test_netconfig.py
index e9fb0591..b1c89ce3 100644
--- a/tests/unittests/distros/test_netconfig.py
+++ b/tests/unittests/distros/test_netconfig.py
@@ -458,8 +458,16 @@ class TestNetCfgDistroUbuntuEni(TestNetCfgDistroBase):
def eni_path(self):
return "/etc/network/interfaces.d/50-cloud-init.cfg"
+ def rules_path(self):
+ return "/etc/udev/rules.d/70-persistent-net.rules"
+
def _apply_and_verify_eni(
- self, apply_fn, config, expected_cfgs=None, bringup=False
+ self,
+ apply_fn,
+ config,
+ expected_cfgs=None,
+ bringup=False,
+ previous_files=(),
):
if not expected_cfgs:
raise ValueError("expected_cfg must not be None")
@@ -467,7 +475,11 @@ class TestNetCfgDistroUbuntuEni(TestNetCfgDistroBase):
tmpd = None
with mock.patch("cloudinit.net.eni.available") as m_avail:
m_avail.return_value = True
+ path_modes = {}
with self.reRooted(tmpd) as tmpd:
+ for previous_path, content, mode in previous_files:
+ util.write_file(previous_path, content, mode=mode)
+ path_modes[previous_path] = mode
apply_fn(config, bringup)
results = dir2dict(tmpd)
@@ -478,7 +490,9 @@ class TestNetCfgDistroUbuntuEni(TestNetCfgDistroBase):
print(results[cfgpath])
print("----------")
self.assertEqual(expected, results[cfgpath])
- self.assertEqual(0o644, get_mode(cfgpath, tmpd))
+ self.assertEqual(
+ path_modes.get(cfgpath, 0o644), get_mode(cfgpath, tmpd)
+ )
def test_apply_network_config_and_bringup_filters_priority_eni_ub(self):
"""Network activator search priority can be overridden from config."""
@@ -527,11 +541,13 @@ class TestNetCfgDistroUbuntuEni(TestNetCfgDistroBase):
def test_apply_network_config_eni_ub(self):
expected_cfgs = {
self.eni_path(): V1_NET_CFG_OUTPUT,
+ self.rules_path(): "",
}
self._apply_and_verify_eni(
self.distro.apply_network_config,
V1_NET_CFG,
expected_cfgs=expected_cfgs.copy(),
+ previous_files=((self.rules_path(), "something", 0o660),),
)
def test_apply_network_config_ipv6_ub(self):
--
2.37.3
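The behaviour being relied on here (write_file with preserve_mode=True) amounts to: keep the mode of an existing file, apply the default only when creating a new one. A standalone sketch, with the helper name and default mode chosen for illustration:

import os

def write_preserving_mode(path, content, default_mode=0o644):
    # Reuse the existing file's permission bits if the file is already
    # there; otherwise fall back to the default mode for new files.
    mode = default_mode
    if os.path.exists(path):
        mode = os.stat(path).st_mode & 0o777
    with open(path, "w") as fh:
        fh.write(content)
    os.chmod(path, mode)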

View File

@ -0,0 +1,93 @@
From e5d0944117fba5079de5452307f1bea89147f747 Mon Sep 17 00:00:00 2001
From: Robert Schweikert <rjschwei@suse.com>
Date: Thu, 23 Feb 2023 16:43:56 -0500
Subject: [PATCH 04/11] Enable SUSE based distros for ca handling (#2036)
CA handling in the configuration module was previously not supported
for SUSE based distros. Enable this functionality by creating the
necessary configuration settings.
Secondly update the test such that it does not bleed through to the
test system.
(cherry picked from commit 46fcd03187d70f405c748f7a6cfdb02ecb8c6ee7)
Signed-off-by: Ani Sinha <anisinha@redhat.com>
---
cloudinit/config/cc_ca_certs.py | 31 +++++++++++++++++++++-
tests/unittests/config/test_cc_ca_certs.py | 2 ++
2 files changed, 32 insertions(+), 1 deletion(-)
diff --git a/cloudinit/config/cc_ca_certs.py b/cloudinit/config/cc_ca_certs.py
index 169b0e18..51b8577c 100644
--- a/cloudinit/config/cc_ca_certs.py
+++ b/cloudinit/config/cc_ca_certs.py
@@ -32,8 +32,25 @@ DISTRO_OVERRIDES = {
"ca_cert_config": None,
"ca_cert_update_cmd": ["update-ca-trust"],
},
+ "opensuse": {
+ "ca_cert_path": "/etc/pki/trust/",
+ "ca_cert_local_path": "/usr/share/pki/trust/",
+ "ca_cert_filename": "anchors/cloud-init-ca-cert-{cert_index}.crt",
+ "ca_cert_config": None,
+ "ca_cert_update_cmd": ["update-ca-certificates"],
+ },
}
+for distro in (
+ "opensuse-microos",
+ "opensuse-tumbleweed",
+ "opensuse-leap",
+ "sle_hpc",
+ "sle-micro",
+ "sles",
+):
+ DISTRO_OVERRIDES[distro] = DISTRO_OVERRIDES["opensuse"]
+
MODULE_DESCRIPTION = """\
This module adds CA certificates to the system's CA store and updates any
related files using the appropriate OS-specific utility. The default CA
@@ -48,7 +65,19 @@ configuration option ``remove_defaults``.
Alpine Linux requires the ca-certificates package to be installed in
order to provide the ``update-ca-certificates`` command.
"""
-distros = ["alpine", "debian", "rhel", "ubuntu"]
+distros = [
+ "alpine",
+ "debian",
+ "rhel",
+ "opensuse",
+ "opensuse-microos",
+ "opensuse-tumbleweed",
+ "opensuse-leap",
+ "sle_hpc",
+ "sle-micro",
+ "sles",
+ "ubuntu",
+]
meta: MetaSchema = {
"id": "cc_ca_certs",
diff --git a/tests/unittests/config/test_cc_ca_certs.py b/tests/unittests/config/test_cc_ca_certs.py
index 19e5d422..6db17485 100644
--- a/tests/unittests/config/test_cc_ca_certs.py
+++ b/tests/unittests/config/test_cc_ca_certs.py
@@ -311,6 +311,7 @@ class TestRemoveDefaultCaCerts(TestCase):
"cloud_dir": tmpdir,
}
)
+ self.add_patch("cloudinit.config.cc_ca_certs.os.stat", "m_stat")
def test_commands(self):
ca_certs_content = "# line1\nline2\nline3\n"
@@ -318,6 +319,7 @@ class TestRemoveDefaultCaCerts(TestCase):
"# line1\n# Modified by cloud-init to deselect certs due to"
" user-data\n!line2\n!line3\n"
)
+ self.m_stat.return_value.st_size = 1
for distro_name in cc_ca_certs.distros:
conf = cc_ca_certs._distro_ca_certs_configs(distro_name)
--
2.39.3

View File

@ -1,474 +0,0 @@
From 7bd016008429f0a18393a070d88e669f3ed89caa Mon Sep 17 00:00:00 2001
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Date: Fri, 11 Feb 2022 14:37:46 +0100
Subject: [PATCH] Fix IPv6 netmask format for sysconfig (#1215)
RH-Author: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RH-MergeRequest: 48: Fix IPv6 netmask format for sysconfig (#1215)
RH-Commit: [1/1] 4c940bbcf85dba1fce9f4acb9fc7820c0d7777f6
RH-Bugzilla: 2046540
RH-Acked-by: Eduardo Otubo <otubo@redhat.com>
RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com>
commit b97a30f0a05c1dea918c46ca9c05c869d15fe2d5
Author: Harald <hjensas@redhat.com>
Date: Tue Feb 8 15:49:00 2022 +0100
Fix IPv6 netmask format for sysconfig (#1215)
This change converts the IPv6 netmask from the network_data.json[1]
format to the CIDR style, <IPv6_addr>/<prefix>.
Using an IPv6 address like ffff:ffff:ffff:ffff:: does not work with
NetworkManager, nor networkscripts.
NetworkManager will ignore the route, logging:
ifcfg-rh: ignoring invalid route at \
"::/:: via fd00:fd00:fd00:2::fffe dev $DEV" \
(/etc/sysconfig/network-scripts/route6-$DEV:3): \
Argument for "::/::" is not ADDR/PREFIX format
Similarly if using networkscripts, ip route fail with error:
Error: inet6 prefix is expected rather than \
"fd00:fd00:fd00::/ffff:ffff:ffff:ffff::".
Also a bit of refactoring ...
cloudinit.net.sysconfig.Route.to_string:
* Move a couple of lines around to reduce repeated code.
* if "ADDRESS" not in key -> continute, so that the
code block following it can be de-indented.
cloudinit.net.network_state:
* Refactors the ipv4_mask_to_net_prefix, ipv6_mask_to_net_prefix
removes mask_to_net_prefix methods. Utilize ipaddress library to
do some of the heavy lifting.
LP: #1959148
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
cloudinit/net/__init__.py | 7 +-
cloudinit/net/network_state.py | 103 +++++++-----------
cloudinit/net/sysconfig.py | 91 ++++++++++------
cloudinit/sources/DataSourceOpenNebula.py | 2 +-
.../sources/helpers/vmware/imc/config_nic.py | 4 +-
tests/unittests/test_net.py | 78 ++++++++++++-
6 files changed, 176 insertions(+), 109 deletions(-)
diff --git a/cloudinit/net/__init__.py b/cloudinit/net/__init__.py
index 003efa2a..12bf64de 100644
--- a/cloudinit/net/__init__.py
+++ b/cloudinit/net/__init__.py
@@ -14,7 +14,7 @@ import re
from cloudinit import subp
from cloudinit import util
-from cloudinit.net.network_state import mask_to_net_prefix
+from cloudinit.net.network_state import ipv4_mask_to_net_prefix
from cloudinit.url_helper import UrlError, readurl
LOG = logging.getLogger(__name__)
@@ -1048,10 +1048,11 @@ class EphemeralIPv4Network(object):
'Cannot init network on {0} with {1}/{2} and bcast {3}'.format(
interface, ip, prefix_or_mask, broadcast))
try:
- self.prefix = mask_to_net_prefix(prefix_or_mask)
+ self.prefix = ipv4_mask_to_net_prefix(prefix_or_mask)
except ValueError as e:
raise ValueError(
- 'Cannot setup network: {0}'.format(e)
+ "Cannot setup network, invalid prefix or "
+ "netmask: {0}".format(e)
) from e
self.connectivity_url = connectivity_url
diff --git a/cloudinit/net/network_state.py b/cloudinit/net/network_state.py
index e8bf9e39..2768ef94 100644
--- a/cloudinit/net/network_state.py
+++ b/cloudinit/net/network_state.py
@@ -6,6 +6,7 @@
import copy
import functools
+import ipaddress
import logging
import socket
import struct
@@ -872,12 +873,18 @@ def _normalize_net_keys(network, address_keys=()):
try:
prefix = int(maybe_prefix)
except ValueError:
- # this supports input of <address>/255.255.255.0
- prefix = mask_to_net_prefix(maybe_prefix)
- elif netmask:
- prefix = mask_to_net_prefix(netmask)
- elif 'prefix' in net:
- prefix = int(net['prefix'])
+ if ipv6:
+ # this supports input of ffff:ffff:ffff::
+ prefix = ipv6_mask_to_net_prefix(maybe_prefix)
+ else:
+ # this supports input of 255.255.255.0
+ prefix = ipv4_mask_to_net_prefix(maybe_prefix)
+ elif netmask and not ipv6:
+ prefix = ipv4_mask_to_net_prefix(netmask)
+ elif netmask and ipv6:
+ prefix = ipv6_mask_to_net_prefix(netmask)
+ elif "prefix" in net:
+ prefix = int(net["prefix"])
else:
prefix = 64 if ipv6 else 24
@@ -972,72 +979,42 @@ def ipv4_mask_to_net_prefix(mask):
str(24) => 24
"24" => 24
"""
- if isinstance(mask, int):
- return mask
- if isinstance(mask, str):
- try:
- return int(mask)
- except ValueError:
- pass
- else:
- raise TypeError("mask '%s' is not a string or int")
-
- if '.' not in mask:
- raise ValueError("netmask '%s' does not contain a '.'" % mask)
-
- toks = mask.split(".")
- if len(toks) != 4:
- raise ValueError("netmask '%s' had only %d parts" % (mask, len(toks)))
-
- return sum([bin(int(x)).count('1') for x in toks])
+ return ipaddress.ip_network(f"0.0.0.0/{mask}").prefixlen
def ipv6_mask_to_net_prefix(mask):
"""Convert an ipv6 netmask (very uncommon) or prefix (64) to prefix.
- If 'mask' is an integer or string representation of one then
- int(mask) will be returned.
+ If the input is already an integer or a string representation of
+ an integer, then int(mask) will be returned.
+ "ffff:ffff:ffff::" => 48
+ "48" => 48
"""
-
- if isinstance(mask, int):
- return mask
- if isinstance(mask, str):
- try:
- return int(mask)
- except ValueError:
- pass
- else:
- raise TypeError("mask '%s' is not a string or int")
-
- if ':' not in mask:
- raise ValueError("mask '%s' does not have a ':'")
-
- bitCount = [0, 0x8000, 0xc000, 0xe000, 0xf000, 0xf800, 0xfc00, 0xfe00,
- 0xff00, 0xff80, 0xffc0, 0xffe0, 0xfff0, 0xfff8, 0xfffc,
- 0xfffe, 0xffff]
- prefix = 0
- for word in mask.split(':'):
- if not word or int(word, 16) == 0:
- break
- prefix += bitCount.index(int(word, 16))
-
- return prefix
-
-
-def mask_to_net_prefix(mask):
- """Return the network prefix for the netmask provided.
-
- Supports ipv4 or ipv6 netmasks."""
try:
- # if 'mask' is a prefix that is an integer.
- # then just return it.
- return int(mask)
+ # In the case the mask is already a prefix
+ prefixlen = ipaddress.ip_network(f"::/{mask}").prefixlen
+ return prefixlen
except ValueError:
+ # ValueError means mask is an IPv6 address representation and need
+ # conversion.
pass
- if is_ipv6_addr(mask):
- return ipv6_mask_to_net_prefix(mask)
- else:
- return ipv4_mask_to_net_prefix(mask)
+
+ netmask = ipaddress.ip_address(mask)
+ mask_int = int(netmask)
+ # If the mask is all zeroes, just return it
+ if mask_int == 0:
+ return mask_int
+
+ trailing_zeroes = min(
+ ipaddress.IPV6LENGTH, (~mask_int & (mask_int - 1)).bit_length()
+ )
+ leading_ones = mask_int >> trailing_zeroes
+ prefixlen = ipaddress.IPV6LENGTH - trailing_zeroes
+ all_ones = (1 << prefixlen) - 1
+ if leading_ones != all_ones:
+ raise ValueError("Invalid network mask '%s'" % mask)
+
+ return prefixlen
def mask_and_ipv4_to_bcast_addr(mask, ip):
diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
index d5440998..7ecbe1c3 100644
--- a/cloudinit/net/sysconfig.py
+++ b/cloudinit/net/sysconfig.py
@@ -12,6 +12,7 @@ from cloudinit import util
from cloudinit import subp
from cloudinit.distros.parsers import networkmanager_conf
from cloudinit.distros.parsers import resolv_conf
+from cloudinit.net import network_state
from . import renderer
from .network_state import (
@@ -171,43 +172,61 @@ class Route(ConfigMap):
# (because Route can contain a mix of IPv4 and IPv6)
reindex = -1
for key in sorted(self._conf.keys()):
- if 'ADDRESS' in key:
- index = key.replace('ADDRESS', '')
- address_value = str(self._conf[key])
- # only accept combinations:
- # if proto ipv6 only display ipv6 routes
- # if proto ipv4 only display ipv4 routes
- # do not add ipv6 routes if proto is ipv4
- # do not add ipv4 routes if proto is ipv6
- # (this array will contain a mix of ipv4 and ipv6)
- if proto == "ipv4" and not self.is_ipv6_route(address_value):
- netmask_value = str(self._conf['NETMASK' + index])
- gateway_value = str(self._conf['GATEWAY' + index])
- # increase IPv4 index
- reindex = reindex + 1
- buf.write("%s=%s\n" % ('ADDRESS' + str(reindex),
- _quote_value(address_value)))
- buf.write("%s=%s\n" % ('GATEWAY' + str(reindex),
- _quote_value(gateway_value)))
- buf.write("%s=%s\n" % ('NETMASK' + str(reindex),
- _quote_value(netmask_value)))
- metric_key = 'METRIC' + index
- if metric_key in self._conf:
- metric_value = str(self._conf['METRIC' + index])
- buf.write("%s=%s\n" % ('METRIC' + str(reindex),
- _quote_value(metric_value)))
- elif proto == "ipv6" and self.is_ipv6_route(address_value):
- netmask_value = str(self._conf['NETMASK' + index])
- gateway_value = str(self._conf['GATEWAY' + index])
- metric_value = (
- 'metric ' + str(self._conf['METRIC' + index])
- if 'METRIC' + index in self._conf else '')
+ if "ADDRESS" not in key:
+ continue
+
+ index = key.replace("ADDRESS", "")
+ address_value = str(self._conf[key])
+ netmask_value = str(self._conf["NETMASK" + index])
+ gateway_value = str(self._conf["GATEWAY" + index])
+
+ # only accept combinations:
+ # if proto ipv6 only display ipv6 routes
+ # if proto ipv4 only display ipv4 routes
+ # do not add ipv6 routes if proto is ipv4
+ # do not add ipv4 routes if proto is ipv6
+ # (this array will contain a mix of ipv4 and ipv6)
+ if proto == "ipv4" and not self.is_ipv6_route(address_value):
+ # increase IPv4 index
+ reindex = reindex + 1
+ buf.write(
+ "%s=%s\n"
+ % ("ADDRESS" + str(reindex), _quote_value(address_value))
+ )
+ buf.write(
+ "%s=%s\n"
+ % ("GATEWAY" + str(reindex), _quote_value(gateway_value))
+ )
+ buf.write(
+ "%s=%s\n"
+ % ("NETMASK" + str(reindex), _quote_value(netmask_value))
+ )
+ metric_key = "METRIC" + index
+ if metric_key in self._conf:
+ metric_value = str(self._conf["METRIC" + index])
buf.write(
- "%s/%s via %s %s dev %s\n" % (address_value,
- netmask_value,
- gateway_value,
- metric_value,
- self._route_name))
+ "%s=%s\n"
+ % ("METRIC" + str(reindex), _quote_value(metric_value))
+ )
+ elif proto == "ipv6" and self.is_ipv6_route(address_value):
+ prefix_value = network_state.ipv6_mask_to_net_prefix(
+ netmask_value
+ )
+ metric_value = (
+ "metric " + str(self._conf["METRIC" + index])
+ if "METRIC" + index in self._conf
+ else ""
+ )
+ buf.write(
+ "%s/%s via %s %s dev %s\n"
+ % (
+ address_value,
+ prefix_value,
+ gateway_value,
+ metric_value,
+ self._route_name,
+ )
+ )
return buf.getvalue()
diff --git a/cloudinit/sources/DataSourceOpenNebula.py b/cloudinit/sources/DataSourceOpenNebula.py
index 730ec586..e7980ab1 100644
--- a/cloudinit/sources/DataSourceOpenNebula.py
+++ b/cloudinit/sources/DataSourceOpenNebula.py
@@ -233,7 +233,7 @@ class OpenNebulaNetwork(object):
# Set IPv4 address
devconf['addresses'] = []
mask = self.get_mask(c_dev)
- prefix = str(net.mask_to_net_prefix(mask))
+ prefix = str(net.ipv4_mask_to_net_prefix(mask))
devconf['addresses'].append(
self.get_ip(c_dev, mac) + '/' + prefix)
diff --git a/cloudinit/sources/helpers/vmware/imc/config_nic.py b/cloudinit/sources/helpers/vmware/imc/config_nic.py
index 9cd2c0c0..3a45c67e 100644
--- a/cloudinit/sources/helpers/vmware/imc/config_nic.py
+++ b/cloudinit/sources/helpers/vmware/imc/config_nic.py
@@ -9,7 +9,7 @@ import logging
import os
import re
-from cloudinit.net.network_state import mask_to_net_prefix
+from cloudinit.net.network_state import ipv4_mask_to_net_prefix
from cloudinit import subp
from cloudinit import util
@@ -180,7 +180,7 @@ class NicConfigurator(object):
"""
route_list = []
- cidr = mask_to_net_prefix(netmask)
+ cidr = ipv4_mask_to_net_prefix(netmask)
for gateway in gateways:
destination = "%s/%d" % (gen_subnet(gateway, netmask), cidr)
diff --git a/tests/unittests/test_net.py b/tests/unittests/test_net.py
index 14d3462f..a7f6a1f7 100644
--- a/tests/unittests/test_net.py
+++ b/tests/unittests/test_net.py
@@ -2025,10 +2025,10 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
routes:
- gateway: 2001:67c:1562:1
network: 2001:67c:1
- netmask: ffff:ffff:0
+ netmask: "ffff:ffff::"
- gateway: 3001:67c:1562:1
network: 3001:67c:1
- netmask: ffff:ffff:0
+ netmask: "ffff:ffff::"
metric: 10000
"""),
'expected_netplan': textwrap.dedent("""
@@ -2295,8 +2295,8 @@ iface bond0 inet6 static
'route6-bond0': textwrap.dedent("""\
# Created by cloud-init on instance boot automatically, do not edit.
#
- 2001:67c:1/ffff:ffff:0 via 2001:67c:1562:1 dev bond0
- 3001:67c:1/ffff:ffff:0 via 3001:67c:1562:1 metric 10000 dev bond0
+ 2001:67c:1/32 via 2001:67c:1562:1 dev bond0
+ 3001:67c:1/32 via 3001:67c:1562:1 metric 10000 dev bond0
"""),
'route-bond0': textwrap.dedent("""\
ADDRESS0=10.1.3.0
@@ -3088,6 +3088,76 @@ USERCTL=no
renderer.render_network_state(ns, target=render_dir)
self.assertEqual([], os.listdir(render_dir))
+ def test_invalid_network_mask_ipv6(self):
+ net_json = {
+ "services": [{"type": "dns", "address": "172.19.0.12"}],
+ "networks": [
+ {
+ "network_id": "public-ipv6",
+ "type": "ipv6",
+ "netmask": "",
+ "link": "tap1a81968a-79",
+ "routes": [
+ {
+ "gateway": "2001:DB8::1",
+ "netmask": "ff:ff:ff:ff::",
+ "network": "2001:DB8:1::1",
+ },
+ ],
+ "ip_address": "2001:DB8::10",
+ "id": "network1",
+ }
+ ],
+ "links": [
+ {
+ "ethernet_mac_address": "fa:16:3e:ed:9a:59",
+ "mtu": None,
+ "type": "bridge",
+ "id": "tap1a81968a-79",
+ "vif_id": "1a81968a-797a-400f-8a80-567f997eb93f",
+ },
+ ],
+ }
+ macs = {"fa:16:3e:ed:9a:59": "eth0"}
+ network_cfg = openstack.convert_net_json(net_json, known_macs=macs)
+ with self.assertRaises(ValueError):
+ network_state.parse_net_config_data(network_cfg, skip_broken=False)
+
+ def test_invalid_network_mask_ipv4(self):
+ net_json = {
+ "services": [{"type": "dns", "address": "172.19.0.12"}],
+ "networks": [
+ {
+ "network_id": "public-ipv4",
+ "type": "ipv4",
+ "netmask": "",
+ "link": "tap1a81968a-79",
+ "routes": [
+ {
+ "gateway": "172.20.0.1",
+ "netmask": "255.234.255.0",
+ "network": "172.19.0.0",
+ },
+ ],
+ "ip_address": "172.20.0.10",
+ "id": "network1",
+ }
+ ],
+ "links": [
+ {
+ "ethernet_mac_address": "fa:16:3e:ed:9a:59",
+ "mtu": None,
+ "type": "bridge",
+ "id": "tap1a81968a-79",
+ "vif_id": "1a81968a-797a-400f-8a80-567f997eb93f",
+ },
+ ],
+ }
+ macs = {"fa:16:3e:ed:9a:59": "eth0"}
+ network_cfg = openstack.convert_net_json(net_json, known_macs=macs)
+ with self.assertRaises(ValueError):
+ network_state.parse_net_config_data(network_cfg, skip_broken=False)
+
def test_openstack_rendering_samples(self):
for os_sample in OS_SAMPLES:
render_dir = self.tmp_dir()
--
2.27.0
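A short worked example of the ipaddress-based conversion this refactor relies on; IPv4 netmask strings and integer IPv6 prefixes are handled by ip_network() directly, while address-style IPv6 masks such as ffff:ffff:: go through the patch's own bit-counting fallback:

import ipaddress

# IPv4: a dotted-quad netmask after the slash converts straight to a prefix.
print(ipaddress.ip_network("0.0.0.0/255.255.255.0").prefixlen)  # 24

# IPv6: an integer prefix is accepted as-is.
print(ipaddress.ip_network("::/48").prefixlen)  # 48

# The rendered route file then uses <network>/<prefix>, e.g. 2001:67c:1/32,
# instead of the <network>/<ipv6-netmask> form NetworkManager rejects.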

View File

@ -1,705 +0,0 @@
From 04a4cc7b8da04ba4103118cf9d975d8e9548e0dc Mon Sep 17 00:00:00 2001
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Date: Fri, 4 Mar 2022 11:23:22 +0100
Subject: [PATCH 2/2] Fix MIME policy failure on python version upgrade (#934)
RH-Author: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RH-MergeRequest: 54: - Detect a Python version change and clear the cache (#857)
RH-Commit: [2/2] 05fc8c52a39b5ad464ad146488703467e39d73b1
RH-Bugzilla: 1935826
RH-Acked-by: Eduardo Otubo <otubo@redhat.com>
RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com>
RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com>
commit eacb0353803263934aa2ac827c37e461c87cb107
Author: James Falcon <therealfalcon@gmail.com>
Date: Thu Jul 15 17:52:21 2021 -0500
Fix MIME policy failure on python version upgrade (#934)
Python 3.6 added a new `policy` attribute to `MIMEMultipart`.
MIMEMultipart may be part of the cached object pickle of a datasource.
Upgrading from an old version of python to 3.6+ will cause the
datasource to be invalid after pickle load.
This commit uses the upgrade framework to attempt to access the mime
message and fail early (thus discarding the cache) if we cannot.
Commit 78e89b03 should fix this issue more generally.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
cloudinit/sources/__init__.py | 18 +
cloudinit/stages.py | 2 +
.../assets/trusty_with_mime.pkl | 572 ++++++++++++++++++
.../modules/test_persistence.py | 30 +
4 files changed, 622 insertions(+)
create mode 100644 tests/integration_tests/assets/trusty_with_mime.pkl
create mode 100644 tests/integration_tests/modules/test_persistence.py
diff --git a/cloudinit/sources/__init__.py b/cloudinit/sources/__init__.py
index 7d74f8d9..338861e6 100644
--- a/cloudinit/sources/__init__.py
+++ b/cloudinit/sources/__init__.py
@@ -74,6 +74,10 @@ NetworkConfigSource = namedtuple('NetworkConfigSource',
_NETCFG_SOURCE_NAMES)(*_NETCFG_SOURCE_NAMES)
+class DatasourceUnpickleUserDataError(Exception):
+ """Raised when userdata is unable to be unpickled due to python upgrades"""
+
+
class DataSourceNotFoundException(Exception):
pass
@@ -227,6 +231,20 @@ class DataSource(CloudInitPickleMixin, metaclass=abc.ABCMeta):
self.vendordata2 = None
if not hasattr(self, 'vendordata2_raw'):
self.vendordata2_raw = None
+ if hasattr(self, 'userdata') and self.userdata is not None:
+ # If userdata stores MIME data, on < python3.6 it will be
+ # missing the 'policy' attribute that exists on >=python3.6.
+ # Calling str() on the userdata will attempt to access this
+ # policy attribute. This will raise an exception, causing
+ # the pickle load to fail, so cloud-init will discard the cache
+ try:
+ str(self.userdata)
+ except AttributeError as e:
+ LOG.debug(
+ "Unable to unpickle datasource: %s."
+ " Ignoring current cache.", e
+ )
+ raise DatasourceUnpickleUserDataError() from e
def __str__(self):
return type_utils.obj_name(self)
diff --git a/cloudinit/stages.py b/cloudinit/stages.py
index 83e25dd1..e709a5cf 100644
--- a/cloudinit/stages.py
+++ b/cloudinit/stages.py
@@ -980,6 +980,8 @@ def _pkl_load(fname):
return None
try:
return pickle.loads(pickle_contents)
+ except sources.DatasourceUnpickleUserDataError:
+ return None
except Exception:
util.logexc(LOG, "Failed loading pickled blob from %s", fname)
return None
diff --git a/tests/integration_tests/assets/trusty_with_mime.pkl b/tests/integration_tests/assets/trusty_with_mime.pkl
new file mode 100644
index 00000000..a4089ecf
--- /dev/null
+++ b/tests/integration_tests/assets/trusty_with_mime.pkl
@@ -0,0 +1,572 @@
+ccopy_reg
+_reconstructor
+p1
+(ccloudinit.sources.DataSourceNoCloud
+DataSourceNoCloudNet
+p2
+c__builtin__
+object
+p3
+NtRp4
+(dp5
+S'paths'
+p6
+g1
+(ccloudinit.helpers
+Paths
+p7
+g3
+NtRp8
+(dp9
+S'lookups'
+p10
+(dp11
+S'cloud_config'
+p12
+S'cloud-config.txt'
+p13
+sS'userdata'
+p14
+S'user-data.txt.i'
+p15
+sS'vendordata'
+p16
+S'vendor-data.txt.i'
+p17
+sS'userdata_raw'
+p18
+S'user-data.txt'
+p19
+sS'boothooks'
+p20
+g20
+sS'scripts'
+p21
+g21
+sS'sem'
+p22
+g22
+sS'data'
+p23
+g23
+sS'vendor_scripts'
+p24
+S'scripts/vendor'
+p25
+sS'handlers'
+p26
+g26
+sS'obj_pkl'
+p27
+S'obj.pkl'
+p28
+sS'vendordata_raw'
+p29
+S'vendor-data.txt'
+p30
+sS'vendor_cloud_config'
+p31
+S'vendor-cloud-config.txt'
+p32
+ssS'template_tpl'
+p33
+S'/etc/cloud/templates/%s.tmpl'
+p34
+sS'cfgs'
+p35
+(dp36
+S'cloud_dir'
+p37
+S'/var/lib/cloud/'
+p38
+sS'templates_dir'
+p39
+S'/etc/cloud/templates/'
+p40
+sS'upstart_dir'
+p41
+S'/etc/init/'
+p42
+ssS'cloud_dir'
+p43
+g38
+sS'datasource'
+p44
+NsS'upstart_conf_d'
+p45
+g42
+sS'boot_finished'
+p46
+S'/var/lib/cloud/instance/boot-finished'
+p47
+sS'instance_link'
+p48
+S'/var/lib/cloud/instance'
+p49
+sS'seed_dir'
+p50
+S'/var/lib/cloud/seed'
+p51
+sbsS'supported_seed_starts'
+p52
+(S'http://'
+p53
+S'https://'
+p54
+S'ftp://'
+p55
+tp56
+sS'sys_cfg'
+p57
+(dp58
+S'output'
+p59
+(dp60
+S'all'
+p61
+S'| tee -a /var/log/cloud-init-output.log'
+p62
+ssS'users'
+p63
+(lp64
+S'default'
+p65
+asS'def_log_file'
+p66
+S'/var/log/cloud-init.log'
+p67
+sS'cloud_final_modules'
+p68
+(lp69
+S'rightscale_userdata'
+p70
+aS'scripts-vendor'
+p71
+aS'scripts-per-once'
+p72
+aS'scripts-per-boot'
+p73
+aS'scripts-per-instance'
+p74
+aS'scripts-user'
+p75
+aS'ssh-authkey-fingerprints'
+p76
+aS'keys-to-console'
+p77
+aS'phone-home'
+p78
+aS'final-message'
+p79
+aS'power-state-change'
+p80
+asS'disable_root'
+p81
+I01
+sS'syslog_fix_perms'
+p82
+S'syslog:adm'
+p83
+sS'log_cfgs'
+p84
+(lp85
+(lp86
+S'[loggers]\nkeys=root,cloudinit\n\n[handlers]\nkeys=consoleHandler,cloudLogHandler\n\n[formatters]\nkeys=simpleFormatter,arg0Formatter\n\n[logger_root]\nlevel=DEBUG\nhandlers=consoleHandler,cloudLogHandler\n\n[logger_cloudinit]\nlevel=DEBUG\nqualname=cloudinit\nhandlers=\npropagate=1\n\n[handler_consoleHandler]\nclass=StreamHandler\nlevel=WARNING\nformatter=arg0Formatter\nargs=(sys.stderr,)\n\n[formatter_arg0Formatter]\nformat=%(asctime)s - %(filename)s[%(levelname)s]: %(message)s\n\n[formatter_simpleFormatter]\nformat=[CLOUDINIT] %(filename)s[%(levelname)s]: %(message)s\n'
+p87
+aS'[handler_cloudLogHandler]\nclass=handlers.SysLogHandler\nlevel=DEBUG\nformatter=simpleFormatter\nargs=("/dev/log", handlers.SysLogHandler.LOG_USER)\n'
+p88
+aa(lp89
+g87
+aS"[handler_cloudLogHandler]\nclass=FileHandler\nlevel=DEBUG\nformatter=arg0Formatter\nargs=('/var/log/cloud-init.log',)\n"
+p90
+aasS'cloud_init_modules'
+p91
+(lp92
+S'migrator'
+p93
+aS'seed_random'
+p94
+aS'bootcmd'
+p95
+aS'write-files'
+p96
+aS'growpart'
+p97
+aS'resizefs'
+p98
+aS'set_hostname'
+p99
+aS'update_hostname'
+p100
+aS'update_etc_hosts'
+p101
+aS'ca-certs'
+p102
+aS'rsyslog'
+p103
+aS'users-groups'
+p104
+aS'ssh'
+p105
+asS'preserve_hostname'
+p106
+I00
+sS'_log'
+p107
+(lp108
+g87
+ag90
+ag88
+asS'datasource_list'
+p109
+(lp110
+S'NoCloud'
+p111
+aS'ConfigDrive'
+p112
+aS'OpenNebula'
+p113
+aS'Azure'
+p114
+aS'AltCloud'
+p115
+aS'OVF'
+p116
+aS'MAAS'
+p117
+aS'GCE'
+p118
+aS'OpenStack'
+p119
+aS'CloudSigma'
+p120
+aS'Ec2'
+p121
+aS'CloudStack'
+p122
+aS'SmartOS'
+p123
+aS'None'
+p124
+asS'vendor_data'
+p125
+(dp126
+S'prefix'
+p127
+(lp128
+sS'enabled'
+p129
+I01
+ssS'cloud_config_modules'
+p130
+(lp131
+S'emit_upstart'
+p132
+aS'disk_setup'
+p133
+aS'mounts'
+p134
+aS'ssh-import-id'
+p135
+aS'locale'
+p136
+aS'set-passwords'
+p137
+aS'grub-dpkg'
+p138
+aS'apt-pipelining'
+p139
+aS'apt-configure'
+p140
+aS'package-update-upgrade-install'
+p141
+aS'landscape'
+p142
+aS'timezone'
+p143
+aS'puppet'
+p144
+aS'chef'
+p145
+aS'salt-minion'
+p146
+aS'mcollective'
+p147
+aS'disable-ec2-metadata'
+p148
+aS'runcmd'
+p149
+aS'byobu'
+p150
+assg14
+(iemail.mime.multipart
+MIMEMultipart
+p151
+(dp152
+S'_headers'
+p153
+(lp154
+(S'Content-Type'
+p155
+S'multipart/mixed; boundary="===============4291038100093149247=="'
+tp156
+a(S'MIME-Version'
+p157
+S'1.0'
+p158
+tp159
+a(S'Number-Attachments'
+p160
+S'1'
+tp161
+asS'_payload'
+p162
+(lp163
+(iemail.mime.base
+MIMEBase
+p164
+(dp165
+g153
+(lp166
+(g157
+g158
+tp167
+a(S'Content-Type'
+p168
+S'text/x-not-multipart'
+tp169
+a(S'Content-Disposition'
+p170
+S'attachment; filename="part-001"'
+tp171
+asg162
+S''
+sS'_charset'
+p172
+NsS'_default_type'
+p173
+S'text/plain'
+p174
+sS'preamble'
+p175
+NsS'defects'
+p176
+(lp177
+sS'_unixfrom'
+p178
+NsS'epilogue'
+p179
+Nsbasg172
+Nsg173
+g174
+sg175
+Nsg176
+(lp180
+sg178
+Nsg179
+Nsbsg16
+S'#cloud-config\n{}\n\n'
+p181
+sg18
+S'Content-Type: multipart/mixed; boundary="===============1378281702283945349=="\nMIME-Version: 1.0\n\n--===============1378281702283945349==\nContent-Type: text/x-shellscript; charset="utf-8"\nMIME-Version: 1.0\nContent-Transfer-Encoding: base64\nContent-Disposition: attachment; filename="script1.sh"\n\nIyEvYmluL3NoCgplY2hvICdoaScgPiAvdmFyL3RtcC9oaQo=\n\n--===============1378281702283945349==\nContent-Type: text/x-shellscript; charset="utf-8"\nMIME-Version: 1.0\nContent-Transfer-Encoding: base64\nContent-Disposition: attachment; filename="script2.sh"\n\nIyEvYmluL2Jhc2gKCmVjaG8gJ2hpMicgPiAvdmFyL3RtcC9oaTIK\n\n--===============1378281702283945349==--\n\n#cloud-config\n# final_message: |\n# This is my final message!\n# $version\n# $timestamp\n# $datasource\n# $uptime\n# updates:\n# network:\n# when: [\'hotplug\']\n'
+p182
+sg29
+NsS'dsmode'
+p183
+S'net'
+p184
+sS'seed'
+p185
+S'/var/lib/cloud/seed/nocloud-net'
+p186
+sS'cmdline_id'
+p187
+S'ds=nocloud-net'
+p188
+sS'ud_proc'
+p189
+g1
+(ccloudinit.user_data
+UserDataProcessor
+p190
+g3
+NtRp191
+(dp192
+g6
+g8
+sS'ssl_details'
+p193
+(dp194
+sbsg50
+g186
+sS'ds_cfg'
+p195
+(dp196
+sS'distro'
+p197
+g1
+(ccloudinit.distros.ubuntu
+Distro
+p198
+g3
+NtRp199
+(dp200
+S'osfamily'
+p201
+S'debian'
+p202
+sS'_paths'
+p203
+g8
+sS'name'
+p204
+S'ubuntu'
+p205
+sS'_runner'
+p206
+g1
+(ccloudinit.helpers
+Runners
+p207
+g3
+NtRp208
+(dp209
+g6
+g8
+sS'sems'
+p210
+(dp211
+sbsS'_cfg'
+p212
+(dp213
+S'paths'
+p214
+(dp215
+g37
+g38
+sg39
+g40
+sg41
+g42
+ssS'default_user'
+p216
+(dp217
+S'shell'
+p218
+S'/bin/bash'
+p219
+sS'name'
+p220
+S'ubuntu'
+p221
+sS'sudo'
+p222
+(lp223
+S'ALL=(ALL) NOPASSWD:ALL'
+p224
+asS'lock_passwd'
+p225
+I01
+sS'gecos'
+p226
+S'Ubuntu'
+p227
+sS'groups'
+p228
+(lp229
+S'adm'
+p230
+aS'audio'
+p231
+aS'cdrom'
+p232
+aS'dialout'
+p233
+aS'dip'
+p234
+aS'floppy'
+p235
+aS'netdev'
+p236
+aS'plugdev'
+p237
+aS'sudo'
+p238
+aS'video'
+p239
+assS'package_mirrors'
+p240
+(lp241
+(dp242
+S'arches'
+p243
+(lp244
+S'i386'
+p245
+aS'amd64'
+p246
+asS'failsafe'
+p247
+(dp248
+S'security'
+p249
+S'http://security.ubuntu.com/ubuntu'
+p250
+sS'primary'
+p251
+S'http://archive.ubuntu.com/ubuntu'
+p252
+ssS'search'
+p253
+(dp254
+S'security'
+p255
+(lp256
+sS'primary'
+p257
+(lp258
+S'http://%(ec2_region)s.ec2.archive.ubuntu.com/ubuntu/'
+p259
+aS'http://%(availability_zone)s.clouds.archive.ubuntu.com/ubuntu/'
+p260
+aS'http://%(region)s.clouds.archive.ubuntu.com/ubuntu/'
+p261
+assa(dp262
+S'arches'
+p263
+(lp264
+S'armhf'
+p265
+aS'armel'
+p266
+aS'default'
+p267
+asS'failsafe'
+p268
+(dp269
+S'security'
+p270
+S'http://ports.ubuntu.com/ubuntu-ports'
+p271
+sS'primary'
+p272
+S'http://ports.ubuntu.com/ubuntu-ports'
+p273
+ssasS'ssh_svcname'
+p274
+S'ssh'
+p275
+ssbsS'metadata'
+p276
+(dp277
+g183
+g184
+sS'local-hostname'
+p278
+S'me'
+p279
+sS'instance-id'
+p280
+S'me'
+p281
+ssb.
\ No newline at end of file
diff --git a/tests/integration_tests/modules/test_persistence.py b/tests/integration_tests/modules/test_persistence.py
new file mode 100644
index 00000000..00fdeaea
--- /dev/null
+++ b/tests/integration_tests/modules/test_persistence.py
@@ -0,0 +1,30 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+"""Test the behavior of loading/discarding pickle data"""
+from pathlib import Path
+
+import pytest
+
+from tests.integration_tests.instances import IntegrationInstance
+from tests.integration_tests.util import (
+ ASSETS_DIR,
+ verify_ordered_items_in_text,
+)
+
+
+PICKLE_PATH = Path('/var/lib/cloud/instance/obj.pkl')
+TEST_PICKLE = ASSETS_DIR / 'trusty_with_mime.pkl'
+
+
+@pytest.mark.lxd_container
+def test_log_message_on_missing_version_file(client: IntegrationInstance):
+ client.push_file(TEST_PICKLE, PICKLE_PATH)
+ client.restart()
+ assert client.execute('cloud-init status --wait').ok
+ log = client.read_from_file('/var/log/cloud-init.log')
+ verify_ordered_items_in_text([
+ "Unable to unpickle datasource: 'MIMEMultipart' object has no "
+ "attribute 'policy'. Ignoring current cache.",
+ 'no cache found',
+ 'Searching for local data source',
+ 'SUCCESS: found local data from DataSourceNoCloud'
+ ], log)
--
2.31.1

View File

@ -1,262 +0,0 @@
From 71989367e7a634fdd2af8ef58473975e0ef60464 Mon Sep 17 00:00:00 2001
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Date: Sat, 21 Aug 2021 13:53:27 +0200
Subject: [PATCH] Fix home permissions modified by ssh module (SC-338) (#984)
RH-Author: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RH-MergeRequest: 29: Fix home permissions modified by ssh module (SC-338) (#984)
RH-Commit: [1/1] c409f2609b1d7e024eba77b55a196a4cafadd1d7 (eesposit/cloud-init)
RH-Bugzilla: 1995840
RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com>
RH-Acked-by: Eduardo Otubo <otubo@redhat.com>
TESTED: By me and QA
BREW: 39178090
Fix home permissions modified by ssh module (SC-338) (#984)
commit 7d3f5d750f6111c2716143364ea33486df67c927
Author: James Falcon <therealfalcon@gmail.com>
Date: Fri Aug 20 17:09:49 2021 -0500
Fix home permissions modified by ssh module (SC-338) (#984)
Fix home permissions modified by ssh module
In #956, we updated the file and directory permissions for keys not in
the user's home directory. We also unintentionally modified the
permissions within the home directory. These should not change,
and this commit changes that back.
LP: #1940233
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
cloudinit/ssh_util.py | 35 ++++-
.../modules/test_ssh_keysfile.py | 132 +++++++++++++++---
2 files changed, 146 insertions(+), 21 deletions(-)
diff --git a/cloudinit/ssh_util.py b/cloudinit/ssh_util.py
index b8a3c8f7..9ccadf09 100644
--- a/cloudinit/ssh_util.py
+++ b/cloudinit/ssh_util.py
@@ -321,23 +321,48 @@ def check_create_path(username, filename, strictmodes):
home_folder = os.path.dirname(user_pwent.pw_dir)
for directory in directories:
parent_folder += "/" + directory
- if home_folder.startswith(parent_folder):
+
+ # security check, disallow symlinks in the AuthorizedKeysFile path.
+ if os.path.islink(parent_folder):
+ LOG.debug(
+ "Invalid directory. Symlink exists in path: %s",
+ parent_folder)
+ return False
+
+ if os.path.isfile(parent_folder):
+ LOG.debug(
+ "Invalid directory. File exists in path: %s",
+ parent_folder)
+ return False
+
+ if (home_folder.startswith(parent_folder) or
+ parent_folder == user_pwent.pw_dir):
continue
- if not os.path.isdir(parent_folder):
+ if not os.path.exists(parent_folder):
# directory does not exist, and permission so far are good:
# create the directory, and make it accessible by everyone
# but owned by root, as it might be used by many users.
with util.SeLinuxGuard(parent_folder):
- os.makedirs(parent_folder, mode=0o755, exist_ok=True)
- util.chownbyid(parent_folder, root_pwent.pw_uid,
- root_pwent.pw_gid)
+ mode = 0o755
+ uid = root_pwent.pw_uid
+ gid = root_pwent.pw_gid
+ if parent_folder.startswith(user_pwent.pw_dir):
+ mode = 0o700
+ uid = user_pwent.pw_uid
+ gid = user_pwent.pw_gid
+ os.makedirs(parent_folder, mode=mode, exist_ok=True)
+ util.chownbyid(parent_folder, uid, gid)
permissions = check_permissions(username, parent_folder,
filename, False, strictmodes)
if not permissions:
return False
+ if os.path.islink(filename) or os.path.isdir(filename):
+ LOG.debug("%s is not a file!", filename)
+ return False
+
# check the file
if not os.path.exists(filename):
# if file does not exist: we need to create it, since the
diff --git a/tests/integration_tests/modules/test_ssh_keysfile.py b/tests/integration_tests/modules/test_ssh_keysfile.py
index f82d7649..3159feb9 100644
--- a/tests/integration_tests/modules/test_ssh_keysfile.py
+++ b/tests/integration_tests/modules/test_ssh_keysfile.py
@@ -10,10 +10,10 @@ TEST_USER1_KEYS = get_test_rsa_keypair('test1')
TEST_USER2_KEYS = get_test_rsa_keypair('test2')
TEST_DEFAULT_KEYS = get_test_rsa_keypair('test3')
-USERDATA = """\
+_USERDATA = """\
#cloud-config
bootcmd:
- - sed -i 's;#AuthorizedKeysFile.*;AuthorizedKeysFile /etc/ssh/authorized_keys %h/.ssh/authorized_keys2;' /etc/ssh/sshd_config
+ - {bootcmd}
ssh_authorized_keys:
- {default}
users:
@@ -24,27 +24,17 @@ users:
- name: test_user2
ssh_authorized_keys:
- {user2}
-""".format( # noqa: E501
+""".format(
+ bootcmd='{bootcmd}',
default=TEST_DEFAULT_KEYS.public_key,
user1=TEST_USER1_KEYS.public_key,
user2=TEST_USER2_KEYS.public_key,
)
-@pytest.mark.ubuntu
-@pytest.mark.user_data(USERDATA)
-def test_authorized_keys(client: IntegrationInstance):
- expected_keys = [
- ('test_user1', '/home/test_user1/.ssh/authorized_keys2',
- TEST_USER1_KEYS),
- ('test_user2', '/home/test_user2/.ssh/authorized_keys2',
- TEST_USER2_KEYS),
- ('ubuntu', '/home/ubuntu/.ssh/authorized_keys2',
- TEST_DEFAULT_KEYS),
- ('root', '/root/.ssh/authorized_keys2', TEST_DEFAULT_KEYS),
- ]
-
+def common_verify(client, expected_keys):
for user, filename, keys in expected_keys:
+ # Ensure key is in the key file
contents = client.read_from_file(filename)
if user in ['ubuntu', 'root']:
# Our personal public key gets added by pycloudlib
@@ -83,3 +73,113 @@ def test_authorized_keys(client: IntegrationInstance):
look_for_keys=False,
allow_agent=False,
)
+
+ # Ensure we haven't messed with any /home permissions
+ # See LP: #1940233
+ home_dir = '/home/{}'.format(user)
+ home_perms = '755'
+ if user == 'root':
+ home_dir = '/root'
+ home_perms = '700'
+ assert '{} {}'.format(user, home_perms) == client.execute(
+ 'stat -c "%U %a" {}'.format(home_dir)
+ )
+ if client.execute("test -d {}/.ssh".format(home_dir)).ok:
+ assert '{} 700'.format(user) == client.execute(
+ 'stat -c "%U %a" {}/.ssh'.format(home_dir)
+ )
+ assert '{} 600'.format(user) == client.execute(
+ 'stat -c "%U %a" {}'.format(filename)
+ )
+
+ # Also ensure ssh-keygen works as expected
+ client.execute('mkdir {}/.ssh'.format(home_dir))
+ assert client.execute(
+ "ssh-keygen -b 2048 -t rsa -f {}/.ssh/id_rsa -q -N ''".format(
+ home_dir)
+ ).ok
+ assert client.execute('test -f {}/.ssh/id_rsa'.format(home_dir))
+ assert client.execute('test -f {}/.ssh/id_rsa.pub'.format(home_dir))
+
+ assert 'root 755' == client.execute('stat -c "%U %a" /home')
+
+
+DEFAULT_KEYS_USERDATA = _USERDATA.format(bootcmd='""')
+
+
+@pytest.mark.ubuntu
+@pytest.mark.user_data(DEFAULT_KEYS_USERDATA)
+def test_authorized_keys_default(client: IntegrationInstance):
+ expected_keys = [
+ ('test_user1', '/home/test_user1/.ssh/authorized_keys',
+ TEST_USER1_KEYS),
+ ('test_user2', '/home/test_user2/.ssh/authorized_keys',
+ TEST_USER2_KEYS),
+ ('ubuntu', '/home/ubuntu/.ssh/authorized_keys',
+ TEST_DEFAULT_KEYS),
+ ('root', '/root/.ssh/authorized_keys', TEST_DEFAULT_KEYS),
+ ]
+ common_verify(client, expected_keys)
+
+
+AUTHORIZED_KEYS2_USERDATA = _USERDATA.format(bootcmd=(
+ "sed -i 's;#AuthorizedKeysFile.*;AuthorizedKeysFile "
+ "/etc/ssh/authorized_keys %h/.ssh/authorized_keys2;' "
+ "/etc/ssh/sshd_config"))
+
+
+@pytest.mark.ubuntu
+@pytest.mark.user_data(AUTHORIZED_KEYS2_USERDATA)
+def test_authorized_keys2(client: IntegrationInstance):
+ expected_keys = [
+ ('test_user1', '/home/test_user1/.ssh/authorized_keys2',
+ TEST_USER1_KEYS),
+ ('test_user2', '/home/test_user2/.ssh/authorized_keys2',
+ TEST_USER2_KEYS),
+ ('ubuntu', '/home/ubuntu/.ssh/authorized_keys2',
+ TEST_DEFAULT_KEYS),
+ ('root', '/root/.ssh/authorized_keys2', TEST_DEFAULT_KEYS),
+ ]
+ common_verify(client, expected_keys)
+
+
+NESTED_KEYS_USERDATA = _USERDATA.format(bootcmd=(
+ "sed -i 's;#AuthorizedKeysFile.*;AuthorizedKeysFile "
+ "/etc/ssh/authorized_keys %h/foo/bar/ssh/keys;' "
+ "/etc/ssh/sshd_config"))
+
+
+@pytest.mark.ubuntu
+@pytest.mark.user_data(NESTED_KEYS_USERDATA)
+def test_nested_keys(client: IntegrationInstance):
+ expected_keys = [
+ ('test_user1', '/home/test_user1/foo/bar/ssh/keys',
+ TEST_USER1_KEYS),
+ ('test_user2', '/home/test_user2/foo/bar/ssh/keys',
+ TEST_USER2_KEYS),
+ ('ubuntu', '/home/ubuntu/foo/bar/ssh/keys',
+ TEST_DEFAULT_KEYS),
+ ('root', '/root/foo/bar/ssh/keys', TEST_DEFAULT_KEYS),
+ ]
+ common_verify(client, expected_keys)
+
+
+EXTERNAL_KEYS_USERDATA = _USERDATA.format(bootcmd=(
+ "sed -i 's;#AuthorizedKeysFile.*;AuthorizedKeysFile "
+ "/etc/ssh/authorized_keys /etc/ssh/authorized_keys/%u/keys;' "
+ "/etc/ssh/sshd_config"))
+
+
+@pytest.mark.ubuntu
+@pytest.mark.user_data(EXTERNAL_KEYS_USERDATA)
+def test_external_keys(client: IntegrationInstance):
+ expected_keys = [
+ ('test_user1', '/etc/ssh/authorized_keys/test_user1/keys',
+ TEST_USER1_KEYS),
+ ('test_user2', '/etc/ssh/authorized_keys/test_user2/keys',
+ TEST_USER2_KEYS),
+ ('ubuntu', '/etc/ssh/authorized_keys/ubuntu/keys',
+ TEST_DEFAULT_KEYS),
+ ('root', '/etc/ssh/authorized_keys/root/keys', TEST_DEFAULT_KEYS),
+ ]
+ common_verify(client, expected_keys)
--
2.27.0

View File

@ -0,0 +1,88 @@
From 8b9627be7ed3e44c6890e52723cb86375f56a0e4 Mon Sep 17 00:00:00 2001
From: Shreenidhi Shedi <53473811+sshedi@users.noreply.github.com>
Date: Fri, 17 Mar 2023 03:01:22 +0530
Subject: [PATCH 05/11] Handle non existent ca-cert-config situation (#2073)
Currently, if a cert config file doesn't exist, the cc_ca_certs module crashes.
This fix makes it possible to handle that case gracefully.
Also, the out_lines variable may not be available if os.stat reports a size of 0.
This issue is also taken care of.
Added tests for the same.
(cherry picked from commit 3634678465e7b8f8608bcb9a1f5773ae7837cbe9)
Signed-off-by: Ani Sinha <anisinha@redhat.com>
---
cloudinit/config/cc_ca_certs.py | 19 +++++++++++++------
tests/unittests/config/test_cc_ca_certs.py | 12 ++++++++++++
2 files changed, 25 insertions(+), 6 deletions(-)
diff --git a/cloudinit/config/cc_ca_certs.py b/cloudinit/config/cc_ca_certs.py
index 51b8577c..4dc08681 100644
--- a/cloudinit/config/cc_ca_certs.py
+++ b/cloudinit/config/cc_ca_certs.py
@@ -177,14 +177,20 @@ def disable_system_ca_certs(distro_cfg):
@param distro_cfg: A hash providing _distro_ca_certs_configs function.
"""
- if distro_cfg["ca_cert_config"] is None:
+
+ ca_cert_cfg_fn = distro_cfg["ca_cert_config"]
+
+ if not ca_cert_cfg_fn or not os.path.exists(ca_cert_cfg_fn):
return
+
header_comment = (
"# Modified by cloud-init to deselect certs due to user-data"
)
+
added_header = False
- if os.stat(distro_cfg["ca_cert_config"]).st_size != 0:
- orig = util.load_file(distro_cfg["ca_cert_config"])
+
+ if os.stat(ca_cert_cfg_fn).st_size:
+ orig = util.load_file(ca_cert_cfg_fn)
out_lines = []
for line in orig.splitlines():
if line == header_comment:
@@ -197,9 +203,10 @@ def disable_system_ca_certs(distro_cfg):
out_lines.append(header_comment)
added_header = True
out_lines.append("!" + line)
- util.write_file(
- distro_cfg["ca_cert_config"], "\n".join(out_lines) + "\n", omode="wb"
- )
+
+ util.write_file(
+ ca_cert_cfg_fn, "\n".join(out_lines) + "\n", omode="wb"
+ )
def remove_default_ca_certs(distro_cfg):
diff --git a/tests/unittests/config/test_cc_ca_certs.py b/tests/unittests/config/test_cc_ca_certs.py
index 6db17485..5f1894e7 100644
--- a/tests/unittests/config/test_cc_ca_certs.py
+++ b/tests/unittests/config/test_cc_ca_certs.py
@@ -365,6 +365,18 @@ class TestRemoveDefaultCaCerts(TestCase):
else:
assert mock_subp.call_count == 0
+ def test_non_existent_cert_cfg(self):
+ self.m_stat.return_value.st_size = 0
+
+ for distro_name in cc_ca_certs.distros:
+ conf = cc_ca_certs._distro_ca_certs_configs(distro_name)
+ with ExitStack() as mocks:
+ mocks.enter_context(
+ mock.patch.object(util, "delete_dir_contents")
+ )
+ mocks.enter_context(mock.patch.object(subp, "subp"))
+ cc_ca_certs.disable_default_ca_certs(distro_name, conf)
+
class TestCACertsSchema:
"""Directly test schema rather than through handle."""
--
2.39.3
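The guard added above boils down to: only touch the CA config file when there is a configured, existing, non-empty file, instead of calling os.stat() unconditionally. A standalone sketch with an illustrative helper name:

import os

def config_file_usable(path):
    # The crash fixed above came from stat()-ing a path that may not
    # exist; check for a configured, existing, non-empty file first.
    return bool(path) and os.path.exists(path) and os.stat(path).st_size > 0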

View File

@ -0,0 +1,309 @@
From dd1a79fc5c0b5f486ca2e66ed3a45c8f4f7b1f15 Mon Sep 17 00:00:00 2001
From: James Falcon <james.falcon@canonical.com>
Date: Wed, 26 Apr 2023 15:11:55 -0500
Subject: [PATCH 2/2] Make user/vendor data sensitive and remove log
permissions (#2144)
RH-Author: Ani Sinha <None>
RH-MergeRequest: 99: Make user/vendor data sensitive and remove log permissions (#2144)
RH-Bugzilla: 2190081
RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com>
RH-Commit: [1/1] 1b34e2c9c61a90abb88f2df87d41f96b54e79ff7 (anisinha/rhel-cloud-init)
Because user data and vendor data may contain sensitive information,
this commit ensures that any user data or vendor data written to
instance-data.json gets redacted and is only available to the root user.
Also, modify the permissions of cloud-init.log to be 640, so that
sensitive data leaked to the log isn't world readable.
Additionally, remove the logging of user data and vendor data to
cloud-init.log from the Vultr datasource.
Conflicts:
cloudinit/sources/DataSourceVultr.py
- editor directives are missing from the file in the upstream version.
LP: #2013967
CVE: CVE-2023-1786
(cherry picked from commit a378b7e4f47375458651c0972e7cd813f6fe0a6b)
Signed-off-by: Ani Sinha <anisinha@redhat.com>
---
cloudinit/sources/DataSourceLXD.py | 9 ++++++---
cloudinit/sources/DataSourceVultr.py | 14 ++++++--------
cloudinit/sources/__init__.py | 28 +++++++++++++++++++++++++---
cloudinit/stages.py | 4 +++-
tests/unittests/sources/test_init.py | 27 ++++++++++++++++++++++++++-
tests/unittests/test_stages.py | 18 +++++++++++-------
6 files changed, 77 insertions(+), 23 deletions(-)
diff --git a/cloudinit/sources/DataSourceLXD.py b/cloudinit/sources/DataSourceLXD.py
index ab440cc8..e4cae91a 100644
--- a/cloudinit/sources/DataSourceLXD.py
+++ b/cloudinit/sources/DataSourceLXD.py
@@ -14,7 +14,7 @@ import stat
import time
from enum import Flag, auto
from json.decoder import JSONDecodeError
-from typing import Any, Dict, List, Optional, Union, cast
+from typing import Any, Dict, List, Optional, Tuple, Union, cast
import requests
from requests.adapters import HTTPAdapter
@@ -168,11 +168,14 @@ class DataSourceLXD(sources.DataSource):
_network_config: Union[Dict, str] = sources.UNSET
_crawled_metadata: Union[Dict, str] = sources.UNSET
- sensitive_metadata_keys = (
- "merged_cfg",
+ sensitive_metadata_keys: Tuple[
+ str, ...
+ ] = sources.DataSource.sensitive_metadata_keys + (
"user.meta-data",
"user.vendor-data",
"user.user-data",
+ "cloud-init.user-data",
+ "cloud-init.vendor-data",
)
skip_hotplug_detect = True
diff --git a/cloudinit/sources/DataSourceVultr.py b/cloudinit/sources/DataSourceVultr.py
index 9d7c84fb..660e9f14 100644
--- a/cloudinit/sources/DataSourceVultr.py
+++ b/cloudinit/sources/DataSourceVultr.py
@@ -5,6 +5,8 @@
# Vultr Metadata API:
# https://www.vultr.com/metadata/
+from typing import Tuple
+
import cloudinit.sources.helpers.vultr as vultr
from cloudinit import log as log
from cloudinit import sources, util, version
@@ -28,6 +30,10 @@ class DataSourceVultr(sources.DataSource):
dsname = "Vultr"
+ sensitive_metadata_keys: Tuple[
+ str, ...
+ ] = sources.DataSource.sensitive_metadata_keys + ("startup-script",)
+
def __init__(self, sys_cfg, distro, paths):
super(DataSourceVultr, self).__init__(sys_cfg, distro, paths)
self.ds_cfg = util.mergemanydict(
@@ -54,13 +60,8 @@ class DataSourceVultr(sources.DataSource):
self.get_datasource_data(self.metadata)
# Dump some data so diagnosing failures is manageable
- LOG.debug("Vultr Vendor Config:")
- LOG.debug(util.json_dumps(self.metadata["vendor-data"]))
LOG.debug("SUBID: %s", self.metadata["instance-id"])
LOG.debug("Hostname: %s", self.metadata["local-hostname"])
- if self.userdata_raw is not None:
- LOG.debug("User-Data:")
- LOG.debug(self.userdata_raw)
return True
@@ -146,7 +147,4 @@ if __name__ == "__main__":
config = md["vendor-data"]
sysinfo = vultr.get_sysinfo()
- print(util.json_dumps(sysinfo))
- print(util.json_dumps(config))
-
# vi: ts=4 expandtab
diff --git a/cloudinit/sources/__init__.py b/cloudinit/sources/__init__.py
index 565e1754..5c6ae8b1 100644
--- a/cloudinit/sources/__init__.py
+++ b/cloudinit/sources/__init__.py
@@ -110,7 +110,10 @@ def process_instance_metadata(metadata, key_path="", sensitive_keys=()):
sub_key_path = key_path + "/" + key
else:
sub_key_path = key
- if key in sensitive_keys or sub_key_path in sensitive_keys:
+ if (
+ key.lower() in sensitive_keys
+ or sub_key_path.lower() in sensitive_keys
+ ):
sens_keys.append(sub_key_path)
if isinstance(val, str) and val.startswith("ci-b64:"):
base64_encoded_keys.append(sub_key_path)
@@ -132,6 +135,12 @@ def redact_sensitive_keys(metadata, redact_value=REDACT_SENSITIVE_VALUE):
Replace any keys values listed in 'sensitive_keys' with redact_value.
"""
+ # While 'sensitive_keys' should already sanitized to only include what
+ # is in metadata, it is possible keys will overlap. For example, if
+ # "merged_cfg" and "merged_cfg/ds/userdata" both match, it's possible that
+ # "merged_cfg" will get replaced first, meaning "merged_cfg/ds/userdata"
+ # no longer represents a valid key.
+ # Thus, we still need to do membership checks in this function.
if not metadata.get("sensitive_keys", []):
return metadata
md_copy = copy.deepcopy(metadata)
@@ -139,9 +148,14 @@ def redact_sensitive_keys(metadata, redact_value=REDACT_SENSITIVE_VALUE):
path_parts = key_path.split("/")
obj = md_copy
for path in path_parts:
- if isinstance(obj[path], dict) and path != path_parts[-1]:
+ if (
+ path in obj
+ and isinstance(obj[path], dict)
+ and path != path_parts[-1]
+ ):
obj = obj[path]
- obj[path] = redact_value
+ if path in obj:
+ obj[path] = redact_value
return md_copy
@@ -249,6 +263,14 @@ class DataSource(CloudInitPickleMixin, metaclass=abc.ABCMeta):
sensitive_metadata_keys: Tuple[str, ...] = (
"merged_cfg",
"security-credentials",
+ "userdata",
+ "user-data",
+ "user_data",
+ "vendordata",
+ "vendor-data",
+ # Provide ds/vendor_data to avoid redacting top-level
+ # "vendor_data": {enabled: True}
+ "ds/vendor_data",
)
# True on datasources that may not see hotplugged devices reflected
diff --git a/cloudinit/stages.py b/cloudinit/stages.py
index a624a6fb..1326d205 100644
--- a/cloudinit/stages.py
+++ b/cloudinit/stages.py
@@ -204,7 +204,9 @@ class Init:
log_file = util.get_cfg_option_str(self.cfg, "def_log_file")
log_file_mode = util.get_cfg_option_int(self.cfg, "def_log_file_mode")
if log_file:
- util.ensure_file(log_file, mode=0o640, preserve_mode=True)
+ # At this point the log file should have already been created
+ # in the setupLogging function of log.py
+ util.ensure_file(log_file, mode=0o640, preserve_mode=False)
perms = self.cfg.get("syslog_fix_perms")
if not perms:
perms = {}
diff --git a/tests/unittests/sources/test_init.py b/tests/unittests/sources/test_init.py
index 0447e02c..eb27198f 100644
--- a/tests/unittests/sources/test_init.py
+++ b/tests/unittests/sources/test_init.py
@@ -458,12 +458,24 @@ class TestDataSource(CiTestCase):
"cred2": "othersekret",
}
},
+ "someother": {
+ "nested": {
+ "userData": "HIDE ME",
+ }
+ },
+ "VENDOR-DAta": "HIDE ME TOO",
},
)
self.assertCountEqual(
(
"merged_cfg",
"security-credentials",
+ "userdata",
+ "user-data",
+ "user_data",
+ "vendordata",
+ "vendor-data",
+ "ds/vendor_data",
),
datasource.sensitive_metadata_keys,
)
@@ -490,7 +502,9 @@ class TestDataSource(CiTestCase):
"base64_encoded_keys": [],
"merged_cfg": REDACT_SENSITIVE_VALUE,
"sensitive_keys": [
+ "ds/meta_data/VENDOR-DAta",
"ds/meta_data/some/security-credentials",
+ "ds/meta_data/someother/nested/userData",
"merged_cfg",
],
"sys_info": sys_info,
@@ -500,6 +514,7 @@ class TestDataSource(CiTestCase):
"availability_zone": "myaz",
"cloud-name": "subclasscloudname",
"cloud_name": "subclasscloudname",
+ "cloud_id": "subclasscloudname",
"distro": "ubuntu",
"distro_release": "focal",
"distro_version": "20.04",
@@ -522,14 +537,18 @@ class TestDataSource(CiTestCase):
"ds": {
"_doc": EXPERIMENTAL_TEXT,
"meta_data": {
+ "VENDOR-DAta": REDACT_SENSITIVE_VALUE,
"availability_zone": "myaz",
"local-hostname": "test-subclass-hostname",
"region": "myregion",
"some": {"security-credentials": REDACT_SENSITIVE_VALUE},
+ "someother": {
+ "nested": {"userData": REDACT_SENSITIVE_VALUE}
+ },
},
},
}
- self.assertCountEqual(expected, redacted)
+ self.assertEqual(expected, redacted)
file_stat = os.stat(json_file)
self.assertEqual(0o644, stat.S_IMODE(file_stat.st_mode))
@@ -574,6 +593,12 @@ class TestDataSource(CiTestCase):
(
"merged_cfg",
"security-credentials",
+ "userdata",
+ "user-data",
+ "user_data",
+ "vendordata",
+ "vendor-data",
+ "ds/vendor_data",
),
datasource.sensitive_metadata_keys,
)
diff --git a/tests/unittests/test_stages.py b/tests/unittests/test_stages.py
index 15a7e973..a61f9df9 100644
--- a/tests/unittests/test_stages.py
+++ b/tests/unittests/test_stages.py
@@ -606,19 +606,23 @@ class TestInit_InitializeFilesystem:
# Assert we create it 0o640 by default if it doesn't already exist
assert 0o640 == stat.S_IMODE(log_file.stat().mode)
- def test_existing_file_permissions_are_not_modified(self, init, tmpdir):
- """If the log file already exists, we should not modify its permissions
+ def test_existing_file_permissions(self, init, tmpdir):
+ """Test file permissions are set as expected.
+
+ CIS Hardening requires 640 permissions. These permissions are
+ currently hardcoded on every boot, but if there's ever a reason
+ to change this, we need to then ensure that they
+ are *not* set every boot.
See https://bugs.launchpad.net/cloud-init/+bug/1900837.
"""
- # Use a mode that will never be made the default so this test will
- # always be valid
- mode = 0o606
log_file = tmpdir.join("cloud-init.log")
log_file.ensure()
- log_file.chmod(mode)
+ # Use a mode that will never be made the default so this test will
+ # always be valid
+ log_file.chmod(0o606)
init._cfg = {"def_log_file": str(log_file)}
init._initialize_filesystem()
- assert mode == stat.S_IMODE(log_file.stat().mode)
+ assert 0o640 == stat.S_IMODE(log_file.stat().mode)
--
2.37.3
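
A minimal sketch of the redaction idea described in the commit message (simplified; the real code tracks sensitive key paths and writes separate redacted and root-only unredacted JSON files), showing the case-insensitive matching that the new tests exercise:

import copy

SENSITIVE = {"userdata", "user-data", "user_data", "vendordata", "vendor-data"}
REDACTED = "redacted for non-root user"

def redact(metadata):
    def walk(node):
        if isinstance(node, dict):
            return {
                key: REDACTED if key.lower() in SENSITIVE else walk(value)
                for key, value in node.items()
            }
        return node
    return walk(copy.deepcopy(metadata))

print(redact({"ds": {"meta_data": {"VENDOR-DAta": "secret", "region": "myregion"}}}))
# {'ds': {'meta_data': {'VENDOR-DAta': 'redacted for non-root user', 'region': 'myregion'}}}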

View File

@ -0,0 +1,293 @@
From e9e49fc09636609ec5cf55984bee01784da52083 Mon Sep 17 00:00:00 2001
From: Ani Sinha <anisinha@redhat.com>
Date: Fri, 4 Aug 2023 08:58:26 +0530
Subject: [PATCH] NM renderer: set default IPv6 addr-gen-mode for all
interfaces to eui64 (#4291)
RH-Author: Ani Sinha <None>
RH-MergeRequest: 107: NM renderer: set default IPv6 addr-gen-mode for all interfaces to eui64 (#4291)
RH-Bugzilla: 2229460
RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [1/1] 2a8ed5a008d6fac5ab5263d94703a065ff3c192f (anisinha/rhel-cloud-init)
By default, NetworkManager renderer in cloud-init does not set any specific
method for IPV6 addr-gen-mode in the keyfiles it writes. Hence, implicitly the
mode is set to `eui64` in the absence of any global addr-gen-mode option in
NetworkManager configuration.
Later, when other interfaces are added via the D-Bus API or with nmcli commands
without explicitly setting an addr-gen-mode, NM auto-generates new profiles for
those interfaces with addr-gen-mode set to `stable-privacy`. This introduces
inconsistent configurations between interfaces depending on how they were
added, which can cause problems for customers.
In this change, cloud-init overrides NetworkManager's preferred default of
`stable-privacy` and selects EUI64 instead, via a drop-in NetworkManager
configuration file. This default can be overridden with a global-connection-defaults
setting in the /etc/NetworkManager/NetworkManager.conf file.
RHBZ: 2188388
Signed-off-by: Ani Sinha <anisinha@redhat.com>
(cherry picked from commit d41264cb4297a4b143a23f3677d33b81fbfc6e8e)
Conflicts:
tests/unittests/test_net.py
---
cloudinit/net/network_manager.py | 21 ++++++++
tests/unittests/test_net.py | 91 +++++++++++++++++++++++++-------
2 files changed, 94 insertions(+), 18 deletions(-)
diff --git a/cloudinit/net/network_manager.py b/cloudinit/net/network_manager.py
index ca216928..8047f796 100644
--- a/cloudinit/net/network_manager.py
+++ b/cloudinit/net/network_manager.py
@@ -21,6 +21,15 @@ from cloudinit.net.network_state import NetworkState
NM_RUN_DIR = "/etc/NetworkManager"
NM_LIB_DIR = "/usr/lib/NetworkManager"
NM_CFG_FILE = "/etc/NetworkManager/NetworkManager.conf"
+NM_IPV6_ADDR_GEN_CONF = """# This is generated by cloud-init. Do not edit.
+#
+[.config]
+ enable=nm-version-min:1.40
+[connection.30-cloud-init-ip6-addr-gen-mode]
+ # Select EUI64 to be used if the profile does not specify it.
+ ipv6.addr-gen-mode=0
+
+"""
LOG = logging.getLogger(__name__)
@@ -368,6 +377,12 @@ class Renderer(renderer.Renderer):
name = conn_filename(con_id, target)
util.write_file(name, conn.dump(), 0o600)
+ # Select EUI64 to be used by default by NM for creating the address
+ # for use with RFC4862 IPv6 Stateless Address Autoconfiguration.
+ util.write_file(
+ cloud_init_nm_conf_filename(target), NM_IPV6_ADDR_GEN_CONF, 0o600
+ )
+
def conn_filename(con_id, target=None):
target_con_dir = subp.target_path(target, NM_RUN_DIR)
@@ -375,6 +390,12 @@ def conn_filename(con_id, target=None):
return f"{target_con_dir}/system-connections/{con_file}"
+def cloud_init_nm_conf_filename(target=None):
+ target_con_dir = subp.target_path(target, NM_RUN_DIR)
+ conf_file = "30-cloud-init-ip6-addr-gen-mode.conf"
+ return f"{target_con_dir}/conf.d/{conf_file}"
+
+
def available(target=None):
# TODO: Move `uses_systemd` to a more appropriate location
# It is imported here to avoid circular import
diff --git a/tests/unittests/test_net.py b/tests/unittests/test_net.py
index 6274f12d..aa4098b8 100644
--- a/tests/unittests/test_net.py
+++ b/tests/unittests/test_net.py
@@ -5628,9 +5628,25 @@ class TestNetworkManagerRendering(CiTestCase):
with_logs = True
scripts_dir = "/etc/NetworkManager/system-connections"
+ conf_dir = "/etc/NetworkManager/conf.d"
expected_name = "expected_network_manager"
+ expected_conf_d = {
+ "30-cloud-init-ip6-addr-gen-mode.conf": textwrap.dedent(
+ """\
+ # This is generated by cloud-init. Do not edit.
+ #
+ [.config]
+ enable=nm-version-min:1.40
+ [connection.30-cloud-init-ip6-addr-gen-mode]
+ # Select EUI64 to be used if the profile does not specify it.
+ ipv6.addr-gen-mode=0
+
+ """
+ ),
+ }
+
def _get_renderer(self):
return network_manager.Renderer()
@@ -5649,11 +5665,19 @@ class TestNetworkManagerRendering(CiTestCase):
renderer.render_network_state(ns, target=dir)
return dir2dict(dir)
- def _compare_files_to_expected(self, expected, found):
+ def _compare_files_to_expected(
+ self, expected_scripts, expected_conf, found
+ ):
orig_maxdiff = self.maxDiff
- expected_d = dict(
- (os.path.join(self.scripts_dir, k), v) for k, v in expected.items()
+ conf_d = dict(
+ (os.path.join(self.conf_dir, k), v)
+ for k, v in expected_conf.items()
+ )
+ scripts_d = dict(
+ (os.path.join(self.scripts_dir, k), v)
+ for k, v in expected_scripts.items()
)
+ expected_d = {**conf_d, **scripts_d}
try:
self.maxDiff = None
@@ -5714,6 +5738,7 @@ class TestNetworkManagerRendering(CiTestCase):
"""
),
},
+ self.expected_conf_d,
found,
)
@@ -5769,8 +5794,9 @@ class TestNetworkManagerRendering(CiTestCase):
gateway=10.0.2.2
"""
- ),
+ )
},
+ self.expected_conf_d,
found,
)
@@ -5806,33 +5832,44 @@ class TestNetworkManagerRendering(CiTestCase):
"""
),
},
+ self.expected_conf_d,
found,
)
def test_bond_config(self):
entry = NETWORK_CONFIGS["bond"]
found = self._render_and_read(network_config=yaml.load(entry["yaml"]))
- self._compare_files_to_expected(entry[self.expected_name], found)
+ self._compare_files_to_expected(
+ entry[self.expected_name], self.expected_conf_d, found
+ )
def test_vlan_config(self):
entry = NETWORK_CONFIGS["vlan"]
found = self._render_and_read(network_config=yaml.load(entry["yaml"]))
- self._compare_files_to_expected(entry[self.expected_name], found)
+ self._compare_files_to_expected(
+ entry[self.expected_name], self.expected_conf_d, found
+ )
def test_bridge_config(self):
entry = NETWORK_CONFIGS["bridge"]
found = self._render_and_read(network_config=yaml.load(entry["yaml"]))
- self._compare_files_to_expected(entry[self.expected_name], found)
+ self._compare_files_to_expected(
+ entry[self.expected_name], self.expected_conf_d, found
+ )
def test_manual_config(self):
entry = NETWORK_CONFIGS["manual"]
found = self._render_and_read(network_config=yaml.load(entry["yaml"]))
- self._compare_files_to_expected(entry[self.expected_name], found)
+ self._compare_files_to_expected(
+ entry[self.expected_name], self.expected_conf_d, found
+ )
def test_all_config(self):
entry = NETWORK_CONFIGS["all"]
found = self._render_and_read(network_config=yaml.load(entry["yaml"]))
- self._compare_files_to_expected(entry[self.expected_name], found)
+ self._compare_files_to_expected(
+ entry[self.expected_name], self.expected_conf_d, found
+ )
self.assertNotIn(
"WARNING: Network config: ignoring eth0.101 device-level mtu",
self.logs.getvalue(),
@@ -5841,12 +5878,16 @@ class TestNetworkManagerRendering(CiTestCase):
def test_small_config(self):
entry = NETWORK_CONFIGS["small"]
found = self._render_and_read(network_config=yaml.load(entry["yaml"]))
- self._compare_files_to_expected(entry[self.expected_name], found)
+ self._compare_files_to_expected(
+ entry[self.expected_name], self.expected_conf_d, found
+ )
def test_v4_and_v6_static_config(self):
entry = NETWORK_CONFIGS["v4_and_v6_static"]
found = self._render_and_read(network_config=yaml.load(entry["yaml"]))
- self._compare_files_to_expected(entry[self.expected_name], found)
+ self._compare_files_to_expected(
+ entry[self.expected_name], self.expected_conf_d, found
+ )
expected_msg = (
"WARNING: Network config: ignoring iface0 device-level mtu:8999"
" because ipv4 subnet-level mtu:9000 provided."
@@ -5856,41 +5897,55 @@ class TestNetworkManagerRendering(CiTestCase):
def test_dhcpv6_only_config(self):
entry = NETWORK_CONFIGS["dhcpv6_only"]
found = self._render_and_read(network_config=yaml.load(entry["yaml"]))
- self._compare_files_to_expected(entry[self.expected_name], found)
+ self._compare_files_to_expected(
+ entry[self.expected_name], self.expected_conf_d, found
+ )
def test_simple_render_ipv6_slaac(self):
entry = NETWORK_CONFIGS["ipv6_slaac"]
found = self._render_and_read(network_config=yaml.load(entry["yaml"]))
- self._compare_files_to_expected(entry[self.expected_name], found)
+ self._compare_files_to_expected(
+ entry[self.expected_name], self.expected_conf_d, found
+ )
def test_dhcpv6_stateless_config(self):
entry = NETWORK_CONFIGS["dhcpv6_stateless"]
found = self._render_and_read(network_config=yaml.load(entry["yaml"]))
- self._compare_files_to_expected(entry[self.expected_name], found)
+ self._compare_files_to_expected(
+ entry[self.expected_name], self.expected_conf_d, found
+ )
def test_wakeonlan_disabled_config_v2(self):
entry = NETWORK_CONFIGS["wakeonlan_disabled"]
found = self._render_and_read(
network_config=yaml.load(entry["yaml_v2"])
)
- self._compare_files_to_expected(entry[self.expected_name], found)
+ self._compare_files_to_expected(
+ entry[self.expected_name], self.expected_conf_d, found
+ )
def test_wakeonlan_enabled_config_v2(self):
entry = NETWORK_CONFIGS["wakeonlan_enabled"]
found = self._render_and_read(
network_config=yaml.load(entry["yaml_v2"])
)
- self._compare_files_to_expected(entry[self.expected_name], found)
+ self._compare_files_to_expected(
+ entry[self.expected_name], self.expected_conf_d, found
+ )
def test_render_v4_and_v6(self):
entry = NETWORK_CONFIGS["v4_and_v6"]
found = self._render_and_read(network_config=yaml.load(entry["yaml"]))
- self._compare_files_to_expected(entry[self.expected_name], found)
+ self._compare_files_to_expected(
+ entry[self.expected_name], self.expected_conf_d, found
+ )
def test_render_v6_and_v4(self):
entry = NETWORK_CONFIGS["v6_and_v4"]
found = self._render_and_read(network_config=yaml.load(entry["yaml"]))
- self._compare_files_to_expected(entry[self.expected_name], found)
+ self._compare_files_to_expected(
+ entry[self.expected_name], self.expected_conf_d, found
+ )
@mock.patch(
--
2.37.3
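
The drop-in above lands at /etc/NetworkManager/conf.d/30-cloud-init-ip6-addr-gen-mode.conf. Per the commit message, an administrator who prefers NetworkManager's stable-privacy behaviour can override the cloud-init default through a connection default in /etc/NetworkManager/NetworkManager.conf; an illustrative snippet (not shipped by cloud-init; see NetworkManager.conf(5) for the exact precedence rules) would be:

[connection]
# prefer RFC 7217 stable-privacy addresses instead of cloud-init's EUI64 default
ipv6.addr-gen-mode=stable-privacy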

View File

@ -0,0 +1,102 @@
From f7aaef405cd87d7d969f28401f3a4a7538d57c76 Mon Sep 17 00:00:00 2001
From: Ani Sinha <anisinha@redhat.com>
Date: Thu, 4 May 2023 15:34:43 +0530
Subject: [PATCH 1/7] Revert "Manual revert "Use Network-Manager and Netplan as
default renderers for RHEL and Fedora (#1465)""
RH-Author: Ani Sinha <None>
RH-MergeRequest: 103: [RHEL8] Support configuring network by NM keyfiles
RH-Bugzilla: 2219528
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [1/7] 65838b451e21f92cf92d2d4967015c48816f82f9
This reverts commit 0616dbd3f523395b619960b67b3b65c2f0ea15f4.
This is patch 1 of the two patches that re-enable the NM renderer. This change
can be ignored while rebasing to the latest upstream.
X-downstream-only: true
Signed-off-by: Ani Sinha <anisinha@redhat.com>
---
cloudinit/net/renderers.py | 1 +
config/cloud.cfg.tmpl | 3 +++
doc/rtd/reference/network-config.rst | 16 ++++++++++++++--
3 files changed, 18 insertions(+), 2 deletions(-)
diff --git a/cloudinit/net/renderers.py b/cloudinit/net/renderers.py
index c92b9dcf..022ff938 100644
--- a/cloudinit/net/renderers.py
+++ b/cloudinit/net/renderers.py
@@ -28,6 +28,7 @@ DEFAULT_PRIORITY = [
"eni",
"sysconfig",
"netplan",
+ "network-manager",
"freebsd",
"netbsd",
"openbsd",
diff --git a/config/cloud.cfg.tmpl b/config/cloud.cfg.tmpl
index 12f32c51..7238c102 100644
--- a/config/cloud.cfg.tmpl
+++ b/config/cloud.cfg.tmpl
@@ -381,6 +381,9 @@ system_info:
{% elif variant in ["dragonfly"] %}
network:
renderers: ['freebsd']
+{% elif variant in ["fedora"] or is_rhel %}
+ network:
+ renderers: ['netplan', 'network-manager', 'networkd', 'sysconfig', 'eni']
{% elif variant == "openmandriva" %}
network:
renderers: ['network-manager', 'networkd']
diff --git a/doc/rtd/reference/network-config.rst b/doc/rtd/reference/network-config.rst
index bc52afa5..ea331f1c 100644
--- a/doc/rtd/reference/network-config.rst
+++ b/doc/rtd/reference/network-config.rst
@@ -176,6 +176,16 @@ this state, ``cloud-init`` delegates rendering of the configuration to
distro-supported formats. The following ``renderers`` are supported in
``cloud-init``:
+NetworkManager
+--------------
+
+`NetworkManager`_ is the standard Linux network configuration tool suite. It
+supports a wide range of networking setups. Configuration is typically stored
+in :file:`/etc/NetworkManager`.
+
+It is the default for a number of Linux distributions; notably Fedora,
+CentOS/RHEL, and their derivatives.
+
ENI
---
@@ -213,6 +223,7 @@ preference) is as follows:
- ENI
- Sysconfig
- Netplan
+- NetworkManager
- FreeBSD
- NetBSD
- OpenBSD
@@ -223,6 +234,7 @@ preference) is as follows:
- **ENI**: using ``ifup``, ``ifdown`` to manage device setup/teardown
- **Netplan**: using ``netplan apply`` to manage device setup/teardown
+- **NetworkManager**: using ``nmcli`` to manage device setup/teardown
- **Networkd**: using ``ip`` to manage device setup/teardown
When applying the policy, ``cloud-init`` checks if the current instance has the
@@ -232,8 +244,8 @@ supplying an updated configuration in cloud-config. ::
system_info:
network:
- renderers: ['netplan', 'eni', 'sysconfig', 'freebsd', 'netbsd', 'openbsd']
- activators: ['eni', 'netplan', 'networkd']
+ renderers: ['netplan', 'network-manager', 'eni', 'sysconfig', 'freebsd', 'netbsd', 'openbsd']
+ activators: ['eni', 'netplan', 'network-manager', 'networkd']
Network configuration tools
===========================
--
2.39.3

File diff suppressed because it is too large

View File

@ -0,0 +1,63 @@
From fcd4f7c99e866abb93d0a56f5967b35dbec4088c Mon Sep 17 00:00:00 2001
From: Ani Sinha <anisinha@redhat.com>
Date: Fri, 7 Jul 2023 16:05:48 +0530
Subject: [PATCH 06/11] Revert "limit permissions on def_log_file"
This reverts commit 1308991156950833f62ec1464b1aef3673864c02.
The patch being reverted does not appear to be doing anything at all.
X-downstream-only: true
Signed-off-by: Ani Sinha <anisinha@redhat.com>
---
cloudinit/settings.py | 1 -
cloudinit/stages.py | 1 -
doc/examples/cloud-config.txt | 4 ----
3 files changed, 6 deletions(-)
diff --git a/cloudinit/settings.py b/cloudinit/settings.py
index 88aac6be..a36c518d 100644
--- a/cloudinit/settings.py
+++ b/cloudinit/settings.py
@@ -52,7 +52,6 @@ CFG_BUILTIN = {
"None",
],
"def_log_file": "/var/log/cloud-init.log",
- "def_log_file_mode": 0o600,
"log_cfgs": [],
"syslog_fix_perms": [],
"mount_default_fields": [None, None, "auto", "defaults,nofail", "0", "2"],
diff --git a/cloudinit/stages.py b/cloudinit/stages.py
index 1326d205..21f30a1f 100644
--- a/cloudinit/stages.py
+++ b/cloudinit/stages.py
@@ -202,7 +202,6 @@ class Init:
def _initialize_filesystem(self):
util.ensure_dirs(self._initial_subdirs())
log_file = util.get_cfg_option_str(self.cfg, "def_log_file")
- log_file_mode = util.get_cfg_option_int(self.cfg, "def_log_file_mode")
if log_file:
# At this point the log file should have already been created
# in the setupLogging function of log.py
diff --git a/doc/examples/cloud-config.txt b/doc/examples/cloud-config.txt
index b6d16c9c..15d788f3 100644
--- a/doc/examples/cloud-config.txt
+++ b/doc/examples/cloud-config.txt
@@ -383,14 +383,10 @@ timezone: US/Eastern
# if syslog_fix_perms is a list, it will iterate through and use the
# first pair that does not raise error.
#
-# 'def_log_file' will be created with mode 'def_log_file_mode', which
-# is specified as a numeric value and defaults to 0600.
-#
# the default values are '/var/log/cloud-init.log' and 'syslog:adm'
# the value of 'def_log_file' should match what is configured in logging
# if either is empty, then no change of ownership will be done
def_log_file: /var/log/my-logging-file.log
-def_log_file_mode: 0600
syslog_fix_perms: syslog:root
# you can set passwords for a user or multiple users
--
2.39.3

View File

@ -1,47 +0,0 @@
From 0eeec94882779de76c08b1a7faf862e22f21f242 Mon Sep 17 00:00:00 2001
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Date: Fri, 14 Jan 2022 16:42:46 +0100
Subject: [PATCH 5/6] Revert unnecesary lcase in ds-identify (#978)
RH-Author: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RH-MergeRequest: 44: Datasource for VMware
RH-Commit: [5/6] f7385c15cf17a9c4a2fa15b29afd1b8a96b24d1e
RH-Bugzilla: 2026587
RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com>
RH-Acked-by: Eduardo Otubo <otubo@redhat.com>
commit f516a7d37c1654addc02485e681b4358d7e7c0db
Author: Andrew Kutz <101085+akutz@users.noreply.github.com>
Date: Fri Aug 13 14:30:55 2021 -0500
Revert unnecesary lcase in ds-identify (#978)
This patch reverts an unnecessary lcase optimization in the
ds-identify script. SystemD documents the values produced by
the systemd-detect-virt command are lower case, and the mapping
table used by the FreeBSD check is also lower-case.
The optimization added two new forked processes, needlessly
causing overhead.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
tools/ds-identify | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/ds-identify b/tools/ds-identify
index 0e12298f..7b782462 100755
--- a/tools/ds-identify
+++ b/tools/ds-identify
@@ -449,7 +449,7 @@ detect_virt() {
read_virt() {
cached "$DI_VIRT" && return 0
detect_virt
- DI_VIRT="$(echo "${_RET}" | tr '[:upper:]' '[:lower:]')"
+ DI_VIRT="${_RET}"
}
is_container() {
--
2.27.0

View File

@ -0,0 +1,44 @@
From c33a3f27e449371e36f19269f81883c5a50131bb Mon Sep 17 00:00:00 2001
From: Ani Sinha <anisinha@redhat.com>
Date: Thu, 8 Jun 2023 03:29:13 +0530
Subject: [PATCH 7/7] Set default renderer as sysconfig for centos/rhel (#4165)
RH-Author: Ani Sinha <None>
RH-MergeRequest: 103: [RHEL8] Support configuring network by NM keyfiles
RH-Bugzilla: 2219528
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [7/7] aec68bb518c82bfd6b67fbe89b72bbda81c01cf9
Currently, NetworkManager is disabled on c9s, so sysconfig is used as the primary renderer for network configuration. We do not want to change this for c9s even when the network-manager renderer is re-enabled, as that would be a significant behaviour change for cloud-init in the CentOS 9 stream.
This change bumps up the priority of the sysconfig renderer so that it is used as the primary renderer on c9s and on other downstream distributions derived from it. In the next major CentOS Stream release, we may use NetworkManager as the default renderer and make changes accordingly.
RHBZ: 2209349
Signed-off-by: Ani Sinha <anisinha@redhat.com>
(cherry picked from commit a1f375095bd0ac8628c4fdc79538dc177bb9ff99)
---
config/cloud.cfg.tmpl | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/config/cloud.cfg.tmpl b/config/cloud.cfg.tmpl
index 7238c102..020340f9 100644
--- a/config/cloud.cfg.tmpl
+++ b/config/cloud.cfg.tmpl
@@ -381,9 +381,12 @@ system_info:
{% elif variant in ["dragonfly"] %}
network:
renderers: ['freebsd']
-{% elif variant in ["fedora"] or is_rhel %}
+{% elif variant in ["fedora"] %}
network:
renderers: ['netplan', 'network-manager', 'networkd', 'sysconfig', 'eni']
+{% elif is_rhel %}
+ network:
+ renderers: ['sysconfig', 'eni', 'netplan', 'network-manager', 'networkd' ]
{% elif variant == "openmandriva" %}
network:
renderers: ['network-manager', 'networkd']
--
2.39.3
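
With the template change above, a RHEL-family image ends up with roughly this block in its rendered /etc/cloud/cloud.cfg (Jinja conditionals resolved; shown here for orientation only):

system_info:
  network:
    renderers: ['sysconfig', 'eni', 'netplan', 'network-manager', 'networkd']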

View File

@ -1,97 +0,0 @@
From ded01bd47c65636e59dc332d06fb8acb982ec677 Mon Sep 17 00:00:00 2001
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Date: Fri, 14 Jan 2022 16:41:52 +0100
Subject: [PATCH 4/6] Update dscheck_VMware's rpctool check (#970)
RH-Author: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RH-MergeRequest: 44: Datasource for VMware
RH-Commit: [4/6] 509f68596f2d8f32027677f756b9d81e6a507ff1
RH-Bugzilla: 2026587
RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com>
RH-Acked-by: Eduardo Otubo <otubo@redhat.com>
commit 7781dec3306e9467f216cfcb36b7e10a8b38547a
Author: Shreenidhi Shedi <53473811+sshedi@users.noreply.github.com>
Date: Fri Aug 13 00:40:39 2021 +0530
Update dscheck_VMware's rpctool check (#970)
This patch updates the dscheck_VMware function's use of "vmware-rpctool"
when checking to see if a "guestinfo" property is set.
Because a successful exit code can occur even if there is an empty
string returned, it is possible that the VMware datasource will be
loaded as a false-positive. This patch ensures that in addition to
validating the exit code, the emitted output is also examined to ensure
a non-empty value is returned by rpctool before returning "${DS_FOUND}"
from "dscheck_VMware()".
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
tools/ds-identify | 15 +++++++++------
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/tools/ds-identify b/tools/ds-identify
index c01eae3d..0e12298f 100755
--- a/tools/ds-identify
+++ b/tools/ds-identify
@@ -141,6 +141,7 @@ error() {
debug 0 "$@"
stderr "$@"
}
+
warn() {
set -- "WARN:" "$@"
debug 0 "$@"
@@ -344,7 +345,6 @@ geom_label_status_as() {
return $ret
}
-
read_fs_info_freebsd() {
local oifs="$IFS" line="" delim=","
local ret=0 labels="" dev="" label="" ftype="" isodevs=""
@@ -404,7 +404,6 @@ cached() {
[ -n "$1" ] && _RET="$1" && return || return 1
}
-
detect_virt() {
local virt="${UNAVAILABLE}" r="" out=""
if [ -d /run/systemd ]; then
@@ -450,7 +449,7 @@ detect_virt() {
read_virt() {
cached "$DI_VIRT" && return 0
detect_virt
- DI_VIRT=${_RET}
+ DI_VIRT="$(echo "${_RET}" | tr '[:upper:]' '[:lower:]')"
}
is_container() {
@@ -1370,16 +1369,20 @@ vmware_has_rpctool() {
command -v vmware-rpctool >/dev/null 2>&1
}
+vmware_rpctool_guestinfo() {
+ vmware-rpctool "info-get guestinfo.${1}" 2>/dev/null | grep "[[:alnum:]]"
+}
+
vmware_rpctool_guestinfo_metadata() {
- vmware-rpctool "info-get guestinfo.metadata"
+ vmware_rpctool_guestinfo "metadata"
}
vmware_rpctool_guestinfo_userdata() {
- vmware-rpctool "info-get guestinfo.userdata"
+ vmware_rpctool_guestinfo "userdata"
}
vmware_rpctool_guestinfo_vendordata() {
- vmware-rpctool "info-get guestinfo.vendordata"
+ vmware_rpctool_guestinfo "vendordata"
}
dscheck_VMware() {
--
2.27.0
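
The ds-identify change is shell, but the same idea can be sketched in Python for clarity: a guestinfo property only counts as present when vmware-rpctool both exits 0 and prints a non-empty value, avoiding the false-positive case described above (the wrapper below is illustrative, not cloud-init code):

import subprocess
from typing import Optional

def guestinfo(prop: str) -> Optional[str]:
    try:
        result = subprocess.run(
            ["vmware-rpctool", "info-get guestinfo.%s" % prop],
            capture_output=True, text=True, check=False,
        )
    except FileNotFoundError:
        return None  # vmware-rpctool is not installed
    value = result.stdout.strip()
    # Exit code alone is not enough: an empty reply must not count as "found".
    return value if result.returncode == 0 and value else None

ds_found = any(guestinfo(p) for p in ("metadata", "userdata", "vendordata"))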

View File

@ -1,470 +0,0 @@
From 6e79106a09a0d142915da1fb48640575bb4bfe08 Mon Sep 17 00:00:00 2001
From: Anh Vo <anhvo@microsoft.com>
Date: Tue, 13 Apr 2021 17:39:39 -0400
Subject: [PATCH 3/7] azure: Removing ability to invoke walinuxagent (#799)
RH-Author: Eduardo Otubo <otubo@redhat.com>
RH-MergeRequest: 45: Add support for userdata on Azure from IMDS
RH-Commit: [3/7] f5e98665bf2093edeeccfcd95b47df2e44a40536
RH-Bugzilla: 2023940
RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com>
Invoking walinuxagent from within cloud-init is no longer
supported/necessary
---
cloudinit/sources/DataSourceAzure.py | 137 ++++--------------
doc/rtd/topics/datasources/azure.rst | 62 ++------
tests/unittests/test_datasource/test_azure.py | 97 -------------
3 files changed, 35 insertions(+), 261 deletions(-)
diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
index de1452ce..020b7006 100755
--- a/cloudinit/sources/DataSourceAzure.py
+++ b/cloudinit/sources/DataSourceAzure.py
@@ -381,53 +381,6 @@ class DataSourceAzure(sources.DataSource):
util.logexc(LOG, "handling set_hostname failed")
return False
- @azure_ds_telemetry_reporter
- def get_metadata_from_agent(self):
- temp_hostname = self.metadata.get('local-hostname')
- agent_cmd = self.ds_cfg['agent_command']
- LOG.debug("Getting metadata via agent. hostname=%s cmd=%s",
- temp_hostname, agent_cmd)
-
- self.bounce_network_with_azure_hostname()
-
- try:
- invoke_agent(agent_cmd)
- except subp.ProcessExecutionError:
- # claim the datasource even if the command failed
- util.logexc(LOG, "agent command '%s' failed.",
- self.ds_cfg['agent_command'])
-
- ddir = self.ds_cfg['data_dir']
-
- fp_files = []
- key_value = None
- for pk in self.cfg.get('_pubkeys', []):
- if pk.get('value', None):
- key_value = pk['value']
- LOG.debug("SSH authentication: using value from fabric")
- else:
- bname = str(pk['fingerprint'] + ".crt")
- fp_files += [os.path.join(ddir, bname)]
- LOG.debug("SSH authentication: "
- "using fingerprint from fabric")
-
- with events.ReportEventStack(
- name="waiting-for-ssh-public-key",
- description="wait for agents to retrieve SSH keys",
- parent=azure_ds_reporter):
- # wait very long for public SSH keys to arrive
- # https://bugs.launchpad.net/cloud-init/+bug/1717611
- missing = util.log_time(logfunc=LOG.debug,
- msg="waiting for SSH public key files",
- func=util.wait_for_files,
- args=(fp_files, 900))
- if len(missing):
- LOG.warning("Did not find files, but going on: %s", missing)
-
- metadata = {}
- metadata['public-keys'] = key_value or pubkeys_from_crt_files(fp_files)
- return metadata
-
def _get_subplatform(self):
"""Return the subplatform metadata source details."""
if self.seed.startswith('/dev'):
@@ -1354,35 +1307,32 @@ class DataSourceAzure(sources.DataSource):
On failure, returns False.
"""
- if self.ds_cfg['agent_command'] == AGENT_START_BUILTIN:
- self.bounce_network_with_azure_hostname()
+ self.bounce_network_with_azure_hostname()
- pubkey_info = None
- try:
- raise KeyError(
- "Not using public SSH keys from IMDS"
- )
- # pylint:disable=unreachable
- public_keys = self.metadata['imds']['compute']['publicKeys']
- LOG.debug(
- 'Successfully retrieved %s key(s) from IMDS',
- len(public_keys)
- if public_keys is not None
- else 0
- )
- except KeyError:
- LOG.debug(
- 'Unable to retrieve SSH keys from IMDS during '
- 'negotiation, falling back to OVF'
- )
- pubkey_info = self.cfg.get('_pubkeys', None)
-
- metadata_func = partial(get_metadata_from_fabric,
- fallback_lease_file=self.
- dhclient_lease_file,
- pubkey_info=pubkey_info)
- else:
- metadata_func = self.get_metadata_from_agent
+ pubkey_info = None
+ try:
+ raise KeyError(
+ "Not using public SSH keys from IMDS"
+ )
+ # pylint:disable=unreachable
+ public_keys = self.metadata['imds']['compute']['publicKeys']
+ LOG.debug(
+ 'Successfully retrieved %s key(s) from IMDS',
+ len(public_keys)
+ if public_keys is not None
+ else 0
+ )
+ except KeyError:
+ LOG.debug(
+ 'Unable to retrieve SSH keys from IMDS during '
+ 'negotiation, falling back to OVF'
+ )
+ pubkey_info = self.cfg.get('_pubkeys', None)
+
+ metadata_func = partial(get_metadata_from_fabric,
+ fallback_lease_file=self.
+ dhclient_lease_file,
+ pubkey_info=pubkey_info)
LOG.debug("negotiating with fabric via agent command %s",
self.ds_cfg['agent_command'])
@@ -1617,33 +1567,6 @@ def perform_hostname_bounce(hostname, cfg, prev_hostname):
return True
-@azure_ds_telemetry_reporter
-def crtfile_to_pubkey(fname, data=None):
- pipeline = ('openssl x509 -noout -pubkey < "$0" |'
- 'ssh-keygen -i -m PKCS8 -f /dev/stdin')
- (out, _err) = subp.subp(['sh', '-c', pipeline, fname],
- capture=True, data=data)
- return out.rstrip()
-
-
-@azure_ds_telemetry_reporter
-def pubkeys_from_crt_files(flist):
- pubkeys = []
- errors = []
- for fname in flist:
- try:
- pubkeys.append(crtfile_to_pubkey(fname))
- except subp.ProcessExecutionError:
- errors.append(fname)
-
- if errors:
- report_diagnostic_event(
- "failed to convert the crt files to pubkey: %s" % errors,
- logger_func=LOG.warning)
-
- return pubkeys
-
-
@azure_ds_telemetry_reporter
def write_files(datadir, files, dirmode=None):
@@ -1672,16 +1595,6 @@ def write_files(datadir, files, dirmode=None):
util.write_file(filename=fname, content=content, mode=0o600)
-@azure_ds_telemetry_reporter
-def invoke_agent(cmd):
- # this is a function itself to simplify patching it for test
- if cmd:
- LOG.debug("invoking agent: %s", cmd)
- subp.subp(cmd, shell=(not isinstance(cmd, list)))
- else:
- LOG.debug("not invoking agent")
-
-
def find_child(node, filter_func):
ret = []
if not node.hasChildNodes():
diff --git a/doc/rtd/topics/datasources/azure.rst b/doc/rtd/topics/datasources/azure.rst
index e04c3a33..ad9f2236 100644
--- a/doc/rtd/topics/datasources/azure.rst
+++ b/doc/rtd/topics/datasources/azure.rst
@@ -5,28 +5,6 @@ Azure
This datasource finds metadata and user-data from the Azure cloud platform.
-walinuxagent
-------------
-walinuxagent has several functions within images. For cloud-init
-specifically, the relevant functionality it performs is to register the
-instance with the Azure cloud platform at boot so networking will be
-permitted. For more information about the other functionality of
-walinuxagent, see `Azure's documentation
-<https://github.com/Azure/WALinuxAgent#introduction>`_ for more details.
-(Note, however, that only one of walinuxagent's provisioning and cloud-init
-should be used to perform instance customisation.)
-
-If you are configuring walinuxagent yourself, you will want to ensure that you
-have `Provisioning.UseCloudInit
-<https://github.com/Azure/WALinuxAgent#provisioningusecloudinit>`_ set to
-``y``.
-
-
-Builtin Agent
--------------
-An alternative to using walinuxagent to register to the Azure cloud platform
-is to use the ``__builtin__`` agent command. This section contains more
-background on what that code path does, and how to enable it.
The Azure cloud platform provides initial data to an instance via an attached
CD formatted in UDF. That CD contains a 'ovf-env.xml' file that provides some
@@ -41,16 +19,6 @@ by calling a script in /etc/dhcp/dhclient-exit-hooks or a file in
'dhclient_hook' of cloud-init itself. This sub-command will write the client
information in json format to /run/cloud-init/dhclient.hook/<interface>.json.
-In order for cloud-init to leverage this method to find the endpoint, the
-cloud.cfg file must contain:
-
-.. sourcecode:: yaml
-
- datasource:
- Azure:
- set_hostname: False
- agent_command: __builtin__
-
If those files are not available, the fallback is to check the leases file
for the endpoint server (again option 245).
@@ -83,9 +51,6 @@ configuration (in ``/etc/cloud/cloud.cfg`` or ``/etc/cloud/cloud.cfg.d/``).
The settings that may be configured are:
- * **agent_command**: Either __builtin__ (default) or a command to run to getcw
- metadata. If __builtin__, get metadata from walinuxagent. Otherwise run the
- provided command to obtain metadata.
* **apply_network_config**: Boolean set to True to use network configuration
described by Azure's IMDS endpoint instead of fallback network config of
dhcp on eth0. Default is True. For Ubuntu 16.04 or earlier, default is
@@ -121,7 +86,6 @@ An example configuration with the default values is provided below:
datasource:
Azure:
- agent_command: __builtin__
apply_network_config: true
data_dir: /var/lib/waagent
dhclient_lease_file: /var/lib/dhcp/dhclient.eth0.leases
@@ -144,9 +108,7 @@ child of the ``LinuxProvisioningConfigurationSet`` (a sibling to ``UserName``)
If both ``UserData`` and ``CustomData`` are provided behavior is undefined on
which will be selected.
-In the example below, user-data provided is 'this is my userdata', and the
-datasource config provided is ``{"agent_command": ["start", "walinuxagent"]}``.
-That agent command will take affect as if it were specified in system config.
+In the example below, user-data provided is 'this is my userdata'
Example:
@@ -184,20 +146,16 @@ The hostname is provided to the instance in the ovf-env.xml file as
Whatever value the instance provides in its dhcp request will resolve in the
domain returned in the 'search' request.
-The interesting issue is that a generic image will already have a hostname
-configured. The ubuntu cloud images have 'ubuntu' as the hostname of the
-system, and the initial dhcp request on eth0 is not guaranteed to occur after
-the datasource code has been run. So, on first boot, that initial value will
-be sent in the dhcp request and *that* value will resolve.
-
-In order to make the ``HostName`` provided in the ovf-env.xml resolve, a
-dhcp request must be made with the new value. Walinuxagent (in its current
-version) handles this by polling the state of hostname and bouncing ('``ifdown
-eth0; ifup eth0``' the network interface if it sees that a change has been
-made.
+A generic image will already have a hostname configured. The ubuntu
+cloud images have 'ubuntu' as the hostname of the system, and the
+initial dhcp request on eth0 is not guaranteed to occur after the
+datasource code has been run. So, on first boot, that initial value
+will be sent in the dhcp request and *that* value will resolve.
-cloud-init handles this by setting the hostname in the DataSource's 'get_data'
-method via '``hostname $HostName``', and then bouncing the interface. This
+In order to make the ``HostName`` provided in the ovf-env.xml resolve,
+a dhcp request must be made with the new value. cloud-init handles
+this by setting the hostname in the DataSource's 'get_data' method via
+'``hostname $HostName``', and then bouncing the interface. This
behavior can be configured or disabled in the datasource config. See
'Configuration' above.
diff --git a/tests/unittests/test_datasource/test_azure.py b/tests/unittests/test_datasource/test_azure.py
index dedebeb1..320fa857 100644
--- a/tests/unittests/test_datasource/test_azure.py
+++ b/tests/unittests/test_datasource/test_azure.py
@@ -638,17 +638,10 @@ scbus-1 on xpt0 bus 0
def dsdevs():
return data.get('dsdevs', [])
- def _invoke_agent(cmd):
- data['agent_invoked'] = cmd
-
def _wait_for_files(flist, _maxwait=None, _naplen=None):
data['waited'] = flist
return []
- def _pubkeys_from_crt_files(flist):
- data['pubkey_files'] = flist
- return ["pubkey_from: %s" % f for f in flist]
-
if data.get('ovfcontent') is not None:
populate_dir(os.path.join(self.paths.seed_dir, "azure"),
{'ovf-env.xml': data['ovfcontent']})
@@ -675,8 +668,6 @@ scbus-1 on xpt0 bus 0
self.apply_patches([
(dsaz, 'list_possible_azure_ds_devs', dsdevs),
- (dsaz, 'invoke_agent', _invoke_agent),
- (dsaz, 'pubkeys_from_crt_files', _pubkeys_from_crt_files),
(dsaz, 'perform_hostname_bounce', mock.MagicMock()),
(dsaz, 'get_hostname', mock.MagicMock()),
(dsaz, 'set_hostname', mock.MagicMock()),
@@ -765,7 +756,6 @@ scbus-1 on xpt0 bus 0
ret = dsrc.get_data()
self.m_is_platform_viable.assert_called_with(dsrc.seed_dir)
self.assertFalse(ret)
- self.assertNotIn('agent_invoked', data)
# Assert that for non viable platforms,
# there is no communication with the Azure datasource.
self.assertEqual(
@@ -789,7 +779,6 @@ scbus-1 on xpt0 bus 0
ret = dsrc.get_data()
self.m_is_platform_viable.assert_called_with(dsrc.seed_dir)
self.assertFalse(ret)
- self.assertNotIn('agent_invoked', data)
self.assertEqual(
1,
m_report_failure.call_count)
@@ -806,7 +795,6 @@ scbus-1 on xpt0 bus 0
1,
m_crawl_metadata.call_count)
self.assertFalse(ret)
- self.assertNotIn('agent_invoked', data)
def test_crawl_metadata_exception_should_report_failure_with_msg(self):
data = {}
@@ -1086,21 +1074,6 @@ scbus-1 on xpt0 bus 0
self.assertTrue(os.path.isdir(self.waagent_d))
self.assertEqual(stat.S_IMODE(os.stat(self.waagent_d).st_mode), 0o700)
- def test_user_cfg_set_agent_command_plain(self):
- # set dscfg in via plaintext
- # we must have friendly-to-xml formatted plaintext in yaml_cfg
- # not all plaintext is expected to work.
- yaml_cfg = "{agent_command: my_command}\n"
- cfg = yaml.safe_load(yaml_cfg)
- odata = {'HostName': "myhost", 'UserName': "myuser",
- 'dscfg': {'text': yaml_cfg, 'encoding': 'plain'}}
- data = {'ovfcontent': construct_valid_ovf_env(data=odata)}
-
- dsrc = self._get_ds(data)
- ret = self._get_and_setup(dsrc)
- self.assertTrue(ret)
- self.assertEqual(data['agent_invoked'], cfg['agent_command'])
-
@mock.patch('cloudinit.sources.DataSourceAzure.device_driver',
return_value=None)
def test_network_config_set_from_imds(self, m_driver):
@@ -1205,29 +1178,6 @@ scbus-1 on xpt0 bus 0
dsrc.get_data()
self.assertEqual('eastus2', dsrc.region)
- def test_user_cfg_set_agent_command(self):
- # set dscfg in via base64 encoded yaml
- cfg = {'agent_command': "my_command"}
- odata = {'HostName': "myhost", 'UserName': "myuser",
- 'dscfg': {'text': b64e(yaml.dump(cfg)),
- 'encoding': 'base64'}}
- data = {'ovfcontent': construct_valid_ovf_env(data=odata)}
-
- dsrc = self._get_ds(data)
- ret = self._get_and_setup(dsrc)
- self.assertTrue(ret)
- self.assertEqual(data['agent_invoked'], cfg['agent_command'])
-
- def test_sys_cfg_set_agent_command(self):
- sys_cfg = {'datasource': {'Azure': {'agent_command': '_COMMAND'}}}
- data = {'ovfcontent': construct_valid_ovf_env(data={}),
- 'sys_cfg': sys_cfg}
-
- dsrc = self._get_ds(data)
- ret = self._get_and_setup(dsrc)
- self.assertTrue(ret)
- self.assertEqual(data['agent_invoked'], '_COMMAND')
-
def test_sys_cfg_set_never_destroy_ntfs(self):
sys_cfg = {'datasource': {'Azure': {
'never_destroy_ntfs': 'user-supplied-value'}}}
@@ -1311,51 +1261,6 @@ scbus-1 on xpt0 bus 0
self.assertTrue(ret)
self.assertEqual(dsrc.userdata_raw, mydata.encode('utf-8'))
- def test_cfg_has_pubkeys_fingerprint(self):
- odata = {'HostName': "myhost", 'UserName': "myuser"}
- mypklist = [{'fingerprint': 'fp1', 'path': 'path1', 'value': ''}]
- pubkeys = [(x['fingerprint'], x['path'], x['value']) for x in mypklist]
- data = {'ovfcontent': construct_valid_ovf_env(data=odata,
- pubkeys=pubkeys)}
-
- dsrc = self._get_ds(data, agent_command=['not', '__builtin__'])
- ret = self._get_and_setup(dsrc)
- self.assertTrue(ret)
- for mypk in mypklist:
- self.assertIn(mypk, dsrc.cfg['_pubkeys'])
- self.assertIn('pubkey_from', dsrc.metadata['public-keys'][-1])
-
- def test_cfg_has_pubkeys_value(self):
- # make sure that provided key is used over fingerprint
- odata = {'HostName': "myhost", 'UserName': "myuser"}
- mypklist = [{'fingerprint': 'fp1', 'path': 'path1', 'value': 'value1'}]
- pubkeys = [(x['fingerprint'], x['path'], x['value']) for x in mypklist]
- data = {'ovfcontent': construct_valid_ovf_env(data=odata,
- pubkeys=pubkeys)}
-
- dsrc = self._get_ds(data, agent_command=['not', '__builtin__'])
- ret = self._get_and_setup(dsrc)
- self.assertTrue(ret)
-
- for mypk in mypklist:
- self.assertIn(mypk, dsrc.cfg['_pubkeys'])
- self.assertIn(mypk['value'], dsrc.metadata['public-keys'])
-
- def test_cfg_has_no_fingerprint_has_value(self):
- # test value is used when fingerprint not provided
- odata = {'HostName': "myhost", 'UserName': "myuser"}
- mypklist = [{'fingerprint': None, 'path': 'path1', 'value': 'value1'}]
- pubkeys = [(x['fingerprint'], x['path'], x['value']) for x in mypklist]
- data = {'ovfcontent': construct_valid_ovf_env(data=odata,
- pubkeys=pubkeys)}
-
- dsrc = self._get_ds(data, agent_command=['not', '__builtin__'])
- ret = self._get_and_setup(dsrc)
- self.assertTrue(ret)
-
- for mypk in mypklist:
- self.assertIn(mypk['value'], dsrc.metadata['public-keys'])
-
def test_default_ephemeral_configs_ephemeral_exists(self):
# make sure the ephemeral configs are correct if disk present
odata = {}
@@ -1919,8 +1824,6 @@ class TestAzureBounce(CiTestCase):
with_logs = True
def mock_out_azure_moving_parts(self):
- self.patches.enter_context(
- mock.patch.object(dsaz, 'invoke_agent'))
self.patches.enter_context(
mock.patch.object(dsaz.util, 'wait_for_files'))
self.patches.enter_context(
--
2.27.0

View File

@ -1,97 +0,0 @@
From 478709d7c157a085e3b2fee432e24978a3485234 Mon Sep 17 00:00:00 2001
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Date: Wed, 20 Oct 2021 16:28:42 +0200
Subject: [PATCH] cc_ssh.py: fix private key group owner and permissions
(#1070)
RH-Author: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RH-MergeRequest: 32: cc_ssh.py: fix private key group owner and permissions (#1070)
RH-Commit: [1/1] 0382c3f671ae0fa9cab23dfad1f636967b012148
RH-Bugzilla: 2013644
RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com>
RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com>
commit ee296ced9c0a61b1484d850b807c601bcd670ec1
Author: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Date: Tue Oct 19 21:32:10 2021 +0200
cc_ssh.py: fix private key group owner and permissions (#1070)
When default host keys are created by sshd-keygen (/etc/ssh/ssh_host_*_key)
in RHEL/CentOS/Fedora, openssh performs the following:
# create new keys
if ! $KEYGEN -q -t $KEYTYPE -f $KEY -C '' -N '' >&/dev/null; then
exit 1
fi
# sanitize permissions
/usr/bin/chgrp ssh_keys $KEY
/usr/bin/chmod 640 $KEY
/usr/bin/chmod 644 $KEY.pub
Note that the group ssh_keys exists only in RHEL/CentOS/Fedora.
Now that we disable sshd-keygen to allow only cloud-init to create
them, we miss the "sanitize permissions" part, where we set the group
owner as ssh_keys and the private key mode to 640.
According to https://bugzilla.redhat.com/show_bug.cgi?id=2013644#c8, failing
to set group ownership and permissions like openssh does makes the RHEL openscap
tool generate an error.
Signed-off-by: Emanuele Giuseppe Esposito eesposit@redhat.com
RHBZ: 2013644
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
cloudinit/config/cc_ssh.py | 7 +++++++
cloudinit/util.py | 14 ++++++++++++++
2 files changed, 21 insertions(+)
diff --git a/cloudinit/config/cc_ssh.py b/cloudinit/config/cc_ssh.py
index 05a16dbc..4e986c55 100755
--- a/cloudinit/config/cc_ssh.py
+++ b/cloudinit/config/cc_ssh.py
@@ -240,6 +240,13 @@ def handle(_name, cfg, cloud, log, _args):
try:
out, err = subp.subp(cmd, capture=True, env=lang_c)
sys.stdout.write(util.decode_binary(out))
+
+ gid = util.get_group_id("ssh_keys")
+ if gid != -1:
+ # perform same "sanitize permissions" as sshd-keygen
+ os.chown(keyfile, -1, gid)
+ os.chmod(keyfile, 0o640)
+ os.chmod(keyfile + ".pub", 0o644)
except subp.ProcessExecutionError as e:
err = util.decode_binary(e.stderr).lower()
if (e.exit_code == 1 and
diff --git a/cloudinit/util.py b/cloudinit/util.py
index 343976ad..fe37ae89 100644
--- a/cloudinit/util.py
+++ b/cloudinit/util.py
@@ -1831,6 +1831,20 @@ def chmod(path, mode):
os.chmod(path, real_mode)
+def get_group_id(grp_name: str) -> int:
+ """
+ Returns the group id of a group name, or -1 if no group exists
+
+ @param grp_name: the name of the group
+ """
+ gid = -1
+ try:
+ gid = grp.getgrnam(grp_name).gr_gid
+ except KeyError:
+ LOG.debug("Group %s is not a valid group name", grp_name)
+ return gid
+
+
def get_permissions(path: str) -> int:
"""
Returns the octal permissions of the file/folder pointed by the path,
--
2.27.0
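
A minimal standalone sketch of the same "sanitize permissions" step, for readers who want to apply or verify it outside cloud-init (the key path below is only an example):

import grp
import os

def sanitize_hostkey_perms(keyfile: str) -> None:
    try:
        gid = grp.getgrnam("ssh_keys").gr_gid  # group exists only on RHEL/CentOS/Fedora
    except KeyError:
        return  # no ssh_keys group: leave openssh defaults untouched
    os.chown(keyfile, -1, gid)           # keep owner, set group to ssh_keys
    os.chmod(keyfile, 0o640)             # private key readable by root and ssh_keys
    os.chmod(keyfile + ".pub", 0o644)    # public key stays world-readable

# sanitize_hostkey_perms("/etc/ssh/ssh_host_ed25519_key")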

View File

@ -1,87 +0,0 @@
From ea83e72b335e652b080fda66a075c0d1322ed6dc Mon Sep 17 00:00:00 2001
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Date: Tue, 7 Dec 2021 10:00:41 +0100
Subject: [PATCH] cloudinit/net: handle two different routes for the same ip
(#1124)
RH-Author: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RH-MergeRequest: 39: cloudinit/net: handle two different routes for the same ip (#1124)
RH-Commit: [1/1] 6810dc29ce786fbca96d2033386aa69c6ab65997
RH-Bugzilla: 2028028
RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com>
RH-Acked-by: Eduardo Otubo <otubo@redhat.com>
commit 0e25076b34fa995161b83996e866c0974cee431f
Author: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Date: Mon Dec 6 18:34:26 2021 +0100
cloudinit/net: handle two different routes for the same ip (#1124)
If we set a dhcp server side like this:
$ cat /var/tmp/cloud-init/cloud-init-dhcp-f0rie5tm/dhcp.leases
lease {
...
option classless-static-routes 31.169.254.169.254 0.0.0.0,31.169.254.169.254
10.112.143.127,22.10.112.140 0.0.0.0,0 10.112.140.1;
...
}
cloud-init fails to configure the routes via 'ip route add' because there are
two different routes for 169.254.169.254:
$ ip -4 route add 192.168.1.1/32 via 0.0.0.0 dev eth0
$ ip -4 route add 192.168.1.1/32 via 10.112.140.248 dev eth0
But NetworkManager can handle such scenario successfully as it uses "ip route append".
So change cloud-init to also use "ip route append" to fix the issue:
$ ip -4 route append 192.168.1.1/32 via 0.0.0.0 dev eth0
$ ip -4 route append 192.168.1.1/32 via 10.112.140.248 dev eth0
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RHBZ: #2003231
Conflicts:
cloudinit/net/tests/test_init.py: a mock call in
test_ephemeral_ipv4_network_with_rfc3442_static_routes is not
present downstream.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
cloudinit/net/__init__.py | 2 +-
cloudinit/net/tests/test_init.py | 4 ++--
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/cloudinit/net/__init__.py b/cloudinit/net/__init__.py
index 385b7bcc..003efa2a 100644
--- a/cloudinit/net/__init__.py
+++ b/cloudinit/net/__init__.py
@@ -1138,7 +1138,7 @@ class EphemeralIPv4Network(object):
if gateway != "0.0.0.0/0":
via_arg = ['via', gateway]
subp.subp(
- ['ip', '-4', 'route', 'add', net_address] + via_arg +
+ ['ip', '-4', 'route', 'append', net_address] + via_arg +
['dev', self.interface], capture=True)
self.cleanup_cmds.insert(
0, ['ip', '-4', 'route', 'del', net_address] + via_arg +
diff --git a/cloudinit/net/tests/test_init.py b/cloudinit/net/tests/test_init.py
index 946f8ee2..2350837b 100644
--- a/cloudinit/net/tests/test_init.py
+++ b/cloudinit/net/tests/test_init.py
@@ -719,10 +719,10 @@ class TestEphemeralIPV4Network(CiTestCase):
['ip', '-family', 'inet', 'link', 'set', 'dev', 'eth0', 'up'],
capture=True),
mock.call(
- ['ip', '-4', 'route', 'add', '169.254.169.254/32',
+ ['ip', '-4', 'route', 'append', '169.254.169.254/32',
'via', '192.168.2.1', 'dev', 'eth0'], capture=True),
mock.call(
- ['ip', '-4', 'route', 'add', '0.0.0.0/0',
+ ['ip', '-4', 'route', 'append', '0.0.0.0/0',
'via', '192.168.2.1', 'dev', 'eth0'], capture=True)]
expected_teardown_calls = [
mock.call(
--
2.27.0
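
As a quick illustration of the change, the command lists EphemeralIPv4Network now builds look like this (sketch only); with "append", two routes to the same destination can coexist, whereas a second "add" fails with "RTNETLINK answers: File exists":

def ip_route_cmds(routes, dev="eth0"):
    cmds = []
    for net_address, gateway in routes:
        via = [] if gateway == "0.0.0.0/0" else ["via", gateway]
        cmds.append(["ip", "-4", "route", "append", net_address] + via + ["dev", dev])
    return cmds

for cmd in ip_route_cmds([("169.254.169.254/32", "0.0.0.0"),
                          ("169.254.169.254/32", "10.112.140.248")]):
    print(" ".join(cmd))
# ip -4 route append 169.254.169.254/32 via 0.0.0.0 dev eth0
# ip -4 route append 169.254.169.254/32 via 10.112.140.248 dev eth0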

View File

@ -0,0 +1,35 @@
From 9f560fd70f64cbe1827e2e490206d245f3ac7812 Mon Sep 17 00:00:00 2001
From: Ani Sinha <anisinha@redhat.com>
Date: Fri, 7 Jul 2023 15:38:14 +0530
Subject: [PATCH 08/11] cosmetic: fix tox formatting
This is a cosmetic formatting change that makes tox happy.
X-downstream-only: true
fixes: 06b2d8279628eb5d0 ("include 'NOZEROCONF=yes' in /etc/sysconfig/network")
Signed-off-by: Ani Sinha <anisinha@redhat.com>
---
cloudinit/net/sysconfig.py | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
index 5bf3e7ca..421564ee 100644
--- a/cloudinit/net/sysconfig.py
+++ b/cloudinit/net/sysconfig.py
@@ -1028,9 +1028,9 @@ class Renderer(renderer.Renderer):
for line in util.load_file(sysconfig_path, quiet=True).split("\n"):
if "cloud-init" in line:
break
- if not line.startswith(("NETWORKING=",
- "IPV6_AUTOCONF=",
- "NETWORKING_IPV6=")):
+ if not line.startswith(
+ ("NETWORKING=", "IPV6_AUTOCONF=", "NETWORKING_IPV6=")
+ ):
netcfg.append(line)
# Now generate the cloud-init portion of sysconfig/network
netcfg.extend([_make_header(), "NETWORKING=yes"])
--
2.39.3

View File

@ -1,173 +0,0 @@
From 005d0a98c69d154a00e9fd599c7fbe5aef73c933 Mon Sep 17 00:00:00 2001
From: Amy Chen <xiachen@redhat.com>
Date: Thu, 25 Nov 2021 18:30:48 +0800
Subject: [PATCH] fix error on upgrade caused by new vendordata2 attributes
RH-Author: xiachen <None>
RH-MergeRequest: 35: fix error on upgrade caused by new vendordata2 attributes
RH-Commit: [1/1] 9e00a7744838afbbdc5eb14628b7f572beba9f19
RH-Bugzilla: 2021538
RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com>
RH-Acked-by: Eduardo Otubo <otubo@redhat.com>
RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
commit d132356cc361abef2d90d4073438f3ab759d5964
Author: James Falcon <TheRealFalcon@users.noreply.github.com>
Date: Mon Apr 19 11:31:28 2021 -0500
fix error on upgrade caused by new vendordata2 attributes (#869)
In #777, we added 'vendordata2' and 'vendordata2_raw' attributes to
the DataSource class, but didn't use the upgrade framework to deal
with an unpickle after upgrade. This commit adds the necessary
upgrade code.
Additionally, added a smaller-scope upgrade test to our integration
tests that will be run on every CI run so we catch these issues
immediately in the future.
LP: #1922739
Signed-off-by: Amy Chen <xiachen@redhat.com>
---
cloudinit/sources/__init__.py | 12 +++++++++++-
cloudinit/tests/test_upgrade.py | 4 ++++
tests/integration_tests/clouds.py | 4 ++--
tests/integration_tests/test_upgrade.py | 25 ++++++++++++++++++++++++-
4 files changed, 41 insertions(+), 4 deletions(-)
diff --git a/cloudinit/sources/__init__.py b/cloudinit/sources/__init__.py
index 1ad1880d..7d74f8d9 100644
--- a/cloudinit/sources/__init__.py
+++ b/cloudinit/sources/__init__.py
@@ -24,6 +24,7 @@ from cloudinit import util
from cloudinit.atomic_helper import write_json
from cloudinit.event import EventType
from cloudinit.filters import launch_index
+from cloudinit.persistence import CloudInitPickleMixin
from cloudinit.reporting import events
DSMODE_DISABLED = "disabled"
@@ -134,7 +135,7 @@ URLParams = namedtuple(
'URLParms', ['max_wait_seconds', 'timeout_seconds', 'num_retries'])
-class DataSource(metaclass=abc.ABCMeta):
+class DataSource(CloudInitPickleMixin, metaclass=abc.ABCMeta):
dsmode = DSMODE_NETWORK
default_locale = 'en_US.UTF-8'
@@ -196,6 +197,8 @@ class DataSource(metaclass=abc.ABCMeta):
# non-root users
sensitive_metadata_keys = ('merged_cfg', 'security-credentials',)
+ _ci_pkl_version = 1
+
def __init__(self, sys_cfg, distro, paths, ud_proc=None):
self.sys_cfg = sys_cfg
self.distro = distro
@@ -218,6 +221,13 @@ class DataSource(metaclass=abc.ABCMeta):
else:
self.ud_proc = ud_proc
+ def _unpickle(self, ci_pkl_version: int) -> None:
+ """Perform deserialization fixes for Paths."""
+ if not hasattr(self, 'vendordata2'):
+ self.vendordata2 = None
+ if not hasattr(self, 'vendordata2_raw'):
+ self.vendordata2_raw = None
+
def __str__(self):
return type_utils.obj_name(self)
diff --git a/cloudinit/tests/test_upgrade.py b/cloudinit/tests/test_upgrade.py
index f79a2536..fd3c5812 100644
--- a/cloudinit/tests/test_upgrade.py
+++ b/cloudinit/tests/test_upgrade.py
@@ -43,3 +43,7 @@ class TestUpgrade:
def test_blacklist_drivers_set_on_networking(self, previous_obj_pkl):
"""We always expect Networking.blacklist_drivers to be initialised."""
assert previous_obj_pkl.distro.networking.blacklist_drivers is None
+
+ def test_vendordata_exists(self, previous_obj_pkl):
+ assert previous_obj_pkl.vendordata2 is None
+ assert previous_obj_pkl.vendordata2_raw is None
diff --git a/tests/integration_tests/clouds.py b/tests/integration_tests/clouds.py
index 9527a413..1d0b9d83 100644
--- a/tests/integration_tests/clouds.py
+++ b/tests/integration_tests/clouds.py
@@ -100,14 +100,14 @@ class IntegrationCloud(ABC):
# Even if we're using the default key, it may still have a
# different name in the clouds, so we need to set it separately.
self.cloud_instance.key_pair.name = settings.KEYPAIR_NAME
- self._released_image_id = self._get_initial_image()
+ self.released_image_id = self._get_initial_image()
self.snapshot_id = None
@property
def image_id(self):
if self.snapshot_id:
return self.snapshot_id
- return self._released_image_id
+ return self.released_image_id
def emit_settings_to_log(self) -> None:
log.info(
diff --git a/tests/integration_tests/test_upgrade.py b/tests/integration_tests/test_upgrade.py
index c20cb3c1..48e0691b 100644
--- a/tests/integration_tests/test_upgrade.py
+++ b/tests/integration_tests/test_upgrade.py
@@ -1,4 +1,5 @@
import logging
+import os
import pytest
import time
from pathlib import Path
@@ -8,6 +9,8 @@ from tests.integration_tests.conftest import (
get_validated_source,
session_start_time,
)
+from tests.integration_tests.instances import CloudInitSource
+
log = logging.getLogger('integration_testing')
@@ -63,7 +66,7 @@ def test_upgrade(session_cloud: IntegrationCloud):
return # type checking doesn't understand that skip raises
launch_kwargs = {
- 'image_id': session_cloud._get_initial_image(),
+ 'image_id': session_cloud.released_image_id,
}
image = ImageSpecification.from_os_image()
@@ -93,6 +96,26 @@ def test_upgrade(session_cloud: IntegrationCloud):
instance.install_new_cloud_init(source, take_snapshot=False)
instance.execute('hostname something-else')
_restart(instance)
+ assert instance.execute('cloud-init status --wait --long').ok
_output_to_compare(instance, after_path, netcfg_path)
log.info('Wrote upgrade test logs to %s and %s', before_path, after_path)
+
+
+@pytest.mark.ci
+@pytest.mark.ubuntu
+def test_upgrade_package(session_cloud: IntegrationCloud):
+ if get_validated_source(session_cloud) != CloudInitSource.DEB_PACKAGE:
+ not_run_message = 'Test only supports upgrading to build deb'
+ if os.environ.get('TRAVIS'):
+ # If this isn't running on CI, we should know
+ pytest.fail(not_run_message)
+ else:
+ pytest.skip(not_run_message)
+
+ launch_kwargs = {'image_id': session_cloud.released_image_id}
+
+ with session_cloud.launch(launch_kwargs=launch_kwargs) as instance:
+ instance.install_deb()
+ instance.restart()
+ assert instance.execute('cloud-init status --wait --long').ok
--
2.27.0
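For context on the upgrade pattern the patch above relies on: the mixin gives datasources a hook that runs after an old pickle is loaded, so attributes added by newer releases can be backfilled before anything touches them. The sketch below is a self-contained illustration under that assumption; the class names are invented and it does not import the real cloudinit.persistence.CloudInitPickleMixin.

class VersionedPickleMixin:
    # Illustrative stand-in for a pickle-upgrade mixin: restore the saved
    # state, then let the subclass backfill attributes that did not exist
    # when the pickle was written.
    _ci_pkl_version = 0

    def __setstate__(self, state):
        self.__dict__.update(state)
        self._unpickle(state.get("_ci_pkl_version", 0))

    def _unpickle(self, ci_pkl_version: int) -> None:
        pass


class FakeDataSource(VersionedPickleMixin):
    _ci_pkl_version = 1

    def _unpickle(self, ci_pkl_version: int) -> None:
        # Pickles from releases that predate vendordata2 lack these fields.
        if not hasattr(self, "vendordata2"):
            self.vendordata2 = None
        if not hasattr(self, "vendordata2_raw"):
            self.vendordata2_raw = None


# Simulate unpickling an object written by an older release: the new
# attributes are present (and None) afterwards instead of raising.
old_state = {"userdata_raw": "old-field"}
ds = FakeDataSource.__new__(FakeDataSource)
ds.__setstate__(old_state)
assert ds.vendordata2 is None and ds.vendordata2_raw is None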

View File

@ -0,0 +1,183 @@
From 0de2584f99c49b5d22bc7d1d08070d53b8fc1b3b Mon Sep 17 00:00:00 2001
From: Ani Sinha <anisinha@redhat.com>
Date: Thu, 20 Jul 2023 23:56:01 +0530
Subject: [PATCH 11/11] logging: keep current file mode of log file if its
stricter than the new mode (#4250)
RH-Author: Ani Sinha <None>
RH-MergeRequest: 105: [RHEL 8.9] logging: keep current file mode of log file if its stricter than the new mode (#4250)
RH-Bugzilla: 2222501
RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [1/1] 2733073d4dd119e29d1cf227e787afa15c9f8991
By default, the cloud-init log file is created with mode 0o644, with the
`preserve_mode` parameter of `write_file()` set to False. This means that when
an existing log file is found, its mode is unconditionally reset to 0o644.
This can change the mode of the log file from a stricter current mode to a
less strict one (whenever the new mode 0o644 is less strict than the file's
existing mode).
To mitigate this, check the current mode of the log file and, if it is
stricter than the default new mode 0o644, preserve the current mode of the
file.
Fixes GH-4243
Signed-off-by: Ani Sinha <anisinha@redhat.com>
(cherry picked from commit a0e4ec15a1adffabd1c539879514eae4807c834c)
Signed-off-by: Ani Sinha <anisinha@redhat.com>
Conflicts:
tests/unittests/test_util.py
---
cloudinit/stages.py | 15 ++++++++++++++-
cloudinit/util.py | 23 +++++++++++++++++++++++
tests/unittests/test_stages.py | 23 ++++++++++++++++-------
tests/unittests/test_util.py | 24 ++++++++++++++++++++++++
4 files changed, 77 insertions(+), 8 deletions(-)
diff --git a/cloudinit/stages.py b/cloudinit/stages.py
index 21f30a1f..979179af 100644
--- a/cloudinit/stages.py
+++ b/cloudinit/stages.py
@@ -200,12 +200,25 @@ class Init:
self._initialize_filesystem()
def _initialize_filesystem(self):
+ mode = 0o640
+ fmode = None
+
util.ensure_dirs(self._initial_subdirs())
log_file = util.get_cfg_option_str(self.cfg, "def_log_file")
if log_file:
# At this point the log file should have already been created
# in the setupLogging function of log.py
- util.ensure_file(log_file, mode=0o640, preserve_mode=False)
+
+ try:
+ fmode = util.get_permissions(log_file)
+ except OSError:
+ pass
+
+ # if existing file mode fmode is stricter, do not change it.
+ if fmode and util.compare_permission(fmode, mode) < 0:
+ mode = fmode
+
+ util.ensure_file(log_file, mode, preserve_mode=False)
perms = self.cfg.get("syslog_fix_perms")
if not perms:
perms = {}
diff --git a/cloudinit/util.py b/cloudinit/util.py
index 8ba3e2b6..00892d6f 100644
--- a/cloudinit/util.py
+++ b/cloudinit/util.py
@@ -2087,6 +2087,29 @@ def safe_int(possible_int):
return None
+def compare_permission(mode1, mode2):
+ """Compare two file modes in octal.
+
+ If mode1 is less restrictive than mode2 return 1
+ If mode1 is more restrictive than mode2 return -1
+ If mode1 is same as mode2, return 0
+
+ The comparison starts from the permission of the
+ set of users in "others" and then works up to the
+ permission of "user" set.
+ """
+ # Convert modes to octal and reverse the last 3 digits
+ # so 0o640 would be become 0o046
+ mode1_oct = oct(mode1)[2:].rjust(3, "0")
+ mode2_oct = oct(mode2)[2:].rjust(3, "0")
+ m1 = int(mode1_oct[:-3] + mode1_oct[-3:][::-1], 8)
+ m2 = int(mode2_oct[:-3] + mode2_oct[-3:][::-1], 8)
+
+ # Then do a traditional cmp()
+ # https://docs.python.org/3.0/whatsnew/3.0.html#ordering-comparisons
+ return (m1 > m2) - (m1 < m2)
+
+
def chmod(path, mode):
real_mode = safe_int(mode)
if path and real_mode:
diff --git a/tests/unittests/test_stages.py b/tests/unittests/test_stages.py
index a61f9df9..831ea9f2 100644
--- a/tests/unittests/test_stages.py
+++ b/tests/unittests/test_stages.py
@@ -606,13 +606,22 @@ class TestInit_InitializeFilesystem:
# Assert we create it 0o640 by default if it doesn't already exist
assert 0o640 == stat.S_IMODE(log_file.stat().mode)
- def test_existing_file_permissions(self, init, tmpdir):
+ @pytest.mark.parametrize(
+ "set_perms,expected_perms",
+ [
+ (0o640, 0o640),
+ (0o606, 0o640),
+ (0o600, 0o600),
+ ],
+ )
+ def test_existing_file_permissions(
+ self, init, tmpdir, set_perms, expected_perms
+ ):
"""Test file permissions are set as expected.
- CIS Hardening requires 640 permissions. These permissions are
- currently hardcoded on every boot, but if there's ever a reason
- to change this, we need to then ensure that they
- are *not* set every boot.
+ CIS Hardening requires 640 permissions. If the file has looser
+ permissions, then hard code 640. If the file has tighter
+ permissions, then leave them as they are
See https://bugs.launchpad.net/cloud-init/+bug/1900837.
"""
@@ -620,9 +629,9 @@ class TestInit_InitializeFilesystem:
log_file.ensure()
# Use a mode that will never be made the default so this test will
# always be valid
- log_file.chmod(0o606)
+ log_file.chmod(set_perms)
init._cfg = {"def_log_file": str(log_file)}
init._initialize_filesystem()
- assert 0o640 == stat.S_IMODE(log_file.stat().mode)
+ assert expected_perms == stat.S_IMODE(log_file.stat().mode)
diff --git a/tests/unittests/test_util.py b/tests/unittests/test_util.py
index 07142a86..af96da05 100644
--- a/tests/unittests/test_util.py
+++ b/tests/unittests/test_util.py
@@ -3026,3 +3026,27 @@ class TestVersion:
)
def test_from_str(self, str_ver, cls_ver):
assert util.Version.from_str(str_ver) == cls_ver
+
+
+class TestComparePermissions:
+ @pytest.mark.parametrize(
+ "perm1,perm2,expected",
+ [
+ (0o777, 0o777, 0),
+ (0o000, 0o000, 0),
+ (0o421, 0o421, 0),
+ (0o1640, 0o1640, 0),
+ (0o1407, 0o1600, 1),
+ (0o1600, 0o1407, -1),
+ (0o407, 0o600, 1),
+ (0o600, 0o407, -1),
+ (0o007, 0o700, 1),
+ (0o700, 0o007, -1),
+ (0o077, 0o100, 1),
+ (0o644, 0o640, 1),
+ (0o640, 0o600, 1),
+ (0o600, 0o400, 1),
+ ],
+ )
+ def test_compare_permissions(self, perm1, perm2, expected):
+ assert util.compare_permission(perm1, perm2) == expected
--
2.39.3
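To make the octal comparison above concrete, here is a small worked example. The compare_permission() helper is copied from the diff; the loop mirrors the _initialize_filesystem() logic of keeping a stricter existing mode, with the candidate modes chosen just for illustration.

def compare_permission(mode1, mode2):
    # Same helper as added to cloudinit/util.py above: reverse the last three
    # octal digits so "others" is compared first, then group, then user.
    mode1_oct = oct(mode1)[2:].rjust(3, "0")
    mode2_oct = oct(mode2)[2:].rjust(3, "0")
    m1 = int(mode1_oct[:-3] + mode1_oct[-3:][::-1], 8)
    m2 = int(mode2_oct[:-3] + mode2_oct[-3:][::-1], 8)
    return (m1 > m2) - (m1 < m2)


default_mode = 0o640            # mode cloud-init would normally apply
for existing in (0o600, 0o640, 0o644):
    mode = default_mode
    # Mirrors the stages.py change: keep the existing mode if it is stricter.
    if compare_permission(existing, default_mode) < 0:
        mode = existing
    print(oct(existing), "->", oct(mode))
# 0o600 -> 0o600   (stricter existing mode is preserved)
# 0o640 -> 0o640
# 0o644 -> 0o640   (looser existing mode is tightened to the default)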

View File

@ -0,0 +1,71 @@
From 3b68f70013c84ae9efbc31aa35641b61041fd62a Mon Sep 17 00:00:00 2001
From: Ani Sinha <anisinha@redhat.com>
Date: Mon, 22 May 2023 22:06:28 +0530
Subject: [PATCH 5/7] net/sysconfig: enable sysconfig renderer if network
manager has ifcfg-rh plugin (#4132)
RH-Author: Ani Sinha <None>
RH-MergeRequest: 103: [RHEL8] Support configuring network by NM keyfiles
RH-Bugzilla: 2219528
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [5/7] 4d1602e39fbf85277e50a1fde046a0b528a18364
Some distributions, like RHEL, do not have the ifup and ifdown
scripts that traditionally handled ifcfg-eth* files. Instead, RHEL
uses NetworkManager with the ifcfg-rh plugin to handle ifcfg
files. Therefore, the sysconfig renderer should check for the
existence of the ifcfg-rh plugin, in addition to checking for the
ifup and ifdown scripts, when determining whether it can handle
ifcfg files. If either the plugin or the ifup/ifdown scripts are
present, the sysconfig renderer can be enabled.
fixes: #4131
RHBZ: 2194050
Signed-off-by: Ani Sinha <anisinha@redhat.com>
(cherry picked from commit 009dbf85a72a9077b2267d377b2ff46639fb3def)
---
cloudinit/net/sysconfig.py | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
index f7ac5898..5bf3e7ca 100644
--- a/cloudinit/net/sysconfig.py
+++ b/cloudinit/net/sysconfig.py
@@ -1,6 +1,7 @@
# This file is part of cloud-init. See LICENSE file for license information.
import copy
+import glob
import io
import os
import re
@@ -1058,7 +1059,25 @@ def _supported_vlan_names(rdev, vid):
def available(target=None):
if not util.system_info()["variant"] in KNOWN_DISTROS:
return False
+ if available_sysconfig(target):
+ return True
+ if available_nm_ifcfg_rh(target):
+ return True
+ return False
+
+
+def available_nm_ifcfg_rh(target=None):
+ # The ifcfg-rh plugin of NetworkManager is installed.
+ # NetworkManager can handle the ifcfg files.
+ return glob.glob(
+ subp.target_path(
+ target,
+ "usr/lib*/NetworkManager/*/libnm-settings-plugin-ifcfg-rh.so",
+ )
+ )
+
+def available_sysconfig(target=None):
expected = ["ifup", "ifdown"]
search = ["/sbin", "/usr/sbin"]
for p in expected:
--
2.39.3
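A quick standalone way to check the same condition on a host, using the glob pattern from the diff above; the helper name and the root parameter are invented for this sketch, and a non-empty match is taken to mean the ifcfg-rh plugin is installed.

import glob

def nm_ifcfg_rh_plugin_present(root=""):
    # Same pattern the renderer globs for: any matching shared object means
    # NetworkManager's ifcfg-rh plugin is installed and can handle ifcfg files.
    pattern = (
        root + "/usr/lib*/NetworkManager/*/libnm-settings-plugin-ifcfg-rh.so"
    )
    return bool(glob.glob(pattern))

print("ifcfg-rh plugin found:", nm_ifcfg_rh_plugin_present())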

View File

@ -0,0 +1,410 @@
From f3f9a6933ba2c348d0ccd92706b1c17655f91625 Mon Sep 17 00:00:00 2001
From: Ani Sinha <anisinha@redhat.com>
Date: Tue, 23 May 2023 20:38:31 +0530
Subject: [PATCH 6/7] network-manager: Set higher autoconnect priority for nm
keyfiles (#3671)
RH-Author: Ani Sinha <None>
RH-MergeRequest: 103: [RHEL8] Support configuring network by NM keyfiles
RH-Bugzilla: 2219528
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [6/7] f263baba1870ed035bd1662ddeb0ab5bcb6a8cd1
Keyfiles generated by cloud-init's NetworkManager renderer for network
interfaces can sometimes conflict with existing keyfiles that are left behind
by an upgrade or that are old user-generated keyfiles. When two such keyfiles
are present, the existing keyfile can take precedence over the
cloud-init-generated one, making the latter ineffective. Blindly removing the
old keyfile would not be correct either, since there would then be no way to
enforce a different interface configuration when one is needed.
This change adds an autoconnect-priority value to cloud-init-generated
keyfiles so that, in the default case, the cloud-init configuration takes
precedence over the existing old keyfile configuration. Priority values range
from 0 to 999. We set a value of 120: high enough that the cloud-init keyfile
takes precedence by default, but low enough that a user-generated keyfile can
still take precedence when needed, by using a value higher than the one used
by the cloud-init keyfile, between 121 and 999.
RHBZ: 2196231
Signed-off-by: Ani Sinha <anisinha@redhat.com>
(cherry picked from commit f663e94ac50bc518e694cbd167fdab216fcff029)
---
cloudinit/net/network_manager.py | 1 +
tests/unittests/cmd/devel/test_net_convert.py | 1 +
.../cloud-init-encc000.2653.nmconnection | 1 +
.../cloud-init-encc000.nmconnection | 1 +
.../cloud-init-zz-all-en.nmconnection | 1 +
.../cloud-init-zz-all-eth.nmconnection | 1 +
tests/unittests/test_net.py | 36 +++++++++++++++++++
7 files changed, 42 insertions(+)
diff --git a/cloudinit/net/network_manager.py b/cloudinit/net/network_manager.py
index 2752f52f..ca216928 100644
--- a/cloudinit/net/network_manager.py
+++ b/cloudinit/net/network_manager.py
@@ -43,6 +43,7 @@ class NMConnection:
self.config["connection"] = {
"id": f"cloud-init {con_id}",
"uuid": str(uuid.uuid5(CI_NM_UUID, con_id)),
+ "autoconnect-priority": "120",
}
# This is not actually used anywhere, but may be useful in future
diff --git a/tests/unittests/cmd/devel/test_net_convert.py b/tests/unittests/cmd/devel/test_net_convert.py
index 100aa8de..71654750 100644
--- a/tests/unittests/cmd/devel/test_net_convert.py
+++ b/tests/unittests/cmd/devel/test_net_convert.py
@@ -74,6 +74,7 @@ SAMPLE_NETWORK_MANAGER_CONTENT = """\
[connection]
id=cloud-init eth0
uuid=1dd9a779-d327-56e1-8454-c65e2556c12c
+autoconnect-priority=120
type=ethernet
interface-name=eth0
diff --git a/tests/unittests/net/artifacts/no_matching_mac/etc/NetworkManager/system-connections/cloud-init-encc000.2653.nmconnection b/tests/unittests/net/artifacts/no_matching_mac/etc/NetworkManager/system-connections/cloud-init-encc000.2653.nmconnection
index 80483d4f..f44485d2 100644
--- a/tests/unittests/net/artifacts/no_matching_mac/etc/NetworkManager/system-connections/cloud-init-encc000.2653.nmconnection
+++ b/tests/unittests/net/artifacts/no_matching_mac/etc/NetworkManager/system-connections/cloud-init-encc000.2653.nmconnection
@@ -3,6 +3,7 @@
[connection]
id=cloud-init encc000.2653
uuid=116aaf19-aabc-50ea-b480-e9aee18bda59
+autoconnect-priority=120
type=vlan
interface-name=encc000.2653
diff --git a/tests/unittests/net/artifacts/no_matching_mac/etc/NetworkManager/system-connections/cloud-init-encc000.nmconnection b/tests/unittests/net/artifacts/no_matching_mac/etc/NetworkManager/system-connections/cloud-init-encc000.nmconnection
index 3368388d..fbdfbc65 100644
--- a/tests/unittests/net/artifacts/no_matching_mac/etc/NetworkManager/system-connections/cloud-init-encc000.nmconnection
+++ b/tests/unittests/net/artifacts/no_matching_mac/etc/NetworkManager/system-connections/cloud-init-encc000.nmconnection
@@ -3,6 +3,7 @@
[connection]
id=cloud-init encc000
uuid=f869ebd3-f175-5747-bf02-d0d44d687248
+autoconnect-priority=120
type=ethernet
interface-name=encc000
diff --git a/tests/unittests/net/artifacts/no_matching_mac/etc/NetworkManager/system-connections/cloud-init-zz-all-en.nmconnection b/tests/unittests/net/artifacts/no_matching_mac/etc/NetworkManager/system-connections/cloud-init-zz-all-en.nmconnection
index 16120bc1..dce56c7d 100644
--- a/tests/unittests/net/artifacts/no_matching_mac/etc/NetworkManager/system-connections/cloud-init-zz-all-en.nmconnection
+++ b/tests/unittests/net/artifacts/no_matching_mac/etc/NetworkManager/system-connections/cloud-init-zz-all-en.nmconnection
@@ -3,6 +3,7 @@
[connection]
id=cloud-init zz-all-en
uuid=159daec9-cba3-5101-85e7-46d831857f43
+autoconnect-priority=120
type=ethernet
interface-name=zz-all-en
diff --git a/tests/unittests/net/artifacts/no_matching_mac/etc/NetworkManager/system-connections/cloud-init-zz-all-eth.nmconnection b/tests/unittests/net/artifacts/no_matching_mac/etc/NetworkManager/system-connections/cloud-init-zz-all-eth.nmconnection
index df44d546..ee436bf2 100644
--- a/tests/unittests/net/artifacts/no_matching_mac/etc/NetworkManager/system-connections/cloud-init-zz-all-eth.nmconnection
+++ b/tests/unittests/net/artifacts/no_matching_mac/etc/NetworkManager/system-connections/cloud-init-zz-all-eth.nmconnection
@@ -3,6 +3,7 @@
[connection]
id=cloud-init zz-all-eth
uuid=23a83d8a-d7db-5133-a77b-e68a6ac61ec9
+autoconnect-priority=120
type=ethernet
interface-name=zz-all-eth
diff --git a/tests/unittests/test_net.py b/tests/unittests/test_net.py
index 0f523ff8..7abe61b9 100644
--- a/tests/unittests/test_net.py
+++ b/tests/unittests/test_net.py
@@ -631,6 +631,7 @@ dns = none
[connection]
id=cloud-init eth0
uuid=1dd9a779-d327-56e1-8454-c65e2556c12c
+autoconnect-priority=120
type=ethernet
[user]
@@ -1118,6 +1119,7 @@ NETWORK_CONFIGS = {
[connection]
id=cloud-init eth1
uuid=3c50eb47-7260-5a6d-801d-bd4f587d6b58
+ autoconnect-priority=120
type=ethernet
[user]
@@ -1135,6 +1137,7 @@ NETWORK_CONFIGS = {
[connection]
id=cloud-init eth99
uuid=b1b88000-1f03-5360-8377-1a2205efffb4
+ autoconnect-priority=120
type=ethernet
[user]
@@ -1234,6 +1237,7 @@ NETWORK_CONFIGS = {
[connection]
id=cloud-init iface0
uuid=8ddfba48-857c-5e86-ac09-1b43eae0bf70
+ autoconnect-priority=120
type=ethernet
interface-name=iface0
@@ -1364,6 +1368,7 @@ NETWORK_CONFIGS = {
[connection]
id=cloud-init iface0
uuid=8ddfba48-857c-5e86-ac09-1b43eae0bf70
+ autoconnect-priority=120
type=ethernet
interface-name=iface0
@@ -1404,6 +1409,7 @@ NETWORK_CONFIGS = {
[connection]
id=cloud-init iface0
uuid=8ddfba48-857c-5e86-ac09-1b43eae0bf70
+ autoconnect-priority=120
type=ethernet
interface-name=iface0
@@ -1504,6 +1510,7 @@ NETWORK_CONFIGS = {
[connection]
id=cloud-init iface0
uuid=8ddfba48-857c-5e86-ac09-1b43eae0bf70
+ autoconnect-priority=120
type=ethernet
interface-name=iface0
@@ -1734,6 +1741,7 @@ NETWORK_CONFIGS = {
[connection]
id=cloud-init iface0
uuid=8ddfba48-857c-5e86-ac09-1b43eae0bf70
+ autoconnect-priority=120
type=ethernet
interface-name=iface0
@@ -1845,6 +1853,7 @@ NETWORK_CONFIGS = {
[connection]
id=cloud-init iface0
uuid=8ddfba48-857c-5e86-ac09-1b43eae0bf70
+ autoconnect-priority=120
type=ethernet
interface-name=iface0
@@ -1967,6 +1976,7 @@ NETWORK_CONFIGS = {
[connection]
id=cloud-init iface0
uuid=8ddfba48-857c-5e86-ac09-1b43eae0bf70
+ autoconnect-priority=120
type=ethernet
interface-name=iface0
@@ -2043,6 +2053,7 @@ NETWORK_CONFIGS = {
[connection]
id=cloud-init iface0
uuid=8ddfba48-857c-5e86-ac09-1b43eae0bf70
+ autoconnect-priority=120
type=ethernet
interface-name=iface0
@@ -2507,6 +2518,7 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
[connection]
id=cloud-init eth3
uuid=b7e95dda-7746-5bf8-bf33-6e5f3c926790
+ autoconnect-priority=120
type=ethernet
slave-type=bridge
master=dee46ce4-af7a-5e7c-aa08-b25533ae9213
@@ -2526,6 +2538,7 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
[connection]
id=cloud-init eth5
uuid=5fda13c7-9942-5e90-a41b-1d043bd725dc
+ autoconnect-priority=120
type=ethernet
[user]
@@ -2547,6 +2560,7 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
[connection]
id=cloud-init ib0
uuid=11a1dda7-78b4-5529-beba-d9b5f549ad7b
+ autoconnect-priority=120
type=infiniband
[user]
@@ -2571,6 +2585,7 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
[connection]
id=cloud-init bond0.200
uuid=88984a9c-ff22-5233-9267-86315e0acaa7
+ autoconnect-priority=120
type=vlan
interface-name=bond0.200
@@ -2594,6 +2609,7 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
[connection]
id=cloud-init eth0
uuid=1dd9a779-d327-56e1-8454-c65e2556c12c
+ autoconnect-priority=120
type=ethernet
[user]
@@ -2611,6 +2627,7 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
[connection]
id=cloud-init eth4
uuid=e27e4959-fb50-5580-b9a4-2073554627b9
+ autoconnect-priority=120
type=ethernet
slave-type=bridge
master=dee46ce4-af7a-5e7c-aa08-b25533ae9213
@@ -2630,6 +2647,7 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
[connection]
id=cloud-init eth1
uuid=3c50eb47-7260-5a6d-801d-bd4f587d6b58
+ autoconnect-priority=120
type=ethernet
slave-type=bond
master=54317911-f840-516b-a10d-82cb4c1f075c
@@ -2649,6 +2667,7 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
[connection]
id=cloud-init br0
uuid=dee46ce4-af7a-5e7c-aa08-b25533ae9213
+ autoconnect-priority=120
type=bridge
interface-name=br0
@@ -2680,6 +2699,7 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
[connection]
id=cloud-init eth0.101
uuid=b5acec5e-db80-5935-8b02-0d5619fc42bf
+ autoconnect-priority=120
type=vlan
interface-name=eth0.101
@@ -2708,6 +2728,7 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
[connection]
id=cloud-init bond0
uuid=54317911-f840-516b-a10d-82cb4c1f075c
+ autoconnect-priority=120
type=bond
interface-name=bond0
@@ -2732,6 +2753,7 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
[connection]
id=cloud-init eth2
uuid=5559a242-3421-5fdd-896e-9cb8313d5804
+ autoconnect-priority=120
type=ethernet
slave-type=bond
master=54317911-f840-516b-a10d-82cb4c1f075c
@@ -3257,6 +3279,7 @@ iface bond0 inet6 static
[connection]
id=cloud-init bond0s0
uuid=09d0b5b9-67e7-5577-a1af-74d1cf17a71e
+ autoconnect-priority=120
type=ethernet
slave-type=bond
master=54317911-f840-516b-a10d-82cb4c1f075c
@@ -3276,6 +3299,7 @@ iface bond0 inet6 static
[connection]
id=cloud-init bond0s1
uuid=4d9aca96-b515-5630-ad83-d13daac7f9d0
+ autoconnect-priority=120
type=ethernet
slave-type=bond
master=54317911-f840-516b-a10d-82cb4c1f075c
@@ -3295,6 +3319,7 @@ iface bond0 inet6 static
[connection]
id=cloud-init bond0
uuid=54317911-f840-516b-a10d-82cb4c1f075c
+ autoconnect-priority=120
type=bond
interface-name=bond0
@@ -3421,6 +3446,7 @@ iface bond0 inet6 static
[connection]
id=cloud-init en0.99
uuid=f594e2ed-f107-51df-b225-1dc530a5356b
+ autoconnect-priority=120
type=vlan
interface-name=en0.99
@@ -3453,6 +3479,7 @@ iface bond0 inet6 static
[connection]
id=cloud-init en0
uuid=e0ca478b-8d84-52ab-8fae-628482c629b5
+ autoconnect-priority=120
type=ethernet
[user]
@@ -3580,6 +3607,7 @@ iface bond0 inet6 static
[connection]
id=cloud-init br0
uuid=dee46ce4-af7a-5e7c-aa08-b25533ae9213
+ autoconnect-priority=120
type=bridge
interface-name=br0
@@ -3604,6 +3632,7 @@ iface bond0 inet6 static
[connection]
id=cloud-init eth0
uuid=1dd9a779-d327-56e1-8454-c65e2556c12c
+ autoconnect-priority=120
type=ethernet
slave-type=bridge
master=dee46ce4-af7a-5e7c-aa08-b25533ae9213
@@ -3628,6 +3657,7 @@ iface bond0 inet6 static
[connection]
id=cloud-init eth1
uuid=3c50eb47-7260-5a6d-801d-bd4f587d6b58
+ autoconnect-priority=120
type=ethernet
slave-type=bridge
master=dee46ce4-af7a-5e7c-aa08-b25533ae9213
@@ -3782,6 +3812,7 @@ iface bond0 inet6 static
[connection]
id=cloud-init eth0
uuid=1dd9a779-d327-56e1-8454-c65e2556c12c
+ autoconnect-priority=120
type=ethernet
[user]
@@ -3804,6 +3835,7 @@ iface bond0 inet6 static
[connection]
id=cloud-init eth1
uuid=3c50eb47-7260-5a6d-801d-bd4f587d6b58
+ autoconnect-priority=120
type=ethernet
[user]
@@ -3826,6 +3858,7 @@ iface bond0 inet6 static
[connection]
id=cloud-init eth2
uuid=5559a242-3421-5fdd-896e-9cb8313d5804
+ autoconnect-priority=120
type=ethernet
[user]
@@ -5688,6 +5721,7 @@ class TestNetworkManagerRendering(CiTestCase):
[connection]
id=cloud-init eth1000
uuid=8c517500-0c95-5308-9c8a-3092eebc44eb
+ autoconnect-priority=120
type=ethernet
[user]
@@ -5742,6 +5776,7 @@ class TestNetworkManagerRendering(CiTestCase):
[connection]
id=cloud-init interface0
uuid=8b6862ed-dbd6-5830-93f7-a91451c13828
+ autoconnect-priority=120
type=ethernet
[user]
@@ -5778,6 +5813,7 @@ class TestNetworkManagerRendering(CiTestCase):
[connection]
id=cloud-init eth0
uuid=1dd9a779-d327-56e1-8454-c65e2556c12c
+ autoconnect-priority=120
type=ethernet
interface-name=eth0
--
2.39.3
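As a hedged illustration of the override path the commit message describes: a user-managed keyfile only needs an autoconnect-priority above 120 (any value in 121-999) to win over the cloud-init-generated one. The sketch below writes such a keyfile with configparser; the connection id, interface name, and file path are made up for the example, and a real keyfile would normally also carry a uuid like the generated ones shown above.

import configparser
import os

# Illustrative user override: 200 > 120, so this connection outranks the
# cloud-init-generated keyfile for eth0 (valid priority range is 0-999).
override = configparser.ConfigParser()
override["connection"] = {
    "id": "user-override eth0",
    "type": "ethernet",
    "interface-name": "eth0",
    "autoconnect-priority": "200",
}
override["ipv4"] = {"method": "auto"}

path = "/etc/NetworkManager/system-connections/user-override-eth0.nmconnection"
with open(path, "w") as fh:
    override.write(fh, space_around_delimiters=False)
os.chmod(path, 0o600)  # keyfiles must be root-owned and not world-readable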

View File

@ -0,0 +1,40 @@
From 2db9b803e64171d2c8d8a3ad465b0fb979abf146 Mon Sep 17 00:00:00 2001
From: Ani Sinha <anisinha@redhat.com>
Date: Mon, 22 May 2023 21:33:53 +0530
Subject: [PATCH 4/7] network_manager: add a method for ipv6 static IP
configuration (#4127)
RH-Author: Ani Sinha <None>
RH-MergeRequest: 103: [RHEL8] Support configuring network by NM keyfiles
RH-Bugzilla: 2219528
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [4/7] dfc67da03ac11c18439c3500b8cfba6a66a7428e
The method_map of the NetworkManager renderer is missing an entry for static
IPv6 configuration. This causes cloud-init to generate a keyfile with the
IPv6 method set to "auto" instead of "manual". This change fixes that.
fixes: #4126
RHBZ: 2196284
Signed-off-by: Ani Sinha <anisinha@redhat.com>
(cherry picked from commit 5d440856cb6d2b4c908015fe4eb7227615c17c8b)
---
cloudinit/net/network_manager.py | 1 +
1 file changed, 1 insertion(+)
diff --git a/cloudinit/net/network_manager.py b/cloudinit/net/network_manager.py
index 744c0cbb..2752f52f 100644
--- a/cloudinit/net/network_manager.py
+++ b/cloudinit/net/network_manager.py
@@ -69,6 +69,7 @@ class NMConnection:
method_map = {
"static": "manual",
+ "static6": "manual",
"dhcp6": "auto",
"ipv6_slaac": "auto",
"ipv6_dhcpv6-stateless": "auto",
--
2.39.3

View File

@ -0,0 +1,58 @@
From 2e5e0383567191808e2054cb236bdbd785540b26 Mon Sep 17 00:00:00 2001
From: Ani Sinha <anisinha@redhat.com>
Date: Mon, 22 May 2023 21:30:01 +0530
Subject: [PATCH 3/7] nm: generate ipv6 stateful dhcp config at par with
sysconfig (#4115)
RH-Author: Ani Sinha <None>
RH-MergeRequest: 103: [RHEL8] Support configuring network by NM keyfiles
RH-Bugzilla: 2219528
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [3/7] cf60e9477ac047f9e7e58c2fc528745fc2ae4248
The sysconfig renderer sets the following in the ifcfg file for IPV6 stateful
DHCP configuration:
BOOTPROTO = "dhcp"
DHCPV6C = True
IPV6INIT = True
IPV6_AUTOCONF = False
This should result in
[ipv6]
method=dhcp
in the NetworkManager-generated keyfile, since DHCPV6C is set and
IPV6_AUTOCONF is not. Unfortunately, the NetworkManager renderer
deviates from this and generates:
[ipv6]
method=auto
in its rendered keyfile. This change fixes the deviation and brings the
IPv6 stateful DHCP configuration into alignment with what the sysconfig
renderer generates.
RHBZ: 2207716
Signed-off-by: Ani Sinha <anisinha@redhat.com>
(cherry picked from commit ea573ba6fc25fe49a6a1a322eeb5259b6238d78b)
---
cloudinit/net/network_manager.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/cloudinit/net/network_manager.py b/cloudinit/net/network_manager.py
index 53763d15..744c0cbb 100644
--- a/cloudinit/net/network_manager.py
+++ b/cloudinit/net/network_manager.py
@@ -72,7 +72,7 @@ class NMConnection:
"dhcp6": "auto",
"ipv6_slaac": "auto",
"ipv6_dhcpv6-stateless": "auto",
- "ipv6_dhcpv6-stateful": "auto",
+ "ipv6_dhcpv6-stateful": "dhcp",
"dhcp4": "auto",
"dhcp": "auto",
}
--
2.39.3
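Taken together, the static6 fix (patch 4/7) and the dhcpv6-stateful fix (this patch) amount to the corrected mapping below from cloud-init subnet types to keyfile methods; the dict restates what the diffs add, and the lookup helper is invented for the example.

# Corrected mapping restated from the two diffs; keys are cloud-init subnet
# types, values are the method written to the keyfile's [ipv4]/[ipv6] section.
METHOD_MAP = {
    "static": "manual",
    "static6": "manual",             # added by patch 4/7
    "dhcp6": "auto",
    "ipv6_slaac": "auto",
    "ipv6_dhcpv6-stateless": "auto",
    "ipv6_dhcpv6-stateful": "dhcp",  # was "auto" before this patch
    "dhcp4": "auto",
    "dhcp": "auto",
}

def keyfile_method(subnet_type: str) -> str:
    # Hypothetical helper: resolve the keyfile method for a subnet type.
    return METHOD_MAP[subnet_type]

assert keyfile_method("static6") == "manual"
assert keyfile_method("ipv6_dhcpv6-stateful") == "dhcp"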

View File

@ -1,65 +0,0 @@
From abf1adeae8211f5acd87dc63b03b2ed995047efd Mon Sep 17 00:00:00 2001
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Date: Thu, 20 May 2021 08:53:55 +0200
Subject: [PATCH 1/2] rhel/cloud.cfg: remove ssh_genkeytypes in settings.py and
set in cloud.cfg
RH-Author: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RH-MergeRequest: 10: rhel/cloud.cfg: remove ssh_genkeytypes in settings.py and set in cloud.cfg
RH-Commit: [1/1] 6da989423b9b6e017afbac2f1af3649b0487310f
RH-Bugzilla: 1957532
RH-Acked-by: Eduardo Otubo <otubo@redhat.com>
RH-Acked-by: Cathy Avery <cavery@redhat.com>
RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com>
RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com>
Currently genkeytypes in cloud.cfg is set to None, so together with
ssh_deletekeys=1, cloud-init on first boot will just delete the existing
keys and not generate new ones.
Just removing that property from cloud.cfg is not enough, because
settings.py provides another empty default value that would be used
instead, resulting in no keys being generated even when the property is
not defined.
Removing genkeytypes from settings.py as well makes it default to
GENERATE_KEY_NAMES, but since we want only 'rsa', 'ecdsa' and 'ed25519', add
genkeytypes back to cloud.cfg with those defaults.
Also remove ssh_deletekeys from settings.py, as we always need it to be 1
(and it already defaults to 1).
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
cloudinit/settings.py | 2 --
rhel/cloud.cfg | 2 +-
2 files changed, 1 insertion(+), 3 deletions(-)
diff --git a/cloudinit/settings.py b/cloudinit/settings.py
index 43a1490c..2acf2615 100644
--- a/cloudinit/settings.py
+++ b/cloudinit/settings.py
@@ -49,8 +49,6 @@ CFG_BUILTIN = {
'def_log_file_mode': 0o600,
'log_cfgs': [],
'mount_default_fields': [None, None, 'auto', 'defaults,nofail', '0', '2'],
- 'ssh_deletekeys': False,
- 'ssh_genkeytypes': [],
'syslog_fix_perms': [],
'system_info': {
'paths': {
diff --git a/rhel/cloud.cfg b/rhel/cloud.cfg
index 9ecba215..cbee197a 100644
--- a/rhel/cloud.cfg
+++ b/rhel/cloud.cfg
@@ -7,7 +7,7 @@ ssh_pwauth: 0
mount_default_fields: [~, ~, 'auto', 'defaults,nofail,x-systemd.requires=cloud-init.service', '0', '2']
resize_rootfs_tmp: /dev
ssh_deletekeys: 1
-ssh_genkeytypes: ~
+ssh_genkeytypes: ['rsa', 'ecdsa', 'ed25519']
syslog_fix_perms: ~
disable_vmware_customization: false
--
2.27.0

View File

@ -0,0 +1,65 @@
From 5a3db5dddab530ad45aaaa0e20fdaadc9a82a7c9 Mon Sep 17 00:00:00 2001
From: Ani Sinha <anisinha@redhat.com>
Date: Tue, 4 Apr 2023 19:59:07 +0530
Subject: [PATCH] rhel: make sure previous-hostname file ends with a new line
(#2108)
RH-Author: Ani Sinha <None>
RH-MergeRequest: 97: rhel: make sure previous-hostname file ends with a new line (#2108)
RH-Bugzilla: 2182407
RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RH-Acked-by: Jon Maloy <jmaloy@redhat.com>
RH-Commit: [1/1] 126208f85cc3bf5f2264bd5a71524716b28a7686 (anisinha/rhel-cloud-init)
cloud-init strips the trailing newline from "/etc/hostname" on the rhel
distro when processing "/var/lib/cloud/data/previous-hostname". Although this
does not pose a serious issue, it is still better if the behavior matches
other distros such as Ubuntu, where previous-hostname does end with a
newline. Fix this issue by using the hostname parser in rhel, as debian does.
Signed-off-by: Ani Sinha <anisinha@redhat.com>
(cherry picked from commit 6d42aa8e2c1a5454a658ab4e2b9cead2677c77cd)
Signed-off-by: Ani Sinha <anisinha@redhat.com>
---
cloudinit/distros/rhel.py | 5 ++++-
tools/.github-cla-signers | 1 +
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/cloudinit/distros/rhel.py b/cloudinit/distros/rhel.py
index df7dc3d6..9625709e 100644
--- a/cloudinit/distros/rhel.py
+++ b/cloudinit/distros/rhel.py
@@ -13,6 +13,7 @@ from cloudinit import distros, helpers
from cloudinit import log as logging
from cloudinit import subp, util
from cloudinit.distros import rhel_util
+from cloudinit.distros.parsers.hostname import HostnameConf
from cloudinit.settings import PER_INSTANCE
LOG = logging.getLogger(__name__)
@@ -111,7 +112,9 @@ class Distro(distros.Distro):
# systemd will never update previous-hostname for us, so
# we need to do it ourselves
if self.uses_systemd() and filename.endswith("/previous-hostname"):
- util.write_file(filename, hostname)
+ conf = HostnameConf("")
+ conf.set_hostname(hostname)
+ util.write_file(filename, str(conf), 0o644)
elif self.uses_systemd():
subp.subp(["hostnamectl", "set-hostname", str(hostname)])
else:
diff --git a/tools/.github-cla-signers b/tools/.github-cla-signers
index d8cca015..457dacf4 100644
--- a/tools/.github-cla-signers
+++ b/tools/.github-cla-signers
@@ -9,6 +9,7 @@ andgein
andrew-lee-metaswitch
andrewbogott
andrewlukoshko
+ani-sinha
antonyc
aswinrajamannar
beantaxi
--
2.37.3
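A minimal sketch of the behavioral difference the patch is after: writing the raw hostname string drops the trailing newline, while rendering through a parser-style object (as the diff does with HostnameConf) guarantees one. The helper below is a stand-in, not the real cloudinit parser, and the path is an example.

from pathlib import Path

def write_previous_hostname(path: str, hostname: str) -> None:
    # Stand-in for the HostnameConf-based rendering above: always terminate
    # the file with a single trailing newline, matching /etc/hostname.
    Path(path).write_text(hostname.rstrip("\n") + "\n")

write_previous_hostname("/tmp/previous-hostname", "myhost.example.com")
assert Path("/tmp/previous-hostname").read_text() == "myhost.example.com\n"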

View File

@ -1,653 +0,0 @@
From aeab67600eb2d5e483812620b56ce5fb031a57d6 Mon Sep 17 00:00:00 2001
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Date: Mon, 12 Jul 2021 21:47:37 +0200
Subject: [PATCH] ssh-util: allow cloudinit to merge all ssh keys into a custom
user file, defined in AuthorizedKeysFile (#937)
RH-Author: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RH-MergeRequest: 25: ssh-util: allow cloudinit to merge all ssh keys into a custom user file, defined in AuthorizedKeysFile (#937)
RH-Commit: [1/1] 27bbe94f3b9dd8734865766bd30b06cff83383ab (eesposit/cloud-init)
RH-Bugzilla: 1862967
RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com>
RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com>
TESTED: By me and QA
BREW: 38030830
Conflicts: the upstream patch modifies tests/integration_tests/util.py, which
is not present in RHEL.
commit 9b52405c6f0de5e00d5ee9c1d13540425d8f6bf5
Author: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Date: Mon Jul 12 20:21:02 2021 +0200
ssh-util: allow cloudinit to merge all ssh keys into a custom user file, defined in AuthorizedKeysFile (#937)
This patch aims to fix LP1911680 by analyzing the files listed in
sshd_config and merging all keys into a user-specific file. It also
introduces additional tests to cover this specific case.
The file is picked by analyzing the paths given in AuthorizedKeysFile.
If a path points inside the current user's home folder (the path is
/home/user/*), it is a user-specific file, so we can copy all user keys
there.
If it contains %u or %h, there will be a specific authorized_keys file for
each user, so we can also copy all user keys there.
If no path points to a user-specific file, for example when only
/etc/ssh/authorized_keys is given, default to ~/.ssh/authorized_keys.
Note that if there is more than one user-specific file, the last one is
picked.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Co-authored-by: James Falcon <therealfalcon@gmail.com>
LP: #1911680
RHBZ:1862967
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
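Before the diff, a compact restatement of the selection rule described above; the helper is illustrative and is not the code added to cloudinit/ssh_util.py.

import os

def pick_user_keys_file(rendered_paths, key_path_tokens, home_dir):
    # Illustrative restatement of the rule above: prefer the last path that
    # is user-specific (lives under the user's home, or came from a token
    # containing %u or %h); otherwise fall back to ~/.ssh/authorized_keys.
    chosen = os.path.join(home_dir, ".ssh", "authorized_keys")
    for token, rendered in zip(key_path_tokens, rendered_paths):
        if "%u" in token or "%h" in token or rendered.startswith(home_dir + "/"):
            chosen = rendered
    return chosen

# Example: a global file plus a per-user file -> the per-user file wins.
tokens = ["/etc/ssh/authorized_keys", "%h/.ssh/authorized_keys2"]
rendered = ["/etc/ssh/authorized_keys", "/home/bobby/.ssh/authorized_keys2"]
assert pick_user_keys_file(rendered, tokens, "/home/bobby") == \
    "/home/bobby/.ssh/authorized_keys2"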
---
cloudinit/ssh_util.py | 22 +-
.../assets/keys/id_rsa.test1 | 38 +++
.../assets/keys/id_rsa.test1.pub | 1 +
.../assets/keys/id_rsa.test2 | 38 +++
.../assets/keys/id_rsa.test2.pub | 1 +
.../assets/keys/id_rsa.test3 | 38 +++
.../assets/keys/id_rsa.test3.pub | 1 +
.../modules/test_ssh_keysfile.py | 85 ++++++
tests/unittests/test_sshutil.py | 246 +++++++++++++++++-
9 files changed, 456 insertions(+), 14 deletions(-)
create mode 100644 tests/integration_tests/assets/keys/id_rsa.test1
create mode 100644 tests/integration_tests/assets/keys/id_rsa.test1.pub
create mode 100644 tests/integration_tests/assets/keys/id_rsa.test2
create mode 100644 tests/integration_tests/assets/keys/id_rsa.test2.pub
create mode 100644 tests/integration_tests/assets/keys/id_rsa.test3
create mode 100644 tests/integration_tests/assets/keys/id_rsa.test3.pub
create mode 100644 tests/integration_tests/modules/test_ssh_keysfile.py
diff --git a/cloudinit/ssh_util.py b/cloudinit/ssh_util.py
index c08042d6..89057262 100644
--- a/cloudinit/ssh_util.py
+++ b/cloudinit/ssh_util.py
@@ -252,13 +252,15 @@ def render_authorizedkeysfile_paths(value, homedir, username):
def extract_authorized_keys(username, sshd_cfg_file=DEF_SSHD_CFG):
(ssh_dir, pw_ent) = users_ssh_info(username)
default_authorizedkeys_file = os.path.join(ssh_dir, 'authorized_keys')
+ user_authorizedkeys_file = default_authorizedkeys_file
auth_key_fns = []
with util.SeLinuxGuard(ssh_dir, recursive=True):
try:
ssh_cfg = parse_ssh_config_map(sshd_cfg_file)
+ key_paths = ssh_cfg.get("authorizedkeysfile",
+ "%h/.ssh/authorized_keys")
auth_key_fns = render_authorizedkeysfile_paths(
- ssh_cfg.get("authorizedkeysfile", "%h/.ssh/authorized_keys"),
- pw_ent.pw_dir, username)
+ key_paths, pw_ent.pw_dir, username)
except (IOError, OSError):
# Give up and use a default key filename
@@ -267,8 +269,22 @@ def extract_authorized_keys(username, sshd_cfg_file=DEF_SSHD_CFG):
"config from %r, using 'AuthorizedKeysFile' file "
"%r instead", DEF_SSHD_CFG, auth_key_fns[0])
+ # check if one of the keys is the user's one
+ for key_path, auth_key_fn in zip(key_paths.split(), auth_key_fns):
+ if any([
+ '%u' in key_path,
+ '%h' in key_path,
+ auth_key_fn.startswith('{}/'.format(pw_ent.pw_dir))
+ ]):
+ user_authorizedkeys_file = auth_key_fn
+
+ if user_authorizedkeys_file != default_authorizedkeys_file:
+ LOG.debug(
+ "AuthorizedKeysFile has an user-specific authorized_keys, "
+ "using %s", user_authorizedkeys_file)
+
# always store all the keys in the user's private file
- return (default_authorizedkeys_file, parse_authorized_keys(auth_key_fns))
+ return (user_authorizedkeys_file, parse_authorized_keys(auth_key_fns))
def setup_user_keys(keys, username, options=None):
diff --git a/tests/integration_tests/assets/keys/id_rsa.test1 b/tests/integration_tests/assets/keys/id_rsa.test1
new file mode 100644
index 00000000..bd4c822e
--- /dev/null
+++ b/tests/integration_tests/assets/keys/id_rsa.test1
@@ -0,0 +1,38 @@
+-----BEGIN OPENSSH PRIVATE KEY-----
+b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn
+NhAAAAAwEAAQAAAYEAtRlG96aJ23URvAgO/bBsuLl+lquc350aSwV98/i8vlvOn5GVcHye
+t/rXQg4lZ4s0owG3kWyQFY8nvTk+G+UNU8fN0anAzBDi+4MzsejkF9scjTMFmXVrIpICqV
+3bYQNjPv6r+ubQdkD01du3eB9t5/zl84gtshp0hBdofyz8u1/A25s7fVU67GyI7PdKvaS+
+yvJSInZnb2e9VQzfJC+qAnN7gUZatBKjdgUtJeiUUeDaVnaS17b0aoT9iBO0sIcQtOTBlY
+lCjFt1TAMLZ64Hj3SfGZB7Yj0Z+LzFB2IWX1zzsjI68YkYPKOSL/NYhQU9e55kJQ7WnngN
+HY/2n/A7dNKSFDmgM5c9IWgeZ7fjpsfIYAoJ/CAxFIND+PEHd1gCS6xoEhaUVyh5WH/Xkw
+Kv1nx4AiZ2BFCE+75kySRLZUJ+5y0r3DU5ktMXeURzVIP7pu0R8DCul+GU+M/+THyWtAEO
+geaNJ6fYpo2ipDhbmTYt3kk2lMIapRxGBFs+37sdAAAFgGGJssNhibLDAAAAB3NzaC1yc2
+EAAAGBALUZRvemidt1EbwIDv2wbLi5fparnN+dGksFffP4vL5bzp+RlXB8nrf610IOJWeL
+NKMBt5FskBWPJ705PhvlDVPHzdGpwMwQ4vuDM7Ho5BfbHI0zBZl1ayKSAqld22EDYz7+q/
+rm0HZA9NXbt3gfbef85fOILbIadIQXaH8s/LtfwNubO31VOuxsiOz3Sr2kvsryUiJ2Z29n
+vVUM3yQvqgJze4FGWrQSo3YFLSXolFHg2lZ2kte29GqE/YgTtLCHELTkwZWJQoxbdUwDC2
+euB490nxmQe2I9Gfi8xQdiFl9c87IyOvGJGDyjki/zWIUFPXueZCUO1p54DR2P9p/wO3TS
+khQ5oDOXPSFoHme346bHyGAKCfwgMRSDQ/jxB3dYAkusaBIWlFcoeVh/15MCr9Z8eAImdg
+RQhPu+ZMkkS2VCfuctK9w1OZLTF3lEc1SD+6btEfAwrpfhlPjP/kx8lrQBDoHmjSen2KaN
+oqQ4W5k2Ld5JNpTCGqUcRgRbPt+7HQAAAAMBAAEAAAGBAJJCTOd70AC2ptEGbR0EHHqADT
+Wgefy7A94tHFEqxTy0JscGq/uCGimaY7kMdbcPXT59B4VieWeAC2cuUPP0ZHQSfS5ke7oT
+tU3N47U+0uBVbNS4rUAH7bOo2o9wptnOA5x/z+O+AARRZ6tEXQOd1oSy4gByLf2Wkh2QTi
+vP6Hln1vlFgKEzcXg6G8fN3MYWxKRhWmZM3DLERMvorlqqSBLcs5VvfZfLKcsKWTExioAq
+KgwEjYm8T9+rcpsw1xBus3j9k7wCI1Sus6PCDjq0pcYKLMYM7p8ygnU2tRYrOztdIxgWRA
+w/1oenm1Mqq2tV5xJcBCwCLOGe6SFwkIRywOYc57j5McH98Xhhg9cViyyBdXy/baF0mro+
+qPhOsWDxqwD4VKZ9UmQ6O8kPNKcc7QcIpFJhcO0g9zbp/MT0KueaWYrTKs8y4lUkTT7Xz6
++MzlR122/JwlAbBo6Y2kWtB+y+XwBZ0BfyJsm2czDhKm7OI5KfuBNhq0tFfKwOlYBq4QAA
+AMAyvUof1R8LLISkdO3EFTKn5RGNkPPoBJmGs6LwvU7NSjjLj/wPQe4jsIBc585tvbrddp
+60h72HgkZ5tqOfdeBYOKqX0qQQBHUEvI6M+NeQTQRev8bCHMLXQ21vzpClnrwNzlja359E
+uTRfiPRwIlyPLhOUiClBDSAnBI9h82Hkk3zzsQ/xGfsPB7iOjRbW69bMRSVCRpeweCVmWC
+77DTsEOq69V2TdljhQNIXE5OcOWonIlfgPiI74cdd+dLhzc/AAAADBAO1/JXd2kYiRyNkZ
+aXTLcwiSgBQIYbobqVP3OEtTclr0P1JAvby3Y4cCaEhkenx+fBqgXAku5lKM+U1Q9AEsMk
+cjIhaDpb43rU7GPjMn4zHwgGsEKd5pC1yIQ2PlK+cHanAdsDjIg+6RR+fuvid/mBeBOYXb
+Py0sa3HyekLJmCdx4UEyNASoiNaGFLQVAqo+RACsXy6VMxFH5dqDYlvwrfUQLwxJmse9Vb
+GEuuPAsklNugZqssC2XOIujFVUpslduQAAAMEAwzVHQVtsc3icCSzEAARpDTUdTbI29OhB
+/FMBnjzS9/3SWfLuBOSm9heNCHs2jdGNb8cPdKZuY7S9Fx6KuVUPyTbSSYkjj0F4fTeC9g
+0ym4p4UWYdF67WSWwLORkaG8K0d+G/CXkz8hvKUg6gcZWKBHAE1ROrHu1nsc8v7mkiKq4I
+bnTw5Q9TgjbWcQWtgPq0wXyyl/K8S1SFdkMCTOHDD0RQ+jTV2WNGVwFTodIRHenX+Rw2g4
+CHbTWbsFrHR1qFAAAACmphbWVzQG5ld3Q=
+-----END OPENSSH PRIVATE KEY-----
diff --git a/tests/integration_tests/assets/keys/id_rsa.test1.pub b/tests/integration_tests/assets/keys/id_rsa.test1.pub
new file mode 100644
index 00000000..3d2e26e1
--- /dev/null
+++ b/tests/integration_tests/assets/keys/id_rsa.test1.pub
@@ -0,0 +1 @@
+ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1GUb3ponbdRG8CA79sGy4uX6Wq5zfnRpLBX3z+Ly+W86fkZVwfJ63+tdCDiVnizSjAbeRbJAVjye9OT4b5Q1Tx83RqcDMEOL7gzOx6OQX2xyNMwWZdWsikgKpXdthA2M+/qv65tB2QPTV27d4H23n/OXziC2yGnSEF2h/LPy7X8Dbmzt9VTrsbIjs90q9pL7K8lIidmdvZ71VDN8kL6oCc3uBRlq0EqN2BS0l6JRR4NpWdpLXtvRqhP2IE7SwhxC05MGViUKMW3VMAwtnrgePdJ8ZkHtiPRn4vMUHYhZfXPOyMjrxiRg8o5Iv81iFBT17nmQlDtaeeA0dj/af8Dt00pIUOaAzlz0haB5nt+Omx8hgCgn8IDEUg0P48Qd3WAJLrGgSFpRXKHlYf9eTAq/WfHgCJnYEUIT7vmTJJEtlQn7nLSvcNTmS0xd5RHNUg/um7RHwMK6X4ZT4z/5MfJa0AQ6B5o0np9imjaKkOFuZNi3eSTaUwhqlHEYEWz7fux0= test1@host
diff --git a/tests/integration_tests/assets/keys/id_rsa.test2 b/tests/integration_tests/assets/keys/id_rsa.test2
new file mode 100644
index 00000000..5854d901
--- /dev/null
+++ b/tests/integration_tests/assets/keys/id_rsa.test2
@@ -0,0 +1,38 @@
+-----BEGIN OPENSSH PRIVATE KEY-----
+b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn
+NhAAAAAwEAAQAAAYEAvK50D2PWOc4ikyHVRJS6tDhqzjL5cKiivID4p1X8BYCVw83XAEGO
+LnItUyVXHNADlh6fpVq1NY6A2JVtygoPF6ZFx8ph7IWMmnhDdnxLLyGsbhd1M1tiXJD/R+
+3WnGHRJ4PKrQavMLgqHRrieV3QVVfjFSeo6jX/4TruP6ZmvITMZWJrXaGphxJ/pPykEdkO
+i8AmKU9FNviojyPS2nNtj9B/635IdgWvrd7Vf5Ycsw9MR55LWSidwa856RH62Yl6LpEGTH
+m1lJiMk1u88JPSqvohhaUkLKkFpcQwcB0m76W1KOyllJsmX8bNXrlZsI+WiiYI7Xl5vQm2
+17DEuNeavtPAtDMxu8HmTg2UJ55Naxehbfe2lx2k5kYGGw3i1O1OVN2pZ2/OB71LucYd/5
+qxPaz03wswcGOJYGPkNc40vdES/Scc7Yt8HsnZuzqkyOgzn0HiUCzoYUYLYTpLf+yGmwxS
+yAEY056aOfkCsboKHOKiOmlJxNaZZFQkX1evep4DAAAFgC7HMbUuxzG1AAAAB3NzaC1yc2
+EAAAGBALyudA9j1jnOIpMh1USUurQ4as4y+XCooryA+KdV/AWAlcPN1wBBji5yLVMlVxzQ
+A5Yen6VatTWOgNiVbcoKDxemRcfKYeyFjJp4Q3Z8Sy8hrG4XdTNbYlyQ/0ft1pxh0SeDyq
+0GrzC4Kh0a4nld0FVX4xUnqOo1/+E67j+mZryEzGVia12hqYcSf6T8pBHZDovAJilPRTb4
+qI8j0tpzbY/Qf+t+SHYFr63e1X+WHLMPTEeeS1koncGvOekR+tmJei6RBkx5tZSYjJNbvP
+CT0qr6IYWlJCypBaXEMHAdJu+ltSjspZSbJl/GzV65WbCPloomCO15eb0JttewxLjXmr7T
+wLQzMbvB5k4NlCeeTWsXoW33tpcdpOZGBhsN4tTtTlTdqWdvzge9S7nGHf+asT2s9N8LMH
+BjiWBj5DXONL3REv0nHO2LfB7J2bs6pMjoM59B4lAs6GFGC2E6S3/shpsMUsgBGNOemjn5
+ArG6ChziojppScTWmWRUJF9Xr3qeAwAAAAMBAAEAAAGASj/kkEHbhbfmxzujL2/P4Sfqb+
+aDXqAeGkwujbs6h/fH99vC5ejmSMTJrVSeaUo6fxLiBDIj6UWA0rpLEBzRP59BCpRL4MXV
+RNxav/+9nniD4Hb+ug0WMhMlQmsH71ZW9lPYqCpfOq7ec8GmqdgPKeaCCEspH7HMVhfYtd
+eHylwAC02lrpz1l5/h900sS5G9NaWR3uPA+xbzThDs4uZVkSidjlCNt1QZhDSSk7jA5n34
+qJ5UTGu9WQDZqyxWKND+RIyQuFAPGQyoyCC1FayHO2sEhT5qHuumL14Mn81XpzoXFoKyql
+rhBDe+pHhKArBYt92Evch0k1ABKblFxtxLXcvk4Fs7pHi+8k4+Cnazej2kcsu1kURlMZJB
+w2QT/8BV4uImbH05LtyscQuwGzpIoxqrnHrvg5VbohStmhoOjYybzqqW3/M0qhkn5JgTiy
+dJcHRJisRnAcmbmEchYtLDi6RW1e022H4I9AFXQqyr5HylBq6ugtWcFCsrcX8ibZ8xAAAA
+wQCAOPgwae6yZLkrYzRfbxZtGKNmhpI0EtNSDCHYuQQapFZJe7EFENs/VAaIiiut0yajGj
+c3aoKcwGIoT8TUM8E3GSNW6+WidUOC7H6W+/6N2OYZHRBACGz820xO+UBCl2oSk+dLBlfr
+IQzBGUWn5uVYCs0/2nxfCdFyHtMK8dMF/ypbdG+o1rXz5y9b7PVG6Mn+o1Rjsdkq7VERmy
+Pukd8hwATOIJqoKl3TuFyBeYFLqe+0e7uTeswQFw17PF31VjAAAADBAOpJRQb8c6qWqsvv
+vkve0uMuL0DfWW0G6+SxjPLcV6aTWL5xu0Grd8uBxDkkHU/CDrAwpchXyuLsvbw21Eje/u
+U5k9nLEscWZwcX7odxlK+EfAY2Bf5+Hd9bH5HMzTRJH8KkWK1EppOLPyiDxz4LZGzPLVyv
+/1PgSuvXkSWk1KIE4SvSemyxGX2tPVI6uO+URqevfnPOS1tMB7BMQlgkR6eh4bugx9UYx9
+mwlXonNa4dN0iQxZ7N4rKFBbT/uyB2bQAAAMEAzisnkD8k9Tn8uyhxpWLHwb03X4ZUUHDV
+zu15e4a8dZ+mM8nHO986913Xz5JujlJKkGwFTvgWkIiR2zqTEauZHARH7gANpaweTm6lPd
+E4p2S0M3ulY7xtp9lCFIrDhMPPkGq8SFZB6qhgucHcZSRLq6ZDou3S2IdNOzDTpBtkhRCS
+0zFcdTLh3zZweoy8HGbW36bwB6s1CIL76Pd4F64i0Ms9CCCU6b+E5ArFhYQIsXiDbgHWbD
+tZRSm2GEgnDGAvAAAACmphbWVzQG5ld3Q=
+-----END OPENSSH PRIVATE KEY-----
diff --git a/tests/integration_tests/assets/keys/id_rsa.test2.pub b/tests/integration_tests/assets/keys/id_rsa.test2.pub
new file mode 100644
index 00000000..f3831a57
--- /dev/null
+++ b/tests/integration_tests/assets/keys/id_rsa.test2.pub
@@ -0,0 +1 @@
+ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8rnQPY9Y5ziKTIdVElLq0OGrOMvlwqKK8gPinVfwFgJXDzdcAQY4uci1TJVcc0AOWHp+lWrU1joDYlW3KCg8XpkXHymHshYyaeEN2fEsvIaxuF3UzW2JckP9H7dacYdEng8qtBq8wuCodGuJ5XdBVV+MVJ6jqNf/hOu4/pma8hMxlYmtdoamHEn+k/KQR2Q6LwCYpT0U2+KiPI9Lac22P0H/rfkh2Ba+t3tV/lhyzD0xHnktZKJ3BrznpEfrZiXoukQZMebWUmIyTW7zwk9Kq+iGFpSQsqQWlxDBwHSbvpbUo7KWUmyZfxs1euVmwj5aKJgjteXm9CbbXsMS415q+08C0MzG7weZODZQnnk1rF6Ft97aXHaTmRgYbDeLU7U5U3alnb84HvUu5xh3/mrE9rPTfCzBwY4lgY+Q1zjS90RL9Jxzti3weydm7OqTI6DOfQeJQLOhhRgthOkt/7IabDFLIARjTnpo5+QKxugoc4qI6aUnE1plkVCRfV696ngM= test2@host
diff --git a/tests/integration_tests/assets/keys/id_rsa.test3 b/tests/integration_tests/assets/keys/id_rsa.test3
new file mode 100644
index 00000000..2596c762
--- /dev/null
+++ b/tests/integration_tests/assets/keys/id_rsa.test3
@@ -0,0 +1,38 @@
+-----BEGIN OPENSSH PRIVATE KEY-----
+b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn
+NhAAAAAwEAAQAAAYEApPG4MdkYQKD57/qreFrh9GRC22y66qZOWZWRjC887rrbvBzO69hV
+yJpTIXleJEvpWiHYcjMR5G6NNFsnNtZ4fxDqmSc4vcFj53JsE/XNqLKq6psXadCb5vkNpG
+bxA+Z5bJlzJ969PgJIIEbgc86sei4kgR2MuPWqtZbY5GkpNCTqWuLYeFK+14oFruA2nyWH
+9MOIRDHK/d597psHy+LTMtymO7ZPhO571abKw6jvvwiSeDxVE9kV7KAQIuM9/S3gftvgQQ
+ron3GL34pgmIabdSGdbfHqGDooryJhlbquJZELBN236KgRNTCAjVvUzjjQr1eRP3xssGwV
+O6ECBGCQLl/aYogAgtwnwj9iXqtfiLK3EwlgjquU4+JQ0CVtLhG3gIZB+qoMThco0pmHTr
+jtfQCwrztsBBFunSa2/CstuV1mQ5O5ZrZ6ACo9yPRBNkns6+CiKdtMtCtzi3k2RDz9jpYm
+Pcak03Lr7IkdC1Tp6+jA+//yPHSO1o4CqW89IQzNAAAFgEUd7lZFHe5WAAAAB3NzaC1yc2
+EAAAGBAKTxuDHZGECg+e/6q3ha4fRkQttsuuqmTlmVkYwvPO6627wczuvYVciaUyF5XiRL
+6Voh2HIzEeRujTRbJzbWeH8Q6pknOL3BY+dybBP1zaiyquqbF2nQm+b5DaRm8QPmeWyZcy
+fevT4CSCBG4HPOrHouJIEdjLj1qrWW2ORpKTQk6lri2HhSvteKBa7gNp8lh/TDiEQxyv3e
+fe6bB8vi0zLcpju2T4Tue9WmysOo778Ikng8VRPZFeygECLjPf0t4H7b4EEK6J9xi9+KYJ
+iGm3UhnW3x6hg6KK8iYZW6riWRCwTdt+ioETUwgI1b1M440K9XkT98bLBsFTuhAgRgkC5f
+2mKIAILcJ8I/Yl6rX4iytxMJYI6rlOPiUNAlbS4Rt4CGQfqqDE4XKNKZh0647X0AsK87bA
+QRbp0mtvwrLbldZkOTuWa2egAqPcj0QTZJ7OvgoinbTLQrc4t5NkQ8/Y6WJj3GpNNy6+yJ
+HQtU6evowPv/8jx0jtaOAqlvPSEMzQAAAAMBAAEAAAGAGaqbdPZJNdVWzyb8g6/wtSzc0n
+Qq6dSTIJGLonq/So69HpqFAGIbhymsger24UMGvsXBfpO/1wH06w68HWZmPa+OMeLOi4iK
+WTuO4dQ/+l5DBlq32/lgKSLcIpb6LhcxEdsW9j9Mx1dnjc45owun/yMq/wRwH1/q/nLIsV
+JD3R9ZcGcYNDD8DWIm3D17gmw+qbG7hJES+0oh4n0xS2KyZpm7LFOEMDVEA8z+hE/HbryQ
+vjD1NC91n+qQWD1wKfN3WZDRwip3z1I5VHMpvXrA/spHpa9gzHK5qXNmZSz3/dfA1zHjCR
+2dHjJnrIUH8nyPfw8t+COC+sQBL3Nr0KUWEFPRM08cOcQm4ctzg17aDIZBONjlZGKlReR8
+1zfAw84Q70q2spLWLBLXSFblHkaOfijEbejIbaz2UUEQT27WD7RHAORdQlkx7eitk66T9d
+DzIq/cpYhm5Fs8KZsh3PLldp9nsHbD2Oa9J9LJyI4ryuIW0mVwRdvPSiiYi3K+mDCpAAAA
+wBe+ugEEJ+V7orb1f4Zez0Bd4FNkEc52WZL4CWbaCtM+ZBg5KnQ6xW14JdC8IS9cNi/I5P
+yLsBvG4bWPLGgQruuKY6oLueD6BFnKjqF6ACUCiSQldh4BAW1nYc2U48+FFvo3ZQyudFSy
+QEFlhHmcaNMDo0AIJY5Xnq2BG3nEX7AqdtZ8hhenHwLCRQJatDwSYBHDpSDdh9vpTnGp/2
+0jBz25Ko4UANzvSAc3sA4yN3jfpoM366TgdNf8x3g1v7yljQAAAMEA0HSQjzH5nhEwB58k
+mYYxnBYp1wb86zIuVhAyjZaeinvBQSTmLow8sXIHcCVuD3CgBezlU2SX5d9YuvRU9rcthi
+uzn4wWnbnzYy4SwzkMJXchUAkumFVD8Hq5TNPh2Z+033rLLE08EhYypSeVpuzdpFoStaS9
+3DUZA2bR/zLZI9MOVZRUcYImNegqIjOYHY8Sbj3/0QPV6+WpUJFMPvvedWhfaOsRMTA6nr
+VLG4pxkrieVl0UtuRGbzD/exXhXVi7AAAAwQDKkJj4ez/+KZFYlZQKiV0BrfUFcgS6ElFM
+2CZIEagCtu8eedrwkNqx2FUX33uxdvUTr4c9I3NvWeEEGTB9pgD4lh1x/nxfuhyGXtimFM
+GnznGV9oyz0DmKlKiKSEGwWf5G+/NiiCwwVJ7wsQQm7TqNtkQ9b8MhWWXC7xlXKUs7dmTa
+e8AqAndCCMEnbS1UQFO/R5PNcZXkFWDggLQ/eWRYKlrXgdnUgH6h0saOcViKpNJBUXb3+x
+eauhOY52PS/BcAAAAKamFtZXNAbmV3dAE=
+-----END OPENSSH PRIVATE KEY-----
diff --git a/tests/integration_tests/assets/keys/id_rsa.test3.pub b/tests/integration_tests/assets/keys/id_rsa.test3.pub
new file mode 100644
index 00000000..057db632
--- /dev/null
+++ b/tests/integration_tests/assets/keys/id_rsa.test3.pub
@@ -0,0 +1 @@
+ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCk8bgx2RhAoPnv+qt4WuH0ZELbbLrqpk5ZlZGMLzzuutu8HM7r2FXImlMheV4kS+laIdhyMxHkbo00Wyc21nh/EOqZJzi9wWPncmwT9c2osqrqmxdp0Jvm+Q2kZvED5nlsmXMn3r0+AkggRuBzzqx6LiSBHYy49aq1ltjkaSk0JOpa4th4Ur7XigWu4DafJYf0w4hEMcr93n3umwfL4tMy3KY7tk+E7nvVpsrDqO+/CJJ4PFUT2RXsoBAi4z39LeB+2+BBCuifcYvfimCYhpt1IZ1t8eoYOiivImGVuq4lkQsE3bfoqBE1MICNW9TOONCvV5E/fGywbBU7oQIEYJAuX9piiACC3CfCP2Jeq1+IsrcTCWCOq5Tj4lDQJW0uEbeAhkH6qgxOFyjSmYdOuO19ALCvO2wEEW6dJrb8Ky25XWZDk7lmtnoAKj3I9EE2Sezr4KIp20y0K3OLeTZEPP2OliY9xqTTcuvsiR0LVOnr6MD7//I8dI7WjgKpbz0hDM0= test3@host
diff --git a/tests/integration_tests/modules/test_ssh_keysfile.py b/tests/integration_tests/modules/test_ssh_keysfile.py
new file mode 100644
index 00000000..f82d7649
--- /dev/null
+++ b/tests/integration_tests/modules/test_ssh_keysfile.py
@@ -0,0 +1,85 @@
+import paramiko
+import pytest
+from io import StringIO
+from paramiko.ssh_exception import SSHException
+
+from tests.integration_tests.instances import IntegrationInstance
+from tests.integration_tests.util import get_test_rsa_keypair
+
+TEST_USER1_KEYS = get_test_rsa_keypair('test1')
+TEST_USER2_KEYS = get_test_rsa_keypair('test2')
+TEST_DEFAULT_KEYS = get_test_rsa_keypair('test3')
+
+USERDATA = """\
+#cloud-config
+bootcmd:
+ - sed -i 's;#AuthorizedKeysFile.*;AuthorizedKeysFile /etc/ssh/authorized_keys %h/.ssh/authorized_keys2;' /etc/ssh/sshd_config
+ssh_authorized_keys:
+ - {default}
+users:
+- default
+- name: test_user1
+ ssh_authorized_keys:
+ - {user1}
+- name: test_user2
+ ssh_authorized_keys:
+ - {user2}
+""".format( # noqa: E501
+ default=TEST_DEFAULT_KEYS.public_key,
+ user1=TEST_USER1_KEYS.public_key,
+ user2=TEST_USER2_KEYS.public_key,
+)
+
+
+@pytest.mark.ubuntu
+@pytest.mark.user_data(USERDATA)
+def test_authorized_keys(client: IntegrationInstance):
+ expected_keys = [
+ ('test_user1', '/home/test_user1/.ssh/authorized_keys2',
+ TEST_USER1_KEYS),
+ ('test_user2', '/home/test_user2/.ssh/authorized_keys2',
+ TEST_USER2_KEYS),
+ ('ubuntu', '/home/ubuntu/.ssh/authorized_keys2',
+ TEST_DEFAULT_KEYS),
+ ('root', '/root/.ssh/authorized_keys2', TEST_DEFAULT_KEYS),
+ ]
+
+ for user, filename, keys in expected_keys:
+ contents = client.read_from_file(filename)
+ if user in ['ubuntu', 'root']:
+ # Our personal public key gets added by pycloudlib
+ lines = contents.split('\n')
+ assert len(lines) == 2
+ assert keys.public_key.strip() in contents
+ else:
+ assert contents.strip() == keys.public_key.strip()
+
+ # Ensure we can actually connect
+ ssh = paramiko.SSHClient()
+ ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
+ paramiko_key = paramiko.RSAKey.from_private_key(StringIO(
+ keys.private_key))
+
+ # Will fail with AuthenticationException if
+ # we cannot connect
+ ssh.connect(
+ client.instance.ip,
+ username=user,
+ pkey=paramiko_key,
+ look_for_keys=False,
+ allow_agent=False,
+ )
+
+ # Ensure other uses can't connect using our key
+ other_users = [u[0] for u in expected_keys if u[2] != keys]
+ for other_user in other_users:
+ with pytest.raises(SSHException):
+ print('trying to connect as {} with key from {}'.format(
+ other_user, user))
+ ssh.connect(
+ client.instance.ip,
+ username=other_user,
+ pkey=paramiko_key,
+ look_for_keys=False,
+ allow_agent=False,
+ )
diff --git a/tests/unittests/test_sshutil.py b/tests/unittests/test_sshutil.py
index fd1d1bac..bcb8044f 100644
--- a/tests/unittests/test_sshutil.py
+++ b/tests/unittests/test_sshutil.py
@@ -570,20 +570,33 @@ class TestBasicAuthorizedKeyParse(test_helpers.CiTestCase):
ssh_util.render_authorizedkeysfile_paths(
"%h/.keys", "/homedirs/bobby", "bobby"))
+ def test_all(self):
+ self.assertEqual(
+ ["/homedirs/bobby/.keys", "/homedirs/bobby/.secret/keys",
+ "/keys/path1", "/opt/bobby/keys"],
+ ssh_util.render_authorizedkeysfile_paths(
+ "%h/.keys .secret/keys /keys/path1 /opt/%u/keys",
+ "/homedirs/bobby", "bobby"))
+
class TestMultipleSshAuthorizedKeysFile(test_helpers.CiTestCase):
@patch("cloudinit.ssh_util.pwd.getpwnam")
def test_multiple_authorizedkeys_file_order1(self, m_getpwnam):
- fpw = FakePwEnt(pw_name='bobby', pw_dir='/home2/bobby')
+ fpw = FakePwEnt(pw_name='bobby', pw_dir='/tmp/home2/bobby')
m_getpwnam.return_value = fpw
- authorized_keys = self.tmp_path('authorized_keys')
+ user_ssh_folder = "%s/.ssh" % fpw.pw_dir
+
+ # /tmp/home2/bobby/.ssh/authorized_keys = rsa
+ authorized_keys = self.tmp_path('authorized_keys', dir=user_ssh_folder)
util.write_file(authorized_keys, VALID_CONTENT['rsa'])
- user_keys = self.tmp_path('user_keys')
+ # /tmp/home2/bobby/.ssh/user_keys = dsa
+ user_keys = self.tmp_path('user_keys', dir=user_ssh_folder)
util.write_file(user_keys, VALID_CONTENT['dsa'])
- sshd_config = self.tmp_path('sshd_config')
+ # /tmp/sshd_config
+ sshd_config = self.tmp_path('sshd_config', dir="/tmp")
util.write_file(
sshd_config,
"AuthorizedKeysFile %s %s" % (authorized_keys, user_keys)
@@ -593,33 +606,244 @@ class TestMultipleSshAuthorizedKeysFile(test_helpers.CiTestCase):
fpw.pw_name, sshd_config)
content = ssh_util.update_authorized_keys(auth_key_entries, [])
- self.assertEqual("%s/.ssh/authorized_keys" % fpw.pw_dir, auth_key_fn)
+ self.assertEqual(user_keys, auth_key_fn)
self.assertTrue(VALID_CONTENT['rsa'] in content)
self.assertTrue(VALID_CONTENT['dsa'] in content)
@patch("cloudinit.ssh_util.pwd.getpwnam")
def test_multiple_authorizedkeys_file_order2(self, m_getpwnam):
- fpw = FakePwEnt(pw_name='suzie', pw_dir='/home/suzie')
+ fpw = FakePwEnt(pw_name='suzie', pw_dir='/tmp/home/suzie')
m_getpwnam.return_value = fpw
- authorized_keys = self.tmp_path('authorized_keys')
+ user_ssh_folder = "%s/.ssh" % fpw.pw_dir
+
+ # /tmp/home/suzie/.ssh/authorized_keys = rsa
+ authorized_keys = self.tmp_path('authorized_keys', dir=user_ssh_folder)
util.write_file(authorized_keys, VALID_CONTENT['rsa'])
- user_keys = self.tmp_path('user_keys')
+ # /tmp/home/suzie/.ssh/user_keys = dsa
+ user_keys = self.tmp_path('user_keys', dir=user_ssh_folder)
util.write_file(user_keys, VALID_CONTENT['dsa'])
- sshd_config = self.tmp_path('sshd_config')
+ # /tmp/sshd_config
+ sshd_config = self.tmp_path('sshd_config', dir="/tmp")
util.write_file(
sshd_config,
- "AuthorizedKeysFile %s %s" % (authorized_keys, user_keys)
+ "AuthorizedKeysFile %s %s" % (user_keys, authorized_keys)
)
(auth_key_fn, auth_key_entries) = ssh_util.extract_authorized_keys(
- fpw.pw_name, sshd_config
+ fpw.pw_name, sshd_config)
+ content = ssh_util.update_authorized_keys(auth_key_entries, [])
+
+ self.assertEqual(authorized_keys, auth_key_fn)
+ self.assertTrue(VALID_CONTENT['rsa'] in content)
+ self.assertTrue(VALID_CONTENT['dsa'] in content)
+
+ @patch("cloudinit.ssh_util.pwd.getpwnam")
+ def test_multiple_authorizedkeys_file_local_global(self, m_getpwnam):
+ fpw = FakePwEnt(pw_name='bobby', pw_dir='/tmp/home2/bobby')
+ m_getpwnam.return_value = fpw
+ user_ssh_folder = "%s/.ssh" % fpw.pw_dir
+
+ # /tmp/home2/bobby/.ssh/authorized_keys = rsa
+ authorized_keys = self.tmp_path('authorized_keys', dir=user_ssh_folder)
+ util.write_file(authorized_keys, VALID_CONTENT['rsa'])
+
+ # /tmp/home2/bobby/.ssh/user_keys = dsa
+ user_keys = self.tmp_path('user_keys', dir=user_ssh_folder)
+ util.write_file(user_keys, VALID_CONTENT['dsa'])
+
+ # /tmp/etc/ssh/authorized_keys = ecdsa
+ authorized_keys_global = self.tmp_path('etc/ssh/authorized_keys',
+ dir="/tmp")
+ util.write_file(authorized_keys_global, VALID_CONTENT['ecdsa'])
+
+ # /tmp/sshd_config
+ sshd_config = self.tmp_path('sshd_config', dir="/tmp")
+ util.write_file(
+ sshd_config,
+ "AuthorizedKeysFile %s %s %s" % (authorized_keys_global,
+ user_keys, authorized_keys)
+ )
+
+ (auth_key_fn, auth_key_entries) = ssh_util.extract_authorized_keys(
+ fpw.pw_name, sshd_config)
+ content = ssh_util.update_authorized_keys(auth_key_entries, [])
+
+ self.assertEqual(authorized_keys, auth_key_fn)
+ self.assertTrue(VALID_CONTENT['rsa'] in content)
+ self.assertTrue(VALID_CONTENT['ecdsa'] in content)
+ self.assertTrue(VALID_CONTENT['dsa'] in content)
+
+ @patch("cloudinit.ssh_util.pwd.getpwnam")
+ def test_multiple_authorizedkeys_file_local_global2(self, m_getpwnam):
+ fpw = FakePwEnt(pw_name='bobby', pw_dir='/tmp/home2/bobby')
+ m_getpwnam.return_value = fpw
+ user_ssh_folder = "%s/.ssh" % fpw.pw_dir
+
+ # /tmp/home2/bobby/.ssh/authorized_keys2 = rsa
+ authorized_keys = self.tmp_path('authorized_keys2',
+ dir=user_ssh_folder)
+ util.write_file(authorized_keys, VALID_CONTENT['rsa'])
+
+ # /tmp/home2/bobby/.ssh/user_keys3 = dsa
+ user_keys = self.tmp_path('user_keys3', dir=user_ssh_folder)
+ util.write_file(user_keys, VALID_CONTENT['dsa'])
+
+ # /tmp/etc/ssh/authorized_keys = ecdsa
+ authorized_keys_global = self.tmp_path('etc/ssh/authorized_keys',
+ dir="/tmp")
+ util.write_file(authorized_keys_global, VALID_CONTENT['ecdsa'])
+
+ # /tmp/sshd_config
+ sshd_config = self.tmp_path('sshd_config', dir="/tmp")
+ util.write_file(
+ sshd_config,
+ "AuthorizedKeysFile %s %s %s" % (authorized_keys_global,
+ authorized_keys, user_keys)
+ )
+
+ (auth_key_fn, auth_key_entries) = ssh_util.extract_authorized_keys(
+ fpw.pw_name, sshd_config)
+ content = ssh_util.update_authorized_keys(auth_key_entries, [])
+
+ self.assertEqual(user_keys, auth_key_fn)
+ self.assertTrue(VALID_CONTENT['rsa'] in content)
+ self.assertTrue(VALID_CONTENT['ecdsa'] in content)
+ self.assertTrue(VALID_CONTENT['dsa'] in content)
+
+ @patch("cloudinit.ssh_util.pwd.getpwnam")
+ def test_multiple_authorizedkeys_file_global(self, m_getpwnam):
+ fpw = FakePwEnt(pw_name='bobby', pw_dir='/tmp/home2/bobby')
+ m_getpwnam.return_value = fpw
+
+ # /tmp/etc/ssh/authorized_keys = rsa
+ authorized_keys_global = self.tmp_path('etc/ssh/authorized_keys',
+ dir="/tmp")
+ util.write_file(authorized_keys_global, VALID_CONTENT['rsa'])
+
+ # /tmp/sshd_config
+ sshd_config = self.tmp_path('sshd_config')
+ util.write_file(
+ sshd_config,
+ "AuthorizedKeysFile %s" % (authorized_keys_global)
)
+
+ (auth_key_fn, auth_key_entries) = ssh_util.extract_authorized_keys(
+ fpw.pw_name, sshd_config)
content = ssh_util.update_authorized_keys(auth_key_entries, [])
self.assertEqual("%s/.ssh/authorized_keys" % fpw.pw_dir, auth_key_fn)
self.assertTrue(VALID_CONTENT['rsa'] in content)
+
+ @patch("cloudinit.ssh_util.pwd.getpwnam")
+ def test_multiple_authorizedkeys_file_multiuser(self, m_getpwnam):
+ fpw = FakePwEnt(pw_name='bobby', pw_dir='/tmp/home2/bobby')
+ m_getpwnam.return_value = fpw
+ user_ssh_folder = "%s/.ssh" % fpw.pw_dir
+ # /tmp/home2/bobby/.ssh/authorized_keys2 = rsa
+ authorized_keys = self.tmp_path('authorized_keys2',
+ dir=user_ssh_folder)
+ util.write_file(authorized_keys, VALID_CONTENT['rsa'])
+ # /tmp/home2/bobby/.ssh/user_keys3 = dsa
+ user_keys = self.tmp_path('user_keys3', dir=user_ssh_folder)
+ util.write_file(user_keys, VALID_CONTENT['dsa'])
+
+ fpw2 = FakePwEnt(pw_name='suzie', pw_dir='/tmp/home/suzie')
+ user_ssh_folder = "%s/.ssh" % fpw2.pw_dir
+ # /tmp/home/suzie/.ssh/authorized_keys2 = ssh-xmss@openssh.com
+ authorized_keys2 = self.tmp_path('authorized_keys2',
+ dir=user_ssh_folder)
+ util.write_file(authorized_keys2,
+ VALID_CONTENT['ssh-xmss@openssh.com'])
+
+ # /tmp/etc/ssh/authorized_keys = ecdsa
+ authorized_keys_global = self.tmp_path('etc/ssh/authorized_keys2',
+ dir="/tmp")
+ util.write_file(authorized_keys_global, VALID_CONTENT['ecdsa'])
+
+ # /tmp/sshd_config
+ sshd_config = self.tmp_path('sshd_config', dir="/tmp")
+ util.write_file(
+ sshd_config,
+ "AuthorizedKeysFile %s %%h/.ssh/authorized_keys2 %s" %
+ (authorized_keys_global, user_keys)
+ )
+
+ # process first user
+ (auth_key_fn, auth_key_entries) = ssh_util.extract_authorized_keys(
+ fpw.pw_name, sshd_config)
+ content = ssh_util.update_authorized_keys(auth_key_entries, [])
+
+ self.assertEqual(user_keys, auth_key_fn)
+ self.assertTrue(VALID_CONTENT['rsa'] in content)
+ self.assertTrue(VALID_CONTENT['ecdsa'] in content)
+ self.assertTrue(VALID_CONTENT['dsa'] in content)
+ self.assertFalse(VALID_CONTENT['ssh-xmss@openssh.com'] in content)
+
+ m_getpwnam.return_value = fpw2
+ # process second user
+ (auth_key_fn, auth_key_entries) = ssh_util.extract_authorized_keys(
+ fpw2.pw_name, sshd_config)
+ content = ssh_util.update_authorized_keys(auth_key_entries, [])
+
+ self.assertEqual(authorized_keys2, auth_key_fn)
+ self.assertTrue(VALID_CONTENT['ssh-xmss@openssh.com'] in content)
+ self.assertTrue(VALID_CONTENT['ecdsa'] in content)
+ self.assertTrue(VALID_CONTENT['dsa'] in content)
+ self.assertFalse(VALID_CONTENT['rsa'] in content)
+
+ @patch("cloudinit.ssh_util.pwd.getpwnam")
+ def test_multiple_authorizedkeys_file_multiuser2(self, m_getpwnam):
+ fpw = FakePwEnt(pw_name='bobby', pw_dir='/tmp/home/bobby')
+ m_getpwnam.return_value = fpw
+ user_ssh_folder = "%s/.ssh" % fpw.pw_dir
+ # /tmp/home/bobby/.ssh/authorized_keys2 = rsa
+ authorized_keys = self.tmp_path('authorized_keys2',
+ dir=user_ssh_folder)
+ util.write_file(authorized_keys, VALID_CONTENT['rsa'])
+ # /tmp/home/bobby/.ssh/user_keys3 = dsa
+ user_keys = self.tmp_path('user_keys3', dir=user_ssh_folder)
+ util.write_file(user_keys, VALID_CONTENT['dsa'])
+
+ fpw2 = FakePwEnt(pw_name='badguy', pw_dir='/tmp/home/badguy')
+ user_ssh_folder = "%s/.ssh" % fpw2.pw_dir
+ # /tmp/home/badguy/home/bobby = ""
+ authorized_keys2 = self.tmp_path('home/bobby', dir="/tmp/home/badguy")
+
+ # /tmp/etc/ssh/authorized_keys = ecdsa
+ authorized_keys_global = self.tmp_path('etc/ssh/authorized_keys2',
+ dir="/tmp")
+ util.write_file(authorized_keys_global, VALID_CONTENT['ecdsa'])
+
+ # /tmp/sshd_config
+ sshd_config = self.tmp_path('sshd_config', dir="/tmp")
+ util.write_file(
+ sshd_config,
+ "AuthorizedKeysFile %s %%h/.ssh/authorized_keys2 %s %s" %
+ (authorized_keys_global, user_keys, authorized_keys2)
+ )
+
+ # process first user
+ (auth_key_fn, auth_key_entries) = ssh_util.extract_authorized_keys(
+ fpw.pw_name, sshd_config)
+ content = ssh_util.update_authorized_keys(auth_key_entries, [])
+
+ self.assertEqual(user_keys, auth_key_fn)
+ self.assertTrue(VALID_CONTENT['rsa'] in content)
+ self.assertTrue(VALID_CONTENT['ecdsa'] in content)
+ self.assertTrue(VALID_CONTENT['dsa'] in content)
+
+ m_getpwnam.return_value = fpw2
+ # process second user
+ (auth_key_fn, auth_key_entries) = ssh_util.extract_authorized_keys(
+ fpw2.pw_name, sshd_config)
+ content = ssh_util.update_authorized_keys(auth_key_entries, [])
+
+ # badguy should not take the key from the other user!
+ self.assertEqual(authorized_keys2, auth_key_fn)
+ self.assertTrue(VALID_CONTENT['ecdsa'] in content)
self.assertTrue(VALID_CONTENT['dsa'] in content)
+ self.assertFalse(VALID_CONTENT['rsa'] in content)
# vi: ts=4 expandtab
--
2.27.0
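
The new test_all case above pins down how an AuthorizedKeysFile value is rendered per user: %h expands to the user's home directory, %u to the user name, and relative entries are resolved against the home directory. A minimal standalone sketch of that expansion, for illustration only (render_paths_sketch is a hypothetical name, not cloudinit.ssh_util.render_authorizedkeysfile_paths itself):

import os

# Expand AuthorizedKeysFile tokens the way the tests above expect:
# %h -> home directory, %u -> user name, relative paths -> under the home dir.
def render_paths_sketch(value, homedir, username):
    rendered = []
    for path in value.split():
        path = path.replace("%h", homedir).replace("%u", username)
        if not path.startswith("/"):
            path = os.path.join(homedir, path)
        rendered.append(path)
    return rendered

# Reproduces the expectation encoded in test_all:
# ['/homedirs/bobby/.keys', '/homedirs/bobby/.secret/keys',
#  '/keys/path1', '/opt/bobby/keys']
print(render_paths_sketch(
    "%h/.keys .secret/keys /keys/path1 /opt/%u/keys",
    "/homedirs/bobby", "bobby"))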

View File

@ -1,85 +0,0 @@
From 7d4e16bfc1cefbdd4d1477480b02b1d6c1399e4d Mon Sep 17 00:00:00 2001
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Date: Mon, 20 Sep 2021 12:16:36 +0200
Subject: [PATCH] ssh_utils.py: ignore when sshd_config options are not
key/value pairs (#1007)
RH-Author: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RH-MergeRequest: 31: ssh_utils.py: ignore when sshd_config options are not key/value pairs (#1007)
RH-Commit: [1/1] 9007fb8a116e98036ff17df0168a76e9a5843671 (eesposit/cloud-init)
RH-Bugzilla: 1862933
RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com>
RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com>
TESTED: by me
BREW: 39832462
commit 2ce857248162957a785af61c135ca8433fdbbcde
Author: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Date: Wed Sep 8 02:08:36 2021 +0200
ssh_utils.py: ignore when sshd_config options are not key/value pairs (#1007)
As specified in #LP 1845552,
In cloudinit/ssh_util.py, in parse_ssh_config_lines(), we attempt to
parse each line of sshd_config. This function expects each line to
be one of the following forms:
\# comment
key value
key=value
However, options like DenyGroups and DenyUsers are specified to
*optionally* accept values in sshd_config.
Cloud-init should comply with this and skip the option if a value
is not provided.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
cloudinit/ssh_util.py | 8 +++++++-
tests/unittests/test_sshutil.py | 8 ++++++++
2 files changed, 15 insertions(+), 1 deletion(-)
diff --git a/cloudinit/ssh_util.py b/cloudinit/ssh_util.py
index 9ccadf09..33679dcc 100644
--- a/cloudinit/ssh_util.py
+++ b/cloudinit/ssh_util.py
@@ -484,7 +484,13 @@ def parse_ssh_config_lines(lines):
try:
key, val = line.split(None, 1)
except ValueError:
- key, val = line.split('=', 1)
+ try:
+ key, val = line.split('=', 1)
+ except ValueError:
+ LOG.debug(
+ "sshd_config: option \"%s\" has no key/value pair,"
+ " skipping it", line)
+ continue
ret.append(SshdConfigLine(line, key, val))
return ret
diff --git a/tests/unittests/test_sshutil.py b/tests/unittests/test_sshutil.py
index a66788bf..08e20050 100644
--- a/tests/unittests/test_sshutil.py
+++ b/tests/unittests/test_sshutil.py
@@ -525,6 +525,14 @@ class TestUpdateSshConfigLines(test_helpers.CiTestCase):
self.assertEqual([self.pwauth], result)
self.check_line(lines[-1], self.pwauth, "no")
+ def test_option_without_value(self):
+ """Implementation only accepts key-value pairs."""
+ extended_exlines = self.exlines.copy()
+ denyusers_opt = "DenyUsers"
+ extended_exlines.append(denyusers_opt)
+ lines = ssh_util.parse_ssh_config_lines(list(extended_exlines))
+ self.assertNotIn(denyusers_opt, str(lines))
+
def test_single_option_updated(self):
"""A single update should have change made and line updated."""
opt, val = ("UsePAM", "no")
--
2.27.0
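
The commit message above describes sshd_config options such as DenyUsers and DenyGroups that may legitimately appear without a value; the patched parse_ssh_config_lines() now skips such lines instead of failing on the unpack. A standalone sketch of that parsing strategy (illustrative only, with a hypothetical function name; the real logic lives in cloudinit/ssh_util.py):

# Try "key value", then "key=value", and skip options that carry no value
# (e.g. a bare "DenyUsers" line) instead of raising ValueError.
def parse_sshd_options_sketch(lines):
    parsed = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # ignore blanks and comments
        try:
            key, val = line.split(None, 1)
        except ValueError:
            try:
                key, val = line.split("=", 1)
            except ValueError:
                # Option with no value: skip it rather than crash.
                continue
        parsed.append((key, val))
    return parsed

print(parse_sshd_options_sketch(
    ["PasswordAuthentication no", "DenyUsers", "UsePAM=yes"]))
# -> [('PasswordAuthentication', 'no'), ('UsePAM', 'yes')]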

View File

@ -0,0 +1,47 @@
From 866817455283619c706e837a77fb31adf3bdd3ce Mon Sep 17 00:00:00 2001
From: Ani Sinha <anisinha@redhat.com>
Date: Fri, 23 Jun 2023 17:54:04 +0530
Subject: [PATCH 07/11] test fixes: changes to apply RHEL specific config
settings to tests
X-downstream-only: true
fixes: c4d66915520554adedff9b ("Add initial redhat changes")
Signed-off-by: Ani Sinha <anisinha@redhat.com>
---
tests/unittests/cmd/test_main.py | 17 +++++++++++------
1 file changed, 11 insertions(+), 6 deletions(-)
diff --git a/tests/unittests/cmd/test_main.py b/tests/unittests/cmd/test_main.py
index e9ad0bb8..435d3be3 100644
--- a/tests/unittests/cmd/test_main.py
+++ b/tests/unittests/cmd/test_main.py
@@ -119,14 +119,19 @@ class TestMain(FilesystemMockingTestCase):
{
"def_log_file": "/var/log/cloud-init.log",
"log_cfgs": [],
- "syslog_fix_perms": [
- "syslog:adm",
- "root:adm",
- "root:wheel",
- "root:root",
- ],
"vendor_data": {"enabled": True, "prefix": []},
"vendor_data2": {"enabled": True, "prefix": []},
+ "syslog_fix_perms": [],
+ "ssh_deletekeys": False,
+ "ssh_genkeytypes": [],
+ "mount_default_fields": [
+ None,
+ None,
+ "auto",
+ "defaults,nofail",
+ "0",
+ "2",
+ ],
}
)
updated_cfg.pop("system_info")
--
2.39.3

View File

@ -0,0 +1,286 @@
From 3a070f23440c9eb6e0e5fb3605e36285e8a5b727 Mon Sep 17 00:00:00 2001
From: Ani Sinha <anisinha@redhat.com>
Date: Fri, 23 Jun 2023 16:54:24 +0530
Subject: [PATCH 03/11] test fixes: remove NM_CONTROLLED=no from tests
X-downstream-only: true
fixes: b3b96bff187e9 ("Do not write NM_CONTROLLED=no in generated interface config files")
Signed-off-by: Ani Sinha <anisinha@redhat.com>
---
tests/unittests/cmd/devel/test_net_convert.py | 1 -
tests/unittests/distros/test_netconfig.py | 8 -------
tests/unittests/test_net.py | 23 -------------------
3 files changed, 32 deletions(-)
diff --git a/tests/unittests/cmd/devel/test_net_convert.py b/tests/unittests/cmd/devel/test_net_convert.py
index 71654750..e0114a2e 100644
--- a/tests/unittests/cmd/devel/test_net_convert.py
+++ b/tests/unittests/cmd/devel/test_net_convert.py
@@ -62,7 +62,6 @@ SAMPLE_SYSCONFIG_CONTENT = """\
#
BOOTPROTO=dhcp
DEVICE=eth0
-NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
diff --git a/tests/unittests/distros/test_netconfig.py b/tests/unittests/distros/test_netconfig.py
index b1c89ce3..7f9ac054 100644
--- a/tests/unittests/distros/test_netconfig.py
+++ b/tests/unittests/distros/test_netconfig.py
@@ -723,7 +723,6 @@ class TestNetCfgDistroRedhat(TestNetCfgDistroBase):
GATEWAY=192.168.1.254
IPADDR=192.168.1.5
NETMASK=255.255.255.0
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
@@ -733,7 +732,6 @@ class TestNetCfgDistroRedhat(TestNetCfgDistroBase):
"""\
BOOTPROTO=dhcp
DEVICE=eth1
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
@@ -764,7 +762,6 @@ class TestNetCfgDistroRedhat(TestNetCfgDistroBase):
IPV6_AUTOCONF=no
IPV6_DEFAULTGW=2607:f0d0:1002:0011::1
IPV6_FORCE_ACCEPT_RA=no
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
@@ -774,7 +771,6 @@ class TestNetCfgDistroRedhat(TestNetCfgDistroBase):
"""\
BOOTPROTO=dhcp
DEVICE=eth1
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
@@ -821,7 +817,6 @@ class TestNetCfgDistroRedhat(TestNetCfgDistroBase):
HWADDR=00:16:3e:60:7c:df
IPADDR=192.10.1.2
NETMASK=255.255.255.0
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
@@ -833,7 +828,6 @@ class TestNetCfgDistroRedhat(TestNetCfgDistroBase):
DEVICE=infra0
IPADDR=10.0.1.2
NETMASK=255.255.0.0
- NM_CONTROLLED=no
ONBOOT=yes
PHYSDEV=eth0
USERCTL=no
@@ -869,7 +863,6 @@ class TestNetCfgDistroRedhat(TestNetCfgDistroBase):
DEVICE=eth0
IPADDR=192.10.1.2
NETMASK=255.255.255.0
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
@@ -881,7 +874,6 @@ class TestNetCfgDistroRedhat(TestNetCfgDistroBase):
DEVICE=eth0.1001
IPADDR=10.0.1.2
NETMASK=255.255.0.0
- NM_CONTROLLED=no
ONBOOT=yes
PHYSDEV=eth0
USERCTL=no
diff --git a/tests/unittests/test_net.py b/tests/unittests/test_net.py
index 7abe61b9..6274f12d 100644
--- a/tests/unittests/test_net.py
+++ b/tests/unittests/test_net.py
@@ -1495,7 +1495,6 @@ NETWORK_CONFIGS = {
DHCPV6C=yes
IPV6INIT=yes
DEVICE=iface0
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
@@ -1586,7 +1585,6 @@ NETWORK_CONFIGS = {
IPV6INIT=yes
IPV6_FORCE_ACCEPT_RA=yes
DEVICE=iface0
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
@@ -1662,7 +1660,6 @@ NETWORK_CONFIGS = {
IPV6INIT=yes
IPV6_FORCE_ACCEPT_RA=no
DEVICE=iface0
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
@@ -1726,7 +1723,6 @@ NETWORK_CONFIGS = {
IPV6_AUTOCONF=yes
IPV6INIT=yes
DEVICE=iface0
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
@@ -1781,7 +1777,6 @@ NETWORK_CONFIGS = {
IPV6_AUTOCONF=no
IPV6_FORCE_ACCEPT_RA=no
DEVICE=iface0
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
@@ -1838,7 +1833,6 @@ NETWORK_CONFIGS = {
IPV6_AUTOCONF=yes
IPV6INIT=yes
DEVICE=iface0
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
@@ -1920,7 +1914,6 @@ NETWORK_CONFIGS = {
IPV6_AUTOCONF=no
IPV6_FORCE_ACCEPT_RA=yes
DEVICE=iface0
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
@@ -1961,7 +1954,6 @@ NETWORK_CONFIGS = {
"""\
BOOTPROTO=dhcp
DEVICE=iface0
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
@@ -2038,7 +2030,6 @@ NETWORK_CONFIGS = {
BOOTPROTO=dhcp
DEVICE=iface0
ETHTOOL_OPTS="wol g"
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
@@ -2504,7 +2495,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
IPADDR=192.168.200.7
MTU=9000
NETMASK=255.255.255.0
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=InfiniBand
USERCTL=no"""
@@ -3576,7 +3566,6 @@ iface bond0 inet6 static
IPV6INIT=yes
IPV6_AUTOCONF=no
IPV6_FORCE_ACCEPT_RA=no
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
@@ -3592,7 +3581,6 @@ iface bond0 inet6 static
IPV6INIT=yes
IPV6_AUTOCONF=no
IPV6_FORCE_ACCEPT_RA=no
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
@@ -3882,7 +3870,6 @@ iface bond0 inet6 static
BOOTPROTO=none
DEVICE=eth0
HWADDR=cf:d6:af:48:e8:80
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no"""
@@ -4718,7 +4705,6 @@ HWADDR=fa:16:3e:25:b4:59
IPADDR=51.68.89.122
MTU=1500
NETMASK=255.255.240.0
-NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
@@ -4732,7 +4718,6 @@ DEVICE=eth1
DHCLIENT_SET_DEFAULT_ROUTE=no
HWADDR=fa:16:3e:b1:ca:29
MTU=9000
-NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
@@ -4983,7 +4968,6 @@ USERCTL=no
IPV6_FORCE_ACCEPT_RA=no
IPV6_DEFAULTGW=2001:db8::1
NETMASK=255.255.255.0
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
@@ -5015,7 +4999,6 @@ USERCTL=no
"""\
BOOTPROTO=none
DEVICE=eno1
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
@@ -5028,7 +5011,6 @@ USERCTL=no
IPADDR=192.6.1.9
MTU=1495
NETMASK=255.255.255.0
- NM_CONTROLLED=no
ONBOOT=yes
PHYSDEV=eno1
USERCTL=no
@@ -5064,7 +5046,6 @@ USERCTL=no
IPADDR=10.101.8.65
MTU=1334
NETMASK=255.255.255.192
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Bond
USERCTL=no
@@ -5076,7 +5057,6 @@ USERCTL=no
BOOTPROTO=none
DEVICE=enp0s0
MASTER=bond0
- NM_CONTROLLED=no
ONBOOT=yes
SLAVE=yes
TYPE=Bond
@@ -5089,7 +5069,6 @@ USERCTL=no
BOOTPROTO=none
DEVICE=enp0s1
MASTER=bond0
- NM_CONTROLLED=no
ONBOOT=yes
SLAVE=yes
TYPE=Bond
@@ -5120,7 +5099,6 @@ USERCTL=no
DEVICE=eno1
HWADDR=07-1c-c6-75-a4-be
METRIC=100
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
@@ -5211,7 +5189,6 @@ USERCTL=no
IPV6_FORCE_ACCEPT_RA=no
MTU=1400
NETMASK=255.255.248.0
- NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
--
2.39.3

View File

@ -0,0 +1,117 @@
From 32d3430eb9e8ef5c354ee294ec6b8de61f05292a Mon Sep 17 00:00:00 2001
From: Ani Sinha <anisinha@redhat.com>
Date: Thu, 20 Jul 2023 00:19:25 +0530
Subject: [PATCH 02/11] tools/read-version: fix the tool so that it can handle
version parsing errors (#4234)
git describe may not return version/tags in the format that the read-version
tool expects. Make the tool robust so that it can gracefully handle
version strings that are not in the regular format.
We use regex to capture the details we care about, but if we cannot find them,
we won't traceback and will continue to use version and version_long as
expected.
Signed-off-by: Ani Sinha <anisinha@redhat.com>
(cherry picked from commit 6543c88e0781b3c2e170fdaffbe6ba9f268e986c)
---
tools/read-version | 68 +++++++++++++++++++++++++++++-----------------
1 file changed, 43 insertions(+), 25 deletions(-)
diff --git a/tools/read-version b/tools/read-version
index 5a71e6c7..7575683c 100755
--- a/tools/read-version
+++ b/tools/read-version
@@ -2,6 +2,7 @@
import os
import json
+import re
import subprocess
import sys
@@ -50,6 +51,37 @@ def is_gitdir(path):
return False
+def get_version_details(version, version_long):
+ release = None
+ extra = None
+ commit = None
+ distance = None
+
+ # Should match upstream version number. E.g., 23.1 or 23.1.2
+ short_regex = r"(\d+\.\d+\.?\d*)"
+ # Should match version including upstream version, distance, and commit
+ # E.g., 23.1.2-10-g12ab34cd
+ long_regex = r"(\d+\.\d+\.?\d*){1}.*-(\d+)+-g([a-f0-9]{8}){1}.*"
+
+ short_match = re.search(short_regex, version)
+ long_match = re.search(long_regex, version_long)
+ if long_match:
+ release, distance, commit = long_match.groups()
+ extra = f"-{distance}-g{commit}"
+ elif short_match:
+ release = short_match.groups()[0]
+
+ return {
+ "release": release,
+ "version": version,
+ "version_long": version_long,
+ "extra": extra,
+ "commit": commit,
+ "distance": distance,
+ "is_release_branch_ci": is_release_branch_ci,
+ }
+
+
use_long = "--long" in sys.argv or os.environ.get("CI_RV_LONG")
use_tags = "--tags" in sys.argv or os.environ.get("CI_RV_TAGS")
output_json = "--json" in sys.argv
@@ -104,33 +136,19 @@ else:
version = src_version
version_long = ""
-# version is X.Y.Z[+xxx.gHASH]
-# version_long is None or X.Y.Z-xxx-gHASH
-release = version.partition("-")[0]
-extra = None
-commit = None
-distance = None
-
-if version_long:
- info = version_long.partition("-")[2]
- extra = f"-{info}"
- distance, commit = info.split("-")
- # remove the 'g' from gHASH
- commit = commit[1:]
-
-data = {
- "release": release,
- "version": version,
- "version_long": version_long,
- "extra": extra,
- "commit": commit,
- "distance": distance,
- "is_release_branch_ci": is_release_branch_ci,
-}
+
+details = get_version_details(version, version_long)
if output_json:
- sys.stdout.write(json.dumps(data, indent=1) + "\n")
+ sys.stdout.write(json.dumps(details, indent=1) + "\n")
else:
- sys.stdout.write(version + "\n")
+ output = ""
+ if details["release"]:
+ output += details["release"]
+ if details["extra"]:
+ output += details["extra"]
+ if not output:
+ output = src_version
+ sys.stdout.write(output + "\n")
sys.exit(0)
--
2.39.3
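
The regexes introduced above can be exercised on their own to see what get_version_details() extracts: the long form captures the release, the distance from the tag, and the abbreviated commit, while the short form falls back to just the release. A quick illustrative check (the downstream-style sample string is made up; the real values come from git describe):

import re

short_regex = r"(\d+\.\d+\.?\d*)"
long_regex = r"(\d+\.\d+\.?\d*){1}.*-(\d+)+-g([a-f0-9]{8}){1}.*"

# Long form: release, distance and commit, e.g. 23.1.2-10-g12ab34cd.
print(re.search(long_regex, "23.1.2-10-g12ab34cd").groups())
# -> ('23.1.2', '10', '12ab34cd')

# Short form: only the upstream release is recovered from other strings.
print(re.search(short_regex, "23.1.1-8.el8").groups()[0])
# -> '23.1.1'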

View File

@ -1,369 +0,0 @@
From 769b9f8c9b1ecc294a197575108ae7cb54ad7f4b Mon Sep 17 00:00:00 2001
From: Eduardo Otubo <otubo@redhat.com>
Date: Mon, 5 Jul 2021 14:13:45 +0200
Subject: [PATCH] write passwords only to serial console, lock down
cloud-init-output.log (#847)
RH-Author: Eduardo Otubo <otubo@redhat.com>
RH-MergeRequest: 21: write passwords only to serial console, lock down cloud-init-output.log (#847)
RH-Commit: [1/1] 8f30f2b7d0d6f9dca19994dbd0827b44e998f238 (otubo/cloud-init)
RH-Bugzilla: 1945891
RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RH-Acked-by: Mohamed Gamal Morsy <mmorsy@redhat.com>
commit b794d426b9ab43ea9d6371477466070d86e10668
Author: Daniel Watkins <oddbloke@ubuntu.com>
Date: Fri Mar 19 10:06:42 2021 -0400
write passwords only to serial console, lock down cloud-init-output.log (#847)
Prior to this commit, when a user specified configuration which would
generate random passwords for users, cloud-init would cause those
passwords to be written to the serial console by emitting them on
stderr. In the default configuration, any stdout or stderr emitted by
cloud-init is also written to `/var/log/cloud-init-output.log`. This
file is world-readable, meaning that those randomly-generated passwords
were available to be read by any user with access to the system. This
presents an obvious security issue.
This commit responds to this issue in two ways:
* We address the direct issue by moving from writing the passwords to
sys.stderr to writing them directly to /dev/console (via
util.multi_log); this means that the passwords will never end up in
cloud-init-output.log
* To avoid future issues like this, we also modify the logging code so
that any files created in a log sink subprocess will only be
owner/group readable and, if it exists, will be owned by the adm
group. This results in `/var/log/cloud-init-output.log` no longer
being world-readable, meaning that if there are other parts of the
codebase that are emitting sensitive data intended for the serial
console, that data is no longer available to all users of the system.
LP: #1918303
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
---
cloudinit/config/cc_set_passwords.py | 5 +-
cloudinit/config/tests/test_set_passwords.py | 40 +++++++++----
cloudinit/tests/test_util.py | 56 +++++++++++++++++++
cloudinit/util.py | 38 +++++++++++--
.../modules/test_set_password.py | 24 ++++++++
tests/integration_tests/test_logging.py | 22 ++++++++
tests/unittests/test_util.py | 4 ++
7 files changed, 173 insertions(+), 16 deletions(-)
create mode 100644 tests/integration_tests/test_logging.py
diff --git a/cloudinit/config/cc_set_passwords.py b/cloudinit/config/cc_set_passwords.py
index d6b5682d..433de751 100755
--- a/cloudinit/config/cc_set_passwords.py
+++ b/cloudinit/config/cc_set_passwords.py
@@ -78,7 +78,6 @@ password.
"""
import re
-import sys
from cloudinit.distros import ug_util
from cloudinit import log as logging
@@ -214,7 +213,9 @@ def handle(_name, cfg, cloud, log, args):
if len(randlist):
blurb = ("Set the following 'random' passwords\n",
'\n'.join(randlist))
- sys.stderr.write("%s\n%s\n" % blurb)
+ util.multi_log(
+ "%s\n%s\n" % blurb, stderr=False, fallback_to_stdout=False
+ )
if expire:
expired_users = []
diff --git a/cloudinit/config/tests/test_set_passwords.py b/cloudinit/config/tests/test_set_passwords.py
index daa1ef51..bbe2ee8f 100644
--- a/cloudinit/config/tests/test_set_passwords.py
+++ b/cloudinit/config/tests/test_set_passwords.py
@@ -74,10 +74,6 @@ class TestSetPasswordsHandle(CiTestCase):
with_logs = True
- def setUp(self):
- super(TestSetPasswordsHandle, self).setUp()
- self.add_patch('cloudinit.config.cc_set_passwords.sys.stderr', 'm_err')
-
def test_handle_on_empty_config(self, *args):
"""handle logs that no password has changed when config is empty."""
cloud = self.tmp_cloud(distro='ubuntu')
@@ -129,10 +125,12 @@ class TestSetPasswordsHandle(CiTestCase):
mock.call(['pw', 'usermod', 'ubuntu', '-p', '01-Jan-1970'])],
m_subp.call_args_list)
+ @mock.patch(MODPATH + "util.multi_log")
@mock.patch(MODPATH + "util.is_BSD")
@mock.patch(MODPATH + "subp.subp")
- def test_handle_on_chpasswd_list_creates_random_passwords(self, m_subp,
- m_is_bsd):
+ def test_handle_on_chpasswd_list_creates_random_passwords(
+ self, m_subp, m_is_bsd, m_multi_log
+ ):
"""handle parses command set random passwords."""
m_is_bsd.return_value = False
cloud = self.tmp_cloud(distro='ubuntu')
@@ -146,10 +144,32 @@ class TestSetPasswordsHandle(CiTestCase):
self.assertIn(
'DEBUG: Handling input for chpasswd as list.',
self.logs.getvalue())
- self.assertNotEqual(
- [mock.call(['chpasswd'],
- '\n'.join(valid_random_pwds) + '\n')],
- m_subp.call_args_list)
+
+ self.assertEqual(1, m_subp.call_count)
+ args, _kwargs = m_subp.call_args
+ self.assertEqual(["chpasswd"], args[0])
+
+ stdin = args[1]
+ user_pass = {
+ user: password
+ for user, password
+ in (line.split(":") for line in stdin.splitlines())
+ }
+
+ self.assertEqual(1, m_multi_log.call_count)
+ self.assertEqual(
+ mock.call(mock.ANY, stderr=False, fallback_to_stdout=False),
+ m_multi_log.call_args
+ )
+
+ self.assertEqual(set(["root", "ubuntu"]), set(user_pass.keys()))
+ written_lines = m_multi_log.call_args[0][0].splitlines()
+ for password in user_pass.values():
+ for line in written_lines:
+ if password in line:
+ break
+ else:
+ self.fail("Password not emitted to console")
# vi: ts=4 expandtab
diff --git a/cloudinit/tests/test_util.py b/cloudinit/tests/test_util.py
index b7a302f1..e811917e 100644
--- a/cloudinit/tests/test_util.py
+++ b/cloudinit/tests/test_util.py
@@ -851,4 +851,60 @@ class TestEnsureFile:
assert "ab" == kwargs["omode"]
+@mock.patch("cloudinit.util.grp.getgrnam")
+@mock.patch("cloudinit.util.os.setgid")
+@mock.patch("cloudinit.util.os.umask")
+class TestRedirectOutputPreexecFn:
+ """This tests specifically the preexec_fn used in redirect_output."""
+
+ @pytest.fixture(params=["outfmt", "errfmt"])
+ def preexec_fn(self, request):
+ """A fixture to gather the preexec_fn used by redirect_output.
+
+ This enables simpler direct testing of it, and parameterises any tests
+ using it to cover both the stdout and stderr code paths.
+ """
+ test_string = "| piped output to invoke subprocess"
+ if request.param == "outfmt":
+ args = (test_string, None)
+ elif request.param == "errfmt":
+ args = (None, test_string)
+ with mock.patch("cloudinit.util.subprocess.Popen") as m_popen:
+ util.redirect_output(*args)
+
+ assert 1 == m_popen.call_count
+ _args, kwargs = m_popen.call_args
+ assert "preexec_fn" in kwargs, "preexec_fn not passed to Popen"
+ return kwargs["preexec_fn"]
+
+ def test_preexec_fn_sets_umask(
+ self, m_os_umask, _m_setgid, _m_getgrnam, preexec_fn
+ ):
+ """preexec_fn should set a mask that avoids world-readable files."""
+ preexec_fn()
+
+ assert [mock.call(0o037)] == m_os_umask.call_args_list
+
+ def test_preexec_fn_sets_group_id_if_adm_group_present(
+ self, _m_os_umask, m_setgid, m_getgrnam, preexec_fn
+ ):
+ """We should setgrp to adm if present, so files are owned by them."""
+ fake_group = mock.Mock(gr_gid=mock.sentinel.gr_gid)
+ m_getgrnam.return_value = fake_group
+
+ preexec_fn()
+
+ assert [mock.call("adm")] == m_getgrnam.call_args_list
+ assert [mock.call(mock.sentinel.gr_gid)] == m_setgid.call_args_list
+
+ def test_preexec_fn_handles_absent_adm_group_gracefully(
+ self, _m_os_umask, m_setgid, m_getgrnam, preexec_fn
+ ):
+ """We should handle an absent adm group gracefully."""
+ m_getgrnam.side_effect = KeyError("getgrnam(): name not found: 'adm'")
+
+ preexec_fn()
+
+ assert 0 == m_setgid.call_count
+
# vi: ts=4 expandtab
diff --git a/cloudinit/util.py b/cloudinit/util.py
index 769f3425..4e0a72db 100644
--- a/cloudinit/util.py
+++ b/cloudinit/util.py
@@ -359,7 +359,7 @@ def find_modules(root_dir):
def multi_log(text, console=True, stderr=True,
- log=None, log_level=logging.DEBUG):
+ log=None, log_level=logging.DEBUG, fallback_to_stdout=True):
if stderr:
sys.stderr.write(text)
if console:
@@ -368,7 +368,7 @@ def multi_log(text, console=True, stderr=True,
with open(conpath, 'w') as wfh:
wfh.write(text)
wfh.flush()
- else:
+ elif fallback_to_stdout:
# A container may lack /dev/console (arguably a container bug). If
# it does not exist, then write output to stdout. this will result
# in duplicate stderr and stdout messages if stderr was True.
@@ -623,6 +623,26 @@ def redirect_output(outfmt, errfmt, o_out=None, o_err=None):
if not o_err:
o_err = sys.stderr
+ # pylint: disable=subprocess-popen-preexec-fn
+ def set_subprocess_umask_and_gid():
+ """Reconfigure umask and group ID to create output files securely.
+
+ This is passed to subprocess.Popen as preexec_fn, so it is executed in
+ the context of the newly-created process. It:
+
+ * sets the umask of the process so created files aren't world-readable
+ * if an adm group exists in the system, sets that as the process' GID
+ (so that the created file(s) are owned by root:adm)
+ """
+ os.umask(0o037)
+ try:
+ group_id = grp.getgrnam("adm").gr_gid
+ except KeyError:
+ # No adm group, don't set a group
+ pass
+ else:
+ os.setgid(group_id)
+
if outfmt:
LOG.debug("Redirecting %s to %s", o_out, outfmt)
(mode, arg) = outfmt.split(" ", 1)
@@ -632,7 +652,12 @@ def redirect_output(outfmt, errfmt, o_out=None, o_err=None):
owith = "wb"
new_fp = open(arg, owith)
elif mode == "|":
- proc = subprocess.Popen(arg, shell=True, stdin=subprocess.PIPE)
+ proc = subprocess.Popen(
+ arg,
+ shell=True,
+ stdin=subprocess.PIPE,
+ preexec_fn=set_subprocess_umask_and_gid,
+ )
new_fp = proc.stdin
else:
raise TypeError("Invalid type for output format: %s" % outfmt)
@@ -654,7 +679,12 @@ def redirect_output(outfmt, errfmt, o_out=None, o_err=None):
owith = "wb"
new_fp = open(arg, owith)
elif mode == "|":
- proc = subprocess.Popen(arg, shell=True, stdin=subprocess.PIPE)
+ proc = subprocess.Popen(
+ arg,
+ shell=True,
+ stdin=subprocess.PIPE,
+ preexec_fn=set_subprocess_umask_and_gid,
+ )
new_fp = proc.stdin
else:
raise TypeError("Invalid type for error format: %s" % errfmt)
diff --git a/tests/integration_tests/modules/test_set_password.py b/tests/integration_tests/modules/test_set_password.py
index b13f76fb..d7cf91a5 100644
--- a/tests/integration_tests/modules/test_set_password.py
+++ b/tests/integration_tests/modules/test_set_password.py
@@ -116,6 +116,30 @@ class Mixin:
# Which are not the same
assert shadow_users["harry"] != shadow_users["dick"]
+ def test_random_passwords_not_stored_in_cloud_init_output_log(
+ self, class_client
+ ):
+ """We should not emit passwords to the in-instance log file.
+
+ LP: #1918303
+ """
+ cloud_init_output = class_client.read_from_file(
+ "/var/log/cloud-init-output.log"
+ )
+ assert "dick:" not in cloud_init_output
+ assert "harry:" not in cloud_init_output
+
+ def test_random_passwords_emitted_to_serial_console(self, class_client):
+ """We should emit passwords to the serial console. (LP: #1918303)"""
+ try:
+ console_log = class_client.instance.console_log()
+ except NotImplementedError:
+ # Assume that an exception here means that we can't use the console
+ # log
+ pytest.skip("NotImplementedError when requesting console log")
+ assert "dick:" in console_log
+ assert "harry:" in console_log
+
def test_explicit_password_set_correctly(self, class_client):
"""Test that an explicitly-specified password is set correctly."""
shadow_users, _ = self._fetch_and_parse_etc_shadow(class_client)
diff --git a/tests/integration_tests/test_logging.py b/tests/integration_tests/test_logging.py
new file mode 100644
index 00000000..b31a0434
--- /dev/null
+++ b/tests/integration_tests/test_logging.py
@@ -0,0 +1,22 @@
+"""Integration tests relating to cloud-init's logging."""
+
+
+class TestVarLogCloudInitOutput:
+ """Integration tests relating to /var/log/cloud-init-output.log."""
+
+ def test_var_log_cloud_init_output_not_world_readable(self, client):
+ """
+ The log can contain sensitive data, it shouldn't be world-readable.
+
+ LP: #1918303
+ """
+ # Check the file exists
+ assert client.execute("test -f /var/log/cloud-init-output.log").ok
+
+ # Check its permissions are as we expect
+ perms, user, group = client.execute(
+ "stat -c %a:%U:%G /var/log/cloud-init-output.log"
+ ).split(":")
+ assert "640" == perms
+ assert "root" == user
+ assert "adm" == group
diff --git a/tests/unittests/test_util.py b/tests/unittests/test_util.py
index 857629f1..e5292001 100644
--- a/tests/unittests/test_util.py
+++ b/tests/unittests/test_util.py
@@ -572,6 +572,10 @@ class TestMultiLog(helpers.FilesystemMockingTestCase):
util.multi_log(logged_string)
self.assertEqual(logged_string, self.stdout.getvalue())
+ def test_logs_dont_go_to_stdout_if_fallback_to_stdout_is_false(self):
+ util.multi_log('something', fallback_to_stdout=False)
+ self.assertEqual('', self.stdout.getvalue())
+
def test_logs_go_to_log_if_given(self):
log = mock.MagicMock()
logged_string = 'something very important'
--
2.27.0
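
The second half of the fix above hinges on a preexec_fn hook: before the piped log-sink subprocess starts, it tightens the umask and, when an adm group exists, switches the process GID so that files such as /var/log/cloud-init-output.log end up root:adm and not world-readable. A minimal sketch of that idea (illustration only; _harden_output_process is a hypothetical name, the real code is set_subprocess_umask_and_gid() inside util.redirect_output(), and the PermissionError handling is added here so the sketch also runs unprivileged):

import grp
import os
import subprocess

def _harden_output_process():
    os.umask(0o037)  # newly created log files are not world-readable
    try:
        os.setgid(grp.getgrnam("adm").gr_gid)  # created files get group adm
    except (KeyError, PermissionError):
        pass  # no adm group, or not running as root

# A piped output sink like the one cloud-init uses for cloud-init-output.log;
# /tmp/example-output.log is just a placeholder path for this sketch.
proc = subprocess.Popen(
    "tee -a /tmp/example-output.log",
    shell=True,
    stdin=subprocess.PIPE,
    preexec_fn=_harden_output_process,
)
proc.communicate(b"logged line\n")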

Binary file not shown.

View File

@ -5,8 +5,8 @@
%global debug_package %{nil}
Name: cloud-init
Version: 21.1
Release: 15%{?dist}.1
Version: 23.1.1
Release: 10%{?dist}
Summary: Cloud instance init scripts
Group: System Environment/Base
@ -14,67 +14,46 @@ License: GPLv3
URL: http://launchpad.net/cloud-init
Source0: https://launchpad.net/cloud-init/trunk/%{version}/+download/%{name}-%{version}.tar.gz
Source1: cloud-init-tmpfiles.conf
Source2: test_version_change.pkl
Patch0001: 0001-Add-initial-redhat-setup.patch
Patch0002: 0002-Do-not-write-NM_CONTROLLED-no-in-generated-interface.patch
Patch0003: 0003-limit-permissions-on-def_log_file.patch
Patch0004: 0004-sysconfig-Don-t-write-BOOTPROTO-dhcp-for-ipv6-dhcp.patch
Patch0005: 0005-DataSourceAzure.py-use-hostnamectl-to-set-hostname.patch
Patch0006: 0006-include-NOZEROCONF-yes-in-etc-sysconfig-network.patch
Patch0007: 0007-Remove-race-condition-between-cloud-init-and-Network.patch
Patch0008: 0008-net-exclude-OVS-internal-interfaces-in-get_interface.patch
Patch0009: 0009-Fix-requiring-device-number-on-EC2-derivatives-836.patch
# For bz#1957532 - [cloud-init] From RHEL 82+ cloud-init no longer displays sshd keys fingerprints from instance launched from a backup image
Patch10: ci-rhel-cloud.cfg-remove-ssh_genkeytypes-in-settings.py.patch
# For bz#1945891 - CVE-2021-3429 cloud-init: randomly generated passwords logged in clear-text to world-readable file [rhel-8]
Patch11: ci-write-passwords-only-to-serial-console-lock-down-clo.patch
# For bz#1862967 - [cloud-init]Customize ssh AuthorizedKeysFile causes login failure
Patch12: ci-ssh-util-allow-cloudinit-to-merge-all-ssh-keys-into-.patch
# For bz#1862967 - [cloud-init]Customize ssh AuthorizedKeysFile causes login failure
Patch13: ci-Stop-copying-ssh-system-keys-and-check-folder-permis.patch
# For bz#1995840 - [cloudinit] Fix home permissions modified by ssh module
Patch14: ci-Fix-home-permissions-modified-by-ssh-module-SC-338-9.patch
# For bz#1862933 - cloud-init fails with ValueError: need more than 1 value to unpack[rhel-8]
Patch15: ci-ssh_utils.py-ignore-when-sshd_config-options-are-not.patch
# For bz#2013644 - cloud-init fails to set host key permissions correctly
Patch16: ci-cc_ssh.py-fix-private-key-group-owner-and-permission.patch
# For bz#2021538 - cloud-init.service fails to start after package update
Patch17: ci-fix-error-on-upgrade-caused-by-new-vendordata2-attri.patch
# For bz#2028028 - [RHEL-8] Above 19.2 of cloud-init fails to configure routes when configuring static and default routes to the same destination IP
Patch18: ci-cloudinit-net-handle-two-different-routes-for-the-sa.patch
# For bz#2039697 - [RHEL8] [Azure] cloud-init fails to configure the system
# For bz#2026587 - [cloud-init][RHEL8] Support for cloud-init datasource 'cloud-init-vmware-guestinfo'
Patch20: ci-Datasource-for-VMware-953.patch
# For bz#2026587 - [cloud-init][RHEL8] Support for cloud-init datasource 'cloud-init-vmware-guestinfo'
Patch21: ci-Change-netifaces-dependency-to-0.10.4-965.patch
# For bz#2026587 - [cloud-init][RHEL8] Support for cloud-init datasource 'cloud-init-vmware-guestinfo'
Patch22: ci-Update-dscheck_VMware-s-rpctool-check-970.patch
# For bz#2026587 - [cloud-init][RHEL8] Support for cloud-init datasource 'cloud-init-vmware-guestinfo'
Patch23: ci-Revert-unnecesary-lcase-in-ds-identify-978.patch
# For bz#2023940 - [RHEL-8] Support for provisioning Azure VM with userdata
Patch24: ci-Add-flexibility-to-IMDS-api-version-793.patch
# For bz#2023940 - [RHEL-8] Support for provisioning Azure VM with userdata
Patch25: ci-Azure-helper-Ensure-Azure-http-handler-sleeps-betwee.patch
# For bz#2023940 - [RHEL-8] Support for provisioning Azure VM with userdata
Patch26: ci-azure-Removing-ability-to-invoke-walinuxagent-799.patch
# For bz#2023940 - [RHEL-8] Support for provisioning Azure VM with userdata
Patch27: ci-Azure-eject-the-provisioning-iso-before-reporting-re.patch
# For bz#2023940 - [RHEL-8] Support for provisioning Azure VM with userdata
Patch28: ci-Azure-Retrieve-username-and-hostname-from-IMDS-865.patch
# For bz#2023940 - [RHEL-8] Support for provisioning Azure VM with userdata
Patch29: ci-Azure-Retry-net-metadata-during-nic-attach-for-non-t.patch
# For bz#2023940 - [RHEL-8] Support for provisioning Azure VM with userdata
Patch30: ci-Azure-adding-support-for-consuming-userdata-from-IMD.patch
# For bz#2046540 - cloud-init writes route6-$DEVICE config with a HEX netmask. ip route does not like : Error: inet6 prefix is expected rather than "fd00:fd00:fd00::/ffff:ffff:ffff:ffff::".
Patch31: ci-Fix-IPv6-netmask-format-for-sysconfig-1215.patch
# For bz#1935826 - [rhel-8] Cloud-init init stage fails after upgrade from RHEL7 to RHEL8.
Patch32: ci-Detect-a-Python-version-change-and-clear-the-cache-8.patch
# For bz#1935826 - [rhel-8] Cloud-init init stage fails after upgrade from RHEL7 to RHEL8.
Patch33: ci-Fix-MIME-policy-failure-on-python-version-upgrade-93.patch
# For bz#2088028 - [RHEL-8.7] SSH keys with \r\n line breaks are not properly handled on Azure [rhel-8.6.0.z]
Patch34: ci-Add-r-n-check-for-SSH-keys-in-Azure-889.patch
# For bz#2026587 - [cloud-init][RHEL8] Support for cloud-init datasource 'cloud-init-vmware-guestinfo'
Patch0004: 0004-include-NOZEROCONF-yes-in-etc-sysconfig-network.patch
Patch0005: 0005-Manual-revert-Use-Network-Manager-and-Netplan-as-def.patch
Patch0006: 0006-Revert-Add-native-NetworkManager-support-1224.patch
Patch0007: 0007-settings.py-update-settings-for-rhel.patch
# For bz#2182407 - cloud-init strips new line from "/etc/hostname" when processing "/var/lib/cloud/data/previous-hostname"
Patch8: ci-rhel-make-sure-previous-hostname-file-ends-with-a-ne.patch
# For bz#2182947 - Request to backport "Don't change permissions of netrules target (#2076)"
Patch9: ci-Don-t-change-permissions-of-netrules-target-2076.patch
# For bz#2190081 - CVE-2023-1786 cloud-init: sensitive data could be exposed in logs [rhel-8]
Patch10: ci-Make-user-vendor-data-sensitive-and-remove-log-permi.patch
# For bz#2219528 - [RHEL8] Support configuring network by NM keyfiles
Patch11: ci-Revert-Manual-revert-Use-Network-Manager-and-Netplan.patch
# For bz#2219528 - [RHEL8] Support configuring network by NM keyfiles
Patch12: ci-Revert-Revert-Add-native-NetworkManager-support-1224.patch
# For bz#2219528 - [RHEL8] Support configuring network by NM keyfiles
Patch13: ci-nm-generate-ipv6-stateful-dhcp-config-at-par-with-sy.patch
# For bz#2219528 - [RHEL8] Support configuring network by NM keyfiles
Patch14: ci-network_manager-add-a-method-for-ipv6-static-IP-conf.patch
# For bz#2219528 - [RHEL8] Support configuring network by NM keyfiles
Patch15: ci-net-sysconfig-enable-sysconfig-renderer-if-network-m.patch
# For bz#2219528 - [RHEL8] Support configuring network by NM keyfiles
Patch16: ci-network-manager-Set-higher-autoconnect-priority-for-.patch
# For bz#2219528 - [RHEL8] Support configuring network by NM keyfiles
Patch17: ci-Set-default-renderer-as-sysconfig-for-centos-rhel-41.patch
Patch19: ci-tools-read-version-fix-the-tool-so-that-it-can-handl.patch
Patch20: ci-test-fixes-remove-NM_CONTROLLED-no-from-tests.patch
Patch21: ci-Enable-SUSE-based-distros-for-ca-handling-2036.patch
Patch22: ci-Handle-non-existent-ca-cert-config-situation-2073.patch
Patch23: ci-Revert-limit-permissions-on-def_log_file.patch
Patch24: ci-test-fixes-changes-to-apply-RHEL-specific-config-set.patch
Patch25: ci-cosmetic-fix-tox-formatting.patch
# For bz#2222501 - Don't change log permissions if they are already more restrictive [rhel-8]
Patch28: ci-logging-keep-current-file-mode-of-log-file-if-its-st.patch
# For bz#2223810 - [cloud-init] [RHEL8.9]There are warning logs if dev has more than one IPV6 address on ESXi
Patch29: ci-DS-VMware-modify-a-few-log-level-4284.patch
# For bz#2229460 - [rhel-8.9] [RFE] Configure "ipv6.addr-gen-mode=eui64' as default in NetworkManager
Patch30: ci-NM-renderer-set-default-IPv6-addr-gen-mode-for-all-i.patch
BuildArch: noarch
@ -144,8 +123,6 @@ ssh keys and to let the user run various scripts.
sed -i -e 's|#!/usr/bin/env python|#!/usr/bin/env python3|' \
-e 's|#!/usr/bin/python|#!/usr/bin/python3|' tools/* cloudinit/ssh_util.py
cp -f %{SOURCE2} tests/integration_tests/assets/test_version_change.pkl
%build
%py3_build
@ -153,8 +130,6 @@ cp -f %{SOURCE2} tests/integration_tests/assets/test_version_change.pkl
%install
%py3_install --
python3 tools/render-cloudcfg --variant fedora > $RPM_BUILD_ROOT/%{_sysconfdir}/cloud/cloud.cfg
sed -i "s,@@PACKAGED_VERSION@@,%{version}-%{release}," $RPM_BUILD_ROOT/%{python3_sitelib}/cloudinit/version.py
mkdir -p $RPM_BUILD_ROOT/var/lib/cloud
@ -164,9 +139,6 @@ mkdir -p $RPM_BUILD_ROOT/run/cloud-init
mkdir -p $RPM_BUILD_ROOT/%{_tmpfilesdir}
cp -p %{SOURCE1} $RPM_BUILD_ROOT/%{_tmpfilesdir}/%{name}.conf
# We supply our own config file since our software differs from Ubuntu's.
cp -p rhel/cloud.cfg $RPM_BUILD_ROOT/%{_sysconfdir}/cloud/cloud.cfg
mkdir -p $RPM_BUILD_ROOT/%{_sysconfdir}/rsyslog.d
cp -p tools/21-cloudinit.conf $RPM_BUILD_ROOT/%{_sysconfdir}/rsyslog.d/21-cloudinit.conf
@ -174,17 +146,10 @@ cp -p tools/21-cloudinit.conf $RPM_BUILD_ROOT/%{_sysconfdir}/rsyslog.d/21-cloudi
mv $RPM_BUILD_ROOT/etc/NetworkManager/dispatcher.d/hook-network-manager \
$RPM_BUILD_ROOT/etc/NetworkManager/dispatcher.d/cloud-init-azure-hook
# Install our own systemd units (rhbz#1440831)
mkdir -p $RPM_BUILD_ROOT%{_unitdir}
cp rhel/systemd/* $RPM_BUILD_ROOT%{_unitdir}/
[ ! -d $RPM_BUILD_ROOT/usr/lib/systemd/system-generators ] && mkdir -p $RPM_BUILD_ROOT/usr/lib/systemd/system-generators
python3 tools/render-cloudcfg --variant rhel systemd/cloud-init-generator.tmpl > $RPM_BUILD_ROOT/usr/lib/systemd/system-generators/cloud-init-generator
chmod 755 $RPM_BUILD_ROOT/usr/lib/systemd/system-generators/cloud-init-generator
[ ! -d $RPM_BUILD_ROOT/usr/lib/%{name} ] && mkdir -p $RPM_BUILD_ROOT/usr/lib/%{name}
cp -p tools/ds-identify $RPM_BUILD_ROOT%{_libexecdir}/%{name}/ds-identify
# installing man pages
mkdir -p ${RPM_BUILD_ROOT}%{_mandir}/man1/
for man in cloud-id.1 cloud-init.1 cloud-init-per.1; do
@ -206,7 +171,27 @@ if [ $1 -eq 1 ] ; then
/bin/systemctl enable cloud-init-local.service >/dev/null 2>&1 || :
/bin/systemctl enable cloud-init.target >/dev/null 2>&1 || :
elif [ $1 -eq 2 ]; then
# Upgrade. If the upgrade is from a version older than 0.7.9-8,
# Upgrade
# RHBZ 2210012 - check for null ssh_genkeytypes value in cloud.cfg that
# breaks ssh connectivity after upgrade to a newer version of cloud-init.
if [ -f %{_sysconfdir}/cloud/cloud.cfg.rpmnew ] && grep -q '^\s*ssh_genkeytypes:\s*~\s*$' %{_sysconfdir}/cloud/cloud.cfg ; then
echo "***********************************************"
echo "*** WARNING!!!! ***"
echo ""
echo "ssh_genkeytypes set to null in /etc/cloud/cloud.cfg!"
echo "SSH access might be broken after reboot. Please check the following KCS"
echo "for more detailed information:"
echo ""
echo "https://access.redhat.com/solutions/6988034"
echo ""
echo "Please reconcile the differences between /etc/cloud/cloud.cfg and "
echo "/etc/cloud/cloud.cfg.rpmnew and update ssh_genkeytypes configuration in "
echo "/etc/cloud/cloud.cfg to a list of keytype values, something like:"
echo "ssh_genkeytypes: ['rsa', 'ecdsa', 'ed25519']"
echo ""
echo "************************************************"
fi
# If the upgrade is from a version older than 0.7.9-8,
# there will be stale systemd config
/bin/systemctl is-enabled cloud-config.service >/dev/null 2>&1 &&
/bin/systemctl reenable cloud-config.service >/dev/null 2>&1 || :
@ -238,18 +223,32 @@ fi
%postun
%systemd_postun cloud-config.service cloud-config.target cloud-final.service cloud-init.service cloud-init.target cloud-init-local.service
if [ -f /etc/ssh/sshd_config.d/50-cloud-init.conf ] ; then
echo "/etc/ssh/sshd_config.d/50-cloud-init.conf not removed"
fi
if [ -f /etc/NetworkManager/conf.d/99-cloud-init.conf ] ; then
echo "/etc/NetworkManager/conf.d/99-cloud-init.conf not removed"
fi
if [ -f /etc/NetworkManager/conf.d/30-cloud-init-ip6-addr-gen-mode.conf ] ; then
echo "/etc/NetworkManager/conf.d/30-cloud-init-ip6-addr-gen-mode.conf not removed"
fi
%files
%license LICENSE
%doc ChangeLog rhel/README.rhel
%config(noreplace) %{_sysconfdir}/cloud/cloud.cfg
%dir %{_sysconfdir}/cloud/cloud.cfg.d
%config(noreplace) %{_sysconfdir}/cloud/cloud.cfg.d/*.cfg
%doc %{_sysconfdir}/cloud/cloud.cfg.d/README
%doc %{_sysconfdir}/cloud/clean.d/README
%dir %{_sysconfdir}/cloud/templates
%config(noreplace) %{_sysconfdir}/cloud/templates/*
%{_unitdir}/cloud-config.service
%{_unitdir}/cloud-config.target
%{_unitdir}/cloud-final.service
%{_unitdir}/cloud-init-hotplugd.service
%{_unitdir}/cloud-init-hotplugd.socket
%{_unitdir}/cloud-init-local.service
%{_unitdir}/cloud-init.service
%{_unitdir}/cloud-init.target
@ -262,21 +261,126 @@ fi
%dir %verify(not mode) /run/cloud-init
%dir /var/lib/cloud
/etc/NetworkManager/dispatcher.d/cloud-init-azure-hook
/etc/dhcp/dhclient-exit-hooks.d/hook-dhclient
%{_udevrulesdir}/66-azure-ephemeral.rules
%{_sysconfdir}/bash_completion.d/cloud-init
%{_datadir}/bash-completion/completions/cloud-init
%{_bindir}/cloud-id
%{_libexecdir}/%{name}/ds-identify
/usr/lib/systemd/system-generators/cloud-init-generator
%{_sysconfdir}/systemd/system/sshd-keygen@.service.d/disable-sshd-keygen-if-cloud-init-active.conf
%dir %{_sysconfdir}/rsyslog.d
%config(noreplace) %{_sysconfdir}/rsyslog.d/21-cloudinit.conf
%changelog
* Wed May 18 2022 Miroslav Rezanina <mrezanin@redhat.com> - 21.1-15.el8_6.1
- ci-Add-r-n-check-for-SSH-keys-in-Azure-889.patch [bz#2088028]
- Resolves: bz#2088028
([RHEL-8.7] SSH keys with \r\n line breaks are not properly handled on Azure [rhel-8.6.0.z])
* Fri Aug 25 2023 Camilla Conte <cconte@redhat.com> - 23.1.1-10
- Resolves: bz#2233047
([RHEL 8.9] Inform user when cloud-init generated config files are left during uninstalling)
* Wed Aug 09 2023 Jon Maloy <jmaloy@redhat.com> - 23.1.1-9
- ci-NM-renderer-set-default-IPv6-addr-gen-mode-for-all-i.patch [bz#2229460]
- Resolves: bz#2229460
([rhel-8.9] [RFE] Configure "ipv6.addr-gen-mode=eui64' as default in NetworkManager)
* Thu Jul 27 2023 Camilla Conte <cconte@redhat.com> - 23.1.1-8
- ci-DS-VMware-modify-a-few-log-level-4284.patch [bz#2223810]
- Resolves: bz#2223810
([cloud-init] [RHEL8.9]There are warning logs if dev has more than one IPV6 address on ESXi)
* Tue Jul 25 2023 Miroslav Rezanina <mrezanin@redhat.com> - 23.1.1-7
- ci-logging-keep-current-file-mode-of-log-file-if-its-st.patch [bz#2222501]
- Resolves: bz#2222501
(Don't change log permissions if they are already more restrictive [rhel-8])
* Mon Jul 10 2023 Miroslav Rezanina <mrezanin@redhat.com> - 23.1.1-6
- ci-Revert-Manual-revert-Use-Network-Manager-and-Netplan.patch [bz#2219528]
- ci-Revert-Revert-Add-native-NetworkManager-support-1224.patch [bz#2219528]
- ci-nm-generate-ipv6-stateful-dhcp-config-at-par-with-sy.patch [bz#2219528]
- ci-network_manager-add-a-method-for-ipv6-static-IP-conf.patch [bz#2219528]
- ci-net-sysconfig-enable-sysconfig-renderer-if-network-m.patch [bz#2219528]
- ci-network-manager-Set-higher-autoconnect-priority-for-.patch [bz#2219528]
- ci-Set-default-renderer-as-sysconfig-for-centos-rhel-41.patch [bz#2219528]
- Resolves: bz#2219528
([RHEL8] Support configuring network by NM keyfiles)
* Tue Jul 4 2023 Camilla Conte <cconte@redhat.com> - 23.1.1-5
- ci-Add-warning-during-upgrade-from-an-old-version-with-.patch [bz#2210012]
- Resolves: bz#2210012
([cloud-init] System didn't generate ssh host keys and lost ssh connection after cloud-init removed them with updated cloud-init package.)
* Wed May 03 2023 Jon Maloy <jmaloy@redhat.com> - 23.1.1-3
- ci-Don-t-change-permissions-of-netrules-target-2076.patch [bz#2182947]
- ci-Make-user-vendor-data-sensitive-and-remove-log-permi.patch [bz#2190081]
- Resolves: bz#2182947
(Request to backport "Don't change permissions of netrules target (#2076)")
- Resolves: bz#2190081
(CVE-2023-1786 cloud-init: sensitive data could be exposed in logs [rhel-8])
* Tue Apr 25 2023 Jon Maloy <jmaloy@redhat.com> - 23.1.1-2
- ci-rhel-make-sure-previous-hostname-file-ends-with-a-ne.patch [bz#2182407]
- Resolves: bz#2182407
(cloud-init strips new line from "/etc/hostname" when processing "/var/lib/cloud/data/previous-hostname")
* Fri Apr 21 2023 Jon Maloy <jmaloy@redhat.com> - 23.1.1-1
- limit-permissions-on-def_log_file.patch
- Resolves bz#1424612
- include-NOZEROCONF-yes-in-etc-sysconfig-network.patch
- Resolves bz#1653131
- Rebase to 23.1.1 [bz#2172821]
- Resolves: bz#2172821
* Mon Jan 30 2023 Camilla Conte <cconte@redhat.com> - 22.1-8
- ci-cc_set_hostname-ignore-var-lib-cloud-data-set-hostna.patch [bz#2162258]
- Resolves: bz#2162258
(systemd[1]: Failed to start Initial cloud-init job after reboot system via sysrq 'b' [RHEL-8])
* Wed Dec 28 2022 Camilla Conte <cconte@redhat.com> - 22.1-7
- ci-Ensure-network-ready-before-cloud-init-service-runs-.patch [bz#2151861]
- Resolves: bz#2151861
([RHEL-8] Ensure network ready before cloud-init service runs on RHEL)
* Mon Oct 17 2022 Jon Maloy <jmaloy@redhat.com> - 22.1-6
- ci-cloud.cfg.tmpl-make-sure-centos-settings-are-identic.patch [bz#2115576]
- Resolves: bz#2115576
(cloud-init configures user "centos" or "rhel" instead of "cloud-user" with cloud-init-22.1)
* Wed Aug 17 2022 Jon Maloy <jmaloy@redhat.com> - 22.1-5
- ci-Revert-Add-native-NetworkManager-support-1224.patch [bz#2107464 bz#2110066 bz#2117526 bz#2104393 bz#2098624]
- ci-Revert-Use-Network-Manager-and-Netplan-as-default-re.patch [bz#2107464 bz#2110066 bz#2117526 bz#2104393 bz#2098624]
- Resolves: bz#2107464
([RHEL-8.7] Cannot run sysconfig when changing the priority of network renderers)
- Resolves: bz#2110066
(DNS integration with OpenStack/cloud-init/NetworkManager is not working)
- Resolves: bz#2117526
([RHEL8.7] Revert patch of configuring networking by NM keyfiles)
- Resolves: bz#2104393
([RHEL-8.7]Failed to config static IP and IPv6 according to VMware Customization Config File)
- Resolves: bz#2098624
([RHEL-8.7] IPv6 not workable when cloud-init configure network using NM keyfiles)
* Tue Jul 12 2022 Miroslav Rezanina <mrezanin@redhat.com> - 22.1-4
- ci-cloud-init.spec-adjust-path-for-66-azure-ephemeral.r.patch [bz#2096269]
- ci-setup.py-adjust-udev-rules-default-path-1513.patch [bz#2096269]
- Resolves: bz#2096269
(Adjust udev/rules default path[RHEL-8])
* Thu Jun 23 2022 Jon Maloy <jmaloy@redhat.com> - 22.1-3
- ci-Support-EC2-tags-in-instance-metadata-1309.patch [bz#2082686]
- Resolves: bz#2082686
([cloud][init] Add support for reading tags from instance metadata)
* Tue May 31 2022 Jon Maloy <jmaloy@redhat.com> - 22.1-2
- ci-Add-native-NetworkManager-support-1224.patch [bz#2059872]
- ci-Use-Network-Manager-and-Netplan-as-default-renderers.patch [bz#2059872]
- ci-Align-rhel-custom-files-with-upstream-1431.patch [bz#2082071]
- ci-Remove-rhel-specific-files.patch [bz#2082071]
- Resolves: bz#2059872
([RHEL-8]Rebase cloud-init from Fedora so it can configure networking using NM keyfiles)
- Resolves: bz#2082071
(Align cloud.cfg file and systemd with cloud-init upstream .tmpl files)
* Mon Apr 25 2022 Amy Chen <xiachen@redhat.com> - 22.1-1
- Rebase to 22.1 [bz#2065544]
- Resolves: bz#2065544
([RHEL-8.7.0] cloud-init rebase to 22.1)
* Fri Apr 01 2022 Camilla Conte <cconte@redhat.com> - 21.1-15
- ci-Detect-a-Python-version-change-and-clear-the-cache-8.patch [bz#1935826]
@ -370,6 +474,17 @@ fi
- Resolves: bz#1958174
([RHEL-8.5.0] Rebase cloud-init to 21.1)
* Thu May 13 2021 Miroslav Rezanina <mrezanin@redhat.com> - 20.3-10.el8_4.3
- ci-get_interfaces-don-t-exclude-Open-vSwitch-bridge-bon.patch [bz#1957135]
- ci-net-exclude-OVS-internal-interfaces-in-get_interface.patch [bz#1957135]
- Resolves: bz#1957135
(Intermittent failure to start cloud-init due to failure to detect macs [rhel-8.4.0.z])
* Tue Apr 06 2021 Miroslav Rezanina <mrezanin@redhat.com> - 20.3-10.el8_4.1
- ci-Fix-requiring-device-number-on-EC2-derivatives-836.patch [bz#1942699]
- Resolves: bz#1942699
([Aliyun][RHEL8.4][cloud-init] cloud-init service failed to start with Alibaba instance [rhel-8.4.0.z])
* Tue Feb 02 2021 Miroslav Rezanina <mrezanin@redhat.com> - 20.3-10.el8
- ci-fix-a-typo-in-man-page-cloud-init.1-752.patch [bz#1913127]
- Resolves: bz#1913127