import cloud-init-19.4-7.el8

This commit is contained in:
CentOS Sources 2020-07-28 10:15:08 -04:00 committed by Stepan Oksanichenko
parent e0051c29f6
commit 8bee6f500f
35 changed files with 1151 additions and 3393 deletions


@@ -1 +1 @@
a862d6618a4c56c79d3fb0e279f6c93d0f0141cd SOURCES/cloud-init-18.5.tar.gz
5f4de38850f9691dc9789bd4db4be512c9717d7b SOURCES/cloud-init-19.4.tar.gz

.gitignore vendored

@@ -1 +1 @@
SOURCES/cloud-init-18.5.tar.gz
SOURCES/cloud-init-19.4.tar.gz


@@ -1,4 +1,4 @@
From bfdc177f6127043eac555d356403d9e1d5c52243 Mon Sep 17 00:00:00 2001
From 4114343d0cd2fc3e5566eed27272480e003c89cc Mon Sep 17 00:00:00 2001
From: Miroslav Rezanina <mrezanin@redhat.com>
Date: Thu, 31 May 2018 16:45:23 +0200
Subject: Add initial redhat setup
@@ -7,47 +7,54 @@ Rebase notes (18.5):
- added bash_completion file
- added cloud-id file
Merged patches (19.4):
- 4ab5a61 Fix for network configuration not persisting after reboot
- 84cf125 Removing cloud-user from wheel
- 31290ab Adding gating tests for Azure, ESXi and AWS
Merged patches (18.5):
- 2d6b469 add power-state-change module to cloud_final_modules
- 764159f Adding systemd mount options to wait for cloud-init
- da4d99e Adding disk_setup to rhel/cloud.cfg
- f5c6832 Enable cloud-init by default on vmware
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
.gitignore | 1 +
cloudinit/config/cc_chef.py | 6 +-
cloudinit/settings.py | 7 +-
redhat/.gitignore | 1 +
redhat/Makefile | 71 ++++++
redhat/Makefile.common | 35 +++
redhat/Makefile.common | 37 +++
redhat/cloud-init-tmpfiles.conf | 1 +
redhat/cloud-init.spec.template | 352 ++++++++++++++++++++++++++
redhat/cloud-init.spec.template | 438 ++++++++++++++++++++++++++++++++++
redhat/gating.yaml | 9 +
redhat/rpmbuild/BUILD/.gitignore | 3 +
redhat/rpmbuild/RPMS/.gitignore | 3 +
redhat/rpmbuild/SOURCES/.gitignore | 3 +
redhat/rpmbuild/SPECS/.gitignore | 3 +
redhat/rpmbuild/SRPMS/.gitignore | 3 +
redhat/scripts/frh.py | 27 ++
redhat/scripts/git-backport-diff | 327 ++++++++++++++++++++++++
redhat/scripts/git-compile-check | 215 ++++++++++++++++
redhat/scripts/frh.py | 27 +++
redhat/scripts/git-backport-diff | 327 +++++++++++++++++++++++++
redhat/scripts/git-compile-check | 215 +++++++++++++++++
redhat/scripts/process-patches.sh | 73 ++++++
redhat/scripts/tarball_checksum.sh | 3 +
rhel/README.rhel | 5 +
rhel/cloud-init-tmpfiles.conf | 1 +
rhel/cloud.cfg | 69 +++++
rhel/cloud.cfg | 69 ++++++
rhel/systemd/cloud-config.service | 18 ++
rhel/systemd/cloud-config.target | 11 +
rhel/systemd/cloud-final.service | 19 ++
rhel/systemd/cloud-init-local.service | 31 +++
rhel/systemd/cloud-init.service | 25 ++
setup.py | 64 +----
tools/read-version | 25 +-
27 files changed, 1311 insertions(+), 90 deletions(-)
rhel/systemd/cloud-init.target | 7 +
setup.py | 70 +-----
tools/read-version | 28 +--
30 files changed, 1417 insertions(+), 98 deletions(-)
create mode 100644 redhat/.gitignore
create mode 100644 redhat/Makefile
create mode 100644 redhat/Makefile.common
create mode 100644 redhat/cloud-init-tmpfiles.conf
create mode 100644 redhat/cloud-init.spec.template
create mode 100644 redhat/gating.yaml
create mode 100644 redhat/rpmbuild/BUILD/.gitignore
create mode 100644 redhat/rpmbuild/RPMS/.gitignore
create mode 100644 redhat/rpmbuild/SOURCES/.gitignore
@@ -66,9 +73,10 @@ Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
create mode 100644 rhel/systemd/cloud-final.service
create mode 100644 rhel/systemd/cloud-init-local.service
create mode 100644 rhel/systemd/cloud-init.service
create mode 100644 rhel/systemd/cloud-init.target
diff --git a/cloudinit/config/cc_chef.py b/cloudinit/config/cc_chef.py
index 46abedd1..fe7bda8c 100644
index 0ad6b7f..e4408a4 100644
--- a/cloudinit/config/cc_chef.py
+++ b/cloudinit/config/cc_chef.py
@@ -33,7 +33,7 @@ file).
@@ -80,7 +88,7 @@ index 46abedd1..fe7bda8c 100644
validation_cert: (optional string to be written to file validation_key)
special value 'system' means set use existing file
validation_key: (optional the path for validation_cert. default
@@ -88,7 +88,7 @@ CHEF_DIRS = tuple([
@@ -89,7 +89,7 @@ CHEF_DIRS = tuple([
'/var/lib/chef',
'/var/cache/chef',
'/var/backups/chef',
@@ -89,20 +97,20 @@ index 46abedd1..fe7bda8c 100644
])
REQUIRED_CHEF_DIRS = tuple([
'/etc/chef',
@@ -112,7 +112,7 @@ CHEF_RB_TPL_DEFAULTS = {
@@ -113,7 +113,7 @@ CHEF_RB_TPL_DEFAULTS = {
'json_attribs': CHEF_FB_PATH,
'file_cache_path': "/var/cache/chef",
'file_backup_path': "/var/backups/chef",
- 'pid_file': "/var/run/chef/client.pid",
+ 'pid_file': "/run/chef/client.pid",
'show_time': True,
'encrypted_data_bag_secret': None,
}
CHEF_RB_TPL_BOOL_KEYS = frozenset(['show_time'])
diff --git a/cloudinit/settings.py b/cloudinit/settings.py
index b1ebaade..c5367687 100644
index ca4ffa8..3a04a58 100644
--- a/cloudinit/settings.py
+++ b/cloudinit/settings.py
@@ -44,13 +44,16 @@ CFG_BUILTIN = {
@@ -46,13 +46,16 @@ CFG_BUILTIN = {
],
'def_log_file': '/var/log/cloud-init.log',
'log_cfgs': [],
@@ -123,7 +131,7 @@ index b1ebaade..c5367687 100644
'vendor_data': {'enabled': True, 'prefix': []},
diff --git a/rhel/README.rhel b/rhel/README.rhel
new file mode 100644
index 00000000..aa29630d
index 0000000..aa29630
--- /dev/null
+++ b/rhel/README.rhel
@@ -0,0 +1,5 @@
@@ -134,14 +142,14 @@ index 00000000..aa29630d
+ - grub_dpkg
diff --git a/rhel/cloud-init-tmpfiles.conf b/rhel/cloud-init-tmpfiles.conf
new file mode 100644
index 00000000..0c6d2a3b
index 0000000..0c6d2a3
--- /dev/null
+++ b/rhel/cloud-init-tmpfiles.conf
@@ -0,0 +1 @@
+d /run/cloud-init 0700 root root - -
diff --git a/rhel/cloud.cfg b/rhel/cloud.cfg
new file mode 100644
index 00000000..f0db3c12
index 0000000..82e8bf6
--- /dev/null
+++ b/rhel/cloud.cfg
@@ -0,0 +1,69 @@
@@ -204,7 +212,7 @@ index 00000000..f0db3c12
+ name: cloud-user
+ lock_passwd: true
+ gecos: Cloud User
+ groups: [wheel, adm, systemd-journal]
+ groups: [adm, systemd-journal]
+ sudo: ["ALL=(ALL) NOPASSWD:ALL"]
+ shell: /bin/bash
+ distro: rhel
@@ -216,7 +224,7 @@ index 00000000..f0db3c12
+# vim:syntax=yaml
diff --git a/rhel/systemd/cloud-config.service b/rhel/systemd/cloud-config.service
new file mode 100644
index 00000000..12ca9dfd
index 0000000..f3dcd4b
--- /dev/null
+++ b/rhel/systemd/cloud-config.service
@@ -0,0 +1,18 @@
@@ -237,10 +245,10 @@ index 00000000..12ca9dfd
+StandardOutput=journal+console
+
+[Install]
+WantedBy=multi-user.target
+WantedBy=cloud-init.target
diff --git a/rhel/systemd/cloud-config.target b/rhel/systemd/cloud-config.target
new file mode 100644
index 00000000..ae9b7d02
index 0000000..ae9b7d0
--- /dev/null
+++ b/rhel/systemd/cloud-config.target
@@ -0,0 +1,11 @@
@@ -257,7 +265,7 @@ index 00000000..ae9b7d02
+After=cloud-init-local.service cloud-init.service
diff --git a/rhel/systemd/cloud-final.service b/rhel/systemd/cloud-final.service
new file mode 100644
index 00000000..32a83d85
index 0000000..739b7e3
--- /dev/null
+++ b/rhel/systemd/cloud-final.service
@@ -0,0 +1,19 @@
@@ -279,10 +287,10 @@ index 00000000..32a83d85
+StandardOutput=journal+console
+
+[Install]
+WantedBy=multi-user.target
+WantedBy=cloud-init.target
diff --git a/rhel/systemd/cloud-init-local.service b/rhel/systemd/cloud-init-local.service
new file mode 100644
index 00000000..656eddb9
index 0000000..8f9f6c9
--- /dev/null
+++ b/rhel/systemd/cloud-init-local.service
@@ -0,0 +1,31 @@
@@ -316,10 +324,10 @@ index 00000000..656eddb9
+StandardOutput=journal+console
+
+[Install]
+WantedBy=multi-user.target
+WantedBy=cloud-init.target
diff --git a/rhel/systemd/cloud-init.service b/rhel/systemd/cloud-init.service
new file mode 100644
index 00000000..68fc5f19
index 0000000..d0023a0
--- /dev/null
+++ b/rhel/systemd/cloud-init.service
@@ -0,0 +1,25 @@
@@ -347,24 +355,40 @@ index 00000000..68fc5f19
+StandardOutput=journal+console
+
+[Install]
+WantedBy=multi-user.target
+WantedBy=cloud-init.target
diff --git a/rhel/systemd/cloud-init.target b/rhel/systemd/cloud-init.target
new file mode 100644
index 0000000..083c3b6
--- /dev/null
+++ b/rhel/systemd/cloud-init.target
@@ -0,0 +1,7 @@
+# cloud-init target is enabled by cloud-init-generator
+# To disable it you can either:
+# a.) boot with kernel cmdline of 'cloud-init=disabled'
+# b.) touch a file /etc/cloud/cloud-init.disabled
+[Unit]
+Description=Cloud-init target
+After=multi-user.target
diff --git a/setup.py b/setup.py
index ea37efc3..06ae48a6 100755
index 01a67b9..b2ac9bb 100755
--- a/setup.py
+++ b/setup.py
@@ -135,11 +135,6 @@ INITSYS_FILES = {
@@ -139,14 +139,6 @@ INITSYS_FILES = {
'sysvinit_deb': [f for f in glob('sysvinit/debian/*') if is_f(f)],
'sysvinit_openrc': [f for f in glob('sysvinit/gentoo/*') if is_f(f)],
'sysvinit_suse': [f for f in glob('sysvinit/suse/*') if is_f(f)],
- 'systemd': [render_tmpl(f)
- for f in (glob('systemd/*.tmpl') +
- glob('systemd/*.service') +
- glob('systemd/*.target')) if is_f(f)],
- 'systemd.generators': [f for f in glob('systemd/*-generator') if is_f(f)],
- glob('systemd/*.target'))
- if (is_f(f) and not is_generator(f))],
- 'systemd.generators': [
- render_tmpl(f, mode=0o755)
- for f in glob('systemd/*') if is_f(f) and is_generator(f)],
'upstart': [f for f in glob('upstart/*') if is_f(f)],
}
INITSYS_ROOTS = {
@@ -148,9 +143,6 @@ INITSYS_ROOTS = {
@@ -155,9 +147,6 @@ INITSYS_ROOTS = {
'sysvinit_deb': 'etc/init.d',
'sysvinit_openrc': 'etc/init.d',
'sysvinit_suse': 'etc/init.d',
@ -374,7 +398,7 @@ index ea37efc3..06ae48a6 100755
'upstart': 'etc/init/',
}
INITSYS_TYPES = sorted([f.partition(".")[0] for f in INITSYS_ROOTS.keys()])
@@ -188,47 +180,6 @@ class MyEggInfo(egg_info):
@@ -208,47 +197,6 @@ class MyEggInfo(egg_info):
return ret
@@ -422,20 +446,24 @@ index ea37efc3..06ae48a6 100755
if not in_virtualenv():
USR = "/" + USR
ETC = "/" + ETC
@@ -239,11 +190,9 @@ if not in_virtualenv():
@@ -258,14 +206,11 @@ if not in_virtualenv():
INITSYS_ROOTS[k] = "/" + INITSYS_ROOTS[k]
data_files = [
(ETC + '/bash_completion.d', ['bash_completion/cloud-init']),
- (ETC + '/cloud', [render_tmpl("config/cloud.cfg.tmpl")]),
+ (ETC + '/bash_completion.d', ['bash_completion/cloud-init']),
(ETC + '/cloud/cloud.cfg.d', glob('config/cloud.cfg.d/*')),
(ETC + '/cloud/templates', glob('templates/*')),
- (USR_LIB_EXEC + '/cloud-init', ['tools/ds-identify',
- 'tools/uncloud-init',
+ (USR_LIB_EXEC + '/cloud-init', ['tools/uncloud-init',
'tools/write-ssh-key-fingerprints']),
- (USR + '/share/bash-completion/completions',
- ['bash_completion/cloud-init']),
(USR + '/share/doc/cloud-init', [f for f in glob('doc/*') if is_f(f)]),
(USR + '/share/doc/cloud-init/examples',
@@ -255,15 +204,8 @@ if os.uname()[0] != 'FreeBSD':
[f for f in glob('doc/examples/*') if is_f(f)]),
@@ -276,15 +221,8 @@ if os.uname()[0] != 'FreeBSD':
data_files.extend([
(ETC + '/NetworkManager/dispatcher.d/',
['tools/hook-network-manager']),
@@ -452,7 +480,7 @@ index ea37efc3..06ae48a6 100755
requirements = read_requires()
@@ -278,8 +220,6 @@ setuptools.setup(
@@ -299,8 +237,6 @@ setuptools.setup(
scripts=['tools/cloud-init-per'],
license='Dual-licensed under GPLv3 or Apache 2.0',
data_files=data_files,
@@ -462,10 +490,10 @@ index ea37efc3..06ae48a6 100755
'console_scripts': [
'cloud-init = cloudinit.cmd.main:main',
diff --git a/tools/read-version b/tools/read-version
index e69c2ce0..d43cc8f0 100755
index 6dca659..d43cc8f 100755
--- a/tools/read-version
+++ b/tools/read-version
@@ -65,29 +65,8 @@ output_json = '--json' in sys.argv
@@ -65,32 +65,8 @@ output_json = '--json' in sys.argv
src_version = ci_version.version_string()
version_long = None
@@ -475,9 +503,12 @@ index e69c2ce0..d43cc8f0 100755
- flags = ['--tags']
- cmd = ['git', 'describe', '--abbrev=8', '--match=[0-9]*'] + flags
-
- version = tiny_p(cmd).strip()
- try:
- version = tiny_p(cmd).strip()
- except RuntimeError:
- version = None
-
- if not version.startswith(src_version):
- if version is None or not version.startswith(src_version):
- sys.stderr.write("git describe version (%s) differs from "
- "cloudinit.version (%s)\n" % (version, src_version))
- sys.stderr.write(
@ -498,5 +529,5 @@ index e69c2ce0..d43cc8f0 100755
# version is X.Y.Z[+xxx.gHASH]
# version_long is None or X.Y.Z-xxx-gHASH
--
2.20.1
1.8.3.1
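The new cloud-init.target above documents two ways to disable cloud-init (kernel cmdline or a marker file). The actual check lives in generator shell code this patch does not show; the following is only a Python sketch of the documented logic, with an assumed function name and inputs:

```python
def cloud_init_enabled(kernel_cmdline, etc_cloud_files):
    """Sketch of the disable knobs named in rhel/systemd/cloud-init.target.

    a) kernel cmdline contains 'cloud-init=disabled'
    b) /etc/cloud/cloud-init.disabled exists
    """
    if 'cloud-init=disabled' in kernel_cmdline.split():
        return False
    if 'cloud-init.disabled' in etc_cloud_files:
        return False
    return True
```

Either knob results in the generator not pulling cloud-init.target into the boot transaction.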


@@ -1,271 +1,271 @@
From 0bff7d73c49043b0820d0231c9a47539287f35e3 Mon Sep 17 00:00:00 2001
From aa7ae9da7e10a5bcf190f8df3072e3864b2d8fb3 Mon Sep 17 00:00:00 2001
From: Miroslav Rezanina <mrezanin@redhat.com>
Date: Thu, 31 May 2018 19:37:55 +0200
Subject: Do not write NM_CONTROLLED=no in generated interface config files
X-downstream-only: true
Signed-off-by: Ryan McCabe <rmccabe@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
cloudinit/net/sysconfig.py | 1 -
tests/unittests/test_net.py | 30 ------------------------------
2 files changed, 31 deletions(-)
diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
index 17293e1d..ae0554ef 100644
index 310cdf0..8bd7e88 100644
--- a/cloudinit/net/sysconfig.py
+++ b/cloudinit/net/sysconfig.py
@@ -250,7 +250,6 @@ class Renderer(renderer.Renderer):
@@ -272,7 +272,6 @@ class Renderer(renderer.Renderer):
iface_defaults = tuple([
('ONBOOT', True),
('USERCTL', False),
- ('NM_CONTROLLED', False),
('BOOTPROTO', 'none'),
('STARTMODE', 'auto'),
])
diff --git a/tests/unittests/test_net.py b/tests/unittests/test_net.py
index 195f261c..5f1aa3e7 100644
index 01119e0..a931a3e 100644
--- a/tests/unittests/test_net.py
+++ b/tests/unittests/test_net.py
@@ -175,7 +175,6 @@ GATEWAY=172.19.3.254
@@ -530,7 +530,6 @@ GATEWAY=172.19.3.254
HWADDR=fa:16:3e:ed:9a:59
IPADDR=172.19.1.34
NETMASK=255.255.252.0
-NM_CONTROLLED=no
ONBOOT=yes
STARTMODE=auto
TYPE=Ethernet
USERCTL=no
@@ -279,7 +278,6 @@ IPADDR=172.19.1.34
@@ -636,7 +635,6 @@ IPADDR=172.19.1.34
IPADDR1=10.0.0.10
NETMASK=255.255.252.0
NETMASK1=255.255.255.0
-NM_CONTROLLED=no
ONBOOT=yes
STARTMODE=auto
TYPE=Ethernet
USERCTL=no
@@ -407,7 +405,6 @@ IPV6ADDR_SECONDARIES="2001:DB9::10/64 2001:DB10::10/64"
@@ -772,7 +770,6 @@ IPV6ADDR_SECONDARIES="2001:DB9::10/64 2001:DB10::10/64"
IPV6INIT=yes
IPV6_DEFAULTGW=2001:DB8::1
NETMASK=255.255.252.0
-NM_CONTROLLED=no
ONBOOT=yes
STARTMODE=auto
TYPE=Ethernet
USERCTL=no
@@ -523,7 +520,6 @@ NETWORK_CONFIGS = {
@@ -889,7 +886,6 @@ NETWORK_CONFIGS = {
BOOTPROTO=none
DEVICE=eth1
HWADDR=cf:d6:af:48:e8:80
- NM_CONTROLLED=no
ONBOOT=yes
STARTMODE=auto
TYPE=Ethernet
USERCTL=no"""),
@@ -539,7 +535,6 @@ NETWORK_CONFIGS = {
@@ -907,7 +903,6 @@ NETWORK_CONFIGS = {
IPADDR=192.168.21.3
NETMASK=255.255.255.0
METRIC=10000
- NM_CONTROLLED=no
ONBOOT=yes
STARTMODE=auto
TYPE=Ethernet
USERCTL=no"""),
@@ -652,7 +647,6 @@ NETWORK_CONFIGS = {
@@ -1022,7 +1017,6 @@ NETWORK_CONFIGS = {
IPV6ADDR=2001:1::1/64
IPV6INIT=yes
NETMASK=255.255.255.0
- NM_CONTROLLED=no
ONBOOT=yes
STARTMODE=auto
TYPE=Ethernet
USERCTL=no
@@ -894,14 +888,12 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
@@ -1491,7 +1485,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
DHCPV6C=yes
IPV6INIT=yes
MACADDR=aa:bb:cc:dd:ee:ff
- NM_CONTROLLED=no
ONBOOT=yes
STARTMODE=auto
TYPE=Bond
USERCTL=no"""),
'ifcfg-bond0.200': textwrap.dedent("""\
@@ -1500,7 +1493,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
BOOTPROTO=dhcp
DEVICE=bond0.200
DHCLIENT_SET_DEFAULT_ROUTE=no
- NM_CONTROLLED=no
ONBOOT=yes
PHYSDEV=bond0
TYPE=Ethernet
@@ -918,7 +910,6 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
STARTMODE=auto
@@ -1519,7 +1511,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
IPV6_DEFAULTGW=2001:4800:78ff:1b::1
MACADDR=bb:bb:bb:bb:bb:aa
NETMASK=255.255.255.0
- NM_CONTROLLED=no
ONBOOT=yes
PRIO=22
STP=no
@@ -928,7 +919,6 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
STARTMODE=auto
@@ -1530,7 +1521,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
BOOTPROTO=none
DEVICE=eth0
HWADDR=c0:d6:9f:2c:e8:80
- NM_CONTROLLED=no
ONBOOT=yes
STARTMODE=auto
TYPE=Ethernet
USERCTL=no"""),
@@ -945,7 +935,6 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
@@ -1548,7 +1538,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
MTU=1500
NETMASK=255.255.255.0
NETMASK1=255.255.255.0
- NM_CONTROLLED=no
ONBOOT=yes
PHYSDEV=eth0
TYPE=Ethernet
@@ -956,7 +945,6 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
STARTMODE=auto
@@ -1560,7 +1549,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
DEVICE=eth1
HWADDR=aa:d6:9f:2c:e8:80
MASTER=bond0
- NM_CONTROLLED=no
ONBOOT=yes
STARTMODE=auto
SLAVE=yes
TYPE=Ethernet
@@ -966,7 +954,6 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
@@ -1571,7 +1559,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
DEVICE=eth2
HWADDR=c0:bb:9f:2c:e8:80
MASTER=bond0
- NM_CONTROLLED=no
ONBOOT=yes
STARTMODE=auto
SLAVE=yes
TYPE=Ethernet
@@ -976,7 +963,6 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
@@ -1582,7 +1569,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
BRIDGE=br0
DEVICE=eth3
HWADDR=66:bb:9f:2c:e8:80
- NM_CONTROLLED=no
ONBOOT=yes
STARTMODE=auto
TYPE=Ethernet
USERCTL=no"""),
@@ -985,7 +971,6 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
@@ -1592,7 +1578,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
BRIDGE=br0
DEVICE=eth4
HWADDR=98:bb:9f:2c:e8:80
- NM_CONTROLLED=no
ONBOOT=yes
STARTMODE=auto
TYPE=Ethernet
USERCTL=no"""),
@@ -993,7 +978,6 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
BOOTPROTO=dhcp
@@ -1602,7 +1587,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
DEVICE=eth5
DHCLIENT_SET_DEFAULT_ROUTE=no
HWADDR=98:bb:9f:2c:e8:8a
- NM_CONTROLLED=no
ONBOOT=no
STARTMODE=manual
TYPE=Ethernet
USERCTL=no""")
@@ -1356,7 +1340,6 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
@@ -2088,7 +2072,6 @@ iface bond0 inet6 static
MTU=9000
NETMASK=255.255.255.0
NETMASK1=255.255.255.0
- NM_CONTROLLED=no
ONBOOT=yes
STARTMODE=auto
TYPE=Bond
USERCTL=no
@@ -1366,7 +1349,6 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
@@ -2099,7 +2082,6 @@ iface bond0 inet6 static
DEVICE=bond0s0
HWADDR=aa:bb:cc:dd:e8:00
MASTER=bond0
- NM_CONTROLLED=no
ONBOOT=yes
SLAVE=yes
TYPE=Ethernet
@@ -1388,7 +1370,6 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
STARTMODE=auto
@@ -2122,7 +2104,6 @@ iface bond0 inet6 static
DEVICE=bond0s1
HWADDR=aa:bb:cc:dd:e8:01
MASTER=bond0
- NM_CONTROLLED=no
ONBOOT=yes
SLAVE=yes
TYPE=Ethernet
@@ -1426,7 +1407,6 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
STARTMODE=auto
@@ -2161,7 +2142,6 @@ iface bond0 inet6 static
BOOTPROTO=none
DEVICE=en0
HWADDR=aa:bb:cc:dd:e8:00
- NM_CONTROLLED=no
ONBOOT=yes
STARTMODE=auto
TYPE=Ethernet
USERCTL=no"""),
@@ -1443,7 +1423,6 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
@@ -2180,7 +2160,6 @@ iface bond0 inet6 static
MTU=2222
NETMASK=255.255.255.0
NETMASK1=255.255.255.0
- NM_CONTROLLED=no
ONBOOT=yes
PHYSDEV=en0
TYPE=Ethernet
@@ -1484,7 +1463,6 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
STARTMODE=auto
@@ -2222,7 +2201,6 @@ iface bond0 inet6 static
DEVICE=br0
IPADDR=192.168.2.2
NETMASK=255.255.255.0
- NM_CONTROLLED=no
ONBOOT=yes
PRIO=22
STP=no
@@ -1498,7 +1476,6 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
HWADDR=52:54:00:12:34:00
STARTMODE=auto
@@ -2238,7 +2216,6 @@ iface bond0 inet6 static
IPADDR6=2001:1::100/96
IPV6ADDR=2001:1::100/96
IPV6INIT=yes
- NM_CONTROLLED=no
ONBOOT=yes
STARTMODE=auto
TYPE=Ethernet
USERCTL=no
@@ -1510,7 +1487,6 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
HWADDR=52:54:00:12:34:01
@@ -2252,7 +2229,6 @@ iface bond0 inet6 static
IPADDR6=2001:1::101/96
IPV6ADDR=2001:1::101/96
IPV6INIT=yes
- NM_CONTROLLED=no
ONBOOT=yes
STARTMODE=auto
TYPE=Ethernet
USERCTL=no
@@ -1584,7 +1560,6 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
@@ -2327,7 +2303,6 @@ iface bond0 inet6 static
HWADDR=52:54:00:12:34:00
IPADDR=192.168.1.2
NETMASK=255.255.255.0
- NM_CONTROLLED=no
ONBOOT=no
STARTMODE=manual
TYPE=Ethernet
USERCTL=no
@@ -1594,7 +1569,6 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
@@ -2338,7 +2313,6 @@ iface bond0 inet6 static
DEVICE=eth1
HWADDR=52:54:00:12:34:aa
MTU=1480
- NM_CONTROLLED=no
ONBOOT=yes
STARTMODE=auto
TYPE=Ethernet
USERCTL=no
@@ -1603,7 +1577,6 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
@@ -2348,7 +2322,6 @@ iface bond0 inet6 static
BOOTPROTO=none
DEVICE=eth2
HWADDR=52:54:00:12:34:ff
- NM_CONTROLLED=no
ONBOOT=no
STARTMODE=manual
TYPE=Ethernet
USERCTL=no
@@ -1969,7 +1942,6 @@ class TestRhelSysConfigRendering(CiTestCase):
@@ -2766,7 +2739,6 @@ class TestRhelSysConfigRendering(CiTestCase):
BOOTPROTO=dhcp
DEVICE=eth1000
HWADDR=07-1C-C6-75-A4-BE
HWADDR=07-1c-c6-75-a4-be
-NM_CONTROLLED=no
ONBOOT=yes
STARTMODE=auto
TYPE=Ethernet
USERCTL=no
@@ -2090,7 +2062,6 @@ GATEWAY=10.0.2.2
@@ -2888,7 +2860,6 @@ GATEWAY=10.0.2.2
HWADDR=52:54:00:12:34:00
IPADDR=10.0.2.15
NETMASK=255.255.255.0
-NM_CONTROLLED=no
ONBOOT=yes
STARTMODE=auto
TYPE=Ethernet
USERCTL=no
@@ -2111,7 +2082,6 @@ USERCTL=no
@@ -2961,7 +2932,6 @@ USERCTL=no
#
BOOTPROTO=dhcp
DEVICE=eth0
-NM_CONTROLLED=no
ONBOOT=yes
STARTMODE=auto
TYPE=Ethernet
USERCTL=no
--
2.20.1
1.8.3.1
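With NM_CONTROLLED dropped from the renderer's iface_defaults, every rendered ifcfg file becomes eligible for NetworkManager management. A self-contained sketch of what the remaining defaults render to (yes/no formatting assumed to match the sysconfig renderer):

```python
# Per-interface defaults after the patch: NM_CONTROLLED is gone,
# so NetworkManager may manage the rendered interfaces.
IFACE_DEFAULTS = (
    ('ONBOOT', True),
    ('USERCTL', False),
    ('BOOTPROTO', 'none'),
    ('STARTMODE', 'auto'),
)

def render_iface_defaults():
    # the sysconfig renderer writes booleans as yes/no
    to_str = {True: 'yes', False: 'no'}
    return '\n'.join('%s=%s' % (k, to_str.get(v, v))
                     for k, v in IFACE_DEFAULTS)
```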


@@ -1,4 +1,4 @@
From fa8f782f5dd24e81f7072bfc24c75340f0972af5 Mon Sep 17 00:00:00 2001
From f15946568fe731dc9bf477f3f06c9c4e0f74f7c1 Mon Sep 17 00:00:00 2001
From: Lars Kellogg-Stedman <lars@redhat.com>
Date: Fri, 7 Apr 2017 18:50:54 -0400
Subject: limit permissions on def_log_file
@@ -9,7 +9,6 @@ configurable via the def_log_file_mode option in cloud.cfg.
LP: #1541196
Resolves: rhbz#1424612
X-approved-upstream: true
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
cloudinit/settings.py | 1 +
cloudinit/stages.py | 3 ++-
@@ -17,10 +16,10 @@ Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
3 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/cloudinit/settings.py b/cloudinit/settings.py
index c5367687..d982a4d6 100644
index 3a04a58..439eee0 100644
--- a/cloudinit/settings.py
+++ b/cloudinit/settings.py
@@ -43,6 +43,7 @@ CFG_BUILTIN = {
@@ -45,6 +45,7 @@ CFG_BUILTIN = {
'None',
],
'def_log_file': '/var/log/cloud-init.log',
@@ -29,10 +28,10 @@ index c5367687..d982a4d6 100644
'mount_default_fields': [None, None, 'auto', 'defaults,nofail', '0', '2'],
'ssh_deletekeys': False,
diff --git a/cloudinit/stages.py b/cloudinit/stages.py
index 8a064124..4f15484d 100644
index 71f3a49..68b83af 100644
--- a/cloudinit/stages.py
+++ b/cloudinit/stages.py
@@ -148,8 +148,9 @@ class Init(object):
@@ -149,8 +149,9 @@ class Init(object):
def _initialize_filesystem(self):
util.ensure_dirs(self._initial_subdirs())
log_file = util.get_cfg_option_str(self.cfg, 'def_log_file')
@@ -44,7 +43,7 @@ index 8a064124..4f15484d 100644
if not perms:
perms = {}
diff --git a/doc/examples/cloud-config.txt b/doc/examples/cloud-config.txt
index eb84dcf5..0e82b83e 100644
index eb84dcf..0e82b83 100644
--- a/doc/examples/cloud-config.txt
+++ b/doc/examples/cloud-config.txt
@@ -413,10 +413,14 @@ timezone: US/Eastern
@@ -63,5 +62,5 @@ index eb84dcf5..0e82b83e 100644
# you can set passwords for a user or multiple users
--
2.20.1
1.8.3.1
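The stages.py hunk reads an optional def_log_file_mode and passes it to util.ensure_file. A minimal sketch of the effect, assuming the 0o600 default; the helper name here is illustrative, not cloud-init's API:

```python
import os

def ensure_log_file(path, mode=0o600):
    # Create the file if missing, then clamp permissions so the log
    # is not world-readable (rhbz#1424612 / LP: #1541196).
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, mode)
    os.close(fd)
    os.chmod(path, mode)  # clamp even if the file pre-existed
    return os.stat(path).st_mode & 0o777
```

With def_log_file_mode in cloud.cfg, the mode becomes configurable rather than hard-coded.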


@@ -1,64 +0,0 @@
From 8a8af21fc8fff984f2b4285e9993cfd50cad70c4 Mon Sep 17 00:00:00 2001
From: Lars Kellogg-Stedman <lars@redhat.com>
Date: Thu, 15 Jun 2017 12:20:39 -0400
Subject: azure: ensure that networkmanager hook script runs
The networkmanager hook script was failing to run due to the changes
we made to resolve rhbz#1440831. This corrects the regression by
allowing the NM hook script to run regardless of whether or not
cloud-init is "enabled".
Resolves: rhbz#1460206
X-downstream-only: true
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
tools/hook-dhclient | 3 +--
tools/hook-network-manager | 3 +--
tools/hook-rhel.sh | 3 +--
3 files changed, 3 insertions(+), 6 deletions(-)
diff --git a/tools/hook-dhclient b/tools/hook-dhclient
index 02122f37..181cd51e 100755
--- a/tools/hook-dhclient
+++ b/tools/hook-dhclient
@@ -13,8 +13,7 @@ is_azure() {
}
is_enabled() {
- # only execute hooks if cloud-init is enabled and on azure
- [ -e /run/cloud-init/enabled ] || return 1
+ # only execute hooks if cloud-init is running on azure
is_azure
}
diff --git a/tools/hook-network-manager b/tools/hook-network-manager
index 67d9044a..1d52cad7 100755
--- a/tools/hook-network-manager
+++ b/tools/hook-network-manager
@@ -13,8 +13,7 @@ is_azure() {
}
is_enabled() {
- # only execute hooks if cloud-init is enabled and on azure
- [ -e /run/cloud-init/enabled ] || return 1
+ # only execute hooks if cloud-init is running on azure
is_azure
}
diff --git a/tools/hook-rhel.sh b/tools/hook-rhel.sh
index 513a5515..d75767e2 100755
--- a/tools/hook-rhel.sh
+++ b/tools/hook-rhel.sh
@@ -13,8 +13,7 @@ is_azure() {
}
is_enabled() {
- # only execute hooks if cloud-init is enabled and on azure
- [ -e /run/cloud-init/enabled ] || return 1
+ # only execute hooks if cloud-init is running on azure
is_azure
}
--
2.20.1


@@ -0,0 +1,34 @@
From e2b22710db558df261883eaf5dde866c69ba17dd Mon Sep 17 00:00:00 2001
From: Miroslav Rezanina <mrezanin@redhat.com>
Date: Thu, 31 May 2018 20:00:32 +0200
Subject: sysconfig: Don't write BOOTPROTO=dhcp for ipv6 dhcp
Don't write BOOTPROTO=dhcp for ipv6 dhcp, as BOOTPROTO applies
only to ipv4. Explicitly write IPV6_AUTOCONF=no for dhcp on ipv6.
X-downstream-only: yes
Resolves: rhbz#1519271
Signed-off-by: Ryan McCabe <rmccabe@redhat.com>
Merged patches (19.4):
- 6444df4 sysconfig: Don't disable IPV6_AUTOCONF
---
tests/unittests/test_net.py | 1 +
1 file changed, 1 insertion(+)
diff --git a/tests/unittests/test_net.py b/tests/unittests/test_net.py
index a931a3e..1306a0f 100644
--- a/tests/unittests/test_net.py
+++ b/tests/unittests/test_net.py
@@ -1483,6 +1483,7 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
BOOTPROTO=none
DEVICE=bond0
DHCPV6C=yes
+ IPV6_AUTOCONF=no
IPV6INIT=yes
MACADDR=aa:bb:cc:dd:ee:ff
ONBOOT=yes
--
1.8.3.1
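The expected ifcfg output above pins down the downstream dhcp6 behavior. As a sketch, the key set a dhcp6 subnet ends up with (renderer details simplified; yes/no formatting assumed):

```python
def dhcp6_sysconfig_keys():
    # Per the downstream patch: BOOTPROTO stays off for ipv6 dhcp
    # (it applies only to ipv4), DHCPV6C/IPV6INIT go on, and
    # IPV6_AUTOCONF is written explicitly as 'no' (rhbz#1519271).
    cfg = {'IPV6INIT': True, 'DHCPV6C': True, 'IPV6_AUTOCONF': False}
    to_str = {True: 'yes', False: 'no'}
    return {k: to_str[v] for k, v in cfg.items()}
```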


@@ -1,4 +1,4 @@
From 21ea1cda0055416119edea44de95b5606f0b0e15 Mon Sep 17 00:00:00 2001
From 9a09efb49c2d7cade1f0ac309293166c3c2d8d7b Mon Sep 17 00:00:00 2001
From: Vitaly Kuznetsov <vkuznets@redhat.com>
Date: Tue, 17 Apr 2018 13:07:54 +0200
Subject: DataSourceAzure.py: use hostnamectl to set hostname
@@ -34,16 +34,15 @@ X-downstream-only: yes
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
cloudinit/sources/DataSourceAzure.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
index e076d5dc..7dbeb04c 100644
index 24f448c..6fb889c 100755
--- a/cloudinit/sources/DataSourceAzure.py
+++ b/cloudinit/sources/DataSourceAzure.py
@@ -238,7 +238,7 @@ def get_hostname(hostname_command='hostname'):
@@ -256,7 +256,7 @@ def get_hostname(hostname_command='hostname'):
def set_hostname(hostname, hostname_command='hostname'):
@@ -51,7 +50,7 @@ index e076d5dc..7dbeb04c 100644
+ util.subp(['hostnamectl', 'set-hostname', str(hostname)])
@contextlib.contextmanager
@azure_ds_telemetry_reporter
--
2.20.1
1.8.3.1
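The one-line change replaces the plain hostname call with hostnamectl via util.subp. A hedged standalone equivalent using subprocess (cloud-init's real wrapper is cloudinit.util.subp; the subp below is a minimal stand-in):

```python
import subprocess

def subp(cmd):
    # minimal stand-in for cloudinit.util.subp: run, capture, raise on failure
    res = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return res.stdout, res.stderr

def set_hostname(hostname):
    # hostnamectl persists the static hostname in /etc/hostname
    # in addition to setting the transient one
    return subp(['hostnamectl', 'set-hostname', str(hostname)])
```

(set_hostname requires root and a systemd host; the wrapper itself is testable with any command.)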


@@ -1,45 +0,0 @@
From 471353b3c3bf5cba5cab4d1b203b1c259c709fde Mon Sep 17 00:00:00 2001
From: Miroslav Rezanina <mrezanin@redhat.com>
Date: Thu, 31 May 2018 20:00:32 +0200
Subject: sysconfig: Don't write BOOTPROTO=dhcp for ipv6 dhcp
Don't write BOOTPROTO=dhcp for ipv6 dhcp, as BOOTPROTO applies
only to ipv4. Explicitly write IPV6_AUTOCONF=no for dhcp on ipv6.
X-downstream-only: yes
Resolves: rhbz#1519271
Signed-off-by: Ryan McCabe <rmccabe@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
cloudinit/net/sysconfig.py | 1 +
tests/unittests/test_net.py | 1 +
2 files changed, 2 insertions(+)
diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
index ae0554ef..ec166cf1 100644
--- a/cloudinit/net/sysconfig.py
+++ b/cloudinit/net/sysconfig.py
@@ -310,6 +310,7 @@ class Renderer(renderer.Renderer):
if subnet_type == 'dhcp6':
iface_cfg['IPV6INIT'] = True
iface_cfg['DHCPV6C'] = True
+ iface_cfg['IPV6_AUTOCONF'] = False
elif subnet_type in ['dhcp4', 'dhcp']:
iface_cfg['BOOTPROTO'] = 'dhcp'
elif subnet_type == 'static':
diff --git a/tests/unittests/test_net.py b/tests/unittests/test_net.py
index 5f1aa3e7..8bcafe08 100644
--- a/tests/unittests/test_net.py
+++ b/tests/unittests/test_net.py
@@ -886,6 +886,7 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
BOOTPROTO=none
DEVICE=bond0
DHCPV6C=yes
+ IPV6_AUTOCONF=no
IPV6INIT=yes
MACADDR=aa:bb:cc:dd:ee:ff
ONBOOT=yes
--
2.20.1


@@ -1,4 +1,4 @@
From ffabcbbf0d4e990f04ab755dd87bb24e70c4fe78 Mon Sep 17 00:00:00 2001
From 13ee71a3add0dd2e7c60fc672134e696bd7f6a77 Mon Sep 17 00:00:00 2001
From: Eduardo Otubo <otubo@redhat.com>
Date: Wed, 20 Mar 2019 11:45:59 +0100
Subject: include 'NOZEROCONF=yes' in /etc/sysconfig/network
@@ -21,17 +21,16 @@ Resolves: rhbz#1653131
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
cloudinit/net/sysconfig.py | 11 ++++++++++-
tests/unittests/test_net.py | 1 -
2 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
index dc1815d9..52bb8483 100644
index 8bd7e88..810b283 100644
--- a/cloudinit/net/sysconfig.py
+++ b/cloudinit/net/sysconfig.py
@@ -684,7 +684,16 @@ class Renderer(renderer.Renderer):
@@ -754,7 +754,16 @@ class Renderer(renderer.Renderer):
# Distros configuring /etc/sysconfig/network as a file e.g. Centos
if sysconfig_path.endswith('network'):
util.ensure_dir(os.path.dirname(sysconfig_path))
@@ -50,10 +49,10 @@ index dc1815d9..52bb8483 100644
netcfg.append('NETWORKING_IPV6=yes')
netcfg.append('IPV6_AUTOCONF=no')
diff --git a/tests/unittests/test_net.py b/tests/unittests/test_net.py
index 526a30ed..012c43b5 100644
index 1306a0f..a931a3e 100644
--- a/tests/unittests/test_net.py
+++ b/tests/unittests/test_net.py
@@ -887,7 +887,6 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
@@ -1483,7 +1483,6 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
BOOTPROTO=none
DEVICE=bond0
DHCPV6C=yes
@@ -62,5 +61,5 @@ index 526a30ed..012c43b5 100644
MACADDR=aa:bb:cc:dd:ee:ff
ONBOOT=yes
--
2.20.1
1.8.3.1
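The sysconfig hunk above replaces a blind overwrite of /etc/sysconfig/network with logic that keeps existing content and appends what is missing. A simplified, illustrative sketch of that idempotent merge (function name assumed):

```python
def merge_sysconfig_network(existing_text):
    # keep whatever the image already shipped in /etc/sysconfig/network
    lines = [l for l in existing_text.splitlines() if l.strip()]
    # NOZEROCONF=yes suppresses the 169.254.0.0/16 zeroconf route
    # on RHEL network-scripts (rhbz#1653131)
    if not any(l.startswith('NOZEROCONF=') for l in lines):
        lines.append('NOZEROCONF=yes')
    return '\n'.join(lines) + '\n'
```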


@@ -0,0 +1,56 @@
From 9d951d55a1be44bbeb5df485d14d4f84ddf01142 Mon Sep 17 00:00:00 2001
From: Eduardo Otubo <otubo@redhat.com>
Date: Mon, 2 Mar 2020 10:46:35 +0100
Subject: Remove race condition between cloud-init and NetworkManager
Message-id: <20200302104635.11648-1-otubo@redhat.com>
Patchwork-id: 94098
O-Subject: [RHEL-7.9/RHEL-8.2.0 cloud-init PATCH] Remove race condition between cloud-init and NetworkManager
Bugzilla: 1807797
RH-Acked-by: Cathy Avery <cavery@redhat.com>
RH-Acked-by: Mohammed Gamal <mgamal@redhat.com>
BZ: 1748015
BRANCH: rhel7/master-18.5
BREW: 26924611
BZ: 1807797
BRANCH: rhel820/master-18.5
BREW: 26924957
cloud-init service is set to start before NetworkManager service starts,
but this does not avoid a race condition between them. NetworkManager
starts before cloud-init can write `dns=none' to the file:
/etc/NetworkManager/conf.d/99-cloud-init.conf. This way NetworkManager
doesn't read the configuration and erases all resolv.conf values upon
shutdown. On the next reboot neither cloud-init nor NetworkManager will
write anything to resolv.conf, leaving it blank.
This patch introduces an NM reload (try-restart) at the end of cloud-init
startup so it won't erase resolv.conf upon first shutdown.
x-downstream-only: yes
resolves: rhbz#1748015, rhbz#1807797 and rhbz#1804780
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
---
rhel/systemd/cloud-final.service | 2 ++
1 file changed, 2 insertions(+)
diff --git a/rhel/systemd/cloud-final.service b/rhel/systemd/cloud-final.service
index 739b7e3..f303483 100644
--- a/rhel/systemd/cloud-final.service
+++ b/rhel/systemd/cloud-final.service
@@ -11,6 +11,8 @@ ExecStart=/usr/bin/cloud-init modules --mode=final
RemainAfterExit=yes
TimeoutSec=0
KillMode=process
+ExecStartPost=/bin/echo "try restart NetworkManager.service"
+ExecStartPost=/usr/bin/systemctl try-restart NetworkManager.service
# Output needs to appear in instance console output
StandardOutput=journal+console
--
1.8.3.1

@@ -1,50 +0,0 @@
From 6444df4c91c611c65bb292e75e2726f767edcf2b Mon Sep 17 00:00:00 2001
From: Vitaly Kuznetsov <vkuznets@redhat.com>
Date: Thu, 26 Apr 2018 09:27:49 +0200
Subject: sysconfig: Don't disable IPV6_AUTOCONF
RH-Author: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-id: <20180426092749.7251-2-vkuznets@redhat.com>
Patchwork-id: 79904
O-Subject: [RHEL7.6/7.5.z cloud-init PATCH 1/1] sysconfig: Don't disable IPV6_AUTOCONF
Bugzilla: 1578702
RH-Acked-by: Mohammed Gamal <mgamal@redhat.com>
RH-Acked-by: Cathy Avery <cavery@redhat.com>
RH-Acked-by: Eduardo Otubo <otubo@redhat.com>
Downstream-only commit 118458a3fb ("sysconfig: Don't write BOOTPROTO=dhcp
for ipv6 dhcp") did two things:
1) Disabled BOOTPROTO='dhcp' for dhcp6 setups. This change seems to be
correct as BOOTPROTO is unrelated to IPv6. The change was since merged
upstream (commit a57928d3c314d9568712cd190cb1e721e14c108b).
2) Explicitly disabled AUTOCONF and this broke many valid configurations
using it instead of DHCPV6C. Revert this part of the change. In case
DHCPV6C-only support is needed something like a new 'dhcpv6c_only'
network type needs to be suggested upstream.
X-downstream-only: yes
Resolves: rhbz#1558854
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
cloudinit/net/sysconfig.py | 1 -
1 file changed, 1 deletion(-)
diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
index ec166cf1..ae0554ef 100644
--- a/cloudinit/net/sysconfig.py
+++ b/cloudinit/net/sysconfig.py
@@ -310,7 +310,6 @@ class Renderer(renderer.Renderer):
if subnet_type == 'dhcp6':
iface_cfg['IPV6INIT'] = True
iface_cfg['DHCPV6C'] = True
- iface_cfg['IPV6_AUTOCONF'] = False
elif subnet_type in ['dhcp4', 'dhcp']:
iface_cfg['BOOTPROTO'] = 'dhcp'
elif subnet_type == 'static':
--
2.20.1
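The one-line revert above changes how the sysconfig renderer treats `dhcp6` subnets: they still get `IPV6INIT`/`DHCPV6C`, but `IPV6_AUTOCONF` is no longer forced off. A minimal sketch of the resulting mapping (the function name is hypothetical; the real logic lives inside `Renderer` in `cloudinit/net/sysconfig.py`):

```python
def apply_subnet(iface_cfg, subnet_type):
    """Apply per-subnet-type keys to an ifcfg dict (sketch)."""
    if subnet_type == 'dhcp6':
        iface_cfg['IPV6INIT'] = True
        iface_cfg['DHCPV6C'] = True
        # IPV6_AUTOCONF is deliberately left untouched (rhbz#1558854),
        # so AUTOCONF-based configurations keep working.
    elif subnet_type in ('dhcp4', 'dhcp'):
        iface_cfg['BOOTPROTO'] = 'dhcp'
    return iface_cfg
```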

@@ -1,217 +0,0 @@
From 86bd1e20fc802edfb920fa53bd611d469f83250b Mon Sep 17 00:00:00 2001
From: Eduardo Otubo <otubo@redhat.com>
Date: Fri, 18 Jan 2019 16:55:36 +0100
Subject: net: Make sysconfig renderer compatible with Network Manager.
RH-Author: Eduardo Otubo <otubo@redhat.com>
Message-id: <20190118165536.25963-1-otubo@redhat.com>
Patchwork-id: 84052
O-Subject: [RHEL-8.0 cloud-init PATCH] net: Make sysconfig renderer compatible with Network Manager.
Bugzilla: 1602784
RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com>
RH-Acked-by: Mohammed Gamal <mgamal@redhat.com>
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1602784
Brew: https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=19877292
Tested by: upstream maintainers and me
commit 3861102fcaf47a882516d8b6daab518308eb3086
Author: Eduardo Otubo <otubo@redhat.com>
Date: Fri Jan 18 15:36:19 2019 +0000
net: Make sysconfig renderer compatible with Network Manager.
The 'sysconfig' renderer is activated if, and only if, the ifup and
ifdown commands are present in its search dictionary or the
network-scripts configuration files are found. This patch adds a check
for the NetworkManager configuration file as well.
This solution is based on the use of the plugin 'ifcfg-rh' present in
Network-Manager and is designed to support Fedora 29 or other
distributions that also replaced network-scripts by Network-Manager.
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
cloudinit/net/sysconfig.py | 36 +++++++++++++++++++
tests/unittests/test_net.py | 71 +++++++++++++++++++++++++++++++++++++
2 files changed, 107 insertions(+)
diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
index ae0554ef..dc1815d9 100644
--- a/cloudinit/net/sysconfig.py
+++ b/cloudinit/net/sysconfig.py
@@ -10,11 +10,14 @@ from cloudinit.distros.parsers import resolv_conf
from cloudinit import log as logging
from cloudinit import util
+from configobj import ConfigObj
+
from . import renderer
from .network_state import (
is_ipv6_addr, net_prefix_to_ipv4_mask, subnet_is_ipv6)
LOG = logging.getLogger(__name__)
+NM_CFG_FILE = "/etc/NetworkManager/NetworkManager.conf"
def _make_header(sep='#'):
@@ -46,6 +49,24 @@ def _quote_value(value):
return value
+def enable_ifcfg_rh(path):
+ """Add ifcfg-rh to NetworkManager.cfg plugins if main section is present"""
+ config = ConfigObj(path)
+ if 'main' in config:
+ if 'plugins' in config['main']:
+ if 'ifcfg-rh' in config['main']['plugins']:
+ return
+ else:
+ config['main']['plugins'] = []
+
+ if isinstance(config['main']['plugins'], list):
+ config['main']['plugins'].append('ifcfg-rh')
+ else:
+ config['main']['plugins'] = [config['main']['plugins'], 'ifcfg-rh']
+ config.write()
+ LOG.debug('Enabled ifcfg-rh NetworkManager plugins')
+
+
class ConfigMap(object):
"""Sysconfig like dictionary object."""
@@ -656,6 +677,8 @@ class Renderer(renderer.Renderer):
netrules_content = self._render_persistent_net(network_state)
netrules_path = util.target_path(target, self.netrules_path)
util.write_file(netrules_path, netrules_content, file_mode)
+ if available_nm(target=target):
+ enable_ifcfg_rh(util.target_path(target, path=NM_CFG_FILE))
sysconfig_path = util.target_path(target, templates.get('control'))
# Distros configuring /etc/sysconfig/network as a file e.g. Centos
@@ -670,6 +693,13 @@ class Renderer(renderer.Renderer):
def available(target=None):
+ sysconfig = available_sysconfig(target=target)
+ nm = available_nm(target=target)
+
+ return any([nm, sysconfig])
+
+
+def available_sysconfig(target=None):
expected = ['ifup', 'ifdown']
search = ['/sbin', '/usr/sbin']
for p in expected:
@@ -685,4 +715,10 @@ def available(target=None):
return True
+def available_nm(target=None):
+ if not os.path.isfile(util.target_path(target, path=NM_CFG_FILE)):
+ return False
+ return True
+
+
# vi: ts=4 expandtab
diff --git a/tests/unittests/test_net.py b/tests/unittests/test_net.py
index 8bcafe08..526a30ed 100644
--- a/tests/unittests/test_net.py
+++ b/tests/unittests/test_net.py
@@ -22,6 +22,7 @@ import os
import textwrap
import yaml
+
DHCP_CONTENT_1 = """
DEVICE='eth0'
PROTO='dhcp'
@@ -1854,6 +1855,7 @@ class TestRhelSysConfigRendering(CiTestCase):
with_logs = True
+ nm_cfg_file = "/etc/NetworkManager/NetworkManager.conf"
scripts_dir = '/etc/sysconfig/network-scripts'
header = ('# Created by cloud-init on instance boot automatically, '
'do not edit.\n#\n')
@@ -2497,6 +2499,75 @@ iface eth0 inet dhcp
self.assertEqual(
expected, dir2dict(tmp_dir)['/etc/network/interfaces'])
+ def test_check_ifcfg_rh(self):
+ """ifcfg-rh plugin is added NetworkManager.conf if conf present."""
+ render_dir = self.tmp_dir()
+ nm_cfg = util.target_path(render_dir, path=self.nm_cfg_file)
+ util.ensure_dir(os.path.dirname(nm_cfg))
+
+ # write a template nm.conf, note plugins is a list here
+ with open(nm_cfg, 'w') as fh:
+ fh.write('# test_check_ifcfg_rh\n[main]\nplugins=foo,bar\n')
+ self.assertTrue(os.path.exists(nm_cfg))
+
+ # render and read
+ entry = NETWORK_CONFIGS['small']
+ found = self._render_and_read(network_config=yaml.load(entry['yaml']),
+ dir=render_dir)
+ self._compare_files_to_expected(entry[self.expected_name], found)
+ self._assert_headers(found)
+
+ # check ifcfg-rh is in the 'plugins' list
+ config = sysconfig.ConfigObj(nm_cfg)
+ self.assertIn('ifcfg-rh', config['main']['plugins'])
+
+ def test_check_ifcfg_rh_plugins_string(self):
+ """ifcfg-rh plugin is append when plugins is a string."""
+ render_dir = self.tmp_path("render")
+ os.makedirs(render_dir)
+ nm_cfg = util.target_path(render_dir, path=self.nm_cfg_file)
+ util.ensure_dir(os.path.dirname(nm_cfg))
+
+ # write a template nm.conf, note plugins is a value here
+ util.write_file(nm_cfg, '# test_check_ifcfg_rh\n[main]\nplugins=foo\n')
+
+ # render and read
+ entry = NETWORK_CONFIGS['small']
+ found = self._render_and_read(network_config=yaml.load(entry['yaml']),
+ dir=render_dir)
+ self._compare_files_to_expected(entry[self.expected_name], found)
+ self._assert_headers(found)
+
+ # check raw content has plugin
+ nm_file_content = util.load_file(nm_cfg)
+ self.assertIn('ifcfg-rh', nm_file_content)
+
+ # check ifcfg-rh is in the 'plugins' list
+ config = sysconfig.ConfigObj(nm_cfg)
+ self.assertIn('ifcfg-rh', config['main']['plugins'])
+
+ def test_check_ifcfg_rh_plugins_no_plugins(self):
+ """enable_ifcfg_plugin creates plugins value if missing."""
+ render_dir = self.tmp_path("render")
+ os.makedirs(render_dir)
+ nm_cfg = util.target_path(render_dir, path=self.nm_cfg_file)
+ util.ensure_dir(os.path.dirname(nm_cfg))
+
+ # write a template nm.conf, note plugins is missing
+ util.write_file(nm_cfg, '# test_check_ifcfg_rh\n[main]\n')
+ self.assertTrue(os.path.exists(nm_cfg))
+
+ # render and read
+ entry = NETWORK_CONFIGS['small']
+ found = self._render_and_read(network_config=yaml.load(entry['yaml']),
+ dir=render_dir)
+ self._compare_files_to_expected(entry[self.expected_name], found)
+ self._assert_headers(found)
+
+ # check ifcfg-rh is in the 'plugins' list
+ config = sysconfig.ConfigObj(nm_cfg)
+ self.assertIn('ifcfg-rh', config['main']['plugins'])
+
class TestNetplanNetRendering(CiTestCase):
--
2.20.1

@@ -1,296 +0,0 @@
From 2e070086275341dfceb6d5b1e12f06f22e7bbfcd Mon Sep 17 00:00:00 2001
From: Eduardo Otubo <otubo@redhat.com>
Date: Wed, 23 Jan 2019 12:30:21 +0100
Subject: net: Wait for dhclient to daemonize before reading lease file
RH-Author: Eduardo Otubo <otubo@redhat.com>
Message-id: <20190123123021.32708-1-otubo@redhat.com>
Patchwork-id: 84095
O-Subject: [RHEL-7.7 cloud-init PATCH] net: Wait for dhclient to daemonize before reading lease file
Bugzilla: 1632967
RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1632967
Brew: https://bugzilla.redhat.com/show_bug.cgi?id=1632967
Tested: Me and upstream
commit fdadcb5fae51f4e6799314ab98e3aec56c79b17c
Author: Jason Zions <jasonzio@microsoft.com>
Date: Tue Jan 15 21:37:17 2019 +0000
net: Wait for dhclient to daemonize before reading lease file
cloud-init uses dhclient to fetch the DHCP lease so it can extract
DHCP options. dhclient creates the leasefile, then writes to it;
simply waiting for the leasefile to appear creates a race between
dhclient and cloud-init. Instead, wait for dhclient to be parented by
init. At that point, we know it has written to the leasefile, so it's
safe to copy the file and kill the process.
cloud-init creates a temporary directory in which to execute dhclient,
and deletes that directory after it has killed the process. If
cloud-init abandons waiting for dhclient to daemonize, it will still
attempt to delete the temporary directory, but will not report an
exception should that attempt fail.
LP: #1794399
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
cloudinit/net/dhcp.py | 44 +++++++++++++++++++++---------
cloudinit/net/tests/test_dhcp.py | 15 ++++++++--
cloudinit/temp_utils.py | 4 +--
cloudinit/tests/test_temp_utils.py | 18 +++++++++++-
cloudinit/util.py | 16 ++++++++++-
tests/unittests/test_util.py | 6 ++++
6 files changed, 83 insertions(+), 20 deletions(-)
diff --git a/cloudinit/net/dhcp.py b/cloudinit/net/dhcp.py
index 0db991db..c98a97cd 100644
--- a/cloudinit/net/dhcp.py
+++ b/cloudinit/net/dhcp.py
@@ -9,6 +9,7 @@ import logging
import os
import re
import signal
+import time
from cloudinit.net import (
EphemeralIPv4Network, find_fallback_nic, get_devicelist,
@@ -127,7 +128,9 @@ def maybe_perform_dhcp_discovery(nic=None):
if not dhclient_path:
LOG.debug('Skip dhclient configuration: No dhclient command found.')
return []
- with temp_utils.tempdir(prefix='cloud-init-dhcp-', needs_exe=True) as tdir:
+ with temp_utils.tempdir(rmtree_ignore_errors=True,
+ prefix='cloud-init-dhcp-',
+ needs_exe=True) as tdir:
# Use /var/tmp because /run/cloud-init/tmp is mounted noexec
return dhcp_discovery(dhclient_path, nic, tdir)
@@ -195,24 +198,39 @@ def dhcp_discovery(dhclient_cmd_path, interface, cleandir):
'-pf', pid_file, interface, '-sf', '/bin/true']
util.subp(cmd, capture=True)
- # dhclient doesn't write a pid file until after it forks when it gets a
- # proper lease response. Since cleandir is a temp directory that gets
- # removed, we need to wait for that pidfile creation before the
- # cleandir is removed, otherwise we get FileNotFound errors.
+ # Wait for pid file and lease file to appear, and for the process
+ # named by the pid file to daemonize (have pid 1 as its parent). If we
+ # try to read the lease file before daemonization happens, we might try
+ # to read it before the dhclient has actually written it. We also have
+ # to wait until the dhclient has become a daemon so we can be sure to
+ # kill the correct process, thus freeing cleandir to be deleted back
+ # up the callstack.
missing = util.wait_for_files(
[pid_file, lease_file], maxwait=5, naplen=0.01)
if missing:
LOG.warning("dhclient did not produce expected files: %s",
', '.join(os.path.basename(f) for f in missing))
return []
- pid_content = util.load_file(pid_file).strip()
- try:
- pid = int(pid_content)
- except ValueError:
- LOG.debug(
- "pid file contains non-integer content '%s'", pid_content)
- else:
- os.kill(pid, signal.SIGKILL)
+
+ ppid = 'unknown'
+ for _ in range(0, 1000):
+ pid_content = util.load_file(pid_file).strip()
+ try:
+ pid = int(pid_content)
+ except ValueError:
+ pass
+ else:
+ ppid = util.get_proc_ppid(pid)
+ if ppid == 1:
+ LOG.debug('killing dhclient with pid=%s', pid)
+ os.kill(pid, signal.SIGKILL)
+ return parse_dhcp_lease_file(lease_file)
+ time.sleep(0.01)
+
+ LOG.error(
+ 'dhclient(pid=%s, parentpid=%s) failed to daemonize after %s seconds',
+ pid_content, ppid, 0.01 * 1000
+ )
return parse_dhcp_lease_file(lease_file)
diff --git a/cloudinit/net/tests/test_dhcp.py b/cloudinit/net/tests/test_dhcp.py
index cd3e7328..79e8842f 100644
--- a/cloudinit/net/tests/test_dhcp.py
+++ b/cloudinit/net/tests/test_dhcp.py
@@ -145,16 +145,20 @@ class TestDHCPDiscoveryClean(CiTestCase):
'subnet-mask': '255.255.255.0', 'routers': '192.168.2.1'}],
dhcp_discovery(dhclient_script, 'eth9', tmpdir))
self.assertIn(
- "pid file contains non-integer content ''", self.logs.getvalue())
+ "dhclient(pid=, parentpid=unknown) failed "
+ "to daemonize after 10.0 seconds",
+ self.logs.getvalue())
m_kill.assert_not_called()
+ @mock.patch('cloudinit.net.dhcp.util.get_proc_ppid')
@mock.patch('cloudinit.net.dhcp.os.kill')
@mock.patch('cloudinit.net.dhcp.util.wait_for_files')
@mock.patch('cloudinit.net.dhcp.util.subp')
def test_dhcp_discovery_run_in_sandbox_waits_on_lease_and_pid(self,
m_subp,
m_wait,
- m_kill):
+ m_kill,
+ m_getppid):
"""dhcp_discovery waits for the presence of pidfile and dhcp.leases."""
tmpdir = self.tmp_dir()
dhclient_script = os.path.join(tmpdir, 'dhclient.orig')
@@ -164,6 +168,7 @@ class TestDHCPDiscoveryClean(CiTestCase):
pidfile = self.tmp_path('dhclient.pid', tmpdir)
leasefile = self.tmp_path('dhcp.leases', tmpdir)
m_wait.return_value = [pidfile] # Return the missing pidfile wait for
+ m_getppid.return_value = 1 # Indicate that dhclient has daemonized
self.assertEqual([], dhcp_discovery(dhclient_script, 'eth9', tmpdir))
self.assertEqual(
mock.call([pidfile, leasefile], maxwait=5, naplen=0.01),
@@ -173,9 +178,10 @@ class TestDHCPDiscoveryClean(CiTestCase):
self.logs.getvalue())
m_kill.assert_not_called()
+ @mock.patch('cloudinit.net.dhcp.util.get_proc_ppid')
@mock.patch('cloudinit.net.dhcp.os.kill')
@mock.patch('cloudinit.net.dhcp.util.subp')
- def test_dhcp_discovery_run_in_sandbox(self, m_subp, m_kill):
+ def test_dhcp_discovery_run_in_sandbox(self, m_subp, m_kill, m_getppid):
"""dhcp_discovery brings up the interface and runs dhclient.
It also returns the parsed dhcp.leases file generated in the sandbox.
@@ -197,6 +203,7 @@ class TestDHCPDiscoveryClean(CiTestCase):
pid_file = os.path.join(tmpdir, 'dhclient.pid')
my_pid = 1
write_file(pid_file, "%d\n" % my_pid)
+ m_getppid.return_value = 1 # Indicate that dhclient has daemonized
self.assertItemsEqual(
[{'interface': 'eth9', 'fixed-address': '192.168.2.74',
@@ -355,3 +362,5 @@ class TestEphemeralDhcpNoNetworkSetup(HttprettyTestCase):
self.assertEqual(fake_lease, lease)
# Ensure that dhcp discovery occurs
m_dhcp.called_once_with()
+
+# vi: ts=4 expandtab
diff --git a/cloudinit/temp_utils.py b/cloudinit/temp_utils.py
index c98a1b53..346276ec 100644
--- a/cloudinit/temp_utils.py
+++ b/cloudinit/temp_utils.py
@@ -81,7 +81,7 @@ def ExtendedTemporaryFile(**kwargs):
@contextlib.contextmanager
-def tempdir(**kwargs):
+def tempdir(rmtree_ignore_errors=False, **kwargs):
# This seems like it was only added in python 3.2
# Make it since its useful...
# See: http://bugs.python.org/file12970/tempdir.patch
@@ -89,7 +89,7 @@ def tempdir(**kwargs):
try:
yield tdir
finally:
- shutil.rmtree(tdir)
+ shutil.rmtree(tdir, ignore_errors=rmtree_ignore_errors)
def mkdtemp(**kwargs):
diff --git a/cloudinit/tests/test_temp_utils.py b/cloudinit/tests/test_temp_utils.py
index ffbb92cd..4a52ef89 100644
--- a/cloudinit/tests/test_temp_utils.py
+++ b/cloudinit/tests/test_temp_utils.py
@@ -2,8 +2,9 @@
"""Tests for cloudinit.temp_utils"""
-from cloudinit.temp_utils import mkdtemp, mkstemp
+from cloudinit.temp_utils import mkdtemp, mkstemp, tempdir
from cloudinit.tests.helpers import CiTestCase, wrap_and_call
+import os
class TestTempUtils(CiTestCase):
@@ -98,4 +99,19 @@ class TestTempUtils(CiTestCase):
self.assertEqual('/fake/return/path', retval)
self.assertEqual([{'dir': '/run/cloud-init/tmp'}], calls)
+ def test_tempdir_error_suppression(self):
+ """test tempdir suppresses errors during directory removal."""
+
+ with self.assertRaises(OSError):
+ with tempdir(prefix='cloud-init-dhcp-') as tdir:
+ os.rmdir(tdir)
+ # As a result, the directory is already gone,
+ # so shutil.rmtree should raise OSError
+
+ with tempdir(rmtree_ignore_errors=True,
+ prefix='cloud-init-dhcp-') as tdir:
+ os.rmdir(tdir)
+ # Since the directory is already gone, shutil.rmtree would raise
+ # OSError, but we suppress that
+
# vi: ts=4 expandtab
diff --git a/cloudinit/util.py b/cloudinit/util.py
index 7800f7bc..a84112a9 100644
--- a/cloudinit/util.py
+++ b/cloudinit/util.py
@@ -2861,7 +2861,6 @@ def mount_is_read_write(mount_point):
mount_opts = result[-1].split(',')
return mount_opts[0] == 'rw'
-
def udevadm_settle(exists=None, timeout=None):
"""Invoke udevadm settle with optional exists and timeout parameters"""
settle_cmd = ["udevadm", "settle"]
@@ -2875,5 +2874,20 @@ def udevadm_settle(exists=None, timeout=None):
return subp(settle_cmd)
+def get_proc_ppid(pid):
+ """
+ Return the parent pid of a process.
+ """
+ ppid = 0
+ try:
+ contents = load_file("/proc/%s/stat" % pid, quiet=True)
+ except IOError as e:
+ LOG.warning('Failed to load /proc/%s/stat. %s', pid, e)
+ if contents:
+ parts = contents.split(" ", 4)
+ # man proc says
+ # ppid %d (4) The PID of the parent.
+ ppid = int(parts[3])
+ return ppid
# vi: ts=4 expandtab
diff --git a/tests/unittests/test_util.py b/tests/unittests/test_util.py
index 5a14479a..8aebcd62 100644
--- a/tests/unittests/test_util.py
+++ b/tests/unittests/test_util.py
@@ -1114,6 +1114,12 @@ class TestLoadShellContent(helpers.TestCase):
'key3="val3 #tricky"',
''])))
+ def test_get_proc_ppid(self):
+ """get_proc_ppid returns correct parent pid value."""
+ my_pid = os.getpid()
+ my_ppid = os.getppid()
+ self.assertEqual(my_ppid, util.get_proc_ppid(my_pid))
+
class TestGetProcEnv(helpers.TestCase):
"""test get_proc_env."""
--
2.20.1

@@ -1,90 +0,0 @@
From 8a3bf53398f312b46ed4f304df4c66d061e612c7 Mon Sep 17 00:00:00 2001
From: Eduardo Otubo <otubo@redhat.com>
Date: Thu, 28 Feb 2019 12:38:36 +0100
Subject: cloud-init-per: don't use dashes in sem names
RH-Author: Eduardo Otubo <otubo@redhat.com>
Message-id: <20190228123836.17979-1-otubo@redhat.com>
Patchwork-id: 84743
O-Subject: [RHEL-7.7 cloud-init PATCH] This is to fix https://bugs.launchpad.net/cloud-init/+bug/1812676
Bugzilla: 1664876
RH-Acked-by: Mohammed Gamal <mgamal@redhat.com>
RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com>
From: Vitaly Kuznetsov <vkuznets@redhat.com>
It was found that when there is a dash in cloud-init-per command
name and cloud-init-per is executed through cloud-init's bootcmd, e.g:
bootcmd:
- cloud-init-per instance mycmd-bootcmd /usr/bin/mycmd
the command is executed on each boot. However, running the same
cloud-init-per command manually after boot doesn't reveal the issue. Turns
out the issue comes from 'migrator' cloud-init module which renames all
files in /var/lib/cloud/instance/sem/ replacing dashes with underscores. As
migrator runs before bootcmd it renames
/var/lib/cloud/instance/sem/bootper.mycmd-bootcmd.instance
to
/var/lib/cloud/instance/sem/bootper.mycmd_bootcmd.instance
so cloud-init-per doesn't see it and thinks that the command was never run
before. On next boot the sequence repeats.
There are multiple ways to resolve the issue. This patch takes the
following approach: 'canonicalize' sem names by replacing dashes with
underscores (this is consistent with post-'migrator' contents of
/var/lib/cloud/instance/sem/). We, however, need to be careful: in case
someone had a command with dashes before and had the migrator module enabled,
we need to see the old sem file (or the command will run again and this can
be as bad as formatting a partition!) so we add a small 'migrator' part to
cloud-init-per script itself checking for legacy sem names.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
commit 9cf9d8cdd3a8fd7d4d425f7051122d0ac8af2bbd
Author: Vitaly Kuznetsov <vkuznets@redhat.com>
Date: Mon Feb 18 22:55:49 2019 +0000
This is to fix https://bugs.launchpad.net/cloud-init/+bug/1812676
Resolves: rhbz#1664876
X-downstream-only: false
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
tools/cloud-init-per | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/tools/cloud-init-per b/tools/cloud-init-per
index 7d6754b6..eae3e93f 100755
--- a/tools/cloud-init-per
+++ b/tools/cloud-init-per
@@ -38,7 +38,7 @@ fi
[ "$1" = "-h" -o "$1" = "--help" ] && { Usage ; exit 0; }
[ $# -ge 3 ] || { Usage 1>&2; exit 1; }
freq=$1
-name=$2
+name=${2/-/_}
shift 2;
[ "${name#*/}" = "${name}" ] || fail "name cannot contain a /"
@@ -53,6 +53,12 @@ esac
[ -d "${sem%/*}" ] || mkdir -p "${sem%/*}" ||
fail "failed to make directory for ${sem}"
+# Rename legacy sem files with dashes in their names. Do not overwrite existing
+# sem files to prevent clobbering those which may have been created from calls
+# outside of cloud-init.
+sem_legacy="${sem/_/-}"
+[ "$sem" != "$sem_legacy" -a -e "$sem_legacy" ] && mv -n "$sem_legacy" "$sem"
+
[ "$freq" != "always" -a -e "$sem" ] && exit 0
"$@"
ret=$?
--
2.20.1
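Note that the bash expansions in the patch above, `${2/-/_}` and `${sem/_/-}`, replace only the first occurrence of the character (a double slash would replace all). Rendered in Python for clarity (function names are illustrative, not from the tree):

```python
def canonical_sem_name(name):
    """Mirror ${2/-/_}: replace the FIRST dash with an underscore."""
    return name.replace('-', '_', 1)

def legacy_sem_name(sem):
    """Mirror ${sem/_/-}: recover the pre-migrator name with its first
    underscore turned back into a dash, for the mv -n fallback."""
    return sem.replace('_', '-', 1)
```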

@@ -1,572 +0,0 @@
From 8e168f17b0c138d589f7b3bea4a4b6fcc8e5e03f Mon Sep 17 00:00:00 2001
From: Eduardo Otubo <otubo@redhat.com>
Date: Wed, 6 Mar 2019 14:20:18 +0100
Subject: azure: Filter list of ssh keys pulled from fabric
RH-Author: Eduardo Otubo <otubo@redhat.com>
Message-id: <20190306142018.8902-1-otubo@redhat.com>
Patchwork-id: 84807
O-Subject: [RHEL-7.7 cloud-init PATCH] azure: Filter list of ssh keys pulled from fabric
Bugzilla: 1684040
RH-Acked-by: Cathy Avery <cavery@redhat.com>
RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com>
From: "Jason Zions (MSFT)" <jasonzio@microsoft.com>
commit 34f54360fcc1e0f805002a0b639d0a84eb2cb8ee
Author: Jason Zions (MSFT) <jasonzio@microsoft.com>
Date: Fri Feb 22 13:26:31 2019 +0000
azure: Filter list of ssh keys pulled from fabric
The Azure data source is expected to expose a list of
ssh keys for the user-to-be-provisioned in the crawled
metadata. When configured to use the __builtin__ agent
this list is built by the WALinuxAgentShim. The shim
retrieves the full set of certificates and public keys
exposed to the VM from the wireserver, extracts any
ssh keys it can, and returns that list.
This fix reduces that list of ssh keys to just the
ones whose fingerprints appear in the "administrative
user" section of the ovf-env.xml file. The Azure
control plane exposes other ssh keys to the VM for
other reasons, but those should not be added to the
authorized_keys file for the provisioned user.
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
cloudinit/sources/DataSourceAzure.py | 13 +-
cloudinit/sources/helpers/azure.py | 109 +++++++++----
.../azure/parse_certificates_fingerprints | 4 +
tests/data/azure/parse_certificates_pem | 152 ++++++++++++++++++
tests/data/azure/pubkey_extract_cert | 13 ++
tests/data/azure/pubkey_extract_ssh_key | 1 +
.../test_datasource/test_azure_helper.py | 71 +++++++-
7 files changed, 322 insertions(+), 41 deletions(-)
create mode 100644 tests/data/azure/parse_certificates_fingerprints
create mode 100644 tests/data/azure/parse_certificates_pem
create mode 100644 tests/data/azure/pubkey_extract_cert
create mode 100644 tests/data/azure/pubkey_extract_ssh_key
diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
index 7dbeb04c..2062ca5d 100644
--- a/cloudinit/sources/DataSourceAzure.py
+++ b/cloudinit/sources/DataSourceAzure.py
@@ -627,9 +627,11 @@ class DataSourceAzure(sources.DataSource):
if self.ds_cfg['agent_command'] == AGENT_START_BUILTIN:
self.bounce_network_with_azure_hostname()
+ pubkey_info = self.cfg.get('_pubkeys', None)
metadata_func = partial(get_metadata_from_fabric,
fallback_lease_file=self.
- dhclient_lease_file)
+ dhclient_lease_file,
+ pubkey_info=pubkey_info)
else:
metadata_func = self.get_metadata_from_agent
@@ -642,6 +644,7 @@ class DataSourceAzure(sources.DataSource):
"Error communicating with Azure fabric; You may experience."
"connectivity issues.", exc_info=True)
return False
+
util.del_file(REPORTED_READY_MARKER_FILE)
util.del_file(REPROVISION_MARKER_FILE)
return fabric_data
@@ -909,13 +912,15 @@ def find_child(node, filter_func):
def load_azure_ovf_pubkeys(sshnode):
# This parses a 'SSH' node formatted like below, and returns
# an array of dicts.
- # [{'fp': '6BE7A7C3C8A8F4B123CCA5D0C2F1BE4CA7B63ED7',
- # 'path': 'where/to/go'}]
+ # [{'fingerprint': '6BE7A7C3C8A8F4B123CCA5D0C2F1BE4CA7B63ED7',
+ # 'path': '/where/to/go'}]
#
# <SSH><PublicKeys>
- # <PublicKey><Fingerprint>ABC</FingerPrint><Path>/ABC</Path>
+ # <PublicKey><Fingerprint>ABC</FingerPrint><Path>/x/y/z</Path>
# ...
# </PublicKeys></SSH>
+ # Under some circumstances, there may be a <Value> element along with the
+ # Fingerprint and Path. Pass those along if they appear.
results = find_child(sshnode, lambda n: n.localName == "PublicKeys")
if len(results) == 0:
return []
diff --git a/cloudinit/sources/helpers/azure.py b/cloudinit/sources/helpers/azure.py
index e5696b1f..2829dd20 100644
--- a/cloudinit/sources/helpers/azure.py
+++ b/cloudinit/sources/helpers/azure.py
@@ -138,9 +138,36 @@ class OpenSSLManager(object):
self.certificate = certificate
LOG.debug('New certificate generated.')
- def parse_certificates(self, certificates_xml):
- tag = ElementTree.fromstring(certificates_xml).find(
- './/Data')
+ @staticmethod
+ def _run_x509_action(action, cert):
+ cmd = ['openssl', 'x509', '-noout', action]
+ result, _ = util.subp(cmd, data=cert)
+ return result
+
+ def _get_ssh_key_from_cert(self, certificate):
+ pub_key = self._run_x509_action('-pubkey', certificate)
+ keygen_cmd = ['ssh-keygen', '-i', '-m', 'PKCS8', '-f', '/dev/stdin']
+ ssh_key, _ = util.subp(keygen_cmd, data=pub_key)
+ return ssh_key
+
+ def _get_fingerprint_from_cert(self, certificate):
+ """openssl x509 formats fingerprints as so:
+ 'SHA1 Fingerprint=07:3E:19:D1:4D:1C:79:92:24:C6:A0:FD:8D:DA:\
+ B6:A8:BF:27:D4:73\n'
+
+ Azure control plane passes that fingerprint as so:
+ '073E19D14D1C799224C6A0FD8DDAB6A8BF27D473'
+ """
+ raw_fp = self._run_x509_action('-fingerprint', certificate)
+ eq = raw_fp.find('=')
+ octets = raw_fp[eq+1:-1].split(':')
+ return ''.join(octets)
+
+ def _decrypt_certs_from_xml(self, certificates_xml):
+ """Decrypt the certificates XML document using the our private key;
+ return the list of certs and private keys contained in the doc.
+ """
+ tag = ElementTree.fromstring(certificates_xml).find('.//Data')
certificates_content = tag.text
lines = [
b'MIME-Version: 1.0',
@@ -151,32 +178,30 @@ class OpenSSLManager(object):
certificates_content.encode('utf-8'),
]
with cd(self.tmpdir):
- with open('Certificates.p7m', 'wb') as f:
- f.write(b'\n'.join(lines))
out, _ = util.subp(
- 'openssl cms -decrypt -in Certificates.p7m -inkey'
+ 'openssl cms -decrypt -in /dev/stdin -inkey'
' {private_key} -recip {certificate} | openssl pkcs12 -nodes'
' -password pass:'.format(**self.certificate_names),
- shell=True)
- private_keys, certificates = [], []
+ shell=True, data=b'\n'.join(lines))
+ return out
+
+ def parse_certificates(self, certificates_xml):
+ """Given the Certificates XML document, return a dictionary of
+ fingerprints and associated SSH keys derived from the certs."""
+ out = self._decrypt_certs_from_xml(certificates_xml)
current = []
+ keys = {}
for line in out.splitlines():
current.append(line)
if re.match(r'[-]+END .*?KEY[-]+$', line):
- private_keys.append('\n'.join(current))
+ # ignore private_keys
current = []
elif re.match(r'[-]+END .*?CERTIFICATE[-]+$', line):
- certificates.append('\n'.join(current))
+ certificate = '\n'.join(current)
+ ssh_key = self._get_ssh_key_from_cert(certificate)
+ fingerprint = self._get_fingerprint_from_cert(certificate)
+ keys[fingerprint] = ssh_key
current = []
- keys = []
- for certificate in certificates:
- with cd(self.tmpdir):
- public_key, _ = util.subp(
- 'openssl x509 -noout -pubkey |'
- 'ssh-keygen -i -m PKCS8 -f /dev/stdin',
- data=certificate,
- shell=True)
- keys.append(public_key)
return keys
@@ -206,7 +231,6 @@ class WALinuxAgentShim(object):
self.dhcpoptions = dhcp_options
self._endpoint = None
self.openssl_manager = None
- self.values = {}
self.lease_file = fallback_lease_file
def clean_up(self):
@@ -328,8 +352,9 @@ class WALinuxAgentShim(object):
LOG.debug('Azure endpoint found at %s', endpoint_ip_address)
return endpoint_ip_address
- def register_with_azure_and_fetch_data(self):
- self.openssl_manager = OpenSSLManager()
+ def register_with_azure_and_fetch_data(self, pubkey_info=None):
+ if self.openssl_manager is None:
+ self.openssl_manager = OpenSSLManager()
http_client = AzureEndpointHttpClient(self.openssl_manager.certificate)
LOG.info('Registering with Azure...')
attempts = 0
@@ -347,16 +372,37 @@ class WALinuxAgentShim(object):
attempts += 1
LOG.debug('Successfully fetched GoalState XML.')
goal_state = GoalState(response.contents, http_client)
- public_keys = []
- if goal_state.certificates_xml is not None:
+ ssh_keys = []
+ if goal_state.certificates_xml is not None and pubkey_info is not None:
LOG.debug('Certificate XML found; parsing out public keys.')
- public_keys = self.openssl_manager.parse_certificates(
+ keys_by_fingerprint = self.openssl_manager.parse_certificates(
goal_state.certificates_xml)
- data = {
- 'public-keys': public_keys,
- }
+ ssh_keys = self._filter_pubkeys(keys_by_fingerprint, pubkey_info)
self._report_ready(goal_state, http_client)
- return data
+ return {'public-keys': ssh_keys}
+
+ def _filter_pubkeys(self, keys_by_fingerprint, pubkey_info):
+ """cloud-init expects a straightforward array of keys to be dropped
+ into the user's authorized_keys file. Azure control plane exposes
+ multiple public keys to the VM via wireserver. Select just the
+ user's key(s) and return them, ignoring any other certs.
+ """
+ keys = []
+ for pubkey in pubkey_info:
+ if 'value' in pubkey and pubkey['value']:
+ keys.append(pubkey['value'])
+ elif 'fingerprint' in pubkey and pubkey['fingerprint']:
+ fingerprint = pubkey['fingerprint']
+ if fingerprint in keys_by_fingerprint:
+ keys.append(keys_by_fingerprint[fingerprint])
+ else:
+ LOG.warning("ovf-env.xml specified PublicKey fingerprint "
+ "%s not found in goalstate XML", fingerprint)
+ else:
+ LOG.warning("ovf-env.xml specified PublicKey with neither "
+ "value nor fingerprint: %s", pubkey)
+
+ return keys
def _report_ready(self, goal_state, http_client):
LOG.debug('Reporting ready to Azure fabric.')
@@ -373,11 +419,12 @@ class WALinuxAgentShim(object):
LOG.info('Reported ready to Azure fabric.')
-def get_metadata_from_fabric(fallback_lease_file=None, dhcp_opts=None):
+def get_metadata_from_fabric(fallback_lease_file=None, dhcp_opts=None,
+ pubkey_info=None):
shim = WALinuxAgentShim(fallback_lease_file=fallback_lease_file,
dhcp_options=dhcp_opts)
try:
- return shim.register_with_azure_and_fetch_data()
+ return shim.register_with_azure_and_fetch_data(pubkey_info=pubkey_info)
finally:
shim.clean_up()
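The `_filter_pubkeys` selection added above can be sketched on its own — a simplified, hypothetical Python rendering of the same logic (the function name and the sample fingerprints/keys below are illustrative, not real wireserver data):

```python
def filter_pubkeys(keys_by_fingerprint, pubkey_info):
    """Select only the user's keys: prefer an inline 'value' from
    ovf-env.xml, else look the 'fingerprint' up in the map parsed
    from the goalstate certificates."""
    keys = []
    for pubkey in pubkey_info:
        if pubkey.get('value'):
            # key material supplied directly in ovf-env.xml
            keys.append(pubkey['value'])
        elif pubkey.get('fingerprint'):
            fp = pubkey['fingerprint']
            if fp in keys_by_fingerprint:
                keys.append(keys_by_fingerprint[fp])
    return keys

certs = {'fp1': 'expected-key', 'fp2': 'should-not-be-found'}
ovf_info = [{'fingerprint': 'fp1', 'path': 'path1'},
            {'path': 'path3', 'value': 'inline-key'}]
print(filter_pubkeys(certs, ovf_info))  # ['expected-key', 'inline-key']
```

As in the patch, certificates exposed by the control plane that no pubkey entry references ('fp2' here) never reach authorized_keys.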
diff --git a/tests/data/azure/parse_certificates_fingerprints b/tests/data/azure/parse_certificates_fingerprints
new file mode 100644
index 00000000..f7293c56
--- /dev/null
+++ b/tests/data/azure/parse_certificates_fingerprints
@@ -0,0 +1,4 @@
+ECEDEB3B8488D31AF3BC4CCED493F64B7D27D7B1
+073E19D14D1C799224C6A0FD8DDAB6A8BF27D473
+4C16E7FAD6297D74A9B25EB8F0A12808CEBE293E
+929130695289B450FE45DCD5F6EF0CDE69865867
diff --git a/tests/data/azure/parse_certificates_pem b/tests/data/azure/parse_certificates_pem
new file mode 100644
index 00000000..3521ea3a
--- /dev/null
+++ b/tests/data/azure/parse_certificates_pem
@@ -0,0 +1,152 @@
+Bag Attributes
+ localKeyID: 01 00 00 00
+ Microsoft CSP Name: Microsoft Enhanced Cryptographic Provider v1.0
+Key Attributes
+ X509v3 Key Usage: 10
+-----BEGIN PRIVATE KEY-----
+MIIEwAIBADANBgkqhkiG9w0BAQEFAASCBKowggSmAgEAAoIBAQDlEe5fUqwdrQTP
+W2oVlGK2f31q/8ULT8KmOTyUvL0RPdJQ69vvHOc5Q2CKg2eviHC2LWhF8WmpnZj6
+61RL0GeFGizwvU8Moebw5p3oqdcgoGpHVtxf+mr4QcWF58/Fwez0dA4hcsimVNBz
+eNpBBUIKNBMTBG+4d6hcQBUAGKUdGRcCGEyTqXLU0MgHjxC9JgVqWJl+X2LcAGj5
+7J+tGYGTLzKJmeCeGVNN5ZtJ0T85MYHCKQk1/FElK+Kq5akovXffQHjlnCPcx0NJ
+47NBjlPaFp2gjnAChn79bT4iCjOFZ9avWpqRpeU517UCnY7djOr3fuod/MSQyh3L
+Wuem1tWBAgMBAAECggEBAM4ZXQRs6Kjmo95BHGiAEnSqrlgX+dycjcBq3QPh8KZT
+nifqnf48XhnackENy7tWIjr3DctoUq4mOp8AHt77ijhqfaa4XSg7fwKeK9NLBGC5
+lAXNtAey0o2894/sKrd+LMkgphoYIUnuI4LRaGV56potkj/ZDP/GwTcG/R4SDnTn
+C1Nb05PNTAPQtPZrgPo7TdM6gGsTnFbVrYHQLyg2Sq/osHfF15YohB01esRLCAwb
+EF8JkRC4hWIZoV7BsyQ39232zAJQGGla7+wKFs3kObwh3VnFkQpT94KZnNiZuEfG
+x5pW4Pn3gXgNsftscXsaNe/M9mYZqo//Qw7NvUIvAvECgYEA9AVveyK0HOA06fhh
++3hUWdvw7Pbrl+e06jO9+bT1RjQMbHKyI60DZyVGuAySN86iChJRoJr5c6xj+iXU
+cR6BVJDjGH5t1tyiK2aYf6hEpK9/j8Z54UiVQ486zPP0PGfT2TO4lBLK+8AUmoaH
+gk21ul8QeVCeCJa/o+xEoRFvzcUCgYEA8FCbbvInrUtNY+9eKaUYoNodsgBVjm5X
+I0YPUL9D4d+1nvupHSV2NVmQl0w1RaJwrNTafrl5LkqjhQbmuWNta6QgfZzSA3LB
+lWXo1Mm0azKdcD3qMGbvn0Q3zU+yGNEgmB/Yju3/NtgYRG6tc+FCWRbPbiCnZWT8
+v3C2Y0XggI0CgYEA2/jCZBgGkTkzue5kNVJlh5OS/aog+pCvL6hxCtarfBuTT3ed
+Sje+p46cz3DVpmUpATc+Si8py7KNdYQAm/BJ2be6X+woi9Xcgo87zWgcaPCjZzId
+0I2jsIE/Gl6XvpRCDrxnGWRPgt3GNP4szbPLrDPiH9oie8+Y9eYYf7G+PZkCgYEA
+nRSzZOPYV4f/QDF4pVQLMykfe/iH9B/fyWjEHg3He19VQmRReIHCMMEoqBziPXAe
+onpHj8oAkeer1wpZyhhZr6CKtFDLXgGm09bXSC/IRMHC81klORovyzU2HHfZfCtG
+WOmIDnU2+0xpIGIP8sztJ3qnf97MTJSkOSadsWo9gwkCgYEAh5AQmJQmck88Dff2
+qIfJIX8d+BDw47BFJ89OmMFjGV8TNB+JO+AV4Vkodg4hxKpLqTFZTTUFgoYfy5u1
+1/BhAjpmCDCrzubCFhx+8VEoM2+2+MmnuQoMAm9+/mD/IidwRaARgXgvEmp7sfdt
+RyWd+p2lYvFkC/jORQtDMY4uW1o=
+-----END PRIVATE KEY-----
+Bag Attributes
+ localKeyID: 02 00 00 00
+ Microsoft CSP Name: Microsoft Strong Cryptographic Provider
+Key Attributes
+ X509v3 Key Usage: 10
+-----BEGIN PRIVATE KEY-----
+MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDlQhPrZwVQYFV4
+FBc0H1iTXYaznMpwZvEITKtXWACzTdguUderEVOkXW3HTi5HvC2rMayt0nqo3zcd
+x1eGiqdjpZQ/wMrkz9wNEM/nNMsXntEwxk0jCVNKB/jz6vf+BOtrSI01SritAGZW
+dpKoTUyztT8C2mA3X6D8g3m4Dd07ltnzxaDqAQIU5jBHh3f/Q14tlPNZWUIiqVTC
+gDxgAe7MDmfs9h3CInTBX1XM5J4UsLTL23/padgeSvP5YF5qr1+0c7Tdftxr2lwA
+N3rLkisf5EiLAToVyJJlgP/exo2I8DaIKe7DZzD3Y1CrurOpkcMKYu5kM1Htlbua
+tDkAa2oDAgMBAAECggEAOvdueS9DyiMlCKAeQb1IQosdQOh0l0ma+FgEABC2CWhd
+0LgjQTBRM6cGO+urcq7/jhdWQ1UuUG4tVn71z7itCi/F/Enhxc2C22d2GhFVpWsn
+giSXJYpZ/mIjkdVfWNo6FRuRmmHwMys1p0qTOS+8qUJWhSzW75csqJZGgeUrAI61
+LBV5F0SGR7dR2xZfy7PeDs9xpD0QivDt5DpsZWPaPvw4QlhdLgw6/YU1h9vtm6ci
+xLjnPRLZ7JMpcQHO8dUDl6FiEI7yQ11BDm253VQAVMddYRPQABn7SpEF8kD/aZVh
+2Clvz61Rz80SKjPUthMPLWMCRp7zB0xDMzt3/1i+tQKBgQD6Ar1/oD3eFnRnpi4u
+n/hdHJtMuXWNfUA4dspNjP6WGOid9sgIeUUdif1XyVJ+afITzvgpWc7nUWIqG2bQ
+WxJ/4q2rjUdvjNXTy1voVungR2jD5WLQ9DKeaTR0yCliWlx4JgdPG7qGI5MMwsr+
+R/PUoUUhGeEX+o/sCSieO3iUrQKBgQDqwBEMvIdhAv/CK2sG3fsKYX8rFT55ZNX3
+Tix9DbUGY3wQColNuI8U1nDlxE9U6VOfT9RPqKelBLCgbzB23kdEJnjSlnqlTxrx
+E+Hkndyf2ckdJAR3XNxoQ6SRLJNBsgoBj/z5tlfZE9/Jc+uh0mYy3e6g6XCVPBcz
+MgoIc+ofbwKBgQCGQhZ1hR30N+bHCozeaPW9OvGDIE0qcEqeh9xYDRFilXnF6pK9
+SjJ9jG7KR8jPLiHb1VebDSl5O1EV/6UU2vNyTc6pw7LLCryBgkGW4aWy1WZDXNnW
+EG1meGS9GghvUss5kmJ2bxOZmV0Mi0brisQ8OWagQf+JGvtS7BAt+Q3l+QKBgAb9
+8YQPmXiqPjPqVyW9Ntz4SnFeEJ5NApJ7IZgX8GxgSjGwHqbR+HEGchZl4ncE/Bii
+qBA3Vcb0fM5KgYcI19aPzsl28fA6ivLjRLcqfIfGVNcpW3iyq13vpdctHLW4N9QU
+FdTaOYOds+ysJziKq8CYG6NvUIshXw+HTgUybqbBAoGBAIIOqcmmtgOClAwipA17
+dAHsI9Sjk+J0+d4JU6o+5TsmhUfUKIjXf5+xqJkJcQZMEe5GhxcCuYkgFicvh4Hz
+kv2H/EU35LcJTqC6KTKZOWIbGcn1cqsvwm3GQJffYDiO8fRZSwCaif2J3F2lfH4Y
+R/fA67HXFSTT+OncdRpY1NOn
+-----END PRIVATE KEY-----
+Bag Attributes: <Empty Attributes>
+subject=/CN=CRP/OU=AzureRT/O=Microsoft Corporation/L=Redmond/ST=WA/C=US
+issuer=/CN=Root Agency
+-----BEGIN CERTIFICATE-----
+MIIB+TCCAeOgAwIBAgIBATANBgkqhkiG9w0BAQUFADAWMRQwEgYDVQQDDAtSb290
+IEFnZW5jeTAeFw0xOTAyMTUxOTA0MDRaFw0yOTAyMTUxOTE0MDRaMGwxDDAKBgNV
+BAMMA0NSUDEQMA4GA1UECwwHQXp1cmVSVDEeMBwGA1UECgwVTWljcm9zb2Z0IENv
+cnBvcmF0aW9uMRAwDgYDVQQHDAdSZWRtb25kMQswCQYDVQQIDAJXQTELMAkGA1UE
+BhMCVVMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDIlPjJXzrRih4C
+k/XsoI01oqo7IUxH3dA2F7vHGXQoIpKCp8Qe6Z6cFfdD8Uj+s+B1BX6hngwzIwjN
+jE/23X3SALVzJVWzX4Y/IEjbgsuao6sOyNyB18wIU9YzZkVGj68fmMlUw3LnhPbe
+eWkufZaJCaLyhQOwlRMbOcn48D6Ys8fccOyXNzpq3rH1OzeQpxS2M8zaJYP4/VZ/
+sf6KRpI7bP+QwyFvNKfhcaO9/gj4kMo9lVGjvDU20FW6g8UVNJCV9N4GO6mOcyqo
+OhuhVfjCNGgW7N1qi0TIVn0/MQM4l4dcT2R7Z/bV9fhMJLjGsy5A4TLAdRrhKUHT
+bzi9HyDvAgMBAAEwDQYJKoZIhvcNAQEFBQADAQA=
+-----END CERTIFICATE-----
+Bag Attributes
+ localKeyID: 01 00 00 00
+subject=/C=US/ST=WASHINGTON/L=Seattle/O=Microsoft/OU=Azure/CN=AnhVo/emailAddress=redacted@microsoft.com
+issuer=/C=US/ST=WASHINGTON/L=Seattle/O=Microsoft/OU=Azure/CN=AnhVo/emailAddress=redacted@microsoft.com
+-----BEGIN CERTIFICATE-----
+MIID7TCCAtWgAwIBAgIJALQS3yMg3R41MA0GCSqGSIb3DQEBCwUAMIGMMQswCQYD
+VQQGEwJVUzETMBEGA1UECAwKV0FTSElOR1RPTjEQMA4GA1UEBwwHU2VhdHRsZTES
+MBAGA1UECgwJTWljcm9zb2Z0MQ4wDAYDVQQLDAVBenVyZTEOMAwGA1UEAwwFQW5o
+Vm8xIjAgBgkqhkiG9w0BCQEWE2FuaHZvQG1pY3Jvc29mdC5jb20wHhcNMTkwMjE0
+MjMxMjQwWhcNMjExMTEwMjMxMjQwWjCBjDELMAkGA1UEBhMCVVMxEzARBgNVBAgM
+CldBU0hJTkdUT04xEDAOBgNVBAcMB1NlYXR0bGUxEjAQBgNVBAoMCU1pY3Jvc29m
+dDEOMAwGA1UECwwFQXp1cmUxDjAMBgNVBAMMBUFuaFZvMSIwIAYJKoZIhvcNAQkB
+FhNhbmh2b0BtaWNyb3NvZnQuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB
+CgKCAQEA5RHuX1KsHa0Ez1tqFZRitn99av/FC0/Cpjk8lLy9ET3SUOvb7xznOUNg
+ioNnr4hwti1oRfFpqZ2Y+utUS9BnhRos8L1PDKHm8Oad6KnXIKBqR1bcX/pq+EHF
+hefPxcHs9HQOIXLIplTQc3jaQQVCCjQTEwRvuHeoXEAVABilHRkXAhhMk6ly1NDI
+B48QvSYFaliZfl9i3ABo+eyfrRmBky8yiZngnhlTTeWbSdE/OTGBwikJNfxRJSvi
+quWpKL1330B45Zwj3MdDSeOzQY5T2hadoI5wAoZ+/W0+IgozhWfWr1qakaXlOde1
+Ap2O3Yzq937qHfzEkMody1rnptbVgQIDAQABo1AwTjAdBgNVHQ4EFgQUPvdgLiv3
+pAk4r0QTPZU3PFOZJvgwHwYDVR0jBBgwFoAUPvdgLiv3pAk4r0QTPZU3PFOZJvgw
+DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAVUHZT+h9+uCPLTEl5IDg
+kqd9WpzXA7PJd/V+7DeDDTkEd06FIKTWZLfxLVVDjQJnQqubQb//e0zGu1qKbXnX
+R7xqWabGU4eyPeUFWddmt1OHhxKLU3HbJNJJdL6XKiQtpGGUQt/mqNQ/DEr6hhNF
+im5I79iA8H/dXA2gyZrj5Rxea4mtsaYO0mfp1NrFtJpAh2Djy4B1lBXBIv4DWG9e
+mMEwzcLCOZj2cOMA6+mdLMUjYCvIRtnn5MKUHyZX5EmX79wsqMTvVpddlVLB9Kgz
+Qnvft9+SBWh9+F3ip7BsL6Q4Q9v8eHRbnP0ya7ddlgh64uwf9VOfZZdKCnwqudJP
+3g==
+-----END CERTIFICATE-----
+Bag Attributes
+ localKeyID: 02 00 00 00
+subject=/CN=/subscriptions/redacted/resourcegroups/redacted/providers/Microsoft.Compute/virtualMachines/redacted
+issuer=/CN=Microsoft.ManagedIdentity
+-----BEGIN CERTIFICATE-----
+MIIDnTCCAoWgAwIBAgIUB2lauSRccvFkoJybUfIwOUqBN7MwDQYJKoZIhvcNAQEL
+BQAwJDEiMCAGA1UEAxMZTWljcm9zb2Z0Lk1hbmFnZWRJZGVudGl0eTAeFw0xOTAy
+MTUxOTA5MDBaFw0xOTA4MTQxOTA5MDBaMIGUMYGRMIGOBgNVBAMTgYYvc3Vic2Ny
+aXB0aW9ucy8yN2I3NTBjZC1lZDQzLTQyZmQtOTA0NC04ZDc1ZTEyNGFlNTUvcmVz
+b3VyY2Vncm91cHMvYW5oZXh0cmFzc2gvcHJvdmlkZXJzL01pY3Jvc29mdC5Db21w
+dXRlL3ZpcnR1YWxNYWNoaW5lcy9hbmh0ZXN0Y2VydDCCASIwDQYJKoZIhvcNAQEB
+BQADggEPADCCAQoCggEBAOVCE+tnBVBgVXgUFzQfWJNdhrOcynBm8QhMq1dYALNN
+2C5R16sRU6RdbcdOLke8LasxrK3SeqjfNx3HV4aKp2OllD/AyuTP3A0Qz+c0yxee
+0TDGTSMJU0oH+PPq9/4E62tIjTVKuK0AZlZ2kqhNTLO1PwLaYDdfoPyDebgN3TuW
+2fPFoOoBAhTmMEeHd/9DXi2U81lZQiKpVMKAPGAB7swOZ+z2HcIidMFfVczknhSw
+tMvbf+lp2B5K8/lgXmqvX7RztN1+3GvaXAA3esuSKx/kSIsBOhXIkmWA/97GjYjw
+Nogp7sNnMPdjUKu6s6mRwwpi7mQzUe2Vu5q0OQBragMCAwEAAaNWMFQwDgYDVR0P
+AQH/BAQDAgeAMAwGA1UdEwEB/wQCMAAwEwYDVR0lBAwwCgYIKwYBBQUHAwIwHwYD
+VR0jBBgwFoAUOJvzEsriQWdJBndPrK+Me1bCPjYwDQYJKoZIhvcNAQELBQADggEB
+AFGP/g8o7Hv/to11M0UqfzJuW/AyH9RZtSRcNQFLZUndwweQ6fap8lFsA4REUdqe
+7Quqp5JNNY1XzKLWXMPoheIDH1A8FFXdsAroArzlNs9tO3TlIHE8A7HxEVZEmR4b
+7ZiixmkQPS2RkjEoV/GM6fheBrzuFn7X5kVZyE6cC5sfcebn8xhk3ZcXI0VmpdT0
+jFBsf5IvFCIXXLLhJI4KXc8VMoKFU1jT9na/jyaoGmfwovKj4ib8s2aiXGAp7Y38
+UCmY+bJapWom6Piy5Jzi/p/kzMVdJcSa+GqpuFxBoQYEVs2XYVl7cGu/wPM+NToC
+pkSoWwF1QAnHn0eokR9E1rU=
+-----END CERTIFICATE-----
+Bag Attributes: <Empty Attributes>
+subject=/CN=CRP/OU=AzureRT/O=Microsoft Corporation/L=Redmond/ST=WA/C=US
+issuer=/CN=Root Agency
+-----BEGIN CERTIFICATE-----
+MIIB+TCCAeOgAwIBAgIBATANBgkqhkiG9w0BAQUFADAWMRQwEgYDVQQDDAtSb290
+IEFnZW5jeTAeFw0xOTAyMTUxOTA0MDRaFw0yOTAyMTUxOTE0MDRaMGwxDDAKBgNV
+BAMMA0NSUDEQMA4GA1UECwwHQXp1cmVSVDEeMBwGA1UECgwVTWljcm9zb2Z0IENv
+cnBvcmF0aW9uMRAwDgYDVQQHDAdSZWRtb25kMQswCQYDVQQIDAJXQTELMAkGA1UE
+BhMCVVMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDHU9IDclbKVYVb
+Yuv0+zViX+wTwlKspslmy/uf3hkWLh7pyzyrq70S7qtSW2EGixUPxZS/R8pOLHoi
+nlKF9ILgj0gVTCJsSwnWpXRg3rhZwIVoYMHN50BHS1SqVD0lsWNMXmo76LoJcjmW
+vwIznvj5C/gnhU+K7+c3m7AlCyU2wjwpBAEYj7PQs6l/wTqpEiaqC5NytNBd7qp+
+lYYysVrpa1PFL0Nj4MMZARIfjkiJtL9qDhy9YZeJRQ6q/Fhz0kjvkZnfxixfKF4y
+WzOfhBrAtpF6oOnuYKk3hxjh9KjTTX4/U8zdLojalX09iyHyEjwJKGlGEpzh1aY7
+t5btUyvpAgMBAAEwDQYJKoZIhvcNAQEFBQADAQA=
+-----END CERTIFICATE-----
diff --git a/tests/data/azure/pubkey_extract_cert b/tests/data/azure/pubkey_extract_cert
new file mode 100644
index 00000000..ce9b852d
--- /dev/null
+++ b/tests/data/azure/pubkey_extract_cert
@@ -0,0 +1,13 @@
+-----BEGIN CERTIFICATE-----
+MIIB+TCCAeOgAwIBAgIBATANBgkqhkiG9w0BAQUFADAWMRQwEgYDVQQDDAtSb290
+IEFnZW5jeTAeFw0xOTAyMTUxOTA0MDRaFw0yOTAyMTUxOTE0MDRaMGwxDDAKBgNV
+BAMMA0NSUDEQMA4GA1UECwwHQXp1cmVSVDEeMBwGA1UECgwVTWljcm9zb2Z0IENv
+cnBvcmF0aW9uMRAwDgYDVQQHDAdSZWRtb25kMQswCQYDVQQIDAJXQTELMAkGA1UE
+BhMCVVMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDHU9IDclbKVYVb
+Yuv0+zViX+wTwlKspslmy/uf3hkWLh7pyzyrq70S7qtSW2EGixUPxZS/R8pOLHoi
+nlKF9ILgj0gVTCJsSwnWpXRg3rhZwIVoYMHN50BHS1SqVD0lsWNMXmo76LoJcjmW
+vwIznvj5C/gnhU+K7+c3m7AlCyU2wjwpBAEYj7PQs6l/wTqpEiaqC5NytNBd7qp+
+lYYysVrpa1PFL0Nj4MMZARIfjkiJtL9qDhy9YZeJRQ6q/Fhz0kjvkZnfxixfKF4y
+WzOfhBrAtpF6oOnuYKk3hxjh9KjTTX4/U8zdLojalX09iyHyEjwJKGlGEpzh1aY7
+t5btUyvpAgMBAAEwDQYJKoZIhvcNAQEFBQADAQA=
+-----END CERTIFICATE-----
diff --git a/tests/data/azure/pubkey_extract_ssh_key b/tests/data/azure/pubkey_extract_ssh_key
new file mode 100644
index 00000000..54d749ed
--- /dev/null
+++ b/tests/data/azure/pubkey_extract_ssh_key
@@ -0,0 +1 @@
+ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDHU9IDclbKVYVbYuv0+zViX+wTwlKspslmy/uf3hkWLh7pyzyrq70S7qtSW2EGixUPxZS/R8pOLHoinlKF9ILgj0gVTCJsSwnWpXRg3rhZwIVoYMHN50BHS1SqVD0lsWNMXmo76LoJcjmWvwIznvj5C/gnhU+K7+c3m7AlCyU2wjwpBAEYj7PQs6l/wTqpEiaqC5NytNBd7qp+lYYysVrpa1PFL0Nj4MMZARIfjkiJtL9qDhy9YZeJRQ6q/Fhz0kjvkZnfxixfKF4yWzOfhBrAtpF6oOnuYKk3hxjh9KjTTX4/U8zdLojalX09iyHyEjwJKGlGEpzh1aY7t5btUyvp
diff --git a/tests/unittests/test_datasource/test_azure_helper.py b/tests/unittests/test_datasource/test_azure_helper.py
index 26b2b93d..02556165 100644
--- a/tests/unittests/test_datasource/test_azure_helper.py
+++ b/tests/unittests/test_datasource/test_azure_helper.py
@@ -1,11 +1,13 @@
# This file is part of cloud-init. See LICENSE file for license information.
import os
+import unittest2
from textwrap import dedent
from cloudinit.sources.helpers import azure as azure_helper
from cloudinit.tests.helpers import CiTestCase, ExitStack, mock, populate_dir
+from cloudinit.util import load_file
from cloudinit.sources.helpers.azure import WALinuxAgentShim as wa_shim
GOAL_STATE_TEMPLATE = """\
@@ -289,6 +291,50 @@ class TestOpenSSLManager(CiTestCase):
self.assertEqual([mock.call(manager.tmpdir)], del_dir.call_args_list)
+class TestOpenSSLManagerActions(CiTestCase):
+
+ def setUp(self):
+ super(TestOpenSSLManagerActions, self).setUp()
+
+ self.allowed_subp = True
+
+ def _data_file(self, name):
+ path = 'tests/data/azure'
+ return os.path.join(path, name)
+
+ @unittest2.skip("todo move to cloud_test")
+ def test_pubkey_extract(self):
+ cert = load_file(self._data_file('pubkey_extract_cert'))
+ good_key = load_file(self._data_file('pubkey_extract_ssh_key'))
+ sslmgr = azure_helper.OpenSSLManager()
+ key = sslmgr._get_ssh_key_from_cert(cert)
+ self.assertEqual(good_key, key)
+
+ good_fingerprint = '073E19D14D1C799224C6A0FD8DDAB6A8BF27D473'
+ fingerprint = sslmgr._get_fingerprint_from_cert(cert)
+ self.assertEqual(good_fingerprint, fingerprint)
+
+ @unittest2.skip("todo move to cloud_test")
+ @mock.patch.object(azure_helper.OpenSSLManager, '_decrypt_certs_from_xml')
+ def test_parse_certificates(self, mock_decrypt_certs):
+ """Azure control plane puts private keys as well as certificates
+ into the Certificates XML object. Make sure only the public keys
+ from certs are extracted and that fingerprints are converted to
+ the form specified in the ovf-env.xml file.
+ """
+ cert_contents = load_file(self._data_file('parse_certificates_pem'))
+ fingerprints = load_file(self._data_file(
+ 'parse_certificates_fingerprints')
+ ).splitlines()
+ mock_decrypt_certs.return_value = cert_contents
+ sslmgr = azure_helper.OpenSSLManager()
+ keys_by_fp = sslmgr.parse_certificates('')
+ for fp in keys_by_fp.keys():
+ self.assertIn(fp, fingerprints)
+ for fp in fingerprints:
+ self.assertIn(fp, keys_by_fp)
+
+
class TestWALinuxAgentShim(CiTestCase):
def setUp(self):
@@ -329,18 +375,31 @@ class TestWALinuxAgentShim(CiTestCase):
def test_certificates_used_to_determine_public_keys(self):
shim = wa_shim()
- data = shim.register_with_azure_and_fetch_data()
+ """if register_with_azure_and_fetch_data() isn't passed some info about
+ the user's public keys, there's no point in even trying to parse
+ the certificates
+ """
+ mypk = [{'fingerprint': 'fp1', 'path': 'path1'},
+ {'fingerprint': 'fp3', 'path': 'path3', 'value': ''}]
+ certs = {'fp1': 'expected-key',
+ 'fp2': 'should-not-be-found',
+ 'fp3': 'expected-no-value-key',
+ }
+ sslmgr = self.OpenSSLManager.return_value
+ sslmgr.parse_certificates.return_value = certs
+ data = shim.register_with_azure_and_fetch_data(pubkey_info=mypk)
self.assertEqual(
[mock.call(self.GoalState.return_value.certificates_xml)],
- self.OpenSSLManager.return_value.parse_certificates.call_args_list)
- self.assertEqual(
- self.OpenSSLManager.return_value.parse_certificates.return_value,
- data['public-keys'])
+ sslmgr.parse_certificates.call_args_list)
+ self.assertIn('expected-key', data['public-keys'])
+ self.assertIn('expected-no-value-key', data['public-keys'])
+ self.assertNotIn('should-not-be-found', data['public-keys'])
def test_absent_certificates_produces_empty_public_keys(self):
+ mypk = [{'fingerprint': 'fp1', 'path': 'path1'}]
self.GoalState.return_value.certificates_xml = None
shim = wa_shim()
- data = shim.register_with_azure_and_fetch_data()
+ data = shim.register_with_azure_and_fetch_data(pubkey_info=mypk)
self.assertEqual([], data['public-keys'])
def test_correct_url_used_for_report_ready(self):
--
2.20.1


@@ -1,405 +0,0 @@
From f919e65e4a462b385a6daa6b7cccc6af1358cbcf Mon Sep 17 00:00:00 2001
From: Eduardo Otubo <otubo@redhat.com>
Date: Wed, 29 May 2019 13:41:47 +0200
Subject: [PATCH 3/5] Azure: Changes to the Hyper-V KVP Reporter
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eduardo Otubo <otubo@redhat.com>
Message-id: <20190529134149.842-4-otubo@redhat.com>
Patchwork-id: 88266
O-Subject: [RHEL-8.0.1/RHEL-8.1.0 cloud-init PATCHv2 3/5] Azure: Changes to the Hyper-V KVP Reporter
Bugzilla: 1691986
RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com>
RH-Acked-by: Cathy Avery <cavery@redhat.com>
From: Anh Vo <anhvo@microsoft.com>
commit 86674f013dfcea3c075ab41373ffb475881066f6
Author: Anh Vo <anhvo@microsoft.com>
Date: Mon Apr 29 20:22:16 2019 +0000
Azure: Changes to the Hyper-V KVP Reporter
 + Truncate KVP Pool file to prevent stale entries from
being processed by the Hyper-V KVP reporter.
 + Drop filtering of KVPs as it is no longer needed.
 + Batch appending of existing KVP entries.
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
---
cloudinit/reporting/handlers.py | 117 +++++++++++++++----------------
tests/unittests/test_reporting_hyperv.py | 104 +++++++++++++--------------
2 files changed, 106 insertions(+), 115 deletions(-)
mode change 100644 => 100755 cloudinit/reporting/handlers.py
mode change 100644 => 100755 tests/unittests/test_reporting_hyperv.py
diff --git a/cloudinit/reporting/handlers.py b/cloudinit/reporting/handlers.py
old mode 100644
new mode 100755
index 6d23558..10165ae
--- a/cloudinit/reporting/handlers.py
+++ b/cloudinit/reporting/handlers.py
@@ -5,7 +5,6 @@ import fcntl
import json
import six
import os
-import re
import struct
import threading
import time
@@ -14,6 +13,7 @@ from cloudinit import log as logging
from cloudinit.registry import DictRegistry
from cloudinit import (url_helper, util)
from datetime import datetime
+from six.moves.queue import Empty as QueueEmptyError
if six.PY2:
from multiprocessing.queues import JoinableQueue as JQueue
@@ -129,24 +129,50 @@ class HyperVKvpReportingHandler(ReportingHandler):
DESC_IDX_KEY = 'msg_i'
JSON_SEPARATORS = (',', ':')
KVP_POOL_FILE_GUEST = '/var/lib/hyperv/.kvp_pool_1'
+ _already_truncated_pool_file = False
def __init__(self,
kvp_file_path=KVP_POOL_FILE_GUEST,
event_types=None):
super(HyperVKvpReportingHandler, self).__init__()
self._kvp_file_path = kvp_file_path
+ HyperVKvpReportingHandler._truncate_guest_pool_file(
+ self._kvp_file_path)
+
self._event_types = event_types
self.q = JQueue()
- self.kvp_file = None
self.incarnation_no = self._get_incarnation_no()
self.event_key_prefix = u"{0}|{1}".format(self.EVENT_PREFIX,
self.incarnation_no)
- self._current_offset = 0
self.publish_thread = threading.Thread(
target=self._publish_event_routine)
self.publish_thread.daemon = True
self.publish_thread.start()
+ @classmethod
+ def _truncate_guest_pool_file(cls, kvp_file):
+ """
+ Truncate the pool file if it has not been truncated since boot.
+ This should be done exactly once for the file indicated by
+ KVP_POOL_FILE_GUEST constant above. This method takes a filename
+ so that we can use an arbitrary file during unit testing.
+ Since KVP is a best-effort telemetry channel we only attempt to
+ truncate the file once and only if the file has not been modified
+ since boot. Additional truncation can lead to loss of existing
+ KVPs.
+ """
+ if cls._already_truncated_pool_file:
+ return
+ boot_time = time.time() - float(util.uptime())
+ try:
+ if os.path.getmtime(kvp_file) < boot_time:
+ with open(kvp_file, "w"):
+ pass
+ except (OSError, IOError) as e:
+ LOG.warning("failed to truncate kvp pool file, %s", e)
+ finally:
+ cls._already_truncated_pool_file = True
+
def _get_incarnation_no(self):
"""
use the time passed as the incarnation number.
@@ -162,20 +188,15 @@ class HyperVKvpReportingHandler(ReportingHandler):
def _iterate_kvps(self, offset):
"""iterate the kvp file from the current offset."""
- try:
- with open(self._kvp_file_path, 'rb+') as f:
- self.kvp_file = f
- fcntl.flock(f, fcntl.LOCK_EX)
- f.seek(offset)
+ with open(self._kvp_file_path, 'rb') as f:
+ fcntl.flock(f, fcntl.LOCK_EX)
+ f.seek(offset)
+ record_data = f.read(self.HV_KVP_RECORD_SIZE)
+ while len(record_data) == self.HV_KVP_RECORD_SIZE:
+ kvp_item = self._decode_kvp_item(record_data)
+ yield kvp_item
record_data = f.read(self.HV_KVP_RECORD_SIZE)
- while len(record_data) == self.HV_KVP_RECORD_SIZE:
- self._current_offset += self.HV_KVP_RECORD_SIZE
- kvp_item = self._decode_kvp_item(record_data)
- yield kvp_item
- record_data = f.read(self.HV_KVP_RECORD_SIZE)
- fcntl.flock(f, fcntl.LOCK_UN)
- finally:
- self.kvp_file = None
+ fcntl.flock(f, fcntl.LOCK_UN)
def _event_key(self, event):
"""
@@ -207,23 +228,13 @@ class HyperVKvpReportingHandler(ReportingHandler):
return {'key': k, 'value': v}
- def _update_kvp_item(self, record_data):
- if self.kvp_file is None:
- raise ReportException(
- "kvp file '{0}' not opened."
- .format(self._kvp_file_path))
- self.kvp_file.seek(-self.HV_KVP_RECORD_SIZE, 1)
- self.kvp_file.write(record_data)
-
def _append_kvp_item(self, record_data):
- with open(self._kvp_file_path, 'rb+') as f:
+ with open(self._kvp_file_path, 'ab') as f:
fcntl.flock(f, fcntl.LOCK_EX)
- # seek to end of the file
- f.seek(0, 2)
- f.write(record_data)
+ for data in record_data:
+ f.write(data)
f.flush()
fcntl.flock(f, fcntl.LOCK_UN)
- self._current_offset = f.tell()
def _break_down(self, key, meta_data, description):
del meta_data[self.MSG_KEY]
@@ -279,40 +290,26 @@ class HyperVKvpReportingHandler(ReportingHandler):
def _publish_event_routine(self):
while True:
+ items_from_queue = 0
try:
event = self.q.get(block=True)
- need_append = True
+ items_from_queue += 1
+ encoded_data = []
+ while event is not None:
+ encoded_data += self._encode_event(event)
+ try:
+ # get all the rest of the events in the queue
+ event = self.q.get(block=False)
+ items_from_queue += 1
+ except QueueEmptyError:
+ event = None
try:
- if not os.path.exists(self._kvp_file_path):
- LOG.warning(
- "skip writing events %s to %s. file not present.",
- event.as_string(),
- self._kvp_file_path)
- encoded_event = self._encode_event(event)
- # for each encoded_event
- for encoded_data in (encoded_event):
- for kvp in self._iterate_kvps(self._current_offset):
- match = (
- re.match(
- r"^{0}\|(\d+)\|.+"
- .format(self.EVENT_PREFIX),
- kvp['key']
- ))
- if match:
- match_groups = match.groups(0)
- if int(match_groups[0]) < self.incarnation_no:
- need_append = False
- self._update_kvp_item(encoded_data)
- continue
- if need_append:
- self._append_kvp_item(encoded_data)
- except IOError as e:
- LOG.warning(
- "failed posting event to kvp: %s e:%s",
- event.as_string(), e)
+ self._append_kvp_item(encoded_data)
+ except (OSError, IOError) as e:
+ LOG.warning("failed posting events to kvp, %s", e)
finally:
- self.q.task_done()
-
+ for _ in range(items_from_queue):
+ self.q.task_done()
# when main process exits, q.get() will throw EOFError
# indicating we should exit this thread.
except EOFError:
@@ -322,7 +319,7 @@ class HyperVKvpReportingHandler(ReportingHandler):
# if the kvp pool already contains a chunk of data,
# so defer it to another thread.
def publish_event(self, event):
- if (not self._event_types or event.event_type in self._event_types):
+ if not self._event_types or event.event_type in self._event_types:
self.q.put(event)
def flush(self):
diff --git a/tests/unittests/test_reporting_hyperv.py b/tests/unittests/test_reporting_hyperv.py
old mode 100644
new mode 100755
index 2e64c6c..d01ed5b
--- a/tests/unittests/test_reporting_hyperv.py
+++ b/tests/unittests/test_reporting_hyperv.py
@@ -1,10 +1,12 @@
# This file is part of cloud-init. See LICENSE file for license information.
from cloudinit.reporting import events
-from cloudinit.reporting import handlers
+from cloudinit.reporting.handlers import HyperVKvpReportingHandler
import json
import os
+import struct
+import time
from cloudinit import util
from cloudinit.tests.helpers import CiTestCase
@@ -13,7 +15,7 @@ from cloudinit.tests.helpers import CiTestCase
class TestKvpEncoding(CiTestCase):
def test_encode_decode(self):
kvp = {'key': 'key1', 'value': 'value1'}
- kvp_reporting = handlers.HyperVKvpReportingHandler()
+ kvp_reporting = HyperVKvpReportingHandler()
data = kvp_reporting._encode_kvp_item(kvp['key'], kvp['value'])
self.assertEqual(len(data), kvp_reporting.HV_KVP_RECORD_SIZE)
decoded_kvp = kvp_reporting._decode_kvp_item(data)
@@ -26,57 +28,9 @@ class TextKvpReporter(CiTestCase):
self.tmp_file_path = self.tmp_path('kvp_pool_file')
util.ensure_file(self.tmp_file_path)
- def test_event_type_can_be_filtered(self):
- reporter = handlers.HyperVKvpReportingHandler(
- kvp_file_path=self.tmp_file_path,
- event_types=['foo', 'bar'])
-
- reporter.publish_event(
- events.ReportingEvent('foo', 'name', 'description'))
- reporter.publish_event(
- events.ReportingEvent('some_other', 'name', 'description3'))
- reporter.q.join()
-
- kvps = list(reporter._iterate_kvps(0))
- self.assertEqual(1, len(kvps))
-
- reporter.publish_event(
- events.ReportingEvent('bar', 'name', 'description2'))
- reporter.q.join()
- kvps = list(reporter._iterate_kvps(0))
- self.assertEqual(2, len(kvps))
-
- self.assertIn('foo', kvps[0]['key'])
- self.assertIn('bar', kvps[1]['key'])
- self.assertNotIn('some_other', kvps[0]['key'])
- self.assertNotIn('some_other', kvps[1]['key'])
-
- def test_events_are_over_written(self):
- reporter = handlers.HyperVKvpReportingHandler(
- kvp_file_path=self.tmp_file_path)
-
- self.assertEqual(0, len(list(reporter._iterate_kvps(0))))
-
- reporter.publish_event(
- events.ReportingEvent('foo', 'name1', 'description'))
- reporter.publish_event(
- events.ReportingEvent('foo', 'name2', 'description'))
- reporter.q.join()
- self.assertEqual(2, len(list(reporter._iterate_kvps(0))))
-
- reporter2 = handlers.HyperVKvpReportingHandler(
- kvp_file_path=self.tmp_file_path)
- reporter2.incarnation_no = reporter.incarnation_no + 1
- reporter2.publish_event(
- events.ReportingEvent('foo', 'name3', 'description'))
- reporter2.q.join()
-
- self.assertEqual(2, len(list(reporter2._iterate_kvps(0))))
-
def test_events_with_higher_incarnation_not_over_written(self):
- reporter = handlers.HyperVKvpReportingHandler(
+ reporter = HyperVKvpReportingHandler(
kvp_file_path=self.tmp_file_path)
-
self.assertEqual(0, len(list(reporter._iterate_kvps(0))))
reporter.publish_event(
@@ -86,7 +40,7 @@ class TextKvpReporter(CiTestCase):
reporter.q.join()
self.assertEqual(2, len(list(reporter._iterate_kvps(0))))
- reporter3 = handlers.HyperVKvpReportingHandler(
+ reporter3 = HyperVKvpReportingHandler(
kvp_file_path=self.tmp_file_path)
reporter3.incarnation_no = reporter.incarnation_no - 1
reporter3.publish_event(
@@ -95,7 +49,7 @@ class TextKvpReporter(CiTestCase):
self.assertEqual(3, len(list(reporter3._iterate_kvps(0))))
def test_finish_event_result_is_logged(self):
- reporter = handlers.HyperVKvpReportingHandler(
+ reporter = HyperVKvpReportingHandler(
kvp_file_path=self.tmp_file_path)
reporter.publish_event(
events.FinishReportingEvent('name2', 'description1',
@@ -105,7 +59,7 @@ class TextKvpReporter(CiTestCase):
def test_file_operation_issue(self):
os.remove(self.tmp_file_path)
- reporter = handlers.HyperVKvpReportingHandler(
+ reporter = HyperVKvpReportingHandler(
kvp_file_path=self.tmp_file_path)
reporter.publish_event(
events.FinishReportingEvent('name2', 'description1',
@@ -113,7 +67,7 @@ class TextKvpReporter(CiTestCase):
reporter.q.join()
def test_event_very_long(self):
- reporter = handlers.HyperVKvpReportingHandler(
+ reporter = HyperVKvpReportingHandler(
kvp_file_path=self.tmp_file_path)
description = 'ab' * reporter.HV_KVP_EXCHANGE_MAX_VALUE_SIZE
long_event = events.FinishReportingEvent(
@@ -132,3 +86,43 @@ class TextKvpReporter(CiTestCase):
self.assertEqual(msg_slice['msg_i'], i)
full_description += msg_slice['msg']
self.assertEqual(description, full_description)
+
+ def test_not_truncate_kvp_file_modified_after_boot(self):
+ with open(self.tmp_file_path, "wb+") as f:
+ kvp = {'key': 'key1', 'value': 'value1'}
+ data = (struct.pack("%ds%ds" % (
+ HyperVKvpReportingHandler.HV_KVP_EXCHANGE_MAX_KEY_SIZE,
+ HyperVKvpReportingHandler.HV_KVP_EXCHANGE_MAX_VALUE_SIZE),
+ kvp['key'].encode('utf-8'), kvp['value'].encode('utf-8')))
+ f.write(data)
+ cur_time = time.time()
+ os.utime(self.tmp_file_path, (cur_time, cur_time))
+
+ # reset this because the unit test framework
+ # has already polluted the class variable
+ HyperVKvpReportingHandler._already_truncated_pool_file = False
+
+ reporter = HyperVKvpReportingHandler(kvp_file_path=self.tmp_file_path)
+ kvps = list(reporter._iterate_kvps(0))
+ self.assertEqual(1, len(kvps))
+
+ def test_truncate_stale_kvp_file(self):
+ with open(self.tmp_file_path, "wb+") as f:
+ kvp = {'key': 'key1', 'value': 'value1'}
+ data = (struct.pack("%ds%ds" % (
+ HyperVKvpReportingHandler.HV_KVP_EXCHANGE_MAX_KEY_SIZE,
+ HyperVKvpReportingHandler.HV_KVP_EXCHANGE_MAX_VALUE_SIZE),
+ kvp['key'].encode('utf-8'), kvp['value'].encode('utf-8')))
+ f.write(data)
+
+ # set the time ways back to make it look like
+ # we had an old kvp file
+ os.utime(self.tmp_file_path, (1000000, 1000000))
+
+ # reset this because the unit test framework
+ # has already polluted the class variable
+ HyperVKvpReportingHandler._already_truncated_pool_file = False
+
+ reporter = HyperVKvpReportingHandler(kvp_file_path=self.tmp_file_path)
+ kvps = list(reporter._iterate_kvps(0))
+ self.assertEqual(0, len(kvps))
--
1.8.3.1


@@ -1,156 +0,0 @@
From ae9b545cef4a68dfb9f9356dd27e43ff71ec26aa Mon Sep 17 00:00:00 2001
From: Eduardo Otubo <otubo@redhat.com>
Date: Wed, 29 May 2019 13:41:45 +0200
Subject: [PATCH 1/5] Azure: Ensure platform random_seed is always serializable
as JSON.
RH-Author: Eduardo Otubo <otubo@redhat.com>
Message-id: <20190529134149.842-2-otubo@redhat.com>
Patchwork-id: 88272
O-Subject: [RHEL-8.0.1/RHEL-8.1.0 cloud-init PATCHv2 1/5] Azure: Ensure platform random_seed is always serializable as JSON.
Bugzilla: 1691986
RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com>
RH-Acked-by: Cathy Avery <cavery@redhat.com>
From: "Jason Zions (MSFT)" <jasonzio@microsoft.com>
commit 0dc3a77f41f4544e4cb5a41637af7693410d4cdf
Author: Jason Zions (MSFT) <jasonzio@microsoft.com>
Date: Tue Mar 26 18:53:50 2019 +0000
Azure: Ensure platform random_seed is always serializable as JSON.
The Azure platform surfaces random bytes into /sys via Hyper-V.
Python 2.7 json.dump() raises an exception if asked to convert
a str with non-character content, and python 3.0 json.dump()
won't serialize a "bytes" value. As a result, c-i instance
data is often not written by Azure, making reboots slower (c-i
has to repeat work).
The random data is base64-encoded and then decoded into a string
(str or unicode depending on the version of Python in use). The
base64 string has just as many bits of entropy, so we're not
throwing away useful "information", but we can be certain
json.dump() will correctly serialize the bits.
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
Conflicts:
tests/unittests/test_datasource/test_azure.py
Skipped the commit edf052c as it removes support for python-2.6
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
---
cloudinit/sources/DataSourceAzure.py | 24 +++++++++++++++++++-----
tests/data/azure/non_unicode_random_string | 1 +
tests/unittests/test_datasource/test_azure.py | 24 ++++++++++++++++++++++--
3 files changed, 42 insertions(+), 7 deletions(-)
create mode 100644 tests/data/azure/non_unicode_random_string
diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
index 2062ca5..a768b2c 100644
--- a/cloudinit/sources/DataSourceAzure.py
+++ b/cloudinit/sources/DataSourceAzure.py
@@ -54,6 +54,7 @@ REPROVISION_MARKER_FILE = "/var/lib/cloud/data/poll_imds"
REPORTED_READY_MARKER_FILE = "/var/lib/cloud/data/reported_ready"
AGENT_SEED_DIR = '/var/lib/waagent'
IMDS_URL = "http://169.254.169.254/metadata/"
+PLATFORM_ENTROPY_SOURCE = "/sys/firmware/acpi/tables/OEM0"
# List of static scripts and network config artifacts created by
# stock ubuntu suported images.
@@ -195,6 +196,8 @@ if util.is_FreeBSD():
RESOURCE_DISK_PATH = "/dev/" + res_disk
else:
LOG.debug("resource disk is None")
+ # TODO Find where platform entropy data is surfaced
+ PLATFORM_ENTROPY_SOURCE = None
BUILTIN_DS_CONFIG = {
'agent_command': AGENT_START_BUILTIN,
@@ -1100,16 +1103,27 @@ def _check_freebsd_cdrom(cdrom_dev):
return False
-def _get_random_seed():
+def _get_random_seed(source=PLATFORM_ENTROPY_SOURCE):
"""Return content random seed file if available, otherwise,
return None."""
# azure / hyper-v provides random data here
- # TODO. find the seed on FreeBSD platform
# now update ds_cfg to reflect contents pass in config
- if util.is_FreeBSD():
+ if source is None:
return None
- return util.load_file("/sys/firmware/acpi/tables/OEM0",
- quiet=True, decode=False)
+ seed = util.load_file(source, quiet=True, decode=False)
+
+ # The seed generally contains non-Unicode characters. load_file puts
+ # them into a str (in python 2) or bytes (in python 3). In python 2,
+ # bad octets in a str cause util.json_dumps() to throw an exception. In
+ # python 3, bytes is a non-serializable type, and the handler load_file
+ # uses applies b64 encoding *again* to handle it. The simplest solution
+ # is to just b64encode the data and then decode it to a serializable
+ # string. Same number of bits of entropy, just with 25% more zeroes.
+ # There's no need to undo this base64-encoding when the random seed is
+ # actually used in cc_seed_random.py.
+ seed = base64.b64encode(seed).decode()
+
+ return seed
def list_possible_azure_ds_devs():
diff --git a/tests/data/azure/non_unicode_random_string b/tests/data/azure/non_unicode_random_string
new file mode 100644
index 0000000..b9ecefb
--- /dev/null
+++ b/tests/data/azure/non_unicode_random_string
@@ -0,0 +1 @@
+OEM0d\x00\x00\x00\x01\x80VRTUALMICROSFT\x02\x17\x00\x06MSFT\x97\x00\x00\x00C\xb4{V\xf4X%\x061x\x90\x1c\xfen\x86\xbf~\xf5\x8c\x94&\x88\xed\x84\xf9B\xbd\xd3\xf1\xdb\xee:\xd9\x0fc\x0e\x83(\xbd\xe3'\xfc\x85,\xdf\xf4\x13\x99N\xc5\xf3Y\x1e\xe3\x0b\xa4H\x08J\xb9\xdcdb$
\ No newline at end of file
diff --git a/tests/unittests/test_datasource/test_azure.py b/tests/unittests/test_datasource/test_azure.py
index 417d86a..eacf225 100644
--- a/tests/unittests/test_datasource/test_azure.py
+++ b/tests/unittests/test_datasource/test_azure.py
@@ -7,11 +7,11 @@ from cloudinit.sources import (
UNSET, DataSourceAzure as dsaz, InvalidMetaDataException)
from cloudinit.util import (b64e, decode_binary, load_file, write_file,
find_freebsd_part, get_path_dev_freebsd,
- MountFailedError)
+ MountFailedError, json_dumps, load_json)
from cloudinit.version import version_string as vs
from cloudinit.tests.helpers import (
HttprettyTestCase, CiTestCase, populate_dir, mock, wrap_and_call,
- ExitStack, PY26, SkipTest)
+ ExitStack, PY26, SkipTest, resourceLocation)
import crypt
import httpretty
@@ -1924,4 +1924,24 @@ class TestWBIsPlatformViable(CiTestCase):
self.logs.getvalue())
+class TestRandomSeed(CiTestCase):
+ """Test proper handling of random_seed"""
+
+ def test_non_ascii_seed_is_serializable(self):
+ """Pass if a random string from the Azure infrastructure which
+ contains at least one non-Unicode character can be converted to/from
+ JSON without alteration and without throwing an exception.
+ """
+ path = resourceLocation("azure/non_unicode_random_string")
+ result = dsaz._get_random_seed(path)
+
+ obj = {'seed': result}
+ try:
+ serialized = json_dumps(obj)
+ deserialized = load_json(serialized)
+ except UnicodeDecodeError:
+ self.fail("Non-serializable random seed returned")
+
+ self.assertEqual(deserialized['seed'], result)
+
# vi: ts=4 expandtab
--
1.8.3.1
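The base64 round-trip described in the commit message above can be sketched outside the patch with plain Python and no cloud-init imports; the byte string below is an arbitrary stand-in for the OEM0 entropy table, not real Azure data:

```python
import base64
import json

# Arbitrary stand-in for the raw entropy bytes read from
# /sys/firmware/acpi/tables/OEM0; real data routinely contains
# octets that are not valid UTF-8 text.
raw = b"\x97\x00\x00\x00C\xb4{V\xf4X"

# Python 3: json.dumps refuses bytes values outright.
try:
    json.dumps({"seed": raw})
    serializable = True
except TypeError:
    serializable = False
assert not serializable

# b64encode yields pure-ASCII bytes; .decode() turns them into a str
# that json.dumps handles. The entropy is preserved, just re-encoded,
# which is why cc_seed_random.py never needs to undo it.
seed = base64.b64encode(raw).decode()
doc = json.dumps({"seed": seed})
assert json.loads(doc)["seed"] == seed
```

This mirrors why the patch can drop the seed into instance data unconditionally: after encoding, the value is an ordinary string on every Python version.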

@@ -1,102 +0,0 @@
From ae0f85f867714009af7d4ec6c58e30c552bab556 Mon Sep 17 00:00:00 2001
From: Eduardo Otubo <otubo@redhat.com>
Date: Wed, 3 Jul 2019 13:06:49 +0200
Subject: [PATCH 2/2] Azure: Return static fallback address as if failed to
find endpoint
RH-Author: Eduardo Otubo <otubo@redhat.com>
Message-id: <20190703130649.14511-1-otubo@redhat.com>
Patchwork-id: 89353
O-Subject: [RHEL-8.0.1/RHEL-8.1.0 cloud-init PATCH] Azure: Return static fallback address as if failed to find endpoint
Bugzilla: 1691986
RH-Acked-by: Bandan Das <bsd@redhat.com>
RH-Acked-by: Mohammed Gamal <mgamal@redhat.com>
commit ade77012c8bbcd215b7e26065981194ce1b6a157
Author: Jason Zions (MSFT) <jasonzio@microsoft.com>
Date: Fri May 10 18:38:55 2019 +0000
Azure: Return static fallback address as if failed to find endpoint
The Azure data source helper attempts to use information in the dhcp
lease to find the Wireserver endpoint (IP address). Under some unusual
circumstances, those attempts will fail. This change uses a static
address, known to be always correct in the Azure public and sovereign
clouds, when the helper fails to locate a valid dhcp lease. This
address is not guaranteed to be correct in Azure Stack environments;
it's still best to use the information from the lease whenever possible.
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
---
cloudinit/sources/helpers/azure.py | 14 +++++++++++---
tests/unittests/test_datasource/test_azure_helper.py | 9 +++++++--
2 files changed, 18 insertions(+), 5 deletions(-)
diff --git a/cloudinit/sources/helpers/azure.py b/cloudinit/sources/helpers/azure.py
index d3af05e..82c4c8c 100755
--- a/cloudinit/sources/helpers/azure.py
+++ b/cloudinit/sources/helpers/azure.py
@@ -20,6 +20,9 @@ from cloudinit.reporting import events
LOG = logging.getLogger(__name__)
+# This endpoint matches the format as found in dhcp lease files, since this
+# value is applied if the endpoint can't be found within a lease file
+DEFAULT_WIRESERVER_ENDPOINT = "a8:3f:81:10"
azure_ds_reporter = events.ReportEventStack(
name="azure-ds",
@@ -297,7 +300,12 @@ class WALinuxAgentShim(object):
@azure_ds_telemetry_reporter
def _get_value_from_leases_file(fallback_lease_file):
leases = []
- content = util.load_file(fallback_lease_file)
+ try:
+ content = util.load_file(fallback_lease_file)
+ except IOError as ex:
+ LOG.error("Failed to read %s: %s", fallback_lease_file, ex)
+ return None
+
LOG.debug("content is %s", content)
option_name = _get_dhcp_endpoint_option_name()
for line in content.splitlines():
@@ -372,9 +380,9 @@ class WALinuxAgentShim(object):
fallback_lease_file)
value = WALinuxAgentShim._get_value_from_leases_file(
fallback_lease_file)
-
if value is None:
- raise ValueError('No endpoint found.')
+ LOG.warning("No lease found; using default endpoint")
+ value = DEFAULT_WIRESERVER_ENDPOINT
endpoint_ip_address = WALinuxAgentShim.get_ip_from_lease_value(value)
LOG.debug('Azure endpoint found at %s', endpoint_ip_address)
diff --git a/tests/unittests/test_datasource/test_azure_helper.py b/tests/unittests/test_datasource/test_azure_helper.py
index 0255616..bd006ab 100644
--- a/tests/unittests/test_datasource/test_azure_helper.py
+++ b/tests/unittests/test_datasource/test_azure_helper.py
@@ -67,12 +67,17 @@ class TestFindEndpoint(CiTestCase):
self.networkd_leases.return_value = None
def test_missing_file(self):
- self.assertRaises(ValueError, wa_shim.find_endpoint)
+ """wa_shim find_endpoint uses default endpoint if leasefile not found
+ """
+ self.assertEqual(wa_shim.find_endpoint(), "168.63.129.16")
def test_missing_special_azure_line(self):
+ """wa_shim find_endpoint uses default endpoint if leasefile is found
+ but does not contain DHCP Option 245 (whose value is the endpoint)
+ """
self.load_file.return_value = ''
self.dhcp_options.return_value = {'eth0': {'key': 'value'}}
- self.assertRaises(ValueError, wa_shim.find_endpoint)
+ self.assertEqual(wa_shim.find_endpoint(), "168.63.129.16")
@staticmethod
def _build_lease_content(encoded_address):
--
1.8.3.1
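The default value `a8:3f:81:10` introduced above is the lease-file encoding of Azure's well-known wireserver address, which the updated tests expect to resolve to `168.63.129.16`. A minimal sketch of that decoding follows; `lease_value_to_ip` is an illustrative name covering only the colon-separated-hex case, while cloud-init's real `get_ip_from_lease_value` also handles other lease-file encodings:

```python
# DHCP option 245 as it appears in dhclient lease files: four
# colon-separated hex octets.
DEFAULT_WIRESERVER_ENDPOINT = "a8:3f:81:10"

def lease_value_to_ip(value):
    # Convert each hex octet to decimal and join with dots.
    return ".".join(str(int(octet, 16)) for octet in value.split(":"))

# 0xa8=168, 0x3f=63, 0x81=129, 0x10=16
assert lease_value_to_ip(DEFAULT_WIRESERVER_ENDPOINT) == "168.63.129.16"
```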

@@ -0,0 +1,46 @@
From 65b26a20b550ae301ca33eafe062a873f53969de Mon Sep 17 00:00:00 2001
From: Eduardo Otubo <otubo@redhat.com>
Date: Wed, 24 Jun 2020 07:34:32 +0200
Subject: [PATCH 3/4] Change from redhat to rhel in systemd generator tmpl
(#450)
RH-Author: Eduardo Otubo <otubo@redhat.com>
Message-id: <20200623154034.28563-3-otubo@redhat.com>
Patchwork-id: 97783
O-Subject: [RHEL-8.3.0/RHEL-8.2.1 cloud-init PATCH 2/3] Change from redhat to rhel in systemd generator tmpl (#450)
Bugzilla: 1834173
RH-Acked-by: Cathy Avery <cavery@redhat.com>
RH-Acked-by: Mohammed Gamal <mgamal@redhat.com>
commit 650d53d656b612442773453813d8417b234d3752
Author: Eduardo Otubo <otubo@redhat.com>
Date: Tue Jun 23 14:41:15 2020 +0200
Change from redhat to rhel in systemd generator tmpl (#450)
The name `redhat' is not used but rather `rhel' to identify the distro.
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
---
systemd/cloud-init-generator.tmpl | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/systemd/cloud-init-generator.tmpl b/systemd/cloud-init-generator.tmpl
index 45efa24..0773356 100755
--- a/systemd/cloud-init-generator.tmpl
+++ b/systemd/cloud-init-generator.tmpl
@@ -83,7 +83,7 @@ default() {
check_for_datasource() {
local ds_rc=""
-{% if variant in ["redhat", "fedora", "centos"] %}
+{% if variant in ["rhel", "fedora", "centos"] %}
local dsidentify="/usr/libexec/cloud-init/ds-identify"
{% else %}
local dsidentify="/usr/lib/cloud-init/ds-identify"
--
1.8.3.1

@@ -1,111 +0,0 @@
From b2500e258b930479cef36f514fcf9581ba68c976 Mon Sep 17 00:00:00 2001
From: Eduardo Otubo <otubo@redhat.com>
Date: Wed, 29 May 2019 13:41:48 +0200
Subject: [PATCH 4/5] DataSourceAzure: Adjust timeout for polling IMDS
RH-Author: Eduardo Otubo <otubo@redhat.com>
Message-id: <20190529134149.842-5-otubo@redhat.com>
Patchwork-id: 88267
O-Subject: [RHEL-8.0.1/RHEL-8.1.0 cloud-init PATCHv2 4/5] DataSourceAzure: Adjust timeout for polling IMDS
Bugzilla: 1691986
RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com>
RH-Acked-by: Cathy Avery <cavery@redhat.com>
From: Anh Vo <anhvo@microsoft.com>
commit ab6621d849b24bb652243e88c79f6f3b446048d7
Author: Anh Vo <anhvo@microsoft.com>
Date: Wed May 8 14:54:03 2019 +0000
DataSourceAzure: Adjust timeout for polling IMDS
If the IMDS primary server is not available, falling back to the
secondary server takes about 1s. The net result is that the
expected E2E time is slightly more than 1s. This change increases
the timeout to 2s to prevent the infinite loop of timeouts.
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
---
cloudinit/sources/DataSourceAzure.py | 15 ++++++++++-----
tests/unittests/test_datasource/test_azure.py | 10 +++++++---
2 files changed, 17 insertions(+), 8 deletions(-)
diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
index c827816..5baf8da 100755
--- a/cloudinit/sources/DataSourceAzure.py
+++ b/cloudinit/sources/DataSourceAzure.py
@@ -57,7 +57,12 @@ AZURE_CHASSIS_ASSET_TAG = '7783-7084-3265-9085-8269-3286-77'
REPROVISION_MARKER_FILE = "/var/lib/cloud/data/poll_imds"
REPORTED_READY_MARKER_FILE = "/var/lib/cloud/data/reported_ready"
AGENT_SEED_DIR = '/var/lib/waagent'
+
+# In the event where the IMDS primary server is not
+# available, it takes 1s to fallback to the secondary one
+IMDS_TIMEOUT_IN_SECONDS = 2
IMDS_URL = "http://169.254.169.254/metadata/"
+
PLATFORM_ENTROPY_SOURCE = "/sys/firmware/acpi/tables/OEM0"
# List of static scripts and network config artifacts created by
@@ -582,9 +587,9 @@ class DataSourceAzure(sources.DataSource):
return
self._ephemeral_dhcp_ctx.clean_network()
else:
- return readurl(url, timeout=1, headers=headers,
- exception_cb=exc_cb, infinite=True,
- log_req_resp=False).contents
+ return readurl(url, timeout=IMDS_TIMEOUT_IN_SECONDS,
+ headers=headers, exception_cb=exc_cb,
+ infinite=True, log_req_resp=False).contents
except UrlError:
# Teardown our EphemeralDHCPv4 context on failure as we retry
self._ephemeral_dhcp_ctx.clean_network()
@@ -1291,8 +1296,8 @@ def _get_metadata_from_imds(retries):
headers = {"Metadata": "true"}
try:
response = readurl(
- url, timeout=1, headers=headers, retries=retries,
- exception_cb=retry_on_url_exc)
+ url, timeout=IMDS_TIMEOUT_IN_SECONDS, headers=headers,
+ retries=retries, exception_cb=retry_on_url_exc)
except Exception as e:
LOG.debug('Ignoring IMDS instance metadata: %s', e)
return {}
diff --git a/tests/unittests/test_datasource/test_azure.py b/tests/unittests/test_datasource/test_azure.py
index eacf225..bc8b42c 100644
--- a/tests/unittests/test_datasource/test_azure.py
+++ b/tests/unittests/test_datasource/test_azure.py
@@ -163,7 +163,8 @@ class TestGetMetadataFromIMDS(HttprettyTestCase):
m_readurl.assert_called_with(
self.network_md_url, exception_cb=mock.ANY,
- headers={'Metadata': 'true'}, retries=2, timeout=1)
+ headers={'Metadata': 'true'}, retries=2,
+ timeout=dsaz.IMDS_TIMEOUT_IN_SECONDS)
@mock.patch('cloudinit.url_helper.time.sleep')
@mock.patch(MOCKPATH + 'net.is_up')
@@ -1789,7 +1790,8 @@ class TestAzureDataSourcePreprovisioning(CiTestCase):
headers={'Metadata': 'true',
'User-Agent':
'Cloud-Init/%s' % vs()
- }, method='GET', timeout=1,
+ }, method='GET',
+ timeout=dsaz.IMDS_TIMEOUT_IN_SECONDS,
url=full_url)])
self.assertEqual(m_dhcp.call_count, 2)
m_net.assert_any_call(
@@ -1826,7 +1828,9 @@ class TestAzureDataSourcePreprovisioning(CiTestCase):
headers={'Metadata': 'true',
'User-Agent':
'Cloud-Init/%s' % vs()},
- method='GET', timeout=1, url=full_url)])
+ method='GET',
+ timeout=dsaz.IMDS_TIMEOUT_IN_SECONDS,
+ url=full_url)])
self.assertEqual(m_dhcp.call_count, 2)
m_net.assert_any_call(
broadcast='192.168.2.255', interface='eth9', ip='192.168.2.9',
--
1.8.3.1
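The interaction of the larger timeout with the retry contract can be sketched with a stand-in request function; `read_with_retries` and `flaky` are illustrative names, not cloud-init APIs, and the sketch omits the backoff and `exception_cb` handling that the real `readurl` performs:

```python
# A 1s budget expires during the ~1s primary->secondary IMDS failover,
# so every attempt times out; 2s leaves headroom for the failover.
IMDS_TIMEOUT_IN_SECONDS = 2

def read_with_retries(do_request, retries):
    # One initial attempt plus `retries` re-tries, re-raising the
    # last error if all attempts fail.
    last_err = None
    for _ in range(retries + 1):
        try:
            return do_request(timeout=IMDS_TIMEOUT_IN_SECONDS)
        except TimeoutError as e:
            last_err = e
    raise last_err

calls = []
def flaky(timeout):
    calls.append(timeout)
    if len(calls) < 2:        # first attempt hits the primary and times out
        raise TimeoutError()
    return b"{}"              # second attempt reaches the secondary

assert read_with_retries(flaky, retries=2) == b"{}"
assert calls == [IMDS_TIMEOUT_IN_SECONDS, IMDS_TIMEOUT_IN_SECONDS]
```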

@@ -1,642 +0,0 @@
From 4e4e73df5ce7a819f48bed0ae6d2f0b1fb7e243b Mon Sep 17 00:00:00 2001
From: Eduardo Otubo <otubo@redhat.com>
Date: Wed, 29 May 2019 13:41:46 +0200
Subject: [PATCH 2/5] DatasourceAzure: add additional logging for azure
datasource
RH-Author: Eduardo Otubo <otubo@redhat.com>
Message-id: <20190529134149.842-3-otubo@redhat.com>
Patchwork-id: 88268
O-Subject: [RHEL-8.0.1/RHEL-8.1.0 cloud-init PATCHv2 2/5] DatasourceAzure: add additional logging for azure datasource
Bugzilla: 1691986
RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com>
RH-Acked-by: Cathy Avery <cavery@redhat.com>
From: Anh Vo <anhvo@microsoft.com>
commit 0d8c88393b51db6454491a379dcc2e691551217a
Author: Anh Vo <anhvo@microsoft.com>
Date: Wed Apr 3 18:23:18 2019 +0000
DatasourceAzure: add additional logging for azure datasource
Create an Azure logging decorator and use additional ReportEventStack
context managers to provide additional logging details.
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
---
cloudinit/sources/DataSourceAzure.py | 231 ++++++++++++++++++++++-------------
cloudinit/sources/helpers/azure.py | 31 +++++
2 files changed, 179 insertions(+), 83 deletions(-)
mode change 100644 => 100755 cloudinit/sources/DataSourceAzure.py
mode change 100644 => 100755 cloudinit/sources/helpers/azure.py
diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
old mode 100644
new mode 100755
index a768b2c..c827816
--- a/cloudinit/sources/DataSourceAzure.py
+++ b/cloudinit/sources/DataSourceAzure.py
@@ -21,10 +21,14 @@ from cloudinit import net
from cloudinit.event import EventType
from cloudinit.net.dhcp import EphemeralDHCPv4
from cloudinit import sources
-from cloudinit.sources.helpers.azure import get_metadata_from_fabric
from cloudinit.sources.helpers import netlink
from cloudinit.url_helper import UrlError, readurl, retry_on_url_exc
from cloudinit import util
+from cloudinit.reporting import events
+
+from cloudinit.sources.helpers.azure import (azure_ds_reporter,
+ azure_ds_telemetry_reporter,
+ get_metadata_from_fabric)
LOG = logging.getLogger(__name__)
@@ -244,6 +248,7 @@ def set_hostname(hostname, hostname_command='hostname'):
util.subp(['hostnamectl', 'set-hostname', str(hostname)])
+@azure_ds_telemetry_reporter
@contextlib.contextmanager
def temporary_hostname(temp_hostname, cfg, hostname_command='hostname'):
"""
@@ -290,6 +295,7 @@ class DataSourceAzure(sources.DataSource):
root = sources.DataSource.__str__(self)
return "%s [seed=%s]" % (root, self.seed)
+ @azure_ds_telemetry_reporter
def bounce_network_with_azure_hostname(self):
# When using cloud-init to provision, we have to set the hostname from
# the metadata and "bounce" the network to force DDNS to update via
@@ -315,6 +321,7 @@ class DataSourceAzure(sources.DataSource):
util.logexc(LOG, "handling set_hostname failed")
return False
+ @azure_ds_telemetry_reporter
def get_metadata_from_agent(self):
temp_hostname = self.metadata.get('local-hostname')
agent_cmd = self.ds_cfg['agent_command']
@@ -344,15 +351,18 @@ class DataSourceAzure(sources.DataSource):
LOG.debug("ssh authentication: "
"using fingerprint from fabirc")
- # wait very long for public SSH keys to arrive
- # https://bugs.launchpad.net/cloud-init/+bug/1717611
- missing = util.log_time(logfunc=LOG.debug,
- msg="waiting for SSH public key files",
- func=util.wait_for_files,
- args=(fp_files, 900))
-
- if len(missing):
- LOG.warning("Did not find files, but going on: %s", missing)
+ with events.ReportEventStack(
+ name="waiting-for-ssh-public-key",
+ description="wait for agents to retrieve ssh keys",
+ parent=azure_ds_reporter):
+ # wait very long for public SSH keys to arrive
+ # https://bugs.launchpad.net/cloud-init/+bug/1717611
+ missing = util.log_time(logfunc=LOG.debug,
+ msg="waiting for SSH public key files",
+ func=util.wait_for_files,
+ args=(fp_files, 900))
+ if len(missing):
+ LOG.warning("Did not find files, but going on: %s", missing)
metadata = {}
metadata['public-keys'] = key_value or pubkeys_from_crt_files(fp_files)
@@ -366,6 +376,7 @@ class DataSourceAzure(sources.DataSource):
subplatform_type = 'seed-dir'
return '%s (%s)' % (subplatform_type, self.seed)
+ @azure_ds_telemetry_reporter
def crawl_metadata(self):
"""Walk all instance metadata sources returning a dict on success.
@@ -467,6 +478,7 @@ class DataSourceAzure(sources.DataSource):
super(DataSourceAzure, self).clear_cached_attrs(attr_defaults)
self._metadata_imds = sources.UNSET
+ @azure_ds_telemetry_reporter
def _get_data(self):
"""Crawl and process datasource metadata caching metadata as attrs.
@@ -513,6 +525,7 @@ class DataSourceAzure(sources.DataSource):
# quickly (local check only) if self.instance_id is still valid
return sources.instance_id_matches_system_uuid(self.get_instance_id())
+ @azure_ds_telemetry_reporter
def setup(self, is_new_instance):
if self._negotiated is False:
LOG.debug("negotiating for %s (new_instance=%s)",
@@ -580,6 +593,7 @@ class DataSourceAzure(sources.DataSource):
if nl_sock:
nl_sock.close()
+ @azure_ds_telemetry_reporter
def _report_ready(self, lease):
"""Tells the fabric provisioning has completed """
try:
@@ -617,9 +631,14 @@ class DataSourceAzure(sources.DataSource):
def _reprovision(self):
"""Initiate the reprovisioning workflow."""
contents = self._poll_imds()
- md, ud, cfg = read_azure_ovf(contents)
- return (md, ud, cfg, {'ovf-env.xml': contents})
-
+ with events.ReportEventStack(
+ name="reprovisioning-read-azure-ovf",
+ description="read azure ovf during reprovisioning",
+ parent=azure_ds_reporter):
+ md, ud, cfg = read_azure_ovf(contents)
+ return (md, ud, cfg, {'ovf-env.xml': contents})
+
+ @azure_ds_telemetry_reporter
def _negotiate(self):
"""Negotiate with fabric and return data from it.
@@ -652,6 +671,7 @@ class DataSourceAzure(sources.DataSource):
util.del_file(REPROVISION_MARKER_FILE)
return fabric_data
+ @azure_ds_telemetry_reporter
def activate(self, cfg, is_new_instance):
address_ephemeral_resize(is_new_instance=is_new_instance,
preserve_ntfs=self.ds_cfg.get(
@@ -690,12 +710,14 @@ def _partitions_on_device(devpath, maxnum=16):
return []
+@azure_ds_telemetry_reporter
def _has_ntfs_filesystem(devpath):
ntfs_devices = util.find_devs_with("TYPE=ntfs", no_cache=True)
LOG.debug('ntfs_devices found = %s', ntfs_devices)
return os.path.realpath(devpath) in ntfs_devices
+@azure_ds_telemetry_reporter
def can_dev_be_reformatted(devpath, preserve_ntfs):
"""Determine if the ephemeral drive at devpath should be reformatted.
@@ -744,43 +766,59 @@ def can_dev_be_reformatted(devpath, preserve_ntfs):
(cand_part, cand_path, devpath))
return False, msg
+ @azure_ds_telemetry_reporter
def count_files(mp):
ignored = set(['dataloss_warning_readme.txt'])
return len([f for f in os.listdir(mp) if f.lower() not in ignored])
bmsg = ('partition %s (%s) on device %s was ntfs formatted' %
(cand_part, cand_path, devpath))
- try:
- file_count = util.mount_cb(cand_path, count_files, mtype="ntfs",
- update_env_for_mount={'LANG': 'C'})
- except util.MountFailedError as e:
- if "unknown filesystem type 'ntfs'" in str(e):
- return True, (bmsg + ' but this system cannot mount NTFS,'
- ' assuming there are no important files.'
- ' Formatting allowed.')
- return False, bmsg + ' but mount of %s failed: %s' % (cand_part, e)
-
- if file_count != 0:
- LOG.warning("it looks like you're using NTFS on the ephemeral disk, "
- 'to ensure that filesystem does not get wiped, set '
- '%s.%s in config', '.'.join(DS_CFG_PATH),
- DS_CFG_KEY_PRESERVE_NTFS)
- return False, bmsg + ' but had %d files on it.' % file_count
+
+ with events.ReportEventStack(
+ name="mount-ntfs-and-count",
+ description="mount-ntfs-and-count",
+ parent=azure_ds_reporter) as evt:
+ try:
+ file_count = util.mount_cb(cand_path, count_files, mtype="ntfs",
+ update_env_for_mount={'LANG': 'C'})
+ except util.MountFailedError as e:
+ evt.description = "cannot mount ntfs"
+ if "unknown filesystem type 'ntfs'" in str(e):
+ return True, (bmsg + ' but this system cannot mount NTFS,'
+ ' assuming there are no important files.'
+ ' Formatting allowed.')
+ return False, bmsg + ' but mount of %s failed: %s' % (cand_part, e)
+
+ if file_count != 0:
+ evt.description = "mounted and counted %d files" % file_count
+ LOG.warning("it looks like you're using NTFS on the ephemeral"
+ " disk, to ensure that filesystem does not get wiped,"
+ " set %s.%s in config", '.'.join(DS_CFG_PATH),
+ DS_CFG_KEY_PRESERVE_NTFS)
+ return False, bmsg + ' but had %d files on it.' % file_count
return True, bmsg + ' and had no important files. Safe for reformatting.'
+@azure_ds_telemetry_reporter
def address_ephemeral_resize(devpath=RESOURCE_DISK_PATH, maxwait=120,
is_new_instance=False, preserve_ntfs=False):
# wait for ephemeral disk to come up
naplen = .2
- missing = util.wait_for_files([devpath], maxwait=maxwait, naplen=naplen,
- log_pre="Azure ephemeral disk: ")
-
- if missing:
- LOG.warning("ephemeral device '%s' did not appear after %d seconds.",
- devpath, maxwait)
- return
+ with events.ReportEventStack(
+ name="wait-for-ephemeral-disk",
+ description="wait for ephemeral disk",
+ parent=azure_ds_reporter):
+ missing = util.wait_for_files([devpath],
+ maxwait=maxwait,
+ naplen=naplen,
+ log_pre="Azure ephemeral disk: ")
+
+ if missing:
+ LOG.warning("ephemeral device '%s' did"
+ " not appear after %d seconds.",
+ devpath, maxwait)
+ return
result = False
msg = None
@@ -808,6 +846,7 @@ def address_ephemeral_resize(devpath=RESOURCE_DISK_PATH, maxwait=120,
return
+@azure_ds_telemetry_reporter
def perform_hostname_bounce(hostname, cfg, prev_hostname):
# set the hostname to 'hostname' if it is not already set to that.
# then, if policy is not off, bounce the interface using command
@@ -843,6 +882,7 @@ def perform_hostname_bounce(hostname, cfg, prev_hostname):
return True
+@azure_ds_telemetry_reporter
def crtfile_to_pubkey(fname, data=None):
pipeline = ('openssl x509 -noout -pubkey < "$0" |'
'ssh-keygen -i -m PKCS8 -f /dev/stdin')
@@ -851,6 +891,7 @@ def crtfile_to_pubkey(fname, data=None):
return out.rstrip()
+@azure_ds_telemetry_reporter
def pubkeys_from_crt_files(flist):
pubkeys = []
errors = []
@@ -866,6 +907,7 @@ def pubkeys_from_crt_files(flist):
return pubkeys
+@azure_ds_telemetry_reporter
def write_files(datadir, files, dirmode=None):
def _redact_password(cnt, fname):
@@ -893,6 +935,7 @@ def write_files(datadir, files, dirmode=None):
util.write_file(filename=fname, content=content, mode=0o600)
+@azure_ds_telemetry_reporter
def invoke_agent(cmd):
# this is a function itself to simplify patching it for test
if cmd:
@@ -912,6 +955,7 @@ def find_child(node, filter_func):
return ret
+@azure_ds_telemetry_reporter
def load_azure_ovf_pubkeys(sshnode):
# This parses a 'SSH' node formatted like below, and returns
# an array of dicts.
@@ -964,6 +1008,7 @@ def load_azure_ovf_pubkeys(sshnode):
return found
+@azure_ds_telemetry_reporter
def read_azure_ovf(contents):
try:
dom = minidom.parseString(contents)
@@ -1064,6 +1109,7 @@ def read_azure_ovf(contents):
return (md, ud, cfg)
+@azure_ds_telemetry_reporter
def _extract_preprovisioned_vm_setting(dom):
"""Read the preprovision flag from the ovf. It should not
exist unless true."""
@@ -1092,6 +1138,7 @@ def encrypt_pass(password, salt_id="$6$"):
return crypt.crypt(password, salt_id + util.rand_str(strlen=16))
+@azure_ds_telemetry_reporter
def _check_freebsd_cdrom(cdrom_dev):
"""Return boolean indicating path to cdrom device has content."""
try:
@@ -1103,6 +1150,7 @@ def _check_freebsd_cdrom(cdrom_dev):
return False
+@azure_ds_telemetry_reporter
def _get_random_seed(source=PLATFORM_ENTROPY_SOURCE):
"""Return content random seed file if available, otherwise,
return None."""
@@ -1126,6 +1174,7 @@ def _get_random_seed(source=PLATFORM_ENTROPY_SOURCE):
return seed
+@azure_ds_telemetry_reporter
def list_possible_azure_ds_devs():
devlist = []
if util.is_FreeBSD():
@@ -1140,6 +1189,7 @@ def list_possible_azure_ds_devs():
return devlist
+@azure_ds_telemetry_reporter
def load_azure_ds_dir(source_dir):
ovf_file = os.path.join(source_dir, "ovf-env.xml")
@@ -1162,47 +1212,54 @@ def parse_network_config(imds_metadata):
@param: imds_metadata: Dict of content read from IMDS network service.
@return: Dictionary containing network version 2 standard configuration.
"""
- if imds_metadata != sources.UNSET and imds_metadata:
- netconfig = {'version': 2, 'ethernets': {}}
- LOG.debug('Azure: generating network configuration from IMDS')
- network_metadata = imds_metadata['network']
- for idx, intf in enumerate(network_metadata['interface']):
- nicname = 'eth{idx}'.format(idx=idx)
- dev_config = {}
- for addr4 in intf['ipv4']['ipAddress']:
- privateIpv4 = addr4['privateIpAddress']
- if privateIpv4:
- if dev_config.get('dhcp4', False):
- # Append static address config for nic > 1
- netPrefix = intf['ipv4']['subnet'][0].get(
- 'prefix', '24')
- if not dev_config.get('addresses'):
- dev_config['addresses'] = []
- dev_config['addresses'].append(
- '{ip}/{prefix}'.format(
- ip=privateIpv4, prefix=netPrefix))
- else:
- dev_config['dhcp4'] = True
- for addr6 in intf['ipv6']['ipAddress']:
- privateIpv6 = addr6['privateIpAddress']
- if privateIpv6:
- dev_config['dhcp6'] = True
- break
- if dev_config:
- mac = ':'.join(re.findall(r'..', intf['macAddress']))
- dev_config.update(
- {'match': {'macaddress': mac.lower()},
- 'set-name': nicname})
- netconfig['ethernets'][nicname] = dev_config
- else:
- blacklist = ['mlx4_core']
- LOG.debug('Azure: generating fallback configuration')
- # generate a network config, blacklist picking mlx4_core devs
- netconfig = net.generate_fallback_config(
- blacklist_drivers=blacklist, config_driver=True)
- return netconfig
+ with events.ReportEventStack(
+ name="parse_network_config",
+ description="",
+ parent=azure_ds_reporter) as evt:
+ if imds_metadata != sources.UNSET and imds_metadata:
+ netconfig = {'version': 2, 'ethernets': {}}
+ LOG.debug('Azure: generating network configuration from IMDS')
+ network_metadata = imds_metadata['network']
+ for idx, intf in enumerate(network_metadata['interface']):
+ nicname = 'eth{idx}'.format(idx=idx)
+ dev_config = {}
+ for addr4 in intf['ipv4']['ipAddress']:
+ privateIpv4 = addr4['privateIpAddress']
+ if privateIpv4:
+ if dev_config.get('dhcp4', False):
+ # Append static address config for nic > 1
+ netPrefix = intf['ipv4']['subnet'][0].get(
+ 'prefix', '24')
+ if not dev_config.get('addresses'):
+ dev_config['addresses'] = []
+ dev_config['addresses'].append(
+ '{ip}/{prefix}'.format(
+ ip=privateIpv4, prefix=netPrefix))
+ else:
+ dev_config['dhcp4'] = True
+ for addr6 in intf['ipv6']['ipAddress']:
+ privateIpv6 = addr6['privateIpAddress']
+ if privateIpv6:
+ dev_config['dhcp6'] = True
+ break
+ if dev_config:
+ mac = ':'.join(re.findall(r'..', intf['macAddress']))
+ dev_config.update(
+ {'match': {'macaddress': mac.lower()},
+ 'set-name': nicname})
+ netconfig['ethernets'][nicname] = dev_config
+ evt.description = "network config from imds"
+ else:
+ blacklist = ['mlx4_core']
+ LOG.debug('Azure: generating fallback configuration')
+ # generate a network config, blacklist picking mlx4_core devs
+ netconfig = net.generate_fallback_config(
+ blacklist_drivers=blacklist, config_driver=True)
+ evt.description = "network config from fallback"
+ return netconfig
+@azure_ds_telemetry_reporter
def get_metadata_from_imds(fallback_nic, retries):
"""Query Azure's network metadata service, returning a dictionary.
@@ -1227,6 +1284,7 @@ def get_metadata_from_imds(fallback_nic, retries):
return util.log_time(**kwargs)
+@azure_ds_telemetry_reporter
def _get_metadata_from_imds(retries):
url = IMDS_URL + "instance?api-version=2017-12-01"
@@ -1246,6 +1304,7 @@ def _get_metadata_from_imds(retries):
return {}
+@azure_ds_telemetry_reporter
def maybe_remove_ubuntu_network_config_scripts(paths=None):
"""Remove Azure-specific ubuntu network config for non-primary nics.
@@ -1283,14 +1342,20 @@ def maybe_remove_ubuntu_network_config_scripts(paths=None):
def _is_platform_viable(seed_dir):
- """Check platform environment to report if this datasource may run."""
- asset_tag = util.read_dmi_data('chassis-asset-tag')
- if asset_tag == AZURE_CHASSIS_ASSET_TAG:
- return True
- LOG.debug("Non-Azure DMI asset tag '%s' discovered.", asset_tag)
- if os.path.exists(os.path.join(seed_dir, 'ovf-env.xml')):
- return True
- return False
+ with events.ReportEventStack(
+ name="check-platform-viability",
+ description="found azure asset tag",
+ parent=azure_ds_reporter) as evt:
+
+ """Check platform environment to report if this datasource may run."""
+ asset_tag = util.read_dmi_data('chassis-asset-tag')
+ if asset_tag == AZURE_CHASSIS_ASSET_TAG:
+ return True
+ LOG.debug("Non-Azure DMI asset tag '%s' discovered.", asset_tag)
+ evt.description = "Non-Azure DMI asset tag '%s' discovered.", asset_tag
+ if os.path.exists(os.path.join(seed_dir, 'ovf-env.xml')):
+ return True
+ return False
class BrokenAzureDataSource(Exception):
diff --git a/cloudinit/sources/helpers/azure.py b/cloudinit/sources/helpers/azure.py
old mode 100644
new mode 100755
index 2829dd2..d3af05e
--- a/cloudinit/sources/helpers/azure.py
+++ b/cloudinit/sources/helpers/azure.py
@@ -16,10 +16,27 @@ from xml.etree import ElementTree
from cloudinit import url_helper
from cloudinit import util
+from cloudinit.reporting import events
LOG = logging.getLogger(__name__)
+azure_ds_reporter = events.ReportEventStack(
+ name="azure-ds",
+ description="initialize reporter for azure ds",
+ reporting_enabled=True)
+
+
+def azure_ds_telemetry_reporter(func):
+ def impl(*args, **kwargs):
+ with events.ReportEventStack(
+ name=func.__name__,
+ description=func.__name__,
+ parent=azure_ds_reporter):
+ return func(*args, **kwargs)
+ return impl
+
+
@contextmanager
def cd(newdir):
prevdir = os.getcwd()
@@ -119,6 +136,7 @@ class OpenSSLManager(object):
def clean_up(self):
util.del_dir(self.tmpdir)
+ @azure_ds_telemetry_reporter
def generate_certificate(self):
LOG.debug('Generating certificate for communication with fabric...')
if self.certificate is not None:
@@ -139,17 +157,20 @@ class OpenSSLManager(object):
LOG.debug('New certificate generated.')
@staticmethod
+ @azure_ds_telemetry_reporter
def _run_x509_action(action, cert):
cmd = ['openssl', 'x509', '-noout', action]
result, _ = util.subp(cmd, data=cert)
return result
+ @azure_ds_telemetry_reporter
def _get_ssh_key_from_cert(self, certificate):
pub_key = self._run_x509_action('-pubkey', certificate)
keygen_cmd = ['ssh-keygen', '-i', '-m', 'PKCS8', '-f', '/dev/stdin']
ssh_key, _ = util.subp(keygen_cmd, data=pub_key)
return ssh_key
+ @azure_ds_telemetry_reporter
def _get_fingerprint_from_cert(self, certificate):
"""openssl x509 formats fingerprints as so:
'SHA1 Fingerprint=07:3E:19:D1:4D:1C:79:92:24:C6:A0:FD:8D:DA:\
@@ -163,6 +184,7 @@ class OpenSSLManager(object):
octets = raw_fp[eq+1:-1].split(':')
return ''.join(octets)
+ @azure_ds_telemetry_reporter
def _decrypt_certs_from_xml(self, certificates_xml):
"""Decrypt the certificates XML document using the our private key;
return the list of certs and private keys contained in the doc.
@@ -185,6 +207,7 @@ class OpenSSLManager(object):
shell=True, data=b'\n'.join(lines))
return out
+ @azure_ds_telemetry_reporter
def parse_certificates(self, certificates_xml):
"""Given the Certificates XML document, return a dictionary of
fingerprints and associated SSH keys derived from the certs."""
@@ -265,11 +288,13 @@ class WALinuxAgentShim(object):
return socket.inet_ntoa(packed_bytes)
@staticmethod
+ @azure_ds_telemetry_reporter
def _networkd_get_value_from_leases(leases_d=None):
return dhcp.networkd_get_option_from_leases(
'OPTION_245', leases_d=leases_d)
@staticmethod
+ @azure_ds_telemetry_reporter
def _get_value_from_leases_file(fallback_lease_file):
leases = []
content = util.load_file(fallback_lease_file)
@@ -287,6 +312,7 @@ class WALinuxAgentShim(object):
return leases[-1]
@staticmethod
+ @azure_ds_telemetry_reporter
def _load_dhclient_json():
dhcp_options = {}
hooks_dir = WALinuxAgentShim._get_hooks_dir()
@@ -305,6 +331,7 @@ class WALinuxAgentShim(object):
return dhcp_options
@staticmethod
+ @azure_ds_telemetry_reporter
def _get_value_from_dhcpoptions(dhcp_options):
if dhcp_options is None:
return None
@@ -318,6 +345,7 @@ class WALinuxAgentShim(object):
return _value
@staticmethod
+ @azure_ds_telemetry_reporter
def find_endpoint(fallback_lease_file=None, dhcp245=None):
value = None
if dhcp245 is not None:
@@ -352,6 +380,7 @@ class WALinuxAgentShim(object):
LOG.debug('Azure endpoint found at %s', endpoint_ip_address)
return endpoint_ip_address
+ @azure_ds_telemetry_reporter
def register_with_azure_and_fetch_data(self, pubkey_info=None):
if self.openssl_manager is None:
self.openssl_manager = OpenSSLManager()
@@ -404,6 +433,7 @@ class WALinuxAgentShim(object):
return keys
+ @azure_ds_telemetry_reporter
def _report_ready(self, goal_state, http_client):
LOG.debug('Reporting ready to Azure fabric.')
document = self.REPORT_READY_XML_TEMPLATE.format(
@@ -419,6 +449,7 @@ class WALinuxAgentShim(object):
LOG.info('Reported ready to Azure fabric.')
+@azure_ds_telemetry_reporter
def get_metadata_from_fabric(fallback_lease_file=None, dhcp_opts=None,
pubkey_info=None):
shim = WALinuxAgentShim(fallback_lease_file=fallback_lease_file,
--
1.8.3.1
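The decorator added by this patch wraps each datasource step in a reporting context named after the function. A standalone sketch of the same pattern, with the reporting stack stubbed out by a plain context manager (the names here are illustrative stand-ins, not cloud-init's API):

```python
from contextlib import contextmanager

EVENTS = []  # stand-in for the reporting backend


@contextmanager
def report_event_stack(name, description):
    # minimal stand-in for cloudinit.reporting.events.ReportEventStack
    EVENTS.append(('start', name))
    try:
        yield
    finally:
        EVENTS.append(('finish', name))


def telemetry_reporter(func):
    # same shape as the azure_ds_telemetry_reporter added by the patch:
    # run the wrapped call inside a reporting context named after it
    def impl(*args, **kwargs):
        with report_event_stack(name=func.__name__,
                                description=func.__name__):
            return func(*args, **kwargs)
    return impl


@telemetry_reporter
def crawl_metadata():
    return {'ok': True}


result = crawl_metadata()
```

Every decorated call thus emits a matching start/finish event pair even when the wrapped function raises, which is what makes the per-step telemetry in the Azure datasource cheap to add.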


@ -0,0 +1,41 @@
From 251836a62eb3061b8d26177fd5997a96dccec21b Mon Sep 17 00:00:00 2001
From: Eduardo Otubo <otubo@redhat.com>
Date: Thu, 28 May 2020 08:44:06 +0200
Subject: [PATCH 3/4] Enable ssh_deletekeys by default
RH-Author: Eduardo Otubo <otubo@redhat.com>
Message-id: <20200317091705.15715-1-otubo@redhat.com>
Patchwork-id: 94365
O-Subject: [RHEL-7.9/RHEL-8.2.0 cloud-init PATCH] Enable ssh_deletekeys by default
Bugzilla: 1814152
RH-Acked-by: Mohammed Gamal <mgamal@redhat.com>
RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com>
The configuration option ssh_deletekeys will trigger the generation
of new ssh keys for every new instance deployed.
x-downstream-only: yes
resolves: rhbz#1814152
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
---
rhel/cloud.cfg | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/rhel/cloud.cfg b/rhel/cloud.cfg
index 82e8bf6..9ecba21 100644
--- a/rhel/cloud.cfg
+++ b/rhel/cloud.cfg
@@ -6,7 +6,7 @@ ssh_pwauth: 0
mount_default_fields: [~, ~, 'auto', 'defaults,nofail,x-systemd.requires=cloud-init.service', '0', '2']
resize_rootfs_tmp: /dev
-ssh_deletekeys: 0
+ssh_deletekeys: 1
ssh_genkeytypes: ~
syslog_fix_perms: ~
disable_vmware_customization: false
--
1.8.3.1


@ -1,107 +0,0 @@
From 4ab5a61c0f8378889d2a6b78710f49b0fb71d3c9 Mon Sep 17 00:00:00 2001
From: Miroslav Rezanina <mrezanin@redhat.com>
Date: Fri, 22 Nov 2019 07:46:38 +0100
Subject: [PATCH 1/2] Fix for network configuration not persisting after reboot
RH-Author: Eduardo Otubo <otubo@redhat.com>
Message-id: <20190906121211.23172-1-otubo@redhat.com>
Patchwork-id: 90300
O-Subject: [RHEL-7.8/RHEL-8.1.0 cloud-init PATCH] Fix for network configuration not persisting after reboot
Bugzilla: 1706482
RH-Acked-by: Mohammed Gamal <mgamal@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com>
The configuration fails to persist after reboot for several reasons, all
of which are fixed by this patch:
1) The rpm package doesn't include the systemd-generator and
ds-identify. The systemd-generator is called early in the boot process
and calls ds-identify to check whether any Data Source is available in
the current boot. In the current use case, the Data Source is removed
from the VM on the second boot, which means cloud-init should disable
itself in order to keep the configuration it applied on the first boot.
2) Even after adding those scripts, cloud-init was still being
executed and the configurations were being lost. The reason for this is
that the cloud-init systemd units had a wrong dependency:
WantedBy: multi-user.target
which would start them on every boot regardless of the result of
ds-identify. The fix is to change the dependency of each systemd unit to
cloud-init.target, which is the main cloud-init target that is enabled
(or, in this case, disabled) by ds-identify. The file cloud-init.target
was also missing from the rpm package.
After adding both scripts and the main cloud-init systemd target, and
adjusting the systemd dependencies, the configuration persists across
reboots and shutdowns.
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
---
redhat/cloud-init.spec.template | 15 +++++++++++++++
rhel/systemd/cloud-config.service | 2 +-
rhel/systemd/cloud-final.service | 2 +-
rhel/systemd/cloud-init-local.service | 2 +-
rhel/systemd/cloud-init.service | 2 +-
rhel/systemd/cloud-init.target | 7 +++++++
6 files changed, 26 insertions(+), 4 deletions(-)
create mode 100644 rhel/systemd/cloud-init.target
diff --git a/rhel/systemd/cloud-config.service b/rhel/systemd/cloud-config.service
index 12ca9df..f3dcd4b 100644
--- a/rhel/systemd/cloud-config.service
+++ b/rhel/systemd/cloud-config.service
@@ -15,4 +15,4 @@ TimeoutSec=0
StandardOutput=journal+console
[Install]
-WantedBy=multi-user.target
+WantedBy=cloud-init.target
diff --git a/rhel/systemd/cloud-final.service b/rhel/systemd/cloud-final.service
index 32a83d8..739b7e3 100644
--- a/rhel/systemd/cloud-final.service
+++ b/rhel/systemd/cloud-final.service
@@ -16,4 +16,4 @@ KillMode=process
StandardOutput=journal+console
[Install]
-WantedBy=multi-user.target
+WantedBy=cloud-init.target
diff --git a/rhel/systemd/cloud-init-local.service b/rhel/systemd/cloud-init-local.service
index 656eddb..8f9f6c9 100644
--- a/rhel/systemd/cloud-init-local.service
+++ b/rhel/systemd/cloud-init-local.service
@@ -28,4 +28,4 @@ TimeoutSec=0
StandardOutput=journal+console
[Install]
-WantedBy=multi-user.target
+WantedBy=cloud-init.target
diff --git a/rhel/systemd/cloud-init.service b/rhel/systemd/cloud-init.service
index 68fc5f1..d0023a0 100644
--- a/rhel/systemd/cloud-init.service
+++ b/rhel/systemd/cloud-init.service
@@ -22,4 +22,4 @@ TimeoutSec=0
StandardOutput=journal+console
[Install]
-WantedBy=multi-user.target
+WantedBy=cloud-init.target
diff --git a/rhel/systemd/cloud-init.target b/rhel/systemd/cloud-init.target
new file mode 100644
index 0000000..083c3b6
--- /dev/null
+++ b/rhel/systemd/cloud-init.target
@@ -0,0 +1,7 @@
+# cloud-init target is enabled by cloud-init-generator
+# To disable it you can either:
+# a.) boot with kernel cmdline of 'cloud-init=disabled'
+# b.) touch a file /etc/cloud/cloud-init.disabled
+[Unit]
+Description=Cloud-init target
+After=multi-user.target
--
1.8.3.1


@ -0,0 +1,40 @@
From 301b1770d3e2580c3ee168261a9a97d143cc5f59 Mon Sep 17 00:00:00 2001
From: Eduardo Otubo <otubo@redhat.com>
Date: Mon, 1 Jun 2020 11:58:06 +0200
Subject: [PATCH] Make cloud-init.service execute after network is up
RH-Author: Eduardo Otubo <otubo@redhat.com>
Message-id: <20200526090804.2047-1-otubo@redhat.com>
Patchwork-id: 96809
O-Subject: [RHEL-8.2.1 cloud-init PATCH] Make cloud-init.service execute after network is up
Bugzilla: 1803928
RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
cloud-init.service needs to wait until the network is fully up before
it continues executing and applying its configuration.
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
x-downstream-only: yes
Resolves: rhbz#1831646
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
---
rhel/systemd/cloud-init.service | 1 +
1 file changed, 1 insertion(+)
diff --git a/rhel/systemd/cloud-init.service b/rhel/systemd/cloud-init.service
index d0023a0..0b3d796 100644
--- a/rhel/systemd/cloud-init.service
+++ b/rhel/systemd/cloud-init.service
@@ -5,6 +5,7 @@ Wants=sshd-keygen.service
Wants=sshd.service
After=cloud-init-local.service
After=NetworkManager.service network.service
+After=NetworkManager-wait-online.service
Before=network-online.target
Before=sshd-keygen.service
Before=sshd.service
--
1.8.3.1


@ -0,0 +1,52 @@
From 0422ba0e773d1a8257a3f2bf3db05f3bc7917eb7 Mon Sep 17 00:00:00 2001
From: Eduardo Otubo <otubo@redhat.com>
Date: Thu, 28 May 2020 08:44:08 +0200
Subject: [PATCH 4/4] Remove race condition between cloud-init and
NetworkManager
RH-Author: Eduardo Otubo <otubo@redhat.com>
Message-id: <20200327121911.17699-1-otubo@redhat.com>
Patchwork-id: 94453
O-Subject: [RHEL-7.9/RHEL-8.2.0 cloud-init PATCHv2] Remove race condition between cloud-init and NetworkManager
Bugzilla: 1840648
RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Acked-by: Cathy Avery <cavery@redhat.com>
cloud-init service is set to start before NetworkManager service starts,
but this does not avoid a race condition between them. NetworkManager
starts before cloud-init can write `dns=none' to the file:
/etc/NetworkManager/conf.d/99-cloud-init.conf. This way NetworkManager
doesn't read the configuration and erases all resolv.conf values upon
shutdown. On the next reboot neither cloud-init or NetworkManager will
write anything to resolv.conf, leaving it blank.
This patch introduces a NM reload (try-reload-or-restart) at the end of cloud-init
startup so it won't erase resolv.conf upon the first shutdown.
x-downstream-only: yes
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
---
rhel/systemd/cloud-final.service | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/rhel/systemd/cloud-final.service b/rhel/systemd/cloud-final.service
index f303483..05add07 100644
--- a/rhel/systemd/cloud-final.service
+++ b/rhel/systemd/cloud-final.service
@@ -11,8 +11,8 @@ ExecStart=/usr/bin/cloud-init modules --mode=final
RemainAfterExit=yes
TimeoutSec=0
KillMode=process
-ExecStartPost=/bin/echo "try restart NetworkManager.service"
-ExecStartPost=/usr/bin/systemctl try-restart NetworkManager.service
+ExecStartPost=/bin/echo "trying to reload or restart NetworkManager.service"
+ExecStartPost=/usr/bin/systemctl try-reload-or-restart NetworkManager.service
# Output needs to appear in instance console output
StandardOutput=journal+console
--
1.8.3.1


@ -1,86 +0,0 @@
From a00bafbfc54b010d7bbe95536929c678288d4c80 Mon Sep 17 00:00:00 2001
From: Eduardo Otubo <otubo@redhat.com>
Date: Mon, 1 Jul 2019 11:22:52 +0200
Subject: [PATCH 1/2] Revert: azure: ensure that networkmanager hook script
runs
RH-Author: Eduardo Otubo <otubo@redhat.com>
Message-id: <20190701112252.32674-1-otubo@redhat.com>
Patchwork-id: 89162
O-Subject: [rhel-8.1.0 cloud-init PATCH] Revert: azure: ensure that networkmanager hook script runs
Bugzilla: 1692914
RH-Acked-by: Mohammed Gamal <mgamal@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
This patch reverts the commit:
commit c48497435e8195dbd87262c2f00e484e63fe3343
Author: Lars Kellogg-Stedman <lars@redhat.com>
Date: Thu Jun 15 12:20:39 2017 -0400
azure: ensure that networkmanager hook script runs
The networkmanager hook script was failing to run due to the changes
we made to resolve rhbz#1440831. This corrects the regression by
allowing the NM hook script to run regardless of whether or not
cloud-init is "enabled".
Resolves: rhbz#1460206
X-downstream-only: true
Resolves: rhbz#1692914
X-downstream-only: true
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
---
tools/hook-dhclient | 3 ++-
tools/hook-network-manager | 3 ++-
tools/hook-rhel.sh | 3 ++-
3 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/tools/hook-dhclient b/tools/hook-dhclient
index 181cd51..02122f3 100755
--- a/tools/hook-dhclient
+++ b/tools/hook-dhclient
@@ -13,7 +13,8 @@ is_azure() {
}
is_enabled() {
- # only execute hooks if cloud-init is running on azure
+ # only execute hooks if cloud-init is enabled and on azure
+ [ -e /run/cloud-init/enabled ] || return 1
is_azure
}
diff --git a/tools/hook-network-manager b/tools/hook-network-manager
index 1d52cad..67d9044 100755
--- a/tools/hook-network-manager
+++ b/tools/hook-network-manager
@@ -13,7 +13,8 @@ is_azure() {
}
is_enabled() {
- # only execute hooks if cloud-init running on azure
+ # only execute hooks if cloud-init is enabled and on azure
+ [ -e /run/cloud-init/enabled ] || return 1
is_azure
}
diff --git a/tools/hook-rhel.sh b/tools/hook-rhel.sh
index d75767e..513a551 100755
--- a/tools/hook-rhel.sh
+++ b/tools/hook-rhel.sh
@@ -13,7 +13,8 @@ is_azure() {
}
is_enabled() {
- # only execute hooks if cloud-init is running on azure
+ # only execute hooks if cloud-init is enabled and on azure
+ [ -e /run/cloud-init/enabled ] || return 1
is_azure
}
--
1.8.3.1


@ -1,129 +0,0 @@
From 2604984fc44dde89dac847de9f95011713d448ff Mon Sep 17 00:00:00 2001
From: Eduardo Otubo <otubo@redhat.com>
Date: Wed, 29 May 2019 13:41:49 +0200
Subject: [PATCH 5/5] cc_mounts: check if mount -a on no-change fstab path
RH-Author: Eduardo Otubo <otubo@redhat.com>
Message-id: <20190529134149.842-6-otubo@redhat.com>
Patchwork-id: 88269
O-Subject: [RHEL-8.0.1/RHEL-8.1.0 cloud-init PATCHv2 5/5] cc_mounts: check if mount -a on no-change fstab path
Bugzilla: 1691986
RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com>
RH-Acked-by: Cathy Avery <cavery@redhat.com>
From: "Jason Zions (MSFT)" <jasonzio@microsoft.com>
commit acc25d8d7d603313059ac35b4253b504efc560a9
Author: Jason Zions (MSFT) <jasonzio@microsoft.com>
Date: Wed May 8 22:47:07 2019 +0000
cc_mounts: check if mount -a on no-change fstab path
Under some circumstances, cc_disk_setup may reformat volumes which
already appear in /etc/fstab (e.g. Azure ephemeral drive is reformatted
from NTFS to ext4 after service-heal). Normally, cc_mounts only calls
mount -a if it altered /etc/fstab. With this change, cc_mounts will read
/proc/mounts and verify whether the configured mounts are already mounted
and, if not, raise a flag to request a mount -a. This handles the case where no
changes to fstab occur but a mount -a is required due to change in
underlying device which prevented the .mount unit from running until
after disk was reformatted.
LP: #1825596
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
---
cloudinit/config/cc_mounts.py | 11 ++++++++
.../unittests/test_handler/test_handler_mounts.py | 30 +++++++++++++++++++++-
2 files changed, 40 insertions(+), 1 deletion(-)
diff --git a/cloudinit/config/cc_mounts.py b/cloudinit/config/cc_mounts.py
index 339baba..123ffb8 100644
--- a/cloudinit/config/cc_mounts.py
+++ b/cloudinit/config/cc_mounts.py
@@ -439,6 +439,7 @@ def handle(_name, cfg, cloud, log, _args):
cc_lines = []
needswap = False
+ need_mount_all = False
dirs = []
for line in actlist:
# write 'comment' in the fs_mntops, entry, claiming this
@@ -449,11 +450,18 @@ def handle(_name, cfg, cloud, log, _args):
dirs.append(line[1])
cc_lines.append('\t'.join(line))
+ mount_points = [v['mountpoint'] for k, v in util.mounts().items()
+ if 'mountpoint' in v]
for d in dirs:
try:
util.ensure_dir(d)
except Exception:
util.logexc(log, "Failed to make '%s' config-mount", d)
+ # dirs is list of directories on which a volume should be mounted.
+ # If any of them does not already show up in the list of current
+ # mount points, we will definitely need to do mount -a.
+ if not need_mount_all and d not in mount_points:
+ need_mount_all = True
sadds = [WS.sub(" ", n) for n in cc_lines]
sdrops = [WS.sub(" ", n) for n in fstab_removed]
@@ -473,6 +481,9 @@ def handle(_name, cfg, cloud, log, _args):
log.debug("No changes to /etc/fstab made.")
else:
log.debug("Changes to fstab: %s", sops)
+ need_mount_all = True
+
+ if need_mount_all:
activate_cmds.append(["mount", "-a"])
if uses_systemd:
activate_cmds.append(["systemctl", "daemon-reload"])
diff --git a/tests/unittests/test_handler/test_handler_mounts.py b/tests/unittests/test_handler/test_handler_mounts.py
index 8fea6c2..0fb160b 100644
--- a/tests/unittests/test_handler/test_handler_mounts.py
+++ b/tests/unittests/test_handler/test_handler_mounts.py
@@ -154,7 +154,15 @@ class TestFstabHandling(test_helpers.FilesystemMockingTestCase):
return_value=True)
self.add_patch('cloudinit.config.cc_mounts.util.subp',
- 'mock_util_subp')
+ 'm_util_subp')
+
+ self.add_patch('cloudinit.config.cc_mounts.util.mounts',
+ 'mock_util_mounts',
+ return_value={
+ '/dev/sda1': {'fstype': 'ext4',
+ 'mountpoint': '/',
+ 'opts': 'rw,relatime,discard'
+ }})
self.mock_cloud = mock.Mock()
self.mock_log = mock.Mock()
@@ -230,4 +238,24 @@ class TestFstabHandling(test_helpers.FilesystemMockingTestCase):
fstab_new_content = fd.read()
self.assertEqual(fstab_expected_content, fstab_new_content)
+ def test_no_change_fstab_sets_needs_mount_all(self):
+ '''verify that mount -a is called for unchanged fstab entries not yet mounted'''
+ fstab_original_content = (
+ 'LABEL=cloudimg-rootfs / ext4 defaults 0 0\n'
+ 'LABEL=UEFI /boot/efi vfat defaults 0 0\n'
+ '/dev/vdb /mnt auto defaults,noexec,comment=cloudconfig 0 2\n'
+ )
+ fstab_expected_content = fstab_original_content
+ cc = {'mounts': [
+ ['/dev/vdb', '/mnt', 'auto', 'defaults,noexec']]}
+ with open(cc_mounts.FSTAB_PATH, 'w') as fd:
+ fd.write(fstab_original_content)
+ with open(cc_mounts.FSTAB_PATH, 'r') as fd:
+ fstab_new_content = fd.read()
+ self.assertEqual(fstab_expected_content, fstab_new_content)
+ cc_mounts.handle(None, cc, self.mock_cloud, self.mock_log, [])
+ self.m_util_subp.assert_has_calls([
+ mock.call(['mount', '-a']),
+ mock.call(['systemctl', 'daemon-reload'])])
+
# vi: ts=4 expandtab
--
1.8.3.1
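The decision logic the patch above adds to cc_mounts.handle reduces to a small predicate: mount -a runs if fstab changed, or if any configured mount point is missing from the currently mounted set. A sketch with hypothetical names, not cloud-init's own code:

```python
def needs_mount_all(config_dirs, mounted_points, fstab_changed):
    # mount -a is required when fstab was altered, or when any configured
    # mount point is absent from the currently mounted set (the
    # no-change-to-fstab case this patch handles, e.g. Azure's
    # ephemeral drive reformatted after a service-heal).
    if fstab_changed:
        return True
    return any(d not in mounted_points for d in config_dirs)


# /mnt is configured but not mounted: mount -a is needed even though
# /etc/fstab itself did not change.
needed = needs_mount_all(['/mnt'], {'/', '/boot/efi'}, fstab_changed=False)
```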


@ -0,0 +1,42 @@
From e7a0cd9aa71dfd7715eca4b393db0aa348e05f8f Mon Sep 17 00:00:00 2001
From: jmaloy <jmaloy@redhat.com>
Date: Thu, 28 May 2020 08:43:58 +0200
Subject: [PATCH 1/4] cc_set_password: increase random pwlength from 9 to 20
(#189)
RH-Author: jmaloy <jmaloy@redhat.com>
Message-id: <20200313015002.3297-2-jmaloy@redhat.com>
Patchwork-id: 94253
O-Subject: [RHEL-8.2 cloud-init PATCH 1/1] cc_set_password: increase random pwlength from 9 to 20 (#189)
Bugzilla: 1812171
RH-Acked-by: Eduardo Otubo <eterrell@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
From: Ryan Harper <ryan.harper@canonical.com>
Increasing the bits of security from 52 to 115.
LP: #1860795
(cherry picked from commit 42788bf24a1a0a5421a2d00a7f59b59e38ba1a14)
Signed-off-by: Jon Maloy <jmaloy@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
---
cloudinit/config/cc_set_passwords.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/cloudinit/config/cc_set_passwords.py b/cloudinit/config/cc_set_passwords.py
index c3c5b0f..0742234 100755
--- a/cloudinit/config/cc_set_passwords.py
+++ b/cloudinit/config/cc_set_passwords.py
@@ -236,7 +236,7 @@ def handle(_name, cfg, cloud, log, args):
raise errors[-1]
-def rand_user_password(pwlen=9):
+def rand_user_password(pwlen=20):
return util.rand_str(pwlen, select_from=PW_SET)
--
1.8.3.1
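The "52 to 115" bits figure in the commit message above can be reproduced with a quick calculation. This sketch assumes PW_SET holds 55 symbols (ASCII letters plus digits, minus the ambiguous characters 'loLOI01'), which matches cloud-init's definition at the time:

```python
import math

# Assumed size of cloud-init's PW_SET: 52 letters + 10 digits - 7
# ambiguous characters ('loLOI01') = 55 symbols.
CHARSET_SIZE = 55


def password_entropy_bits(length, charset_size=CHARSET_SIZE):
    # Entropy of a uniformly random password: length * log2(|charset|)
    return length * math.log2(charset_size)


old_bits = password_entropy_bits(9)   # ~52 bits
new_bits = password_entropy_bits(20)  # ~115 bits
```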


@ -0,0 +1,46 @@
From f67f56e85c0fdb1c94527a6a1795bbacd2e6fdb0 Mon Sep 17 00:00:00 2001
From: Eduardo Otubo <otubo@redhat.com>
Date: Wed, 24 Jun 2020 07:34:34 +0200
Subject: [PATCH 4/4] cloud-init.service.tmpl: use "rhel" instead of "redhat"
(#452)
RH-Author: Eduardo Otubo <otubo@redhat.com>
Message-id: <20200623154034.28563-4-otubo@redhat.com>
Patchwork-id: 97784
O-Subject: [RHEL-8.3.0/RHEL-8.2.1 cloud-init PATCH 3/3] cloud-init.service.tmpl: use "rhel" instead of "redhat" (#452)
Bugzilla: 1834173
RH-Acked-by: Cathy Avery <cavery@redhat.com>
RH-Acked-by: Mohammed Gamal <mgamal@redhat.com>
From: Daniel Watkins <oddbloke@ubuntu.com>
commit ddc4c2de1b1e716b31384af92f5356bfc6136944
Author: Daniel Watkins <oddbloke@ubuntu.com>
Date: Tue Jun 23 09:43:04 2020 -0400
cloud-init.service.tmpl: use "rhel" instead of "redhat" (#452)
We use "rhel" consistently everywhere else.
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
---
systemd/cloud-init.service.tmpl | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/systemd/cloud-init.service.tmpl b/systemd/cloud-init.service.tmpl
index 9ad3574..af6d9a8 100644
--- a/systemd/cloud-init.service.tmpl
+++ b/systemd/cloud-init.service.tmpl
@@ -10,7 +10,7 @@ After=systemd-networkd-wait-online.service
{% if variant in ["ubuntu", "unknown", "debian"] %}
After=networking.service
{% endif %}
-{% if variant in ["centos", "fedora", "redhat"] %}
+{% if variant in ["centos", "fedora", "rhel"] %}
After=network.service
After=NetworkManager.service
{% endif %}
--
1.8.3.1


@ -0,0 +1,350 @@
From f6dc3cf39a4884657478a47894ce8a76ec9a72c5 Mon Sep 17 00:00:00 2001
From: Eduardo Otubo <otubo@redhat.com>
Date: Wed, 24 Jun 2020 07:34:29 +0200
Subject: [PATCH 1/4] ec2: Do not log IMDSv2 token values, instead use REDACTED
(#219)
RH-Author: Eduardo Otubo <otubo@redhat.com>
Message-id: <20200505082940.18316-1-otubo@redhat.com>
Patchwork-id: 96264
O-Subject: [RHEL-7.9/RHEL-8.3 cloud-init PATCH] ec2: Do not log IMDSv2 token values, instead use REDACTED (#219)
Bugzilla: 1822343
RH-Acked-by: Cathy Avery <cavery@redhat.com>
RH-Acked-by: Mohammed Gamal <mgamal@redhat.com>
RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Note: There's no RHEL-8.3/cloud-init-19.4 branch yet, but it should be
queued to be applied on top of it when it's created.
commit 87cd040ed8fe7195cbb357ed3bbf53cd2a81436c
Author: Ryan Harper <ryan.harper@canonical.com>
Date: Wed Feb 19 15:01:09 2020 -0600
ec2: Do not log IMDSv2 token values, instead use REDACTED (#219)
Instead of logging the token values, log the headers and replace the actual
values with the string 'REDACTED'. This allows users to examine cloud-init.log
and see that the IMDSv2 token header is being used while avoiding leaving the
token value in the log file itself.
LP: #1863943
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
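The patch below threads a headers_redact list through the EC2 helpers so that url_helper masks those header values before logging. The core idea reduces to masking a copy of the headers dict; redact_headers below is an illustrative name, not the function cloud-init defines:

```python
REDACTED = 'REDACTED'


def redact_headers(headers, redact_keys):
    # Return a copy of the request headers that is safe to log: values
    # of sensitive keys are replaced with the literal string 'REDACTED',
    # while the original dict (used for the real request) is untouched.
    safe = dict(headers)
    for key in redact_keys:
        if key in safe:
            safe[key] = REDACTED
    return safe


headers = {'X-aws-ec2-metadata-token': 'AQAEAExample',
           'User-Agent': 'cloud-init'}
logged = redact_headers(headers, ['X-aws-ec2-metadata-token'])
```

Logging `logged` instead of `headers` shows that the IMDSv2 token header was sent without ever writing the token itself to cloud-init.log.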
---
cloudinit/ec2_utils.py | 12 ++++++++--
cloudinit/sources/DataSourceEc2.py | 35 +++++++++++++++++++----------
cloudinit/url_helper.py | 27 ++++++++++++++++------
tests/unittests/test_datasource/test_ec2.py | 17 ++++++++++++++
4 files changed, 70 insertions(+), 21 deletions(-)
diff --git a/cloudinit/ec2_utils.py b/cloudinit/ec2_utils.py
index 57708c1..34acfe8 100644
--- a/cloudinit/ec2_utils.py
+++ b/cloudinit/ec2_utils.py
@@ -142,7 +142,8 @@ def skip_retry_on_codes(status_codes, _request_args, cause):
def get_instance_userdata(api_version='latest',
metadata_address='http://169.254.169.254',
ssl_details=None, timeout=5, retries=5,
- headers_cb=None, exception_cb=None):
+ headers_cb=None, headers_redact=None,
+ exception_cb=None):
ud_url = url_helper.combine_url(metadata_address, api_version)
ud_url = url_helper.combine_url(ud_url, 'user-data')
user_data = ''
@@ -155,7 +156,8 @@ def get_instance_userdata(api_version='latest',
SKIP_USERDATA_CODES)
response = url_helper.read_file_or_url(
ud_url, ssl_details=ssl_details, timeout=timeout,
- retries=retries, exception_cb=exception_cb, headers_cb=headers_cb)
+ retries=retries, exception_cb=exception_cb, headers_cb=headers_cb,
+ headers_redact=headers_redact)
user_data = response.contents
except url_helper.UrlError as e:
if e.code not in SKIP_USERDATA_CODES:
@@ -169,11 +171,13 @@ def _get_instance_metadata(tree, api_version='latest',
metadata_address='http://169.254.169.254',
ssl_details=None, timeout=5, retries=5,
leaf_decoder=None, headers_cb=None,
+ headers_redact=None,
exception_cb=None):
md_url = url_helper.combine_url(metadata_address, api_version, tree)
caller = functools.partial(
url_helper.read_file_or_url, ssl_details=ssl_details,
timeout=timeout, retries=retries, headers_cb=headers_cb,
+ headers_redact=headers_redact,
exception_cb=exception_cb)
def mcaller(url):
@@ -197,6 +201,7 @@ def get_instance_metadata(api_version='latest',
metadata_address='http://169.254.169.254',
ssl_details=None, timeout=5, retries=5,
leaf_decoder=None, headers_cb=None,
+ headers_redact=None,
exception_cb=None):
# Note, 'meta-data' explicitly has trailing /.
# this is required for CloudStack (LP: #1356855)
@@ -204,6 +209,7 @@ def get_instance_metadata(api_version='latest',
metadata_address=metadata_address,
ssl_details=ssl_details, timeout=timeout,
retries=retries, leaf_decoder=leaf_decoder,
+ headers_redact=headers_redact,
headers_cb=headers_cb,
exception_cb=exception_cb)
@@ -212,12 +218,14 @@ def get_instance_identity(api_version='latest',
metadata_address='http://169.254.169.254',
ssl_details=None, timeout=5, retries=5,
leaf_decoder=None, headers_cb=None,
+ headers_redact=None,
exception_cb=None):
return _get_instance_metadata(tree='dynamic/instance-identity',
api_version=api_version,
metadata_address=metadata_address,
ssl_details=ssl_details, timeout=timeout,
retries=retries, leaf_decoder=leaf_decoder,
+ headers_redact=headers_redact,
headers_cb=headers_cb,
exception_cb=exception_cb)
# vi: ts=4 expandtab
diff --git a/cloudinit/sources/DataSourceEc2.py b/cloudinit/sources/DataSourceEc2.py
index b9f346a..0f2bfef 100644
--- a/cloudinit/sources/DataSourceEc2.py
+++ b/cloudinit/sources/DataSourceEc2.py
@@ -31,6 +31,9 @@ STRICT_ID_DEFAULT = "warn"
API_TOKEN_ROUTE = 'latest/api/token'
API_TOKEN_DISABLED = '_ec2_disable_api_token'
AWS_TOKEN_TTL_SECONDS = '21600'
+AWS_TOKEN_PUT_HEADER = 'X-aws-ec2-metadata-token'
+AWS_TOKEN_REQ_HEADER = AWS_TOKEN_PUT_HEADER + '-ttl-seconds'
+AWS_TOKEN_REDACT = [AWS_TOKEN_PUT_HEADER, AWS_TOKEN_REQ_HEADER]
class CloudNames(object):
@@ -158,7 +161,8 @@ class DataSourceEc2(sources.DataSource):
for api_ver in self.extended_metadata_versions:
url = url_tmpl.format(self.metadata_address, api_ver)
try:
- resp = uhelp.readurl(url=url, headers=headers)
+ resp = uhelp.readurl(url=url, headers=headers,
+ headers_redact=AWS_TOKEN_REDACT)
except uhelp.UrlError as e:
LOG.debug('url %s raised exception %s', url, e)
else:
@@ -180,6 +184,7 @@ class DataSourceEc2(sources.DataSource):
self.identity = ec2.get_instance_identity(
api_version, self.metadata_address,
headers_cb=self._get_headers,
+ headers_redact=AWS_TOKEN_REDACT,
exception_cb=self._refresh_stale_aws_token_cb).get(
'document', {})
return self.identity.get(
@@ -205,7 +210,8 @@ class DataSourceEc2(sources.DataSource):
LOG.debug('Fetching Ec2 IMDSv2 API Token')
url, response = uhelp.wait_for_url(
urls=urls, max_wait=1, timeout=1, status_cb=self._status_cb,
- headers_cb=self._get_headers, request_method=request_method)
+ headers_cb=self._get_headers, request_method=request_method,
+ headers_redact=AWS_TOKEN_REDACT)
if url and response:
self._api_token = response
@@ -252,7 +258,8 @@ class DataSourceEc2(sources.DataSource):
url, _ = uhelp.wait_for_url(
urls=urls, max_wait=url_params.max_wait_seconds,
timeout=url_params.timeout_seconds, status_cb=LOG.warning,
- headers_cb=self._get_headers, request_method=request_method)
+ headers_redact=AWS_TOKEN_REDACT, headers_cb=self._get_headers,
+ request_method=request_method)
if url:
metadata_address = url2base[url]
@@ -420,6 +427,7 @@ class DataSourceEc2(sources.DataSource):
if not self.wait_for_metadata_service():
return {}
api_version = self.get_metadata_api_version()
+ redact = AWS_TOKEN_REDACT
crawled_metadata = {}
if self.cloud_name == CloudNames.AWS:
exc_cb = self._refresh_stale_aws_token_cb
@@ -429,14 +437,17 @@ class DataSourceEc2(sources.DataSource):
try:
crawled_metadata['user-data'] = ec2.get_instance_userdata(
api_version, self.metadata_address,
- headers_cb=self._get_headers, exception_cb=exc_cb_ud)
+ headers_cb=self._get_headers, headers_redact=redact,
+ exception_cb=exc_cb_ud)
crawled_metadata['meta-data'] = ec2.get_instance_metadata(
api_version, self.metadata_address,
- headers_cb=self._get_headers, exception_cb=exc_cb)
+ headers_cb=self._get_headers, headers_redact=redact,
+ exception_cb=exc_cb)
if self.cloud_name == CloudNames.AWS:
identity = ec2.get_instance_identity(
api_version, self.metadata_address,
- headers_cb=self._get_headers, exception_cb=exc_cb)
+ headers_cb=self._get_headers, headers_redact=redact,
+ exception_cb=exc_cb)
crawled_metadata['dynamic'] = {'instance-identity': identity}
except Exception:
util.logexc(
@@ -455,11 +466,12 @@ class DataSourceEc2(sources.DataSource):
if self.cloud_name != CloudNames.AWS:
return None
LOG.debug("Refreshing Ec2 metadata API token")
- request_header = {'X-aws-ec2-metadata-token-ttl-seconds': seconds}
+ request_header = {AWS_TOKEN_REQ_HEADER: seconds}
token_url = '{}/{}'.format(self.metadata_address, API_TOKEN_ROUTE)
try:
- response = uhelp.readurl(
- token_url, headers=request_header, request_method="PUT")
+ response = uhelp.readurl(token_url, headers=request_header,
+ headers_redact=AWS_TOKEN_REDACT,
+ request_method="PUT")
except uhelp.UrlError as e:
LOG.warning(
'Unable to get API token: %s raised exception %s',
@@ -500,8 +512,7 @@ class DataSourceEc2(sources.DataSource):
API_TOKEN_DISABLED):
return {}
# Request a 6 hour token if URL is API_TOKEN_ROUTE
- request_token_header = {
- 'X-aws-ec2-metadata-token-ttl-seconds': AWS_TOKEN_TTL_SECONDS}
+ request_token_header = {AWS_TOKEN_REQ_HEADER: AWS_TOKEN_TTL_SECONDS}
if API_TOKEN_ROUTE in url:
return request_token_header
if not self._api_token:
@@ -511,7 +522,7 @@ class DataSourceEc2(sources.DataSource):
self._api_token = self._refresh_api_token()
if not self._api_token:
return {}
- return {'X-aws-ec2-metadata-token': self._api_token}
+ return {AWS_TOKEN_PUT_HEADER: self._api_token}
class DataSourceEc2Local(DataSourceEc2):
diff --git a/cloudinit/url_helper.py b/cloudinit/url_helper.py
index 1496a47..3e7de9f 100644
--- a/cloudinit/url_helper.py
+++ b/cloudinit/url_helper.py
@@ -8,6 +8,7 @@
#
# This file is part of cloud-init. See LICENSE file for license information.
+import copy
import json
import os
import requests
@@ -41,6 +42,7 @@ else:
SSL_ENABLED = False
CONFIG_ENABLED = False # This was added in 0.7 (but taken out in >=1.0)
_REQ_VER = None
+REDACTED = 'REDACTED'
try:
from distutils.version import LooseVersion
import pkg_resources
@@ -199,9 +201,9 @@ def _get_ssl_args(url, ssl_details):
def readurl(url, data=None, timeout=None, retries=0, sec_between=1,
- headers=None, headers_cb=None, ssl_details=None,
- check_status=True, allow_redirects=True, exception_cb=None,
- session=None, infinite=False, log_req_resp=True,
+ headers=None, headers_cb=None, headers_redact=None,
+ ssl_details=None, check_status=True, allow_redirects=True,
+ exception_cb=None, session=None, infinite=False, log_req_resp=True,
request_method=None):
"""Wrapper around requests.Session to read the url and retry if necessary
@@ -217,6 +219,7 @@ def readurl(url, data=None, timeout=None, retries=0, sec_between=1,
:param headers: Optional dict of headers to send during request
:param headers_cb: Optional callable returning a dict of values to send as
headers during request
+ :param headers_redact: Optional list of header names to redact from the log
:param ssl_details: Optional dict providing key_file, ca_certs, and
cert_file keys for use on in ssl connections.
:param check_status: Optional boolean set True to raise when HTTPError
@@ -243,6 +246,8 @@ def readurl(url, data=None, timeout=None, retries=0, sec_between=1,
req_args['method'] = request_method
if timeout is not None:
req_args['timeout'] = max(float(timeout), 0)
+ if headers_redact is None:
+ headers_redact = []
# It doesn't seem like config
# was added in older library versions (or newer ones either), thus we
# need to manually do the retries if it wasn't...
@@ -287,6 +292,12 @@ def readurl(url, data=None, timeout=None, retries=0, sec_between=1,
if k == 'data':
continue
filtered_req_args[k] = v
+ if k == 'headers':
+ for hkey, _hval in v.items():
+ if hkey in headers_redact:
+ filtered_req_args[k][hkey] = (
+ copy.deepcopy(req_args[k][hkey]))
+ filtered_req_args[k][hkey] = REDACTED
try:
if log_req_resp:
@@ -339,8 +350,8 @@ def readurl(url, data=None, timeout=None, retries=0, sec_between=1,
return None # Should throw before this...
-def wait_for_url(urls, max_wait=None, timeout=None,
- status_cb=None, headers_cb=None, sleep_time=1,
+def wait_for_url(urls, max_wait=None, timeout=None, status_cb=None,
+ headers_cb=None, headers_redact=None, sleep_time=1,
exception_cb=None, sleep_time_cb=None, request_method=None):
"""
urls: a list of urls to try
@@ -352,6 +363,7 @@ def wait_for_url(urls, max_wait=None, timeout=None,
status_cb: call method with string message when a url is not available
headers_cb: call method with single argument of url to get headers
for request.
+ headers_redact: a list of header names to redact from the log
exception_cb: call method with 2 arguments 'msg' (per status_cb) and
'exception', the exception that occurred.
sleep_time_cb: call method with 2 arguments (response, loop_n) that
@@ -415,8 +427,9 @@ def wait_for_url(urls, max_wait=None, timeout=None,
headers = {}
response = readurl(
- url, headers=headers, timeout=timeout,
- check_status=False, request_method=request_method)
+ url, headers=headers, headers_redact=headers_redact,
+ timeout=timeout, check_status=False,
+ request_method=request_method)
if not response.contents:
reason = "empty response [%s]" % (response.code)
url_exc = UrlError(ValueError(reason), code=response.code,
diff --git a/tests/unittests/test_datasource/test_ec2.py b/tests/unittests/test_datasource/test_ec2.py
index 34a089f..bd5bd4c 100644
--- a/tests/unittests/test_datasource/test_ec2.py
+++ b/tests/unittests/test_datasource/test_ec2.py
@@ -429,6 +429,23 @@ class TestEc2(test_helpers.HttprettyTestCase):
self.assertTrue(ds.get_data())
self.assertFalse(ds.is_classic_instance())
+ def test_aws_token_redacted(self):
+ """Verify that aws tokens are redacted when logged."""
+ ds = self._setup_ds(
+ platform_data=self.valid_platform_data,
+ sys_cfg={'datasource': {'Ec2': {'strict_id': False}}},
+ md={'md': DEFAULT_METADATA})
+ self.assertTrue(ds.get_data())
+ all_logs = self.logs.getvalue().splitlines()
+ REDACT_TTL = "'X-aws-ec2-metadata-token-ttl-seconds': 'REDACTED'"
+ REDACT_TOK = "'X-aws-ec2-metadata-token': 'REDACTED'"
+ logs_with_redacted_ttl = [log for log in all_logs if REDACT_TTL in log]
+ logs_with_redacted = [log for log in all_logs if REDACT_TOK in log]
+ logs_with_token = [log for log in all_logs if 'API-TOKEN' in log]
+ self.assertEqual(1, len(logs_with_redacted_ttl))
+ self.assertEqual(79, len(logs_with_redacted))
+ self.assertEqual(0, len(logs_with_token))
+
@mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
def test_valid_platform_with_strict_true(self, m_dhcp):
"""Valid platform data should return true with strict_id true."""
--
1.8.3.1

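For context, the patch above routes IMDSv2 token requests through `readurl` with a `PUT` method and a TTL header, redacting both token headers in logs. A minimal standalone sketch of how such a token request is shaped (the constants mirror the patch; `build_token_request` is a hypothetical helper, not cloud-init's API):

```python
# Sketch of an IMDSv2 token request as described in the patch above.
# build_token_request is illustrative only; cloud-init uses url_helper.readurl.
import urllib.request

API_TOKEN_ROUTE = 'latest/api/token'
AWS_TOKEN_TTL_SECONDS = '21600'  # the 6-hour TTL requested by DataSourceEc2
AWS_TOKEN_REQ_HEADER = 'X-aws-ec2-metadata-token-ttl-seconds'
AWS_TOKEN_PUT_HEADER = 'X-aws-ec2-metadata-token'


def build_token_request(metadata_address='http://169.254.169.254'):
    # IMDSv2 tokens are obtained with an HTTP PUT carrying only a TTL header;
    # the returned token is then sent on later requests via AWS_TOKEN_PUT_HEADER.
    return urllib.request.Request(
        '{}/{}'.format(metadata_address, API_TOKEN_ROUTE),
        headers={AWS_TOKEN_REQ_HEADER: AWS_TOKEN_TTL_SECONDS},
        method='PUT')


req = build_token_request()
print(req.get_method())  # PUT
```

Both `AWS_TOKEN_REQ_HEADER` and `AWS_TOKEN_PUT_HEADER` are the values the patch adds to `AWS_TOKEN_REDACT`, which is why they must never appear unmasked in cloud-init.log.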

@ -0,0 +1,128 @@
From dc9460f161efce6770f66bb95d60cea6d27df722 Mon Sep 17 00:00:00 2001
From: Eduardo Otubo <otubo@redhat.com>
Date: Thu, 25 Jun 2020 08:03:59 +0200
Subject: [PATCH] ec2: only redact token request headers in logs, avoid
altering request (#230)
RH-Author: Eduardo Otubo <otubo@redhat.com>
Message-id: <20200624112104.376-1-otubo@redhat.com>
Patchwork-id: 97793
O-Subject: [RHEL-8.3.0 cloud-init PATCH] ec2: only redact token request headers in logs, avoid altering request (#230)
Bugzilla: 1822343
RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com>
RH-Acked-by: Mohammed Gamal <mgamal@redhat.com>
RH-Acked-by: Cathy Avery <cavery@redhat.com>
From: Chad Smith <chad.smith@canonical.com>
commit fa1abfec27050a4fb71cad950a17e42f9b43b478
Author: Chad Smith <chad.smith@canonical.com>
Date: Tue Mar 3 15:23:33 2020 -0700
ec2: only redact token request headers in logs, avoid altering request (#230)
Our header redact logic was redacting both logged request headers and
the actual source request. This results in DataSourceEc2 sending the
invalid header "X-aws-ec2-metadata-token-ttl-seconds: REDACTED" which
gets an HTTP status response of 400.
Cloud-init retries this failed token request for 2 minutes before
falling back to IMDSv1.
LP: #1865882
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
---
cloudinit/tests/test_url_helper.py | 34 +++++++++++++++++++++++++++++++++-
cloudinit/url_helper.py | 15 ++++++++-------
2 files changed, 41 insertions(+), 8 deletions(-)
diff --git a/cloudinit/tests/test_url_helper.py b/cloudinit/tests/test_url_helper.py
index 1674120..29b3937 100644
--- a/cloudinit/tests/test_url_helper.py
+++ b/cloudinit/tests/test_url_helper.py
@@ -1,7 +1,8 @@
# This file is part of cloud-init. See LICENSE file for license information.
from cloudinit.url_helper import (
- NOT_FOUND, UrlError, oauth_headers, read_file_or_url, retry_on_url_exc)
+ NOT_FOUND, UrlError, REDACTED, oauth_headers, read_file_or_url,
+ retry_on_url_exc)
from cloudinit.tests.helpers import CiTestCase, mock, skipIf
from cloudinit import util
from cloudinit import version
@@ -50,6 +51,9 @@ class TestOAuthHeaders(CiTestCase):
class TestReadFileOrUrl(CiTestCase):
+
+ with_logs = True
+
def test_read_file_or_url_str_from_file(self):
"""Test that str(result.contents) on file is text version of contents.
It should not be "b'data'", but just "'data'" """
@@ -71,6 +75,34 @@ class TestReadFileOrUrl(CiTestCase):
self.assertEqual(result.contents, data)
self.assertEqual(str(result), data.decode('utf-8'))
+ @httpretty.activate
+ def test_read_file_or_url_str_from_url_redacting_headers_from_logs(self):
+ """Headers are redacted from logs but unredacted in requests."""
+ url = 'http://hostname/path'
+ headers = {'sensitive': 'sekret', 'server': 'blah'}
+ httpretty.register_uri(httpretty.GET, url)
+
+ read_file_or_url(url, headers=headers, headers_redact=['sensitive'])
+ logs = self.logs.getvalue()
+ for k in headers.keys():
+ self.assertEqual(headers[k], httpretty.last_request().headers[k])
+ self.assertIn(REDACTED, logs)
+ self.assertNotIn('sekret', logs)
+
+ @httpretty.activate
+ def test_read_file_or_url_str_from_url_redacts_noheaders(self):
+ """When no headers_redact, header values are in logs and requests."""
+ url = 'http://hostname/path'
+ headers = {'sensitive': 'sekret', 'server': 'blah'}
+ httpretty.register_uri(httpretty.GET, url)
+
+ read_file_or_url(url, headers=headers)
+ for k in headers.keys():
+ self.assertEqual(headers[k], httpretty.last_request().headers[k])
+ logs = self.logs.getvalue()
+ self.assertNotIn(REDACTED, logs)
+ self.assertIn('sekret', logs)
+
@mock.patch(M_PATH + 'readurl')
def test_read_file_or_url_passes_params_to_readurl(self, m_readurl):
"""read_file_or_url passes all params through to readurl."""
diff --git a/cloudinit/url_helper.py b/cloudinit/url_helper.py
index 3e7de9f..e6188ea 100644
--- a/cloudinit/url_helper.py
+++ b/cloudinit/url_helper.py
@@ -291,13 +291,14 @@ def readurl(url, data=None, timeout=None, retries=0, sec_between=1,
for (k, v) in req_args.items():
if k == 'data':
continue
- filtered_req_args[k] = v
- if k == 'headers':
- for hkey, _hval in v.items():
- if hkey in headers_redact:
- filtered_req_args[k][hkey] = (
- copy.deepcopy(req_args[k][hkey]))
- filtered_req_args[k][hkey] = REDACTED
+ if k == 'headers' and headers_redact:
+ matched_headers = [k for k in headers_redact if v.get(k)]
+ if matched_headers:
+ filtered_req_args[k] = copy.deepcopy(v)
+ for key in matched_headers:
+ filtered_req_args[k][key] = REDACTED
+ else:
+ filtered_req_args[k] = v
try:
if log_req_resp:
--
1.8.3.1

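The corrected logic above only deep-copies the headers dict for the *logged* copy, leaving the real request untouched. A minimal sketch of that filtering step in isolation (`filter_request_args` is a hypothetical name; in cloud-init this lives inline in `url_helper.readurl`):

```python
# Sketch of the fixed log-redaction logic from the patch above: mask sensitive
# headers in a copy used for logging, without mutating the outgoing request.
import copy

REDACTED = 'REDACTED'


def filter_request_args(req_args, headers_redact):
    filtered = {}
    for k, v in req_args.items():
        if k == 'data':
            continue  # request bodies are never logged
        if k == 'headers' and headers_redact:
            matched = [h for h in headers_redact if v.get(h)]
            if matched:
                # deepcopy first, so REDACTED only lands in the logged copy
                filtered[k] = copy.deepcopy(v)
                for h in matched:
                    filtered[k][h] = REDACTED
                continue
        filtered[k] = v
    return filtered


args = {'url': 'http://169.254.169.254', 'data': b'ignored',
        'headers': {'X-aws-ec2-metadata-token': 'tok123', 'Accept': '*/*'}}
safe = filter_request_args(args, ['X-aws-ec2-metadata-token'])
print(safe['headers'])  # token masked in the loggable copy
print(args['headers'])  # original request headers unchanged
```

The earlier, buggy version assigned `filtered_req_args[k] = v` before redacting, so the logged dict aliased the live headers and the request itself went out with `REDACTED` as the TTL value, producing the HTTP 400 described in the commit message.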

@ -1,144 +0,0 @@
From 4f1c3f5be0306da485135544ced4a676753a9373 Mon Sep 17 00:00:00 2001
From: Eduardo Otubo <otubo@redhat.com>
Date: Wed, 16 Oct 2019 12:10:24 +0200
Subject: [PATCH 2/2] util: json.dumps on python 2.7 will handle
UnicodeDecodeError on binary
RH-Author: Eduardo Otubo <otubo@redhat.com>
Message-id: <20191016121024.23694-1-otubo@redhat.com>
Patchwork-id: 91812
O-Subject: [RHEL-7.8/RHEL-8.1.0 cloud-init PATCH] util: json.dumps on python 2.7 will handle UnicodeDecodeError on binary
Bugzilla: 1744718
RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com>
RH-Acked-by: Mohammed Gamal <mgamal@redhat.com>
commit 067516d7bc917e4921b9f1424b7a64e92cae0ad2
Author: Chad Smith <chad.smith@canonical.com>
Date: Fri Sep 27 20:46:00 2019 +0000
util: json.dumps on python 2.7 will handle UnicodeDecodeError on binary
Since python 2.7 doesn't handle UnicodeDecodeErrors with the default
handler
LP: #1801364
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
---
cloudinit/sources/tests/test_init.py | 12 +++++-------
cloudinit/tests/test_util.py | 20 ++++++++++++++++++++
cloudinit/util.py | 27 +++++++++++++++++++++++++--
3 files changed, 50 insertions(+), 9 deletions(-)
diff --git a/cloudinit/sources/tests/test_init.py b/cloudinit/sources/tests/test_init.py
index 6378e98..9698261 100644
--- a/cloudinit/sources/tests/test_init.py
+++ b/cloudinit/sources/tests/test_init.py
@@ -457,19 +457,17 @@ class TestDataSource(CiTestCase):
instance_json['ds']['meta_data'])
@skipIf(not six.PY2, "Only python2 hits UnicodeDecodeErrors on non-utf8")
- def test_non_utf8_encoding_logs_warning(self):
- """When non-utf-8 values exist in py2 instance-data is not written."""
+ def test_non_utf8_encoding_gets_b64encoded(self):
+ """When non-utf-8 values exist in py2 instance-data is b64encoded."""
tmp = self.tmp_dir()
datasource = DataSourceTestSubclassNet(
self.sys_cfg, self.distro, Paths({'run_dir': tmp}),
custom_metadata={'key1': 'val1', 'key2': {'key2.1': b'ab\xaadef'}})
self.assertTrue(datasource.get_data())
json_file = self.tmp_path(INSTANCE_JSON_FILE, tmp)
- self.assertFalse(os.path.exists(json_file))
- self.assertIn(
- "WARNING: Error persisting instance-data.json: 'utf8' codec can't"
- " decode byte 0xaa in position 2: invalid start byte",
- self.logs.getvalue())
+ instance_json = util.load_json(util.load_file(json_file))
+ key21_value = instance_json['ds']['meta_data']['key2']['key2.1']
+ self.assertEqual('ci-b64:' + util.b64e(b'ab\xaadef'), key21_value)
def test_get_hostname_subclass_support(self):
"""Validate get_hostname signature on all subclasses of DataSource."""
diff --git a/cloudinit/tests/test_util.py b/cloudinit/tests/test_util.py
index e3d2dba..f4f95e9 100644
--- a/cloudinit/tests/test_util.py
+++ b/cloudinit/tests/test_util.py
@@ -2,7 +2,9 @@
"""Tests for cloudinit.util"""
+import base64
import logging
+import json
import platform
import cloudinit.util as util
@@ -528,6 +530,24 @@ class TestGetLinuxDistro(CiTestCase):
self.assertEqual(('foo', '1.1', 'aarch64'), dist)
+class TestJsonDumps(CiTestCase):
+ def test_is_str(self):
+ """json_dumps should return a string."""
+ self.assertTrue(isinstance(util.json_dumps({'abc': '123'}), str))
+
+ def test_utf8(self):
+ smiley = '\\ud83d\\ude03'
+ self.assertEqual(
+ {'smiley': smiley},
+ json.loads(util.json_dumps({'smiley': smiley})))
+
+ def test_non_utf8(self):
+ blob = b'\xba\x03Qx-#y\xea'
+ self.assertEqual(
+ {'blob': 'ci-b64:' + base64.b64encode(blob).decode('utf-8')},
+ json.loads(util.json_dumps({'blob': blob})))
+
+
@mock.patch('os.path.exists')
class TestIsLXD(CiTestCase):
diff --git a/cloudinit/util.py b/cloudinit/util.py
index a84112a..2c9ac66 100644
--- a/cloudinit/util.py
+++ b/cloudinit/util.py
@@ -1590,10 +1590,33 @@ def json_serialize_default(_obj):
return 'Warning: redacted unserializable type {0}'.format(type(_obj))
+def json_preserialize_binary(data):
+ """Preserialize any discovered binary values to avoid json.dumps issues.
+
+ Used only on python 2.7 where default type handling is not honored for
+ failure to encode binary data. LP: #1801364.
+ TODO(Drop this function when py2.7 support is dropped from cloud-init)
+ """
+ data = obj_copy.deepcopy(data)
+ for key, value in data.items():
+ if isinstance(value, (dict)):
+ data[key] = json_preserialize_binary(value)
+ if isinstance(value, bytes):
+ data[key] = 'ci-b64:{0}'.format(b64e(value))
+ return data
+
+
def json_dumps(data):
"""Return data in nicely formatted json."""
- return json.dumps(data, indent=1, sort_keys=True,
- separators=(',', ': '), default=json_serialize_default)
+ try:
+ return json.dumps(
+ data, indent=1, sort_keys=True, separators=(',', ': '),
+ default=json_serialize_default)
+ except UnicodeDecodeError:
+ if sys.version_info[:2] == (2, 7):
+ data = json_preserialize_binary(data)
+ return json.dumps(data)
+ raise
def yaml_dumps(obj, explicit_start=True, explicit_end=True):
--
1.8.3.1

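The preserialization step in the patch above walks the metadata dict and replaces raw bytes with a `ci-b64:`-prefixed base64 string before handing it to `json.dumps`. A self-contained sketch of that transformation (written for Python 3 to stay runnable, though the original guard only fires on 2.7):

```python
# Sketch of json_preserialize_binary from the patch above: replace bytes
# values with 'ci-b64:<base64>' strings so json.dumps cannot fail on them.
import base64
import copy
import json


def json_preserialize_binary(data):
    data = copy.deepcopy(data)  # never mutate the caller's metadata dict
    for key, value in data.items():
        if isinstance(value, dict):
            data[key] = json_preserialize_binary(value)  # recurse into nests
        if isinstance(value, bytes):
            data[key] = 'ci-b64:{0}'.format(
                base64.b64encode(value).decode('utf-8'))
    return data


meta = {'key1': 'val1', 'key2': {'key2.1': b'ab\xaadef'}}
print(json.dumps(json_preserialize_binary(meta), sort_keys=True))
```

This matches what the updated unit test expects: non-utf-8 values end up b64-encoded under the `ci-b64:` prefix instead of aborting the instance-data.json write with a warning.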

@ -0,0 +1,46 @@
From ebbc83c1ca52620179d94dc1d92c44883273e4ef Mon Sep 17 00:00:00 2001
From: jmaloy <jmaloy@redhat.com>
Date: Thu, 28 May 2020 08:44:02 +0200
Subject: [PATCH 2/4] utils: use SystemRandom when generating random password.
(#204)
RH-Author: jmaloy <jmaloy@redhat.com>
Message-id: <20200313184329.16696-2-jmaloy@redhat.com>
Patchwork-id: 94294
O-Subject: [RHEL-8.2 cloud-init PATCH 1/1] utils: use SystemRandom when generating random password. (#204)
Bugzilla: 1812174
RH-Acked-by: Eduardo Otubo <eterrell@redhat.com>
RH-Acked-by: Vitaly Kuznetsov <vkuznets@redhat.com>
RH-Acked-by: Mohammed Gamal <mgamal@redhat.com>
From: Dimitri John Ledkov <xnox@ubuntu.com>
As noticed by Seth Arnold, non-deterministic SystemRandom should be
used when creating security sensitive random strings.
(cherry picked from commit 3e2f7356effc9e9cccc5ae945846279804eedc46)
Signed-off-by: Jon Maloy <jmaloy@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
---
cloudinit/util.py | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/cloudinit/util.py b/cloudinit/util.py
index 9d9d5c7..5d51ba8 100644
--- a/cloudinit/util.py
+++ b/cloudinit/util.py
@@ -401,9 +401,10 @@ def translate_bool(val, addons=None):
def rand_str(strlen=32, select_from=None):
+ r = random.SystemRandom()
if not select_from:
select_from = string.ascii_letters + string.digits
- return "".join([random.choice(select_from) for _x in range(0, strlen)])
+ return "".join([r.choice(select_from) for _x in range(0, strlen)])
def rand_dict_key(dictionary, postfix=None):
--
1.8.3.1

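The one-line change above swaps the module-level `random.choice` (seedable Mersenne Twister, CVE-2020-8631) for an OS-entropy-backed `random.SystemRandom`. The patched function, reproduced standalone for clarity:

```python
# The rand_str helper as patched above: SystemRandom draws from os.urandom,
# so generated passwords are not reproducible from a guessable PRNG seed.
import random
import string


def rand_str(strlen=32, select_from=None):
    r = random.SystemRandom()
    if not select_from:
        select_from = string.ascii_letters + string.digits
    return "".join([r.choice(select_from) for _x in range(0, strlen)])


print(rand_str(20))  # e.g. a 20-char alphanumeric password
```

`SystemRandom` exposes the same `choice`/`randrange` interface as the module-level functions, which is why the fix is a two-line diff.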

@ -5,8 +5,8 @@
%global debug_package %{nil}
Name: cloud-init
Version: 18.5
Release: 8%{?dist}
Version: 19.4
Release: 7%{?dist}
Summary: Cloud instance init scripts
Group: System Environment/Base
@ -18,33 +18,28 @@ Source1: cloud-init-tmpfiles.conf
Patch0001: 0001-Add-initial-redhat-setup.patch
Patch0002: 0002-Do-not-write-NM_CONTROLLED-no-in-generated-interface.patch
Patch0003: 0003-limit-permissions-on-def_log_file.patch
Patch0004: 0004-azure-ensure-that-networkmanager-hook-script-runs.patch
Patch0005: 0005-sysconfig-Don-t-write-BOOTPROTO-dhcp-for-ipv6-dhcp.patch
Patch0006: 0006-DataSourceAzure.py-use-hostnamectl-to-set-hostname.patch
Patch0007: 0007-sysconfig-Don-t-disable-IPV6_AUTOCONF.patch
Patch0008: 0008-net-Make-sysconfig-renderer-compatible-with-Network-.patch
Patch0009: 0009-net-Wait-for-dhclient-to-daemonize-before-reading-le.patch
Patch0010: 0010-cloud-init-per-don-t-use-dashes-in-sem-names.patch
Patch0011: 0011-azure-Filter-list-of-ssh-keys-pulled-from-fabric.patch
Patch0012: 0012-include-NOZEROCONF-yes-in-etc-sysconfig-network.patch
# For bz#1691986 - [Azure] [RHEL 8.1] Cloud-init fixes to support fast provisioning for Azure
Patch13: ci-Azure-Ensure-platform-random_seed-is-always-serializ.patch
# For bz#1691986 - [Azure] [RHEL 8.1] Cloud-init fixes to support fast provisioning for Azure
Patch14: ci-DatasourceAzure-add-additional-logging-for-azure-dat.patch
# For bz#1691986 - [Azure] [RHEL 8.1] Cloud-init fixes to support fast provisioning for Azure
Patch15: ci-Azure-Changes-to-the-Hyper-V-KVP-Reporter.patch
# For bz#1691986 - [Azure] [RHEL 8.1] Cloud-init fixes to support fast provisioning for Azure
Patch16: ci-DataSourceAzure-Adjust-timeout-for-polling-IMDS.patch
# For bz#1691986 - [Azure] [RHEL 8.1] Cloud-init fixes to support fast provisioning for Azure
Patch17: ci-cc_mounts-check-if-mount-a-on-no-change-fstab-path.patch
# For bz#1692914 - [8.1] [WALA][cloud] cloud-init dhclient-hook script has some unexpected side-effects on Azure
Patch18: ci-Revert-azure-ensure-that-networkmanager-hook-script-.patch
# For bz#1691986 - [Azure] [RHEL 8.1] Cloud-init fixes to support fast provisioning for Azure
Patch19: ci-Azure-Return-static-fallback-address-as-if-failed-to.patch
# For bz#1706482 - [cloud-init][RHVM]cloud-init network configuration does not persist reboot [RHEL 8.2.0]
Patch20: ci-Fix-for-network-configuration-not-persisting-after-r.patch
# For bz#1744718 - [cloud-init][RHEL8][OpenStack] cloud-init can't persist instance-data.json
Patch21: ci-util-json.dumps-on-python-2.7-will-handle-UnicodeDec.patch
Patch0004: 0004-sysconfig-Don-t-write-BOOTPROTO-dhcp-for-ipv6-dhcp.patch
Patch0005: 0005-DataSourceAzure.py-use-hostnamectl-to-set-hostname.patch
Patch0006: 0006-include-NOZEROCONF-yes-in-etc-sysconfig-network.patch
Patch0007: 0007-Remove-race-condition-between-cloud-init-and-Network.patch
# For bz#1812171 - CVE-2020-8632 cloud-init: Too short random password length in cc_set_password in config/cc_set_passwords.py [rhel-8]
Patch8: ci-cc_set_password-increase-random-pwlength-from-9-to-2.patch
# For bz#1812174 - CVE-2020-8631 cloud-init: Use of random.choice when generating random password [rhel-8]
Patch9: ci-utils-use-SystemRandom-when-generating-random-passwo.patch
# For bz#1814152 - CVE-2018-10896 cloud-init: default configuration disabled deletion of SSH host keys [rhel-8]
Patch10: ci-Enable-ssh_deletekeys-by-default.patch
# For bz#1840648 - [cloud-init][RHEL-8.2.0] /etc/resolv.conf lose config after reboot (initial instance is ok)
Patch11: ci-Remove-race-condition-between-cloud-init-and-Network.patch
# For bz#1803928 - [RHEL8.3] Race condition of starting cloud-init and NetworkManager
Patch12: ci-Make-cloud-init.service-execute-after-network-is-up.patch
# For bz#1822343 - [RHEL8.3] Do not log IMDSv2 token values into cloud-init.log
Patch13: ci-ec2-Do-not-log-IMDSv2-token-values-instead-use-REDAC.patch
# For bz#1834173 - [rhel-8.3]Incorrect ds-identify check in cloud-init-generator
Patch14: ci-Change-from-redhat-to-rhel-in-systemd-generator-tmpl.patch
# For bz#1834173 - [rhel-8.3]Incorrect ds-identify check in cloud-init-generator
Patch15: ci-cloud-init.service.tmpl-use-rhel-instead-of-redhat-4.patch
# For bz#1822343 - [RHEL8.3] Do not log IMDSv2 token values into cloud-init.log
Patch16: ci-ec2-only-redact-token-request-headers-in-logs-avoid-.patch
BuildArch: noarch
@ -140,10 +135,11 @@ mkdir -p $RPM_BUILD_ROOT%{_unitdir}
cp rhel/systemd/* $RPM_BUILD_ROOT%{_unitdir}/
[ ! -d $RPM_BUILD_ROOT/usr/lib/systemd/system-generators ] && mkdir -p $RPM_BUILD_ROOT/usr/lib/systemd/system-generators
cp -p systemd/cloud-init-generator $RPM_BUILD_ROOT/usr/lib/systemd/system-generators
python3 tools/render-cloudcfg --variant rhel systemd/cloud-init-generator.tmpl > $RPM_BUILD_ROOT/usr/lib/systemd/system-generators/cloud-init-generator
chmod 755 $RPM_BUILD_ROOT/usr/lib/systemd/system-generators/cloud-init-generator
[ ! -d $RPM_BUILD_ROOT/usr/lib/%{name} ] && mkdir -p $RPM_BUILD_ROOT/usr/lib/%{name}
cp -p tools/ds-identify $RPM_BUILD_ROOT/usr/lib/%{name}/ds-identify
cp -p tools/ds-identify $RPM_BUILD_ROOT%{_libexecdir}/%{name}/ds-identify
%clean
@ -219,7 +215,7 @@ fi
%{_udevrulesdir}/66-azure-ephemeral.rules
%{_sysconfdir}/bash_completion.d/cloud-init
%{_bindir}/cloud-id
/usr/lib/%{name}/ds-identify
%{_libexecdir}/%{name}/ds-identify
/usr/lib/systemd/system-generators/cloud-init-generator
@ -227,6 +223,79 @@ fi
%config(noreplace) %{_sysconfdir}/rsyslog.d/21-cloudinit.conf
%changelog
* Fri Jun 26 2020 Miroslav Rezanina <mrezanin@redhat.com> - 19.4-7.el8
- Fixing cloud-init-generator permissions [bz#1834173]
- Resolves: bz#1834173
([rhel-8.3]Incorrect ds-identify check in cloud-init-generator)
* Thu Jun 25 2020 Miroslav Rezanina <mrezanin@redhat.com> - 19.4-6.el8
- ci-ec2-only-redact-token-request-headers-in-logs-avoid-.patch [bz#1822343]
- Resolves: bz#1822343
([RHEL8.3] Do not log IMDSv2 token values into cloud-init.log)
* Wed Jun 24 2020 Miroslav Rezanina <mrezanin@redhat.com> - 19.4-5.el8
- ci-ec2-Do-not-log-IMDSv2-token-values-instead-use-REDAC.patch [bz#1822343]
- ci-Render-the-generator-from-template-instead-of-cp.patch [bz#1834173]
- ci-Change-from-redhat-to-rhel-in-systemd-generator-tmpl.patch [bz#1834173]
- ci-cloud-init.service.tmpl-use-rhel-instead-of-redhat-4.patch [bz#1834173]
- Resolves: bz#1822343
([RHEL8.3] Do not log IMDSv2 token values into cloud-init.log)
- Resolves: bz#1834173
([rhel-8.3]Incorrect ds-identify check in cloud-init-generator)
* Tue Jun 09 2020 Miroslav Rezanina <mrezanin@redhat.com> - 19.4-4.el8
- ci-changing-ds-identify-patch-from-usr-lib-to-usr-libex.patch [bz#1834173]
- Resolves: bz#1834173
([rhel-8.3]Incorrect ds-identify check in cloud-init-generator)
* Mon Jun 01 2020 Miroslav Rezanina <mrezanin@redhat.com> - 19.4-3.el8
- ci-Make-cloud-init.service-execute-after-network-is-up.patch [bz#1803928]
- Resolves: bz#1803928
([RHEL8.3] Race condition of starting cloud-init and NetworkManager)
* Thu May 28 2020 Miroslav Rezanina <mrezanin@redhat.com> - 19.4-2.el8
- ci-cc_set_password-increase-random-pwlength-from-9-to-2.patch [bz#1812171]
- ci-utils-use-SystemRandom-when-generating-random-passwo.patch [bz#1812174]
- ci-Enable-ssh_deletekeys-by-default.patch [bz#1814152]
- ci-Remove-race-condition-between-cloud-init-and-Network.patch [bz#1840648]
- Resolves: bz#1812171
(CVE-2020-8632 cloud-init: Too short random password length in cc_set_password in config/cc_set_passwords.py [rhel-8])
- Resolves: bz#1812174
(CVE-2020-8631 cloud-init: Use of random.choice when generating random password [rhel-8])
- Resolves: bz#1814152
(CVE-2018-10896 cloud-init: default configuration disabled deletion of SSH host keys [rhel-8])
- Resolves: bz#1840648
([cloud-init][RHEL-8.2.0] /etc/resolv.conf lose config after reboot (initial instance is ok))
* Mon Apr 20 2020 Miroslav Rezanina <mrezanin@redhat.coM> - 19.4-1.el8
- Rebase to cloud-init 19.4 [bz#1803095]
- Resolves: bz#1803095
([RHEL-8.3.0] cloud-init rebase to 19.4)
* Tue Mar 10 2020 Miroslav Rezanina <mrezanin@redhat.com> - 18.5-12.el8
- ci-Remove-race-condition-between-cloud-init-and-Network.patch [bz#1807797]
- Resolves: bz#1807797
([cloud-init][RHEL-8.2.0] /etc/resolv.conf lose config after reboot (initial instance is ok))
* Thu Feb 20 2020 Miroslav Rezanina <mrezanin@redhat.com> - 18.5-11.el8
- ci-azure-avoid-re-running-cloud-init-when-instance-id-i.patch [bz#1788684]
- ci-net-skip-bond-interfaces-in-get_interfaces.patch [bz#1768770]
- ci-net-add-is_master-check-for-filtering-device-list.patch [bz#1768770]
- Resolves: bz#1768770
(cloud-init complaining about enslaved mac)
- Resolves: bz#1788684
([RHEL-8] cloud-init Azure byte swap (hyperV Gen2 Only))
* Thu Feb 13 2020 Miroslav Rezanina <mrezanin@redhat.com> - 18.5-10.el8
- ci-cmd-main.py-Fix-missing-modules-init-key-in-modes-di.patch [bz#1802140]
- Resolves: bz#1802140
([cloud-init][RHEL8.2]cloud-init cloud-final.service fail with KeyError: 'modules-init' after upgrade to version 18.2-1.el7_6.1 in RHV)
* Tue Jan 28 2020 Miroslav Rezanina <mrezanin@redhat.com> - 18.5-9.el8
- ci-Removing-cloud-user-from-wheel.patch [bz#1785648]
- Resolves: bz#1785648
([RHEL8]cloud-user added to wheel group and sudoers.d causes 'sudo -v' prompts for passphrase)
* Fri Nov 22 2019 Miroslav Rezanina <mrezanin@redhat.com> - 18.5-8.el8
- ci-Fix-for-network-configuration-not-persisting-after-r.patch [bz#1706482]
- ci-util-json.dumps-on-python-2.7-will-handle-UnicodeDec.patch [bz#1744718]
@ -248,11 +317,6 @@ fi
- Resolves: bz#1692914
([8.1] [WALA][cloud] cloud-init dhclient-hook script has some unexpected side-effects on Azure)
* Mon Jun 10 2019 Miroslav Rezanina <mrezanin@redhat.com> - 18.5-5.el8
- Improved gating testing [bz#1682786]
- Resolves: bz#1682786
(cloud-init changes blocked until gating tests are added)
* Mon Jun 03 2019 Miroslav Rezanina <mrezanin@redhat.com> - 18.5-4.el8
- ci-Azure-Ensure-platform-random_seed-is-always-serializ.patch [bz#1691986]
- ci-DatasourceAzure-add-additional-logging-for-azure-dat.patch [bz#1691986]
@ -272,6 +336,7 @@ fi
- Resolves: rhbz#1682786
(cloud-init changes blocked until gating tests are added)
* Wed Apr 10 2019 Danilo de Paula <ddepaula@redhat.com: - 18.5-1.el8
- Rebase to cloud-init 18.5
- Resolves: bz#1687563