Compare commits

No commits in common. "c8-beta" and "c8" have entirely different histories.
c8-beta ... c8

34 changed files with 3182 additions and 38 deletions

.gitignore

@@ -1,12 +1,12 @@
-SOURCES/ad_integration-1.4.2.tar.gz
+SOURCES/ad_integration-1.4.6.tar.gz
 SOURCES/ansible-posix-1.5.4.tar.gz
-SOURCES/ansible-sshd-v0.23.2.tar.gz
+SOURCES/ansible-sshd-v0.25.0.tar.gz
 SOURCES/auto-maintenance-11ad785c9bb72611244e7909450ca4247e12db4d.tar.gz
-SOURCES/bootloader-1.0.3.tar.gz
+SOURCES/bootloader-1.0.7.tar.gz
 SOURCES/certificate-1.3.3.tar.gz
-SOURCES/cockpit-1.5.5.tar.gz
+SOURCES/cockpit-1.5.10.tar.gz
 SOURCES/community-general-8.3.0.tar.gz
-SOURCES/containers-podman-1.12.0.tar.gz
+SOURCES/containers-podman-1.15.4.tar.gz
 SOURCES/crypto_policies-1.3.2.tar.gz
 SOURCES/fapolicyd-1.1.1.tar.gz
 SOURCES/firewall-1.7.4.tar.gz
@@ -15,9 +15,9 @@ SOURCES/journald-1.2.3.tar.gz
 SOURCES/kdump-1.4.4.tar.gz
 SOURCES/kernel_settings-1.2.2.tar.gz
 SOURCES/keylime_server-1.1.2.tar.gz
-SOURCES/logging-1.12.4.tar.gz
+SOURCES/logging-1.13.4.tar.gz
 SOURCES/metrics-1.10.1.tar.gz
-SOURCES/nbde_client-1.2.17.tar.gz
+SOURCES/nbde_client-1.3.0.tar.gz
 SOURCES/nbde_server-1.4.3.tar.gz
 SOURCES/network-1.15.1.tar.gz
 SOURCES/podman-1.4.7.tar.gz
@@ -29,6 +29,6 @@ SOURCES/snapshot-1.3.1.tar.gz
 SOURCES/ssh-1.3.2.tar.gz
 SOURCES/storage-1.16.2.tar.gz
 SOURCES/systemd-1.1.2.tar.gz
-SOURCES/timesync-1.8.2.tar.gz
+SOURCES/timesync-1.9.0.tar.gz
 SOURCES/tlog-1.3.3.tar.gz
 SOURCES/vpn-1.6.3.tar.gz

@@ -1,12 +1,12 @@
-5141bc9c1084d08fc347e017bb1c91963bcbe441 SOURCES/ad_integration-1.4.2.tar.gz
+11b58e43e1b78cb75eda26724359f4d748173d5f SOURCES/ad_integration-1.4.6.tar.gz
 da646eb9ba655f1693cc950ecb5c24af39ee1af6 SOURCES/ansible-posix-1.5.4.tar.gz
-ae49d8b3f623a4bf132644f0f2256a92bd4c41ae SOURCES/ansible-sshd-v0.23.2.tar.gz
+5829f61d848d1fe52ecd1702c055eeed8ef56e70 SOURCES/ansible-sshd-v0.25.0.tar.gz
 e4df3548cf129b61c40b2d013917e07be2f3ba4e SOURCES/auto-maintenance-11ad785c9bb72611244e7909450ca4247e12db4d.tar.gz
-b84e7d8110638848b22a22c7ed8e94cf865a13d8 SOURCES/bootloader-1.0.3.tar.gz
+7ae4b79529d14c0c8958cf9633f8d560d718f4e7 SOURCES/bootloader-1.0.7.tar.gz
 9eaac83b306b2fb8dd8e82bc4b03b30285d2024f SOURCES/certificate-1.3.3.tar.gz
-dd3d8b4b12aeeeab4cf78bfdad911a194027fc8e SOURCES/cockpit-1.5.5.tar.gz
+15677bec6ddafb75911d7c29fe1eb1c24b9b4f1c SOURCES/cockpit-1.5.10.tar.gz
 15fd2f2c08ae17cc47efb76bd14fb9ab6f33bc26 SOURCES/community-general-8.3.0.tar.gz
-ff961816d6e01326a413552b4f3c290ccdb9dc20 SOURCES/containers-podman-1.12.0.tar.gz
+2c0a98aedb2c031bfc94609bc9553d192224b159 SOURCES/containers-podman-1.15.4.tar.gz
 6705818b1fdf3cc82083937265f7942e3d3ccc2d SOURCES/crypto_policies-1.3.2.tar.gz
 29505121f6798f527045c5f66656fd5c19bed5fe SOURCES/fapolicyd-1.1.1.tar.gz
 1a7a875cebbd3e146f6ca554269ee20845cf877b SOURCES/firewall-1.7.4.tar.gz
@@ -15,9 +15,9 @@ e96ba9f5b3ae08a12dbf072f118e316036553b94 SOURCES/journald-1.2.3.tar.gz
 de6c6103b7023aa21782906696e712b428600a92 SOURCES/kdump-1.4.4.tar.gz
 0f28a0919874f650ef0149409116bae12d2363e0 SOURCES/kernel_settings-1.2.2.tar.gz
 85c14c7e260b247eb7947c8706af82ff5aac07d2 SOURCES/keylime_server-1.1.2.tar.gz
-15e86d6989a25cd50edb58f5fb479e0d1ae3b755 SOURCES/logging-1.12.4.tar.gz
+4825923fc0fa29e80c08864b0afc50e2e075be91 SOURCES/logging-1.13.4.tar.gz
 e795238995d2dfb2cbdb5cc9bf4923f7410ac49a SOURCES/metrics-1.10.1.tar.gz
-8218c40eb96540e2f5d9f5be95deea9e2110ddd8 SOURCES/nbde_client-1.2.17.tar.gz
+544c5c9e53beef034b0d39ecf944e0bb13231535 SOURCES/nbde_client-1.3.0.tar.gz
 dce6435ca145b3143c1326a8e413e8173e5655ef SOURCES/nbde_server-1.4.3.tar.gz
 e89a4d6974a089f035b1f3fc79a1f9cacfa1f933 SOURCES/network-1.15.1.tar.gz
 fc242b6f776088720ef04e5891c75fd33e6e1e96 SOURCES/podman-1.4.7.tar.gz
@@ -29,6 +29,6 @@ b519a4e35b55e97bf954916d77f1f1f82ec2615b SOURCES/rhc-1.6.0.tar.gz
 d2c153993e51ce949db861db2aa15e8ec90b45af SOURCES/ssh-1.3.2.tar.gz
 e08c1df6c6842f6ad37fff34d2e9d96e9cdddd70 SOURCES/storage-1.16.2.tar.gz
 df8f2896ad761da73872d17a0f0cd8cfd34e0671 SOURCES/systemd-1.1.2.tar.gz
-47be5d4b967970eefecf72a79a1d8d9bc7c72f18 SOURCES/timesync-1.8.2.tar.gz
+0a9df710ddd8a43e74cbd77e4414d5ea7e90d7b9 SOURCES/timesync-1.9.0.tar.gz
 6d559dc44f44bc7e505602b36b51b4d1b60f2754 SOURCES/tlog-1.3.3.tar.gz
 27395883fa555658257e70287e709f8ccc1d8392 SOURCES/vpn-1.6.3.tar.gz

@@ -0,0 +1,74 @@
From 8b3cfc1a30da1ab681eb8c250baa2d6395ecc0d2 Mon Sep 17 00:00:00 2001
From: Vojtech Trefny <vtrefny@redhat.com>
Date: Wed, 3 Apr 2024 15:12:00 +0200
Subject: [PATCH 01/10] test: fix sector-based disk size calculation from
ansible_devices
Device sizes specified in sectors are in general in 512 sectors
regardless of the actual device physical sector size. Example of
ansible_devices facts for a 4k sector size drive:
...
"sectors": "41943040",
"sectorsize": "4096",
"size": "20.00 GB"
...
Resolves: RHEL-30959
Signed-off-by: Vojtech Trefny <vtrefny@redhat.com>
(cherry picked from commit bb1eb23ccd6e9475cd698f0a6f2f497ffefbccd2)
---
tests/tests_create_lv_size_equal_to_vg.yml | 3 +--
tests/tests_misc.yml | 3 +--
tests/tests_resize.yml | 6 ++----
3 files changed, 4 insertions(+), 8 deletions(-)
diff --git a/tests/tests_create_lv_size_equal_to_vg.yml b/tests/tests_create_lv_size_equal_to_vg.yml
index cab4f08..535f73b 100644
--- a/tests/tests_create_lv_size_equal_to_vg.yml
+++ b/tests/tests_create_lv_size_equal_to_vg.yml
@@ -8,8 +8,7 @@
volume_group_size: '10g'
lv_size: '10g'
unused_disk_subfact: '{{ ansible_devices[unused_disks[0]] }}'
- disk_size: '{{ unused_disk_subfact.sectors | int *
- unused_disk_subfact.sectorsize | int }}'
+ disk_size: '{{ unused_disk_subfact.sectors | int * 512 }}'
tags:
- tests::lvm
diff --git a/tests/tests_misc.yml b/tests/tests_misc.yml
index 6373897..363d843 100644
--- a/tests/tests_misc.yml
+++ b/tests/tests_misc.yml
@@ -8,8 +8,7 @@
volume_group_size: "5g"
volume1_size: "4g"
unused_disk_subfact: "{{ ansible_devices[unused_disks[0]] }}"
- too_large_size: "{{ (unused_disk_subfact.sectors | int * 1.2) *
- unused_disk_subfact.sectorsize | int }}"
+ too_large_size: "{{ (unused_disk_subfact.sectors | int * 1.2) * 512 }}"
tags:
- tests::lvm
tasks:
diff --git a/tests/tests_resize.yml b/tests/tests_resize.yml
index 06fb375..1cd2176 100644
--- a/tests/tests_resize.yml
+++ b/tests/tests_resize.yml
@@ -11,10 +11,8 @@
invalid_size1: xyz GiB
invalid_size2: none
unused_disk_subfact: '{{ ansible_devices[unused_disks[0]] }}'
- too_large_size: '{{ unused_disk_subfact.sectors | int * 1.2 *
- unused_disk_subfact.sectorsize | int }}'
- disk_size: '{{ unused_disk_subfact.sectors | int *
- unused_disk_subfact.sectorsize | int }}'
+ too_large_size: '{{ unused_disk_subfact.sectors | int * 1.2 * 512 }}'
+ disk_size: '{{ unused_disk_subfact.sectors | int * 512 }}'
tags:
- tests::lvm
tasks:
--
2.46.0
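As a sanity check on the fix above, a minimal Python sketch using the fact values quoted in the commit message (41943040 sectors, 4096-byte sector size, 20 GB disk):

sectors = 41943040   # "sectors" fact: always counted in 512-byte units
sectorsize = 4096    # "sectorsize" fact: the device's real sector size

wrong = sectors * sectorsize  # 171798691840 bytes (~160 GiB): 8x too big
right = sectors * 512         # 21474836480 bytes = 20 GiB, matches "size"

assert right == 20 * 1024**3
print(wrong, right)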

@@ -0,0 +1,64 @@
From 9f561445271a14fee598e9a793f72297f66eae56 Mon Sep 17 00:00:00 2001
From: Vojtech Trefny <vtrefny@redhat.com>
Date: Wed, 10 Apr 2024 17:05:46 +0200
Subject: [PATCH 02/10] fix: Fix recreate check for formats without labelling
support
Formats like LUKS or LVMPV don't support labels so we need to skip
the label check in BlivetVolume._reformat.
Resolves: RHEL-29874
(cherry picked from commit a70e8108110e30ebc5e7c404d39339c511f9bd09)
---
library/blivet.py | 3 +++
tests/tests_volume_relabel.yml | 20 ++++++++++++++++++++
2 files changed, 23 insertions(+)
diff --git a/library/blivet.py b/library/blivet.py
index 20389ea..18807de 100644
--- a/library/blivet.py
+++ b/library/blivet.py
@@ -826,6 +826,9 @@ class BlivetVolume(BlivetBase):
if ((fmt is None and self._device.format.type is None)
or (fmt is not None and self._device.format.type == fmt.type)):
# format is the same, no need to run reformatting
+ if not hasattr(self._device.format, "label"):
+ # not all formats support labels
+ return
dev_label = '' if self._device.format.label is None else self._device.format.label
if dev_label != fmt.label:
# ...but the label has changed - schedule modification action
diff --git a/tests/tests_volume_relabel.yml b/tests/tests_volume_relabel.yml
index 8916b73..6624fbd 100644
--- a/tests/tests_volume_relabel.yml
+++ b/tests/tests_volume_relabel.yml
@@ -111,6 +111,26 @@
- name: Verify role results
include_tasks: verify-role-results.yml
+ - name: Format the device to LVMPV which doesn't support labels
+ include_role:
+ name: linux-system-roles.storage
+ vars:
+ storage_volumes:
+ - name: test1
+ type: disk
+ fs_type: lvmpv
+ disks: "{{ unused_disks }}"
+
+ - name: Rerun to check we don't try to relabel preexisting LVMPV (regression test for RHEL-29874)
+ include_role:
+ name: linux-system-roles.storage
+ vars:
+ storage_volumes:
+ - name: test1
+ type: disk
+ fs_type: lvmpv
+ disks: "{{ unused_disks }}"
+
- name: Clean up
include_role:
name: linux-system-roles.storage
--
2.46.0
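For illustration only, a toy sketch (not blivet's real classes) of the guard added to _reformat above: format objects without labelling support simply lack a label attribute, so the label comparison must be skipped rather than attempted.

class Ext4Format:            # hypothetical stand-in for a labelled format
    type = "ext4"
    label = "mydata"

class LUKSFormat:            # LUKS/LVMPV-like: no labelling support
    type = "luks"

def needs_relabel(device_format, wanted_label):
    if not hasattr(device_format, "label"):
        return False         # nothing to compare - skip, as in the patch
    return (device_format.label or "") != wanted_label

print(needs_relabel(Ext4Format(), "newlabel"))  # True
print(needs_relabel(LUKSFormat(), "newlabel"))  # False - no AttributeError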

@@ -0,0 +1,28 @@
From 7abfaeddab812e4eec0c3d3d6bcbabe047722c4f Mon Sep 17 00:00:00 2001
From: Vojtech Trefny <vtrefny@redhat.com>
Date: Wed, 10 Apr 2024 17:08:20 +0200
Subject: [PATCH 03/10] fix: Fix incorrect populate call
`populate()` is a method of DeviceTree, not Blivet.
(cherry picked from commit 6471e65abd429c82df37cbcf07fdf909e4277aa8)
---
library/blivet.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/library/blivet.py b/library/blivet.py
index 18807de..d82b86b 100644
--- a/library/blivet.py
+++ b/library/blivet.py
@@ -630,7 +630,7 @@ class BlivetVolume(BlivetBase):
device.original_format._key_file = self._volume.get('encryption_key')
device.original_format.passphrase = self._volume.get('encryption_password')
if device.isleaf:
- self._blivet.populate()
+ self._blivet.devicetree.populate()
if not device.isleaf:
device = device.children[0]
--
2.46.0

@@ -0,0 +1,174 @@
From 912c33982d9cc412eb72bc9baeab6696e29e7f27 Mon Sep 17 00:00:00 2001
From: Vojtech Trefny <vtrefny@redhat.com>
Date: Tue, 28 May 2024 16:23:48 +0200
Subject: [PATCH 04/10] tests: Add a new 'match_sector_size' argument to
find_unused_disks
Some storage pools cannot be created on disks with different
sector sizes so we want to be able to find unused disks with the
same sector sizes for our tests.
Related: RHEL-25994
(cherry picked from commit 368ecd0214dbaad7c42547eeac0565e51c924546)
---
library/find_unused_disk.py | 79 ++++++++++++++++++++++------------
tests/get_unused_disk.yml | 1 +
tests/unit/test_unused_disk.py | 6 +--
3 files changed, 56 insertions(+), 30 deletions(-)
diff --git a/library/find_unused_disk.py b/library/find_unused_disk.py
index 09b8ad5..098f235 100644
--- a/library/find_unused_disk.py
+++ b/library/find_unused_disk.py
@@ -39,6 +39,11 @@ options:
description: Specifies which disk interface will be accepted (scsi, virtio, nvme).
default: null
type: str
+
+ match_sector_size:
+ description: Specifies whether all returned disks must have the same (logical) sector size.
+ default: false
+ type: bool
'''
EXAMPLES = '''
@@ -138,13 +143,13 @@ def get_partitions(disk_path):
def get_disks(module):
- buf = module.run_command(["lsblk", "-p", "--pairs", "--bytes", "-o", "NAME,TYPE,SIZE,FSTYPE"])[1]
+ buf = module.run_command(["lsblk", "-p", "--pairs", "--bytes", "-o", "NAME,TYPE,SIZE,FSTYPE,LOG-SEC"])[1]
disks = dict()
for line in buf.splitlines():
if not line:
continue
- m = re.search(r'NAME="(?P<path>[^"]*)" TYPE="(?P<type>[^"]*)" SIZE="(?P<size>\d+)" FSTYPE="(?P<fstype>[^"]*)"', line)
+ m = re.search(r'NAME="(?P<path>[^"]*)" TYPE="(?P<type>[^"]*)" SIZE="(?P<size>\d+)" FSTYPE="(?P<fstype>[^"]*)" LOG-SEC="(?P<ssize>\d+)"', line)
if m is None:
module.log(line)
continue
@@ -152,31 +157,16 @@ def get_disks(module):
if m.group('type') != "disk":
continue
- disks[m.group('path')] = {"type": m.group('type'), "size": m.group('size'), "fstype": m.group('fstype')}
+ disks[m.group('path')] = {"type": m.group('type'), "size": m.group('size'),
+ "fstype": m.group('fstype'), "ssize": m.group('ssize')}
return disks
-def run_module():
- """Create the module"""
- module_args = dict(
- max_return=dict(type='int', required=False, default=10),
- min_size=dict(type='str', required=False, default='0'),
- max_size=dict(type='str', required=False, default='0'),
- with_interface=dict(type='str', required=False, default=None)
- )
-
- result = dict(
- changed=False,
- disks=[]
- )
-
- module = AnsibleModule(
- argument_spec=module_args,
- supports_check_mode=True
- )
-
+def filter_disks(module):
+ disks = {}
max_size = Size(module.params['max_size'])
+
for path, attrs in get_disks(module).items():
if is_ignored(path):
continue
@@ -204,14 +194,49 @@ def run_module():
if not can_open(path):
continue
- result['disks'].append(os.path.basename(path))
- if len(result['disks']) >= module.params['max_return']:
- break
+ disks[path] = attrs
+
+ return disks
+
+
+def run_module():
+ """Create the module"""
+ module_args = dict(
+ max_return=dict(type='int', required=False, default=10),
+ min_size=dict(type='str', required=False, default='0'),
+ max_size=dict(type='str', required=False, default='0'),
+ with_interface=dict(type='str', required=False, default=None),
+ match_sector_size=dict(type='bool', required=False, default=False)
+ )
+
+ result = dict(
+ changed=False,
+ disks=[]
+ )
+
+ module = AnsibleModule(
+ argument_spec=module_args,
+ supports_check_mode=True
+ )
+
+ disks = filter_disks(module)
+
+ if module.params['match_sector_size']:
+ # pick the most disks with the same sector size
+ sector_sizes = dict()
+ for path, ss in [(path, disks[path]["ssize"]) for path in disks.keys()]:
+ if ss in sector_sizes.keys():
+ sector_sizes[ss].append(path)
+ else:
+ sector_sizes[ss] = [path]
+ disks = [os.path.basename(p) for p in max(sector_sizes.values(), key=len)]
+ else:
+ disks = [os.path.basename(p) for p in disks.keys()]
- if not result['disks']:
+ if not disks:
result['disks'] = "Unable to find unused disk"
else:
- result['disks'].sort()
+ result['disks'] = sorted(disks)[:int(module.params['max_return'])]
module.exit_json(**result)
diff --git a/tests/get_unused_disk.yml b/tests/get_unused_disk.yml
index 685541f..a61487e 100644
--- a/tests/get_unused_disk.yml
+++ b/tests/get_unused_disk.yml
@@ -19,6 +19,7 @@
max_size: "{{ max_size | d(omit) }}"
max_return: "{{ max_return | d(omit) }}"
with_interface: "{{ storage_test_use_interface | d(omit) }}"
+ match_sector_size: "{{ match_sector_size | d(omit) }}"
register: unused_disks_return
- name: Set unused_disks if necessary
diff --git a/tests/unit/test_unused_disk.py b/tests/unit/test_unused_disk.py
index 74c9cf1..ca44d0f 100644
--- a/tests/unit/test_unused_disk.py
+++ b/tests/unit/test_unused_disk.py
@@ -10,9 +10,9 @@ import os
blkid_data_pttype = [('/dev/sdx', '/dev/sdx: PTTYPE=\"dos\"'),
('/dev/sdy', '/dev/sdy: PTTYPE=\"test\"')]
-blkid_data = [('/dev/sdx', 'UUID=\"hello-1234-56789\" TYPE=\"crypto_LUKS\"'),
- ('/dev/sdy', 'UUID=\"this-1s-a-t3st-f0r-ansible\" VERSION=\"LVM2 001\" TYPE=\"LVM2_member\" USAGE=\"raid\"'),
- ('/dev/sdz', 'LABEL=\"/data\" UUID=\"a12bcdef-345g-67h8-90i1-234j56789k10\" VERSION=\"1.0\" TYPE=\"ext4\" USAGE=\"filesystem\"')]
+blkid_data = [('/dev/sdx', 'UUID=\"hello-1234-56789\" TYPE=\"crypto_LUKS\" LOG-SEC=\"512\"'),
+ ('/dev/sdy', 'UUID=\"this-1s-a-t3st-f0r-ansible\" VERSION=\"LVM2 001\" TYPE=\"LVM2_member\" USAGE=\"raid\" LOG-SEC=\"512\"'),
+ ('/dev/sdz', 'LABEL=\"/data\" UUID=\"a12bcdef-345g-67h8-90i1-234j56789k10\" VERSION=\"1.0\" TYPE=\"ext4\" USAGE=\"filesystem\" LOG-SEC=\"512\"')]
holders_data_none = [('/dev/sdx', ''),
('/dev/dm-99', '')]
--
2.46.0
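The selection logic added to run_module() above condenses to the stand-alone sketch below: group candidate disks by logical sector size and keep the largest group. The device paths and sector sizes are hypothetical sample data.

import os

disks = {
    "/dev/sda": {"ssize": "512"},
    "/dev/sdb": {"ssize": "512"},
    "/dev/sdc": {"ssize": "4096"},
}

sector_sizes = {}
for path, attrs in disks.items():
    sector_sizes.setdefault(attrs["ssize"], []).append(path)

# max(..., key=len) picks the biggest same-sector-size group
chosen = [os.path.basename(p) for p in max(sector_sizes.values(), key=len)]
print(sorted(chosen))  # ['sda', 'sdb']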

@@ -0,0 +1,96 @@
From da871866f07e2990f37b3fdea404bbaf091d81b6 Mon Sep 17 00:00:00 2001
From: Vojtech Trefny <vtrefny@redhat.com>
Date: Thu, 30 May 2024 10:41:26 +0200
Subject: [PATCH 05/10] tests: Require same sector size disks for LVM tests
LVM VGs cannot be created on top of disks with different sector
sizes so for tests that need multiple disks we need to make sure
we get unused disks with the same sector size.
Resolves: RHEL-25994
(cherry picked from commit d8c5938c28417cc905a647ec30246a0fc4d19297)
---
tests/tests_change_fs_use_partitions.yml | 2 +-
tests/tests_create_lvm_cache_then_remove.yml | 1 +
tests/tests_create_thinp_then_remove.yml | 1 +
tests/tests_fatals_cache_volume.yml | 1 +
tests/tests_lvm_multiple_disks_multiple_volumes.yml | 1 +
tests/tests_lvm_pool_members.yml | 1 +
6 files changed, 6 insertions(+), 1 deletion(-)
diff --git a/tests/tests_change_fs_use_partitions.yml b/tests/tests_change_fs_use_partitions.yml
index 52afb7f..87fed69 100644
--- a/tests/tests_change_fs_use_partitions.yml
+++ b/tests/tests_change_fs_use_partitions.yml
@@ -31,7 +31,7 @@
include_tasks: get_unused_disk.yml
vars:
min_size: "{{ volume_size }}"
- max_return: 2
+ max_return: 1
- name: Create an LVM partition with the default file system type
include_role:
diff --git a/tests/tests_create_lvm_cache_then_remove.yml b/tests/tests_create_lvm_cache_then_remove.yml
index 1769a78..6b5d0a5 100644
--- a/tests/tests_create_lvm_cache_then_remove.yml
+++ b/tests/tests_create_lvm_cache_then_remove.yml
@@ -57,6 +57,7 @@
min_size: "{{ volume_group_size }}"
max_return: 2
disks_needed: 2
+ match_sector_size: true
- name: Create a cached LVM logical volume under volume group 'foo'
include_role:
diff --git a/tests/tests_create_thinp_then_remove.yml b/tests/tests_create_thinp_then_remove.yml
index bf6c4b1..2e7f046 100644
--- a/tests/tests_create_thinp_then_remove.yml
+++ b/tests/tests_create_thinp_then_remove.yml
@@ -23,6 +23,7 @@
include_tasks: get_unused_disk.yml
vars:
max_return: 3
+ match_sector_size: true
- name: Create a thinpool device
include_role:
diff --git a/tests/tests_fatals_cache_volume.yml b/tests/tests_fatals_cache_volume.yml
index c14cf3f..fcfdbb8 100644
--- a/tests/tests_fatals_cache_volume.yml
+++ b/tests/tests_fatals_cache_volume.yml
@@ -29,6 +29,7 @@
vars:
max_return: 2
disks_needed: 2
+ match_sector_size: true
- name: Verify that creating a cached partition volume fails
include_tasks: verify-role-failed.yml
diff --git a/tests/tests_lvm_multiple_disks_multiple_volumes.yml b/tests/tests_lvm_multiple_disks_multiple_volumes.yml
index 9a01ec5..68f2e76 100644
--- a/tests/tests_lvm_multiple_disks_multiple_volumes.yml
+++ b/tests/tests_lvm_multiple_disks_multiple_volumes.yml
@@ -29,6 +29,7 @@
min_size: "{{ volume_group_size }}"
max_return: 2
disks_needed: 2
+ match_sector_size: true
- name: >-
Create a logical volume spanning two physical volumes that changes its
diff --git a/tests/tests_lvm_pool_members.yml b/tests/tests_lvm_pool_members.yml
index d1b941d..63c10c7 100644
--- a/tests/tests_lvm_pool_members.yml
+++ b/tests/tests_lvm_pool_members.yml
@@ -59,6 +59,7 @@
vars:
min_size: "{{ volume_group_size }}"
disks_needed: 3
+ match_sector_size: true
- name: Create volume group 'foo' with 3 PVs
include_role:
--
2.46.0

@@ -0,0 +1,41 @@
From 705a9db65a230013a9118481082d2bb548cd113d Mon Sep 17 00:00:00 2001
From: Vojtech Trefny <vtrefny@redhat.com>
Date: Fri, 31 May 2024 06:31:52 +0200
Subject: [PATCH 06/10] fix: Fix 'possibly-used-before-assignment' pylint
issues (#440)
Latest pylint added a new check for values used before assignment.
This fixes these issues found in the blivet module. Some of these
are false positives, some real potential issues.
(cherry picked from commit bfaae50586681bb4b0fcad5df6f6adde2b7c8502)
---
library/blivet.py | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/library/blivet.py b/library/blivet.py
index d82b86b..a6715d9 100644
--- a/library/blivet.py
+++ b/library/blivet.py
@@ -642,6 +642,9 @@ class BlivetVolume(BlivetBase):
self._device = None
return # TODO: see if we can create this device w/ the specified name
+ # pylint doesn't understand that "luks_fmt" is always set when "encrypted" is true
+ # pylint: disable=unknown-option-value
+ # pylint: disable=possibly-used-before-assignment
def _update_from_device(self, param_name):
""" Return True if param_name's value was retrieved from a looked-up device. """
log.debug("Updating volume settings from device: %r", self._device)
@@ -1717,6 +1720,8 @@ class BlivetLVMPool(BlivetPool):
if auto_size_dev_count > 0:
calculated_thinlv_size = available_space / auto_size_dev_count
+ else:
+ calculated_thinlv_size = available_space
for thinlv in thinlvs_to_create:
--
2.46.0
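A contrived reproducer for the new pylint check, mirroring the style of the calculated_thinlv_size fix above (assign a fallback in an else branch):

def average_size(sizes):
    if len(sizes) > 0:
        avg = sum(sizes) / len(sizes)
    # pylint would flag: possibly-used-before-assignment - "avg" may be unset
    return avg

def average_size_fixed(sizes):
    if len(sizes) > 0:
        avg = sum(sizes) / len(sizes)
    else:
        avg = 0  # mirrors the calculated_thinlv_size else branch in the patch
    return avg

print(average_size_fixed([]))  # 0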

@@ -0,0 +1,54 @@
From 18edc9af26684f03e44fe2e22c82a8f93182da4a Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Wed, 5 Jun 2024 08:49:19 -0600
Subject: [PATCH 07/10] test: lsblk can return LOG_SEC or LOG-SEC
get_unused_disk is broken on some systems because `lsblk ... LOG-SEC` can
return `LOG_SEC` with an underscore instead of the requested
`LOG-SEC` with a dash.
(cherry picked from commit 64333ce8aa42f4b961c39a443ac43cc6590097b3)
---
library/find_unused_disk.py | 4 ++--
tests/get_unused_disk.yml | 9 +++++++++
2 files changed, 11 insertions(+), 2 deletions(-)
diff --git a/library/find_unused_disk.py b/library/find_unused_disk.py
index 098f235..270fb58 100644
--- a/library/find_unused_disk.py
+++ b/library/find_unused_disk.py
@@ -149,9 +149,9 @@ def get_disks(module):
if not line:
continue
- m = re.search(r'NAME="(?P<path>[^"]*)" TYPE="(?P<type>[^"]*)" SIZE="(?P<size>\d+)" FSTYPE="(?P<fstype>[^"]*)" LOG-SEC="(?P<ssize>\d+)"', line)
+ m = re.search(r'NAME="(?P<path>[^"]*)" TYPE="(?P<type>[^"]*)" SIZE="(?P<size>\d+)" FSTYPE="(?P<fstype>[^"]*)" LOG[_-]SEC="(?P<ssize>\d+)"', line)
if m is None:
- module.log(line)
+ module.log("Line did not match: " + line)
continue
if m.group('type') != "disk":
diff --git a/tests/get_unused_disk.yml b/tests/get_unused_disk.yml
index a61487e..0402770 100644
--- a/tests/get_unused_disk.yml
+++ b/tests/get_unused_disk.yml
@@ -22,6 +22,15 @@
match_sector_size: "{{ match_sector_size | d(omit) }}"
register: unused_disks_return
+- name: Debug why there are no unused disks
+ shell: |
+ set -x
+ exec 1>&2
+ lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC
+ journalctl -ex
+ changed_when: false
+ when: "'Unable to find unused disk' in unused_disks_return.disks"
+
- name: Set unused_disks if necessary
set_fact:
unused_disks: "{{ unused_disks_return.disks }}"
--
2.46.0
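A quick, self-contained check that the relaxed LOG[_-]SEC pattern above matches lsblk output in both spellings; the device line itself is illustrative:

import re

pattern = (r'NAME="(?P<path>[^"]*)" TYPE="(?P<type>[^"]*)" '
           r'SIZE="(?P<size>\d+)" FSTYPE="(?P<fstype>[^"]*)" '
           r'LOG[_-]SEC="(?P<ssize>\d+)"')

for line in (
    'NAME="/dev/sda" TYPE="disk" SIZE="21474836480" FSTYPE="" LOG-SEC="512"',
    'NAME="/dev/sda" TYPE="disk" SIZE="21474836480" FSTYPE="" LOG_SEC="512"',
):
    m = re.search(pattern, line)
    print(m.group("path"), m.group("ssize"))  # matches both variants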

@@ -0,0 +1,34 @@
From aa6e494963a3bded3b1ca7ef5a81e0106e68d5bc Mon Sep 17 00:00:00 2001
From: Jan Pokorny <japokorn@redhat.com>
Date: Thu, 6 Jun 2024 11:54:48 +0200
Subject: [PATCH 08/10] test: lvm pool members test fix
tests_lvm_pool_members started to fail. It tried to create a device with
a requested size (20m) that was less than the minimal allowed size (300m) for
that type of volume. The role automatically resized the device to the allowed
size, which led to a discrepancy between the actual and expected size values.
Increasing the requested device size to be the same as or larger than the
minimum fixes the issue.
(cherry picked from commit ee740b7b14d09e09a26dd5eb95e8950aeb15147d)
---
tests/tests_lvm_pool_members.yml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tests/tests_lvm_pool_members.yml b/tests/tests_lvm_pool_members.yml
index 63c10c7..320626e 100644
--- a/tests/tests_lvm_pool_members.yml
+++ b/tests/tests_lvm_pool_members.yml
@@ -6,7 +6,7 @@
storage_safe_mode: false
storage_use_partitions: true
volume_group_size: '10g'
- volume_size: '20m'
+ volume_size: '300m'
tags:
- tests::lvm
--
2.46.0

@@ -0,0 +1,40 @@
From d2b59ac3758f51ffac5156e9f006b7ce9d8a28eb Mon Sep 17 00:00:00 2001
From: Vojtech Trefny <vtrefny@redhat.com>
Date: Tue, 4 Jun 2024 10:30:03 +0200
Subject: [PATCH 09/10] fix: Fix expected error message in tests_misc.yml
Different versions of blivet return a different error message when
trying to create a filesystem with invalid parameters.
On Fedora 39 and older:
"Failed to commit changes to disk: (FSError('format failed: 1'),
'/dev/mapper/foo-test1')"
On Fedora 40 and newer:
"Failed to commit changes to disk: Process reported exit code 1:
mke2fs: invalid block size - 512\n"
(cherry picked from commit 7ef66d85bd52f339483b24dbb8bc66e22054b378)
---
tests/tests_misc.yml | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/tests/tests_misc.yml b/tests/tests_misc.yml
index 363d843..432ec16 100644
--- a/tests/tests_misc.yml
+++ b/tests/tests_misc.yml
@@ -68,8 +68,9 @@
include_tasks: verify-role-failed.yml
vars:
__storage_failed_regex: >-
- Failed to commit changes to disk.*FSError.*format failed:
- 1.*/dev/mapper/foo-test1
+ Failed to commit changes to disk.*(FSError.*format failed:
+ 1.*/dev/mapper/foo-test1|
+ Process reported exit code 1: mke2fs: invalid block size - 512)
__storage_failed_msg: >-
Unexpected behavior when creating ext4 filesystem with invalid
parameter
--
2.46.0
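An illustrative check that the widened regex matches both error message variants quoted in the commit message; the pattern is written here as a single Python string rather than YAML folded style:

import re

regex = (r"Failed to commit changes to disk.*(FSError.*format failed: "
         r"1.*/dev/mapper/foo-test1|"
         r"Process reported exit code 1: mke2fs: invalid block size - 512)")

old_msg = ("Failed to commit changes to disk: (FSError('format failed: 1'), "
           "'/dev/mapper/foo-test1')")
new_msg = ("Failed to commit changes to disk: Process reported exit code 1: "
           "mke2fs: invalid block size - 512")

for msg in (old_msg, new_msg):
    print(bool(re.search(regex, msg)))  # True, True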

@@ -0,0 +1,180 @@
From a86f7e013fe881e477b65509363bbb5af851662f Mon Sep 17 00:00:00 2001
From: Vojtech Trefny <vtrefny@redhat.com>
Date: Fri, 12 Apr 2024 14:45:15 +0200
Subject: [PATCH 10/10] tests: Use blockdev_info to check volume mount points
We can use the information from `lsblk` we already use for other
checks instead of using the Ansible mountinfo facts. This makes
the check simpler and also makes it easier to check for Stratis
volume mount points, because of the complicated Stratis devices
structure in /dev.
(cherry picked from commit 10e657bde68ffa9495b2441ed9f472cf79edbb19)
---
library/blockdev_info.py | 2 +-
tests/test-verify-volume-fs.yml | 51 ++++++++++++++++--------------
tests/test-verify-volume-mount.yml | 48 +++++-----------------------
3 files changed, 37 insertions(+), 64 deletions(-)
diff --git a/library/blockdev_info.py b/library/blockdev_info.py
index 13858fb..ec018de 100644
--- a/library/blockdev_info.py
+++ b/library/blockdev_info.py
@@ -64,7 +64,7 @@ def fixup_md_path(path):
def get_block_info(module):
- buf = module.run_command(["lsblk", "-o", "NAME,FSTYPE,LABEL,UUID,TYPE,SIZE", "-p", "-P", "-a"])[1]
+ buf = module.run_command(["lsblk", "-o", "NAME,FSTYPE,LABEL,UUID,TYPE,SIZE,MOUNTPOINT", "-p", "-P", "-a"])[1]
info = dict()
for line in buf.splitlines():
dev = dict()
diff --git a/tests/test-verify-volume-fs.yml b/tests/test-verify-volume-fs.yml
index 8e488c5..63b2770 100644
--- a/tests/test-verify-volume-fs.yml
+++ b/tests/test-verify-volume-fs.yml
@@ -1,26 +1,31 @@
---
# type
-- name: Verify fs type
- assert:
- that: storage_test_blkinfo.info[storage_test_volume._device].fstype ==
- storage_test_volume.fs_type or
- (storage_test_blkinfo.info[storage_test_volume._device].fstype | length
- == 0 and storage_test_volume.fs_type == "unformatted")
- when: storage_test_volume.fs_type and _storage_test_volume_present
+- name: Check volume filesystem
+ when: storage_test_volume.type != "stratis"
+ block:
+ - name: Verify fs type
+ assert:
+ that: storage_test_blkinfo.info[storage_test_volume._device].fstype ==
+ storage_test_volume.fs_type or
+ (storage_test_blkinfo.info[storage_test_volume._device].fstype | length
+ == 0 and storage_test_volume.fs_type == "unformatted")
+ when:
+ - storage_test_volume.fs_type
+ - _storage_test_volume_present
-# label
-- name: Verify fs label
- assert:
- that: storage_test_blkinfo.info[storage_test_volume._device].label ==
- storage_test_volume.fs_label
- msg: >-
- Volume '{{ storage_test_volume.name }}' labels do not match when they
- should
- ('{{ storage_test_blkinfo.info[storage_test_volume._device].label }}',
- '{{ storage_test_volume.fs_label }}')
- when:
- - _storage_test_volume_present | bool
- # label for GFS2 is set manually with the extra `-t` fs_create_options
- # so we can't verify it here because it was not set with fs_label so
- # the label from blkinfo doesn't match the expected "empty" fs_label
- - storage_test_volume.fs_type != "gfs2"
+ # label
+ - name: Verify fs label
+ assert:
+ that: storage_test_blkinfo.info[storage_test_volume._device].label ==
+ storage_test_volume.fs_label
+ msg: >-
+ Volume '{{ storage_test_volume.name }}' labels do not match when they
+ should
+ ('{{ storage_test_blkinfo.info[storage_test_volume._device].label }}',
+ '{{ storage_test_volume.fs_label }}')
+ when:
+ - _storage_test_volume_present | bool
+ # label for GFS2 is set manually with the extra `-t` fs_create_options
+ # so we can't verify it here because it was not set with fs_label so
+ # the label from blkinfo doesn't match the expected "empty" fs_label
+ - storage_test_volume.fs_type != "gfs2"
diff --git a/tests/test-verify-volume-mount.yml b/tests/test-verify-volume-mount.yml
index cf86b34..17d2a01 100644
--- a/tests/test-verify-volume-mount.yml
+++ b/tests/test-verify-volume-mount.yml
@@ -15,20 +15,13 @@
- name: Set some facts
set_fact:
- storage_test_mount_device_matches: "{{ ansible_mounts |
- selectattr('device', 'match', '^' ~ storage_test_device_path ~ '$') |
- list }}"
- storage_test_mount_point_matches: "{{ ansible_mounts |
- selectattr('mount', 'match',
- '^' ~ mount_prefix ~ storage_test_volume.mount_point ~ '$') |
- list if storage_test_volume.mount_point else [] }}"
- storage_test_mount_expected_match_count: "{{ 1
- if _storage_test_volume_present and storage_test_volume.mount_point and
- storage_test_volume.mount_point.startswith('/')
- else 0 }}"
storage_test_swap_expected_matches: "{{ 1 if
_storage_test_volume_present and
storage_test_volume.fs_type == 'swap' else 0 }}"
+ storage_test_mount_expected_mount_point: "{{
+ '[SWAP]' if storage_test_volume.fs_type == 'swap' else
+ '' if storage_test_volume.mount_point == 'none' else
+ mount_prefix + storage_test_volume.mount_point if storage_test_volume.mount_point else '' }}"
vars:
# assumes /opt which is /var/opt in ostree
mount_prefix: "{{ '/var' if __storage_is_ostree | d(false)
@@ -50,23 +43,12 @@
#
- name: Verify the current mount state by device
assert:
- that: storage_test_mount_device_matches | length ==
- storage_test_mount_expected_match_count | int
+ that: storage_test_blkinfo.info[storage_test_volume._device].mountpoint ==
+ storage_test_mount_expected_mount_point
msg: >-
Found unexpected mount state for volume
'{{ storage_test_volume.name }}' device
- when: _storage_test_volume_present and storage_test_volume.mount_point
-
-#
-# Verify mount directory (state, owner, group, permissions).
-#
-- name: Verify the current mount state by mount point
- assert:
- that: storage_test_mount_point_matches | length ==
- storage_test_mount_expected_match_count | int
- msg: >-
- Found unexpected mount state for volume
- '{{ storage_test_volume.name }}' mount point
+ when: _storage_test_volume_present
- name: Verify mount directory user
assert:
@@ -104,18 +86,6 @@
storage_test_volume.mount_point and
storage_test_volume.mount_mode
-#
-# Verify mount fs type.
-#
-- name: Verify the mount fs type
- assert:
- that: storage_test_mount_point_matches[0].fstype ==
- storage_test_volume.fs_type
- msg: >-
- Found unexpected mount state for volume
- '{{ storage_test_volume.name }} fs type
- when: storage_test_mount_expected_match_count | int == 1
-
#
# Verify swap status.
#
@@ -145,10 +115,8 @@
- name: Unset facts
set_fact:
- storage_test_mount_device_matches: null
- storage_test_mount_point_matches: null
- storage_test_mount_expected_match_count: null
storage_test_swap_expected_matches: null
storage_test_sys_node: null
storage_test_swaps: null
storage_test_found_mount_stat: null
+ storage_test_mount_expected_mount_point: null
--
2.46.0
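A rough sketch of how lsblk -P "pairs" output with the new MOUNTPOINT column can be parsed; the device line is made up, and blockdev_info's actual parser may differ:

import shlex

line = ('NAME="/dev/mapper/foo-test1" FSTYPE="ext4" LABEL="" UUID="abcd" '
        'TYPE="lvm" SIZE="5368709120" MOUNTPOINT="/opt/test1"')

# shlex strips the quoting; each token is then KEY=VALUE
dev = dict(token.split("=", 1) for token in shlex.split(line))
print(dev["NAME"], dev["MOUNTPOINT"])  # /dev/mapper/foo-test1 /opt/test1
# A swap volume reports MOUNTPOINT="[SWAP]", which is why the test change
# above compares the expected mount point against '[SWAP]' for swap.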

@@ -0,0 +1,26 @@
From 36acf32d30d106159ba9f2fa88d723d9577c9f15 Mon Sep 17 00:00:00 2001
From: Samuel Bancal <Samuel.Bancal@groupe-t2i.com>
Date: Thu, 14 Mar 2024 10:15:11 +0100
Subject: [PATCH 101/115] fix: Add support for --check flag
Fix: https://github.com/linux-system-roles/podman/issues/133
(cherry picked from commit a47e6a95e2a5ee70714bf315d3e03310365d3650)
---
tasks/main.yml | 1 +
1 file changed, 1 insertion(+)
diff --git a/tasks/main.yml b/tasks/main.yml
index 1b9ca4a..61f1d1c 100644
--- a/tasks/main.yml
+++ b/tasks/main.yml
@@ -21,6 +21,7 @@
when: (__podman_packages | difference(ansible_facts.packages))
- name: Get podman version
+ check_mode: false
command: podman --version
changed_when: false
register: __podman_version_output
--
2.46.0

@@ -0,0 +1,56 @@
From 53f83475c59092e2c23d1957c2fc24c8ca4b6ad9 Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Tue, 9 Apr 2024 18:27:25 -0600
Subject: [PATCH 102/115] fix: use correct user for cancel linger file name
Cause: When processing a list of kube or quadlet items, the
code was using the user id associated with the list, not the
item, to specify the linger filename.
Consequence: The linger file does not exist, so the code
does not cancel linger for the actual user.
Fix: Use the correct username to construct the linger filename.
Result: Lingering is cancelled for the correct users.
QE: The test is now in tests_basic.yml
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
(cherry picked from commit 67b88b9aa0a1b1123c2ae24bb7ca4a527924cd13)
---
tasks/cancel_linger.yml | 2 +-
tests/tests_basic.yml | 7 +++++++
2 files changed, 8 insertions(+), 1 deletion(-)
diff --git a/tasks/cancel_linger.yml b/tasks/cancel_linger.yml
index 761778b..ede71fe 100644
--- a/tasks/cancel_linger.yml
+++ b/tasks/cancel_linger.yml
@@ -59,4 +59,4 @@
- __podman_linger_secrets.stdout == ""
changed_when: true
args:
- removes: /var/lib/systemd/linger/{{ __podman_user }}
+ removes: /var/lib/systemd/linger/{{ __podman_linger_user }}
diff --git a/tests/tests_basic.yml b/tests/tests_basic.yml
index a9f01c9..d4f9238 100644
--- a/tests/tests_basic.yml
+++ b/tests/tests_basic.yml
@@ -409,6 +409,13 @@
^[ ]*podman-kube@.+-{{ item[0] }}[.]yml[.]service[ ]+loaded[
]+active
+ - name: Ensure no linger
+ stat:
+ path: /var/lib/systemd/linger/{{ item[1] }}
+ loop: "{{ test_names_users }}"
+ register: __stat
+ failed_when: __stat.stat.exists
+
rescue:
- name: Dump journal
command: journalctl -ex
--
2.46.0

@@ -0,0 +1,28 @@
From dd93ef65b0d1929184d458914386086fca8b8d7a Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Wed, 10 Apr 2024 16:06:28 -0600
Subject: [PATCH 103/115] test: do not check for root linger
Do not check if there is a linger file for root.
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
(cherry picked from commit 2b29e049daa28ba6c3b38f514cff9c62be5f3caf)
---
tests/tests_basic.yml | 1 +
1 file changed, 1 insertion(+)
diff --git a/tests/tests_basic.yml b/tests/tests_basic.yml
index d4f9238..d578b15 100644
--- a/tests/tests_basic.yml
+++ b/tests/tests_basic.yml
@@ -412,6 +412,7 @@
- name: Ensure no linger
stat:
path: /var/lib/systemd/linger/{{ item[1] }}
+ when: item[1] != "root"
loop: "{{ test_names_users }}"
register: __stat
failed_when: __stat.stat.exists
--
2.46.0

@@ -0,0 +1,210 @@
From b2e79348094ea8d89b71727d82a80a9f3cfbb1ce Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Tue, 9 Apr 2024 18:28:19 -0600
Subject: [PATCH 104/115] fix: do not use become for changing hostdir
ownership, and expose subuid/subgid info
When creating host directories, do not use `become`, because if
it needs to change ownership, that must be done by `root`, not
as the rootless podman user.
In order to test this, I have changed the role to export the subuid and subgid
information for the rootless users as two dictionaries:
`podman_subuid_info` and `podman_subgid_info`. See `README.md` for
usage.
NOTE that depending on the namespace used by your containers, you might not
be able to use the subuid and subgid information, which comes from `getsubids`
if available, or directly from the files `/etc/subuid` and `/etc/subgid` on
the host.
QE: The test tests_basic.yml has been extended for this.
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
(cherry picked from commit 3d02eb725355088df6c707717547f5ad6b7c400c)
---
README.md | 28 ++++++++++++
tasks/create_update_kube_spec.yml | 2 -
tasks/create_update_quadlet_spec.yml | 2 -
tasks/handle_user_group.yml | 66 +++++++++++++++++++++-------
tests/tests_basic.yml | 2 +
5 files changed, 79 insertions(+), 21 deletions(-)
diff --git a/README.md b/README.md
index ea1edfb..e5a7c12 100644
--- a/README.md
+++ b/README.md
@@ -418,6 +418,34 @@ PodmanArgs=--secret=my-app-pwd,type=env,target=MYAPP_PASSWORD
{% endif %}
```
+### podman_subuid_info, podman_subgid_info
+
+The role needs to ensure any users and groups are present in the subuid and
+subgid information. Once it extracts this data, it will be available in
+`podman_subuid_info` and `podman_subgid_info`. These are dicts. The key is the
+user or group name, and the value is a `dict` with two fields:
+
+* `start` - the start of the id range for that user or group, as an `int`
+* `range` - the id range for that user or group, as an `int`
+
+```yaml
+podman_host_directories:
+ "/var/lib/db":
+ mode: "0777"
+ owner: "{{ 1001 + podman_subuid_info['dbuser']['start'] - 1 }}"
+ group: "{{ 1001 + podman_subgid_info['dbgroup']['start'] - 1 }}"
+```
+
+Where `1001` is the uid for user `dbuser`, and `1001` is the gid for group
+`dbgroup`.
+
+**NOTE**: depending on the namespace used by your containers, you might not be
+able to use the subuid and subgid information, which comes from `getsubids` if
+available, or directly from the files `/etc/subuid` and `/etc/subgid` on the
+host. See
+[podman user namespace modes](https://www.redhat.com/sysadmin/rootless-podman-user-namespace-modes)
+for more information.
+
## Example Playbooks
Create rootless container with volume mount:
diff --git a/tasks/create_update_kube_spec.yml b/tasks/create_update_kube_spec.yml
index 95d7d35..7a8ba9c 100644
--- a/tasks/create_update_kube_spec.yml
+++ b/tasks/create_update_kube_spec.yml
@@ -32,8 +32,6 @@
__defaults: "{{ {'path': item} | combine(__podman_hostdirs_defaults) |
combine(__owner_group) }}"
loop: "{{ __podman_volumes }}"
- become: "{{ __podman_rootless | ternary(true, omit) }}"
- become_user: "{{ __podman_rootless | ternary(__podman_user, omit) }}"
when:
- podman_create_host_directories | bool
- __podman_volumes | d([]) | length > 0
diff --git a/tasks/create_update_quadlet_spec.yml b/tasks/create_update_quadlet_spec.yml
index c3e0095..062c105 100644
--- a/tasks/create_update_quadlet_spec.yml
+++ b/tasks/create_update_quadlet_spec.yml
@@ -16,8 +16,6 @@
__defaults: "{{ {'path': item} | combine(__podman_hostdirs_defaults) |
combine(__owner_group) }}"
loop: "{{ __podman_volumes }}"
- become: "{{ __podman_rootless | ternary(true, omit) }}"
- become_user: "{{ __podman_rootless | ternary(__podman_user, omit) }}"
when:
- podman_create_host_directories | bool
- __podman_volumes | d([]) | length > 0
diff --git a/tasks/handle_user_group.yml b/tasks/handle_user_group.yml
index 17300b6..ea9984d 100644
--- a/tasks/handle_user_group.yml
+++ b/tasks/handle_user_group.yml
@@ -52,10 +52,26 @@
- name: Check user with getsubids
command: getsubids {{ __podman_user | quote }}
changed_when: false
+ register: __podman_register_subuids
- name: Check group with getsubids
command: getsubids -g {{ __podman_group_name | quote }}
changed_when: false
+ register: __podman_register_subgids
+
+ - name: Set user subuid and subgid info
+ set_fact:
+ podman_subuid_info: "{{ podman_subuid_info | d({}) |
+ combine({__podman_user:
+ {'start': __subuid_data[2] | int, 'range': __subuid_data[3] | int}})
+ if __subuid_data | length > 0 else podman_subuid_info | d({}) }}"
+ podman_subgid_info: "{{ podman_subgid_info | d({}) |
+ combine({__podman_group_name:
+ {'start': __subgid_data[2] | int, 'range': __subgid_data[3] | int}})
+ if __subgid_data | length > 0 else podman_subgid_info | d({}) }}"
+ vars:
+ __subuid_data: "{{ __podman_register_subuids.stdout.split() | list }}"
+ __subgid_data: "{{ __podman_register_subgids.stdout.split() | list }}"
- name: Check subuid, subgid files if no getsubids
when:
@@ -63,32 +79,48 @@
- __podman_user not in ["root", "0"]
- __podman_group not in ["root", "0"]
block:
- - name: Check if user is in subuid file
- find:
- path: /etc
- pattern: subuid
- use_regex: true
- contains: "^{{ __podman_user }}:.*$"
- register: __podman_uid_line_found
+ - name: Get subuid file
+ slurp:
+ path: /etc/subuid
+ register: __podman_register_subuids
+
+ - name: Get subgid file
+ slurp:
+ path: /etc/subgid
+ register: __podman_register_subgids
+
+ - name: Set user subuid and subgid info
+ set_fact:
+ podman_subuid_info: "{{ podman_subuid_info | d({}) |
+ combine({__podman_user:
+ {'start': __subuid_data[1] | int, 'range': __subuid_data[2] | int}})
+ if __subuid_data else podman_subuid_info | d({}) }}"
+ podman_subgid_info: "{{ podman_subgid_info | d({}) |
+ combine({__podman_group_name:
+ {'start': __subgid_data[1] | int, 'range': __subgid_data[2] | int}})
+ if __subgid_data else podman_subgid_info | d({}) }}"
+ vars:
+ __subuid_match_line: "{{
+ (__podman_register_subuids.content | b64decode).split('\n') | list |
+ select('match', '^' ~ __podman_user ~ ':') | list }}"
+ __subuid_data: "{{ __subuid_match_line[0].split(':') | list
+ if __subuid_match_line else null }}"
+ __subgid_match_line: "{{
+ (__podman_register_subgids.content | b64decode).split('\n') | list |
+ select('match', '^' ~ __podman_group_name ~ ':') | list }}"
+ __subgid_data: "{{ __subgid_match_line[0].split(':') | list
+ if __subgid_match_line else null }}"
- name: Fail if user not in subuid file
fail:
msg: >
The given podman user [{{ __podman_user }}] is not in the
/etc/subuid file - cannot continue
- when: not __podman_uid_line_found.matched
-
- - name: Check if group is in subgid file
- find:
- path: /etc
- pattern: subgid
- use_regex: true
- contains: "^{{ __podman_group_name }}:.*$"
- register: __podman_gid_line_found
+ when: not __podman_user in podman_subuid_info
- name: Fail if group not in subgid file
fail:
msg: >
The given podman group [{{ __podman_group_name }}] is not in the
/etc/subgid file - cannot continue
- when: not __podman_gid_line_found.matched
+ when: not __podman_group_name in podman_subuid_info
diff --git a/tests/tests_basic.yml b/tests/tests_basic.yml
index d578b15..121c3a7 100644
--- a/tests/tests_basic.yml
+++ b/tests/tests_basic.yml
@@ -8,6 +8,8 @@
podman_host_directories:
"/tmp/httpd1-create":
mode: "0777"
+ owner: "{{ 1001 + podman_subuid_info['user1']['start'] - 1 }}"
+ group: "{{ 1001 + podman_subgid_info['user1']['start'] - 1 }}"
podman_run_as_user: root
test_names_users:
- [httpd1, user1, 1001]
--
2.46.0
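A hedged sketch of the subuid bookkeeping described in the README excerpt above, assuming the usual getsubids and /etc/subuid output formats; the user name and id ranges are hypothetical:

getsubids_out = "0: dbuser 165536 65536"   # fields 2 and 3 = start, range
subuid_line = "dbuser:165536:65536"        # fields 1 and 2 = start, range

fields = getsubids_out.split()
info_from_getsubids = {"start": int(fields[2]), "range": int(fields[3])}

fields = subuid_line.split(":")
info_from_file = {"start": int(fields[1]), "range": int(fields[2])}

assert info_from_getsubids == info_from_file

# Container uid 1001 maps to this host uid (container uid 1 -> start):
host_uid = 1001 + info_from_file["start"] - 1
print(host_uid)  # 166536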

@@ -0,0 +1,42 @@
From 7978bed4d52e44feae114ba56e9b5035b7dd2c1c Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Wed, 17 Apr 2024 10:14:21 -0600
Subject: [PATCH 105/115] chore: change no_log false to true; fix comment
Forgot to change a `no_log: false` back to `no_log: true` when debugging.
Fix an error in a comment
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
(cherry picked from commit b37ee8fc7e12317660cca765760c32bd4ba91035)
---
tasks/handle_secret.yml | 2 +-
vars/main.yml | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/tasks/handle_secret.yml b/tasks/handle_secret.yml
index b3677ef..02bc15b 100644
--- a/tasks/handle_secret.yml
+++ b/tasks/handle_secret.yml
@@ -39,7 +39,7 @@
become: "{{ __podman_rootless | ternary(true, omit) }}"
become_user: "{{ __podman_rootless | ternary(__podman_user, omit) }}"
when: not __podman_rootless or __podman_xdg_stat.stat.exists
- no_log: false
+ no_log: true
vars:
__params: |
{% set rc = {} %}
diff --git a/vars/main.yml b/vars/main.yml
index 47293c5..38402ff 100644
--- a/vars/main.yml
+++ b/vars/main.yml
@@ -74,5 +74,5 @@ __podman_user_kube_path: "/.config/containers/ansible-kubernetes.d"
# location for system quadlet files
__podman_system_quadlet_path: "/etc/containers/systemd"
-# location for user kubernetes yaml files
+# location for user quadlet files
__podman_user_quadlet_path: "/.config/containers/systemd"
--
2.46.0

@@ -0,0 +1,214 @@
From 07053a415b4a0bde557f28f6f607250915e908e6 Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Wed, 17 Apr 2024 11:35:52 -0600
Subject: [PATCH 106/115] fix: make kube cleanup idempotent
Cause: The task that calls podman_play was not checking if the kube yaml
file existed when cleaning up.
Consequence: The task would give an error that the pod could not be
removed.
Fix: Do not attempt to remove the pod if the kube yaml file does not
exist.
Result: Calling the podman role repeatedly to remove a kube spec
will not fail and will not report changes for subsequent removals.
QE: tests_basic.yml has been changed to check for this case
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
(cherry picked from commit e506f39b6608613a5801190091a72b013b85a888)
---
tasks/cleanup_kube_spec.yml | 9 +++++-
tests/tests_basic.yml | 62 ++++++++++++++++++++++++++-----------
2 files changed, 52 insertions(+), 19 deletions(-)
diff --git a/tasks/cleanup_kube_spec.yml b/tasks/cleanup_kube_spec.yml
index c864179..b6b47bd 100644
--- a/tasks/cleanup_kube_spec.yml
+++ b/tasks/cleanup_kube_spec.yml
@@ -25,6 +25,11 @@
vars:
__service_error: Could not find the requested service
+- name: Check if kube file exists
+ stat:
+ path: "{{ __podman_kube_file }}"
+ register: __podman_kube_file_stat
+
- name: Remove pod/containers
containers.podman.podman_play: "{{ __podman_kube_spec |
combine({'kube_file': __podman_kube_file}) }}"
@@ -33,7 +38,9 @@
become: "{{ __podman_rootless | ternary(true, omit) }}"
become_user: "{{ __podman_rootless | ternary(__podman_user, omit) }}"
register: __podman_removed
- when: not __podman_rootless or __podman_xdg_stat.stat.exists
+ when:
+ - not __podman_rootless or __podman_xdg_stat.stat.exists
+ - __podman_kube_file_stat.stat.exists
- name: Remove kubernetes yaml file
file:
diff --git a/tests/tests_basic.yml b/tests/tests_basic.yml
index 121c3a7..b8ddc50 100644
--- a/tests/tests_basic.yml
+++ b/tests/tests_basic.yml
@@ -6,13 +6,16 @@
- vars/test_vars.yml
vars:
podman_host_directories:
- "/tmp/httpd1-create":
+ "{{ __test_tmpdir.path ~ '/httpd1-create' }}":
mode: "0777"
- owner: "{{ 1001 + podman_subuid_info['user1']['start'] - 1 }}"
- group: "{{ 1001 + podman_subgid_info['user1']['start'] - 1 }}"
+ owner: "{{ 1001 +
+ podman_subuid_info[__podman_test_username]['start'] - 1 }}"
+ group: "{{ 1001 +
+ podman_subgid_info[__podman_test_username]['start'] - 1 }}"
podman_run_as_user: root
+ __podman_test_username: podman_basic_user
test_names_users:
- - [httpd1, user1, 1001]
+ - [httpd1, "{{ __podman_test_username }}", 1001]
- [httpd2, root, 0]
- [httpd3, root, 0]
podman_create_host_directories: true
@@ -26,7 +29,7 @@
- state: started
debug: true
log_level: debug
- run_as_user: user1
+ run_as_user: "{{ __podman_test_username }}"
kube_file_content:
apiVersion: v1
kind: Pod
@@ -57,10 +60,10 @@
volumes:
- name: www
hostPath:
- path: /tmp/httpd1
+ path: "{{ __test_tmpdir.path ~ '/httpd1' }}"
- name: create
hostPath:
- path: /tmp/httpd1-create
+ path: "{{ __test_tmpdir.path ~ '/httpd1-create' }}"
- state: started
debug: true
log_level: debug
@@ -94,10 +97,10 @@
volumes:
- name: www
hostPath:
- path: /tmp/httpd2
+ path: "{{ __test_tmpdir.path ~ '/httpd2' }}"
- name: create
hostPath:
- path: /tmp/httpd2-create
+ path: "{{ __test_tmpdir.path ~ '/httpd2-create' }}"
__podman_kube_file_content: |
apiVersion: v1
kind: Pod
@@ -128,11 +131,23 @@
volumes:
- name: www
hostPath:
- path: /tmp/httpd3
+ path: "{{ __test_tmpdir.path ~ '/httpd3' }}"
- name: create
hostPath:
- path: /tmp/httpd3-create
+ path: "{{ __test_tmpdir.path ~ '/httpd3-create' }}"
tasks:
+ - name: Create tmpdir for testing
+ tempfile:
+ state: directory
+ prefix: lsr_
+ suffix: _podman
+ register: __test_tmpdir
+
+ - name: Change tmpdir permissions
+ file:
+ path: "{{ __test_tmpdir.path }}"
+ mode: "0777"
+
- name: Run basic tests
vars:
__podman_use_kube_file:
@@ -156,7 +171,7 @@
- name: Create user
user:
- name: user1
+ name: "{{ __podman_test_username }}"
uid: 1001
- name: Create tempfile for kube_src
@@ -171,12 +186,12 @@
copy:
content: "{{ __podman_kube_file_content }}"
dest: "{{ __kube_file_src.path }}"
- mode: 0600
+ mode: "0600"
delegate_to: localhost
- name: Create host directories for data
file:
- path: /tmp/{{ item[0] }}
+ path: "{{ __test_tmpdir.path ~ '/' ~ item[0] }}"
state: directory
mode: "0755"
owner: "{{ item[1] }}"
@@ -184,7 +199,7 @@
- name: Create data files
copy:
- dest: /tmp/{{ item[0] }}/index.txt
+ dest: "{{ __test_tmpdir.path ~ '/' ~ item[0] ~ '/index.txt' }}"
content: "123"
mode: "0644"
owner: "{{ item[1] }}"
@@ -315,7 +330,7 @@
loop: [15001, 15002]
- name: Check host directories
- command: ls -alrtF /tmp/{{ item[0] }}-create
+ command: ls -alrtF {{ __test_tmpdir.path ~ '/' ~ item[0] }}-create
loop: "{{ test_names_users }}"
changed_when: false
@@ -419,6 +434,18 @@
register: __stat
failed_when: __stat.stat.exists
+ - name: Remove pods and units again - test idempotence
+ include_role:
+ name: linux-system-roles.podman
+ vars:
+ # noqa jinja[spacing]
+ podman_kube_specs: "{{ __podman_kube_specs |
+ union([__podman_use_kube_file]) |
+ map('combine', {'state':'absent'}) | list }}"
+ podman_create_host_directories: false
+ podman_firewall: []
+ podman_selinux_ports: []
+
rescue:
- name: Dump journal
command: journalctl -ex
@@ -438,9 +465,8 @@
- name: Clean up host directories
file:
- path: /tmp/{{ item }}
+ path: "{{ __test_tmpdir.path }}"
state: absent
- loop: [httpd1, httpd2, httpd3]
tags:
- tests::cleanup
--
2.46.0

@@ -0,0 +1,35 @@
From 0a8ce32cdc093c388718d4fe28007259ac86854d Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Thu, 18 Apr 2024 08:39:33 -0600
Subject: [PATCH 107/115] chore: use none in jinja code, not null
Must use `none` in Jinja code, not `null`, which is used in YAML.
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
(cherry picked from commit fdf98595e9ecdacfed80d40c2539b18c7d715368)
---
tasks/handle_user_group.yml | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tasks/handle_user_group.yml b/tasks/handle_user_group.yml
index ea9984d..0b98d99 100644
--- a/tasks/handle_user_group.yml
+++ b/tasks/handle_user_group.yml
@@ -104,12 +104,12 @@
(__podman_register_subuids.content | b64decode).split('\n') | list |
select('match', '^' ~ __podman_user ~ ':') | list }}"
__subuid_data: "{{ __subuid_match_line[0].split(':') | list
- if __subuid_match_line else null }}"
+ if __subuid_match_line else none }}"
__subgid_match_line: "{{
(__podman_register_subgids.content | b64decode).split('\n') | list |
select('match', '^' ~ __podman_group_name ~ ':') | list }}"
__subgid_data: "{{ __subgid_match_line[0].split(':') | list
- if __subgid_match_line else null }}"
+ if __subgid_match_line else none }}"
- name: Fail if user not in subuid file
fail:
--
2.46.0
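A small demonstration, assuming the jinja2 Python package (whose templating Ansible builds on), of why null must become none: null is not a Jinja literal, just an undefined name.

from jinja2 import Environment, StrictUndefined, exceptions

env = Environment(undefined=StrictUndefined)  # Ansible errors on undefined too

print(env.from_string("{{ 'x' if false else none }}").render())  # "None"

try:
    env.from_string("{{ 'x' if false else null }}").render()
except exceptions.UndefinedError as err:
    print("null fails:", err)  # 'null' is undefined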

@@ -0,0 +1,44 @@
From 4824891e596c197e49557d9d2679cabc76e598e9 Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Fri, 19 Apr 2024 07:33:41 -0600
Subject: [PATCH 108/115] uid 1001 conflicts on some test systems
(cherry picked from commit 5b7ad16d23b78f6f0f68638c0d69015ebb26b3b0)
---
tests/tests_basic.yml | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/tests/tests_basic.yml b/tests/tests_basic.yml
index b8ddc50..c91cc5f 100644
--- a/tests/tests_basic.yml
+++ b/tests/tests_basic.yml
@@ -8,14 +8,14 @@
podman_host_directories:
"{{ __test_tmpdir.path ~ '/httpd1-create' }}":
mode: "0777"
- owner: "{{ 1001 +
+ owner: "{{ 3001 +
podman_subuid_info[__podman_test_username]['start'] - 1 }}"
- group: "{{ 1001 +
+ group: "{{ 3001 +
podman_subgid_info[__podman_test_username]['start'] - 1 }}"
podman_run_as_user: root
__podman_test_username: podman_basic_user
test_names_users:
- - [httpd1, "{{ __podman_test_username }}", 1001]
+ - [httpd1, "{{ __podman_test_username }}", 3001]
- [httpd2, root, 0]
- [httpd3, root, 0]
podman_create_host_directories: true
@@ -172,7 +172,7 @@
- name: Create user
user:
name: "{{ __podman_test_username }}"
- uid: 1001
+ uid: 3001
- name: Create tempfile for kube_src
tempfile:
--
2.46.0

@@ -0,0 +1,26 @@
From 2343663a17a42e71aa5b78ad5deca72823a0afb0 Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Mon, 3 Jun 2024 13:15:07 -0600
Subject: [PATCH 109/115] fix ansible-lint octal value issues
(cherry picked from commit c684c68151f106b4a494bed865e138a0b54ecb43)
---
tests/tests_quadlet_demo.yml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tests/tests_quadlet_demo.yml b/tests/tests_quadlet_demo.yml
index a719f9c..259a694 100644
--- a/tests/tests_quadlet_demo.yml
+++ b/tests/tests_quadlet_demo.yml
@@ -98,7 +98,7 @@
get_url:
url: https://localhost:8000
dest: /run/out
- mode: 0600
+ mode: "0600"
validate_certs: false
register: __web_status
until: __web_status is success
--
2.46.0
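The reason ansible-lint flags bare octal modes, shown with PyYAML (the parser Ansible uses): an unquoted 0600 is a YAML 1.1 octal integer, not the string "0600".

import yaml

print(yaml.safe_load("mode: 0600"))    # {'mode': 384}   - octal int!
print(yaml.safe_load('mode: "0600"'))  # {'mode': '0600'} - what was meant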

@@ -0,0 +1,308 @@
From 6a5722ce2a591c57e50ac4ff702c810bf452431d Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Thu, 6 Jun 2024 15:20:22 -0600
Subject: [PATCH 110/115] fix: grab name of network to remove from quadlet file
Cause: The code was using "systemd-" + name of quadlet for
the network name when removing networks.
Consequence: If the quadlet had a different NetworkName, the
removal would fail.
Fix: Grab the network quadlet file and grab the NetworkName from
the file to use to remove the network.
Result: The removal of quadlet networks will work both with and
without a custom NetworkName in the quadlet file.
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
This also adds a fix for el10 and Fedora which installs the iptables-nft
package to allow rootless podman to manage networks using nftables.
(cherry picked from commit bcd5a750250736a07605c72f98e50c1babcddf16)
---
.ostree/packages-runtime-CentOS-10.txt | 3 ++
.ostree/packages-runtime-Fedora.txt | 3 ++
.ostree/packages-runtime-RedHat-10.txt | 3 ++
tasks/cleanup_quadlet_spec.yml | 43 +++++++++++++++++++++++++-
tests/files/quadlet-basic.network | 5 +++
tests/tests_quadlet_basic.yml | 31 +++++++------------
tests/tests_quadlet_demo.yml | 19 +++---------
vars/CentOS_10.yml | 7 +++++
vars/Fedora.yml | 7 +++++
vars/RedHat_10.yml | 7 +++++
10 files changed, 94 insertions(+), 34 deletions(-)
create mode 100644 .ostree/packages-runtime-CentOS-10.txt
create mode 100644 .ostree/packages-runtime-Fedora.txt
create mode 100644 .ostree/packages-runtime-RedHat-10.txt
create mode 100644 tests/files/quadlet-basic.network
create mode 100644 vars/CentOS_10.yml
create mode 100644 vars/Fedora.yml
create mode 100644 vars/RedHat_10.yml
diff --git a/.ostree/packages-runtime-CentOS-10.txt b/.ostree/packages-runtime-CentOS-10.txt
new file mode 100644
index 0000000..16b8eae
--- /dev/null
+++ b/.ostree/packages-runtime-CentOS-10.txt
@@ -0,0 +1,3 @@
+iptables-nft
+podman
+shadow-utils-subid
diff --git a/.ostree/packages-runtime-Fedora.txt b/.ostree/packages-runtime-Fedora.txt
new file mode 100644
index 0000000..16b8eae
--- /dev/null
+++ b/.ostree/packages-runtime-Fedora.txt
@@ -0,0 +1,3 @@
+iptables-nft
+podman
+shadow-utils-subid
diff --git a/.ostree/packages-runtime-RedHat-10.txt b/.ostree/packages-runtime-RedHat-10.txt
new file mode 100644
index 0000000..16b8eae
--- /dev/null
+++ b/.ostree/packages-runtime-RedHat-10.txt
@@ -0,0 +1,3 @@
+iptables-nft
+podman
+shadow-utils-subid
diff --git a/tasks/cleanup_quadlet_spec.yml b/tasks/cleanup_quadlet_spec.yml
index ba68771..8ea069b 100644
--- a/tasks/cleanup_quadlet_spec.yml
+++ b/tasks/cleanup_quadlet_spec.yml
@@ -30,6 +30,43 @@
vars:
__service_error: Could not find the requested service
+- name: See if quadlet file exists
+ stat:
+ path: "{{ __podman_quadlet_file }}"
+ register: __podman_network_stat
+ when: __podman_quadlet_type == "network"
+
+- name: Get network quadlet network name
+ when:
+ - __podman_quadlet_type == "network"
+ - __podman_network_stat.stat.exists
+ block:
+ - name: Create tempdir
+ tempfile:
+ prefix: podman_
+ suffix: _lsr.ini
+ state: directory
+ register: __podman_network_tmpdir
+ delegate_to: localhost
+
+ - name: Fetch the network quadlet
+ fetch:
+ dest: "{{ __podman_network_tmpdir.path }}/network.ini"
+ src: "{{ __podman_quadlet_file }}"
+ flat: true
+
+ - name: Get the network name
+ set_fact:
+ __podman_network_name: "{{
+ lookup('ini', 'NetworkName section=Network file=' ~
+ __podman_network_tmpdir.path ~ '/network.ini') }}"
+ always:
+ - name: Remove tempdir
+ file:
+ path: "{{ __podman_network_tmpdir.path }}"
+ state: absent
+ delegate_to: localhost
+
- name: Remove quadlet file
file:
path: "{{ __podman_quadlet_file }}"
@@ -62,10 +99,14 @@
changed_when: true
- name: Remove network
- command: podman network rm systemd-{{ __podman_quadlet_name }}
+ command: podman network rm {{ __name | quote }}
changed_when: true
when: __podman_quadlet_type == "network"
environment:
XDG_RUNTIME_DIR: "{{ __podman_xdg_runtime_dir }}"
become: "{{ __podman_rootless | ternary(true, omit) }}"
become_user: "{{ __podman_rootless | ternary(__podman_user, omit) }}"
+ vars:
+ __name: "{{ __podman_network_name if
+ __podman_network_name | d('') | length > 0
+ else 'systemd-' ~ __podman_quadlet_name }}"
diff --git a/tests/files/quadlet-basic.network b/tests/files/quadlet-basic.network
new file mode 100644
index 0000000..7db6e0d
--- /dev/null
+++ b/tests/files/quadlet-basic.network
@@ -0,0 +1,5 @@
+[Network]
+Subnet=192.168.29.0/24
+Gateway=192.168.29.1
+Label=app=wordpress
+NetworkName=quadlet-basic
diff --git a/tests/tests_quadlet_basic.yml b/tests/tests_quadlet_basic.yml
index 1b472be..2891b1a 100644
--- a/tests/tests_quadlet_basic.yml
+++ b/tests/tests_quadlet_basic.yml
@@ -19,12 +19,8 @@
state: present
data: "{{ __json_secret_data | string }}"
__podman_quadlet_specs:
- - name: quadlet-basic
- type: network
- Network:
- Subnet: 192.168.29.0/24
- Gateway: 192.168.29.1
- Label: app=wordpress
+ - file_src: files/quadlet-basic.network
+ state: started
- name: quadlet-basic-mysql
type: volume
Volume: {}
@@ -197,7 +193,8 @@
failed_when: not __stat.stat.exists
# must clean up networks last - cannot remove a network
- # in use by a container
+ # in use by a container - using reverse assumes the network
+ # is defined first in the list
- name: Cleanup user
include_role:
name: linux-system-roles.podman
@@ -206,10 +203,7 @@
__absent: {"state":"absent"}
podman_secrets: "{{ __podman_secrets | map('combine', __absent) |
list }}"
- podman_quadlet_specs: "{{ ((__podman_quadlet_specs |
- rejectattr('type', 'match', '^network$') | list) +
- (__podman_quadlet_specs |
- selectattr('type', 'match', '^network$') | list)) |
+ podman_quadlet_specs: "{{ __podman_quadlet_specs | reverse |
map('combine', __absent) | list }}"
- name: Ensure no linger
@@ -242,6 +236,11 @@
changed_when: false
rescue:
+ - name: Check AVCs
+ command: grep type=AVC /var/log/audit/audit.log
+ changed_when: false
+ failed_when: false
+
- name: Dump journal
command: journalctl -ex
changed_when: false
@@ -258,10 +257,7 @@
__absent: {"state":"absent"}
podman_secrets: "{{ __podman_secrets |
map('combine', __absent) | list }}"
- podman_quadlet_specs: "{{ ((__podman_quadlet_specs |
- rejectattr('type', 'match', '^network$') | list) +
- (__podman_quadlet_specs |
- selectattr('type', 'match', '^network$') | list)) |
+ podman_quadlet_specs: "{{ __podman_quadlet_specs | reverse |
map('combine', __absent) | list }}"
- name: Remove test user
@@ -277,10 +273,7 @@
__absent: {"state":"absent"}
podman_secrets: "{{ __podman_secrets |
map('combine', __absent) | list }}"
- podman_quadlet_specs: "{{ ((__podman_quadlet_specs |
- rejectattr('type', 'match', '^network$') | list) +
- (__podman_quadlet_specs |
- selectattr('type', 'match', '^network$') | list)) |
+ podman_quadlet_specs: "{{ __podman_quadlet_specs | reverse |
map('combine', __absent) | list }}"
rescue:
diff --git a/tests/tests_quadlet_demo.yml b/tests/tests_quadlet_demo.yml
index 259a694..b6c27ef 100644
--- a/tests/tests_quadlet_demo.yml
+++ b/tests/tests_quadlet_demo.yml
@@ -11,7 +11,7 @@
podman_use_copr: false # disable copr for CI testing
podman_fail_if_too_old: false
podman_create_host_directories: true
- podman_quadlet_specs:
+ __podman_quadlet_specs:
- file_src: quadlet-demo.network
- file_src: quadlet-demo-mysql.volume
- template_src: quadlet-demo-mysql.container.j2
@@ -45,6 +45,7 @@
include_role:
name: linux-system-roles.podman
vars:
+ podman_quadlet_specs: "{{ __podman_quadlet_specs }}"
podman_pull_retry: true
podman_secrets:
- name: mysql-root-password-container
@@ -149,19 +150,9 @@
include_role:
name: linux-system-roles.podman
vars:
- podman_quadlet_specs:
- - template_src: quadlet-demo-mysql.container.j2
- state: absent
- - file_src: quadlet-demo-mysql.volume
- state: absent
- - file_src: envoy-proxy-configmap.yml
- state: absent
- - file_src: quadlet-demo.kube
- state: absent
- - template_src: quadlet-demo.yml.j2
- state: absent
- - file_src: quadlet-demo.network
- state: absent
+ __absent: {"state":"absent"}
+ podman_quadlet_specs: "{{ __podman_quadlet_specs |
+ reverse | map('combine', __absent) | list }}"
podman_secrets:
- name: mysql-root-password-container
state: absent
diff --git a/vars/CentOS_10.yml b/vars/CentOS_10.yml
new file mode 100644
index 0000000..83589d5
--- /dev/null
+++ b/vars/CentOS_10.yml
@@ -0,0 +1,7 @@
+# SPDX-License-Identifier: MIT
+---
+# shadow-utils-subid for getsubids
+__podman_packages:
+ - iptables-nft
+ - podman
+ - shadow-utils-subid
diff --git a/vars/Fedora.yml b/vars/Fedora.yml
new file mode 100644
index 0000000..83589d5
--- /dev/null
+++ b/vars/Fedora.yml
@@ -0,0 +1,7 @@
+# SPDX-License-Identifier: MIT
+---
+# shadow-utils-subid for getsubids
+__podman_packages:
+ - iptables-nft
+ - podman
+ - shadow-utils-subid
diff --git a/vars/RedHat_10.yml b/vars/RedHat_10.yml
new file mode 100644
index 0000000..83589d5
--- /dev/null
+++ b/vars/RedHat_10.yml
@@ -0,0 +1,7 @@
+# SPDX-License-Identifier: MIT
+---
+# shadow-utils-subid for getsubids
+__podman_packages:
+ - iptables-nft
+ - podman
+ - shadow-utils-subid
--
2.46.0
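The mechanics of the fix can be seen in isolation: a quadlet file is an INI-style unit, so the role fetches it locally and reads the Network section with the `ini` lookup, falling back to the `systemd-<name>` default only when no NetworkName is set. A sketch under the assumption that the fetched copy sits at a hypothetical local path:

```yaml
- name: Read NetworkName from a fetched quadlet file (sketch)
  set_fact:
    __network_name: "{{ lookup('ini',
      'NetworkName section=Network file=/tmp/podman_lsr/network.ini') }}"

- name: Remove the network by its real name
  command: podman network rm {{ __network_name | quote }}
  changed_when: true
```

With the tests/files/quadlet-basic.network file added above, this removes `quadlet-basic` rather than the incorrect `systemd-quadlet-basic`.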

View File

@ -0,0 +1,615 @@
From e11e1ff198f0840fcef6cbe75c74ca69dd22f694 Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Mon, 8 Jul 2024 16:35:29 -0600
Subject: [PATCH 111/115] fix: proper cleanup for networks; ensure cleanup of
resources
Cause: The code was not managing network systemd quadlet units.
Consequence: Network systemd quadlet units were not being stopped and
disabled. Subsequent runs would fail due to the network units not
being cleaned up properly.
Fix: The role manages network systemd quadlet units, including stopping
and removing.
Result: Systemd quadlet network units are properly cleaned up.
In addition, this improves the removal of all types of quadlet resources
and includes code that can be used to test and debug quadlet resource
removal.
(cherry picked from commit a85908ec7f6f8e19908f8d4d18d6d7b64ab1d31e)
---
README.md | 6 +
defaults/main.yml | 4 +
tasks/cancel_linger.yml | 2 +-
tasks/cleanup_quadlet_spec.yml | 188 +++++++++++++-----
tasks/handle_quadlet_spec.yml | 2 +
tasks/manage_linger.yml | 2 +-
tasks/parse_quadlet_file.yml | 57 ++++++
tests/files/quadlet-basic.network | 2 +-
.../templates/quadlet-demo-mysql.container.j2 | 2 +-
tests/tests_quadlet_basic.yml | 69 ++++++-
tests/tests_quadlet_demo.yml | 33 +++
11 files changed, 309 insertions(+), 58 deletions(-)
create mode 100644 tasks/parse_quadlet_file.yml
diff --git a/README.md b/README.md
index e5a7c12..8b6496e 100644
--- a/README.md
+++ b/README.md
@@ -388,6 +388,12 @@ a newer version. For example, if you attempt to manage quadlet or secrets with
podman 4.3 or earlier, the role will fail with an error. If you want the role to
be skipped instead, use `podman_fail_if_too_old: false`.
+### podman_prune_images
+
+Boolean - default is `false` - by default, the role will not prune unused images
+when removing quadlets and other resources. Set this to `true` to tell the role
+to remove unused images when cleaning up.
+
## Variables Exported by the Role
### podman_version
diff --git a/defaults/main.yml b/defaults/main.yml
index 92e4eb8..02453c9 100644
--- a/defaults/main.yml
+++ b/defaults/main.yml
@@ -109,3 +109,7 @@ podman_continue_if_pull_fails: false
# If true, if a pull attempt fails, it will be retried according
# to the default Ansible `until` behavior.
podman_pull_retry: false
+
+# Prune images when removing quadlets/kube specs -
+# this will remove all unused/unreferenced images
+podman_prune_images: false
diff --git a/tasks/cancel_linger.yml b/tasks/cancel_linger.yml
index ede71fe..f233fc4 100644
--- a/tasks/cancel_linger.yml
+++ b/tasks/cancel_linger.yml
@@ -49,7 +49,7 @@
when: __podman_xdg_stat.stat.exists
- name: Cancel linger if no more resources are in use
- command: loginctl disable-linger {{ __podman_linger_user }}
+ command: loginctl disable-linger {{ __podman_linger_user | quote }}
when:
- __podman_xdg_stat.stat.exists
- __podman_container_info.containers | length == 0
diff --git a/tasks/cleanup_quadlet_spec.yml b/tasks/cleanup_quadlet_spec.yml
index 8ea069b..df69243 100644
--- a/tasks/cleanup_quadlet_spec.yml
+++ b/tasks/cleanup_quadlet_spec.yml
@@ -33,39 +33,11 @@
- name: See if quadlet file exists
stat:
path: "{{ __podman_quadlet_file }}"
- register: __podman_network_stat
- when: __podman_quadlet_type == "network"
+ register: __podman_quadlet_stat
-- name: Get network quadlet network name
- when:
- - __podman_quadlet_type == "network"
- - __podman_network_stat.stat.exists
- block:
- - name: Create tempdir
- tempfile:
- prefix: podman_
- suffix: _lsr.ini
- state: directory
- register: __podman_network_tmpdir
- delegate_to: localhost
-
- - name: Fetch the network quadlet
- fetch:
- dest: "{{ __podman_network_tmpdir.path }}/network.ini"
- src: "{{ __podman_quadlet_file }}"
- flat: true
-
- - name: Get the network name
- set_fact:
- __podman_network_name: "{{
- lookup('ini', 'NetworkName section=Network file=' ~
- __podman_network_tmpdir.path ~ '/network.ini') }}"
- always:
- - name: Remove tempdir
- file:
- path: "{{ __podman_network_tmpdir.path }}"
- state: absent
- delegate_to: localhost
+- name: Parse quadlet file
+ include_tasks: parse_quadlet_file.yml
+ when: __podman_quadlet_stat.stat.exists
- name: Remove quadlet file
file:
@@ -73,40 +45,158 @@
state: absent
register: __podman_file_removed
+- name: Refresh systemd # noqa no-handler
+ systemd:
+ daemon_reload: true
+ scope: "{{ __podman_systemd_scope }}"
+ become: "{{ __podman_rootless | ternary(true, omit) }}"
+ become_user: "{{ __podman_rootless | ternary(__podman_user, omit) }}"
+ environment:
+ XDG_RUNTIME_DIR: "{{ __podman_xdg_runtime_dir }}"
+ when: __podman_file_removed is changed # noqa no-handler
+
+- name: Remove managed resource
+ command: >-
+ podman {{ 'rm' if __podman_quadlet_type == 'container'
+ else 'network rm' if __podman_quadlet_type == 'network'
+ else 'volume rm' if __podman_quadlet_type == 'volume' }}
+ {{ __podman_quadlet_resource_name | quote }}
+ register: __podman_rm
+ failed_when:
+ - __podman_rm is failed
+ - not __podman_rm.stderr is search(__str)
+ changed_when: __podman_rm.rc == 0
+ become: "{{ __podman_rootless | ternary(true, omit) }}"
+ become_user: "{{ __podman_rootless | ternary(__podman_user, omit) }}"
+ environment:
+ XDG_RUNTIME_DIR: "{{ __podman_xdg_runtime_dir }}"
+ vars:
+ __str: " found: no such "
+ __type_to_name: # map quadlet type to quadlet property name
+ container:
+ section: Container
+ name: ContainerName
+ network:
+ section: Network
+ name: NetworkName
+ volume:
+ section: Volume
+ name: VolumeName
+ __section: "{{ __type_to_name[__podman_quadlet_type]['section'] }}"
+ __name: "{{ __type_to_name[__podman_quadlet_type]['name'] }}"
+ __podman_quadlet_resource_name: "{{
+ __podman_quadlet_parsed[__section][__name]
+ if __section in __podman_quadlet_parsed
+ and __name in __podman_quadlet_parsed[__section]
+ else 'systemd-' ~ __podman_quadlet_name }}"
+ when:
+ - __podman_file_removed is changed # noqa no-handler
+ - __podman_quadlet_type in __type_to_name
+ - not __podman_rootless or __podman_xdg_stat.stat.exists
+ - __podman_service_name | length > 0
+ no_log: true
+
+- name: Remove volumes
+ command: podman volume rm {{ item | quote }}
+ loop: "{{ __volume_names }}"
+ when:
+ - __podman_file_removed is changed # noqa no-handler
+ - not __podman_rootless or __podman_xdg_stat.stat.exists
+ - __podman_service_name | length == 0
+ - __podman_quadlet_file.endswith(".yml") or
+ __podman_quadlet_file.endswith(".yaml")
+ changed_when: true
+ vars:
+ __volumes: "{{ __podman_quadlet_parsed |
+ selectattr('apiVersion', 'defined') | selectattr('spec', 'defined') |
+ map(attribute='spec') | selectattr('volumes', 'defined') |
+ map(attribute='volumes') | flatten }}"
+ __config_maps: "{{ __volumes | selectattr('configMap', 'defined') |
+ map(attribute='configMap') | selectattr('name', 'defined') |
+ map(attribute='name') | list }}"
+ __secrets: "{{ __volumes | selectattr('secret', 'defined') |
+ map(attribute='secret') | selectattr('secretName', 'defined') |
+ map(attribute='secretName') | list }}"
+ __pvcs: "{{ __volumes | selectattr('persistentVolumeClaim', 'defined') |
+ map(attribute='persistentVolumeClaim') | selectattr('claimName', 'defined') |
+ map(attribute='claimName') | list }}"
+ __volume_names: "{{ __config_maps + __secrets + __pvcs }}"
+ no_log: true
+
+- name: Clear parsed podman variable
+ set_fact:
+ __podman_quadlet_parsed: null
+
+- name: Prune images no longer in use
+ command: podman image prune --all -f
+ when:
+ - podman_prune_images | bool
+ - not __podman_rootless or __podman_xdg_stat.stat.exists
+ changed_when: true
+ become: "{{ __podman_rootless | ternary(true, omit) }}"
+ become_user: "{{ __podman_rootless | ternary(__podman_user, omit) }}"
+ environment:
+ XDG_RUNTIME_DIR: "{{ __podman_xdg_runtime_dir }}"
+
- name: Manage linger
include_tasks: manage_linger.yml
vars:
__podman_item_state: absent
-- name: Cleanup container resources
- when: __podman_file_removed is changed # noqa no-handler
+- name: Collect information for testing/debugging
+ when:
+ - __podman_test_debug | d(false)
+ - not __podman_rootless or __podman_xdg_stat.stat.exists
block:
- - name: Reload systemctl # noqa no-handler
- systemd:
- daemon_reload: true
- scope: "{{ __podman_systemd_scope }}"
+ - name: For testing and debugging - images
+ command: podman images -n
+ register: __podman_test_debug_images
+ changed_when: false
become: "{{ __podman_rootless | ternary(true, omit) }}"
become_user: "{{ __podman_rootless | ternary(__podman_user, omit) }}"
environment:
XDG_RUNTIME_DIR: "{{ __podman_xdg_runtime_dir }}"
- - name: Prune images no longer in use
- command: podman image prune -f
+ - name: For testing and debugging - volumes
+ command: podman volume ls -n
+ register: __podman_test_debug_volumes
+ changed_when: false
+ become: "{{ __podman_rootless | ternary(true, omit) }}"
+ become_user: "{{ __podman_rootless | ternary(__podman_user, omit) }}"
environment:
XDG_RUNTIME_DIR: "{{ __podman_xdg_runtime_dir }}"
+
+ - name: For testing and debugging - containers
+ command: podman ps --noheading
+ register: __podman_test_debug_containers
+ changed_when: false
become: "{{ __podman_rootless | ternary(true, omit) }}"
become_user: "{{ __podman_rootless | ternary(__podman_user, omit) }}"
- changed_when: true
+ environment:
+ XDG_RUNTIME_DIR: "{{ __podman_xdg_runtime_dir }}"
+
+ - name: For testing and debugging - networks
+ command: podman network ls -n -q
+ register: __podman_test_debug_networks
+ changed_when: false
+ become: "{{ __podman_rootless | ternary(true, omit) }}"
+ become_user: "{{ __podman_rootless | ternary(__podman_user, omit) }}"
+ environment:
+ XDG_RUNTIME_DIR: "{{ __podman_xdg_runtime_dir }}"
- - name: Remove network
- command: podman network rm {{ __name | quote }}
- changed_when: true
- when: __podman_quadlet_type == "network"
+ - name: For testing and debugging - secrets
+ command: podman secret ls -n -q
+ register: __podman_test_debug_secrets
+ changed_when: false
+ no_log: true
+ become: "{{ __podman_rootless | ternary(true, omit) }}"
+ become_user: "{{ __podman_rootless | ternary(__podman_user, omit) }}"
environment:
XDG_RUNTIME_DIR: "{{ __podman_xdg_runtime_dir }}"
+
+ - name: For testing and debugging - services
+ service_facts:
become: "{{ __podman_rootless | ternary(true, omit) }}"
become_user: "{{ __podman_rootless | ternary(__podman_user, omit) }}"
- vars:
- __name: "{{ __podman_network_name if
- __podman_network_name | d('') | length > 0
- else 'systemd-' ~ __podman_quadlet_name }}"
+ environment:
+ XDG_RUNTIME_DIR: "{{ __podman_xdg_runtime_dir }}"
diff --git a/tasks/handle_quadlet_spec.yml b/tasks/handle_quadlet_spec.yml
index ce6ef67..851c8a3 100644
--- a/tasks/handle_quadlet_spec.yml
+++ b/tasks/handle_quadlet_spec.yml
@@ -129,6 +129,8 @@
if __podman_quadlet_type in ['container', 'kube']
else __podman_quadlet_name ~ '-volume.service'
if __podman_quadlet_type in ['volume']
+ else __podman_quadlet_name ~ '-network.service'
+ if __podman_quadlet_type in ['network']
else none }}"
- name: Set per-container variables part 4
diff --git a/tasks/manage_linger.yml b/tasks/manage_linger.yml
index b506b70..be69490 100644
--- a/tasks/manage_linger.yml
+++ b/tasks/manage_linger.yml
@@ -10,7 +10,7 @@
- __podman_item_state | d('present') != 'absent'
block:
- name: Enable linger if needed
- command: loginctl enable-linger {{ __podman_user }}
+ command: loginctl enable-linger {{ __podman_user | quote }}
when: __podman_rootless | bool
args:
creates: /var/lib/systemd/linger/{{ __podman_user }}
diff --git a/tasks/parse_quadlet_file.yml b/tasks/parse_quadlet_file.yml
new file mode 100644
index 0000000..5f5297f
--- /dev/null
+++ b/tasks/parse_quadlet_file.yml
@@ -0,0 +1,57 @@
+---
+# Input:
+# * __podman_quadlet_file - path to quadlet file to parse
+# Output:
+# * __podman_quadlet_parsed - dict
+- name: Slurp quadlet file
+ slurp:
+ path: "{{ __podman_quadlet_file }}"
+ register: __podman_quadlet_raw
+ no_log: true
+
+- name: Parse quadlet file
+ set_fact:
+ __podman_quadlet_parsed: |-
+ {% set rv = {} %}
+ {% set section = ["DEFAULT"] %}
+ {% for line in __val %}
+ {% if line.startswith("[") %}
+ {% set val = line.replace("[", "").replace("]", "") %}
+ {% set _ = section.__setitem__(0, val) %}
+ {% else %}
+ {% set ary = line.split("=", 1) %}
+ {% set key = ary[0] %}
+ {% set val = ary[1] %}
+ {% if key in rv.get(section[0], {}) %}
+ {% set curval = rv[section[0]][key] %}
+ {% if curval is string %}
+ {% set newary = [curval, val] %}
+ {% set _ = rv[section[0]].__setitem__(key, newary) %}
+ {% else %}
+ {% set _ = rv[section[0]][key].append(val) %}
+ {% endif %}
+ {% else %}
+ {% set _ = rv.setdefault(section[0], {}).__setitem__(key, val) %}
+ {% endif %}
+ {% endif %}
+ {% endfor %}
+ {{ rv }}
+ vars:
+ __val: "{{ (__podman_quadlet_raw.content | b64decode).split('\n') |
+ select | reject('match', '#') | list }}"
+ when: __podman_service_name | length > 0
+ no_log: true
+
+- name: Parse quadlet yaml file
+ set_fact:
+ __podman_quadlet_parsed: "{{ __podman_quadlet_raw.content | b64decode |
+ from_yaml_all }}"
+ when:
+ - __podman_service_name | length == 0
+ - __podman_quadlet_file.endswith(".yml") or
+ __podman_quadlet_file.endswith(".yaml")
+ no_log: true
+
+- name: Reset raw variable
+ set_fact:
+ __podman_quadlet_raw: null
diff --git a/tests/files/quadlet-basic.network b/tests/files/quadlet-basic.network
index 7db6e0d..5b002ba 100644
--- a/tests/files/quadlet-basic.network
+++ b/tests/files/quadlet-basic.network
@@ -2,4 +2,4 @@
Subnet=192.168.29.0/24
Gateway=192.168.29.1
Label=app=wordpress
-NetworkName=quadlet-basic
+NetworkName=quadlet-basic-name
diff --git a/tests/templates/quadlet-demo-mysql.container.j2 b/tests/templates/quadlet-demo-mysql.container.j2
index c84f0e8..92097d4 100644
--- a/tests/templates/quadlet-demo-mysql.container.j2
+++ b/tests/templates/quadlet-demo-mysql.container.j2
@@ -9,7 +9,7 @@ Volume=/tmp/quadlet_demo:/var/lib/quadlet_demo:Z
Network=quadlet-demo.network
{% if podman_version is version("4.5", ">=") %}
Secret=mysql-root-password-container,type=env,target=MYSQL_ROOT_PASSWORD
-HealthCmd=/usr/bin/true
+HealthCmd=/bin/true
HealthOnFailure=kill
{% else %}
PodmanArgs=--secret=mysql-root-password-container,type=env,target=MYSQL_ROOT_PASSWORD
diff --git a/tests/tests_quadlet_basic.yml b/tests/tests_quadlet_basic.yml
index 2891b1a..0fdced4 100644
--- a/tests/tests_quadlet_basic.yml
+++ b/tests/tests_quadlet_basic.yml
@@ -21,7 +21,14 @@
__podman_quadlet_specs:
- file_src: files/quadlet-basic.network
state: started
+ - name: quadlet-basic-unused-network
+ type: network
+ Network: {}
- name: quadlet-basic-mysql
+ type: volume
+ Volume:
+ VolumeName: quadlet-basic-mysql-name
+ - name: quadlet-basic-unused-volume
type: volume
Volume: {}
- name: quadlet-basic-mysql
@@ -30,7 +37,7 @@
WantedBy: default.target
Container:
Image: "{{ mysql_image }}"
- ContainerName: quadlet-basic-mysql
+ ContainerName: quadlet-basic-mysql-name
Volume: quadlet-basic-mysql.volume:/var/lib/mysql
Network: quadlet-basic.network
# Once 4.5 is released change this line to use the quadlet Secret key
@@ -192,13 +199,14 @@
register: __stat
failed_when: not __stat.stat.exists
- # must clean up networks last - cannot remove a network
- # in use by a container - using reverse assumes the network
- # is defined first in the list
+ # must clean up in the reverse order of creating - and
+ # ensure networks are removed last
- name: Cleanup user
include_role:
name: linux-system-roles.podman
vars:
+ podman_prune_images: true
+ __podman_test_debug: true
podman_run_as_user: user_quadlet_basic
__absent: {"state":"absent"}
podman_secrets: "{{ __podman_secrets | map('combine', __absent) |
@@ -206,6 +214,22 @@
podman_quadlet_specs: "{{ __podman_quadlet_specs | reverse |
map('combine', __absent) | list }}"
+ - name: Ensure no resources
+ assert:
+ that:
+ - __podman_test_debug_images.stdout == ""
+ - __podman_test_debug_networks.stdout_lines |
+ reject("match", "^podman$") |
+ reject("match", "^podman-default-kube-network$") |
+ list | length == 0
+ - __podman_test_debug_volumes.stdout == ""
+ - __podman_test_debug_containers.stdout == ""
+ - __podman_test_debug_secrets.stdout == ""
+ - ansible_facts["services"] | dict2items |
+ rejectattr("value.status", "match", "not-found") |
+ selectattr("key", "match", "quadlet-demo") |
+ list | length == 0
+
- name: Ensure no linger
stat:
path: /var/lib/systemd/linger/user_quadlet_basic
@@ -230,12 +254,28 @@
- quadlet-basic-mysql.volume
- name: Check JSON
- command: podman exec quadlet-basic-mysql cat /tmp/test.json
+ command: podman exec quadlet-basic-mysql-name cat /tmp/test.json
register: __result
failed_when: __result.stdout != __json_secret_data
changed_when: false
rescue:
+ - name: Debug3
+ shell: |
+ set -x
+ set -o pipefail
+ exec 1>&2
+ #podman volume rm --all
+ #podman network prune -f
+ podman volume ls
+ podman network ls
+ podman secret ls
+ podman container ls
+ podman pod ls
+ podman images
+ systemctl list-units | grep quadlet
+ changed_when: false
+
- name: Check AVCs
command: grep type=AVC /var/log/audit/audit.log
changed_when: false
@@ -253,6 +293,7 @@
include_role:
name: linux-system-roles.podman
vars:
+ podman_prune_images: true
podman_run_as_user: user_quadlet_basic
__absent: {"state":"absent"}
podman_secrets: "{{ __podman_secrets |
@@ -270,12 +311,30 @@
include_role:
name: linux-system-roles.podman
vars:
+ podman_prune_images: true
+ __podman_test_debug: true
__absent: {"state":"absent"}
podman_secrets: "{{ __podman_secrets |
map('combine', __absent) | list }}"
podman_quadlet_specs: "{{ __podman_quadlet_specs | reverse |
map('combine', __absent) | list }}"
+ - name: Ensure no resources
+ assert:
+ that:
+ - __podman_test_debug_images.stdout == ""
+ - __podman_test_debug_networks.stdout_lines |
+ reject("match", "^podman$") |
+ reject("match", "^podman-default-kube-network$") |
+ list | length == 0
+ - __podman_test_debug_volumes.stdout == ""
+ - __podman_test_debug_containers.stdout == ""
+ - __podman_test_debug_secrets.stdout == ""
+ - ansible_facts["services"] | dict2items |
+ rejectattr("value.status", "match", "not-found") |
+ selectattr("key", "match", "quadlet-demo") |
+ list | length == 0
+
rescue:
- name: Dump journal
command: journalctl -ex
diff --git a/tests/tests_quadlet_demo.yml b/tests/tests_quadlet_demo.yml
index b6c27ef..1cc7e62 100644
--- a/tests/tests_quadlet_demo.yml
+++ b/tests/tests_quadlet_demo.yml
@@ -84,6 +84,11 @@
changed_when: false
failed_when: false
+ - name: Check volumes
+ command: podman volume ls
+ changed_when: false
+ failed_when: false
+
- name: Check pods
command: podman pod ps --ctr-ids --ctr-names --ctr-status
changed_when: false
@@ -150,6 +155,8 @@
include_role:
name: linux-system-roles.podman
vars:
+ podman_prune_images: true
+ __podman_test_debug: true
__absent: {"state":"absent"}
podman_quadlet_specs: "{{ __podman_quadlet_specs |
reverse | map('combine', __absent) | list }}"
@@ -161,7 +168,33 @@
- name: envoy-certificates
state: absent
+ - name: Ensure no resources
+ assert:
+ that:
+ - __podman_test_debug_images.stdout == ""
+ - __podman_test_debug_networks.stdout_lines |
+ reject("match", "^podman$") |
+ reject("match", "^podman-default-kube-network$") |
+ list | length == 0
+ - __podman_test_debug_volumes.stdout == ""
+ - __podman_test_debug_containers.stdout == ""
+ - __podman_test_debug_secrets.stdout == ""
+ - ansible_facts["services"] | dict2items |
+ rejectattr("value.status", "match", "not-found") |
+ selectattr("key", "match", "quadlet-demo") |
+ list | length == 0
+
rescue:
+ - name: Debug
+ shell: |
+ exec 1>&2
+ set -x
+ set -o pipefail
+ systemctl list-units --plain -l --all | grep quadlet || :
+ systemctl list-unit-files --all | grep quadlet || :
+ systemctl list-units --plain --failed -l --all | grep quadlet || :
+ changed_when: false
+
- name: Get journald
command: journalctl -ex
changed_when: false
--
2.46.0
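The Jinja parser added in parse_quadlet_file.yml is worth a worked example: it walks the unit file line by line, tracks the current `[Section]`, splits each line on the first `=`, and turns repeated keys into lists. For the quadlet-basic.network file above, the resulting fact would look roughly like this (a sketch of the parsed structure, not role output captured verbatim):

```yaml
__podman_quadlet_parsed:
  Network:
    Subnet: 192.168.29.0/24
    Gateway: 192.168.29.1
    Label: app=wordpress          # a second Label= line would turn this into a list
    NetworkName: quadlet-basic-name
```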

View File

@ -0,0 +1,72 @@
From 7473a31e3a0201131e42281bce9bbf9c88ac04ca Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Wed, 31 Jul 2024 18:52:57 -0600
Subject: [PATCH 112/115] fix: Ensure user linger is closed on EL10
Cause: There is an issue with loginctl on EL10 - doing cancel-linger
will leave the user session in the closing state.
Consequence: User sessions accumulate, and the test user cannot
be removed.
Fix: As suggested in the systemd issue, the fix is to shut down and
restart systemd-logind in this situation.
Result: User cancel-linger works as expected.
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
(cherry picked from commit 0ceea96a12bf0b462ca62d012d86cdcbd4f20eaa)
---
tasks/cancel_linger.yml | 37 +++++++++++++++++++++++++++++++++++++
1 file changed, 37 insertions(+)
diff --git a/tasks/cancel_linger.yml b/tasks/cancel_linger.yml
index f233fc4..00d38c2 100644
--- a/tasks/cancel_linger.yml
+++ b/tasks/cancel_linger.yml
@@ -58,5 +58,42 @@
list | length == 0
- __podman_linger_secrets.stdout == ""
changed_when: true
+ register: __cancel_linger
args:
removes: /var/lib/systemd/linger/{{ __podman_linger_user }}
+
+- name: Wait for user session to exit closing state # noqa no-handler
+ command: loginctl show-user -P State {{ __podman_linger_user | quote }}
+ register: __user_state
+ changed_when: false
+ until: __user_state.stdout != "closing"
+ when: __cancel_linger is changed
+ ignore_errors: true
+
+# see https://github.com/systemd/systemd/issues/26744#issuecomment-2261509208
+- name: Handle user stuck in closing state
+ when:
+ - __cancel_linger is changed
+ - __user_state is failed
+ block:
+ - name: Stop logind
+ service:
+ name: systemd-logind
+ state: stopped
+
+ - name: Wait for user session to exit closing state
+ command: loginctl show-user -P State {{ __podman_linger_user | quote }}
+ changed_when: false
+ register: __user_state
+ until: __user_state.stderr is match(__pat) or
+ __user_state.stdout != "closing"
+ failed_when:
+ - not __user_state.stderr is match(__pat)
+ - __user_state.stdout == "closing"
+ vars:
+ __pat: "Failed to get user: User ID .* is not logged in or lingering"
+
+ - name: Restart logind
+ service:
+ name: systemd-logind
+ state: started
--
2.46.0
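The shape of the workaround is easiest to see as a standalone check: poll the session state, and only if it never leaves "closing" bounce systemd-logind. A minimal sketch, assuming a hypothetical test user named linger_user:

```yaml
- name: Poll the user session state (sketch)
  command: loginctl show-user -P State linger_user
  register: __state
  changed_when: false
  until: __state.stdout != "closing"
  ignore_errors: true

# If __state.stdout is still "closing" here, the host is affected by
# the systemd issue and logind must be stopped and started again.
```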

View File

@ -0,0 +1,70 @@
From acc8e5458170cd653681beee8cec162e1d3e4f1f Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Mon, 19 Aug 2024 10:11:05 -0600
Subject: [PATCH 113/115] test: skip quadlet tests on non-x86_64
The images we currently use for quadlet testing are only available
on x86_64.
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
(cherry picked from commit 4a2ab77cafd9ae330f9260a5180680036707bf92)
---
tests/tests_quadlet_basic.yml | 11 +++++++++++
tests/tests_quadlet_demo.yml | 12 ++++++++++++
2 files changed, 23 insertions(+)
diff --git a/tests/tests_quadlet_basic.yml b/tests/tests_quadlet_basic.yml
index 0fdced4..5a06864 100644
--- a/tests/tests_quadlet_basic.yml
+++ b/tests/tests_quadlet_basic.yml
@@ -48,6 +48,17 @@
- FOO=/bin/busybox-extras
- BAZ=test
tasks:
+ - name: Test is only supported on x86_64
+ debug:
+ msg: >
+ This test is only supported on x86_64 because the test images used are only
+ available on that platform.
+ when: ansible_facts["architecture"] != "x86_64"
+
+ - name: End test
+ meta: end_play
+ when: ansible_facts["architecture"] != "x86_64"
+
- name: Run test
block:
- name: See if not pulling images fails
diff --git a/tests/tests_quadlet_demo.yml b/tests/tests_quadlet_demo.yml
index 1cc7e62..f08d482 100644
--- a/tests/tests_quadlet_demo.yml
+++ b/tests/tests_quadlet_demo.yml
@@ -2,6 +2,7 @@
---
- name: Deploy the quadlet demo app
hosts: all
+ gather_facts: true
vars_files:
- vars/test_vars.yml
vars:
@@ -28,6 +29,17 @@
"/tmp/quadlet_demo":
mode: "0777"
tasks:
+ - name: Test is only supported on x86_64
+ debug:
+ msg: >
+ This test is only supported on x86_64 because the test images used are only
+ available on that platform.
+ when: ansible_facts["architecture"] != "x86_64"
+
+ - name: End test
+ meta: end_play
+ when: ansible_facts["architecture"] != "x86_64"
+
- name: Run tests
block:
- name: Generate certificates
--
2.46.0

View File

@ -0,0 +1,208 @@
From 5367219c4d12b988b531b00a90625cb6747baf13 Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Thu, 29 Aug 2024 08:47:03 -0600
Subject: [PATCH 114/115] fix: subgid maps user to gids, not group to gids
Cause: The podman role was looking up groups in the subgid values, not
users.
Consequence: If the user name was different from the group name, the role
would fail to look up the subgid values.
Fix: Ensure that the user is used to look up the subgid values.
Result: The subgid values are looked up correctly.
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
(cherry picked from commit ad01b0091707fc4eae6f98f694f1a213fb9f8521)
---
README.md | 45 ++++++++++++++++++-------------------
tasks/handle_user_group.yml | 32 ++++++++------------------
2 files changed, 31 insertions(+), 46 deletions(-)
diff --git a/README.md b/README.md
index 8b6496e..6222098 100644
--- a/README.md
+++ b/README.md
@@ -35,12 +35,11 @@ restrictions:
* They must be already present on the system - the role will not create the
users or groups - the role will exit with an error if a non-existent user or
group is specified
-* They must already exist in `/etc/subuid` and `/etc/subgid`, or are otherwise
- provided by your identity management system - the role will exit with an error
- if a specified user is not present in `/etc/subuid`, or if a specified group
- is not in `/etc/subgid`. The role uses `getsubids` to check the user and
- group if available, or checks the files directly if `getsubids` is not
- available.
+* The user must already exist in `/etc/subuid` and `/etc/subgid`, or otherwise
+ be provided by your identity management system - the role will exit with an
+ error if a specified user is not present in `/etc/subuid` and `/etc/subgid`.
+ The role uses `getsubids` to check the user and group if available, or checks
+ the files directly if `getsubids` is not available.
## Role Variables
@@ -56,14 +55,15 @@ except for the following:
* `started` - Create the pods and systemd services, and start them running
* `created` - Create the pods and systemd services, but do not start them
* `absent` - Remove the pods and systemd services
-* `run_as_user` - Use this to specify a per-pod user. If you do not
- specify this, then the global default `podman_run_as_user` value will be used.
+* `run_as_user` - Use this to specify a per-pod user. If you do not specify
+ this, then the global default `podman_run_as_user` value will be used.
Otherwise, `root` will be used. NOTE: The user must already exist - the role
- will not create one. The user must be present in `/etc/subuid`.
-* `run_as_group` - Use this to specify a per-pod group. If you do not
- specify this, then the global default `podman_run_as_group` value will be
- used. Otherwise, `root` will be used. NOTE: The group must already exist -
- the role will not create one. The group must be present in `/etc/subgid`.
+ will not create one. The user must be present in `/etc/subuid` and
+ `/etc/subgid`.
+* `run_as_group` - Use this to specify a per-pod group. If you do not specify
+ this, then the global default `podman_run_as_group` value will be used.
+ Otherwise, `root` will be used. NOTE: The group must already exist - the role
+ will not create one.
* `systemd_unit_scope` - The scope to use for the systemd unit. If you do not
specify this, then the global default `podman_systemd_unit_scope` will be
used. Otherwise, the scope will be `system` for root containers, and `user`
@@ -278,14 +278,13 @@ podman_selinux_ports:
This is the name of the user to use for all rootless containers. You can also
specify per-container username with `run_as_user` in `podman_kube_specs`. NOTE:
The user must already exist - the role will not create one. The user must be
-present in `/etc/subuid`.
+present in `/etc/subuid` and `/etc/subgid`.
### podman_run_as_group
This is the name of the group to use for all rootless containers. You can also
specify per-container group name with `run_as_group` in `podman_kube_specs`.
-NOTE: The group must already exist - the role will not create one. The group must
-be present in `/etc/subgid`.
+NOTE: The group must already exist - the role will not create one.
### podman_systemd_unit_scope
@@ -426,24 +425,24 @@ PodmanArgs=--secret=my-app-pwd,type=env,target=MYAPP_PASSWORD
### podman_subuid_info, podman_subgid_info
-The role needs to ensure any users and groups are present in the subuid and
+The role needs to ensure any users are present in the subuid and
subgid information. Once it extracts this data, it will be available in
`podman_subuid_info` and `podman_subgid_info`. These are dicts. The key is the
-user or group name, and the value is a `dict` with two fields:
+user name, and the value is a `dict` with two fields:
-* `start` - the start of the id range for that user or group, as an `int`
-* `range` - the id range for that user or group, as an `int`
+* `start` - the start of the id range for that user, as an `int`
+* `range` - the id range for that user, as an `int`
```yaml
podman_host_directories:
"/var/lib/db":
mode: "0777"
owner: "{{ 1001 + podman_subuid_info['dbuser']['start'] - 1 }}"
- group: "{{ 1001 + podman_subgid_info['dbgroup']['start'] - 1 }}"
+ group: "{{ 2001 + podman_subgid_info['dbuser']['start'] - 1 }}"
```
-Where `1001` is the uid for user `dbuser`, and `1001` is the gid for group
-`dbgroup`.
+Where `1001` is the uid for user `dbuser`, and `2001` is the gid for the
+group you want to use.
**NOTE**: depending on the namespace used by your containers, you might not be
able to use the subuid and subgid information, which comes from `getsubids` if
diff --git a/tasks/handle_user_group.yml b/tasks/handle_user_group.yml
index 0b98d99..2e19cdd 100644
--- a/tasks/handle_user_group.yml
+++ b/tasks/handle_user_group.yml
@@ -25,19 +25,6 @@
{{ ansible_facts["getent_passwd"][__podman_user][2] }}
{%- endif -%}
-- name: Get group information
- getent:
- database: group
- key: "{{ __podman_group }}"
- fail_key: false
- when: "'getent_group' not in ansible_facts or
- __podman_group not in ansible_facts['getent_group']"
-
-- name: Set group name
- set_fact:
- __podman_group_name: "{{ ansible_facts['getent_group'].keys() |
- list | first }}"
-
- name: See if getsubids exists
stat:
path: /usr/bin/getsubids
@@ -49,13 +36,13 @@
- __podman_user not in ["root", "0"]
- __podman_stat_getsubids.stat.exists
block:
- - name: Check user with getsubids
+ - name: Check with getsubids for user subuids
command: getsubids {{ __podman_user | quote }}
changed_when: false
register: __podman_register_subuids
- - name: Check group with getsubids
- command: getsubids -g {{ __podman_group_name | quote }}
+ - name: Check with getsubids for user subgids
+ command: getsubids -g {{ __podman_user | quote }}
changed_when: false
register: __podman_register_subgids
@@ -66,7 +53,7 @@
{'start': __subuid_data[2] | int, 'range': __subuid_data[3] | int}})
if __subuid_data | length > 0 else podman_subuid_info | d({}) }}"
podman_subgid_info: "{{ podman_subgid_info | d({}) |
- combine({__podman_group_name:
+ combine({__podman_user:
{'start': __subgid_data[2] | int, 'range': __subgid_data[3] | int}})
if __subgid_data | length > 0 else podman_subgid_info | d({}) }}"
vars:
@@ -77,7 +64,6 @@
when:
- not __podman_stat_getsubids.stat.exists
- __podman_user not in ["root", "0"]
- - __podman_group not in ["root", "0"]
block:
- name: Get subuid file
slurp:
@@ -96,7 +82,7 @@
{'start': __subuid_data[1] | int, 'range': __subuid_data[2] | int}})
if __subuid_data else podman_subuid_info | d({}) }}"
podman_subgid_info: "{{ podman_subgid_info | d({}) |
- combine({__podman_group_name:
+ combine({__podman_user:
{'start': __subgid_data[1] | int, 'range': __subgid_data[2] | int}})
if __subgid_data else podman_subgid_info | d({}) }}"
vars:
@@ -107,7 +93,7 @@
if __subuid_match_line else none }}"
__subgid_match_line: "{{
(__podman_register_subgids.content | b64decode).split('\n') | list |
- select('match', '^' ~ __podman_group_name ~ ':') | list }}"
+ select('match', '^' ~ __podman_user ~ ':') | list }}"
__subgid_data: "{{ __subgid_match_line[0].split(':') | list
if __subgid_match_line else none }}"
@@ -118,9 +104,9 @@
/etc/subuid file - cannot continue
when: not __podman_user in podman_subuid_info
- - name: Fail if group not in subgid file
+ - name: Fail if user not in subgid file
fail:
msg: >
- The given podman group [{{ __podman_group_name }}] is not in the
+ The given podman user [{{ __podman_user }}] is not in the
/etc/subgid file - cannot continue
- when: not __podman_group_name in podman_subuid_info
+ when: not __podman_user in podman_subgid_info
--
2.46.0
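The underlying file format explains the fix: both /etc/subuid and /etc/subgid are keyed by user name (`user:start:count`), so there is no per-group entry to look up, and checking both ranges queries the same user. A sketch (the user name and ranges are made up):

```yaml
# /etc/subuid:  user_quadlet_basic:524288:65536
# /etc/subgid:  user_quadlet_basic:524288:65536
- name: Check subuid and subgid ranges for the same user (sketch)
  command: "{{ item }}"
  loop:
    - getsubids user_quadlet_basic
    - getsubids -g user_quadlet_basic
  changed_when: false
```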

View File

@ -0,0 +1,38 @@
From c78741d6d5a782f599ee42c6deb89b80426e403d Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Fri, 6 Sep 2024 14:15:20 -0600
Subject: [PATCH 115/115] fix: Cannot remove volumes from kube yaml - need to
convert yaml to list
Cause: __podman_quadlet_parsed was not converted to a list.
Consequence: On older versions of Ansible, the volumes from the kube yaml
were not removed when removing quadlets.
Fix: Convert __podman_quadlet_parsed to a list after parsing.
Result: Older versions of Ansible can remove volumes specified
in kube yaml files.
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
(cherry picked from commit 423c98342c82893aca891d49c63713193dc96222)
---
tasks/parse_quadlet_file.yml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tasks/parse_quadlet_file.yml b/tasks/parse_quadlet_file.yml
index 5f5297f..2d58c4e 100644
--- a/tasks/parse_quadlet_file.yml
+++ b/tasks/parse_quadlet_file.yml
@@ -45,7 +45,7 @@
- name: Parse quadlet yaml file
set_fact:
__podman_quadlet_parsed: "{{ __podman_quadlet_raw.content | b64decode |
- from_yaml_all }}"
+ from_yaml_all | list }}"
when:
- __podman_service_name | length == 0
- __podman_quadlet_file.endswith(".yml") or
--
2.46.0
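The `| list` matters because `from_yaml_all` can return a lazy generator rather than a list on older Ansible versions; the selectattr/map chain that extracts volumes from the parsed documents then comes up empty. A sketch of the corrected parse (the file name is hypothetical):

```yaml
- name: Parse a multi-document kube yaml into a real list (sketch)
  set_fact:
    __parsed: "{{ lookup('file', 'quadlet-demo.yml') |
      from_yaml_all | list }}"

# __parsed can now be filtered repeatedly, e.g.:
#   __parsed | selectattr('spec', 'defined') | map(attribute='spec') | list
```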

View File

@ -0,0 +1,68 @@
From e2040d110ac24ec044973674afc8269ab9ef7c11 Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Fri, 25 Oct 2024 08:55:27 -0600
Subject: [PATCH 116/117] fix: ignore pod not found errors when removing kube
specs
Cause: The module uses the `podman kube play --down` command to remove
the pod specified by the kube spec, but does not check if the pod has
already been removed. That is, it is not idempotent. The command
gives an error if the pod is not found. This only happens with
podman 4.4.1 on EL8.8 and EL9.2.
Consequence: The podman role gives an error that the pod specified
by the kube spec cannot be found when removing.
Fix: The role ignores the 'pod not found' error when removing
a kube spec.
Result: The role does not give an error when removing a kube
spec.
NOTE: This has been fixed in the containers.podman.podman_play
module upstream but has not yet been released.
https://github.com/containers/ansible-podman-collections/pull/863/files#diff-6672fb7f52e2bec3450c2dd7ed9a4385accd9bab8429ea6eecf4d56447f5a1b8R304
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
(cherry picked from commit 3edc125005c5912926add1539be96cf3b990bb96)
---
tasks/cleanup_kube_spec.yml | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/tasks/cleanup_kube_spec.yml b/tasks/cleanup_kube_spec.yml
index b6b47bd..36610e6 100644
--- a/tasks/cleanup_kube_spec.yml
+++ b/tasks/cleanup_kube_spec.yml
@@ -30,6 +30,11 @@
path: "{{ __podman_kube_file }}"
register: __podman_kube_file_stat
+# NOTE: removing kube specs is not idempotent and will give an error on
+# RHEL 8.8 and 9.2 - seems ok on other platforms - this was fixed in the
+# module but is not released yet (as of 20241024)
+# https://github.com/containers/ansible-podman-collections/pull/863/files#diff-6672fb7f52e2bec3450c2dd7ed9a4385accd9bab8429ea6eecf4d56447f5a1b8R304
+# remove this hack when the fix is available
- name: Remove pod/containers
containers.podman.podman_play: "{{ __podman_kube_spec |
combine({'kube_file': __podman_kube_file}) }}"
@@ -38,9 +43,17 @@
become: "{{ __podman_rootless | ternary(true, omit) }}"
become_user: "{{ __podman_rootless | ternary(__podman_user, omit) }}"
register: __podman_removed
+ failed_when:
+ - __podman_removed is failed
+ - not __podman_removed.msg is search(__err_msg)
+ - not __is_affected_platform
when:
- not __podman_rootless or __podman_xdg_stat.stat.exists
- __podman_kube_file_stat.stat.exists
+ vars:
+ __err_msg: Failed to delete .* with {{ __podman_kube_file }}
+ __is_affected_platform: "{{ ansible_facts['distribution'] == 'RedHat' and
+ ansible_facts['distribution_version'] in ['8.8', '9.2'] }}"
- name: Remove kubernetes yaml file
file:
--
2.47.0
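The pattern used here is the standard way to make a non-idempotent module call tolerant: register the result and declare failure only when the error is not the expected "not found" message. A trimmed sketch with a hypothetical kube file path:

```yaml
- name: Remove pod/containers, tolerating "already removed" (sketch)
  containers.podman.podman_play:
    kube_file: /etc/containers/ansible-kubernetes.d/demo.yml
    state: absent
  register: __removed
  failed_when:
    - __removed is failed
    - not __removed.msg is search("Failed to delete .* with")
```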

View File

@ -0,0 +1,33 @@
From f5d7e3088a8662798ced2294ca9059799b7e1c33 Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Fri, 25 Oct 2024 11:12:08 -0600
Subject: [PATCH 117/117] test: need grubby for el8 testing for ostree
EL8 tests need grubby for ostree building
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
(cherry picked from commit 881a03569b6dbebaf9fc9720ffe85039d1d0b72d)
---
.ostree/packages-testing-CentOS-8.txt | 1 +
.ostree/packages-testing-RedHat-8.txt | 1 +
2 files changed, 2 insertions(+)
create mode 100644 .ostree/packages-testing-CentOS-8.txt
create mode 100644 .ostree/packages-testing-RedHat-8.txt
diff --git a/.ostree/packages-testing-CentOS-8.txt b/.ostree/packages-testing-CentOS-8.txt
new file mode 100644
index 0000000..ae5e93e
--- /dev/null
+++ b/.ostree/packages-testing-CentOS-8.txt
@@ -0,0 +1 @@
+grubby
diff --git a/.ostree/packages-testing-RedHat-8.txt b/.ostree/packages-testing-RedHat-8.txt
new file mode 100644
index 0000000..ae5e93e
--- /dev/null
+++ b/.ostree/packages-testing-RedHat-8.txt
@@ -0,0 +1 @@
+grubby
--
2.47.0

View File

@ -0,0 +1,89 @@
From e8961d4e5ca7765e97d76a76e4741825e697aa8d Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Mon, 28 Oct 2024 10:27:59 -0600
Subject: [PATCH] fix: make role work on el 8.8 and el 9.2 and podman version
less than 4.7.0
Cause: The role was using podman and loginctl features not supported on EL 8.8/9.2
and podman versions less than 4.7.0. NetworkName and VolumeName are not supported
until podman 4.7.0. loginctl -P is not supported on EL 8.8/9.2.
Consequence: The role would give failures when managing EL 8.8/9.2 machines.
Fix: Do not test with NetworkName and VolumeName when the podman version is less
than 4.7.0. Use loginctl --value -p instead of -P, which works on all
versions.
Result: The role can manage EL 8.8/9.2 machines.
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
(cherry picked from commit f16c3fb3c884cf3af446d19aeda86f27dafd1d1e)
---
tasks/cancel_linger.yml | 4 ++--
.../quadlet-basic.network.j2} | 2 ++
tests/tests_quadlet_basic.yml | 6 +++---
3 files changed, 7 insertions(+), 5 deletions(-)
rename tests/{files/quadlet-basic.network => templates/quadlet-basic.network.j2} (62%)
diff --git a/tasks/cancel_linger.yml b/tasks/cancel_linger.yml
index 00d38c2..9eb67ff 100644
--- a/tasks/cancel_linger.yml
+++ b/tasks/cancel_linger.yml
@@ -63,7 +63,7 @@
removes: /var/lib/systemd/linger/{{ __podman_linger_user }}
- name: Wait for user session to exit closing state # noqa no-handler
- command: loginctl show-user -P State {{ __podman_linger_user | quote }}
+ command: loginctl show-user --value -p State {{ __podman_linger_user | quote }}
register: __user_state
changed_when: false
until: __user_state.stdout != "closing"
@@ -82,7 +82,7 @@
state: stopped
- name: Wait for user session to exit closing state
- command: loginctl show-user -P State {{ __podman_linger_user | quote }}
+ command: loginctl show-user --value -p State {{ __podman_linger_user | quote }}
changed_when: false
register: __user_state
until: __user_state.stderr is match(__pat) or
diff --git a/tests/files/quadlet-basic.network b/tests/templates/quadlet-basic.network.j2
similarity index 62%
rename from tests/files/quadlet-basic.network
rename to tests/templates/quadlet-basic.network.j2
index 5b002ba..3419e3d 100644
--- a/tests/files/quadlet-basic.network
+++ b/tests/templates/quadlet-basic.network.j2
@@ -2,4 +2,6 @@
Subnet=192.168.29.0/24
Gateway=192.168.29.1
Label=app=wordpress
+{% if podman_version is version("4.7.0", ">=") %}
NetworkName=quadlet-basic-name
+{% endif %}
diff --git a/tests/tests_quadlet_basic.yml b/tests/tests_quadlet_basic.yml
index 5a06864..9563a60 100644
--- a/tests/tests_quadlet_basic.yml
+++ b/tests/tests_quadlet_basic.yml
@@ -19,15 +19,15 @@
state: present
data: "{{ __json_secret_data | string }}"
__podman_quadlet_specs:
- - file_src: files/quadlet-basic.network
+ - template_src: templates/quadlet-basic.network.j2
state: started
- name: quadlet-basic-unused-network
type: network
Network: {}
- name: quadlet-basic-mysql
type: volume
- Volume:
- VolumeName: quadlet-basic-mysql-name
+ Volume: "{{ {} if podman_version is version('4.7.0', '<')
+ else {'VolumeName': 'quadlet-basic-mysql-name'} }}"
- name: quadlet-basic-unused-volume
type: volume
Volume: {}
--
2.47.0
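The two loginctl spellings are equivalent where both exist: `-P State` is shorthand for `--value -p State`, but the shorthand only appeared in newer systemd, so the long form is the portable choice on EL 8.8/9.2. A sketch (the user name is hypothetical):

```yaml
# loginctl show-user -P State user1           <- newer systemd only
# loginctl show-user --value -p State user1   <- also works on EL 8.8/9.2
- name: Portably query the session state (sketch)
  command: loginctl show-user --value -p State user1
  register: __state
  changed_when: false
```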

View File

@ -1,5 +1,34 @@
Changelog Changelog
========= =========
[1.23.0-3] - 2024-09-11
----------------------------
### New Features
- [bootloader - bootloader role tests do not work on ostree [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58917)
- [logging - RFE - system-roles - logging: Add truncate options for local file inputs [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58485)
- [logging - redhat.rhel_system_roles.logging role fails to process logging_outputs: of type: "custom" [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58481)
- [logging - [RFE] Add the umask settings or enable a variable in linux-system-roles.logging [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58477)
- [nbde_client - feat: Allow initrd configuration to be skipped [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58519)
### Bug Fixes
- [ - package rhel-system-roles.noarch does not provide docs for ansible-doc [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58465)
- [ad_integration - fix: Sets domain name lower case in realmd.conf section header [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58494)
- [bootloader - fix: Set user.cfg path to /boot/grub2/ on EL 9 UEFI [rhel-8]](https://issues.redhat.com/browse/RHEL-45711)
- [cockpit - cockpit install all wildcard match does not work in newer el9 [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58515)
- [logging - Setup imuxsock using rhel-system-roles.logging causing an error EL8](https://issues.redhat.com/browse/RHEL-37550)
- [podman - fix: proper cleanup for networks; ensure cleanup of resources [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58525)
- [podman - fix: grab name of network to remove from quadlet file [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58511)
- [podman - Create podman secret when skip_existing=True and it does not exist [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58507)
- [podman - fix: do not use become for changing hostdir ownership, and expose subuid/subgid info [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58503)
- [podman - fix: use correct user for cancel linger file name [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58498)
- [podman - redhat.rhel_system_roles.podman fails to configure and run containers with podman rootless using different username and groupname. [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58460)
- [sshd - second SSHD service broken [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58473)
- [storage - rhel-system-role.storage is not idempotent [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58469)
- [timesync - System Roles: No module documentation [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58489)
[1.23.0] - 2024-01-15 [1.23.0] - 2024-01-15
---------------------------- ----------------------------

View File

@ -1,10 +1,10 @@
Source801: https://galaxy.ansible.com/download/ansible-posix-1.5.4.tar.gz Source801: https://galaxy.ansible.com/download/ansible-posix-1.5.4.tar.gz
Source901: https://galaxy.ansible.com/download/community-general-8.3.0.tar.gz Source901: https://galaxy.ansible.com/download/community-general-8.3.0.tar.gz
Source902: https://galaxy.ansible.com/download/containers-podman-1.12.0.tar.gz Source902: https://galaxy.ansible.com/download/containers-podman-1.15.4.tar.gz
Provides: bundled(ansible-collection(ansible.posix)) = 1.5.4 Provides: bundled(ansible-collection(ansible.posix)) = 1.5.4
Provides: bundled(ansible-collection(community.general)) = 8.3.0 Provides: bundled(ansible-collection(community.general)) = 8.3.0
Provides: bundled(ansible-collection(containers.podman)) = 1.12.0 Provides: bundled(ansible-collection(containers.podman)) = 1.15.4
Source996: CHANGELOG.rst Source996: CHANGELOG.rst
Source998: collection_readme.sh Source998: collection_readme.sh

View File

@ -22,8 +22,37 @@ declare -A plugin_map=(
[containers/podman/plugins/modules/podman_play.py]=podman [containers/podman/plugins/modules/podman_play.py]=podman
[containers/podman/plugins/modules/podman_secret.py]=podman [containers/podman/plugins/modules/podman_secret.py]=podman
[containers/podman/plugins/module_utils/podman/common.py]=podman [containers/podman/plugins/module_utils/podman/common.py]=podman
[containers/podman/plugins/module_utils/podman/quadlet.py]=podman
) )
# fix the following issue
# ERROR: Found 1 pylint issue(s) which need to be resolved:
# ERROR: plugins/modules/rhsm_repository.py:263:8: wrong-collection-deprecated: Wrong collection name ('community.general') found in call to Display.deprecated or AnsibleModule.deprecate
sed "s/collection_name='community.general'/collection_name='%{collection_namespace}.%{collection_name}'/" \
-i .external/community/general/plugins/modules/rhsm_repository.py
fix_module_documentation() {
local module_src doc_fragment_name df_dest_dir
local -a paths
module_src=".external/$1"
sed ':a;N;$!ba;s/description:\n\( *\)/description:\n\1- "WARNING: Do not use this plugin directly! It is only for role internal use."\n\1/' \
-i "$module_src"
# grab documentation fragments
for doc_fragment_name in $(awk -F'[ -]+' '/^extends_documentation_fragment:/ {reading = 1; next}; /^[ -]/ {if (reading) {print $2}; next}; /^[^ -]/ {if (reading) {exit}}' "$module_src"); do
if [ "$doc_fragment_name" = files ]; then continue; fi # this one is built-in
df_dest_dir="%{collection_build_path}/plugins/doc_fragments"
if [ ! -d "$df_dest_dir" ]; then
mkdir -p "$df_dest_dir"
fi
paths=(${doc_fragment_name//./ })
# if we ever have two different collections that have the same doc_fragment name
# with different contents, we will be in trouble . . .
# will have to make the doc fragment files unique, then edit $dest to use
# the unique name
cp ".external/${paths[0]}/${paths[1]}/plugins/doc_fragments/${paths[2]}.py" "$df_dest_dir"
done
}
declare -a modules mod_utils collection_plugins declare -a modules mod_utils collection_plugins
declare -A dests declare -A dests
# vendor in plugin files - fix documentation, fragments # vendor in plugin files - fix documentation, fragments
@ -31,9 +60,12 @@ for src in "${!plugin_map[@]}"; do
roles="${plugin_map["$src"]}" roles="${plugin_map["$src"]}"
if [ "$roles" = __collection ]; then if [ "$roles" = __collection ]; then
collection_plugins+=("$src") collection_plugins+=("$src")
case "$src" in
*/plugins/modules/*) fix_module_documentation "$src";;
esac
else else
case "$src" in case "$src" in
*/plugins/modules/*) srcdir=plugins/modules; subdir=library; modules+=("$src") ;; */plugins/modules/*) srcdir=plugins/modules; subdir=library; modules+=("$src"); fix_module_documentation "$src";;
*/plugins/module_utils/*) srcdir=plugins/module_utils; mod_utils+=("$src") ;; */plugins/module_utils/*) srcdir=plugins/module_utils; mod_utils+=("$src") ;;
*/plugins/action/*) srcdir=plugins/action ;; */plugins/action/*) srcdir=plugins/action ;;
esac esac
@ -54,9 +86,6 @@ for src in "${!plugin_map[@]}"; do
mkdir -p "$destdir" mkdir -p "$destdir"
fi fi
cp -pL ".external/$src" "$dest" cp -pL ".external/$src" "$dest"
sed -e ':a;N;$!ba;s/description:\n\( *\)/description:\n\1- WARNING: Do not use this plugin directly! It is only for role internal use.\n\1/' \
-e '/^extends_documentation_fragment:/,/^[^ -]/{/^extends/d;/^[ -]/d}' \
-i "$dest"
done done
done done
@ -92,11 +121,32 @@ done
# for podman, change the FQCN - using a non-FQCN module name doesn't seem to work, # for podman, change the FQCN - using a non-FQCN module name doesn't seem to work,
# even for the legacy role format # even for the legacy role format
for rolename in %{rolenames}; do for rolename in %{rolenames}; do
find "$rolename" -type f -exec \ find "$rolename" -type f -exec \
sed -e "s/linux-system-roles[.]${rolename}\\>/%{roleinstprefix}${rolename}/g" \ sed -e "s/linux-system-roles[.]${rolename}\\>/%{roleinstprefix}${rolename}/g" \
-e "s/fedora[.]linux_system_roles[.]/%{collection_namespace}.%{collection_name}./g" \ -e "s/fedora[.]linux_system_roles[.]/%{collection_namespace}.%{collection_name}./g" \
-e "s/containers[.]podman[.]/%{collection_namespace}.%{collection_name}./g" \ -e "s/containers[.]podman[.]/%{collection_namespace}.%{collection_name}./g" \
-e "s/community[.]general[.]/%{collection_namespace}.%{collection_name}./g" \ -e "s/community[.]general[.]/%{collection_namespace}.%{collection_name}./g" \
-e "s/ansible[.]posix[.]/%{collection_namespace}.%{collection_name}./g" \ -e "s/ansible[.]posix[.]/%{collection_namespace}.%{collection_name}./g" \
-i {} \; -i {} \;
done
# add ansible-test ignores needed due to vendoring
for ansible_ver in 2.14 2.15 2.16; do
ignore_file="podman/.sanity-ansible-ignore-${ansible_ver}.txt"
cat >> "$ignore_file" <<EOF
plugins/module_utils/podman_lsr/podman/quadlet.py compile-2.7!skip
plugins/module_utils/podman_lsr/podman/quadlet.py import-2.7!skip
plugins/modules/podman_image.py import-2.7!skip
plugins/modules/podman_play.py import-2.7!skip
EOF
done
# these platforms still use python 3.5
for ansible_ver in 2.14 2.15; do
ignore_file="podman/.sanity-ansible-ignore-${ansible_ver}.txt"
cat >> "$ignore_file" <<EOF
plugins/module_utils/podman_lsr/podman/quadlet.py compile-3.5!skip
plugins/module_utils/podman_lsr/podman/quadlet.py import-3.5!skip
plugins/modules/podman_image.py import-3.5!skip
plugins/modules/podman_play.py import-3.5!skip
EOF
done
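Each entry appended above has the form <path> <test-name>!skip, which tells ansible-test's sanity suite to skip that check for that file: the vendored quadlet code cannot satisfy the Python 2.7 import/compile checks, nor the Python 3.5 checks that ansible 2.14/2.15 still run. The underlying failure could be reproduced with an invocation along these lines (illustrative; run from the built collection root):

    ansible-test sanity --test import --python 2.7 plugins/modules/podman_image.py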

View File

@ -19,7 +19,7 @@ Name: linux-system-roles
Url: https://github.com/linux-system-roles
Summary: Set of interfaces for unified system management
Version: 1.23.0
Release: 2.21%{?dist} Release: 4%{?dist}
License: GPLv3+ and MIT and BSD and Python
%global _pkglicensedir %{_licensedir}/%{name}
@ -92,7 +92,7 @@ Source: %{url}/auto-maintenance/archive/%{mainid}/auto-maintenance-%{mainid}.tar
%deftag 2 1.7.4
%global rolename3 timesync
%deftag 3 1.8.2 %deftag 3 1.9.0
%global rolename4 kdump
%deftag 4 1.4.4
@ -113,13 +113,13 @@ Source: %{url}/auto-maintenance/archive/%{mainid}/auto-maintenance-%{mainid}.tar
%deftag 9 1.2.2
%global rolename10 logging
%deftag 10 1.12.4 %deftag 10 1.13.4
%global rolename11 nbde_server
%deftag 11 1.4.3
%global rolename12 nbde_client
%deftag 12 1.2.17 %deftag 12 1.3.0
%global rolename13 certificate
%deftag 13 1.3.3
@ -130,7 +130,7 @@ Source: %{url}/auto-maintenance/archive/%{mainid}/auto-maintenance-%{mainid}.tar
%global forgeorg15 https://github.com/willshersystems
%global repo15 ansible-sshd
%global rolename15 sshd
%deftag 15 v0.23.2 %deftag 15 v0.25.0
%global rolename16 ssh
%deftag 16 1.3.2
@ -145,13 +145,13 @@ Source: %{url}/auto-maintenance/archive/%{mainid}/auto-maintenance-%{mainid}.tar
%deftag 19 1.7.4
%global rolename20 cockpit
%deftag 20 1.5.5 %deftag 20 1.5.10
%global rolename21 podman
%deftag 21 1.4.7
%global rolename22 ad_integration
%deftag 22 1.4.2 %deftag 22 1.4.6
%global rolename23 rhc
%deftag 23 1.6.0
@ -172,7 +172,7 @@ Source: %{url}/auto-maintenance/archive/%{mainid}/auto-maintenance-%{mainid}.tar
%deftag 28 1.1.1
%global rolename29 bootloader
%deftag 29 1.0.3 %deftag 29 1.0.7
%global rolename30 snapshot
%deftag 30 1.3.1
@ -209,6 +209,38 @@ Source29: %{archiveurl29}
Source30: %{archiveurl30}
# END AUTOGENERATED SOURCES
# storage role patches
Patch1: 0001-test-fix-sector-based-disk-size-calculation-from-ans.patch
Patch2: 0002-fix-Fix-recreate-check-for-formats-without-labelling.patch
Patch3: 0003-fix-Fix-incorrent-populate-call.patch
Patch4: 0004-tests-Add-a-new-match_sector_size-argument-to-find_u.patch
Patch5: 0005-tests-Require-same-sector-size-disks-for-LVM-tests.patch
Patch6: 0006-fix-Fix-possibly-used-before-assignment-pylint-issue.patch
Patch7: 0007-test-lsblk-can-return-LOG_SEC-or-LOG-SEC.patch
Patch8: 0008-test-lvm-pool-members-test-fix.patch
Patch9: 0009-fix-Fix-expected-error-message-in-tests_misc.yml.patch
Patch10: 0010-tests-Use-blockdev_info-to-check-volume-mount-points.patch
# podman role patches
Patch101: 0101-fix-Add-support-for-check-flag.patch
Patch102: 0102-fix-use-correct-user-for-cancel-linger-file-name.patch
Patch103: 0103-test-do-not-check-for-root-linger.patch
Patch104: 0104-fix-do-not-use-become-for-changing-hostdir-ownership.patch
Patch105: 0105-chore-change-no_log-false-to-true-fix-comment.patch
Patch106: 0106-fix-make-kube-cleanup-idempotent.patch
Patch107: 0107-chore-use-none-in-jinja-code-not-null.patch
Patch108: 0108-uid-1001-conflicts-on-some-test-systems.patch
Patch109: 0109-fix-ansible-lint-octal-value-issues.patch
Patch110: 0110-fix-grab-name-of-network-to-remove-from-quadlet-file.patch
Patch111: 0111-fix-proper-cleanup-for-networks-ensure-cleanup-of-re.patch
Patch112: 0112-fix-Ensure-user-linger-is-closed-on-EL10.patch
Patch113: 0113-test-skip-quadlet-tests-on-non-x86_64.patch
Patch114: 0114-fix-subgid-maps-user-to-gids-not-group-to-gids.patch
Patch115: 0115-fix-Cannot-remove-volumes-from-kube-yaml-need-to-con.patch
Patch116: 0116-fix-ignore-pod-not-found-errors-when-removing-kube-s.patch
Patch117: 0117-test-need-grubby-for-el8-testing-for-ostree.patch
Patch118: 0118-fix-make-role-work-on-el-8.8-and-el-9.2-and-podman-v.patch
# Includes with definitions/tags that differ between RHEL and Fedora
Source1001: extrasources.inc
@ -336,6 +368,42 @@ if [ "$rolesdir" != "$realrolesdir" ]; then
fi
cd ..
# storage role patches
cd %{rolename6}
%patch1 -p1
%patch2 -p1
%patch3 -p1
%patch4 -p1
%patch5 -p1
%patch6 -p1
%patch7 -p1
%patch8 -p1
%patch9 -p1
%patch10 -p1
cd ..
# podman role patches
cd %{rolename21}
%patch101 -p1
%patch102 -p1
%patch103 -p1
%patch104 -p1
%patch105 -p1
%patch106 -p1
%patch107 -p1
%patch108 -p1
%patch109 -p1
%patch110 -p1
%patch111 -p1
%patch112 -p1
%patch113 -p1
%patch114 -p1
%patch115 -p1
%patch116 -p1
%patch117 -p1
%patch118 -p1
cd ..
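Each %patchN -p1 line applies the numbered Patch source with one leading path component stripped, so the patches listed earlier land inside the just-unpacked role directories. A rough shell equivalent, with an illustrative SOURCES path:

    cd podman
    patch -p1 < ~/rpmbuild/SOURCES/0101-fix-Add-support-for-check-flag.patch
    cd ..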
# vendoring build steps, if any
%include %{SOURCE1004}
@ -677,6 +745,36 @@ find %{buildroot}%{ansible_roles_dir} -mindepth 1 -maxdepth 1 | \
%endif
%changelog
* Fri Oct 25 2024 Rich Megginson <rmeggins@redhat.com> - 1.23.0-4
- Resolves: RHEL-58460 : podman - redhat.rhel_system_roles.podman fails to configure and run containers with podman rootless using different username and groupname. [rhel-9.4.z]
- fix issue with podman error removing kube specs on 8.8 and 9.2 managed nodes - covered by tests_basic.yml
- https://github.com/linux-system-roles/podman/pull/186
- fix issue with missing grubby testing on el8 ostree
- https://github.com/linux-system-roles/podman/pull/187
- fix issue with podman not working on 8.8/9.2
- https://github.com/linux-system-roles/podman/pull/188
* Wed Sep 11 2024 Rich Megginson <rmeggins@redhat.com> - 1.23.0-3
- Resolves: RHEL-58465 : - package rhel-system-roles.noarch does not provide docs for ansible-doc [rhel-8.10.z]
- Resolves: RHEL-58494 : ad_integration - fix: Sets domain name lower case in realmd.conf section header [rhel-8.10.z]
- Resolves: RHEL-58917 : bootloader - bootloader role tests do not work on ostree [rhel-8.10.z]
- Resolves: RHEL-45711 : bootloader - fix: Set user.cfg path to /boot/grub2/ on EL 9 UEFI [rhel-8]
- Resolves: RHEL-58515 : cockpit - cockpit install all wildcard match does not work in newer el9 [rhel-8.10.z]
- Resolves: RHEL-58485 : logging - RFE - system-roles - logging: Add truncate options for local file inputs [rhel-8.10.z]
- Resolves: RHEL-58481 : logging - redhat.rhel_system_roles.logging role fails to process logging_outputs: of type: "custom" [rhel-8.10.z]
- Resolves: RHEL-58477 : logging - [RFE] Add the umask settings or enable a variable in linux-system-roles.logging [rhel-8.10.z]
- Resolves: RHEL-37550 : logging - Setup imuxsock using rhel-system-roles.logging causing an error EL8
- Resolves: RHEL-58519 : nbde_client - feat: Allow initrd configuration to be skipped [rhel-8.10.z]
- Resolves: RHEL-58525 : podman - fix: proper cleanup for networks; ensure cleanup of resources [rhel-8.10.z]
- Resolves: RHEL-58511 : podman - fix: grab name of network to remove from quadlet file [rhel-8.10.z]
- Resolves: RHEL-58507 : podman - Create podman secret when skip_existing=True and it does not exist [rhel-8.10.z]
- Resolves: RHEL-58503 : podman - fix: do not use become for changing hostdir ownership, and expose subuid/subgid info [rhel-8.10.z]
- Resolves: RHEL-58498 : podman - fix: use correct user for cancel linger file name [rhel-8.10.z]
- Resolves: RHEL-58460 : podman - redhat.rhel_system_roles.podman fails to configure and run containers with podman rootless using different username and groupname. [rhel-8.10.z]
- Resolves: RHEL-58473 : sshd - second SSHD service broken [rhel-8.10.z]
- Resolves: RHEL-58469 : storage - rhel-system-role.storage is not idempotent [rhel-8.10.z]
- Resolves: RHEL-58489 : timesync - System Roles: No module documentation [rhel-8.10.z]
* Mon Feb 26 2024 Rich Megginson <rmeggins@redhat.com> - 1.23.0-2.21
- Resolves: RHEL-3241 : bootloader - Create bootloader role (MVP)
fix issue with path on arches other than x86_64, and EFI systems