System Roles update for 1.23.0-3

Resolves: RHEL-58465
package rhel-system-roles.noarch does not provide docs for ansible-doc [rhel-8.10.z]

Resolves: RHEL-58494
ad_integration - fix: Sets domain name lower case in realmd.conf section header [rhel-8.10.z]

Resolves: RHEL-58917
bootloader - bootloader role tests do not work on ostree [rhel-8.10.z]

Resolves: RHEL-45711
bootloader - fix: Set user.cfg path to /boot/grub2/ on EL 9 UEFI [rhel-8]

Resolves: RHEL-58515
cockpit - cockpit install all wildcard match does not work in newer el9 [rhel-8.10.z]

Resolves: RHEL-58485
logging - RFE - system-roles - logging: Add truncate options for local file inputs [rhel-8.10.z]

Resolves: RHEL-58481
logging - redhat.rhel_system_roles.logging role fails to process logging_outputs: of type: "custom" [rhel-8.10.z]

Resolves: RHEL-58477
logging - [RFE] Add the umask settings or enable a variable in linux-system-roles.logging [rhel-8.10.z]

Resolves: RHEL-37550
logging - Setup of imuxsock using rhel-system-roles.logging causes an error on EL8

Resolves: RHEL-58519
nbde_client - feat: Allow initrd configuration to be skipped [rhel-8.10.z]

Resolves: RHEL-58525
podman - fix: proper cleanup for networks; ensure cleanup of resources [rhel-8.10.z]

Resolves: RHEL-58511
podman - fix: grab name of network to remove from quadlet file [rhel-8.10.z]

Resolves: RHEL-58507
podman - Create podman secret when skip_existing=True and it does not exist [rhel-8.10.z]

Resolves: RHEL-58503
podman - fix: do not use become for changing hostdir ownership, and expose subuid/subgid info [rhel-8.10.z]

Resolves: RHEL-58498
podman - fix: use correct user for cancel linger file name [rhel-8.10.z]

Resolves: RHEL-58460
podman - redhat.rhel_system_roles.podman fails to configure and run containers with rootless podman using a different username and groupname [rhel-8.10.z]

Resolves: RHEL-58473
sshd - second SSHD service broken [rhel-8.10.z]

Resolves: RHEL-58469
storage - rhel-system-roles.storage is not idempotent [rhel-8.10.z]

Resolves: RHEL-58489
timesync - System Roles: No module documentation [rhel-8.10.z]

(cherry picked from commit 350d523452546e35bb0805af9ad9cc74712899d7)
Rich Megginson 2024-09-05 14:56:25 -06:00
parent 641b0decd8
commit 2a13f189be
31 changed files with 2977 additions and 29 deletions

.gitignore

@@ -365,3 +365,12 @@ SOURCES/vpn-1.5.3.tar.gz
 /cockpit-1.5.5.tar.gz
 /postgresql-1.3.5.tar.gz
 /bootloader-1.0.3.tar.gz
+/ad_integration-1.4.6.tar.gz
+/ansible-sshd-v0.25.0.tar.gz
+/bootloader-1.0.7.tar.gz
+/cockpit-1.5.10.tar.gz
+/logging-1.13.2.tar.gz
+/nbde_client-1.3.0.tar.gz
+/timesync-1.9.0.tar.gz
+/logging-1.13.4.tar.gz
+/containers-podman-1.15.4.tar.gz


@@ -0,0 +1,74 @@
From 8b3cfc1a30da1ab681eb8c250baa2d6395ecc0d2 Mon Sep 17 00:00:00 2001
From: Vojtech Trefny <vtrefny@redhat.com>
Date: Wed, 3 Apr 2024 15:12:00 +0200
Subject: [PATCH 01/10] test: fix sector-based disk size calculation from
ansible_devices
Device sizes specified in sectors are in general given in 512-byte sectors
regardless of the actual device physical sector size. Example of
ansible_devices facts for a 4k sector size drive:
...
"sectors": "41943040",
"sectorsize": "4096",
"size": "20.00 GB"
...
Resolves: RHEL-30959
Signed-off-by: Vojtech Trefny <vtrefny@redhat.com>
(cherry picked from commit bb1eb23ccd6e9475cd698f0a6f2f497ffefbccd2)
---
tests/tests_create_lv_size_equal_to_vg.yml | 3 +--
tests/tests_misc.yml | 3 +--
tests/tests_resize.yml | 6 ++----
3 files changed, 4 insertions(+), 8 deletions(-)
diff --git a/tests/tests_create_lv_size_equal_to_vg.yml b/tests/tests_create_lv_size_equal_to_vg.yml
index cab4f08..535f73b 100644
--- a/tests/tests_create_lv_size_equal_to_vg.yml
+++ b/tests/tests_create_lv_size_equal_to_vg.yml
@@ -8,8 +8,7 @@
volume_group_size: '10g'
lv_size: '10g'
unused_disk_subfact: '{{ ansible_devices[unused_disks[0]] }}'
- disk_size: '{{ unused_disk_subfact.sectors | int *
- unused_disk_subfact.sectorsize | int }}'
+ disk_size: '{{ unused_disk_subfact.sectors | int * 512 }}'
tags:
- tests::lvm
diff --git a/tests/tests_misc.yml b/tests/tests_misc.yml
index 6373897..363d843 100644
--- a/tests/tests_misc.yml
+++ b/tests/tests_misc.yml
@@ -8,8 +8,7 @@
volume_group_size: "5g"
volume1_size: "4g"
unused_disk_subfact: "{{ ansible_devices[unused_disks[0]] }}"
- too_large_size: "{{ (unused_disk_subfact.sectors | int * 1.2) *
- unused_disk_subfact.sectorsize | int }}"
+ too_large_size: "{{ (unused_disk_subfact.sectors | int * 1.2) * 512 }}"
tags:
- tests::lvm
tasks:
diff --git a/tests/tests_resize.yml b/tests/tests_resize.yml
index 06fb375..1cd2176 100644
--- a/tests/tests_resize.yml
+++ b/tests/tests_resize.yml
@@ -11,10 +11,8 @@
invalid_size1: xyz GiB
invalid_size2: none
unused_disk_subfact: '{{ ansible_devices[unused_disks[0]] }}'
- too_large_size: '{{ unused_disk_subfact.sectors | int * 1.2 *
- unused_disk_subfact.sectorsize | int }}'
- disk_size: '{{ unused_disk_subfact.sectors | int *
- unused_disk_subfact.sectorsize | int }}'
+ too_large_size: '{{ unused_disk_subfact.sectors | int * 1.2 * 512 }}'
+ disk_size: '{{ unused_disk_subfact.sectors | int * 512 }}'
tags:
- tests::lvm
tasks:
--
2.46.0
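To make the arithmetic concrete: multiplying by `sectorsize` for the 4k drive
quoted above would give 41943040 × 4096 ≈ 160 GiB, eight times the reported
"20.00 GB", whereas `sectors` is counted in 512-byte units, so
41943040 × 512 = 20 GiB. A minimal sketch of the corrected pattern (the disk
name here is illustrative):

```yaml
---
- hosts: localhost
  gather_facts: true
  vars:
    unused_disk: sda  # illustrative; substitute a real disk name
    unused_disk_subfact: "{{ ansible_devices[unused_disk] }}"
    # 'sectors' is reported in 512-byte units regardless of the
    # physical sector size, so multiply by 512, not by 'sectorsize'
    disk_size: "{{ unused_disk_subfact.sectors | int * 512 }}"
  tasks:
    - name: Show the computed disk size in bytes
      debug:
        msg: "{{ unused_disk }}: {{ disk_size }} bytes"
```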


@@ -0,0 +1,64 @@
From 9f561445271a14fee598e9a793f72297f66eae56 Mon Sep 17 00:00:00 2001
From: Vojtech Trefny <vtrefny@redhat.com>
Date: Wed, 10 Apr 2024 17:05:46 +0200
Subject: [PATCH 02/10] fix: Fix recreate check for formats without labelling
support
Formats like LUKS or LVMPV don't support labels so we need to skip
the label check in BlivetVolume._reformat.
Resolves: RHEL-29874
(cherry picked from commit a70e8108110e30ebc5e7c404d39339c511f9bd09)
---
library/blivet.py | 3 +++
tests/tests_volume_relabel.yml | 20 ++++++++++++++++++++
2 files changed, 23 insertions(+)
diff --git a/library/blivet.py b/library/blivet.py
index 20389ea..18807de 100644
--- a/library/blivet.py
+++ b/library/blivet.py
@@ -826,6 +826,9 @@ class BlivetVolume(BlivetBase):
if ((fmt is None and self._device.format.type is None)
or (fmt is not None and self._device.format.type == fmt.type)):
# format is the same, no need to run reformatting
+ if not hasattr(self._device.format, "label"):
+ # not all formats support labels
+ return
dev_label = '' if self._device.format.label is None else self._device.format.label
if dev_label != fmt.label:
# ...but the label has changed - schedule modification action
diff --git a/tests/tests_volume_relabel.yml b/tests/tests_volume_relabel.yml
index 8916b73..6624fbd 100644
--- a/tests/tests_volume_relabel.yml
+++ b/tests/tests_volume_relabel.yml
@@ -111,6 +111,26 @@
- name: Verify role results
include_tasks: verify-role-results.yml
+ - name: Format the device to LVMPV which doesn't support labels
+ include_role:
+ name: linux-system-roles.storage
+ vars:
+ storage_volumes:
+ - name: test1
+ type: disk
+ fs_type: lvmpv
+ disks: "{{ unused_disks }}"
+
+ - name: Rerun to check we don't try to relabel preexisting LVMPV (regression test for RHEL-29874)
+ include_role:
+ name: linux-system-roles.storage
+ vars:
+ storage_volumes:
+ - name: test1
+ type: disk
+ fs_type: lvmpv
+ disks: "{{ unused_disks }}"
+
- name: Clean up
include_role:
name: linux-system-roles.storage
--
2.46.0


@@ -0,0 +1,28 @@
From 7abfaeddab812e4eec0c3d3d6bcbabe047722c4f Mon Sep 17 00:00:00 2001
From: Vojtech Trefny <vtrefny@redhat.com>
Date: Wed, 10 Apr 2024 17:08:20 +0200
Subject: [PATCH 03/10] fix: Fix incorrect populate call
`populate()` is a method of DeviceTree, not Blivet.
(cherry picked from commit 6471e65abd429c82df37cbcf07fdf909e4277aa8)
---
library/blivet.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/library/blivet.py b/library/blivet.py
index 18807de..d82b86b 100644
--- a/library/blivet.py
+++ b/library/blivet.py
@@ -630,7 +630,7 @@ class BlivetVolume(BlivetBase):
device.original_format._key_file = self._volume.get('encryption_key')
device.original_format.passphrase = self._volume.get('encryption_password')
if device.isleaf:
- self._blivet.populate()
+ self._blivet.devicetree.populate()
if not device.isleaf:
device = device.children[0]
--
2.46.0


@@ -0,0 +1,174 @@
From 912c33982d9cc412eb72bc9baeab6696e29e7f27 Mon Sep 17 00:00:00 2001
From: Vojtech Trefny <vtrefny@redhat.com>
Date: Tue, 28 May 2024 16:23:48 +0200
Subject: [PATCH 04/10] tests: Add a new 'match_sector_size' argument to
find_unused_disks
Some storage pools cannot be created on disks with different
sector sizes so we want to be able to find unused disks with the
same sector sizes for our tests.
Related: RHEL-25994
(cherry picked from commit 368ecd0214dbaad7c42547eeac0565e51c924546)
---
library/find_unused_disk.py | 79 ++++++++++++++++++++++------------
tests/get_unused_disk.yml | 1 +
tests/unit/test_unused_disk.py | 6 +--
3 files changed, 56 insertions(+), 30 deletions(-)
diff --git a/library/find_unused_disk.py b/library/find_unused_disk.py
index 09b8ad5..098f235 100644
--- a/library/find_unused_disk.py
+++ b/library/find_unused_disk.py
@@ -39,6 +39,11 @@ options:
description: Specifies which disk interface will be accepted (scsi, virtio, nvme).
default: null
type: str
+
+ match_sector_size:
+ description: Specifies whether all returned disks must have the same (logical) sector size.
+ default: false
+ type: bool
'''
EXAMPLES = '''
@@ -138,13 +143,13 @@ def get_partitions(disk_path):
def get_disks(module):
- buf = module.run_command(["lsblk", "-p", "--pairs", "--bytes", "-o", "NAME,TYPE,SIZE,FSTYPE"])[1]
+ buf = module.run_command(["lsblk", "-p", "--pairs", "--bytes", "-o", "NAME,TYPE,SIZE,FSTYPE,LOG-SEC"])[1]
disks = dict()
for line in buf.splitlines():
if not line:
continue
- m = re.search(r'NAME="(?P<path>[^"]*)" TYPE="(?P<type>[^"]*)" SIZE="(?P<size>\d+)" FSTYPE="(?P<fstype>[^"]*)"', line)
+ m = re.search(r'NAME="(?P<path>[^"]*)" TYPE="(?P<type>[^"]*)" SIZE="(?P<size>\d+)" FSTYPE="(?P<fstype>[^"]*)" LOG-SEC="(?P<ssize>\d+)"', line)
if m is None:
module.log(line)
continue
@@ -152,31 +157,16 @@ def get_disks(module):
if m.group('type') != "disk":
continue
- disks[m.group('path')] = {"type": m.group('type'), "size": m.group('size'), "fstype": m.group('fstype')}
+ disks[m.group('path')] = {"type": m.group('type'), "size": m.group('size'),
+ "fstype": m.group('fstype'), "ssize": m.group('ssize')}
return disks
-def run_module():
- """Create the module"""
- module_args = dict(
- max_return=dict(type='int', required=False, default=10),
- min_size=dict(type='str', required=False, default='0'),
- max_size=dict(type='str', required=False, default='0'),
- with_interface=dict(type='str', required=False, default=None)
- )
-
- result = dict(
- changed=False,
- disks=[]
- )
-
- module = AnsibleModule(
- argument_spec=module_args,
- supports_check_mode=True
- )
-
+def filter_disks(module):
+ disks = {}
max_size = Size(module.params['max_size'])
+
for path, attrs in get_disks(module).items():
if is_ignored(path):
continue
@@ -204,14 +194,49 @@ def run_module():
if not can_open(path):
continue
- result['disks'].append(os.path.basename(path))
- if len(result['disks']) >= module.params['max_return']:
- break
+ disks[path] = attrs
+
+ return disks
+
+
+def run_module():
+ """Create the module"""
+ module_args = dict(
+ max_return=dict(type='int', required=False, default=10),
+ min_size=dict(type='str', required=False, default='0'),
+ max_size=dict(type='str', required=False, default='0'),
+ with_interface=dict(type='str', required=False, default=None),
+ match_sector_size=dict(type='bool', required=False, default=False)
+ )
+
+ result = dict(
+ changed=False,
+ disks=[]
+ )
+
+ module = AnsibleModule(
+ argument_spec=module_args,
+ supports_check_mode=True
+ )
+
+ disks = filter_disks(module)
+
+ if module.params['match_sector_size']:
+ # pick the most disks with the same sector size
+ sector_sizes = dict()
+ for path, ss in [(path, disks[path]["ssize"]) for path in disks.keys()]:
+ if ss in sector_sizes.keys():
+ sector_sizes[ss].append(path)
+ else:
+ sector_sizes[ss] = [path]
+ disks = [os.path.basename(p) for p in max(sector_sizes.values(), key=len)]
+ else:
+ disks = [os.path.basename(p) for p in disks.keys()]
- if not result['disks']:
+ if not disks:
result['disks'] = "Unable to find unused disk"
else:
- result['disks'].sort()
+ result['disks'] = sorted(disks)[:int(module.params['max_return'])]
module.exit_json(**result)
diff --git a/tests/get_unused_disk.yml b/tests/get_unused_disk.yml
index 685541f..a61487e 100644
--- a/tests/get_unused_disk.yml
+++ b/tests/get_unused_disk.yml
@@ -19,6 +19,7 @@
max_size: "{{ max_size | d(omit) }}"
max_return: "{{ max_return | d(omit) }}"
with_interface: "{{ storage_test_use_interface | d(omit) }}"
+ match_sector_size: "{{ match_sector_size | d(omit) }}"
register: unused_disks_return
- name: Set unused_disks if necessary
diff --git a/tests/unit/test_unused_disk.py b/tests/unit/test_unused_disk.py
index 74c9cf1..ca44d0f 100644
--- a/tests/unit/test_unused_disk.py
+++ b/tests/unit/test_unused_disk.py
@@ -10,9 +10,9 @@ import os
blkid_data_pttype = [('/dev/sdx', '/dev/sdx: PTTYPE=\"dos\"'),
('/dev/sdy', '/dev/sdy: PTTYPE=\"test\"')]
-blkid_data = [('/dev/sdx', 'UUID=\"hello-1234-56789\" TYPE=\"crypto_LUKS\"'),
- ('/dev/sdy', 'UUID=\"this-1s-a-t3st-f0r-ansible\" VERSION=\"LVM2 001\" TYPE=\"LVM2_member\" USAGE=\"raid\"'),
- ('/dev/sdz', 'LABEL=\"/data\" UUID=\"a12bcdef-345g-67h8-90i1-234j56789k10\" VERSION=\"1.0\" TYPE=\"ext4\" USAGE=\"filesystem\"')]
+blkid_data = [('/dev/sdx', 'UUID=\"hello-1234-56789\" TYPE=\"crypto_LUKS\" LOG-SEC=\"512\"'),
+ ('/dev/sdy', 'UUID=\"this-1s-a-t3st-f0r-ansible\" VERSION=\"LVM2 001\" TYPE=\"LVM2_member\" USAGE=\"raid\" LOG-SEC=\"512\"'),
+ ('/dev/sdz', 'LABEL=\"/data\" UUID=\"a12bcdef-345g-67h8-90i1-234j56789k10\" VERSION=\"1.0\" TYPE=\"ext4\" USAGE=\"filesystem\" LOG-SEC=\"512\"')]
holders_data_none = [('/dev/sdx', ''),
('/dev/dm-99', '')]
--
2.46.0
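With the new argument, tests that put several disks into one pool can ask the
helper for disks that share a logical sector size; the later LVM test changes
use exactly this shape. A usage sketch (the sizes are illustrative):

```yaml
- name: Find two unused disks with the same logical sector size
  include_tasks: get_unused_disk.yml
  vars:
    min_size: "10g"
    max_return: 2
    disks_needed: 2
    match_sector_size: true
```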


@@ -0,0 +1,96 @@
From da871866f07e2990f37b3fdea404bbaf091d81b6 Mon Sep 17 00:00:00 2001
From: Vojtech Trefny <vtrefny@redhat.com>
Date: Thu, 30 May 2024 10:41:26 +0200
Subject: [PATCH 05/10] tests: Require same sector size disks for LVM tests
LVM VGs cannot be created on top of disks with different sector
sizes so for tests that need multiple disks we need to make sure
we get unused disks with the same sector size.
Resolves: RHEL-25994
(cherry picked from commit d8c5938c28417cc905a647ec30246a0fc4d19297)
---
tests/tests_change_fs_use_partitions.yml | 2 +-
tests/tests_create_lvm_cache_then_remove.yml | 1 +
tests/tests_create_thinp_then_remove.yml | 1 +
tests/tests_fatals_cache_volume.yml | 1 +
tests/tests_lvm_multiple_disks_multiple_volumes.yml | 1 +
tests/tests_lvm_pool_members.yml | 1 +
6 files changed, 6 insertions(+), 1 deletion(-)
diff --git a/tests/tests_change_fs_use_partitions.yml b/tests/tests_change_fs_use_partitions.yml
index 52afb7f..87fed69 100644
--- a/tests/tests_change_fs_use_partitions.yml
+++ b/tests/tests_change_fs_use_partitions.yml
@@ -31,7 +31,7 @@
include_tasks: get_unused_disk.yml
vars:
min_size: "{{ volume_size }}"
- max_return: 2
+ max_return: 1
- name: Create an LVM partition with the default file system type
include_role:
diff --git a/tests/tests_create_lvm_cache_then_remove.yml b/tests/tests_create_lvm_cache_then_remove.yml
index 1769a78..6b5d0a5 100644
--- a/tests/tests_create_lvm_cache_then_remove.yml
+++ b/tests/tests_create_lvm_cache_then_remove.yml
@@ -57,6 +57,7 @@
min_size: "{{ volume_group_size }}"
max_return: 2
disks_needed: 2
+ match_sector_size: true
- name: Create a cached LVM logical volume under volume group 'foo'
include_role:
diff --git a/tests/tests_create_thinp_then_remove.yml b/tests/tests_create_thinp_then_remove.yml
index bf6c4b1..2e7f046 100644
--- a/tests/tests_create_thinp_then_remove.yml
+++ b/tests/tests_create_thinp_then_remove.yml
@@ -23,6 +23,7 @@
include_tasks: get_unused_disk.yml
vars:
max_return: 3
+ match_sector_size: true
- name: Create a thinpool device
include_role:
diff --git a/tests/tests_fatals_cache_volume.yml b/tests/tests_fatals_cache_volume.yml
index c14cf3f..fcfdbb8 100644
--- a/tests/tests_fatals_cache_volume.yml
+++ b/tests/tests_fatals_cache_volume.yml
@@ -29,6 +29,7 @@
vars:
max_return: 2
disks_needed: 2
+ match_sector_size: true
- name: Verify that creating a cached partition volume fails
include_tasks: verify-role-failed.yml
diff --git a/tests/tests_lvm_multiple_disks_multiple_volumes.yml b/tests/tests_lvm_multiple_disks_multiple_volumes.yml
index 9a01ec5..68f2e76 100644
--- a/tests/tests_lvm_multiple_disks_multiple_volumes.yml
+++ b/tests/tests_lvm_multiple_disks_multiple_volumes.yml
@@ -29,6 +29,7 @@
min_size: "{{ volume_group_size }}"
max_return: 2
disks_needed: 2
+ match_sector_size: true
- name: >-
Create a logical volume spanning two physical volumes that changes its
diff --git a/tests/tests_lvm_pool_members.yml b/tests/tests_lvm_pool_members.yml
index d1b941d..63c10c7 100644
--- a/tests/tests_lvm_pool_members.yml
+++ b/tests/tests_lvm_pool_members.yml
@@ -59,6 +59,7 @@
vars:
min_size: "{{ volume_group_size }}"
disks_needed: 3
+ match_sector_size: true
- name: Create volume group 'foo' with 3 PVs
include_role:
--
2.46.0


@@ -0,0 +1,41 @@
From 705a9db65a230013a9118481082d2bb548cd113d Mon Sep 17 00:00:00 2001
From: Vojtech Trefny <vtrefny@redhat.com>
Date: Fri, 31 May 2024 06:31:52 +0200
Subject: [PATCH 06/10] fix: Fix 'possibly-used-before-assignment' pylint
issues (#440)
Latest pylint added a new check for values used before assignment.
This fixes these issues found in the blivet module. Some of these
are false positives, some real potential issues.
(cherry picked from commit bfaae50586681bb4b0fcad5df6f6adde2b7c8502)
---
library/blivet.py | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/library/blivet.py b/library/blivet.py
index d82b86b..a6715d9 100644
--- a/library/blivet.py
+++ b/library/blivet.py
@@ -642,6 +642,9 @@ class BlivetVolume(BlivetBase):
self._device = None
return # TODO: see if we can create this device w/ the specified name
+ # pylint doesn't understand that "luks_fmt" is always set when "encrypted" is true
+ # pylint: disable=unknown-option-value
+ # pylint: disable=possibly-used-before-assignment
def _update_from_device(self, param_name):
""" Return True if param_name's value was retrieved from a looked-up device. """
log.debug("Updating volume settings from device: %r", self._device)
@@ -1717,6 +1720,8 @@ class BlivetLVMPool(BlivetPool):
if auto_size_dev_count > 0:
calculated_thinlv_size = available_space / auto_size_dev_count
+ else:
+ calculated_thinlv_size = available_space
for thinlv in thinlvs_to_create:
--
2.46.0


@@ -0,0 +1,54 @@
From 18edc9af26684f03e44fe2e22c82a8f93182da4a Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Wed, 5 Jun 2024 08:49:19 -0600
Subject: [PATCH 07/10] test: lsblk can return LOG_SEC or LOG-SEC
get_unused_disk is broken on some systems because `lsblk ... LOG-SEC` can
return `LOG_SEC` with an underscore instead of the requested
`LOG-SEC` with a dash.
(cherry picked from commit 64333ce8aa42f4b961c39a443ac43cc6590097b3)
---
library/find_unused_disk.py | 4 ++--
tests/get_unused_disk.yml | 9 +++++++++
2 files changed, 11 insertions(+), 2 deletions(-)
diff --git a/library/find_unused_disk.py b/library/find_unused_disk.py
index 098f235..270fb58 100644
--- a/library/find_unused_disk.py
+++ b/library/find_unused_disk.py
@@ -149,9 +149,9 @@ def get_disks(module):
if not line:
continue
- m = re.search(r'NAME="(?P<path>[^"]*)" TYPE="(?P<type>[^"]*)" SIZE="(?P<size>\d+)" FSTYPE="(?P<fstype>[^"]*)" LOG-SEC="(?P<ssize>\d+)"', line)
+ m = re.search(r'NAME="(?P<path>[^"]*)" TYPE="(?P<type>[^"]*)" SIZE="(?P<size>\d+)" FSTYPE="(?P<fstype>[^"]*)" LOG[_-]SEC="(?P<ssize>\d+)"', line)
if m is None:
- module.log(line)
+ module.log("Line did not match: " + line)
continue
if m.group('type') != "disk":
diff --git a/tests/get_unused_disk.yml b/tests/get_unused_disk.yml
index a61487e..0402770 100644
--- a/tests/get_unused_disk.yml
+++ b/tests/get_unused_disk.yml
@@ -22,6 +22,15 @@
match_sector_size: "{{ match_sector_size | d(omit) }}"
register: unused_disks_return
+- name: Debug why there are no unused disks
+ shell: |
+ set -x
+ exec 1>&2
+ lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC
+ journalctl -ex
+ changed_when: false
+ when: "'Unable to find unused disk' in unused_disks_return.disks"
+
- name: Set unused_disks if necessary
set_fact:
unused_disks: "{{ unused_disks_return.disks }}"
--
2.46.0


@@ -0,0 +1,34 @@
From aa6e494963a3bded3b1ca7ef5a81e0106e68d5bc Mon Sep 17 00:00:00 2001
From: Jan Pokorny <japokorn@redhat.com>
Date: Thu, 6 Jun 2024 11:54:48 +0200
Subject: [PATCH 08/10] test: lvm pool members test fix
tests_lvm_pool_members started to fail. It tried to create a device with
a requested size (20m) that was less than the minimal allowed size (300m) for that
type of volume. The role automatically resized the device to the allowed size,
which led to a discrepancy between the actual and expected size values.
Increasing the requested device size to be at least the minimum fixes the
issue.
(cherry picked from commit ee740b7b14d09e09a26dd5eb95e8950aeb15147d)
---
tests/tests_lvm_pool_members.yml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tests/tests_lvm_pool_members.yml b/tests/tests_lvm_pool_members.yml
index 63c10c7..320626e 100644
--- a/tests/tests_lvm_pool_members.yml
+++ b/tests/tests_lvm_pool_members.yml
@@ -6,7 +6,7 @@
storage_safe_mode: false
storage_use_partitions: true
volume_group_size: '10g'
- volume_size: '20m'
+ volume_size: '300m'
tags:
- tests::lvm
--
2.46.0


@@ -0,0 +1,40 @@
From d2b59ac3758f51ffac5156e9f006b7ce9d8a28eb Mon Sep 17 00:00:00 2001
From: Vojtech Trefny <vtrefny@redhat.com>
Date: Tue, 4 Jun 2024 10:30:03 +0200
Subject: [PATCH 09/10] fix: Fix expected error message in tests_misc.yml
Different versions of blivet return a different error message when
trying to create a filesystem with invalid parameters.
On Fedora 39 and older:
"Failed to commit changes to disk: (FSError('format failed: 1'),
'/dev/mapper/foo-test1')"
On Fedora 40 and newer:
"Failed to commit changes to disk: Process reported exit code 1:
mke2fs: invalid block size - 512\n"
(cherry picked from commit 7ef66d85bd52f339483b24dbb8bc66e22054b378)
---
tests/tests_misc.yml | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/tests/tests_misc.yml b/tests/tests_misc.yml
index 363d843..432ec16 100644
--- a/tests/tests_misc.yml
+++ b/tests/tests_misc.yml
@@ -68,8 +68,9 @@
include_tasks: verify-role-failed.yml
vars:
__storage_failed_regex: >-
- Failed to commit changes to disk.*FSError.*format failed:
- 1.*/dev/mapper/foo-test1
+ Failed to commit changes to disk.*(FSError.*format failed:
+ 1.*/dev/mapper/foo-test1|
+ Process reported exit code 1: mke2fs: invalid block size - 512)
__storage_failed_msg: >-
Unexpected behavior when creating ext4 filesystem with invalid
parameter
--
2.46.0


@@ -0,0 +1,180 @@
From a86f7e013fe881e477b65509363bbb5af851662f Mon Sep 17 00:00:00 2001
From: Vojtech Trefny <vtrefny@redhat.com>
Date: Fri, 12 Apr 2024 14:45:15 +0200
Subject: [PATCH 10/10] tests: Use blockdev_info to check volume mount points
We can use the information from `lsblk` we already use for other
checks instead of using the Ansible mountinfo facts. This makes
the check simpler and also makes it easier to check for Stratis
volume mount points, because of the complicated Stratis devices
structure in /dev.
(cherry picked from commit 10e657bde68ffa9495b2441ed9f472cf79edbb19)
---
library/blockdev_info.py | 2 +-
tests/test-verify-volume-fs.yml | 51 ++++++++++++++++--------------
tests/test-verify-volume-mount.yml | 48 +++++-----------------------
3 files changed, 37 insertions(+), 64 deletions(-)
diff --git a/library/blockdev_info.py b/library/blockdev_info.py
index 13858fb..ec018de 100644
--- a/library/blockdev_info.py
+++ b/library/blockdev_info.py
@@ -64,7 +64,7 @@ def fixup_md_path(path):
def get_block_info(module):
- buf = module.run_command(["lsblk", "-o", "NAME,FSTYPE,LABEL,UUID,TYPE,SIZE", "-p", "-P", "-a"])[1]
+ buf = module.run_command(["lsblk", "-o", "NAME,FSTYPE,LABEL,UUID,TYPE,SIZE,MOUNTPOINT", "-p", "-P", "-a"])[1]
info = dict()
for line in buf.splitlines():
dev = dict()
diff --git a/tests/test-verify-volume-fs.yml b/tests/test-verify-volume-fs.yml
index 8e488c5..63b2770 100644
--- a/tests/test-verify-volume-fs.yml
+++ b/tests/test-verify-volume-fs.yml
@@ -1,26 +1,31 @@
---
# type
-- name: Verify fs type
- assert:
- that: storage_test_blkinfo.info[storage_test_volume._device].fstype ==
- storage_test_volume.fs_type or
- (storage_test_blkinfo.info[storage_test_volume._device].fstype | length
- == 0 and storage_test_volume.fs_type == "unformatted")
- when: storage_test_volume.fs_type and _storage_test_volume_present
+- name: Check volume filesystem
+ when: storage_test_volume.type != "stratis"
+ block:
+ - name: Verify fs type
+ assert:
+ that: storage_test_blkinfo.info[storage_test_volume._device].fstype ==
+ storage_test_volume.fs_type or
+ (storage_test_blkinfo.info[storage_test_volume._device].fstype | length
+ == 0 and storage_test_volume.fs_type == "unformatted")
+ when:
+ - storage_test_volume.fs_type
+ - _storage_test_volume_present
-# label
-- name: Verify fs label
- assert:
- that: storage_test_blkinfo.info[storage_test_volume._device].label ==
- storage_test_volume.fs_label
- msg: >-
- Volume '{{ storage_test_volume.name }}' labels do not match when they
- should
- ('{{ storage_test_blkinfo.info[storage_test_volume._device].label }}',
- '{{ storage_test_volume.fs_label }}')
- when:
- - _storage_test_volume_present | bool
- # label for GFS2 is set manually with the extra `-t` fs_create_options
- # so we can't verify it here because it was not set with fs_label so
- # the label from blkinfo doesn't match the expected "empty" fs_label
- - storage_test_volume.fs_type != "gfs2"
+ # label
+ - name: Verify fs label
+ assert:
+ that: storage_test_blkinfo.info[storage_test_volume._device].label ==
+ storage_test_volume.fs_label
+ msg: >-
+ Volume '{{ storage_test_volume.name }}' labels do not match when they
+ should
+ ('{{ storage_test_blkinfo.info[storage_test_volume._device].label }}',
+ '{{ storage_test_volume.fs_label }}')
+ when:
+ - _storage_test_volume_present | bool
+ # label for GFS2 is set manually with the extra `-t` fs_create_options
+ # so we can't verify it here because it was not set with fs_label so
+ # the label from blkinfo doesn't match the expected "empty" fs_label
+ - storage_test_volume.fs_type != "gfs2"
diff --git a/tests/test-verify-volume-mount.yml b/tests/test-verify-volume-mount.yml
index cf86b34..17d2a01 100644
--- a/tests/test-verify-volume-mount.yml
+++ b/tests/test-verify-volume-mount.yml
@@ -15,20 +15,13 @@
- name: Set some facts
set_fact:
- storage_test_mount_device_matches: "{{ ansible_mounts |
- selectattr('device', 'match', '^' ~ storage_test_device_path ~ '$') |
- list }}"
- storage_test_mount_point_matches: "{{ ansible_mounts |
- selectattr('mount', 'match',
- '^' ~ mount_prefix ~ storage_test_volume.mount_point ~ '$') |
- list if storage_test_volume.mount_point else [] }}"
- storage_test_mount_expected_match_count: "{{ 1
- if _storage_test_volume_present and storage_test_volume.mount_point and
- storage_test_volume.mount_point.startswith('/')
- else 0 }}"
storage_test_swap_expected_matches: "{{ 1 if
_storage_test_volume_present and
storage_test_volume.fs_type == 'swap' else 0 }}"
+ storage_test_mount_expected_mount_point: "{{
+ '[SWAP]' if storage_test_volume.fs_type == 'swap' else
+ '' if storage_test_volume.mount_point == 'none' else
+ mount_prefix + storage_test_volume.mount_point if storage_test_volume.mount_point else '' }}"
vars:
# assumes /opt which is /var/opt in ostree
mount_prefix: "{{ '/var' if __storage_is_ostree | d(false)
@@ -50,23 +43,12 @@
#
- name: Verify the current mount state by device
assert:
- that: storage_test_mount_device_matches | length ==
- storage_test_mount_expected_match_count | int
+ that: storage_test_blkinfo.info[storage_test_volume._device].mountpoint ==
+ storage_test_mount_expected_mount_point
msg: >-
Found unexpected mount state for volume
'{{ storage_test_volume.name }}' device
- when: _storage_test_volume_present and storage_test_volume.mount_point
-
-#
-# Verify mount directory (state, owner, group, permissions).
-#
-- name: Verify the current mount state by mount point
- assert:
- that: storage_test_mount_point_matches | length ==
- storage_test_mount_expected_match_count | int
- msg: >-
- Found unexpected mount state for volume
- '{{ storage_test_volume.name }}' mount point
+ when: _storage_test_volume_present
- name: Verify mount directory user
assert:
@@ -104,18 +86,6 @@
storage_test_volume.mount_point and
storage_test_volume.mount_mode
-#
-# Verify mount fs type.
-#
-- name: Verify the mount fs type
- assert:
- that: storage_test_mount_point_matches[0].fstype ==
- storage_test_volume.fs_type
- msg: >-
- Found unexpected mount state for volume
- '{{ storage_test_volume.name }} fs type
- when: storage_test_mount_expected_match_count | int == 1
-
#
# Verify swap status.
#
@@ -145,10 +115,8 @@
- name: Unset facts
set_fact:
- storage_test_mount_device_matches: null
- storage_test_mount_point_matches: null
- storage_test_mount_expected_match_count: null
storage_test_swap_expected_matches: null
storage_test_sys_node: null
storage_test_swaps: null
storage_test_found_mount_stat: null
+ storage_test_mount_expected_mount_point: null
--
2.46.0


@@ -0,0 +1,26 @@
From 36acf32d30d106159ba9f2fa88d723d9577c9f15 Mon Sep 17 00:00:00 2001
From: Samuel Bancal <Samuel.Bancal@groupe-t2i.com>
Date: Thu, 14 Mar 2024 10:15:11 +0100
Subject: [PATCH 101/115] fix: Add support for --check flag
Fix: https://github.com/linux-system-roles/podman/issues/133
(cherry picked from commit a47e6a95e2a5ee70714bf315d3e03310365d3650)
---
tasks/main.yml | 1 +
1 file changed, 1 insertion(+)
diff --git a/tasks/main.yml b/tasks/main.yml
index 1b9ca4a..61f1d1c 100644
--- a/tasks/main.yml
+++ b/tasks/main.yml
@@ -21,6 +21,7 @@
when: (__podman_packages | difference(ansible_facts.packages))
- name: Get podman version
+ check_mode: false
command: podman --version
changed_when: false
register: __podman_version_output
--
2.46.0
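The underlying pattern: a read-only command whose registered output later
tasks depend on must still run under `ansible-playbook --check`, so it gets
`check_mode: false` paired with `changed_when: false`. A minimal sketch of
the task as patched:

```yaml
- name: Get podman version
  check_mode: false     # run even in check mode; the command is read-only
  command: podman --version
  changed_when: false   # a version query never changes the system
  register: __podman_version_output
```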


@@ -0,0 +1,56 @@
From 53f83475c59092e2c23d1957c2fc24c8ca4b6ad9 Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Tue, 9 Apr 2024 18:27:25 -0600
Subject: [PATCH 102/115] fix: use correct user for cancel linger file name
Cause: When processing a list of kube or quadlet items, the
code was using the user id associated with the list, not the
item, to specify the linger filename.
Consequence: The linger file does not exist, so the code
does not cancel linger for the actual user.
Fix: Use the correct username to construct the linger filename.
Result: Lingering is cancelled for the correct users.
QE: The test is now in tests_basic.yml
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
(cherry picked from commit 67b88b9aa0a1b1123c2ae24bb7ca4a527924cd13)
---
tasks/cancel_linger.yml | 2 +-
tests/tests_basic.yml | 7 +++++++
2 files changed, 8 insertions(+), 1 deletion(-)
diff --git a/tasks/cancel_linger.yml b/tasks/cancel_linger.yml
index 761778b..ede71fe 100644
--- a/tasks/cancel_linger.yml
+++ b/tasks/cancel_linger.yml
@@ -59,4 +59,4 @@
- __podman_linger_secrets.stdout == ""
changed_when: true
args:
- removes: /var/lib/systemd/linger/{{ __podman_user }}
+ removes: /var/lib/systemd/linger/{{ __podman_linger_user }}
diff --git a/tests/tests_basic.yml b/tests/tests_basic.yml
index a9f01c9..d4f9238 100644
--- a/tests/tests_basic.yml
+++ b/tests/tests_basic.yml
@@ -409,6 +409,13 @@
^[ ]*podman-kube@.+-{{ item[0] }}[.]yml[.]service[ ]+loaded[
]+active
+ - name: Ensure no linger
+ stat:
+ path: /var/lib/systemd/linger/{{ item[1] }}
+ loop: "{{ test_names_users }}"
+ register: __stat
+ failed_when: __stat.stat.exists
+
rescue:
- name: Dump journal
command: journalctl -ex
--
2.46.0


@@ -0,0 +1,28 @@
From dd93ef65b0d1929184d458914386086fca8b8d7a Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Wed, 10 Apr 2024 16:06:28 -0600
Subject: [PATCH 103/115] test: do not check for root linger
Do not check if there is a linger file for root.
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
(cherry picked from commit 2b29e049daa28ba6c3b38f514cff9c62be5f3caf)
---
tests/tests_basic.yml | 1 +
1 file changed, 1 insertion(+)
diff --git a/tests/tests_basic.yml b/tests/tests_basic.yml
index d4f9238..d578b15 100644
--- a/tests/tests_basic.yml
+++ b/tests/tests_basic.yml
@@ -412,6 +412,7 @@
- name: Ensure no linger
stat:
path: /var/lib/systemd/linger/{{ item[1] }}
+ when: item[1] != "root"
loop: "{{ test_names_users }}"
register: __stat
failed_when: __stat.stat.exists
--
2.46.0


@@ -0,0 +1,210 @@
From b2e79348094ea8d89b71727d82a80a9f3cfbb1ce Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Tue, 9 Apr 2024 18:28:19 -0600
Subject: [PATCH 104/115] fix: do not use become for changing hostdir
ownership, and expose subuid/subgid info
When creating host directories, do not use `become`, because if
it needs to change ownership, that must be done by `root`, not
as the rootless podman user.
In order to test this, I have changed the role to export the subuid and subgid
information for the rootless users as two dictionaries:
`podman_subuid_info` and `podman_subgid_info`. See `README.md` for
usage.
NOTE that depending on the namespace used by your containers, you might not
be able to use the subuid and subgid information, which comes from `getsubids`
if available, or directly from the files `/etc/subuid` and `/etc/subgid` on
the host.
QE: The test tests_basic.yml has been extended for this.
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
(cherry picked from commit 3d02eb725355088df6c707717547f5ad6b7c400c)
---
README.md | 28 ++++++++++++
tasks/create_update_kube_spec.yml | 2 -
tasks/create_update_quadlet_spec.yml | 2 -
tasks/handle_user_group.yml | 66 +++++++++++++++++++++-------
tests/tests_basic.yml | 2 +
5 files changed, 79 insertions(+), 21 deletions(-)
diff --git a/README.md b/README.md
index ea1edfb..e5a7c12 100644
--- a/README.md
+++ b/README.md
@@ -418,6 +418,34 @@ PodmanArgs=--secret=my-app-pwd,type=env,target=MYAPP_PASSWORD
{% endif %}
```
+### podman_subuid_info, podman_subgid_info
+
+The role needs to ensure any users and groups are present in the subuid and
+subgid information. Once it extracts this data, it will be available in
+`podman_subuid_info` and `podman_subgid_info`. These are dicts. The key is the
+user or group name, and the value is a `dict` with two fields:
+
+* `start` - the start of the id range for that user or group, as an `int`
+* `range` - the id range for that user or group, as an `int`
+
+```yaml
+podman_host_directories:
+ "/var/lib/db":
+ mode: "0777"
+ owner: "{{ 1001 + podman_subuid_info['dbuser']['start'] - 1 }}"
+ group: "{{ 1001 + podman_subgid_info['dbgroup']['start'] - 1 }}"
+```
+
+Where `1001` is the uid for user `dbuser`, and `1001` is the gid for group
+`dbgroup`.
+
+**NOTE**: depending on the namespace used by your containers, you might not be
+able to use the subuid and subgid information, which comes from `getsubids` if
+available, or directly from the files `/etc/subuid` and `/etc/subgid` on the
+host. See
+[podman user namespace modes](https://www.redhat.com/sysadmin/rootless-podman-user-namespace-modes)
+for more information.
+
## Example Playbooks
Create rootless container with volume mount:
diff --git a/tasks/create_update_kube_spec.yml b/tasks/create_update_kube_spec.yml
index 95d7d35..7a8ba9c 100644
--- a/tasks/create_update_kube_spec.yml
+++ b/tasks/create_update_kube_spec.yml
@@ -32,8 +32,6 @@
__defaults: "{{ {'path': item} | combine(__podman_hostdirs_defaults) |
combine(__owner_group) }}"
loop: "{{ __podman_volumes }}"
- become: "{{ __podman_rootless | ternary(true, omit) }}"
- become_user: "{{ __podman_rootless | ternary(__podman_user, omit) }}"
when:
- podman_create_host_directories | bool
- __podman_volumes | d([]) | length > 0
diff --git a/tasks/create_update_quadlet_spec.yml b/tasks/create_update_quadlet_spec.yml
index c3e0095..062c105 100644
--- a/tasks/create_update_quadlet_spec.yml
+++ b/tasks/create_update_quadlet_spec.yml
@@ -16,8 +16,6 @@
__defaults: "{{ {'path': item} | combine(__podman_hostdirs_defaults) |
combine(__owner_group) }}"
loop: "{{ __podman_volumes }}"
- become: "{{ __podman_rootless | ternary(true, omit) }}"
- become_user: "{{ __podman_rootless | ternary(__podman_user, omit) }}"
when:
- podman_create_host_directories | bool
- __podman_volumes | d([]) | length > 0
diff --git a/tasks/handle_user_group.yml b/tasks/handle_user_group.yml
index 17300b6..ea9984d 100644
--- a/tasks/handle_user_group.yml
+++ b/tasks/handle_user_group.yml
@@ -52,10 +52,26 @@
- name: Check user with getsubids
command: getsubids {{ __podman_user | quote }}
changed_when: false
+ register: __podman_register_subuids
- name: Check group with getsubids
command: getsubids -g {{ __podman_group_name | quote }}
changed_when: false
+ register: __podman_register_subgids
+
+ - name: Set user subuid and subgid info
+ set_fact:
+ podman_subuid_info: "{{ podman_subuid_info | d({}) |
+ combine({__podman_user:
+ {'start': __subuid_data[2] | int, 'range': __subuid_data[3] | int}})
+ if __subuid_data | length > 0 else podman_subuid_info | d({}) }}"
+ podman_subgid_info: "{{ podman_subgid_info | d({}) |
+ combine({__podman_group_name:
+ {'start': __subgid_data[2] | int, 'range': __subgid_data[3] | int}})
+ if __subgid_data | length > 0 else podman_subgid_info | d({}) }}"
+ vars:
+ __subuid_data: "{{ __podman_register_subuids.stdout.split() | list }}"
+ __subgid_data: "{{ __podman_register_subgids.stdout.split() | list }}"
- name: Check subuid, subgid files if no getsubids
when:
@@ -63,32 +79,48 @@
- __podman_user not in ["root", "0"]
- __podman_group not in ["root", "0"]
block:
- - name: Check if user is in subuid file
- find:
- path: /etc
- pattern: subuid
- use_regex: true
- contains: "^{{ __podman_user }}:.*$"
- register: __podman_uid_line_found
+ - name: Get subuid file
+ slurp:
+ path: /etc/subuid
+ register: __podman_register_subuids
+
+ - name: Get subgid file
+ slurp:
+ path: /etc/subgid
+ register: __podman_register_subgids
+
+ - name: Set user subuid and subgid info
+ set_fact:
+ podman_subuid_info: "{{ podman_subuid_info | d({}) |
+ combine({__podman_user:
+ {'start': __subuid_data[1] | int, 'range': __subuid_data[2] | int}})
+ if __subuid_data else podman_subuid_info | d({}) }}"
+ podman_subgid_info: "{{ podman_subgid_info | d({}) |
+ combine({__podman_group_name:
+ {'start': __subgid_data[1] | int, 'range': __subgid_data[2] | int}})
+ if __subgid_data else podman_subgid_info | d({}) }}"
+ vars:
+ __subuid_match_line: "{{
+ (__podman_register_subuids.content | b64decode).split('\n') | list |
+ select('match', '^' ~ __podman_user ~ ':') | list }}"
+ __subuid_data: "{{ __subuid_match_line[0].split(':') | list
+ if __subuid_match_line else null }}"
+ __subgid_match_line: "{{
+ (__podman_register_subgids.content | b64decode).split('\n') | list |
+ select('match', '^' ~ __podman_group_name ~ ':') | list }}"
+ __subgid_data: "{{ __subgid_match_line[0].split(':') | list
+ if __subgid_match_line else null }}"
- name: Fail if user not in subuid file
fail:
msg: >
The given podman user [{{ __podman_user }}] is not in the
/etc/subuid file - cannot continue
- when: not __podman_uid_line_found.matched
-
- - name: Check if group is in subgid file
- find:
- path: /etc
- pattern: subgid
- use_regex: true
- contains: "^{{ __podman_group_name }}:.*$"
- register: __podman_gid_line_found
+ when: not __podman_user in podman_subuid_info
- name: Fail if group not in subgid file
fail:
msg: >
The given podman group [{{ __podman_group_name }}] is not in the
/etc/subgid file - cannot continue
- when: not __podman_gid_line_found.matched
+ when: not __podman_group_name in podman_subuid_info
diff --git a/tests/tests_basic.yml b/tests/tests_basic.yml
index d578b15..121c3a7 100644
--- a/tests/tests_basic.yml
+++ b/tests/tests_basic.yml
@@ -8,6 +8,8 @@
podman_host_directories:
"/tmp/httpd1-create":
mode: "0777"
+ owner: "{{ 1001 + podman_subuid_info['user1']['start'] - 1 }}"
+ group: "{{ 1001 + podman_subgid_info['user1']['start'] - 1 }}"
podman_run_as_user: root
test_names_users:
- [httpd1, user1, 1001]
--
2.46.0


@@ -0,0 +1,42 @@
From 7978bed4d52e44feae114ba56e9b5035b7dd2c1c Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Wed, 17 Apr 2024 10:14:21 -0600
Subject: [PATCH 105/115] chore: change no_log false to true; fix comment
Forgot to change a `no_log: false` back to `no_log: true` when debugging.
Fix an error in a comment
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
(cherry picked from commit b37ee8fc7e12317660cca765760c32bd4ba91035)
---
tasks/handle_secret.yml | 2 +-
vars/main.yml | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/tasks/handle_secret.yml b/tasks/handle_secret.yml
index b3677ef..02bc15b 100644
--- a/tasks/handle_secret.yml
+++ b/tasks/handle_secret.yml
@@ -39,7 +39,7 @@
become: "{{ __podman_rootless | ternary(true, omit) }}"
become_user: "{{ __podman_rootless | ternary(__podman_user, omit) }}"
when: not __podman_rootless or __podman_xdg_stat.stat.exists
- no_log: false
+ no_log: true
vars:
__params: |
{% set rc = {} %}
diff --git a/vars/main.yml b/vars/main.yml
index 47293c5..38402ff 100644
--- a/vars/main.yml
+++ b/vars/main.yml
@@ -74,5 +74,5 @@ __podman_user_kube_path: "/.config/containers/ansible-kubernetes.d"
# location for system quadlet files
__podman_system_quadlet_path: "/etc/containers/systemd"
-# location for user kubernetes yaml files
+# location for user quadlet files
__podman_user_quadlet_path: "/.config/containers/systemd"
--
2.46.0


@@ -0,0 +1,214 @@
From 07053a415b4a0bde557f28f6f607250915e908e6 Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Wed, 17 Apr 2024 11:35:52 -0600
Subject: [PATCH 106/115] fix: make kube cleanup idempotent
Cause: The task that calls podman_play was not checking if the kube yaml
file existed when cleaning up.
Consequence: The task would give an error that the pod could not be
removed.
Fix: Do not attempt to remove the pod if the kube yaml file does not
exist.
Result: Calling the podman role repeatedly to remove a kube spec
will not fail and will not report changes for subsequent removals.
QE: tests_basic.yml has been changed to check for this case
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
(cherry picked from commit e506f39b6608613a5801190091a72b013b85a888)
---
tasks/cleanup_kube_spec.yml | 9 +++++-
tests/tests_basic.yml | 62 ++++++++++++++++++++++++++-----------
2 files changed, 52 insertions(+), 19 deletions(-)
diff --git a/tasks/cleanup_kube_spec.yml b/tasks/cleanup_kube_spec.yml
index c864179..b6b47bd 100644
--- a/tasks/cleanup_kube_spec.yml
+++ b/tasks/cleanup_kube_spec.yml
@@ -25,6 +25,11 @@
vars:
__service_error: Could not find the requested service
+- name: Check if kube file exists
+ stat:
+ path: "{{ __podman_kube_file }}"
+ register: __podman_kube_file_stat
+
- name: Remove pod/containers
containers.podman.podman_play: "{{ __podman_kube_spec |
combine({'kube_file': __podman_kube_file}) }}"
@@ -33,7 +38,9 @@
become: "{{ __podman_rootless | ternary(true, omit) }}"
become_user: "{{ __podman_rootless | ternary(__podman_user, omit) }}"
register: __podman_removed
- when: not __podman_rootless or __podman_xdg_stat.stat.exists
+ when:
+ - not __podman_rootless or __podman_xdg_stat.stat.exists
+ - __podman_kube_file_stat.stat.exists
- name: Remove kubernetes yaml file
file:
diff --git a/tests/tests_basic.yml b/tests/tests_basic.yml
index 121c3a7..b8ddc50 100644
--- a/tests/tests_basic.yml
+++ b/tests/tests_basic.yml
@@ -6,13 +6,16 @@
- vars/test_vars.yml
vars:
podman_host_directories:
- "/tmp/httpd1-create":
+ "{{ __test_tmpdir.path ~ '/httpd1-create' }}":
mode: "0777"
- owner: "{{ 1001 + podman_subuid_info['user1']['start'] - 1 }}"
- group: "{{ 1001 + podman_subgid_info['user1']['start'] - 1 }}"
+ owner: "{{ 1001 +
+ podman_subuid_info[__podman_test_username]['start'] - 1 }}"
+ group: "{{ 1001 +
+ podman_subgid_info[__podman_test_username]['start'] - 1 }}"
podman_run_as_user: root
+ __podman_test_username: podman_basic_user
test_names_users:
- - [httpd1, user1, 1001]
+ - [httpd1, "{{ __podman_test_username }}", 1001]
- [httpd2, root, 0]
- [httpd3, root, 0]
podman_create_host_directories: true
@@ -26,7 +29,7 @@
- state: started
debug: true
log_level: debug
- run_as_user: user1
+ run_as_user: "{{ __podman_test_username }}"
kube_file_content:
apiVersion: v1
kind: Pod
@@ -57,10 +60,10 @@
volumes:
- name: www
hostPath:
- path: /tmp/httpd1
+ path: "{{ __test_tmpdir.path ~ '/httpd1' }}"
- name: create
hostPath:
- path: /tmp/httpd1-create
+ path: "{{ __test_tmpdir.path ~ '/httpd1-create' }}"
- state: started
debug: true
log_level: debug
@@ -94,10 +97,10 @@
volumes:
- name: www
hostPath:
- path: /tmp/httpd2
+ path: "{{ __test_tmpdir.path ~ '/httpd2' }}"
- name: create
hostPath:
- path: /tmp/httpd2-create
+ path: "{{ __test_tmpdir.path ~ '/httpd2-create' }}"
__podman_kube_file_content: |
apiVersion: v1
kind: Pod
@@ -128,11 +131,23 @@
volumes:
- name: www
hostPath:
- path: /tmp/httpd3
+ path: "{{ __test_tmpdir.path ~ '/httpd3' }}"
- name: create
hostPath:
- path: /tmp/httpd3-create
+ path: "{{ __test_tmpdir.path ~ '/httpd3-create' }}"
tasks:
+ - name: Create tmpdir for testing
+ tempfile:
+ state: directory
+ prefix: lsr_
+ suffix: _podman
+ register: __test_tmpdir
+
+ - name: Change tmpdir permissions
+ file:
+ path: "{{ __test_tmpdir.path }}"
+ mode: "0777"
+
- name: Run basic tests
vars:
__podman_use_kube_file:
@@ -156,7 +171,7 @@
- name: Create user
user:
- name: user1
+ name: "{{ __podman_test_username }}"
uid: 1001
- name: Create tempfile for kube_src
@@ -171,12 +186,12 @@
copy:
content: "{{ __podman_kube_file_content }}"
dest: "{{ __kube_file_src.path }}"
- mode: 0600
+ mode: "0600"
delegate_to: localhost
- name: Create host directories for data
file:
- path: /tmp/{{ item[0] }}
+ path: "{{ __test_tmpdir.path ~ '/' ~ item[0] }}"
state: directory
mode: "0755"
owner: "{{ item[1] }}"
@@ -184,7 +199,7 @@
- name: Create data files
copy:
- dest: /tmp/{{ item[0] }}/index.txt
+ dest: "{{ __test_tmpdir.path ~ '/' ~ item[0] ~ '/index.txt' }}"
content: "123"
mode: "0644"
owner: "{{ item[1] }}"
@@ -315,7 +330,7 @@
loop: [15001, 15002]
- name: Check host directories
- command: ls -alrtF /tmp/{{ item[0] }}-create
+ command: ls -alrtF {{ __test_tmpdir.path ~ '/' ~ item[0] }}-create
loop: "{{ test_names_users }}"
changed_when: false
@@ -419,6 +434,18 @@
register: __stat
failed_when: __stat.stat.exists
+ - name: Remove pods and units again - test idempotence
+ include_role:
+ name: linux-system-roles.podman
+ vars:
+ # noqa jinja[spacing]
+ podman_kube_specs: "{{ __podman_kube_specs |
+ union([__podman_use_kube_file]) |
+ map('combine', {'state':'absent'}) | list }}"
+ podman_create_host_directories: false
+ podman_firewall: []
+ podman_selinux_ports: []
+
rescue:
- name: Dump journal
command: journalctl -ex
@@ -438,9 +465,8 @@
- name: Clean up host directories
file:
- path: /tmp/{{ item }}
+ path: "{{ __test_tmpdir.path }}"
state: absent
- loop: [httpd1, httpd2, httpd3]
tags:
- tests::cleanup
--
2.46.0


@@ -0,0 +1,35 @@
From 0a8ce32cdc093c388718d4fe28007259ac86854d Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Thu, 18 Apr 2024 08:39:33 -0600
Subject: [PATCH 107/115] chore: use none in jinja code, not null
Must use `none` in Jinja code, not `null`, which is used in YAML.
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
(cherry picked from commit fdf98595e9ecdacfed80d40c2539b18c7d715368)
---
tasks/handle_user_group.yml | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tasks/handle_user_group.yml b/tasks/handle_user_group.yml
index ea9984d..0b98d99 100644
--- a/tasks/handle_user_group.yml
+++ b/tasks/handle_user_group.yml
@@ -104,12 +104,12 @@
(__podman_register_subuids.content | b64decode).split('\n') | list |
select('match', '^' ~ __podman_user ~ ':') | list }}"
__subuid_data: "{{ __subuid_match_line[0].split(':') | list
- if __subuid_match_line else null }}"
+ if __subuid_match_line else none }}"
__subgid_match_line: "{{
(__podman_register_subgids.content | b64decode).split('\n') | list |
select('match', '^' ~ __podman_group_name ~ ':') | list }}"
__subgid_data: "{{ __subgid_match_line[0].split(':') | list
- if __subgid_match_line else null }}"
+ if __subgid_match_line else none }}"
- name: Fail if user not in subuid file
fail:
--
2.46.0
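The distinction being fixed: `null` is the YAML scalar for a null value, but
inside a `{{ ... }}` Jinja expression the Python-style literal `none` is
required; `null` there resolves to an undefined variable. A small
illustration:

```yaml
- name: Contrast YAML null with Jinja none
  set_fact:
    from_yaml: null  # YAML null literal, fine outside Jinja
    from_jinja: "{{ matches[0].split(':') | list if matches else none }}"
  vars:
    matches: []  # illustrative; empty here, so from_jinja becomes none
```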


@@ -0,0 +1,44 @@
From 4824891e596c197e49557d9d2679cabc76e598e9 Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Fri, 19 Apr 2024 07:33:41 -0600
Subject: [PATCH 108/115] uid 1001 conflicts on some test systems
(cherry picked from commit 5b7ad16d23b78f6f0f68638c0d69015ebb26b3b0)
---
tests/tests_basic.yml | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/tests/tests_basic.yml b/tests/tests_basic.yml
index b8ddc50..c91cc5f 100644
--- a/tests/tests_basic.yml
+++ b/tests/tests_basic.yml
@@ -8,14 +8,14 @@
podman_host_directories:
"{{ __test_tmpdir.path ~ '/httpd1-create' }}":
mode: "0777"
- owner: "{{ 1001 +
+ owner: "{{ 3001 +
podman_subuid_info[__podman_test_username]['start'] - 1 }}"
- group: "{{ 1001 +
+ group: "{{ 3001 +
podman_subgid_info[__podman_test_username]['start'] - 1 }}"
podman_run_as_user: root
__podman_test_username: podman_basic_user
test_names_users:
- - [httpd1, "{{ __podman_test_username }}", 1001]
+ - [httpd1, "{{ __podman_test_username }}", 3001]
- [httpd2, root, 0]
- [httpd3, root, 0]
podman_create_host_directories: true
@@ -172,7 +172,7 @@
- name: Create user
user:
name: "{{ __podman_test_username }}"
- uid: 1001
+ uid: 3001
- name: Create tempfile for kube_src
tempfile:
--
2.46.0


@@ -0,0 +1,26 @@
From 2343663a17a42e71aa5b78ad5deca72823a0afb0 Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Mon, 3 Jun 2024 13:15:07 -0600
Subject: [PATCH 109/115] fix ansible-lint octal value issues
(cherry picked from commit c684c68151f106b4a494bed865e138a0b54ecb43)
---
tests/tests_quadlet_demo.yml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tests/tests_quadlet_demo.yml b/tests/tests_quadlet_demo.yml
index a719f9c..259a694 100644
--- a/tests/tests_quadlet_demo.yml
+++ b/tests/tests_quadlet_demo.yml
@@ -98,7 +98,7 @@
get_url:
url: https://localhost:8000
dest: /run/out
- mode: 0600
+ mode: "0600"
validate_certs: false
register: __web_status
until: __web_status is success
--
2.46.0


@@ -0,0 +1,308 @@
From 6a5722ce2a591c57e50ac4ff702c810bf452431d Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Thu, 6 Jun 2024 15:20:22 -0600
Subject: [PATCH 110/115] fix: grab name of network to remove from quadlet file
Cause: The code was using "systemd-" + name of quadlet for
the network name when removing networks.
Consequence: If the quadlet had a different NetworkName, the
removal would fail.
Fix: Grab the network quadlet file and grab the NetworkName from
the file to use to remove the network.
Result: The removal of quadlet networks will work both with and
without a custom NetworkName in the quadlet file.
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
This also adds a fix for el10 and Fedora which installs the iptables-nft
package to allow rootless podman to manage networks using nftables.
(cherry picked from commit bcd5a750250736a07605c72f98e50c1babcddf16)
---
.ostree/packages-runtime-CentOS-10.txt | 3 ++
.ostree/packages-runtime-Fedora.txt | 3 ++
.ostree/packages-runtime-RedHat-10.txt | 3 ++
tasks/cleanup_quadlet_spec.yml | 43 +++++++++++++++++++++++++-
tests/files/quadlet-basic.network | 5 +++
tests/tests_quadlet_basic.yml | 31 +++++++------------
tests/tests_quadlet_demo.yml | 19 +++---------
vars/CentOS_10.yml | 7 +++++
vars/Fedora.yml | 7 +++++
vars/RedHat_10.yml | 7 +++++
10 files changed, 94 insertions(+), 34 deletions(-)
create mode 100644 .ostree/packages-runtime-CentOS-10.txt
create mode 100644 .ostree/packages-runtime-Fedora.txt
create mode 100644 .ostree/packages-runtime-RedHat-10.txt
create mode 100644 tests/files/quadlet-basic.network
create mode 100644 vars/CentOS_10.yml
create mode 100644 vars/Fedora.yml
create mode 100644 vars/RedHat_10.yml
diff --git a/.ostree/packages-runtime-CentOS-10.txt b/.ostree/packages-runtime-CentOS-10.txt
new file mode 100644
index 0000000..16b8eae
--- /dev/null
+++ b/.ostree/packages-runtime-CentOS-10.txt
@@ -0,0 +1,3 @@
+iptables-nft
+podman
+shadow-utils-subid
diff --git a/.ostree/packages-runtime-Fedora.txt b/.ostree/packages-runtime-Fedora.txt
new file mode 100644
index 0000000..16b8eae
--- /dev/null
+++ b/.ostree/packages-runtime-Fedora.txt
@@ -0,0 +1,3 @@
+iptables-nft
+podman
+shadow-utils-subid
diff --git a/.ostree/packages-runtime-RedHat-10.txt b/.ostree/packages-runtime-RedHat-10.txt
new file mode 100644
index 0000000..16b8eae
--- /dev/null
+++ b/.ostree/packages-runtime-RedHat-10.txt
@@ -0,0 +1,3 @@
+iptables-nft
+podman
+shadow-utils-subid
diff --git a/tasks/cleanup_quadlet_spec.yml b/tasks/cleanup_quadlet_spec.yml
index ba68771..8ea069b 100644
--- a/tasks/cleanup_quadlet_spec.yml
+++ b/tasks/cleanup_quadlet_spec.yml
@@ -30,6 +30,43 @@
vars:
__service_error: Could not find the requested service
+- name: See if quadlet file exists
+ stat:
+ path: "{{ __podman_quadlet_file }}"
+ register: __podman_network_stat
+ when: __podman_quadlet_type == "network"
+
+- name: Get network quadlet network name
+ when:
+ - __podman_quadlet_type == "network"
+ - __podman_network_stat.stat.exists
+ block:
+ - name: Create tempdir
+ tempfile:
+ prefix: podman_
+ suffix: _lsr.ini
+ state: directory
+ register: __podman_network_tmpdir
+ delegate_to: localhost
+
+ - name: Fetch the network quadlet
+ fetch:
+ dest: "{{ __podman_network_tmpdir.path }}/network.ini"
+ src: "{{ __podman_quadlet_file }}"
+ flat: true
+
+ - name: Get the network name
+ set_fact:
+ __podman_network_name: "{{
+ lookup('ini', 'NetworkName section=Network file=' ~
+ __podman_network_tmpdir.path ~ '/network.ini') }}"
+ always:
+ - name: Remove tempdir
+ file:
+ path: "{{ __podman_network_tmpdir.path }}"
+ state: absent
+ delegate_to: localhost
+
- name: Remove quadlet file
file:
path: "{{ __podman_quadlet_file }}"
@@ -62,10 +99,14 @@
changed_when: true
- name: Remove network
- command: podman network rm systemd-{{ __podman_quadlet_name }}
+ command: podman network rm {{ __name | quote }}
changed_when: true
when: __podman_quadlet_type == "network"
environment:
XDG_RUNTIME_DIR: "{{ __podman_xdg_runtime_dir }}"
become: "{{ __podman_rootless | ternary(true, omit) }}"
become_user: "{{ __podman_rootless | ternary(__podman_user, omit) }}"
+ vars:
+ __name: "{{ __podman_network_name if
+ __podman_network_name | d('') | length > 0
+ else 'systemd-' ~ __podman_quadlet_name }}"
diff --git a/tests/files/quadlet-basic.network b/tests/files/quadlet-basic.network
new file mode 100644
index 0000000..7db6e0d
--- /dev/null
+++ b/tests/files/quadlet-basic.network
@@ -0,0 +1,5 @@
+[Network]
+Subnet=192.168.29.0/24
+Gateway=192.168.29.1
+Label=app=wordpress
+NetworkName=quadlet-basic
diff --git a/tests/tests_quadlet_basic.yml b/tests/tests_quadlet_basic.yml
index 1b472be..2891b1a 100644
--- a/tests/tests_quadlet_basic.yml
+++ b/tests/tests_quadlet_basic.yml
@@ -19,12 +19,8 @@
state: present
data: "{{ __json_secret_data | string }}"
__podman_quadlet_specs:
- - name: quadlet-basic
- type: network
- Network:
- Subnet: 192.168.29.0/24
- Gateway: 192.168.29.1
- Label: app=wordpress
+ - file_src: files/quadlet-basic.network
+ state: started
- name: quadlet-basic-mysql
type: volume
Volume: {}
@@ -197,7 +193,8 @@
failed_when: not __stat.stat.exists
# must clean up networks last - cannot remove a network
- # in use by a container
+ # in use by a container - using reverse assumes the network
+ # is defined first in the list
- name: Cleanup user
include_role:
name: linux-system-roles.podman
@@ -206,10 +203,7 @@
__absent: {"state":"absent"}
podman_secrets: "{{ __podman_secrets | map('combine', __absent) |
list }}"
- podman_quadlet_specs: "{{ ((__podman_quadlet_specs |
- rejectattr('type', 'match', '^network$') | list) +
- (__podman_quadlet_specs |
- selectattr('type', 'match', '^network$') | list)) |
+ podman_quadlet_specs: "{{ __podman_quadlet_specs | reverse |
map('combine', __absent) | list }}"
- name: Ensure no linger
@@ -242,6 +236,11 @@
changed_when: false
rescue:
+ - name: Check AVCs
+ command: grep type=AVC /var/log/audit/audit.log
+ changed_when: false
+ failed_when: false
+
- name: Dump journal
command: journalctl -ex
changed_when: false
@@ -258,10 +257,7 @@
__absent: {"state":"absent"}
podman_secrets: "{{ __podman_secrets |
map('combine', __absent) | list }}"
- podman_quadlet_specs: "{{ ((__podman_quadlet_specs |
- rejectattr('type', 'match', '^network$') | list) +
- (__podman_quadlet_specs |
- selectattr('type', 'match', '^network$') | list)) |
+ podman_quadlet_specs: "{{ __podman_quadlet_specs | reverse |
map('combine', __absent) | list }}"
- name: Remove test user
@@ -277,10 +273,7 @@
__absent: {"state":"absent"}
podman_secrets: "{{ __podman_secrets |
map('combine', __absent) | list }}"
- podman_quadlet_specs: "{{ ((__podman_quadlet_specs |
- rejectattr('type', 'match', '^network$') | list) +
- (__podman_quadlet_specs |
- selectattr('type', 'match', '^network$') | list)) |
+ podman_quadlet_specs: "{{ __podman_quadlet_specs | reverse |
map('combine', __absent) | list }}"
rescue:
diff --git a/tests/tests_quadlet_demo.yml b/tests/tests_quadlet_demo.yml
index 259a694..b6c27ef 100644
--- a/tests/tests_quadlet_demo.yml
+++ b/tests/tests_quadlet_demo.yml
@@ -11,7 +11,7 @@
podman_use_copr: false # disable copr for CI testing
podman_fail_if_too_old: false
podman_create_host_directories: true
- podman_quadlet_specs:
+ __podman_quadlet_specs:
- file_src: quadlet-demo.network
- file_src: quadlet-demo-mysql.volume
- template_src: quadlet-demo-mysql.container.j2
@@ -45,6 +45,7 @@
include_role:
name: linux-system-roles.podman
vars:
+ podman_quadlet_specs: "{{ __podman_quadlet_specs }}"
podman_pull_retry: true
podman_secrets:
- name: mysql-root-password-container
@@ -149,19 +150,9 @@
include_role:
name: linux-system-roles.podman
vars:
- podman_quadlet_specs:
- - template_src: quadlet-demo-mysql.container.j2
- state: absent
- - file_src: quadlet-demo-mysql.volume
- state: absent
- - file_src: envoy-proxy-configmap.yml
- state: absent
- - file_src: quadlet-demo.kube
- state: absent
- - template_src: quadlet-demo.yml.j2
- state: absent
- - file_src: quadlet-demo.network
- state: absent
+ __absent: {"state":"absent"}
+ podman_quadlet_specs: "{{ __podman_quadlet_specs |
+ reverse | map('combine', __absent) | list }}"
podman_secrets:
- name: mysql-root-password-container
state: absent
diff --git a/vars/CentOS_10.yml b/vars/CentOS_10.yml
new file mode 100644
index 0000000..83589d5
--- /dev/null
+++ b/vars/CentOS_10.yml
@@ -0,0 +1,7 @@
+# SPDX-License-Identifier: MIT
+---
+# shadow-utils-subid for getsubids
+__podman_packages:
+ - iptables-nft
+ - podman
+ - shadow-utils-subid
diff --git a/vars/Fedora.yml b/vars/Fedora.yml
new file mode 100644
index 0000000..83589d5
--- /dev/null
+++ b/vars/Fedora.yml
@@ -0,0 +1,7 @@
+# SPDX-License-Identifier: MIT
+---
+# shadow-utils-subid for getsubids
+__podman_packages:
+ - iptables-nft
+ - podman
+ - shadow-utils-subid
diff --git a/vars/RedHat_10.yml b/vars/RedHat_10.yml
new file mode 100644
index 0000000..83589d5
--- /dev/null
+++ b/vars/RedHat_10.yml
@@ -0,0 +1,7 @@
+# SPDX-License-Identifier: MIT
+---
+# shadow-utils-subid for getsubids
+__podman_packages:
+ - iptables-nft
+ - podman
+ - shadow-utils-subid
--
2.46.0

0111-fix-proper-cleanup-for-networks-ensure-cleanup-of-re.patch

@@ -0,0 +1,615 @@
From e11e1ff198f0840fcef6cbe75c74ca69dd22f694 Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Mon, 8 Jul 2024 16:35:29 -0600
Subject: [PATCH 111/115] fix: proper cleanup for networks; ensure cleanup of
resources
Cause: The code was not managing network systemd quadlet units.
Consequence: Network systemd quadlet units were not being stopped and
disabled. Subsequent runs would fail due to the network units not
being cleaned up properly.
Fix: The role manages network systemd quadlet units, including stopping
and removing.
Result: Systemd quadlet network units are properly cleaned up.
In addition, improve the removal of all types of quadlet resources,
and include code that can be used to test and debug quadlet resource
removal.
(cherry picked from commit a85908ec7f6f8e19908f8d4d18d6d7b64ab1d31e)
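For reference, a minimal sketch of a playbook that exercises the new cleanup
path; the host pattern and quadlet name are illustrative, while the variables
are the ones added or used by this patch:

- hosts: all
  tasks:
    - name: Remove quadlet units and prune now-unused images
      include_role:
        name: linux-system-roles.podman
      vars:
        podman_prune_images: true  # option added by this patch, default false
        podman_quadlet_specs:
          - name: my-app           # hypothetical quadlet network unit
            type: network
            state: absent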
---
README.md | 6 +
defaults/main.yml | 4 +
tasks/cancel_linger.yml | 2 +-
tasks/cleanup_quadlet_spec.yml | 188 +++++++++++++-----
tasks/handle_quadlet_spec.yml | 2 +
tasks/manage_linger.yml | 2 +-
tasks/parse_quadlet_file.yml | 57 ++++++
tests/files/quadlet-basic.network | 2 +-
.../templates/quadlet-demo-mysql.container.j2 | 2 +-
tests/tests_quadlet_basic.yml | 69 ++++++-
tests/tests_quadlet_demo.yml | 33 +++
11 files changed, 309 insertions(+), 58 deletions(-)
create mode 100644 tasks/parse_quadlet_file.yml
diff --git a/README.md b/README.md
index e5a7c12..8b6496e 100644
--- a/README.md
+++ b/README.md
@@ -388,6 +388,12 @@ a newer version. For example, if you attempt to manage quadlet or secrets with
podman 4.3 or earlier, the role will fail with an error. If you want the role to
be skipped instead, use `podman_fail_if_too_old: false`.
+### podman_prune_images
+
+Boolean - default is `false` - by default, the role will not prune unused images
+when removing quadlets and other resources. Set this to `true` to tell the role
+to remove unused images when cleaning up.
+
## Variables Exported by the Role
### podman_version
diff --git a/defaults/main.yml b/defaults/main.yml
index 92e4eb8..02453c9 100644
--- a/defaults/main.yml
+++ b/defaults/main.yml
@@ -109,3 +109,7 @@ podman_continue_if_pull_fails: false
# If true, if a pull attempt fails, it will be retried according
# to the default Ansible `until` behavior.
podman_pull_retry: false
+
+# Prune images when removing quadlets/kube specs -
+# this will remove all unused/unreferenced images
+podman_prune_images: false
diff --git a/tasks/cancel_linger.yml b/tasks/cancel_linger.yml
index ede71fe..f233fc4 100644
--- a/tasks/cancel_linger.yml
+++ b/tasks/cancel_linger.yml
@@ -49,7 +49,7 @@
when: __podman_xdg_stat.stat.exists
- name: Cancel linger if no more resources are in use
- command: loginctl disable-linger {{ __podman_linger_user }}
+ command: loginctl disable-linger {{ __podman_linger_user | quote }}
when:
- __podman_xdg_stat.stat.exists
- __podman_container_info.containers | length == 0
diff --git a/tasks/cleanup_quadlet_spec.yml b/tasks/cleanup_quadlet_spec.yml
index 8ea069b..df69243 100644
--- a/tasks/cleanup_quadlet_spec.yml
+++ b/tasks/cleanup_quadlet_spec.yml
@@ -33,39 +33,11 @@
- name: See if quadlet file exists
stat:
path: "{{ __podman_quadlet_file }}"
- register: __podman_network_stat
- when: __podman_quadlet_type == "network"
+ register: __podman_quadlet_stat
-- name: Get network quadlet network name
- when:
- - __podman_quadlet_type == "network"
- - __podman_network_stat.stat.exists
- block:
- - name: Create tempdir
- tempfile:
- prefix: podman_
- suffix: _lsr.ini
- state: directory
- register: __podman_network_tmpdir
- delegate_to: localhost
-
- - name: Fetch the network quadlet
- fetch:
- dest: "{{ __podman_network_tmpdir.path }}/network.ini"
- src: "{{ __podman_quadlet_file }}"
- flat: true
-
- - name: Get the network name
- set_fact:
- __podman_network_name: "{{
- lookup('ini', 'NetworkName section=Network file=' ~
- __podman_network_tmpdir.path ~ '/network.ini') }}"
- always:
- - name: Remove tempdir
- file:
- path: "{{ __podman_network_tmpdir.path }}"
- state: absent
- delegate_to: localhost
+- name: Parse quadlet file
+ include_tasks: parse_quadlet_file.yml
+ when: __podman_quadlet_stat.stat.exists
- name: Remove quadlet file
file:
@@ -73,40 +45,158 @@
state: absent
register: __podman_file_removed
+- name: Refresh systemd # noqa no-handler
+ systemd:
+ daemon_reload: true
+ scope: "{{ __podman_systemd_scope }}"
+ become: "{{ __podman_rootless | ternary(true, omit) }}"
+ become_user: "{{ __podman_rootless | ternary(__podman_user, omit) }}"
+ environment:
+ XDG_RUNTIME_DIR: "{{ __podman_xdg_runtime_dir }}"
+ when: __podman_file_removed is changed # noqa no-handler
+
+- name: Remove managed resource
+ command: >-
+ podman {{ 'rm' if __podman_quadlet_type == 'container'
+ else 'network rm' if __podman_quadlet_type == 'network'
+ else 'volume rm' if __podman_quadlet_type == 'volume' }}
+ {{ __podman_quadlet_resource_name | quote }}
+ register: __podman_rm
+ failed_when:
+ - __podman_rm is failed
+ - not __podman_rm.stderr is search(__str)
+ changed_when: __podman_rm.rc == 0
+ become: "{{ __podman_rootless | ternary(true, omit) }}"
+ become_user: "{{ __podman_rootless | ternary(__podman_user, omit) }}"
+ environment:
+ XDG_RUNTIME_DIR: "{{ __podman_xdg_runtime_dir }}"
+ vars:
+ __str: " found: no such "
+ __type_to_name: # map quadlet type to quadlet property name
+ container:
+ section: Container
+ name: ContainerName
+ network:
+ section: Network
+ name: NetworkName
+ volume:
+ section: Volume
+ name: VolumeName
+ __section: "{{ __type_to_name[__podman_quadlet_type]['section'] }}"
+ __name: "{{ __type_to_name[__podman_quadlet_type]['name'] }}"
+ __podman_quadlet_resource_name: "{{
+ __podman_quadlet_parsed[__section][__name]
+ if __section in __podman_quadlet_parsed
+ and __name in __podman_quadlet_parsed[__section]
+ else 'systemd-' ~ __podman_quadlet_name }}"
+ when:
+ - __podman_file_removed is changed # noqa no-handler
+ - __podman_quadlet_type in __type_to_name
+ - not __podman_rootless or __podman_xdg_stat.stat.exists
+ - __podman_service_name | length > 0
+ no_log: true
+
+- name: Remove volumes
+ command: podman volume rm {{ item | quote }}
+ loop: "{{ __volume_names }}"
+ when:
+ - __podman_file_removed is changed # noqa no-handler
+ - not __podman_rootless or __podman_xdg_stat.stat.exists
+ - __podman_service_name | length == 0
+ - __podman_quadlet_file.endswith(".yml") or
+ __podman_quadlet_file.endswith(".yaml")
+ changed_when: true
+ vars:
+ __volumes: "{{ __podman_quadlet_parsed |
+ selectattr('apiVersion', 'defined') | selectattr('spec', 'defined') |
+ map(attribute='spec') | selectattr('volumes', 'defined') |
+ map(attribute='volumes') | flatten }}"
+ __config_maps: "{{ __volumes | selectattr('configMap', 'defined') |
+ map(attribute='configMap') | selectattr('name', 'defined') |
+ map(attribute='name') | list }}"
+ __secrets: "{{ __volumes | selectattr('secret', 'defined') |
+ map(attribute='secret') | selectattr('secretName', 'defined') |
+ map(attribute='secretName') | list }}"
+ __pvcs: "{{ __volumes | selectattr('persistentVolumeClaim', 'defined') |
+ map(attribute='persistentVolumeClaim') | selectattr('claimName', 'defined') |
+ map(attribute='claimName') | list }}"
+ __volume_names: "{{ __config_maps + __secrets + __pvcs }}"
+ no_log: true
+
+- name: Clear parsed podman variable
+ set_fact:
+ __podman_quadlet_parsed: null
+
+- name: Prune images no longer in use
+ command: podman image prune --all -f
+ when:
+ - podman_prune_images | bool
+ - not __podman_rootless or __podman_xdg_stat.stat.exists
+ changed_when: true
+ become: "{{ __podman_rootless | ternary(true, omit) }}"
+ become_user: "{{ __podman_rootless | ternary(__podman_user, omit) }}"
+ environment:
+ XDG_RUNTIME_DIR: "{{ __podman_xdg_runtime_dir }}"
+
- name: Manage linger
include_tasks: manage_linger.yml
vars:
__podman_item_state: absent
-- name: Cleanup container resources
- when: __podman_file_removed is changed # noqa no-handler
+- name: Collect information for testing/debugging
+ when:
+ - __podman_test_debug | d(false)
+ - not __podman_rootless or __podman_xdg_stat.stat.exists
block:
- - name: Reload systemctl # noqa no-handler
- systemd:
- daemon_reload: true
- scope: "{{ __podman_systemd_scope }}"
+ - name: For testing and debugging - images
+ command: podman images -n
+ register: __podman_test_debug_images
+ changed_when: false
become: "{{ __podman_rootless | ternary(true, omit) }}"
become_user: "{{ __podman_rootless | ternary(__podman_user, omit) }}"
environment:
XDG_RUNTIME_DIR: "{{ __podman_xdg_runtime_dir }}"
- - name: Prune images no longer in use
- command: podman image prune -f
+ - name: For testing and debugging - volumes
+ command: podman volume ls -n
+ register: __podman_test_debug_volumes
+ changed_when: false
+ become: "{{ __podman_rootless | ternary(true, omit) }}"
+ become_user: "{{ __podman_rootless | ternary(__podman_user, omit) }}"
environment:
XDG_RUNTIME_DIR: "{{ __podman_xdg_runtime_dir }}"
+
+ - name: For testing and debugging - containers
+ command: podman ps --noheading
+ register: __podman_test_debug_containers
+ changed_when: false
become: "{{ __podman_rootless | ternary(true, omit) }}"
become_user: "{{ __podman_rootless | ternary(__podman_user, omit) }}"
- changed_when: true
+ environment:
+ XDG_RUNTIME_DIR: "{{ __podman_xdg_runtime_dir }}"
+
+ - name: For testing and debugging - networks
+ command: podman network ls -n -q
+ register: __podman_test_debug_networks
+ changed_when: false
+ become: "{{ __podman_rootless | ternary(true, omit) }}"
+ become_user: "{{ __podman_rootless | ternary(__podman_user, omit) }}"
+ environment:
+ XDG_RUNTIME_DIR: "{{ __podman_xdg_runtime_dir }}"
- - name: Remove network
- command: podman network rm {{ __name | quote }}
- changed_when: true
- when: __podman_quadlet_type == "network"
+ - name: For testing and debugging - secrets
+ command: podman secret ls -n -q
+ register: __podman_test_debug_secrets
+ changed_when: false
+ no_log: true
+ become: "{{ __podman_rootless | ternary(true, omit) }}"
+ become_user: "{{ __podman_rootless | ternary(__podman_user, omit) }}"
environment:
XDG_RUNTIME_DIR: "{{ __podman_xdg_runtime_dir }}"
+
+ - name: For testing and debugging - services
+ service_facts:
become: "{{ __podman_rootless | ternary(true, omit) }}"
become_user: "{{ __podman_rootless | ternary(__podman_user, omit) }}"
- vars:
- __name: "{{ __podman_network_name if
- __podman_network_name | d('') | length > 0
- else 'systemd-' ~ __podman_quadlet_name }}"
+ environment:
+ XDG_RUNTIME_DIR: "{{ __podman_xdg_runtime_dir }}"
diff --git a/tasks/handle_quadlet_spec.yml b/tasks/handle_quadlet_spec.yml
index ce6ef67..851c8a3 100644
--- a/tasks/handle_quadlet_spec.yml
+++ b/tasks/handle_quadlet_spec.yml
@@ -129,6 +129,8 @@
if __podman_quadlet_type in ['container', 'kube']
else __podman_quadlet_name ~ '-volume.service'
if __podman_quadlet_type in ['volume']
+ else __podman_quadlet_name ~ '-network.service'
+ if __podman_quadlet_type in ['network']
else none }}"
- name: Set per-container variables part 4
diff --git a/tasks/manage_linger.yml b/tasks/manage_linger.yml
index b506b70..be69490 100644
--- a/tasks/manage_linger.yml
+++ b/tasks/manage_linger.yml
@@ -10,7 +10,7 @@
- __podman_item_state | d('present') != 'absent'
block:
- name: Enable linger if needed
- command: loginctl enable-linger {{ __podman_user }}
+ command: loginctl enable-linger {{ __podman_user | quote }}
when: __podman_rootless | bool
args:
creates: /var/lib/systemd/linger/{{ __podman_user }}
diff --git a/tasks/parse_quadlet_file.yml b/tasks/parse_quadlet_file.yml
new file mode 100644
index 0000000..5f5297f
--- /dev/null
+++ b/tasks/parse_quadlet_file.yml
@@ -0,0 +1,57 @@
+---
+# Input:
+# * __podman_quadlet_file - path to quadlet file to parse
+# Output:
+# * __podman_quadlet_parsed - dict
+- name: Slurp quadlet file
+ slurp:
+ path: "{{ __podman_quadlet_file }}"
+ register: __podman_quadlet_raw
+ no_log: true
+
+- name: Parse quadlet file
+ set_fact:
+ __podman_quadlet_parsed: |-
+ {% set rv = {} %}
+ {% set section = ["DEFAULT"] %}
+ {% for line in __val %}
+ {% if line.startswith("[") %}
+ {% set val = line.replace("[", "").replace("]", "") %}
+ {% set _ = section.__setitem__(0, val) %}
+ {% else %}
+ {% set ary = line.split("=", 1) %}
+ {% set key = ary[0] %}
+ {% set val = ary[1] %}
+ {% if key in rv.get(section[0], {}) %}
+ {% set curval = rv[section[0]][key] %}
+ {% if curval is string %}
+ {% set newary = [curval, val] %}
+ {% set _ = rv[section[0]].__setitem__(key, newary) %}
+ {% else %}
+ {% set _ = rv[section[0]][key].append(val) %}
+ {% endif %}
+ {% else %}
+ {% set _ = rv.setdefault(section[0], {}).__setitem__(key, val) %}
+ {% endif %}
+ {% endif %}
+ {% endfor %}
+ {{ rv }}
+ vars:
+ __val: "{{ (__podman_quadlet_raw.content | b64decode).split('\n') |
+ select | reject('match', '#') | list }}"
+ when: __podman_service_name | length > 0
+ no_log: true
+
+- name: Parse quadlet yaml file
+ set_fact:
+ __podman_quadlet_parsed: "{{ __podman_quadlet_raw.content | b64decode |
+ from_yaml_all }}"
+ when:
+ - __podman_service_name | length == 0
+ - __podman_quadlet_file.endswith(".yml") or
+ __podman_quadlet_file.endswith(".yaml")
+ no_log: true
+
+- name: Reset raw variable
+ set_fact:
+ __podman_quadlet_raw: null
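Illustration, not part of the patch: for an INI-style quadlet unit such as the
quadlet-basic.network test file, the Jinja parser above yields a dict keyed by
section name, for example:

# given a quadlet file containing:
#   [Network]
#   Subnet=192.168.29.0/24
#   NetworkName=quadlet-basic-name
# __podman_quadlet_parsed becomes:
__podman_quadlet_parsed:
  Network:
    Subnet: 192.168.29.0/24
    NetworkName: quadlet-basic-name
# repeated keys (for example several Label= lines) accumulate into a list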
diff --git a/tests/files/quadlet-basic.network b/tests/files/quadlet-basic.network
index 7db6e0d..5b002ba 100644
--- a/tests/files/quadlet-basic.network
+++ b/tests/files/quadlet-basic.network
@@ -2,4 +2,4 @@
Subnet=192.168.29.0/24
Gateway=192.168.29.1
Label=app=wordpress
-NetworkName=quadlet-basic
+NetworkName=quadlet-basic-name
diff --git a/tests/templates/quadlet-demo-mysql.container.j2 b/tests/templates/quadlet-demo-mysql.container.j2
index c84f0e8..92097d4 100644
--- a/tests/templates/quadlet-demo-mysql.container.j2
+++ b/tests/templates/quadlet-demo-mysql.container.j2
@@ -9,7 +9,7 @@ Volume=/tmp/quadlet_demo:/var/lib/quadlet_demo:Z
Network=quadlet-demo.network
{% if podman_version is version("4.5", ">=") %}
Secret=mysql-root-password-container,type=env,target=MYSQL_ROOT_PASSWORD
-HealthCmd=/usr/bin/true
+HealthCmd=/bin/true
HealthOnFailure=kill
{% else %}
PodmanArgs=--secret=mysql-root-password-container,type=env,target=MYSQL_ROOT_PASSWORD
diff --git a/tests/tests_quadlet_basic.yml b/tests/tests_quadlet_basic.yml
index 2891b1a..0fdced4 100644
--- a/tests/tests_quadlet_basic.yml
+++ b/tests/tests_quadlet_basic.yml
@@ -21,7 +21,14 @@
__podman_quadlet_specs:
- file_src: files/quadlet-basic.network
state: started
+ - name: quadlet-basic-unused-network
+ type: network
+ Network: {}
- name: quadlet-basic-mysql
+ type: volume
+ Volume:
+ VolumeName: quadlet-basic-mysql-name
+ - name: quadlet-basic-unused-volume
type: volume
Volume: {}
- name: quadlet-basic-mysql
@@ -30,7 +37,7 @@
WantedBy: default.target
Container:
Image: "{{ mysql_image }}"
- ContainerName: quadlet-basic-mysql
+ ContainerName: quadlet-basic-mysql-name
Volume: quadlet-basic-mysql.volume:/var/lib/mysql
Network: quadlet-basic.network
# Once 4.5 is released change this line to use the quadlet Secret key
@@ -192,13 +199,14 @@
register: __stat
failed_when: not __stat.stat.exists
- # must clean up networks last - cannot remove a network
- # in use by a container - using reverse assumes the network
- # is defined first in the list
+ # must clean up in the reverse order of creating - and
+ # ensure networks are removed last
- name: Cleanup user
include_role:
name: linux-system-roles.podman
vars:
+ podman_prune_images: true
+ __podman_test_debug: true
podman_run_as_user: user_quadlet_basic
__absent: {"state":"absent"}
podman_secrets: "{{ __podman_secrets | map('combine', __absent) |
@@ -206,6 +214,22 @@
podman_quadlet_specs: "{{ __podman_quadlet_specs | reverse |
map('combine', __absent) | list }}"
+ - name: Ensure no resources
+ assert:
+ that:
+ - __podman_test_debug_images.stdout == ""
+ - __podman_test_debug_networks.stdout_lines |
+ reject("match", "^podman$") |
+ reject("match", "^podman-default-kube-network$") |
+ list | length == 0
+ - __podman_test_debug_volumes.stdout == ""
+ - __podman_test_debug_containers.stdout == ""
+ - __podman_test_debug_secrets.stdout == ""
+ - ansible_facts["services"] | dict2items |
+ rejectattr("value.status", "match", "not-found") |
+ selectattr("key", "match", "quadlet-demo") |
+ list | length == 0
+
- name: Ensure no linger
stat:
path: /var/lib/systemd/linger/user_quadlet_basic
@@ -230,12 +254,28 @@
- quadlet-basic-mysql.volume
- name: Check JSON
- command: podman exec quadlet-basic-mysql cat /tmp/test.json
+ command: podman exec quadlet-basic-mysql-name cat /tmp/test.json
register: __result
failed_when: __result.stdout != __json_secret_data
changed_when: false
rescue:
+ - name: Debug3
+ shell: |
+ set -x
+ set -o pipefail
+ exec 1>&2
+ #podman volume rm --all
+ #podman network prune -f
+ podman volume ls
+ podman network ls
+ podman secret ls
+ podman container ls
+ podman pod ls
+ podman images
+ systemctl list-units | grep quadlet
+ changed_when: false
+
- name: Check AVCs
command: grep type=AVC /var/log/audit/audit.log
changed_when: false
@@ -253,6 +293,7 @@
include_role:
name: linux-system-roles.podman
vars:
+ podman_prune_images: true
podman_run_as_user: user_quadlet_basic
__absent: {"state":"absent"}
podman_secrets: "{{ __podman_secrets |
@@ -270,12 +311,30 @@
include_role:
name: linux-system-roles.podman
vars:
+ podman_prune_images: true
+ __podman_test_debug: true
__absent: {"state":"absent"}
podman_secrets: "{{ __podman_secrets |
map('combine', __absent) | list }}"
podman_quadlet_specs: "{{ __podman_quadlet_specs | reverse |
map('combine', __absent) | list }}"
+ - name: Ensure no resources
+ assert:
+ that:
+ - __podman_test_debug_images.stdout == ""
+ - __podman_test_debug_networks.stdout_lines |
+ reject("match", "^podman$") |
+ reject("match", "^podman-default-kube-network$") |
+ list | length == 0
+ - __podman_test_debug_volumes.stdout == ""
+ - __podman_test_debug_containers.stdout == ""
+ - __podman_test_debug_secrets.stdout == ""
+ - ansible_facts["services"] | dict2items |
+ rejectattr("value.status", "match", "not-found") |
+ selectattr("key", "match", "quadlet-demo") |
+ list | length == 0
+
rescue:
- name: Dump journal
command: journalctl -ex
diff --git a/tests/tests_quadlet_demo.yml b/tests/tests_quadlet_demo.yml
index b6c27ef..1cc7e62 100644
--- a/tests/tests_quadlet_demo.yml
+++ b/tests/tests_quadlet_demo.yml
@@ -84,6 +84,11 @@
changed_when: false
failed_when: false
+ - name: Check volumes
+ command: podman volume ls
+ changed_when: false
+ failed_when: false
+
- name: Check pods
command: podman pod ps --ctr-ids --ctr-names --ctr-status
changed_when: false
@@ -150,6 +155,8 @@
include_role:
name: linux-system-roles.podman
vars:
+ podman_prune_images: true
+ __podman_test_debug: true
__absent: {"state":"absent"}
podman_quadlet_specs: "{{ __podman_quadlet_specs |
reverse | map('combine', __absent) | list }}"
@@ -161,7 +168,33 @@
- name: envoy-certificates
state: absent
+ - name: Ensure no resources
+ assert:
+ that:
+ - __podman_test_debug_images.stdout == ""
+ - __podman_test_debug_networks.stdout_lines |
+ reject("match", "^podman$") |
+ reject("match", "^podman-default-kube-network$") |
+ list | length == 0
+ - __podman_test_debug_volumes.stdout == ""
+ - __podman_test_debug_containers.stdout == ""
+ - __podman_test_debug_secrets.stdout == ""
+ - ansible_facts["services"] | dict2items |
+ rejectattr("value.status", "match", "not-found") |
+ selectattr("key", "match", "quadlet-demo") |
+ list | length == 0
+
rescue:
+ - name: Debug
+ shell: |
+ exec 1>&2
+ set -x
+ set -o pipefail
+ systemctl list-units --plain -l --all | grep quadlet || :
+ systemctl list-unit-files --all | grep quadlet || :
+ systemctl list-units --plain --failed -l --all | grep quadlet || :
+ changed_when: false
+
- name: Get journald
command: journalctl -ex
changed_when: false
--
2.46.0

0112-fix-Ensure-user-linger-is-closed-on-EL10.patch

@@ -0,0 +1,72 @@
From 7473a31e3a0201131e42281bce9bbf9c88ac04ca Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Wed, 31 Jul 2024 18:52:57 -0600
Subject: [PATCH 112/115] fix: Ensure user linger is closed on EL10
Cause: There is an issue with loginctl on EL10 - doing cancel-linger
will leave the user session in the closing state.
Consequence: User sessions accumulate, and the test user cannot
be removed.
Fix: As suggested in the systemd issue, the fix is to shut down and
restart systemd-logind in this situation.
Result: User cancel-linger works as expected.
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
(cherry picked from commit 0ceea96a12bf0b462ca62d012d86cdcbd4f20eaa)
---
tasks/cancel_linger.yml | 37 +++++++++++++++++++++++++++++++++++++
1 file changed, 37 insertions(+)
diff --git a/tasks/cancel_linger.yml b/tasks/cancel_linger.yml
index f233fc4..00d38c2 100644
--- a/tasks/cancel_linger.yml
+++ b/tasks/cancel_linger.yml
@@ -58,5 +58,42 @@
list | length == 0
- __podman_linger_secrets.stdout == ""
changed_when: true
+ register: __cancel_linger
args:
removes: /var/lib/systemd/linger/{{ __podman_linger_user }}
+
+- name: Wait for user session to exit closing state # noqa no-handler
+ command: loginctl show-user -P State {{ __podman_linger_user | quote }}
+ register: __user_state
+ changed_when: false
+ until: __user_state.stdout != "closing"
+ when: __cancel_linger is changed
+ ignore_errors: true
+
+# see https://github.com/systemd/systemd/issues/26744#issuecomment-2261509208
+- name: Handle user stuck in closing state
+ when:
+ - __cancel_linger is changed
+ - __user_state is failed
+ block:
+ - name: Stop logind
+ service:
+ name: systemd-logind
+ state: stopped
+
+ - name: Wait for user session to exit closing state
+ command: loginctl show-user -P State {{ __podman_linger_user | quote }}
+ changed_when: false
+ register: __user_state
+ until: __user_state.stderr is match(__pat) or
+ __user_state.stdout != "closing"
+ failed_when:
+ - not __user_state.stderr is match(__pat)
+ - __user_state.stdout == "closing"
+ vars:
+ __pat: "Failed to get user: User ID .* is not logged in or lingering"
+
+ - name: Restart logind
+ service:
+ name: systemd-logind
+ state: started
--
2.46.0

0113-test-skip-quadlet-tests-on-non-x86_64.patch

@@ -0,0 +1,70 @@
From acc8e5458170cd653681beee8cec162e1d3e4f1f Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Mon, 19 Aug 2024 10:11:05 -0600
Subject: [PATCH 113/115] test: skip quadlet tests on non-x86_64
The images we currently use for quadlet testing are only available
on x86_64.
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
(cherry picked from commit 4a2ab77cafd9ae330f9260a5180680036707bf92)
---
tests/tests_quadlet_basic.yml | 11 +++++++++++
tests/tests_quadlet_demo.yml | 12 ++++++++++++
2 files changed, 23 insertions(+)
diff --git a/tests/tests_quadlet_basic.yml b/tests/tests_quadlet_basic.yml
index 0fdced4..5a06864 100644
--- a/tests/tests_quadlet_basic.yml
+++ b/tests/tests_quadlet_basic.yml
@@ -48,6 +48,17 @@
- FOO=/bin/busybox-extras
- BAZ=test
tasks:
+ - name: Test is only supported on x86_64
+ debug:
+ msg: >
+ This test is only supported on x86_64 because the test images used are only
+ available on that platform.
+ when: ansible_facts["architecture"] != "x86_64"
+
+ - name: End test
+ meta: end_play
+ when: ansible_facts["architecture"] != "x86_64"
+
- name: Run test
block:
- name: See if not pulling images fails
diff --git a/tests/tests_quadlet_demo.yml b/tests/tests_quadlet_demo.yml
index 1cc7e62..f08d482 100644
--- a/tests/tests_quadlet_demo.yml
+++ b/tests/tests_quadlet_demo.yml
@@ -2,6 +2,7 @@
---
- name: Deploy the quadlet demo app
hosts: all
+ gather_facts: true
vars_files:
- vars/test_vars.yml
vars:
@@ -28,6 +29,17 @@
"/tmp/quadlet_demo":
mode: "0777"
tasks:
+ - name: Test is only supported on x86_64
+ debug:
+ msg: >
+ This test is only supported on x86_64 because the test images used are only
+ available on that platform.
+ when: ansible_facts["architecture"] != "x86_64"
+
+ - name: End test
+ meta: end_play
+ when: ansible_facts["architecture"] != "x86_64"
+
- name: Run tests
block:
- name: Generate certificates
--
2.46.0

0114-fix-subgid-maps-user-to-gids-not-group-to-gids.patch

@@ -0,0 +1,208 @@
From 5367219c4d12b988b531b00a90625cb6747baf13 Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Thu, 29 Aug 2024 08:47:03 -0600
Subject: [PATCH 114/115] fix: subgid maps user to gids, not group to gids
Cause: The podman role was looking up groups in the subgid values, not
users.
Consequence: If the user name was different from the group name, the role
would fail to look up the subgid values.
Fix: Ensure that the user is used to look up the subgid values.
Result: The subgid values are looked up correctly.
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
(cherry picked from commit ad01b0091707fc4eae6f98f694f1a213fb9f8521)
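A sketch of the exported facts after this change, with hypothetical ID values;
both maps are now keyed by the user name:

# for a user "dbuser" whose /etc/subuid and /etc/subgid entries are
# "dbuser:100000:65536", the role exports:
podman_subuid_info:
  dbuser:
    start: 100000
    range: 65536
podman_subgid_info:
  dbuser:          # keyed by the user name, no longer by the group name
    start: 100000
    range: 65536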
---
README.md | 45 ++++++++++++++++++-------------------
tasks/handle_user_group.yml | 32 ++++++++------------------
2 files changed, 31 insertions(+), 46 deletions(-)
diff --git a/README.md b/README.md
index 8b6496e..6222098 100644
--- a/README.md
+++ b/README.md
@@ -35,12 +35,11 @@ restrictions:
* They must be already present on the system - the role will not create the
users or groups - the role will exit with an error if a non-existent user or
group is specified
-* They must already exist in `/etc/subuid` and `/etc/subgid`, or are otherwise
- provided by your identity management system - the role will exit with an error
- if a specified user is not present in `/etc/subuid`, or if a specified group
- is not in `/etc/subgid`. The role uses `getsubids` to check the user and
- group if available, or checks the files directly if `getsubids` is not
- available.
+* The user must already exist in `/etc/subuid` and `/etc/subgid`, or otherwise
+ be provided by your identity management system - the role will exit with an
+ error if a specified user is not present in `/etc/subuid` and `/etc/subgid`.
+ The role uses `getsubids` to check the user and group if available, or checks
+ the files directly if `getsubids` is not available.
## Role Variables
@@ -56,14 +55,15 @@ except for the following:
* `started` - Create the pods and systemd services, and start them running
* `created` - Create the pods and systemd services, but do not start them
* `absent` - Remove the pods and systemd services
-* `run_as_user` - Use this to specify a per-pod user. If you do not
- specify this, then the global default `podman_run_as_user` value will be used.
+* `run_as_user` - Use this to specify a per-pod user. If you do not specify
+ this, then the global default `podman_run_as_user` value will be used.
Otherwise, `root` will be used. NOTE: The user must already exist - the role
- will not create one. The user must be present in `/etc/subuid`.
-* `run_as_group` - Use this to specify a per-pod group. If you do not
- specify this, then the global default `podman_run_as_group` value will be
- used. Otherwise, `root` will be used. NOTE: The group must already exist -
- the role will not create one. The group must be present in `/etc/subgid`.
+ will not create one. The user must be present in `/etc/subuid` and
+ `/etc/subgid`.
+* `run_as_group` - Use this to specify a per-pod group. If you do not specify
+ this, then the global default `podman_run_as_group` value will be used.
+ Otherwise, `root` will be used. NOTE: The group must already exist - the role
+ will not create one.
* `systemd_unit_scope` - The scope to use for the systemd unit. If you do not
specify this, then the global default `podman_systemd_unit_scope` will be
used. Otherwise, the scope will be `system` for root containers, and `user`
@@ -278,14 +278,13 @@ podman_selinux_ports:
This is the name of the user to use for all rootless containers. You can also
specify per-container username with `run_as_user` in `podman_kube_specs`. NOTE:
The user must already exist - the role will not create one. The user must be
-present in `/etc/subuid`.
+present in `/etc/subuid` and `/etc/subgid`.
### podman_run_as_group
This is the name of the group to use for all rootless containers. You can also
specify per-container group name with `run_as_group` in `podman_kube_specs`.
-NOTE: The group must already exist - the role will not create one. The group must
-be present in `/etc/subgid`.
+NOTE: The group must already exist - the role will not create one.
### podman_systemd_unit_scope
@@ -426,24 +425,24 @@ PodmanArgs=--secret=my-app-pwd,type=env,target=MYAPP_PASSWORD
### podman_subuid_info, podman_subgid_info
-The role needs to ensure any users and groups are present in the subuid and
+The role needs to ensure any users are present in the subuid and
subgid information. Once it extracts this data, it will be available in
`podman_subuid_info` and `podman_subgid_info`. These are dicts. The key is the
-user or group name, and the value is a `dict` with two fields:
+user name, and the value is a `dict` with two fields:
-* `start` - the start of the id range for that user or group, as an `int`
-* `range` - the id range for that user or group, as an `int`
+* `start` - the start of the id range for that user, as an `int`
+* `range` - the id range for that user, as an `int`
```yaml
podman_host_directories:
"/var/lib/db":
mode: "0777"
owner: "{{ 1001 + podman_subuid_info['dbuser']['start'] - 1 }}"
- group: "{{ 1001 + podman_subgid_info['dbgroup']['start'] - 1 }}"
+ group: "{{ 2001 + podman_subgid_info['dbuser']['start'] - 1 }}"
```
-Where `1001` is the uid for user `dbuser`, and `1001` is the gid for group
-`dbgroup`.
+Where `1001` is the uid for user `dbuser`, and `2001` is the gid for the
+group you want to use.
**NOTE**: depending on the namespace used by your containers, you might not be
able to use the subuid and subgid information, which comes from `getsubids` if
diff --git a/tasks/handle_user_group.yml b/tasks/handle_user_group.yml
index 0b98d99..2e19cdd 100644
--- a/tasks/handle_user_group.yml
+++ b/tasks/handle_user_group.yml
@@ -25,19 +25,6 @@
{{ ansible_facts["getent_passwd"][__podman_user][2] }}
{%- endif -%}
-- name: Get group information
- getent:
- database: group
- key: "{{ __podman_group }}"
- fail_key: false
- when: "'getent_group' not in ansible_facts or
- __podman_group not in ansible_facts['getent_group']"
-
-- name: Set group name
- set_fact:
- __podman_group_name: "{{ ansible_facts['getent_group'].keys() |
- list | first }}"
-
- name: See if getsubids exists
stat:
path: /usr/bin/getsubids
@@ -49,13 +36,13 @@
- __podman_user not in ["root", "0"]
- __podman_stat_getsubids.stat.exists
block:
- - name: Check user with getsubids
+ - name: Check with getsubids for user subuids
command: getsubids {{ __podman_user | quote }}
changed_when: false
register: __podman_register_subuids
- - name: Check group with getsubids
- command: getsubids -g {{ __podman_group_name | quote }}
+ - name: Check with getsubids for user subgids
+ command: getsubids -g {{ __podman_user | quote }}
changed_when: false
register: __podman_register_subgids
@@ -66,7 +53,7 @@
{'start': __subuid_data[2] | int, 'range': __subuid_data[3] | int}})
if __subuid_data | length > 0 else podman_subuid_info | d({}) }}"
podman_subgid_info: "{{ podman_subgid_info | d({}) |
- combine({__podman_group_name:
+ combine({__podman_user:
{'start': __subgid_data[2] | int, 'range': __subgid_data[3] | int}})
if __subgid_data | length > 0 else podman_subgid_info | d({}) }}"
vars:
@@ -77,7 +64,6 @@
when:
- not __podman_stat_getsubids.stat.exists
- __podman_user not in ["root", "0"]
- - __podman_group not in ["root", "0"]
block:
- name: Get subuid file
slurp:
@@ -96,7 +82,7 @@
{'start': __subuid_data[1] | int, 'range': __subuid_data[2] | int}})
if __subuid_data else podman_subuid_info | d({}) }}"
podman_subgid_info: "{{ podman_subgid_info | d({}) |
- combine({__podman_group_name:
+ combine({__podman_user:
{'start': __subgid_data[1] | int, 'range': __subgid_data[2] | int}})
if __subgid_data else podman_subgid_info | d({}) }}"
vars:
@@ -107,7 +93,7 @@
if __subuid_match_line else none }}"
__subgid_match_line: "{{
(__podman_register_subgids.content | b64decode).split('\n') | list |
- select('match', '^' ~ __podman_group_name ~ ':') | list }}"
+ select('match', '^' ~ __podman_user ~ ':') | list }}"
__subgid_data: "{{ __subgid_match_line[0].split(':') | list
if __subgid_match_line else none }}"
@@ -118,9 +104,9 @@
/etc/subuid file - cannot continue
when: not __podman_user in podman_subuid_info
- - name: Fail if group not in subgid file
+ - name: Fail if user not in subgid file
fail:
msg: >
- The given podman group [{{ __podman_group_name }}] is not in the
+ The given podman user [{{ __podman_user }}] is not in the
/etc/subgid file - cannot continue
- when: not __podman_group_name in podman_subuid_info
+ when: not __podman_user in podman_subgid_info
--
2.46.0

0115-fix-Cannot-remove-volumes-from-kube-yaml-need-to-con.patch

@@ -0,0 +1,38 @@
From c78741d6d5a782f599ee42c6deb89b80426e403d Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmeggins@redhat.com>
Date: Fri, 6 Sep 2024 14:15:20 -0600
Subject: [PATCH 115/115] fix: Cannot remove volumes from kube yaml - need to
convert yaml to list
Cause: __podman_quadlet_parsed was not converted to a list.
Consequence: On older versions of Ansible, the volumes from the kube yaml
were not removed when removing quadlets.
Fix: Convert __podman_quadlet_parsed to a list after parsing.
Result: Older versions of Ansible can remove volumes specified
in kube yaml files.
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
(cherry picked from commit 423c98342c82893aca891d49c63713193dc96222)
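A minimal sketch of the idiom outside the role, with a hypothetical file name;
the list filter materializes the generator that from_yaml_all can return on
older Ansible, so the parsed documents survive repeated filtering:

- name: Parse a multi-document kube yaml into a real list
  set_fact:
    __parsed_docs: "{{ lookup('file', 'kube.yml') | from_yaml_all | list }}"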
---
tasks/parse_quadlet_file.yml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tasks/parse_quadlet_file.yml b/tasks/parse_quadlet_file.yml
index 5f5297f..2d58c4e 100644
--- a/tasks/parse_quadlet_file.yml
+++ b/tasks/parse_quadlet_file.yml
@@ -45,7 +45,7 @@
- name: Parse quadlet yaml file
set_fact:
__podman_quadlet_parsed: "{{ __podman_quadlet_raw.content | b64decode |
- from_yaml_all }}"
+ from_yaml_all | list }}"
when:
- __podman_service_name | length == 0
- __podman_quadlet_file.endswith(".yml") or
--
2.46.0


@@ -1,5 +1,34 @@
Changelog
=========
[1.23.0-3] - 2024-09-11
----------------------------
### New Features
- [bootloader - bootloader role tests do not work on ostree [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58917)
- [logging - RFE - system-roles - logging: Add truncate options for local file inputs [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58485)
- [logging - redhat.rhel_system_roles.logging role fails to process logging_outputs: of type: "custom" [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58481)
- [logging - [RFE] Add the umask settings or enable a variable in linux-system-roles.logging [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58477)
- [nbde_client - feat: Allow initrd configuration to be skipped [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58519)
### Bug Fixes
- [ - package rhel-system-roles.noarch does not provide docs for ansible-doc [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58465)
- [ad_integration - fix: Sets domain name lower case in realmd.conf section header [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58494)
- [bootloader - fix: Set user.cfg path to /boot/grub2/ on EL 9 UEFI [rhel-8]](https://issues.redhat.com/browse/RHEL-45711)
- [cockpit - cockpit install all wildcard match does not work in newer el9 [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58515)
- [logging - Setup imuxsock using rhel-system-roles.logging causing an error EL8](https://issues.redhat.com/browse/RHEL-37550)
- [podman - fix: proper cleanup for networks; ensure cleanup of resources [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58525)
- [podman - fix: grab name of network to remove from quadlet file [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58511)
- [podman - Create podman secret when skip_existing=True and it does not exist [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58507)
- [podman - fix: do not use become for changing hostdir ownership, and expose subuid/subgid info [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58503)
- [podman - fix: use correct user for cancel linger file name [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58498)
- [podman - redhat.rhel_system_roles.podman fails to configure and run containers with podman rootless using different username and groupname. [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58460)
- [sshd - second SSHD service broken [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58473)
- [storage - rhel-system-role.storage is not idempotent [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58469)
- [timesync - System Roles: No module documentation [rhel-8.10.z]](https://issues.redhat.com/browse/RHEL-58489)
[1.23.0] - 2024-01-15
----------------------------


@@ -1,10 +1,10 @@
Source801: https://galaxy.ansible.com/download/ansible-posix-1.5.4.tar.gz
Source901: https://galaxy.ansible.com/download/community-general-8.3.0.tar.gz
Source902: https://galaxy.ansible.com/download/containers-podman-1.12.0.tar.gz
Source902: https://galaxy.ansible.com/download/containers-podman-1.15.4.tar.gz
Provides: bundled(ansible-collection(ansible.posix)) = 1.5.4
Provides: bundled(ansible-collection(community.general)) = 8.3.0
Provides: bundled(ansible-collection(containers.podman)) = 1.12.0
Provides: bundled(ansible-collection(containers.podman)) = 1.15.4
Source996: CHANGELOG.rst
Source998: collection_readme.sh

linux-system-roles.spec

@@ -19,7 +19,7 @@ Name: linux-system-roles
Url: https://github.com/linux-system-roles
Summary: Set of interfaces for unified system management
Version: 1.23.0
Release: 2.21%{?dist}
Release: 3%{?dist}
License: GPLv3+ and MIT and BSD and Python
%global _pkglicensedir %{_licensedir}/%{name}
@@ -92,7 +92,7 @@ Source: %{url}/auto-maintenance/archive/%{mainid}/auto-maintenance-%{mainid}.tar
%deftag 2 1.7.4
%global rolename3 timesync
%deftag 3 1.8.2
%deftag 3 1.9.0
%global rolename4 kdump
%deftag 4 1.4.4
@@ -113,13 +113,13 @@ Source: %{url}/auto-maintenance/archive/%{mainid}/auto-maintenance-%{mainid}.tar
%deftag 9 1.2.2
%global rolename10 logging
%deftag 10 1.12.4
%deftag 10 1.13.4
%global rolename11 nbde_server
%deftag 11 1.4.3
%global rolename12 nbde_client
%deftag 12 1.2.17
%deftag 12 1.3.0
%global rolename13 certificate
%deftag 13 1.3.3
@@ -130,7 +130,7 @@ Source: %{url}/auto-maintenance/archive/%{mainid}/auto-maintenance-%{mainid}.tar
%global forgeorg15 https://github.com/willshersystems
%global repo15 ansible-sshd
%global rolename15 sshd
%deftag 15 v0.23.2
%deftag 15 v0.25.0
%global rolename16 ssh
%deftag 16 1.3.2
@@ -145,13 +145,13 @@ Source: %{url}/auto-maintenance/archive/%{mainid}/auto-maintenance-%{mainid}.tar
%deftag 19 1.7.4
%global rolename20 cockpit
%deftag 20 1.5.5
%deftag 20 1.5.10
%global rolename21 podman
%deftag 21 1.4.7
%global rolename22 ad_integration
%deftag 22 1.4.2
%deftag 22 1.4.6
%global rolename23 rhc
%deftag 23 1.6.0
@@ -172,7 +172,7 @@ Source: %{url}/auto-maintenance/archive/%{mainid}/auto-maintenance-%{mainid}.tar
%deftag 28 1.1.1
%global rolename29 bootloader
%deftag 29 1.0.3
%deftag 29 1.0.7
%global rolename30 snapshot
%deftag 30 1.3.1
@@ -209,6 +209,35 @@ Source30: %{archiveurl30}
Source30: %{archiveurl30}
# END AUTOGENERATED SOURCES
# storage role patches
Patch1: 0001-test-fix-sector-based-disk-size-calculation-from-ans.patch
Patch2: 0002-fix-Fix-recreate-check-for-formats-without-labelling.patch
Patch3: 0003-fix-Fix-incorrent-populate-call.patch
Patch4: 0004-tests-Add-a-new-match_sector_size-argument-to-find_u.patch
Patch5: 0005-tests-Require-same-sector-size-disks-for-LVM-tests.patch
Patch6: 0006-fix-Fix-possibly-used-before-assignment-pylint-issue.patch
Patch7: 0007-test-lsblk-can-return-LOG_SEC-or-LOG-SEC.patch
Patch8: 0008-test-lvm-pool-members-test-fix.patch
Patch9: 0009-fix-Fix-expected-error-message-in-tests_misc.yml.patch
Patch10: 0010-tests-Use-blockdev_info-to-check-volume-mount-points.patch
# podman role patches
Patch101: 0101-fix-Add-support-for-check-flag.patch
Patch102: 0102-fix-use-correct-user-for-cancel-linger-file-name.patch
Patch103: 0103-test-do-not-check-for-root-linger.patch
Patch104: 0104-fix-do-not-use-become-for-changing-hostdir-ownership.patch
Patch105: 0105-chore-change-no_log-false-to-true-fix-comment.patch
Patch106: 0106-fix-make-kube-cleanup-idempotent.patch
Patch107: 0107-chore-use-none-in-jinja-code-not-null.patch
Patch108: 0108-uid-1001-conflicts-on-some-test-systems.patch
Patch109: 0109-fix-ansible-lint-octal-value-issues.patch
Patch110: 0110-fix-grab-name-of-network-to-remove-from-quadlet-file.patch
Patch111: 0111-fix-proper-cleanup-for-networks-ensure-cleanup-of-re.patch
Patch112: 0112-fix-Ensure-user-linger-is-closed-on-EL10.patch
Patch113: 0113-test-skip-quadlet-tests-on-non-x86_64.patch
Patch114: 0114-fix-subgid-maps-user-to-gids-not-group-to-gids.patch
Patch115: 0115-fix-Cannot-remove-volumes-from-kube-yaml-need-to-con.patch
# Includes with definitions/tags that differ between RHEL and Fedora
Source1001: extrasources.inc
@@ -336,6 +365,39 @@ if [ "$rolesdir" != "$realrolesdir" ]; then
fi
cd ..
# storage role patches
cd %{rolename6}
%patch1 -p1
%patch2 -p1
%patch3 -p1
%patch4 -p1
%patch5 -p1
%patch6 -p1
%patch7 -p1
%patch8 -p1
%patch9 -p1
%patch10 -p1
cd ..
# podman role patches
cd %{rolename21}
%patch101 -p1
%patch102 -p1
%patch103 -p1
%patch104 -p1
%patch105 -p1
%patch106 -p1
%patch107 -p1
%patch108 -p1
%patch109 -p1
%patch110 -p1
%patch111 -p1
%patch112 -p1
%patch113 -p1
%patch114 -p1
%patch115 -p1
cd ..
# vendoring build steps, if any
%include %{SOURCE1004}
@@ -677,6 +739,27 @@ find %{buildroot}%{ansible_roles_dir} -mindepth 1 -maxdepth 1 | \
%endif
%changelog
* Wed Sep 11 2024 Rich Megginson <rmeggins@redhat.com> - 1.23.0-3
- Resolves: RHEL-58465 : - package rhel-system-roles.noarch does not provide docs for ansible-doc [rhel-8.10.z]
- Resolves: RHEL-58494 : ad_integration - fix: Sets domain name lower case in realmd.conf section header [rhel-8.10.z]
- Resolves: RHEL-58917 : bootloader - bootloader role tests do not work on ostree [rhel-8.10.z]
- Resolves: RHEL-45711 : bootloader - fix: Set user.cfg path to /boot/grub2/ on EL 9 UEFI [rhel-8]
- Resolves: RHEL-58515 : cockpit - cockpit install all wildcard match does not work in newer el9 [rhel-8.10.z]
- Resolves: RHEL-58485 : logging - RFE - system-roles - logging: Add truncate options for local file inputs [rhel-8.10.z]
- Resolves: RHEL-58481 : logging - redhat.rhel_system_roles.logging role fails to process logging_outputs: of type: "custom" [rhel-8.10.z]
- Resolves: RHEL-58477 : logging - [RFE] Add the umask settings or enable a variable in linux-system-roles.logging [rhel-8.10.z]
- Resolves: RHEL-37550 : logging - Setup imuxsock using rhel-system-roles.logging causing an error EL8
- Resolves: RHEL-58519 : nbde_client - feat: Allow initrd configuration to be skipped [rhel-8.10.z]
- Resolves: RHEL-58525 : podman - fix: proper cleanup for networks; ensure cleanup of resources [rhel-8.10.z]
- Resolves: RHEL-58511 : podman - fix: grab name of network to remove from quadlet file [rhel-8.10.z]
- Resolves: RHEL-58507 : podman - Create podman secret when skip_existing=True and it does not exist [rhel-8.10.z]
- Resolves: RHEL-58503 : podman - fix: do not use become for changing hostdir ownership, and expose subuid/subgid info [rhel-8.10.z]
- Resolves: RHEL-58498 : podman - fix: use correct user for cancel linger file name [rhel-8.10.z]
- Resolves: RHEL-58460 : podman - redhat.rhel_system_roles.podman fails to configure and run containers with podman rootless using different username and groupname. [rhel-8.10.z]
- Resolves: RHEL-58473 : sshd - second SSHD service broken [rhel-8.10.z]
- Resolves: RHEL-58469 : storage - rhel-system-role.storage is not idempotent [rhel-8.10.z]
- Resolves: RHEL-58489 : timesync - System Roles: No module documentation [rhel-8.10.z]
* Mon Feb 26 2024 Rich Megginson <rmeggins@redhat.com> - 1.23.0-2.21
- Resolves: RHEL-3241 : bootloader - Create bootloader role (MVP)
fix issue with path on arches other than x86_64, and EFI systems

sources

@@ -1,12 +1,12 @@
SHA512 (ad_integration-1.4.2.tar.gz) = 1772eb6a61a6246f69681c8539178b1aade85ca1bdb8dcb3d6ceb10de758c128e31a62f7e20948814c16928b00df9de412c56f487ec4929c0f9c831afae4cc27
SHA512 (ad_integration-1.4.6.tar.gz) = 3f097b0a4b24488ee5eee64dfd88dbff48deb871dbb9734601331737fe67b147a17fd771c21fd43c539079b5bb3a7c8246063d8f7084338ea640bcd826ffd634
SHA512 (ansible-posix-1.5.4.tar.gz) = 63321c2b439bb2c707c5bea2fba61eaefecb0ce1c832c4cfc8ee8bb89448c8af10e447bf580e8ae6d325c0b5891b609683ff2ba46b78040e2c4d3d8b6bdcd724
SHA512 (ansible-sshd-v0.23.2.tar.gz) = 49b5919fe50486574889b300b30804055a3be91969e1e7065ca97fbeac8ed0f5dd9a2747d194e5aa6a8e4c01f9ae28a73f6d68dab48dae6eccb78f4ebf71b056
SHA512 (ansible-sshd-v0.25.0.tar.gz) = bf789bd8b1ff34208220ef6b2865286c7a7fdfd0f7e11586cb69c328108348b4c3c91d759d0d3a268bc8ddbb5fd9797ab3b4cf86d6ca8f8cd32106b7890ae962
SHA512 (auto-maintenance-11ad785c9bb72611244e7909450ca4247e12db4d.tar.gz) = a8347b15e420ba77e1086193154dec9e0305474050493cb0be156a7afeb00d0bc02617cadcc77ac48d9cd9d749e1bd4f788d4def01b51a3356a4bd74c65912ea
SHA512 (bootloader-1.0.3.tar.gz) = 5838e37e9e9f55f757d12ba0ae75b4a707b2f0731cd9d866fbde8fad8ae25c5fdb05c83bada0919cb39f7daca9f56fc06740c7da5fdce017da32ae22a0619a10
SHA512 (bootloader-1.0.7.tar.gz) = d3d08c64596f63fb9336673a4b16e6ecf8a283ed53d945e0673d166217315bc6cf367d3c559eb7624ac57b480c106bbbb114e5cc17a413d3e36c4c0a813b05cc
SHA512 (certificate-1.3.3.tar.gz) = f459cf3f388b7e8d686ad1502c976900b3fe95e8a63f16c1ec2b739fad86de1adf4cabe41e1b8d0173f408db6465aa08ab5bd4d13f00e6e4c57520adc5b44c3b
SHA512 (cockpit-1.5.5.tar.gz) = 728c858de1133dc8dd62337cb58d06794d57a571e41d30c172fe687942cb8be45ae3d15b71816eb5f3d9462f451b94c47c946c7bb5e4f9a541d3b161d4b1c497
SHA512 (cockpit-1.5.10.tar.gz) = 8940d819dfebdc1dc0b7ae163306cc8d1da9db43a96754db8bdfdb51a7801e9da523b94684b7f4b0d8da7d3e0103f2ee718d82d02b033c67551e418d031c60fd
SHA512 (community-general-8.3.0.tar.gz) = 674123b7e72ecfe24d82980dae075123999202e7e5de759506a8f9b5df2b072afefbf199516a64dd217d8e6d45ab2952838cf5ad534df20ee3f58e06c8837736
SHA512 (containers-podman-1.12.0.tar.gz) = c568b6e5ed36c3f46aba854489317aa992872b32a8b27e06e1561fd0362b455c063e923bddc8201788834043a90c20568ce8f69f7b9e6f7c864864e11f1ea733
SHA512 (containers-podman-1.15.4.tar.gz) = 9476f4455be26c926b8e74889b517bdd4f99ccb2bc80f87f1732ba163286ef70e38fc729865674490f157b180f21cb53ef5e056d25d12901fadf74b69be40afc
SHA512 (crypto_policies-1.3.2.tar.gz) = 8007402391778b7c8625af8fd6b96d77ec930180746a87c1c2ade0f4f7a17deaac66e28d94f18e25b0fea68274ad88e0f1ce9eec40fd3abc10d8a4891dd04b53
SHA512 (fapolicyd-1.1.1.tar.gz) = 462877da988f3d9646c33757cd8f1c6ab6a2df145cf9021edd5fbc9d3fea8d0aa3c515b4d1cf20724bfefa340919f49ce5a90648fa4147b60cec4483a6bbfe83
SHA512 (firewall-1.7.4.tar.gz) = 1298c7a4bd23e06318bf6ca5bca12281898620f7d8f645a4d21f556f68cd0a03dd99e7118cf9e27f2f5a86bc9a8c621bcd5433eabb73976e65f80ab8050452b7
@@ -15,9 +15,9 @@ SHA512 (journald-1.2.3.tar.gz) = f51cbbe1f03ce25d8a1c6b492433727a3407d805a3ccb55
SHA512 (kdump-1.4.4.tar.gz) = 1fe5f6c5f3168adaec135f0c8c3bc109d65fca48c96948704ef570692b0b1e03afd73f53964103dbd2c156eddcc01833da7330245810ac9dc017202974191f2e
SHA512 (kernel_settings-1.2.2.tar.gz) = 548fd51b89436be2ec0b45f81a738e949506ad1f6ce5ce72098c5aacd4df1df3cd2d00296cee112792b2833713f84b755c5ce7b0cb251fd2bb543a731ab895e1
SHA512 (keylime_server-1.1.2.tar.gz) = c0bdd3e570edcd1ba4dbbc61ae35015c82367f73f254acd3b06cb42fb88864042b383c8185f7611bcdb7eb219f484981fc5cb6c098ac2f2126a6bce1204401eb
SHA512 (logging-1.12.4.tar.gz) = fa9fda7c857a6405187d2f434e7626b7d61a98e57f454166b480c5d87a5b61593e47825be68b9f9d005d02a31c3ebb128a6a3c9bff02ee7d6f7c27a2e9d2f2e3
SHA512 (logging-1.13.4.tar.gz) = bb85e02d7b44fa074f9c3b1d2da7e732ed00f09aa299de653f14451e17316a231e084c5092786c5b2d7b981539adff6e1ed114c6f93da668c0e3c52f0d0e07c3
SHA512 (metrics-1.10.1.tar.gz) = 648c1f7ff8b35048c7ef272df5db8240f0b62fb4de998fa45ad5c79ecd9337f9f95c280d6d37e390749f398c1f4f04f634203be6c5d387e1154341ec8ad1aa4c
SHA512 (nbde_client-1.2.17.tar.gz) = f11189c4cf42478d166c8528b75ea02d32bb412e16fa6c47e205905b731e02c1791b6e564d89d8568448d2d3f8995b263116199801f1cc69593aafc0aa503818
SHA512 (nbde_client-1.3.0.tar.gz) = 3c7681bb7d564aff1ecf6b24c6933a22185fd6f0bc207f572104270468ec274cfa66bb69f4db06d8d6b6403379b06187f38344aee5810a565027970aa3185079
SHA512 (nbde_server-1.4.3.tar.gz) = ed309ae3890ac9e9c05f70206044ed1e2d0876733a2b2e8f3064411870dd17e09d0072e843c1e4394e3ac7a1891f40ef69604836cc4577dcf0b2ef024ba256df
SHA512 (network-1.15.1.tar.gz) = b1b504a66ab2975fb01ce7129343bab18013c91d3c7f4c633b696488364f48e5e59980c1101fa506aa789bc784d7f6aff7d1536875aa43016bcb40986a175e5c
SHA512 (podman-1.4.7.tar.gz) = ff25604e0d38acf648863de3ca8681466adab37aabe9bea3ff295afb94a2c682bade9563b274b8630cb46670f0d82b6431e366551594bfe47145da024fc3d726
@@ -29,6 +29,6 @@ SHA512 (snapshot-1.3.1.tar.gz) = f50f6509669c612d3ce95de32a841ca2a34095478614c27
SHA512 (ssh-1.3.2.tar.gz) = 89533339fa9e3420f88f320dcead50e6e14db415201bb21c2b8a47ad96013c3ce59fc883963205ccb921e65a75f1cf38859f8d93ff7032a99fcabc0eaba4b5d7
SHA512 (storage-1.16.2.tar.gz) = c2e26178396bda7badc8cd1fb5c74e3676c9b4adcd174befd5ea76a35cb639e101d2aeafb38ac04f1e6123a5a52001bf0302fe198de20e7f958ad7c704fe2974
SHA512 (systemd-1.1.2.tar.gz) = 0b9ad42f6dff4c563a879ba4ff7f977c4ecd8595d6965a998d10bf8e3fdffa04b25bbc9e1f3deacffb263bd92910554f2f7d879dabf4c2fcf48f8bbaa91c14c0
SHA512 (timesync-1.8.2.tar.gz) = 301156a23a26cd93f919112bf972f27ed5a154ccd9e1167c1ef7a9066d5a8519c40e839f0ae4b0f8a3e04dfad5fc1c3b00166cfbd4f15d46e12ebdfec42f87a6
SHA512 (timesync-1.9.0.tar.gz) = 99ced1c244e3f64b618fb1913e0fdbcb0b44583114d32a747a9723fa88ee837b624e8a2fc00a66e4faa5c4e476d8a74ece4e89361c3122622b01c6bc073db4d5
SHA512 (tlog-1.3.3.tar.gz) = 9130559ad2d3ec1d99cb5dbbb454cd97c9490c82cf92d45b3909ac521ef6f3ab52a00180418f77775bbe0d83e80a35d9f9b4d51ca5ba4bb9ced62caf4e3f34fd
SHA512 (vpn-1.6.3.tar.gz) = 5feb32a505806fc270ca9c1a940e661e2a0bf165f988dfc0a5c66893ee8fb6c77616497c9fc768617e9cd9dc6bb6e785b184a192b0471878885531e209aadf50

collection_readme.sh

@@ -22,8 +22,37 @@ declare -A plugin_map=(
[containers/podman/plugins/modules/podman_play.py]=podman
[containers/podman/plugins/modules/podman_secret.py]=podman
[containers/podman/plugins/module_utils/podman/common.py]=podman
[containers/podman/plugins/module_utils/podman/quadlet.py]=podman
)
# fix the following issue
# ERROR: Found 1 pylint issue(s) which need to be resolved:
# ERROR: plugins/modules/rhsm_repository.py:263:8: wrong-collection-deprecated: Wrong collection name ('community.general') found in call to Display.deprecated or AnsibleModule.deprecate
sed "s/collection_name='community.general'/collection_name='%{collection_namespace}.%{collection_name}'/" \
-i .external/community/general/plugins/modules/rhsm_repository.py
fix_module_documentation() {
local module_src doc_fragment_name df_dest_dir
local -a paths
module_src=".external/$1"
sed ':a;N;$!ba;s/description:\n\( *\)/description:\n\1- "WARNING: Do not use this plugin directly! It is only for role internal use."\n\1/' \
-i "$module_src"
# grab documentation fragments
for doc_fragment_name in $(awk -F'[ -]+' '/^extends_documentation_fragment:/ {reading = 1; next}; /^[ -]/ {if (reading) {print $2}; next}; /^[^ -]/ {if (reading) {exit}}' "$module_src"); do
if [ "$doc_fragment_name" = files ]; then continue; fi # this one is built-in
df_dest_dir="%{collection_build_path}/plugins/doc_fragments"
if [ ! -d "$df_dest_dir" ]; then
mkdir -p "$df_dest_dir"
fi
paths=(${doc_fragment_name//./ })
# if we ever have two different collections that have the same doc_fragment name
# with different contents, we will be in trouble . . .
# will have to make the doc fragment files unique, then edit $dest to use
# the unique name
cp ".external/${paths[0]}/${paths[1]}/plugins/doc_fragments/${paths[2]}.py" "$df_dest_dir"
done
}
declare -a modules mod_utils collection_plugins
declare -A dests
# vendor in plugin files - fix documentation, fragments
@@ -31,9 +60,12 @@ for src in "${!plugin_map[@]}"; do
roles="${plugin_map["$src"]}"
if [ "$roles" = __collection ]; then
collection_plugins+=("$src")
case "$src" in
*/plugins/modules/*) fix_module_documentation "$src";;
esac
else
case "$src" in
*/plugins/modules/*) srcdir=plugins/modules; subdir=library; modules+=("$src") ;;
*/plugins/modules/*) srcdir=plugins/modules; subdir=library; modules+=("$src"); fix_module_documentation "$src";;
*/plugins/module_utils/*) srcdir=plugins/module_utils; mod_utils+=("$src") ;;
*/plugins/action/*) srcdir=plugins/action ;;
esac
@@ -54,9 +86,6 @@ for src in "${!plugin_map[@]}"; do
mkdir -p "$destdir"
fi
cp -pL ".external/$src" "$dest"
sed -e ':a;N;$!ba;s/description:\n\( *\)/description:\n\1- WARNING: Do not use this plugin directly! It is only for role internal use.\n\1/' \
-e '/^extends_documentation_fragment:/,/^[^ -]/{/^extends/d;/^[ -]/d}' \
-i "$dest"
done
done
@@ -92,11 +121,32 @@ done
# for podman, change the FQCN - using a non-FQCN module name doesn't seem to work,
# even for the legacy role format
for rolename in %{rolenames}; do
find "$rolename" -type f -exec \
sed -e "s/linux-system-roles[.]${rolename}\\>/%{roleinstprefix}${rolename}/g" \
-e "s/fedora[.]linux_system_roles[.]/%{collection_namespace}.%{collection_name}./g" \
-e "s/containers[.]podman[.]/%{collection_namespace}.%{collection_name}./g" \
-e "s/community[.]general[.]/%{collection_namespace}.%{collection_name}./g" \
-e "s/ansible[.]posix[.]/%{collection_namespace}.%{collection_name}./g" \
-i {} \;
find "$rolename" -type f -exec \
sed -e "s/linux-system-roles[.]${rolename}\\>/%{roleinstprefix}${rolename}/g" \
-e "s/fedora[.]linux_system_roles[.]/%{collection_namespace}.%{collection_name}./g" \
-e "s/containers[.]podman[.]/%{collection_namespace}.%{collection_name}./g" \
-e "s/community[.]general[.]/%{collection_namespace}.%{collection_name}./g" \
-e "s/ansible[.]posix[.]/%{collection_namespace}.%{collection_name}./g" \
-i {} \;
done
# add ansible-test ignores needed due to vendoring
for ansible_ver in 2.14 2.15 2.16; do
ignore_file="podman/.sanity-ansible-ignore-${ansible_ver}.txt"
cat >> "$ignore_file" <<EOF
plugins/module_utils/podman_lsr/podman/quadlet.py compile-2.7!skip
plugins/module_utils/podman_lsr/podman/quadlet.py import-2.7!skip
plugins/modules/podman_image.py import-2.7!skip
plugins/modules/podman_play.py import-2.7!skip
EOF
done
# these platforms still use python 3.5
for ansible_ver in 2.14 2.15; do
ignore_file="podman/.sanity-ansible-ignore-${ansible_ver}.txt"
cat >> "$ignore_file" <<EOF
plugins/module_utils/podman_lsr/podman/quadlet.py compile-3.5!skip
plugins/module_utils/podman_lsr/podman/quadlet.py import-3.5!skip
plugins/modules/podman_image.py import-3.5!skip
plugins/modules/podman_play.py import-3.5!skip
EOF
done