systemd-257-17

Resolves: RHEL-126456,RHEL-109832,RHEL-109902,RHEL-112205,RHEL-113920,RHEL-120177,RHEL-72813
Jan Macku 2025-11-05 08:17:05 +01:00
parent 2d81c4f556
commit cbf9da6b59
33 changed files with 3657 additions and 1 deletion


@@ -0,0 +1,87 @@
From f3a6a5afc55b525dc8796ba34097a4db49b852a0 Mon Sep 17 00:00:00 2001
From: Michal Sekletar <msekleta@redhat.com>
Date: Mon, 25 Aug 2025 15:09:36 +0200
Subject: [PATCH] pam_systemd: honor session class provided via PAM environment
Replaces #38638
Co-authored-by: Lennart Poettering <lennart@poettering.net>
(cherry picked from commit cf2630acaa87ded5ad99ea30ed4bd895e71ca503)
Resolves: RHEL-109832
---
man/pam_systemd.xml | 13 +++++++++++++
src/login/pam_systemd.c | 13 +++++++++----
2 files changed, 22 insertions(+), 4 deletions(-)
diff --git a/man/pam_systemd.xml b/man/pam_systemd.xml
index 183b37d676..3cc25bdf5b 100644
--- a/man/pam_systemd.xml
+++ b/man/pam_systemd.xml
@@ -147,6 +147,19 @@
</tgroup>
</table>
+ <para>If no session class is specified via either the PAM module option or via the
+ <varname>$XDG_SESSION_CLASS</varname> environment variable, the class is automatically chosen, depending on
+ various session parameters, such as the session type (if known), whether the session has a TTY or X11
+ display, and the user disposition. Note that various tools allow setting the session class for newly
+ allocated PAM sessions explicitly by means of the <varname>$XDG_SESSION_CLASS</varname> environment variable.
+ For example, classic UNIX cronjobs support environment variable assignments (see
+ <citerefentry project='man-pages'><refentrytitle>crontab</refentrytitle><manvolnum>5</manvolnum></citerefentry>),
+ which may be used to choose between the <constant>background</constant> and
+ <constant>background-light</constant> session class individually per cronjob, or
+ <command>run0 --setenv=XDG_SESSION_CLASS=user-light</command> may be used
+ to choose between <constant>user</constant> and <constant>user-light</constant> for invoked privileged sessions.
+ </para>
+
<xi:include href="version-info.xml" xpointer="v197"/></listitem>
</varlistentry>
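As a hedged illustration of the two mechanisms described in the new paragraph above (the cron schedule, the script path and the command run under run0 are placeholders, not part of the patch):

```
# crontab(5) fragment: a variable assignment applies to the job lines that
# follow it, so the session class can be chosen per job by reassigning it.
#   XDG_SESSION_CLASS=background-light
#   */10 * * * *  /usr/local/bin/nightly-cleanup.sh
#
# For a privileged interactive session, the lighter class can be requested
# the same way via run0:
run0 --setenv=XDG_SESSION_CLASS=user-light -- systemctl status systemd-logind.service
```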
diff --git a/src/login/pam_systemd.c b/src/login/pam_systemd.c
index 7cc64144eb..b3c84a835e 100644
--- a/src/login/pam_systemd.c
+++ b/src/login/pam_systemd.c
@@ -997,12 +997,14 @@ _public_ PAM_EXTERN int pam_sm_open_session(
desktop = getenv_harder(handle, "XDG_SESSION_DESKTOP", desktop_pam);
incomplete = getenv_harder_bool(handle, "XDG_SESSION_INCOMPLETE", false);
+ /* The session class can be overridden via the PAM environment, and we try to honor that selection. */
if (streq_ptr(service, "systemd-user")) {
/* If we detect that we are running in the "systemd-user" PAM stack, then let's patch the class to
* 'manager' if not set, simply for robustness reasons. */
type = "unspecified";
- class = IN_SET(user_record_disposition(ur), USER_INTRINSIC, USER_SYSTEM, USER_DYNAMIC) ?
- "manager-early" : "manager";
+ if (isempty(class))
+ class = IN_SET(user_record_disposition(ur), USER_INTRINSIC, USER_SYSTEM, USER_DYNAMIC) ?
+ "manager-early" : "manager";
tty = NULL;
} else if (tty && strchr(tty, ':')) {
@@ -1018,20 +1020,23 @@ _public_ PAM_EXTERN int pam_sm_open_session(
* (as they otherwise even try to update it!) — but cron doesn't actually allocate a TTY for its forked
* off processes.) */
type = "unspecified";
- class = "background";
+ if (isempty(class))
+ class = "background";
tty = NULL;
} else if (streq_ptr(tty, "ssh")) {
/* ssh has been setting PAM_TTY to "ssh" (for the same reason as cron does this, see above. For further
* details look for "PAM_TTY_KLUDGE" in the openssh sources). */
type = "tty";
- class = "user";
+ if (isempty(class))
+ class = "user";
tty = NULL; /* This one is particularly sad, as this means that ssh sessions — even though usually
* associated with a pty — won't be tracked by their tty in logind. This is because ssh
* does the PAM session registration early for new connections, and registers a pty only
* much later (this is because it doesn't know yet if it needs one at all, as whether to
* register a pty or not is negotiated much later in the protocol). */
+
} else if (tty)
/* Chop off leading /dev prefix that some clients specify, but others do not. */
tty = skip_dev_prefix(tty);


@@ -0,0 +1,58 @@
From 1603dfd95502d68cb47aa14e1c020c39397e1a59 Mon Sep 17 00:00:00 2001
From: Jan Macku <jamacku@redhat.com>
Date: Wed, 15 Oct 2025 09:02:50 +0200
Subject: [PATCH] udev/net_id: introduce naming scheme for RHEL-10.2
rhel-only: policy
Resolves: RHEL-72813
---
man/systemd.net-naming-scheme.xml | 9 +++++++++
src/shared/netif-naming-scheme.c | 1 +
src/shared/netif-naming-scheme.h | 1 +
3 files changed, 11 insertions(+)
diff --git a/man/systemd.net-naming-scheme.xml b/man/systemd.net-naming-scheme.xml
index d997b46133..9e7ae58f47 100644
--- a/man/systemd.net-naming-scheme.xml
+++ b/man/systemd.net-naming-scheme.xml
@@ -567,6 +567,15 @@
<xi:include href="version-info.xml" xpointer="rhel-10.1"/>
</listitem>
</varlistentry>
+
+ <varlistentry>
+ <term><constant>rhel-10.2</constant></term>
+
+ <listitem><para>Same as naming scheme <constant>rhel-10.0</constant>.</para>
+
+ <xi:include href="version-info.xml" xpointer="rhel-10.2"/>
+ </listitem>
+ </varlistentry>
</variablelist>
<para>By default <constant>rhel-10.0</constant> is used.</para>
diff --git a/src/shared/netif-naming-scheme.c b/src/shared/netif-naming-scheme.c
index 4929808d4d..7d22bb8831 100644
--- a/src/shared/netif-naming-scheme.c
+++ b/src/shared/netif-naming-scheme.c
@@ -47,6 +47,7 @@ static const NamingScheme naming_schemes[] = {
{ "rhel-10.0-beta", NAMING_RHEL_10_0_BETA },
{ "rhel-10.0", NAMING_RHEL_10_0 },
{ "rhel-10.1", NAMING_RHEL_10_1 },
+ { "rhel-10.2", NAMING_RHEL_10_2 },
/* … add more schemes here, as the logic to name devices is updated … */
EXTRA_NET_NAMING_MAP
diff --git a/src/shared/netif-naming-scheme.h b/src/shared/netif-naming-scheme.h
index e8ea61b6cc..06542c9e20 100644
--- a/src/shared/netif-naming-scheme.h
+++ b/src/shared/netif-naming-scheme.h
@@ -90,6 +90,7 @@ typedef enum NamingSchemeFlags {
NAMING_RHEL_10_0_BETA = NAMING_V255,
NAMING_RHEL_10_0 = NAMING_V257,
NAMING_RHEL_10_1 = NAMING_RHEL_10_0,
+ NAMING_RHEL_10_2 = NAMING_RHEL_10_0,
EXTRA_NET_NAMING_SCHEMES
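A hedged sketch of how a newly added scheme can be exercised: the interface name eno1 is a placeholder, and the $NET_NAMING_SCHEME override is assumed to be honored by the net_id builtin, as on current systemd versions.

```
# Preview the names the net_id builtin would generate for one interface under
# the new scheme, without changing any system configuration.
NET_NAMING_SCHEME=rhel-10.2 udevadm test-builtin net_id /sys/class/net/eno1 2>/dev/null | grep '^ID_NET_NAME'

# To pin the scheme persistently, the documented kernel command line switch
# can be used instead, e.g.:
#   net.naming-scheme=rhel-10.2
```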


@@ -0,0 +1,59 @@
From ea4962d881846eed9491110fbb69234b667a445c Mon Sep 17 00:00:00 2001
From: Jan Macku <jamacku@redhat.com>
Date: Wed, 15 Oct 2025 12:54:52 +0200
Subject: [PATCH] udev/net_id: introduce naming scheme for RHEL-9.8
rhel-only: policy
Resolves: RHEL-72813
---
man/systemd.net-naming-scheme.xml | 10 ++++++++++
src/shared/netif-naming-scheme.c | 1 +
src/shared/netif-naming-scheme.h | 1 +
3 files changed, 12 insertions(+)
diff --git a/man/systemd.net-naming-scheme.xml b/man/systemd.net-naming-scheme.xml
index 9e7ae58f47..3c4d67cce3 100644
--- a/man/systemd.net-naming-scheme.xml
+++ b/man/systemd.net-naming-scheme.xml
@@ -669,6 +669,16 @@
<xi:include href="version-info.xml" xpointer="rhel-9.7"/>
</listitem>
</varlistentry>
+
+ <varlistentry>
+ <term><constant>rhel-9.8</constant></term>
+
+ <listitem>
+ <para>PCI slot number is now read from <constant>firmware_node/sun</constant> sysfs file.</para>
+
+ <xi:include href="version-info.xml" xpointer="rhel-9.8"/>
+ </listitem>
+ </varlistentry>
</variablelist>
</refsect2>
diff --git a/src/shared/netif-naming-scheme.c b/src/shared/netif-naming-scheme.c
index 7d22bb8831..18b012b8c8 100644
--- a/src/shared/netif-naming-scheme.c
+++ b/src/shared/netif-naming-scheme.c
@@ -44,6 +44,7 @@ static const NamingScheme naming_schemes[] = {
{ "rhel-9.5", NAMING_RHEL_9_5 },
{ "rhel-9.6", NAMING_RHEL_9_6 },
{ "rhel-9.7", NAMING_RHEL_9_7 },
+ { "rhel-9.8", NAMING_RHEL_9_8 },
{ "rhel-10.0-beta", NAMING_RHEL_10_0_BETA },
{ "rhel-10.0", NAMING_RHEL_10_0 },
{ "rhel-10.1", NAMING_RHEL_10_1 },
diff --git a/src/shared/netif-naming-scheme.h b/src/shared/netif-naming-scheme.h
index 06542c9e20..eab374fb9a 100644
--- a/src/shared/netif-naming-scheme.h
+++ b/src/shared/netif-naming-scheme.h
@@ -86,6 +86,7 @@ typedef enum NamingSchemeFlags {
NAMING_RHEL_9_5 = NAMING_RHEL_9_4 & ~NAMING_BRIDGE_MULTIFUNCTION_SLOT,
NAMING_RHEL_9_6 = NAMING_RHEL_9_5,
NAMING_RHEL_9_7 = NAMING_RHEL_9_5,
+ NAMING_RHEL_9_8 = NAMING_RHEL_9_5 | NAMING_FIRMWARE_NODE_SUN,
NAMING_RHEL_10_0_BETA = NAMING_V255,
NAMING_RHEL_10_0 = NAMING_V257,


@@ -0,0 +1,423 @@
From 8b14e3ac69bb7f657b103883c0db22d2fdf22ffb Mon Sep 17 00:00:00 2001
From: Luca Boccassi <luca.boccassi@gmail.com>
Date: Thu, 21 Nov 2024 09:51:14 +0000
Subject: [PATCH] test: split VM-only subtests from TEST-74-AUX-UTILS to new
VM-only test
TEST-74-AUX-UTILS covers many subtests, as it's a catch-all job, and a few
need a VM to run. The job is thus marked VM-only. But that means in settings
where we can't run VM tests (no KVM available), the entire thing is skipped,
losing tons of coverage that doesn't need skipping.
Move the VM-only subtests to TEST-87-AUX-UTILS-VM, which is configured to run
only in VMs under both runners. This way we keep the existing tests as-is, and
we can add new VM-only tests without worrying. This is how the rest of the
tests are organized.
Follow-up for f4faac20730cbb339ae05ed6e20da687a2868e76
(cherry picked from commit 3f9539a97f3b4747ff22a530bac39dec24ac58af)
Resolves: RHEL-112205
---
test/TEST-74-AUX-UTILS/meson.build | 4 +-
test/TEST-87-AUX-UTILS-VM/meson.build | 11 ++
test/meson.build | 1 +
test/units/TEST-74-AUX-UTILS.detect-virt.sh | 4 -
...ctl.sh => TEST-87-AUX-UTILS-VM.bootctl.sh} | 7 +-
...mp.sh => TEST-87-AUX-UTILS-VM.coredump.sh} | 9 +-
.../units/TEST-87-AUX-UTILS-VM.detect-virt.sh | 11 ++
...h => TEST-87-AUX-UTILS-VM.modules-load.sh} | 7 +-
test/units/TEST-87-AUX-UTILS-VM.mount.sh | 182 ++++++++++++++++++
...tore.sh => TEST-87-AUX-UTILS-VM.pstore.sh} | 5 +-
test/units/TEST-87-AUX-UTILS-VM.sh | 11 ++
11 files changed, 225 insertions(+), 27 deletions(-)
create mode 100644 test/TEST-87-AUX-UTILS-VM/meson.build
rename test/units/{TEST-74-AUX-UTILS.bootctl.sh => TEST-87-AUX-UTILS-VM.bootctl.sh} (98%)
rename test/units/{TEST-74-AUX-UTILS.coredump.sh => TEST-87-AUX-UTILS-VM.coredump.sh} (98%)
create mode 100755 test/units/TEST-87-AUX-UTILS-VM.detect-virt.sh
rename test/units/{TEST-74-AUX-UTILS.modules-load.sh => TEST-87-AUX-UTILS-VM.modules-load.sh} (94%)
create mode 100755 test/units/TEST-87-AUX-UTILS-VM.mount.sh
rename test/units/{TEST-74-AUX-UTILS.pstore.sh => TEST-87-AUX-UTILS-VM.pstore.sh} (98%)
create mode 100755 test/units/TEST-87-AUX-UTILS-VM.sh
diff --git a/test/TEST-74-AUX-UTILS/meson.build b/test/TEST-74-AUX-UTILS/meson.build
index ee24cd8f78..698d03b055 100644
--- a/test/TEST-74-AUX-UTILS/meson.build
+++ b/test/TEST-74-AUX-UTILS/meson.build
@@ -1,11 +1,9 @@
# SPDX-License-Identifier: LGPL-2.1-or-later
+# Container-specific auxiliary tests. VM-based ones go in TEST-87-AUX-UTILS-VM.
integration_tests += [
integration_test_template + {
'name' : fs.name(meson.current_source_dir()),
- 'storage': 'persistent',
- 'vm' : true,
- 'coredump-exclude-regex' : '/(test-usr-dump|test-dump|bash)$',
},
]
diff --git a/test/TEST-87-AUX-UTILS-VM/meson.build b/test/TEST-87-AUX-UTILS-VM/meson.build
new file mode 100644
index 0000000000..8490139204
--- /dev/null
+++ b/test/TEST-87-AUX-UTILS-VM/meson.build
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: LGPL-2.1-or-later
+# VM-specific auxiliary tests. Container-based ones go in TEST-74-AUX-UTILS.
+
+integration_tests += [
+ integration_test_template + {
+ 'name' : fs.name(meson.current_source_dir()),
+ 'storage': 'persistent',
+ 'coredump-exclude-regex' : '/(test-usr-dump|test-dump|bash)$',
+ 'vm' : true,
+ },
+]
diff --git a/test/meson.build b/test/meson.build
index c440bcb722..40b2ba2c38 100644
--- a/test/meson.build
+++ b/test/meson.build
@@ -381,6 +381,7 @@ foreach dirname : [
# 'TEST-84-STORAGETM', # we don't ship systemd-storagetm
# 'TEST-85-NETWORK', # we don't ship systemd-networkd
'TEST-86-MULTI-PROFILE-UKI',
+ 'TEST-87-AUX-UTILS-VM',
]
subdir(dirname)
endforeach
diff --git a/test/units/TEST-74-AUX-UTILS.detect-virt.sh b/test/units/TEST-74-AUX-UTILS.detect-virt.sh
index a1539d9b44..fe1db4d2aa 100755
--- a/test/units/TEST-74-AUX-UTILS.detect-virt.sh
+++ b/test/units/TEST-74-AUX-UTILS.detect-virt.sh
@@ -5,7 +5,3 @@ set -o pipefail
SYSTEMD_IN_CHROOT=1 systemd-detect-virt --chroot
(! SYSTEMD_IN_CHROOT=0 systemd-detect-virt --chroot)
-
-if ! systemd-detect-virt -c; then
- unshare --mount-proc --fork --user --pid systemd-detect-virt --container
-fi
diff --git a/test/units/TEST-74-AUX-UTILS.bootctl.sh b/test/units/TEST-87-AUX-UTILS-VM.bootctl.sh
similarity index 98%
rename from test/units/TEST-74-AUX-UTILS.bootctl.sh
rename to test/units/TEST-87-AUX-UTILS-VM.bootctl.sh
index 650c289aca..5a9d8fb27f 100755
--- a/test/units/TEST-74-AUX-UTILS.bootctl.sh
+++ b/test/units/TEST-87-AUX-UTILS-VM.bootctl.sh
@@ -3,11 +3,6 @@
set -eux
set -o pipefail
-if systemd-detect-virt --quiet --container; then
- echo "running on container, skipping."
- exit 0
-fi
-
if ! command -v bootctl >/dev/null; then
echo "bootctl not found, skipping."
exit 0
@@ -24,6 +19,8 @@ fi
# shellcheck source=test/units/test-control.sh
. "$(dirname "$0")"/test-control.sh
+(! systemd-detect-virt -cq)
+
basic_tests() {
bootctl "$@" --help
bootctl "$@" --version
diff --git a/test/units/TEST-74-AUX-UTILS.coredump.sh b/test/units/TEST-87-AUX-UTILS-VM.coredump.sh
similarity index 98%
rename from test/units/TEST-74-AUX-UTILS.coredump.sh
rename to test/units/TEST-87-AUX-UTILS-VM.coredump.sh
index 2c084f54d2..7ab6f29d7d 100755
--- a/test/units/TEST-74-AUX-UTILS.coredump.sh
+++ b/test/units/TEST-87-AUX-UTILS-VM.coredump.sh
@@ -19,12 +19,9 @@ at_exit() {
rm -fv -- "$CORE_TEST_BIN" "$CORE_TEST_UNPRIV_BIN" "$MAKE_DUMP_SCRIPT" "$MAKE_STACKTRACE_DUMP"
}
-trap at_exit EXIT
+(! systemd-detect-virt -cq)
-if systemd-detect-virt -cq; then
- echo "Running in a container, skipping the systemd-coredump test..."
- exit 0
-fi
+trap at_exit EXIT
# To make all coredump entries stored in system.journal.
journalctl --rotate
@@ -81,7 +78,7 @@ timeout 30 bash -c "while [[ \$(coredumpctl list -q --no-legend $CORE_TEST_BIN |
if cgroupfs_supports_user_xattrs; then
# Make sure we can forward crashes back to containers
- CONTAINER="TEST-74-AUX-UTILS-container"
+ CONTAINER="TEST-87-AUX-UTILS-VM-container"
mkdir -p "/var/lib/machines/$CONTAINER"
mkdir -p "/run/systemd/system/systemd-nspawn@$CONTAINER.service.d"
diff --git a/test/units/TEST-87-AUX-UTILS-VM.detect-virt.sh b/test/units/TEST-87-AUX-UTILS-VM.detect-virt.sh
new file mode 100755
index 0000000000..251a0e8910
--- /dev/null
+++ b/test/units/TEST-87-AUX-UTILS-VM.detect-virt.sh
@@ -0,0 +1,11 @@
+#!/usr/bin/env bash
+# SPDX-License-Identifier: LGPL-2.1-or-later
+set -eux
+set -o pipefail
+
+(! systemd-detect-virt -cq)
+
+SYSTEMD_IN_CHROOT=1 systemd-detect-virt --chroot
+(! SYSTEMD_IN_CHROOT=0 systemd-detect-virt --chroot)
+
+unshare --mount-proc --fork --user --pid systemd-detect-virt --container
diff --git a/test/units/TEST-74-AUX-UTILS.modules-load.sh b/test/units/TEST-87-AUX-UTILS-VM.modules-load.sh
similarity index 94%
rename from test/units/TEST-74-AUX-UTILS.modules-load.sh
rename to test/units/TEST-87-AUX-UTILS-VM.modules-load.sh
index ceac8262bf..140f3d5f95 100755
--- a/test/units/TEST-74-AUX-UTILS.modules-load.sh
+++ b/test/units/TEST-87-AUX-UTILS-VM.modules-load.sh
@@ -10,12 +10,9 @@ at_exit() {
rm -rfv "${CONFIG_FILE:?}"
}
-trap at_exit EXIT
+(! systemd-detect-virt -cq)
-if systemd-detect-virt -cq; then
- echo "Running in a container, skipping the systemd-modules-load test..."
- exit 0
-fi
+trap at_exit EXIT
ORIG_MODULES_LOAD_CONFIG="$(systemd-analyze cat-config modules-load.d)"
diff --git a/test/units/TEST-87-AUX-UTILS-VM.mount.sh b/test/units/TEST-87-AUX-UTILS-VM.mount.sh
new file mode 100755
index 0000000000..3075d0fe2e
--- /dev/null
+++ b/test/units/TEST-87-AUX-UTILS-VM.mount.sh
@@ -0,0 +1,182 @@
+#!/usr/bin/env bash
+# SPDX-License-Identifier: LGPL-2.1-or-later
+set -eux
+set -o pipefail
+
+# shellcheck source=test/units/util.sh
+. "$(dirname "$0")"/util.sh
+
+at_exit() {
+ set +e
+
+ [[ -n "${LOOP:-}" ]] && losetup -d "$LOOP"
+ [[ -n "${WORK_DIR:-}" ]] && rm -fr "$WORK_DIR"
+}
+
+(! systemd-detect-virt -cq)
+
+trap at_exit EXIT
+
+WORK_DIR="$(mktemp -d)"
+mkdir -p "$WORK_DIR/mnt"
+
+systemd-mount --list
+systemd-mount --list --full
+systemd-mount --list --no-legend
+systemd-mount --list --no-pager
+systemd-mount --list --quiet
+systemd-mount --list --json=pretty
+
+# tmpfs
+mkdir -p "$WORK_DIR/mnt/foo/bar"
+systemd-mount --tmpfs "$WORK_DIR/mnt/foo"
+test ! -d "$WORK_DIR/mnt/foo/bar"
+touch "$WORK_DIR/mnt/foo/baz"
+systemd-umount "$WORK_DIR/mnt/foo"
+test -d "$WORK_DIR/mnt/foo/bar"
+test ! -e "$WORK_DIR/mnt/foo/baz"
+
+# overlay
+systemd-mount --type=overlay --options="lowerdir=/etc,upperdir=$WORK_DIR/upper,workdir=$WORK_DIR/work" /etc "$WORK_DIR/overlay"
+touch "$WORK_DIR/overlay/foo"
+test -e "$WORK_DIR/upper/foo"
+systemd-umount "$WORK_DIR/overlay"
+
+# Set up a simple block device for further tests
+dd if=/dev/zero of="$WORK_DIR/simple.img" bs=1M count=16
+mkfs.ext4 -L sd-mount-test "$WORK_DIR/simple.img"
+LOOP="$(losetup --show --find "$WORK_DIR/simple.img")"
+udevadm wait --timeout 60 --settle "$LOOP"
+# Also wait until the .device unit for the loop device is active. Otherwise, the .device unit activation
+# that is triggered by the .mount unit introduced by systemd-mount below may time out.
+timeout 60 bash -c "until systemctl is-active $LOOP; do sleep 1; done"
+mount "$LOOP" "$WORK_DIR/mnt"
+touch "$WORK_DIR/mnt/foo.bar"
+umount "$LOOP"
+(! mountpoint "$WORK_DIR/mnt")
+# Wait for the mount unit to be unloaded. Otherwise, creation of the transient unit below may fail.
+MOUNT_UNIT=$(systemd-escape --path --suffix=mount "$WORK_DIR/mnt")
+timeout 60 bash -c "while [[ -n \$(systemctl list-units --all --no-legend $MOUNT_UNIT) ]]; do sleep 1; done"
+
+# Mount with both source and destination set
+systemd-mount "$LOOP" "$WORK_DIR/mnt"
+systemctl status "$WORK_DIR/mnt"
+systemd-mount --list --full
+test -e "$WORK_DIR/mnt/foo.bar"
+systemd-umount "$WORK_DIR/mnt"
+# Same thing, but with explicitly specified filesystem and disabled filesystem check
+systemd-mount --type=ext4 --fsck=no --collect "$LOOP" "$WORK_DIR/mnt"
+systemctl status "$(systemd-escape --path "$WORK_DIR/mnt").mount"
+test -e "$WORK_DIR/mnt/foo.bar"
+systemd-mount --umount "$LOOP"
+# Discover additional metadata (unit description should now contain filesystem label)
+systemd-mount --no-ask-password --discover "$LOOP" "$WORK_DIR/mnt"
+test -e "$WORK_DIR/mnt/foo.bar"
+systemctl show -P Description "$WORK_DIR/mnt" | grep -q sd-mount-test
+systemd-umount "$WORK_DIR/mnt"
+# Set a unit description
+systemd-mount --description="Very Important Unit" "$LOOP" "$WORK_DIR/mnt"
+test -e "$WORK_DIR/mnt/foo.bar"
+systemctl show -P Description "$WORK_DIR/mnt" | grep -q "Very Important Unit"
+systemd-umount "$WORK_DIR/mnt"
+# Set a property
+systemd-mount --property="Description=Foo Bar" "$LOOP" "$WORK_DIR/mnt"
+test -e "$WORK_DIR/mnt/foo.bar"
+systemctl show -P Description "$WORK_DIR/mnt" | grep -q "Foo Bar"
+systemd-umount "$WORK_DIR/mnt"
+# Set mount options
+systemd-mount --options=ro,x-foo-bar "$LOOP" "$WORK_DIR/mnt"
+test -e "$WORK_DIR/mnt/foo.bar"
+systemctl show -P Options "$WORK_DIR/mnt" | grep -Eq "(^ro|,ro)"
+systemctl show -P Options "$WORK_DIR/mnt" | grep -q "x-foo-bar"
+systemd-umount "$WORK_DIR/mnt"
+
+# Mount with only source set
+systemd-mount "$LOOP"
+systemctl status /run/media/system/sd-mount-test
+systemd-mount --list --full
+test -e /run/media/system/sd-mount-test/foo.bar
+systemd-umount LABEL=sd-mount-test
+
+# Automount
+systemd-mount --automount=yes "$LOOP" "$WORK_DIR/mnt"
+systemd-mount --list --full
+systemctl status "$(systemd-escape --path "$WORK_DIR/mnt").automount"
+[[ "$(systemctl show -P ActiveState "$WORK_DIR/mnt")" == inactive ]]
+test -e "$WORK_DIR/mnt/foo.bar"
+systemctl status "$WORK_DIR/mnt"
+systemd-umount "$WORK_DIR/mnt"
+# Automount + automount-specific property
+systemd-mount -A --automount-property="Description=Bar Baz" "$LOOP" "$WORK_DIR/mnt"
+systemctl show -P Description "$(systemd-escape --path "$WORK_DIR/mnt").automount" | grep -q "Bar Baz"
+test -e "$WORK_DIR/mnt/foo.bar"
+# Call --umount via --machine=, first with a relative path (bad) and then with
+# an absolute one (good)
+(! systemd-umount --machine=.host "$(realpath --relative-to=. "$WORK_DIR/mnt")")
+systemd-umount --machine=.host "$WORK_DIR/mnt"
+
+# ext4 doesn't support uid=/gid=
+(! systemd-mount -t ext4 --owner=testuser "$LOOP" "$WORK_DIR/mnt")
+
+# Automount + --bind-device
+systemd-mount --automount=yes --bind-device --timeout-idle-sec=1 "$LOOP" "$WORK_DIR/mnt"
+systemctl status "$(systemd-escape --path "$WORK_DIR/mnt").automount"
+# Trigger the automount
+test -e "$WORK_DIR/mnt/foo.bar"
+# Wait until it's idle again
+sleep 1.5
+# Safety net for slower/overloaded systems
+timeout 10s bash -c "while systemctl is-active -q $WORK_DIR/mnt; do sleep .2; done"
+systemctl status "$(systemd-escape --path "$WORK_DIR/mnt").automount"
+# Disassemble the underlying block device
+losetup -d "$LOOP"
+unset LOOP
+# The automount unit should disappear once the underlying blockdev is gone
+timeout 10s bash -c "while systemctl status '$(systemd-escape --path "$WORK_DIR/mnt".automount)'; do sleep .2; done"
+
+# Mount a disk image
+systemd-mount --discover "$WORK_DIR/simple.img"
+# We can access files in the image even if the loopback block device is not initialized by udevd.
+test -e /run/media/system/simple.img/foo.bar
+# systemd-mount --list and systemd-umount require that the loopback block device be initialized by udevd.
+udevadm settle --timeout 30
+assert_in "/dev/loop.* ext4 +sd-mount-test" "$(systemd-mount --list --full)"
+LOOP_AUTO=$(systemd-mount --list --full --no-legend | awk '$7 == "sd-mount-test" { print $1 }')
+LOOP_AUTO_DEVPATH=$(udevadm info --query property --property DEVPATH --value "$LOOP_AUTO")
+systemd-umount "$WORK_DIR/simple.img"
+# Wait for 'change' uevent for the device with DISK_MEDIA_CHANGE=1.
+# After the event, the backing_file attribute should be removed.
+timeout 60 bash -c "while [[ -e /sys/$LOOP_AUTO_DEVPATH/loop/backing_file ]]; do sleep 1; done"
+
+# --owner + vfat
+#
+# Create a vfat image, as ext4 doesn't support uid=/gid= fixating for all
+# files/directories
+dd if=/dev/zero of="$WORK_DIR/owner-vfat.img" bs=1M count=16
+mkfs.vfat -n owner-vfat "$WORK_DIR/owner-vfat.img"
+LOOP="$(losetup --show --find "$WORK_DIR/owner-vfat.img")"
+# If the synthesized uevent triggered by the inotify event is processed before the kernel finishes attaching
+# the backing file, then SYSTEMD_READY=0 is set for the device. As a workaround, monitor the sysattr
+# and re-trigger the uevent after that.
+LOOP_DEVPATH=$(udevadm info --query property --property DEVPATH --value "$LOOP")
+timeout 60 bash -c "until [[ -e /sys/$LOOP_DEVPATH/loop/backing_file ]]; do sleep 1; done"
+udevadm trigger --settle "$LOOP"
+# Also wait until the .device unit for the loop device is active. Otherwise, the .device unit activation
+# that is triggered by the .mount unit introduced by systemd-mount below may time out.
+if ! timeout 60 bash -c "until systemctl is-active $LOOP; do sleep 1; done"; then
+ # For debugging issue like
+ # https://github.com/systemd/systemd/issues/32680#issuecomment-2120959238
+ # https://github.com/systemd/systemd/issues/32680#issuecomment-2122074805
+ udevadm info "$LOOP"
+ udevadm info --attribute-walk "$LOOP"
+ cat /sys/"$(udevadm info --query property --property DEVPATH --value "$LOOP")"/loop/backing_file || :
+ false
+fi
+# Mount it and check the UID/GID
+[[ "$(stat -c "%U:%G" "$WORK_DIR/mnt")" == "root:root" ]]
+systemd-mount --owner=testuser "$LOOP" "$WORK_DIR/mnt"
+systemctl status "$WORK_DIR/mnt"
+[[ "$(stat -c "%U:%G" "$WORK_DIR/mnt")" == "testuser:testuser" ]]
+touch "$WORK_DIR/mnt/hello"
+[[ "$(stat -c "%U:%G" "$WORK_DIR/mnt/hello")" == "testuser:testuser" ]]
+systemd-umount LABEL=owner-vfat
diff --git a/test/units/TEST-74-AUX-UTILS.pstore.sh b/test/units/TEST-87-AUX-UTILS-VM.pstore.sh
similarity index 98%
rename from test/units/TEST-74-AUX-UTILS.pstore.sh
rename to test/units/TEST-87-AUX-UTILS-VM.pstore.sh
index 9be8066e8e..043d023856 100755
--- a/test/units/TEST-74-AUX-UTILS.pstore.sh
+++ b/test/units/TEST-87-AUX-UTILS-VM.pstore.sh
@@ -5,10 +5,7 @@ set -o pipefail
systemctl log-level info
-if systemd-detect-virt -cq; then
- echo "Running in a container, skipping the systemd-pstore test..."
- exit 0
-fi
+(! systemd-detect-virt -cq)
DUMMY_DMESG_0="$(mktemp)"
cat >"$DUMMY_DMESG_0" <<\EOF
diff --git a/test/units/TEST-87-AUX-UTILS-VM.sh b/test/units/TEST-87-AUX-UTILS-VM.sh
new file mode 100755
index 0000000000..9c2a033aa9
--- /dev/null
+++ b/test/units/TEST-87-AUX-UTILS-VM.sh
@@ -0,0 +1,11 @@
+#!/usr/bin/env bash
+# SPDX-License-Identifier: LGPL-2.1-or-later
+set -eux
+set -o pipefail
+
+# shellcheck source=test/units/test-control.sh
+. "$(dirname "$0")"/test-control.sh
+
+run_subtests
+
+touch /testok


@@ -0,0 +1,123 @@
From 78ed449353cbe20dcddcbea1f7f6370a5df0c209 Mon Sep 17 00:00:00 2001
From: Yu Watanabe <watanabe.yu+github@gmail.com>
Date: Mon, 1 Sep 2025 05:08:45 +0900
Subject: [PATCH] core/transaction: first drop unmergeable jobs for anchor jobs
As you can see, something spurious happens in the logs below.
```
initrd-switch-root.target: Trying to enqueue job initrd-switch-root.target/start/isolate
systemd-repart.service: Looking at job systemd-repart.service/stop conflicted_by=no
systemd-repart.service: Looking at job systemd-repart.service/start conflicted_by=no
systemd-repart.service: Fixing conflicting jobs systemd-repart.service/stop,systemd-repart.service/start by deleting job systemd-repart.service/stop
initrd-switch-root.target: Fixing conflicting jobs initrd-switch-root.target/stop,initrd-switch-root.target/start by deleting job initrd-switch-root.target/stop
systemd-repart.service: Deleting job systemd-repart.service/start as dependency of job initrd-switch-root.target/stop
```
The two conflicting jobs for systemd-repart.service are initially queued
as the following:
- initrd-switch-root.target has Wants=initrd-root-fs.target, and
initrd-root-fs.target has Wants=systemd-repart.service (through symlink),
hence starting initrd-switch-root.target tries to start
systemd-repart.service,
- systemd-repart.service has Conflicts=initrd-switch-root.target, hence
starting initrd-switch-root.target tries to stop
systemd-repart.service.
Similarly (and somewhat surprisingly), starting initrd-switch-root.target also
tries to stop initrd-switch-root.target.
So, now there are at least two pairs of conflicting jobs:
- systemd-repart.service: start vs stop,
- initrd-switch-root.target: start vs stop.
As these jobs are induced by starting initrd-switch-root.target, of course
the most important one is the start job for initrd-switch-root.target.
Previously, as you can see in the logs at the beginning, even though the
start job for initrd-switch-root.target is the important one, we might first
try to resolve the conflict in systemd-repart.service, and might drop the
stop job for systemd-repart.service even though it is relevant to the start
job of initrd-switch-root.target.
With this change, the pair of conflicting jobs for the anchor task is resolved
first. Hence the stop job for initrd-switch-root.target is dropped first, and
the induced start job for systemd-repart.service is removed automatically, so
the conflict in systemd-repart.service no longer needs to be resolved at all.
This is especially important for services that are enabled both in initrd
and after switching root. If a stop job for one of these services is
unexpectedly dropped during switching root, then the service is not stopped
before switching root, and will never be started after that.
Fixes #38765.
(cherry picked from commit 811af8d53463fae5a8470b7884158cee0f9acbe4)
Resolves: RHEL-112205
---
src/core/transaction.c | 29 +++++++++++++++++++++++++----
1 file changed, 25 insertions(+), 4 deletions(-)
diff --git a/src/core/transaction.c b/src/core/transaction.c
index 705ed0c50f..eeaf7e8be5 100644
--- a/src/core/transaction.c
+++ b/src/core/transaction.c
@@ -14,6 +14,7 @@
#include "terminal-util.h"
#include "transaction.h"
+static bool job_matters_to_anchor(Job *job);
static void transaction_unlink_job(Transaction *tr, Job *j, bool delete_dependencies);
static void transaction_delete_job(Transaction *tr, Job *j, bool delete_dependencies) {
@@ -215,17 +216,18 @@ static int delete_one_unmergeable_job(Transaction *tr, Job *job) {
return -EINVAL;
}
-static int transaction_merge_jobs(Transaction *tr, sd_bus_error *e) {
+static int transaction_ensure_mergeable(Transaction *tr, bool matters_to_anchor, sd_bus_error *e) {
Job *j;
int r;
assert(tr);
- /* First step, check whether any of the jobs for one specific
- * task conflict. If so, try to drop one of them. */
HASHMAP_FOREACH(j, tr->jobs) {
JobType t;
+ if (job_matters_to_anchor(j) != matters_to_anchor)
+ continue;
+
t = j->type;
LIST_FOREACH(transaction, k, j->transaction_next) {
if (job_type_merge_and_collapse(&t, k->type, j->unit) >= 0)
@@ -252,7 +254,26 @@ static int transaction_merge_jobs(Transaction *tr, sd_bus_error *e) {
}
}
- /* Second step, merge the jobs. */
+ return 0;
+}
+
+static int transaction_merge_jobs(Transaction *tr, sd_bus_error *e) {
+ Job *j;
+ int r;
+
+ assert(tr);
+
+ /* First step, try to drop unmergeable jobs for jobs that matter to anchor. */
+ r = transaction_ensure_mergeable(tr, /* matters_to_anchor = */ true, e);
+ if (r < 0)
+ return r;
+
+ /* Second step, do the same for jobs that do not matter to anchor. */
+ r = transaction_ensure_mergeable(tr, /* matters_to_anchor = */ false, e);
+ if (r < 0)
+ return r;
+
+ /* Third step, merge the jobs. */
HASHMAP_FOREACH(j, tr->jobs) {
JobType t = j->type;
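For reference, the "Looking at job" / "Fixing conflicting jobs" messages quoted in the commit message are debug-level only; a hedged sketch of how the merge decisions can be observed on a live system (foo.target is a placeholder, and --grep assumes journalctl was built with pcre2 support):

```
# Temporarily raise PID 1's log level, trigger a transaction that enqueues
# conflicting jobs, then search the journal for the merge decisions.
systemd-analyze log-level debug
systemctl start foo.target   # placeholder for a transaction involving Conflicts=
journalctl -b --grep 'Looking at job|Fixing conflicting jobs'
systemd-analyze log-level info
```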


@@ -0,0 +1,44 @@
From 44e20b005290dbee4e15b65238f7d87bd51be912 Mon Sep 17 00:00:00 2001
From: Yu Watanabe <watanabe.yu+github@gmail.com>
Date: Thu, 4 Sep 2025 00:49:34 +0900
Subject: [PATCH] test: add test case for issue #38765
(cherry picked from commit 5b89cc2a5ad9ecb040dc1fc9b31fb0e24a59e9ae)
Resolves: RHEL-112205
---
src/core/transaction.c | 1 +
test/units/TEST-87-AUX-UTILS-VM.sh | 8 ++++++++
2 files changed, 9 insertions(+)
diff --git a/src/core/transaction.c b/src/core/transaction.c
index eeaf7e8be5..e4b04d0461 100644
--- a/src/core/transaction.c
+++ b/src/core/transaction.c
@@ -171,6 +171,7 @@ static int delete_one_unmergeable_job(Transaction *tr, Job *job) {
* another unit in which case we
* rather remove the start. */
+ /* Update test/units/TEST-87-AUX-UTILS-VM.sh when logs below are changed. */
log_unit_debug(j->unit,
"Looking at job %s/%s conflicted_by=%s",
j->unit->id, job_type_to_string(j->type),
diff --git a/test/units/TEST-87-AUX-UTILS-VM.sh b/test/units/TEST-87-AUX-UTILS-VM.sh
index 9c2a033aa9..ecbff290f0 100755
--- a/test/units/TEST-87-AUX-UTILS-VM.sh
+++ b/test/units/TEST-87-AUX-UTILS-VM.sh
@@ -3,6 +3,14 @@
set -eux
set -o pipefail
+# For issue #38765
+journalctl --sync
+if journalctl -q -o short-monotonic --grep "Looking at job .*/.* conflicted_by=(yes|no)" >/failed; then
+ echo "Found unexpected unmergeable jobs"
+ cat /failed
+ exit 1
+fi
+
# shellcheck source=test/units/test-control.sh
. "$(dirname "$0")"/test-control.sh


@@ -0,0 +1,89 @@
From cdb2fd795861211f6e89b9156971677eb5e2ef70 Mon Sep 17 00:00:00 2001
From: Lennart Poettering <lennart@poettering.net>
Date: Mon, 20 Jan 2025 10:31:09 +0100
Subject: [PATCH] strv: add strv_equal_ignore_order() helper
(cherry picked from commit 5072f4268b89a71e47e59c434da0222f722c7f0e)
Resolves: RHEL-109902
---
src/basic/strv.c | 20 ++++++++++++++++++++
src/basic/strv.h | 2 ++
src/test/test-strv.c | 22 ++++++++++++++++++++++
3 files changed, 44 insertions(+)
diff --git a/src/basic/strv.c b/src/basic/strv.c
index a92b1234a3..c9c4551cdc 100644
--- a/src/basic/strv.c
+++ b/src/basic/strv.c
@@ -861,6 +861,26 @@ int strv_compare(char * const *a, char * const *b) {
return 0;
}
+bool strv_equal_ignore_order(char **a, char **b) {
+
+ /* Just like strv_equal(), but doesn't care about the order of elements or about redundant entries
+ * (i.e. it's even ok if the number of entries in the arrays differs, as long as the difference just
+ * consists of repetitions) */
+
+ if (a == b)
+ return true;
+
+ STRV_FOREACH(i, a)
+ if (!strv_contains(b, *i))
+ return false;
+
+ STRV_FOREACH(i, b)
+ if (!strv_contains(a, *i))
+ return false;
+
+ return true;
+}
+
void strv_print_full(char * const *l, const char *prefix) {
STRV_FOREACH(s, l)
printf("%s%s\n", strempty(prefix), *s);
diff --git a/src/basic/strv.h b/src/basic/strv.h
index 49ef19dcb5..86ba06f835 100644
--- a/src/basic/strv.h
+++ b/src/basic/strv.h
@@ -96,6 +96,8 @@ static inline bool strv_equal(char * const *a, char * const *b) {
return strv_compare(a, b) == 0;
}
+bool strv_equal_ignore_order(char **a, char **b);
+
char** strv_new_internal(const char *x, ...) _sentinel_;
char** strv_new_ap(const char *x, va_list ap);
#define strv_new(...) strv_new_internal(__VA_ARGS__, NULL)
diff --git a/src/test/test-strv.c b/src/test/test-strv.c
index d641043c50..b1d30d73a5 100644
--- a/src/test/test-strv.c
+++ b/src/test/test-strv.c
@@ -1255,4 +1255,26 @@ TEST(strv_find_closest) {
ASSERT_NULL(strv_find_closest(l, "sfajosajfosdjaofjdsaf"));
}
+TEST(strv_equal_ignore_order) {
+
+ ASSERT_TRUE(strv_equal_ignore_order(NULL, NULL));
+ ASSERT_TRUE(strv_equal_ignore_order(NULL, STRV_MAKE(NULL)));
+ ASSERT_TRUE(strv_equal_ignore_order(STRV_MAKE(NULL), NULL));
+ ASSERT_TRUE(strv_equal_ignore_order(STRV_MAKE(NULL), STRV_MAKE(NULL)));
+
+ ASSERT_FALSE(strv_equal_ignore_order(STRV_MAKE("foo"), NULL));
+ ASSERT_FALSE(strv_equal_ignore_order(STRV_MAKE("foo"), STRV_MAKE(NULL)));
+ ASSERT_FALSE(strv_equal_ignore_order(NULL, STRV_MAKE("foo")));
+ ASSERT_FALSE(strv_equal_ignore_order(STRV_MAKE(NULL), STRV_MAKE("foo")));
+ ASSERT_TRUE(strv_equal_ignore_order(STRV_MAKE("foo"), STRV_MAKE("foo")));
+ ASSERT_FALSE(strv_equal_ignore_order(STRV_MAKE("foo"), STRV_MAKE("foo", "bar")));
+ ASSERT_FALSE(strv_equal_ignore_order(STRV_MAKE("foo", "bar"), STRV_MAKE("foo")));
+ ASSERT_TRUE(strv_equal_ignore_order(STRV_MAKE("foo", "bar"), STRV_MAKE("foo", "bar")));
+ ASSERT_TRUE(strv_equal_ignore_order(STRV_MAKE("bar", "foo"), STRV_MAKE("foo", "bar")));
+ ASSERT_FALSE(strv_equal_ignore_order(STRV_MAKE("bar", "foo"), STRV_MAKE("foo", "bar", "quux")));
+ ASSERT_FALSE(strv_equal_ignore_order(STRV_MAKE("bar", "foo", "quux"), STRV_MAKE("foo", "bar")));
+ ASSERT_TRUE(strv_equal_ignore_order(STRV_MAKE("bar", "foo", "quux"), STRV_MAKE("quux", "foo", "bar")));
+ ASSERT_TRUE(strv_equal_ignore_order(STRV_MAKE("bar", "foo"), STRV_MAKE("bar", "foo", "bar", "foo", "foo")));
+}
+
DEFINE_TEST_MAIN(LOG_INFO);


@@ -0,0 +1,98 @@
From 463c56040bad633e1c7d8883f8d80d5c017f38b7 Mon Sep 17 00:00:00 2001
From: Lennart Poettering <lennart@poettering.net>
Date: Thu, 16 Jan 2025 14:15:26 +0100
Subject: [PATCH] pam: minor coding style tweaks
(cherry picked from commit 30de5691744781277f992a25afa268518f3fe711)
Resolves: RHEL-109902
---
src/home/pam_systemd_home.c | 7 ++-----
src/login/pam_systemd.c | 13 ++++++-------
2 files changed, 8 insertions(+), 12 deletions(-)
diff --git a/src/home/pam_systemd_home.c b/src/home/pam_systemd_home.c
index 624f1ced88..0d28e99ba2 100644
--- a/src/home/pam_systemd_home.c
+++ b/src/home/pam_systemd_home.c
@@ -115,7 +115,6 @@ static int acquire_user_record(
r = pam_get_user(handle, &username, NULL);
if (r != PAM_SUCCESS)
return pam_syslog_pam_error(handle, LOG_ERR, r, "Failed to get user name: @PAMERR@");
-
if (isempty(username))
return pam_syslog_pam_error(handle, LOG_ERR, PAM_SERVICE_ERR, "User name not set.");
}
@@ -535,7 +534,6 @@ static int acquire_home(
r = pam_get_user(handle, &username, NULL);
if (r != PAM_SUCCESS)
return pam_syslog_pam_error(handle, LOG_ERR, r, "Failed to get user name: @PAMERR@");
-
if (isempty(username))
return pam_syslog_pam_error(handle, LOG_ERR, PAM_SERVICE_ERR, "User name not set.");
@@ -879,7 +877,6 @@ _public_ PAM_EXTERN int pam_sm_close_session(
r = pam_get_user(handle, &username, NULL);
if (r != PAM_SUCCESS)
return pam_syslog_pam_error(handle, LOG_ERR, r, "Failed to get user name: @PAMERR@");
-
if (isempty(username))
return pam_syslog_pam_error(handle, LOG_ERR, PAM_SERVICE_ERR, "User name not set.");
@@ -949,7 +946,7 @@ _public_ PAM_EXTERN int pam_sm_acct_mgmt(
if (r != PAM_SUCCESS)
return r;
- r = acquire_user_record(handle, NULL, debug, &ur, NULL);
+ r = acquire_user_record(handle, /* username= */ NULL, debug, &ur, /* bus_data= */ NULL);
if (r != PAM_SUCCESS)
return r;
@@ -1057,7 +1054,7 @@ _public_ PAM_EXTERN int pam_sm_chauthtok(
pam_debug_syslog(handle, debug, "pam-systemd-homed account management");
- r = acquire_user_record(handle, NULL, debug, &ur, NULL);
+ r = acquire_user_record(handle, /* username= */ NULL, debug, &ur, /* bus_data= */ NULL);
if (r != PAM_SUCCESS)
return r;
diff --git a/src/login/pam_systemd.c b/src/login/pam_systemd.c
index b3c84a835e..cc51daebc1 100644
--- a/src/login/pam_systemd.c
+++ b/src/login/pam_systemd.c
@@ -183,7 +183,6 @@ static int acquire_user_record(
r = pam_get_user(handle, &username, NULL);
if (r != PAM_SUCCESS)
return pam_syslog_pam_error(handle, LOG_ERR, r, "Failed to get user name: @PAMERR@");
-
if (isempty(username))
return pam_syslog_pam_error(handle, LOG_ERR, PAM_SERVICE_ERR, "User name not valid.");
@@ -220,7 +219,7 @@ static int acquire_user_record(
_cleanup_free_ char *formatted = NULL;
/* Request the record ourselves */
- r = userdb_by_name(username, 0, &ur);
+ r = userdb_by_name(username, /* flags= */ 0, &ur);
if (r < 0) {
pam_syslog_errno(handle, LOG_ERR, r, "Failed to get user record: %m");
return PAM_USER_UNKNOWN;
@@ -1283,12 +1282,12 @@ _public_ PAM_EXTERN int pam_sm_close_session(
if (parse_argv(handle,
argc, argv,
- NULL,
- NULL,
- NULL,
+ /* class= */ NULL,
+ /* type= */ NULL,
+ /* desktop= */ NULL,
&debug,
- NULL,
- NULL) < 0)
+ /* default_capability_bounding_set= */ NULL,
+ /* default_capability_ambient_set= */ NULL) < 0)
return PAM_SESSION_ERR;
pam_debug_syslog(handle, debug, "pam-systemd shutting down");


@@ -0,0 +1,196 @@
From 941c60137700e5242834459edb4025d96bd52f3d Mon Sep 17 00:00:00 2001
From: Lennart Poettering <lennart@poettering.net>
Date: Fri, 3 Jan 2025 17:53:33 +0100
Subject: [PATCH] user-record: add helper that checks if a provided user name
matches a record
This ensures that user names can be specified either in the regular
short syntax or with a realm appended, and both are accepted (the
latter, of course, only if the record actually defines a realm).
(cherry picked from commit 8aacf0fee1a8e9503bc071d5557293b0f3af50a4)
Resolves: RHEL-109902
---
src/home/homed-varlink.c | 4 ++--
src/home/pam_systemd_home.c | 2 +-
src/login/pam_systemd.c | 2 +-
src/shared/group-record.c | 13 +++++++++++++
src/shared/group-record.h | 2 ++
src/shared/user-record.c | 13 +++++++++++++
src/shared/user-record.h | 2 ++
src/shared/userdb-dropin.c | 5 +++--
src/userdb/userwork.c | 4 ++--
9 files changed, 39 insertions(+), 8 deletions(-)
diff --git a/src/home/homed-varlink.c b/src/home/homed-varlink.c
index f6dd27594f..cfd46ea51a 100644
--- a/src/home/homed-varlink.c
+++ b/src/home/homed-varlink.c
@@ -62,7 +62,7 @@ static bool home_user_match_lookup_parameters(LookupParameters *p, Home *h) {
assert(p);
assert(h);
- if (p->user_name && !streq(p->user_name, h->user_name))
+ if (p->user_name && !user_record_matches_user_name(h->record, p->user_name))
return false;
if (uid_is_valid(p->uid) && h->uid != p->uid)
@@ -175,7 +175,7 @@ static bool home_group_match_lookup_parameters(LookupParameters *p, Home *h) {
assert(p);
assert(h);
- if (p->group_name && !streq(h->user_name, p->group_name))
+ if (p->group_name && !user_record_matches_user_name(h->record, p->group_name))
return false;
if (gid_is_valid(p->gid) && h->uid != (uid_t) p->gid)
diff --git a/src/home/pam_systemd_home.c b/src/home/pam_systemd_home.c
index 0d28e99ba2..fb61105295 100644
--- a/src/home/pam_systemd_home.c
+++ b/src/home/pam_systemd_home.c
@@ -220,7 +220,7 @@ static int acquire_user_record(
return pam_syslog_errno(handle, LOG_ERR, r, "Failed to load user record: %m");
/* Safety check if cached record actually matches what we are looking for */
- if (!streq_ptr(username, ur->user_name))
+ if (!user_record_matches_user_name(ur, username))
return pam_syslog_pam_error(handle, LOG_ERR, PAM_SERVICE_ERR,
"Acquired user record does not match user name.");
diff --git a/src/login/pam_systemd.c b/src/login/pam_systemd.c
index cc51daebc1..008bcb28a0 100644
--- a/src/login/pam_systemd.c
+++ b/src/login/pam_systemd.c
@@ -212,7 +212,7 @@ static int acquire_user_record(
return pam_syslog_errno(handle, LOG_ERR, r, "Failed to load user record: %m");
/* Safety check if cached record actually matches what we are looking for */
- if (!streq_ptr(username, ur->user_name))
+ if (!user_record_matches_user_name(ur, username))
return pam_syslog_pam_error(handle, LOG_ERR, PAM_SERVICE_ERR,
"Acquired user record does not match user name.");
} else {
diff --git a/src/shared/group-record.c b/src/shared/group-record.c
index 7b401bf064..e4a4eca99c 100644
--- a/src/shared/group-record.c
+++ b/src/shared/group-record.c
@@ -327,6 +327,19 @@ int group_record_clone(GroupRecord *h, UserRecordLoadFlags flags, GroupRecord **
return 0;
}
+bool group_record_matches_group_name(const GroupRecord *g, const char *group_name) {
+ assert(g);
+ assert(group_name);
+
+ if (streq_ptr(g->group_name, group_name))
+ return true;
+
+ if (streq_ptr(g->group_name_and_realm_auto, group_name))
+ return true;
+
+ return false;
+}
+
int group_record_match(GroupRecord *h, const UserDBMatch *match) {
assert(h);
assert(match);
diff --git a/src/shared/group-record.h b/src/shared/group-record.h
index a2cef81c8a..5705fe2511 100644
--- a/src/shared/group-record.h
+++ b/src/shared/group-record.h
@@ -47,3 +47,5 @@ int group_record_match(GroupRecord *h, const UserDBMatch *match);
const char* group_record_group_name_and_realm(GroupRecord *h);
UserDisposition group_record_disposition(GroupRecord *h);
+
+bool group_record_matches_group_name(const GroupRecord *g, const char *groupname);
diff --git a/src/shared/user-record.c b/src/shared/user-record.c
index 4557718023..16631ba0ac 100644
--- a/src/shared/user-record.c
+++ b/src/shared/user-record.c
@@ -2621,6 +2621,19 @@ int user_record_is_nobody(const UserRecord *u) {
return u->uid == UID_NOBODY || STRPTR_IN_SET(u->user_name, NOBODY_USER_NAME, "nobody");
}
+bool user_record_matches_user_name(const UserRecord *u, const char *user_name) {
+ assert(u);
+ assert(user_name);
+
+ if (streq_ptr(u->user_name, user_name))
+ return true;
+
+ if (streq_ptr(u->user_name_and_realm_auto, user_name))
+ return true;
+
+ return false;
+}
+
int suitable_blob_filename(const char *name) {
/* Enforces filename requirements as described in docs/USER_RECORD_BULK_DIRS.md */
return filename_is_valid(name) &&
diff --git a/src/shared/user-record.h b/src/shared/user-record.h
index b539b3f55e..b762867c08 100644
--- a/src/shared/user-record.h
+++ b/src/shared/user-record.h
@@ -489,6 +489,8 @@ typedef struct UserDBMatch {
bool user_name_fuzzy_match(const char *names[], size_t n_names, char **matches);
int user_record_match(UserRecord *u, const UserDBMatch *match);
+bool user_record_matches_user_name(const UserRecord *u, const char *username);
+
const char* user_storage_to_string(UserStorage t) _const_;
UserStorage user_storage_from_string(const char *s) _pure_;
diff --git a/src/shared/userdb-dropin.c b/src/shared/userdb-dropin.c
index 9f027d7783..81fd5f3ebc 100644
--- a/src/shared/userdb-dropin.c
+++ b/src/shared/userdb-dropin.c
@@ -4,6 +4,7 @@
#include "fd-util.h"
#include "fileio.h"
#include "format-util.h"
+#include "group-record.h"
#include "path-util.h"
#include "stdio-util.h"
#include "user-util.h"
@@ -87,7 +88,7 @@ static int load_user(
if (r < 0)
return r;
- if (name && !streq_ptr(name, u->user_name))
+ if (name && !user_record_matches_user_name(u, name))
return -EINVAL;
if (uid_is_valid(uid) && uid != u->uid)
@@ -231,7 +232,7 @@ static int load_group(
if (r < 0)
return r;
- if (name && !streq_ptr(name, g->group_name))
+ if (name && !group_record_matches_group_name(g, name))
return -EINVAL;
if (gid_is_valid(gid) && gid != g->gid)
diff --git a/src/userdb/userwork.c b/src/userdb/userwork.c
index 1e36face40..dce60e2ebd 100644
--- a/src/userdb/userwork.c
+++ b/src/userdb/userwork.c
@@ -215,7 +215,7 @@ static int vl_method_get_user_record(sd_varlink *link, sd_json_variant *paramete
}
if ((uid_is_valid(p.uid) && hr->uid != p.uid) ||
- (p.user_name && !streq(hr->user_name, p.user_name)))
+ (p.user_name && !user_record_matches_user_name(hr, p.user_name)))
return sd_varlink_error(link, "io.systemd.UserDatabase.ConflictingRecordFound", NULL);
r = build_user_json(link, hr, &v);
@@ -345,7 +345,7 @@ static int vl_method_get_group_record(sd_varlink *link, sd_json_variant *paramet
}
if ((uid_is_valid(p.gid) && g->gid != p.gid) ||
- (p.group_name && !streq(g->group_name, p.group_name)))
+ (p.group_name && !group_record_matches_group_name(g, p.group_name)))
return sd_varlink_error(link, "io.systemd.UserDatabase.ConflictingRecordFound", NULL);
r = build_group_json(link, g, &v);


@@ -0,0 +1,110 @@
From 33c12c3976fa14bf25ca62d8aea3bbc0a11f356f Mon Sep 17 00:00:00 2001
From: Lennart Poettering <lennart@poettering.net>
Date: Thu, 16 Jan 2025 14:15:52 +0100
Subject: [PATCH] user-record: add support for alias user names to user record
(cherry picked from commit e2e1f38f5a9d442d0a027986024f4ea75ce97d2f)
Resolves: RHEL-109902
---
docs/USER_RECORD.md | 8 ++++++++
src/shared/user-record-show.c | 7 +++++++
src/shared/user-record.c | 14 +++++++++++++-
src/shared/user-record.h | 1 +
4 files changed, 29 insertions(+), 1 deletion(-)
diff --git a/docs/USER_RECORD.md b/docs/USER_RECORD.md
index 911fceb03f..5babc70f65 100644
--- a/docs/USER_RECORD.md
+++ b/docs/USER_RECORD.md
@@ -226,6 +226,14 @@ This field is optional, when unset the user should not be considered part of any
A user record with a realm set is never compatible (for the purpose of updates,
see above) with a user record without one set, even if the `userName` field matches.
+`aliases` → An array of strings, each being a valid UNIX user name. If
+specified, a list of additional UNIX user names this record shall be known
+under. These are *alias* names only, the name in `userName` is always the
+primary name. Typically, a user record that carries this field shall be
+retrievable and resolvable under every name listed here, pretty much everywhere
+the primary user name is. If logging in is attempted via an alias name it
+should be normalized to the primary name.
+
`blobDirectory` → The absolute path to a world-readable copy of the user's blob
directory. See [Blob Directories](/USER_RECORD_BLOB_DIRS) for more details.
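As a hedged sketch of what a record using the new `aliases` field introduced above could look like, expressed as a userdb drop-in: the user name, UID/GID and the symlink used to make the alias discoverable by file name are illustrative assumptions, not part of this patch.

```
# Minimal JSON user record carrying an alias, placed in the userdb drop-in
# directory, plus a symlink so a lookup by the alias finds the same file.
mkdir -p /etc/userdb
cat > /etc/userdb/grace.user <<'EOF'
{
        "userName" : "grace",
        "aliases" : [ "gracehopper" ],
        "uid" : 4711,
        "gid" : 4711
}
EOF
ln -s grace.user /etc/userdb/gracehopper.user

userdbctl user grace         # primary name
userdbctl user gracehopper   # alias; resolves to the same record
```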
diff --git a/src/shared/user-record-show.c b/src/shared/user-record-show.c
index 4d8ffe1c35..f47da4b4c7 100644
--- a/src/shared/user-record-show.c
+++ b/src/shared/user-record-show.c
@@ -65,6 +65,13 @@ void user_record_show(UserRecord *hr, bool show_full_group_info) {
printf(" User name: %s\n",
user_record_user_name_and_realm(hr));
+ if (!strv_isempty(hr->aliases)) {
+ STRV_FOREACH(i, hr->aliases)
+ printf(i == hr->aliases ?
+ " Alias: %s" : ", %s", *i);
+ putchar('\n');
+ }
+
if (hr->state) {
const char *color;
diff --git a/src/shared/user-record.c b/src/shared/user-record.c
index 16631ba0ac..8617c70aef 100644
--- a/src/shared/user-record.c
+++ b/src/shared/user-record.c
@@ -139,6 +139,7 @@ static UserRecord* user_record_free(UserRecord *h) {
free(h->user_name);
free(h->realm);
free(h->user_name_and_realm_auto);
+ strv_free(h->aliases);
free(h->real_name);
free(h->email_address);
erase_and_free(h->password_hint);
@@ -1537,6 +1538,7 @@ int user_record_load(UserRecord *h, sd_json_variant *v, UserRecordLoadFlags load
static const sd_json_dispatch_field user_dispatch_table[] = {
{ "userName", SD_JSON_VARIANT_STRING, json_dispatch_user_group_name, offsetof(UserRecord, user_name), SD_JSON_RELAX },
+ { "aliases", SD_JSON_VARIANT_ARRAY, json_dispatch_user_group_list, offsetof(UserRecord, aliases), SD_JSON_RELAX },
{ "realm", SD_JSON_VARIANT_STRING, json_dispatch_realm, offsetof(UserRecord, realm), 0 },
{ "blobDirectory", SD_JSON_VARIANT_STRING, json_dispatch_path, offsetof(UserRecord, blob_directory), SD_JSON_STRICT },
{ "blobManifest", SD_JSON_VARIANT_OBJECT, dispatch_blob_manifest, offsetof(UserRecord, blob_manifest), 0 },
@@ -2631,6 +2633,15 @@ bool user_record_matches_user_name(const UserRecord *u, const char *user_name) {
if (streq_ptr(u->user_name_and_realm_auto, user_name))
return true;
+ if (strv_contains(u->aliases, user_name))
+ return true;
+
+ const char *realm = strrchr(user_name, '@');
+ if (realm && streq_ptr(realm+1, u->realm))
+ STRV_FOREACH(a, u->aliases)
+ if (startswith(user_name, *a) == realm)
+ return true;
+
return false;
}
@@ -2700,7 +2711,8 @@ int user_record_match(UserRecord *u, const UserDBMatch *match) {
u->cifs_user_name,
};
- if (!user_name_fuzzy_match(names, ELEMENTSOF(names), match->fuzzy_names))
+ if (!user_name_fuzzy_match(names, ELEMENTSOF(names), match->fuzzy_names) &&
+ !user_name_fuzzy_match((const char**) u->aliases, strv_length(u->aliases), match->fuzzy_names))
return false;
}
diff --git a/src/shared/user-record.h b/src/shared/user-record.h
index b762867c08..f8c7454f21 100644
--- a/src/shared/user-record.h
+++ b/src/shared/user-record.h
@@ -238,6 +238,7 @@ typedef struct UserRecord {
char *user_name;
char *realm;
char *user_name_and_realm_auto; /* the user_name field concatenated with '@' and the realm, if the latter is defined */
+ char **aliases;
char *real_name;
char *email_address;
char *password_hint;


@@ -0,0 +1,25 @@
From bc858640e9cba5b7b1a657ad6ab7469b82dfb556 Mon Sep 17 00:00:00 2001
From: Lennart Poettering <lennart@poettering.net>
Date: Thu, 16 Jan 2025 14:14:08 +0100
Subject: [PATCH] pam_systemd_home: use right field name in error message
(cherry picked from commit 1fb53bb561db11e72bb695d578f8e94042565822)
Resolves: RHEL-109902
---
src/home/pam_systemd_home.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/home/pam_systemd_home.c b/src/home/pam_systemd_home.c
index fb61105295..794438129e 100644
--- a/src/home/pam_systemd_home.c
+++ b/src/home/pam_systemd_home.c
@@ -202,7 +202,7 @@ static int acquire_user_record(
r = pam_set_data(handle, generic_field, json_copy, pam_cleanup_free);
if (r != PAM_SUCCESS)
return pam_syslog_pam_error(handle, LOG_ERR, r,
- "Failed to set PAM user record data '%s': @PAMERR@", homed_field);
+ "Failed to set PAM user record data '%s': @PAMERR@", generic_field);
TAKE_PTR(json_copy);
}


@@ -0,0 +1,181 @@
From 43398a911607928300c76312215eb03447925220 Mon Sep 17 00:00:00 2001
From: Lennart Poettering <lennart@poettering.net>
Date: Thu, 16 Jan 2025 14:15:03 +0100
Subject: [PATCH] pam_systemd_home: support login with alias names + user names
with realms
This in particular makes sure that we normalize the user name and update
it in the PAM session, once we acquire it. This means that if you have a
user with name "a" and alias "b", and the user logs in as "b" they end
up properly with "a" as user name set, as intended by the PAM gods.
Moreover, if you have a user "c" in a ralm "d", they may log in by
specifying "c" or "c@d", with equivalent results.
(cherry picked from commit a642f9d2d37945917bf4200a1095a43e6e7b6ea7)
Resolves: RHEL-109902
---
src/home/pam_systemd_home.c | 70 +++++++++++++++++++++++--------------
src/login/pam_systemd.c | 8 ++---
2 files changed, 48 insertions(+), 30 deletions(-)
diff --git a/src/home/pam_systemd_home.c b/src/home/pam_systemd_home.c
index 794438129e..95f719d912 100644
--- a/src/home/pam_systemd_home.c
+++ b/src/home/pam_systemd_home.c
@@ -102,11 +102,6 @@ static int acquire_user_record(
UserRecord **ret_record,
PamBusData **bus_data) {
- _cleanup_(sd_bus_message_unrefp) sd_bus_message *reply = NULL;
- _cleanup_(sd_json_variant_unrefp) sd_json_variant *v = NULL;
- _cleanup_(user_record_unrefp) UserRecord *ur = NULL;
- _cleanup_free_ char *homed_field = NULL;
- const char *json = NULL;
int r;
assert(handle);
@@ -124,13 +119,19 @@ static int acquire_user_record(
if (STR_IN_SET(username, "root", NOBODY_USER_NAME) || !valid_user_group_name(username, 0))
return PAM_USER_UNKNOWN;
+ _cleanup_(sd_bus_message_unrefp) sd_bus_message *reply = NULL;
+ _cleanup_(sd_json_variant_unrefp) sd_json_variant *v = NULL;
+ _cleanup_(user_record_unrefp) UserRecord *ur = NULL;
+ const char *json = NULL;
+ bool fresh_data;
+
/* We cache the user record in the PAM context. We use a field name that includes the username, since
* clients might change the user name associated with a PAM context underneath us. Notably, 'sudo'
* creates a single PAM context and first authenticates it with the user set to the originating user,
* then updates the user for the destination user and issues the session stack with the same PAM
* context. We thus must be prepared that the user record changes between calls and we keep any
* caching separate. */
- homed_field = strjoin("systemd-home-user-record-", username);
+ _cleanup_free_ char *homed_field = strjoin("systemd-home-user-record-", username);
if (!homed_field)
return pam_log_oom(handle);
@@ -143,9 +144,10 @@ static int acquire_user_record(
* negative cache indicator) */
if (json == POINTER_MAX)
return PAM_USER_UNKNOWN;
+
+ fresh_data = false;
} else {
_cleanup_(sd_bus_error_free) sd_bus_error error = SD_BUS_ERROR_NULL;
- _cleanup_free_ char *generic_field = NULL, *json_copy = NULL;
_cleanup_(sd_bus_unrefp) sd_bus *bus = NULL;
r = pam_acquire_bus_connection(handle, "pam-systemd-home", debug, &bus, bus_data);
@@ -177,9 +179,42 @@ static int acquire_user_record(
if (r < 0)
return pam_bus_log_parse_error(handle, r);
+ fresh_data = true;
+ }
+
+ r = sd_json_parse(json, /* flags= */ 0, &v, NULL, NULL);
+ if (r < 0)
+ return pam_syslog_errno(handle, LOG_ERR, r, "Failed to parse JSON user record: %m");
+
+ ur = user_record_new();
+ if (!ur)
+ return pam_log_oom(handle);
+
+ r = user_record_load(ur, v, USER_RECORD_LOAD_REFUSE_SECRET|USER_RECORD_PERMISSIVE);
+ if (r < 0)
+ return pam_syslog_errno(handle, LOG_ERR, r, "Failed to load user record: %m");
+
+ /* Safety check if cached record actually matches what we are looking for */
+ if (!user_record_matches_user_name(ur, username))
+ return pam_syslog_pam_error(handle, LOG_ERR, PAM_SERVICE_ERR,
+ "Acquired user record does not match user name.");
+
+ /* Update the 'username' pointer to point to our own record now. The pam_set_item() call below is
+ * going to invalidate the old version after all */
+ username = ur->user_name;
+
+ /* We passed all checks. Let's now make sure the rest of the PAM stack continues with the primary,
+ * normalized name of the user record (i.e. not an alias or so). */
+ r = pam_set_item(handle, PAM_USER, ur->user_name);
+ if (r != PAM_SUCCESS)
+ return pam_syslog_pam_error(handle, LOG_ERR, r,
+ "Failed to update username PAM item to '%s': @PAMERR@", ur->user_name);
+
+ /* Everything seems to be good, let's cache this data now */
+ if (fresh_data) {
/* First copy: for the homed-specific data field, i.e. where we know the user record is from
* homed */
- json_copy = strdup(json);
+ _cleanup_free_ char *json_copy = strdup(json);
if (!json_copy)
return pam_log_oom(handle);
@@ -195,7 +230,7 @@ static int acquire_user_record(
if (!json_copy)
return pam_log_oom(handle);
- generic_field = strjoin("systemd-user-record-", username);
+ _cleanup_free_ char *generic_field = strjoin("systemd-user-record-", username);
if (!generic_field)
return pam_log_oom(handle);
@@ -207,23 +242,6 @@ static int acquire_user_record(
TAKE_PTR(json_copy);
}
- r = sd_json_parse(json, SD_JSON_PARSE_SENSITIVE, &v, NULL, NULL);
- if (r < 0)
- return pam_syslog_errno(handle, LOG_ERR, r, "Failed to parse JSON user record: %m");
-
- ur = user_record_new();
- if (!ur)
- return pam_log_oom(handle);
-
- r = user_record_load(ur, v, USER_RECORD_LOAD_REFUSE_SECRET|USER_RECORD_PERMISSIVE);
- if (r < 0)
- return pam_syslog_errno(handle, LOG_ERR, r, "Failed to load user record: %m");
-
- /* Safety check if cached record actually matches what we are looking for */
- if (!user_record_matches_user_name(ur, username))
- return pam_syslog_pam_error(handle, LOG_ERR, PAM_SERVICE_ERR,
- "Acquired user record does not match user name.");
-
if (ret_record)
*ret_record = TAKE_PTR(ur);
diff --git a/src/login/pam_systemd.c b/src/login/pam_systemd.c
index 008bcb28a0..ab50137e4e 100644
--- a/src/login/pam_systemd.c
+++ b/src/login/pam_systemd.c
@@ -173,13 +173,11 @@ static int acquire_user_record(
pam_handle_t *handle,
UserRecord **ret_record) {
- _cleanup_(user_record_unrefp) UserRecord *ur = NULL;
- const char *username = NULL, *json = NULL;
- _cleanup_free_ char *field = NULL;
int r;
assert(handle);
+ const char *username = NULL;
r = pam_get_user(handle, &username, NULL);
if (r != PAM_SUCCESS)
return pam_syslog_pam_error(handle, LOG_ERR, r, "Failed to get user name: @PAMERR@");
@@ -188,10 +186,12 @@ static int acquire_user_record(
/* If pam_systemd_homed (or some other module) already acquired the user record we can reuse it
* here. */
- field = strjoin("systemd-user-record-", username);
+ _cleanup_free_ char *field = strjoin("systemd-user-record-", username);
if (!field)
return pam_log_oom(handle);
+ _cleanup_(user_record_unrefp) UserRecord *ur = NULL;
+ const char *json = NULL;
r = pam_get_data(handle, field, (const void**) &json);
if (!IN_SET(r, PAM_SUCCESS, PAM_NO_MODULE_DATA))
return pam_syslog_pam_error(handle, LOG_ERR, r, "Failed to get PAM user record data: @PAMERR@");

View File

@@ -0,0 +1,503 @@
From d2d21aa57f8c266002c96ea41e811db28b40eeca Mon Sep 17 00:00:00 2001
From: Lennart Poettering <lennart@poettering.net>
Date: Thu, 16 Jan 2025 14:08:51 +0100
Subject: [PATCH] homed: support user record aliases
(cherry picked from commit 40fd0e0423ed9e4ae47d7f1adba83c0487d7decd)
Resolves: RHEL-109902
---
src/home/homectl.c | 12 +-----
src/home/homed-home-bus.c | 7 +++-
src/home/homed-home.c | 28 ++++++++++++-
src/home/homed-home.h | 5 +++
src/home/homed-manager-bus.c | 76 ++++++++++++++++++++++++++----------
src/home/homed-manager.c | 50 +++++++++++++++++++-----
src/home/homed-manager.h | 2 +
src/home/homed-varlink.c | 28 +++++++------
8 files changed, 154 insertions(+), 54 deletions(-)
diff --git a/src/home/homectl.c b/src/home/homectl.c
index 47ca015813..f48b8dc833 100644
--- a/src/home/homectl.c
+++ b/src/home/homectl.c
@@ -741,17 +741,9 @@ static int inspect_home(int argc, char *argv[], void *userdata) {
uid_t uid;
r = parse_uid(*i, &uid);
- if (r < 0) {
- if (!valid_user_group_name(*i, 0)) {
- log_error("Invalid user name '%s'.", *i);
- if (ret == 0)
- ret = -EINVAL;
-
- continue;
- }
-
+ if (r < 0)
r = bus_call_method(bus, bus_mgr, "GetUserRecordByName", &error, &reply, "s", *i);
- } else
+ else
r = bus_call_method(bus, bus_mgr, "GetUserRecordByUID", &error, &reply, "u", (uint32_t) uid);
if (r < 0) {
log_error_errno(r, "Failed to inspect home: %s", bus_error_message(&error, r));
diff --git a/src/home/homed-home-bus.c b/src/home/homed-home-bus.c
index 80e2773447..a3e6a32162 100644
--- a/src/home/homed-home-bus.c
+++ b/src/home/homed-home-bus.c
@@ -799,8 +799,11 @@ static int bus_home_object_find(
if (parse_uid(e, &uid) >= 0)
h = hashmap_get(m->homes_by_uid, UID_TO_PTR(uid));
- else
- h = hashmap_get(m->homes_by_name, e);
+ else {
+ r = manager_get_home_by_name(m, e, &h);
+ if (r < 0)
+ return r;
+ }
if (!h)
return 0;
diff --git a/src/home/homed-home.c b/src/home/homed-home.c
index 32691e4f81..44e3274c42 100644
--- a/src/home/homed-home.c
+++ b/src/home/homed-home.c
@@ -106,6 +106,7 @@ static int suitable_home_record(UserRecord *hr) {
int home_new(Manager *m, UserRecord *hr, const char *sysfs, Home **ret) {
_cleanup_(home_freep) Home *home = NULL;
_cleanup_free_ char *nm = NULL, *ns = NULL, *blob = NULL;
+ _cleanup_strv_free_ char **aliases = NULL;
int r;
assert(m);
@@ -118,19 +119,29 @@ int home_new(Manager *m, UserRecord *hr, const char *sysfs, Home **ret) {
if (hashmap_contains(m->homes_by_name, hr->user_name))
return -EBUSY;
+ STRV_FOREACH(a, hr->aliases)
+ if (hashmap_contains(m->homes_by_name, *a))
+ return -EBUSY;
+
if (hashmap_contains(m->homes_by_uid, UID_TO_PTR(hr->uid)))
return -EBUSY;
if (sysfs && hashmap_contains(m->homes_by_sysfs, sysfs))
return -EBUSY;
- if (hashmap_size(m->homes_by_name) >= HOME_USERS_MAX)
+ if (hashmap_size(m->homes_by_uid) >= HOME_USERS_MAX)
return -EUSERS;
nm = strdup(hr->user_name);
if (!nm)
return -ENOMEM;
+ if (!strv_isempty(hr->aliases)) {
+ aliases = strv_copy(hr->aliases);
+ if (!aliases)
+ return -ENOMEM;
+ }
+
if (sysfs) {
ns = strdup(sysfs);
if (!ns)
@@ -144,6 +155,7 @@ int home_new(Manager *m, UserRecord *hr, const char *sysfs, Home **ret) {
*home = (Home) {
.manager = m,
.user_name = TAKE_PTR(nm),
+ .aliases = TAKE_PTR(aliases),
.uid = hr->uid,
.state = _HOME_STATE_INVALID,
.worker_stdout_fd = -EBADF,
@@ -157,6 +169,12 @@ int home_new(Manager *m, UserRecord *hr, const char *sysfs, Home **ret) {
if (r < 0)
return r;
+ STRV_FOREACH(a, home->aliases) {
+ r = hashmap_put(m->homes_by_name, *a, home);
+ if (r < 0)
+ return r;
+ }
+
r = hashmap_put(m->homes_by_uid, UID_TO_PTR(home->uid), home);
if (r < 0)
return r;
@@ -202,6 +220,9 @@ Home *home_free(Home *h) {
if (h->user_name)
(void) hashmap_remove_value(h->manager->homes_by_name, h->user_name, h);
+ STRV_FOREACH(a, h->aliases)
+ (void) hashmap_remove_value(h->manager->homes_by_name, *a, h);
+
if (uid_is_valid(h->uid))
(void) hashmap_remove_value(h->manager->homes_by_uid, UID_TO_PTR(h->uid), h);
@@ -223,6 +244,7 @@ Home *home_free(Home *h) {
h->worker_event_source = sd_event_source_disable_unref(h->worker_event_source);
safe_close(h->worker_stdout_fd);
free(h->user_name);
+ strv_free(h->aliases);
free(h->sysfs);
h->ref_event_source_please_suspend = sd_event_source_disable_unref(h->ref_event_source_please_suspend);
@@ -262,6 +284,10 @@ int home_set_record(Home *h, UserRecord *hr) {
if (!user_record_compatible(h->record, hr))
return -EREMCHG;
+ /* For now do not allow changing list of aliases */
+ if (!strv_equal_ignore_order(h->aliases, hr->aliases))
+ return -EREMCHG;
+
if (!FLAGS_SET(hr->mask, USER_RECORD_REGULAR) ||
FLAGS_SET(hr->mask, USER_RECORD_SECRET))
return -EINVAL;
diff --git a/src/home/homed-home.h b/src/home/homed-home.h
index 8c92e39fe5..93689563d3 100644
--- a/src/home/homed-home.h
+++ b/src/home/homed-home.h
@@ -109,7 +109,12 @@ static inline bool HOME_STATE_MAY_RETRY_DEACTIVATE(HomeState state) {
struct Home {
Manager *manager;
+
+ /* The fields this record can be looked up by. This is kinda redundant, as the same information is
+ * available in the .record field, but we keep separate copies of these keys to make memory
+ * management for the hashmaps easier. */
char *user_name;
+ char **aliases;
uid_t uid;
char *sysfs; /* When found via plugged in device, the sysfs path to it */
diff --git a/src/home/homed-manager-bus.c b/src/home/homed-manager-bus.c
index 08c917aee2..a08cc3803c 100644
--- a/src/home/homed-manager-bus.c
+++ b/src/home/homed-manager-bus.c
@@ -37,7 +37,7 @@ static int property_get_auto_login(
if (r < 0)
return r;
- HASHMAP_FOREACH(h, m->homes_by_name) {
+ HASHMAP_FOREACH(h, m->homes_by_uid) {
_cleanup_strv_free_ char **seats = NULL;
_cleanup_free_ char *home_path = NULL;
@@ -97,11 +97,9 @@ static int lookup_user_name(
return sd_bus_error_setf(error, BUS_ERROR_NO_SUCH_HOME, "Client's UID " UID_FMT " not managed.", uid);
} else {
-
- if (!valid_user_group_name(user_name, 0))
- return sd_bus_error_setf(error, SD_BUS_ERROR_INVALID_ARGS, "User name %s is not valid", user_name);
-
- h = hashmap_get(m->homes_by_name, user_name);
+ r = manager_get_home_by_name(m, user_name, &h);
+ if (r < 0)
+ return r;
if (!h)
return sd_bus_error_setf(error, BUS_ERROR_NO_SUCH_HOME, "No home for user %s known", user_name);
}
@@ -342,6 +340,31 @@ static int method_deactivate_home(sd_bus_message *message, void *userdata, sd_bu
return generic_home_method(userdata, message, bus_home_method_deactivate, error);
}
+static int check_for_conflicts(Manager *m, const char *name, sd_bus_error *error) {
+ int r;
+
+ assert(m);
+ assert(name);
+
+ Home *other = hashmap_get(m->homes_by_name, name);
+ if (other)
+ return sd_bus_error_setf(error, BUS_ERROR_USER_NAME_EXISTS, "Specified user name %s exists already, refusing.", name);
+
+ r = getpwnam_malloc(name, /* ret= */ NULL);
+ if (r >= 0)
+ return sd_bus_error_setf(error, BUS_ERROR_USER_NAME_EXISTS, "Specified user name %s exists in the NSS user database, refusing.", name);
+ if (r != -ESRCH)
+ return r;
+
+ r = getgrnam_malloc(name, /* ret= */ NULL);
+ if (r >= 0)
+ return sd_bus_error_setf(error, BUS_ERROR_USER_NAME_EXISTS, "Specified user name %s conflicts with an NSS group by the same name, refusing.", name);
+ if (r != -ESRCH)
+ return r;
+
+ return 0;
+}
+
static int validate_and_allocate_home(Manager *m, UserRecord *hr, Hashmap *blobs, Home **ret, sd_bus_error *error) {
_cleanup_(user_record_unrefp) UserRecord *signed_hr = NULL;
bool signed_locally;
@@ -356,21 +379,32 @@ static int validate_and_allocate_home(Manager *m, UserRecord *hr, Hashmap *blobs
if (r < 0)
return r;
- other = hashmap_get(m->homes_by_name, hr->user_name);
- if (other)
- return sd_bus_error_setf(error, BUS_ERROR_USER_NAME_EXISTS, "Specified user name %s exists already, refusing.", hr->user_name);
-
- r = getpwnam_malloc(hr->user_name, /* ret= */ NULL);
- if (r >= 0)
- return sd_bus_error_setf(error, BUS_ERROR_USER_NAME_EXISTS, "Specified user name %s exists in the NSS user database, refusing.", hr->user_name);
- if (r != -ESRCH)
+ r = check_for_conflicts(m, hr->user_name, error);
+ if (r < 0)
return r;
- r = getgrnam_malloc(hr->user_name, /* ret= */ NULL);
- if (r >= 0)
- return sd_bus_error_setf(error, BUS_ERROR_USER_NAME_EXISTS, "Specified user name %s conflicts with an NSS group by the same name, refusing.", hr->user_name);
- if (r != -ESRCH)
- return r;
+ if (hr->realm) {
+ r = check_for_conflicts(m, user_record_user_name_and_realm(hr), error);
+ if (r < 0)
+ return r;
+ }
+
+ STRV_FOREACH(a, hr->aliases) {
+ r = check_for_conflicts(m, *a, error);
+ if (r < 0)
+ return r;
+
+ if (hr->realm) {
+ _cleanup_free_ char *alias_with_realm = NULL;
+ alias_with_realm = strjoin(*a, "@", hr->realm);
+ if (!alias_with_realm)
+ return -ENOMEM;
+
+ r = check_for_conflicts(m, alias_with_realm, error);
+ if (r < 0)
+ return r;
+ }
+ }
if (blobs) {
const char *failed = NULL;
@@ -637,7 +671,7 @@ static int method_lock_all_homes(sd_bus_message *message, void *userdata, sd_bus
* for every suitable home we have and only when all of them completed we send a reply indicating
* completion. */
- HASHMAP_FOREACH(h, m->homes_by_name) {
+ HASHMAP_FOREACH(h, m->homes_by_uid) {
if (!home_shall_suspend(h))
continue;
@@ -676,7 +710,7 @@ static int method_deactivate_all_homes(sd_bus_message *message, void *userdata,
* systemd-homed.service itself since we want to allow restarting of it without tearing down all home
* directories. */
- HASHMAP_FOREACH(h, m->homes_by_name) {
+ HASHMAP_FOREACH(h, m->homes_by_uid) {
if (!o) {
o = operation_new(OPERATION_DEACTIVATE_ALL, message);
diff --git a/src/home/homed-manager.c b/src/home/homed-manager.c
index de7c3d8dbe..6b9e4fcf11 100644
--- a/src/home/homed-manager.c
+++ b/src/home/homed-manager.c
@@ -75,7 +75,6 @@ static bool uid_is_home(uid_t uid) {
#define UID_CLAMP_INTO_HOME_RANGE(rnd) (((uid_t) (rnd) % (HOME_UID_MAX - HOME_UID_MIN + 1)) + HOME_UID_MIN)
DEFINE_PRIVATE_HASH_OPS_WITH_VALUE_DESTRUCTOR(homes_by_uid_hash_ops, void, trivial_hash_func, trivial_compare_func, Home, home_free);
-DEFINE_PRIVATE_HASH_OPS_WITH_VALUE_DESTRUCTOR(homes_by_name_hash_ops, char, string_hash_func, string_compare_func, Home, home_free);
DEFINE_PRIVATE_HASH_OPS_WITH_VALUE_DESTRUCTOR(homes_by_worker_pid_hash_ops, void, trivial_hash_func, trivial_compare_func, Home, home_free);
DEFINE_PRIVATE_HASH_OPS_WITH_VALUE_DESTRUCTOR(homes_by_sysfs_hash_ops, char, path_hash_func, path_compare, Home, home_free);
@@ -191,7 +190,7 @@ static int on_home_inotify(sd_event_source *s, const struct inotify_event *event
log_debug("%s has been moved away, revalidating.", j);
h = hashmap_get(m->homes_by_name, n);
- if (h) {
+ if (h && streq(h->user_name, n)) {
manager_revalidate_image(m, h);
(void) bus_manager_emit_auto_login_changed(m);
}
@@ -242,7 +241,7 @@ int manager_new(Manager **ret) {
if (!m->homes_by_uid)
return -ENOMEM;
- m->homes_by_name = hashmap_new(&homes_by_name_hash_ops);
+ m->homes_by_name = hashmap_new(&string_hash_ops);
if (!m->homes_by_name)
return -ENOMEM;
@@ -697,6 +696,11 @@ static int manager_add_home_by_image(
if (h) {
bool same;
+ if (!streq(h->user_name, user_name)) {
+ log_debug("Found an image for user %s which already is an alias for another user, skipping.", user_name);
+ return 0; /* Ignore images that would synthesize a user that conflicts with an alias of another user */
+ }
+
if (h->state != HOME_UNFIXATED) {
log_debug("Found an image for user %s which already has a record, skipping.", user_name);
return 0; /* ignore images that synthesize a user we already have a record for */
@@ -1714,7 +1718,7 @@ int manager_gc_images(Manager *m) {
} else {
/* Gc all */
- HASHMAP_FOREACH(h, m->homes_by_name)
+ HASHMAP_FOREACH(h, m->homes_by_uid)
manager_revalidate_image(m, h);
}
@@ -1734,12 +1738,14 @@ static int manager_gc_blob(Manager *m) {
return log_error_errno(errno, "Failed to open %s: %m", home_system_blob_dir());
}
- FOREACH_DIRENT(de, d, return log_error_errno(errno, "Failed to read system blob directory: %m"))
- if (!hashmap_contains(m->homes_by_name, de->d_name)) {
+ FOREACH_DIRENT(de, d, return log_error_errno(errno, "Failed to read system blob directory: %m")) {
+ Home *found = hashmap_get(m->homes_by_name, de->d_name);
+ if (!found || !streq(found->user_name, de->d_name)) {
r = rm_rf_at(dirfd(d), de->d_name, REMOVE_ROOT|REMOVE_PHYSICAL|REMOVE_SUBVOLUME);
if (r < 0)
log_warning_errno(r, "Failed to delete blob dir for missing user '%s', ignoring: %m", de->d_name);
}
+ }
return 0;
}
@@ -1834,7 +1840,7 @@ static bool manager_shall_rebalance(Manager *m) {
if (IN_SET(m->rebalance_state, REBALANCE_PENDING, REBALANCE_SHRINKING, REBALANCE_GROWING))
return true;
- HASHMAP_FOREACH(h, m->homes_by_name)
+ HASHMAP_FOREACH(h, m->homes_by_uid)
if (home_shall_rebalance(h))
return true;
@@ -1880,7 +1886,7 @@ static int manager_rebalance_calculate(Manager *m) {
* (home dirs get 100 by default, i.e. 5x more). This weight
* is not configurable, the per-home weights are. */
- HASHMAP_FOREACH(h, m->homes_by_name) {
+ HASHMAP_FOREACH(h, m->homes_by_uid) {
statfs_f_type_t fstype;
h->rebalance_pending = false; /* First, reset the flag, we only want it to be true for the
* homes that qualify for rebalancing */
@@ -2017,7 +2023,7 @@ static int manager_rebalance_apply(Manager *m) {
assert(m);
- HASHMAP_FOREACH(h, m->homes_by_name) {
+ HASHMAP_FOREACH(h, m->homes_by_uid) {
_cleanup_(sd_bus_error_free) sd_bus_error error = SD_BUS_ERROR_NULL;
if (!h->rebalance_pending)
@@ -2258,3 +2264,29 @@ int manager_reschedule_rebalance(Manager *m) {
return 1;
}
+
+int manager_get_home_by_name(Manager *m, const char *user_name, Home **ret) {
+ assert(m);
+ assert(user_name);
+
+ Home *h = hashmap_get(m->homes_by_name, user_name);
+ if (!h) {
+ /* Also search by username and realm. For that simply chop off realm, then look for the home, and verify it afterwards. */
+ const char *realm = strrchr(user_name, '@');
+ if (realm) {
+ _cleanup_free_ char *prefix = strndup(user_name, realm - user_name);
+ if (!prefix)
+ return -ENOMEM;
+
+ Home *j;
+ j = hashmap_get(m->homes_by_name, prefix);
+ if (j && user_record_matches_user_name(j->record, user_name))
+ h = j;
+ }
+ }
+
+ if (ret)
+ *ret = h;
+
+ return !!h;
+}
diff --git a/src/home/homed-manager.h b/src/home/homed-manager.h
index 3369284e2a..7f9a8a8199 100644
--- a/src/home/homed-manager.h
+++ b/src/home/homed-manager.h
@@ -91,3 +91,5 @@ int manager_acquire_key_pair(Manager *m);
int manager_sign_user_record(Manager *m, UserRecord *u, UserRecord **ret, sd_bus_error *error);
int bus_manager_emit_auto_login_changed(Manager *m);
+
+int manager_get_home_by_name(Manager *m, const char *user_name, Home **ret);
diff --git a/src/home/homed-varlink.c b/src/home/homed-varlink.c
index cfd46ea51a..ef30ea7eaf 100644
--- a/src/home/homed-varlink.c
+++ b/src/home/homed-varlink.c
@@ -100,15 +100,17 @@ int vl_method_get_user_record(sd_varlink *link, sd_json_variant *parameters, sd_
if (uid_is_valid(p.uid))
h = hashmap_get(m->homes_by_uid, UID_TO_PTR(p.uid));
- else if (p.user_name)
- h = hashmap_get(m->homes_by_name, p.user_name);
- else {
+ else if (p.user_name) {
+ r = manager_get_home_by_name(m, p.user_name, &h);
+ if (r < 0)
+ return r;
+ } else {
/* If neither UID nor name was specified, then dump all homes. Do so with varlink_notify()
* for all entries but the last, so that clients can stream the results, and easily process
* them piecemeal. */
- HASHMAP_FOREACH(h, m->homes_by_name) {
+ HASHMAP_FOREACH(h, m->homes_by_uid) {
if (!home_user_match_lookup_parameters(&p, h))
continue;
@@ -212,11 +214,13 @@ int vl_method_get_group_record(sd_varlink *link, sd_json_variant *parameters, sd
if (gid_is_valid(p.gid))
h = hashmap_get(m->homes_by_uid, UID_TO_PTR((uid_t) p.gid));
- else if (p.group_name)
- h = hashmap_get(m->homes_by_name, p.group_name);
- else {
+ else if (p.group_name) {
+ r = manager_get_home_by_name(m, p.group_name, &h);
+ if (r < 0)
+ return r;
+ } else {
- HASHMAP_FOREACH(h, m->homes_by_name) {
+ HASHMAP_FOREACH(h, m->homes_by_uid) {
if (!home_group_match_lookup_parameters(&p, h))
continue;
@@ -279,7 +283,9 @@ int vl_method_get_memberships(sd_varlink *link, sd_json_variant *parameters, sd_
if (p.user_name) {
const char *last = NULL;
- h = hashmap_get(m->homes_by_name, p.user_name);
+ r = manager_get_home_by_name(m, p.user_name, &h);
+ if (r < 0)
+ return r;
if (!h)
return sd_varlink_error(link, "io.systemd.UserDatabase.NoRecordFound", NULL);
@@ -315,7 +321,7 @@ int vl_method_get_memberships(sd_varlink *link, sd_json_variant *parameters, sd_
} else if (p.group_name) {
const char *last = NULL;
- HASHMAP_FOREACH(h, m->homes_by_name) {
+ HASHMAP_FOREACH(h, m->homes_by_uid) {
if (!strv_contains(h->record->member_of, p.group_name))
continue;
@@ -340,7 +346,7 @@ int vl_method_get_memberships(sd_varlink *link, sd_json_variant *parameters, sd_
} else {
const char *last_user_name = NULL, *last_group_name = NULL;
- HASHMAP_FOREACH(h, m->homes_by_name)
+ HASHMAP_FOREACH(h, m->homes_by_uid)
STRV_FOREACH(j, h->record->member_of) {
if (last_user_name) {

View File

@@ -0,0 +1,116 @@
From 5540ae46fe0f113b4145b49de7dd556d84c98dc7 Mon Sep 17 00:00:00 2001
From: Lennart Poettering <lennart@poettering.net>
Date: Thu, 16 Jan 2025 14:01:15 +0100
Subject: [PATCH] homectl: add support for creating users with alias names
(cherry picked from commit 5cd7b455e0b2ee5991ff06a885c8bc4fe78c2225)
Resolves: RHEL-109902
---
man/homectl.xml | 10 ++++++++++
src/home/homectl.c | 50 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 60 insertions(+)
diff --git a/man/homectl.xml b/man/homectl.xml
index 927fe939ee..282066c4fa 100644
--- a/man/homectl.xml
+++ b/man/homectl.xml
@@ -226,6 +226,16 @@
<xi:include href="version-info.xml" xpointer="v245"/></listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--alias=<replaceable>NAME</replaceable><optional>,<replaceable>NAME…</replaceable></optional></option></term>
+
+ <listitem><para>Additional names for the user. Takes one or more valid UNIX user names, separated by
+ commas. May be used multiple times to define multiple aliases. An alias username may be specified
+ wherever the primary user name may be specified, and resolves to the same user record.</para>
+
+ <xi:include href="version-info.xml" xpointer="v258"/></listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>--email-address=<replaceable>EMAIL</replaceable></option></term>
diff --git a/src/home/homectl.c b/src/home/homectl.c
index f48b8dc833..08136cda3f 100644
--- a/src/home/homectl.c
+++ b/src/home/homectl.c
@@ -2746,6 +2746,7 @@ static int help(int argc, char *argv[], void *userdata) {
"\n%4$sGeneral User Record Properties:%5$s\n"
" -c --real-name=REALNAME Real name for user\n"
" --realm=REALM Realm to create user in\n"
+ " --alias=ALIAS Define alias usernames for this account\n"
" --email-address=EMAIL Email address for user\n"
" --location=LOCATION Set location of user on earth\n"
" --icon-name=NAME Icon name for user\n"
@@ -2890,6 +2891,7 @@ static int parse_argv(int argc, char *argv[]) {
ARG_NO_ASK_PASSWORD,
ARG_OFFLINE,
ARG_REALM,
+ ARG_ALIAS,
ARG_EMAIL_ADDRESS,
ARG_DISK_SIZE,
ARG_ACCESS_MODE,
@@ -2981,6 +2983,7 @@ static int parse_argv(int argc, char *argv[]) {
{ "real-name", required_argument, NULL, 'c' },
{ "comment", required_argument, NULL, 'c' }, /* Compat alias to keep thing in sync with useradd(8) */
{ "realm", required_argument, NULL, ARG_REALM },
+ { "alias", required_argument, NULL, ARG_ALIAS },
{ "email-address", required_argument, NULL, ARG_EMAIL_ADDRESS },
{ "location", required_argument, NULL, ARG_LOCATION },
{ "password-hint", required_argument, NULL, ARG_PASSWORD_HINT },
@@ -3136,6 +3139,53 @@ static int parse_argv(int argc, char *argv[]) {
break;
+ case ARG_ALIAS: {
+ if (isempty(optarg)) {
+ r = drop_from_identity("aliases");
+ if (r < 0)
+ return r;
+ break;
+ }
+
+ for (const char *p = optarg;;) {
+ _cleanup_free_ char *word = NULL;
+
+ r = extract_first_word(&p, &word, ",", 0);
+ if (r < 0)
+ return log_error_errno(r, "Failed to parse alias list: %m");
+ if (r == 0)
+ break;
+
+ if (!valid_user_group_name(word, 0))
+ return log_error_errno(SYNTHETIC_ERRNO(EINVAL), "Invalid alias user name %s.", word);
+
+ _cleanup_(sd_json_variant_unrefp) sd_json_variant *av =
+ sd_json_variant_ref(sd_json_variant_by_key(arg_identity_extra, "aliases"));
+
+ _cleanup_strv_free_ char **list = NULL;
+ r = sd_json_variant_strv(av, &list);
+ if (r < 0)
+ return log_error_errno(r, "Failed to parse group list: %m");
+
+ r = strv_extend(&list, word);
+ if (r < 0)
+ return log_oom();
+
+ strv_sort_uniq(list);
+
+ av = sd_json_variant_unref(av);
+ r = sd_json_variant_new_array_strv(&av, list);
+ if (r < 0)
+ return log_error_errno(r, "Failed to create alias list JSON: %m");
+
+ r = sd_json_variant_set_field(&arg_identity_extra, "aliases", av);
+ if (r < 0)
+ return log_error_errno(r, "Failed to update alias list: %m");
+ }
+
+ break;
+ }
+
case 'd': {
_cleanup_free_ char *hd = NULL;
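
The --alias= option added above can be exercised end to end much like the TEST-46-HOMED additions in the next patch. A minimal sketch, with purely illustrative user names ("alice", "ally", "al") and the non-interactive NEWPASSWORD= form taken from that test:

# Create a home directory record with two extra names attached to it
NEWPASSWORD=hunter4711 homectl create alice --storage=directory --alias=ally,al

# Any of the names resolves to the same (normalized) user record
homectl inspect ally
userdbctl user al
getent passwd alice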

View File

@@ -0,0 +1,46 @@
From 7938e4205fbd9e35375aa4469cd133251a6dcaa5 Mon Sep 17 00:00:00 2001
From: Lennart Poettering <lennart@poettering.net>
Date: Thu, 16 Jan 2025 15:18:45 +0100
Subject: [PATCH] test: add test for homed alias and realm user resolution
(cherry picked from commit 853e9b754a16c58f9fb147376af941ec679e65a6)
Resolves: RHEL-109902
---
test/units/TEST-46-HOMED.sh | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)
diff --git a/test/units/TEST-46-HOMED.sh b/test/units/TEST-46-HOMED.sh
index e858b0c297..8de170a1c9 100755
--- a/test/units/TEST-46-HOMED.sh
+++ b/test/units/TEST-46-HOMED.sh
@@ -629,6 +629,29 @@ EOF
wait_for_state homedsshtest inactive
fi
+NEWPASSWORD=hunter4711 homectl create aliastest --storage=directory --alias=aliastest2 --alias=aliastest3 --realm=myrealm
+
+homectl inspect aliastest
+homectl inspect aliastest2
+homectl inspect aliastest3
+homectl inspect aliastest@myrealm
+homectl inspect aliastest2@myrealm
+homectl inspect aliastest3@myrealm
+
+userdbctl user aliastest
+userdbctl user aliastest2
+userdbctl user aliastest3
+userdbctl user aliastest@myrealm
+userdbctl user aliastest2@myrealm
+userdbctl user aliastest3@myrealm
+
+getent passwd aliastest
+getent passwd aliastest2
+getent passwd aliastest3
+getent passwd aliastest@myrealm
+getent passwd aliastest2@myrealm
+getent passwd aliastest3@myrealm
+
systemd-analyze log-level info
touch /testok

0468-update-TODO.patch
View File

@@ -0,0 +1,41 @@
From 536eac23913704e258ee8052cd45c0273804b8f9 Mon Sep 17 00:00:00 2001
From: Lennart Poettering <lennart@poettering.net>
Date: Thu, 16 Jan 2025 13:58:14 +0100
Subject: [PATCH] update TODO
(cherry picked from commit 3d3f27cd9a9d7ba325f0d6acc8526c86b938f09c)
Resolves: RHEL-109902
---
TODO | 11 -----------
1 file changed, 11 deletions(-)
diff --git a/TODO b/TODO
index 99e8f1c723..a20eb2c61e 100644
--- a/TODO
+++ b/TODO
@@ -227,12 +227,6 @@ Features:
suffix the escape sequence with one more decimal digit, because compilers
think you might actually specify a value outside the 8bit range with that.
-* homed: allow login via username + realm on getty/login prompt. Then rewrite
- the user name in the PAM stack
-
-* homed/userdb: add "aliases" field to user record, which can alternatively be
- used for logging in. Rewrite user name in the PAM stack once acquired.
-
* confext/sysext: instead of mounting the overlayfs directly on /etc/ + /usr/,
insert an intermediary bind mount on itself there. This has the benefit that
services where mount propagation from the root fs is off, an still have
@@ -354,11 +348,6 @@ Features:
* Clean up "reboot argument" handling, i.e. set it through some IPC service
instead of directly via /run/, so that it can be sensible set remotely.
-* userdb: add concept for user "aliases", to cover for cases where you can log
- in under the name lennart@somenetworkfsserver, and it would automatically
- generate a local user, and from the one both names can be used to allow
- logins into the same account.
-
* systemd-tpm2-support: add a some logic that detects if system is in DA
lockout mode, and queries the user for TPM recovery PIN then.

View File

@@ -0,0 +1,30 @@
From dbe632700253fb0622309a548403cbd097d5c9d5 Mon Sep 17 00:00:00 2001
From: Miroslav Lichvar <mlichvar@redhat.com>
Date: Tue, 14 Oct 2025 11:03:01 +0200
Subject: [PATCH] udev: create symlinks for s390 PTP devices
Similarly to the udev rules handling KVM and Hyper-V PTP devices, create
symlinks for the s390-specific STCKE and Physical clocks (supported
since Linux 6.13) to have some stable names that can be specified in
default configurations of PTP/NTP applications.
(cherry picked from commit 4db925d7da880001b31415354307604dcbe3a4e6)
Resolves: RHEL-120177
---
rules.d/50-udev-default.rules.in | 2 ++
1 file changed, 2 insertions(+)
diff --git a/rules.d/50-udev-default.rules.in b/rules.d/50-udev-default.rules.in
index 8fa518cd8f..08b2de7047 100644
--- a/rules.d/50-udev-default.rules.in
+++ b/rules.d/50-udev-default.rules.in
@@ -32,6 +32,8 @@ SUBSYSTEM=="net", IMPORT{builtin}="net_driver"
SUBSYSTEM=="ptp", ATTR{clock_name}=="KVM virtual PTP", SYMLINK+="ptp_kvm"
SUBSYSTEM=="ptp", ATTR{clock_name}=="hyperv", SYMLINK+="ptp_hyperv"
+SUBSYSTEM=="ptp", ATTR{clock_name}=="s390 Physical Clock", SYMLINK+="ptp_s390_physical"
+SUBSYSTEM=="ptp", ATTR{clock_name}=="s390 STCKE Clock", SYMLINK+="ptp_s390_stcke"
ACTION!="add", GOTO="default_end"
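
On an s390 machine running a 6.13+ kernel, the rule above can be spot-checked as follows. The udevadm and ls invocations are generic; the chrony refclock line is only a hypothetical sketch of how such a stable name might be referenced from an NTP/PTP application's default configuration, not part of this change:

# Re-trigger PTP devices and confirm the stable symlinks exist
udevadm trigger --subsystem-match=ptp
ls -l /dev/ptp_s390_physical /dev/ptp_s390_stcke

# Show which /dev/ptpN device a symlink currently points at
udevadm info /dev/ptp_s390_stcke

# Hypothetical chrony.conf entry using the stable name:
#   refclock PHC /dev/ptp_s390_stcke poll 0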

View File

@@ -0,0 +1,180 @@
From 46dccb96f595bcaa26db228b4ed5dc7dd553990e Mon Sep 17 00:00:00 2001
From: Frantisek Sumsal <frantisek@sumsal.cz>
Date: Thu, 9 Oct 2025 23:08:19 +0200
Subject: [PATCH] test: build the crashing test binary outside of the test
So we don't have to pull gcc and other stuff into it.
Also, make the test itself a bit more robust and debuggable.
(cherry picked from commit 937f609b41b9e27eba69c5ddbab4df2232e5a37b)
Related: RHEL-113920
---
src/test/meson.build | 11 ++++
src/test/test-coredump-stacktrace.c | 29 +++++++++
test/units/TEST-87-AUX-UTILS-VM.coredump.sh | 72 +++++++++++++++------
3 files changed, 93 insertions(+), 19 deletions(-)
create mode 100644 src/test/test-coredump-stacktrace.c
diff --git a/src/test/meson.build b/src/test/meson.build
index 9dae4996f4..d8135c226c 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -261,6 +261,17 @@ executables += [
'sources' : files('test-compress.c'),
'link_with' : [libshared],
},
+ test_template + {
+ 'sources' : files('test-coredump-stacktrace.c'),
+ 'type' : 'manual',
+ # This test intentionally crashes with SIGSEGV by dereferencing a NULL pointer
+ # to generate a coredump with a predictable stack trace. To prevent sanitizers
+ # from catching the error first let's disable them explicitly, and also always
+ # build with minimal optimizations to make the stack trace predictable no matter
+ # what we build the rest of systemd with
+ 'override_options' : ['b_sanitize=none', 'strip=false', 'debug=true'],
+ 'c_args' : ['-fno-sanitize=all', '-fno-optimize-sibling-calls', '-O1'],
+ },
test_template + {
'sources' : files('test-cryptolib.c'),
'dependencies' : lib_openssl_or_gcrypt,
diff --git a/src/test/test-coredump-stacktrace.c b/src/test/test-coredump-stacktrace.c
new file mode 100644
index 0000000000..334a155a9c
--- /dev/null
+++ b/src/test/test-coredump-stacktrace.c
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: LGPL-2.1-or-later */
+
+/* This is a test program that intentionally segfaults so we can generate a
+ * predictable-ish stack trace in tests. */
+
+#include <stdlib.h>
+
+__attribute__((noinline))
+static void baz(int *x) {
+ *x = rand();
+}
+
+__attribute__((noinline))
+static void bar(void) {
+ int * volatile x = NULL;
+
+ baz(x);
+}
+
+__attribute__((noinline))
+static void foo(void) {
+ bar();
+}
+
+int main(void) {
+ foo();
+
+ return 0;
+}
diff --git a/test/units/TEST-87-AUX-UTILS-VM.coredump.sh b/test/units/TEST-87-AUX-UTILS-VM.coredump.sh
index 7ab6f29d7d..52c9d2fb0a 100755
--- a/test/units/TEST-87-AUX-UTILS-VM.coredump.sh
+++ b/test/units/TEST-87-AUX-UTILS-VM.coredump.sh
@@ -8,15 +8,13 @@ set -o pipefail
# Make sure the binary name fits into 15 characters
CORE_TEST_BIN="/tmp/test-dump"
-CORE_STACKTRACE_TEST_BIN="/tmp/test-stacktrace-dump"
-MAKE_STACKTRACE_DUMP="/tmp/make-stacktrace-dump"
CORE_TEST_UNPRIV_BIN="/tmp/test-usr-dump"
MAKE_DUMP_SCRIPT="/tmp/make-dump"
# Unset $PAGER so we don't have to use --no-pager everywhere
export PAGER=
at_exit() {
- rm -fv -- "$CORE_TEST_BIN" "$CORE_TEST_UNPRIV_BIN" "$MAKE_DUMP_SCRIPT" "$MAKE_STACKTRACE_DUMP"
+ rm -fv -- "$CORE_TEST_BIN" "$CORE_TEST_UNPRIV_BIN" "$MAKE_DUMP_SCRIPT"
}
(! systemd-detect-virt -cq)
@@ -226,30 +224,66 @@ systemd-run -t --property CoredumpFilter=default ls /tmp
(! coredumpctl debug --debugger=/bin/true --debugger-arguments='"')
# Test for EnterNamespace= feature
-if pkgconf --atleast-version 0.192 libdw ; then
- # dwfl_set_sysroot() is supported only in libdw-0.192 or newer.
- cat >"$MAKE_STACKTRACE_DUMP" <<END
-#!/bin/bash
-mount -t tmpfs tmpfs /tmp
-gcc -xc -O0 -g -o $CORE_STACKTRACE_TEST_BIN - <<EOF
-void baz(void) { int *x = 0; *x = 42; }
-void bar(void) { baz(); }
-void foo(void) { bar(); }
-int main(void) { foo(); return 0;}
+#
+# dwfl_set_sysroot() is supported only in libdw-0.192 or newer.
+if pkgconf --atleast-version 0.192 libdw; then
+ MAKE_STACKTRACE_DUMP="/tmp/make-stacktrace-dump"
+
+ # Simple script that mounts tmpfs on /tmp/ and copies the crashing test binary there, which in
+ # combination with `unshare --mount` ensures the "outside" systemd-coredump process won't be able to
+ # access the crashed binary (and hence won't be able to symbolize its stacktrace) unless
+ # EnterNamespace=yes is used
+ cat >"$MAKE_STACKTRACE_DUMP" <<\EOF
+#!/usr/bin/bash -eux
+
+TARGET="/tmp/${1:?}"
+EC=0
+
+# "Unhide" debuginfo in the namespace (see the comment below)
+test -d /usr/lib/debug/ && umount /usr/lib/debug/
+
+mount -t tmpfs tmpfs /tmp/
+cp /usr/lib/systemd/tests/unit-tests/manual/test-coredump-stacktrace "$TARGET"
+
+$TARGET || EC=$?
+if [[ $EC -ne 139 ]]; then
+ echo >&2 "$TARGET didn't crash, this shouldn't happen"
+ exit 1
+fi
+
+exit 0
EOF
-$CORE_STACKTRACE_TEST_BIN
-END
chmod +x "$MAKE_STACKTRACE_DUMP"
+ # Since the test-coredump-stacktrace binary is built together with rest of the systemd its debug symbols
+ # might be part of debuginfo packages (if supported & built), and libdw will then use them to symbolize
+ # the stacktrace even if it doesn't have access to the original crashing binary. Let's make the test
+ # simpler and just "hide" the debuginfo data, so libdw is forced to access the target namespace to get
+ # the necessary symbols
+ test -d /usr/lib/debug/ && mount -t tmpfs tmpfs /usr/lib/debug/
+
mkdir -p /run/systemd/coredump.conf.d/
printf '[Coredump]\nEnterNamespace=no' >/run/systemd/coredump.conf.d/99-enter-namespace.conf
- unshare --pid --fork --mount-proc --mount --uts --ipc --net /bin/bash -c "$MAKE_STACKTRACE_DUMP" || :
- timeout 30 bash -c "until coredumpctl -1 info $CORE_STACKTRACE_TEST_BIN | grep -zvqE 'baz.*bar.*foo'; do sleep .2; done"
+ unshare --pid --fork --mount-proc --mount --uts --ipc --net "$MAKE_STACKTRACE_DUMP" "test-stacktrace-not-symbolized"
+ timeout 30 bash -c "until coredumpctl list -q --no-legend /tmp/test-stacktrace-not-symbolized; do sleep .2; done"
+ coredumpctl info /tmp/test-stacktrace-not-symbolized | tee /tmp/not-symbolized.log
+ (! grep -E "#[0-9]+ .* main " /tmp/not-symbolized.log)
+ (! grep -E "#[0-9]+ .* foo " /tmp/not-symbolized.log)
+ (! grep -E "#[0-9]+ .* bar " /tmp/not-symbolized.log)
+ (! grep -E "#[0-9]+ .* baz " /tmp/not-symbolized.log)
printf '[Coredump]\nEnterNamespace=yes' >/run/systemd/coredump.conf.d/99-enter-namespace.conf
- unshare --pid --fork --mount-proc --mount --uts --ipc --net /bin/bash -c "$MAKE_STACKTRACE_DUMP" || :
- timeout 30 bash -c "until coredumpctl -1 info $CORE_STACKTRACE_TEST_BIN | grep -zqE 'baz.*bar.*foo'; do sleep .2; done"
+ unshare --pid --fork --mount-proc --mount --uts --ipc --net "$MAKE_STACKTRACE_DUMP" "test-stacktrace-symbolized"
+ timeout 30 bash -c "until coredumpctl list -q --no-legend /tmp/test-stacktrace-symbolized; do sleep .2; done"
+ coredumpctl info /tmp/test-stacktrace-symbolized | tee /tmp/symbolized.log
+ grep -E "#[0-9]+ .* main " /tmp/symbolized.log
+ grep -E "#[0-9]+ .* foo " /tmp/symbolized.log
+ grep -E "#[0-9]+ .* bar " /tmp/symbolized.log
+ grep -E "#[0-9]+ .* baz " /tmp/symbolized.log
+
+ test -d /usr/lib/debug/ && umount /usr/lib/debug/
+ rm -f "$MAKE_STACKTRACE_DUMP" /run/systemd/coredump.conf.d/99-enter-namespace.conf /tmp/{not-,}symbolized.log
else
echo "libdw doesn't not support setting sysroot, skipping EnterNamespace= test"
fi

View File

@@ -0,0 +1,28 @@
From b060529f5a896df4ee443ffc881bf7f61a645f25 Mon Sep 17 00:00:00 2001
From: Frantisek Sumsal <frantisek@sumsal.cz>
Date: Thu, 9 Oct 2025 17:57:25 +0200
Subject: [PATCH] test: exclude test-stacktrace(-not)?-symbolized from the
coredump check
As they are expected coredumps from the EnterNamespace= feature test.
(cherry picked from commit cfb604f8f7c83912648d69bd3ad89c2436b4b8ef)
Related: RHEL-113920
---
test/TEST-87-AUX-UTILS-VM/meson.build | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/test/TEST-87-AUX-UTILS-VM/meson.build b/test/TEST-87-AUX-UTILS-VM/meson.build
index 8490139204..ae9e654032 100644
--- a/test/TEST-87-AUX-UTILS-VM/meson.build
+++ b/test/TEST-87-AUX-UTILS-VM/meson.build
@@ -5,7 +5,7 @@ integration_tests += [
integration_test_template + {
'name' : fs.name(meson.current_source_dir()),
'storage': 'persistent',
- 'coredump-exclude-regex' : '/(test-usr-dump|test-dump|bash)$',
+ 'coredump-exclude-regex' : '/(test-usr-dump|test-dump|test-stacktrace(-not)?-symbolized|bash)$',
'vm' : true,
},
]

View File

@@ -0,0 +1,105 @@
From bb285f766c13da97b24194391e111cb0d9701a24 Mon Sep 17 00:00:00 2001
From: Frantisek Sumsal <frantisek@sumsal.cz>
Date: Thu, 9 Oct 2025 17:54:58 +0200
Subject: [PATCH] mkosi: install test dependencies for EnterNamespace= test
The test for the EnterNamespace= feature [0] has been both broken and
disabled since the migration to the mkosi framework: the image is missing
the libdw.pc file for pkg-config, so the test is skipped completely, and
it is also missing gcc to actually build the test binary.
[0] Part of TEST-87-AUX-UTILS-VM.coredump.sh
(cherry picked from commit 4d8e8d44ab3f6f99102faf0dcb53ca4de4d517ae)
Related: RHEL-113920
---
mkosi.conf.d/10-arch/mkosi.conf | 2 ++
mkosi.conf.d/10-centos-fedora/mkosi.conf | 3 +++
mkosi.conf.d/10-debian-ubuntu/mkosi.conf | 3 +++
mkosi.conf.d/10-opensuse/mkosi.conf | 3 +++
4 files changed, 11 insertions(+)
diff --git a/mkosi.conf.d/10-arch/mkosi.conf b/mkosi.conf.d/10-arch/mkosi.conf
index 9ceb6ea6f8..7194edeeac 100644
--- a/mkosi.conf.d/10-arch/mkosi.conf
+++ b/mkosi.conf.d/10-arch/mkosi.conf
@@ -21,6 +21,7 @@ Packages=
dbus-broker
dbus-broker-units
dhcp
+ elfutils
erofs-utils
f2fs-tools
git
@@ -38,6 +39,7 @@ Packages=
openssl
pacman
perf
+ pkgconf
polkit
procps-ng
psmisc
diff --git a/mkosi.conf.d/10-centos-fedora/mkosi.conf b/mkosi.conf.d/10-centos-fedora/mkosi.conf
index 90603bba14..6cd4a056c7 100644
--- a/mkosi.conf.d/10-centos-fedora/mkosi.conf
+++ b/mkosi.conf.d/10-centos-fedora/mkosi.conf
@@ -26,6 +26,8 @@ Packages=
device-mapper-event
device-mapper-multipath
dnf
+ elfutils-devel
+ elfutils-libs
git-core
glibc-langpack-de
glibc-langpack-en
@@ -45,6 +47,7 @@ Packages=
pam
passwd
perf
+ pkgconf
policycoreutils
polkit
procps-ng
diff --git a/mkosi.conf.d/10-debian-ubuntu/mkosi.conf b/mkosi.conf.d/10-debian-ubuntu/mkosi.conf
index c898664f83..85f2af492c 100644
--- a/mkosi.conf.d/10-debian-ubuntu/mkosi.conf
+++ b/mkosi.conf.d/10-debian-ubuntu/mkosi.conf
@@ -54,6 +54,8 @@ Packages=
isc-dhcp-server
knot
libcap-ng-utils
+ libdw-dev
+ libdw1
locales
login
man-db
@@ -64,6 +66,7 @@ Packages=
openssh-server
passwd
polkitd
+ pkgconf
procps
psmisc
python3-pexpect
diff --git a/mkosi.conf.d/10-opensuse/mkosi.conf b/mkosi.conf.d/10-opensuse/mkosi.conf
index 4ee3894c00..d400a85320 100644
--- a/mkosi.conf.d/10-opensuse/mkosi.conf
+++ b/mkosi.conf.d/10-opensuse/mkosi.conf
@@ -52,6 +52,8 @@ Packages=
kernel-default
kmod
knot
+ libdw-devel
+ libdw1
multipath-tools
ncat
open-iscsi
@@ -60,6 +62,7 @@ Packages=
pam
patterns-base-minimal_base
perf
+ pkgconf
procps4
psmisc
python3-pefile

View File

@@ -0,0 +1,32 @@
From c762fb58a253f611d6a6005e0d947b90ad08880a Mon Sep 17 00:00:00 2001
From: Luca Boccassi <luca.boccassi@gmail.com>
Date: Fri, 11 Apr 2025 14:44:30 +0100
Subject: [PATCH] coredump: verify pidfd after parsing data in usermode helper
Ensure the pidfd is still valid before continuing
Follow-up for 313537da6ffdea4049873571202679734d49f0a1
(cherry picked from commit ba6c955f21ac3f46a6914c3607b910e371a25dee)
Related: RHEL-104135
---
src/coredump/coredump.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/src/coredump/coredump.c b/src/coredump/coredump.c
index d3a1f7c09d..db7f76f6c4 100644
--- a/src/coredump/coredump.c
+++ b/src/coredump/coredump.c
@@ -1458,6 +1458,11 @@ static int gather_pid_metadata_from_procfs(struct iovec_wrapper *iovw, Context *
if (get_process_environ(pid, &t) >= 0)
(void) iovw_put_string_field_free(iovw, "COREDUMP_ENVIRON=", t);
+ /* Now that we have parsed info from /proc/ ensure the pidfd is still valid before continuing */
+ r = pidref_verify(&context->pidref);
+ if (r < 0)
+ return log_error_errno(r, "PIDFD validation failed: %m");
+
/* we successfully acquired all metadata */
return context_parse_iovw(context, iovw);
}

View File

@@ -0,0 +1,122 @@
From 24cf0dbc7ced71ce472dc0a2a19ee6c2328b6f29 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Zbigniew=20J=C4=99drzejewski-Szmek?= <zbyszek@in.waw.pl>
Date: Tue, 29 Apr 2025 14:47:59 +0200
Subject: [PATCH] coredump: restore compatibility with older patterns
This was broken in f45b8015513d38ee5f7cc361db9c5b88c9aae704. Unfortunately
the review does not talk about backward compatibility at all. There are
two places where it matters:
- During upgrades, the replacement of kernel.core_pattern is asynchronous.
For example, during rpm upgrades, it would be updated by a post-transaction
file trigger. In other scenarios, the update might only happen after
reboot. We have a potentially long window where the old pattern is in
place. We need to capture coredumps during upgrades too.
- With --backtrace. The interface of --backtrace, in hindsight, is not
great. But there are users of --backtrace which were written to use
a specific set of arguments, and we can't just break compatibility.
One example is systemd-coredump-python, but there are also reports of
users using --backtrace to generate coredump logs.
Thus, we require the original set of args, and will use the additional args if
found.
A test is added to verify that --backtrace works with and without the optional
args.
(cherry picked from commit ded0aac389e647d35bce7ec4a48e718d77c0435b)
Related: RHEL-104135
---
src/coredump/coredump.c | 23 ++++++++++++++-------
test/units/TEST-87-AUX-UTILS-VM.coredump.sh | 18 +++++++++-------
2 files changed, 26 insertions(+), 15 deletions(-)
diff --git a/src/coredump/coredump.c b/src/coredump/coredump.c
index db7f76f6c4..58bcd4910f 100644
--- a/src/coredump/coredump.c
+++ b/src/coredump/coredump.c
@@ -105,8 +105,12 @@ enum {
META_ARGV_SIGNAL, /* %s: number of signal causing dump */
META_ARGV_TIMESTAMP, /* %t: time of dump, expressed as seconds since the Epoch (we expand this to μs granularity) */
META_ARGV_RLIMIT, /* %c: core file size soft resource limit */
- META_ARGV_HOSTNAME, /* %h: hostname */
+ _META_ARGV_REQUIRED,
+ /* The fields below were added to kernel/core_pattern at later points, so they might be missing. */
+ META_ARGV_HOSTNAME = _META_ARGV_REQUIRED, /* %h: hostname */
_META_ARGV_MAX,
+ /* If new fields are added, they should be added here, to maintain compatibility
+ * with callers which don't know about the new fields. */
/* The following indexes are cached for a couple of special fields we use (and
* thereby need to be retrieved quickly) for naming coredump files, and attaching
@@ -117,7 +121,7 @@ enum {
_META_MANDATORY_MAX,
/* The rest are similar to the previous ones except that we won't fail if one of
- * them is missing. */
+ * them is missing in a message sent over the socket. */
META_EXE = _META_MANDATORY_MAX,
META_UNIT,
@@ -1046,7 +1050,7 @@ static int context_parse_iovw(Context *context, struct iovec_wrapper *iovw) {
}
/* The basic fields from argv[] should always be there, refuse early if not */
- for (int i = 0; i < _META_ARGV_MAX; i++)
+ for (int i = 0; i < _META_ARGV_REQUIRED; i++)
if (!context->meta[i])
return log_error_errno(SYNTHETIC_ERRNO(EINVAL), "A required (%s) has not been sent, aborting.", meta_field_names[i]);
@@ -1314,14 +1318,17 @@ static int gather_pid_metadata_from_argv(
assert(context);
/* We gather all metadata that were passed via argv[] into an array of iovecs that
- * we'll forward to the socket unit */
+ * we'll forward to the socket unit.
+ *
+ * We require at least _META_ARGV_REQUIRED args, but will accept more.
+ * We know how to parse _META_ARGV_MAX args. The rest will be ignored. */
- if (argc < _META_ARGV_MAX)
+ if (argc < _META_ARGV_REQUIRED)
return log_error_errno(SYNTHETIC_ERRNO(EINVAL),
- "Not enough arguments passed by the kernel (%i, expected %i).",
- argc, _META_ARGV_MAX);
+ "Not enough arguments passed by the kernel (%i, expected between %i and %i).",
+ argc, _META_ARGV_REQUIRED, _META_ARGV_MAX);
- for (int i = 0; i < _META_ARGV_MAX; i++) {
+ for (int i = 0; i < MIN(argc, _META_ARGV_MAX); i++) {
_cleanup_free_ char *buf = NULL;
const char *t = argv[i];
diff --git a/test/units/TEST-87-AUX-UTILS-VM.coredump.sh b/test/units/TEST-87-AUX-UTILS-VM.coredump.sh
index 52c9d2fb0a..a170223f37 100755
--- a/test/units/TEST-87-AUX-UTILS-VM.coredump.sh
+++ b/test/units/TEST-87-AUX-UTILS-VM.coredump.sh
@@ -189,14 +189,18 @@ rm -f /tmp/core.{output,redirected}
(! "${UNPRIV_CMD[@]}" coredumpctl dump "$CORE_TEST_BIN" >/dev/null)
# --backtrace mode
-# Pass one of the existing journal coredump records to systemd-coredump and
-# use our PID as the source to make matching the coredump later easier
-# systemd-coredump args: PID UID GID SIGNUM TIMESTAMP CORE_SOFT_RLIMIT HOSTNAME
+# Pass one of the existing journal coredump records to systemd-coredump.
+# Use our PID as the source to be able to create a PIDFD and to make matching easier.
+# systemd-coredump args: PID UID GID SIGNUM TIMESTAMP CORE_SOFT_RLIMIT [HOSTNAME]
journalctl -b -n 1 --output=export --output-fields=MESSAGE,COREDUMP COREDUMP_EXE="/usr/bin/test-dump" |
- /usr/lib/systemd/systemd-coredump --backtrace $$ 0 0 6 1679509994 12345 mymachine
-# Wait a bit for the coredump to get processed
-timeout 30 bash -c "while [[ \$(coredumpctl list -q --no-legend $$ | wc -l) -eq 0 ]]; do sleep 1; done"
-coredumpctl info "$$"
+ /usr/lib/systemd/systemd-coredump --backtrace $$ 0 0 6 1679509900 12345
+journalctl -b -n 1 --output=export --output-fields=MESSAGE,COREDUMP COREDUMP_EXE="/usr/bin/test-dump" |
+ /usr/lib/systemd/systemd-coredump --backtrace $$ 0 0 6 1679509901 12345 mymachine
+# Wait a bit for the coredumps to get processed
+timeout 30 bash -c "while [[ \$(coredumpctl list -q --no-legend $$ | wc -l) -lt 2 ]]; do sleep 1; done"
+coredumpctl info $$
+coredumpctl info COREDUMP_TIMESTAMP=1679509900000000
+coredumpctl info COREDUMP_TIMESTAMP=1679509901000000
coredumpctl info COREDUMP_HOSTNAME="mymachine"
# This used to cause a stack overflow

View File

@@ -0,0 +1,123 @@
From 6e3c4832994b9bc6da0a9a0b7a0c55a6fb38cab5 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Zbigniew=20J=C4=99drzejewski-Szmek?= <zbyszek@in.waw.pl>
Date: Wed, 21 May 2025 22:33:50 +0200
Subject: [PATCH] coredump: wrap long lines, fix grammar in comments
(cherry picked from commit c673f1f67aa44f99be5fdcb0dc22d7599776e5ed)
Related: RHEL-104135
---
src/coredump/coredump.c | 34 ++++++++++++++++++----------------
1 file changed, 18 insertions(+), 16 deletions(-)
diff --git a/src/coredump/coredump.c b/src/coredump/coredump.c
index 58bcd4910f..c96b59b2f5 100644
--- a/src/coredump/coredump.c
+++ b/src/coredump/coredump.c
@@ -95,9 +95,9 @@ assert_cc(JOURNAL_SIZE_MAX <= DATA_SIZE_MAX);
enum {
/* We use these as array indexes for our process metadata cache.
*
- * The first indices of the cache stores the same metadata as the ones passed by
- * the kernel via argv[], ie the strings array passed by the kernel according to
- * our pattern defined in /proc/sys/kernel/core_pattern (see man:core(5)). */
+ * The first indices of the cache stores the same metadata as the ones passed by the kernel via
+ * argv[], i.e. the strings specified in our pattern defined in /proc/sys/kernel/core_pattern,
+ * see core(5). */
META_ARGV_PID, /* %P: as seen in the initial pid namespace */
META_ARGV_UID, /* %u: as seen in the initial user namespace */
@@ -274,7 +274,6 @@ static int fix_acl(int fd, uid_t uid, bool allow_user) {
}
static int fix_xattr(int fd, const Context *context) {
-
static const char * const xattrs[_META_MAX] = {
[META_ARGV_PID] = "user.coredump.pid",
[META_ARGV_UID] = "user.coredump.uid",
@@ -1032,9 +1031,9 @@ static int context_parse_iovw(Context *context, struct iovec_wrapper *iovw) {
bool have_signal_name = false;
FOREACH_ARRAY(iovec, iovw->iovec, iovw->count) {
for (size_t i = 0; i < ELEMENTSOF(meta_field_names); i++) {
- /* Note that these strings are NUL terminated, because we made sure that a
+ /* Note that these strings are NUL-terminated, because we made sure that a
* trailing NUL byte is in the buffer, though not included in the iov_len
- * count (see process_socket() and gather_pid_metadata_*()) */
+ * count (see process_socket() and gather_pid_metadata_*()). */
assert(((char*) iovec->iov_base)[iovec->iov_len] == 0);
const char *p = memory_startswith(iovec->iov_base, iovec->iov_len, meta_field_names[i]);
@@ -1049,10 +1048,11 @@ static int context_parse_iovw(Context *context, struct iovec_wrapper *iovw) {
memory_startswith(iovec->iov_base, iovec->iov_len, "COREDUMP_SIGNAL_NAME=");
}
- /* The basic fields from argv[] should always be there, refuse early if not */
+ /* The basic fields from argv[] should always be there, refuse early if not. */
for (int i = 0; i < _META_ARGV_REQUIRED; i++)
if (!context->meta[i])
- return log_error_errno(SYNTHETIC_ERRNO(EINVAL), "A required (%s) has not been sent, aborting.", meta_field_names[i]);
+ return log_error_errno(SYNTHETIC_ERRNO(EINVAL),
+ "A required (%s) has not been sent, aborting.", meta_field_names[i]);
pid_t parsed_pid;
r = parse_pid(context->meta[META_ARGV_PID], &parsed_pid);
@@ -1060,7 +1060,8 @@ static int context_parse_iovw(Context *context, struct iovec_wrapper *iovw) {
return log_error_errno(r, "Failed to parse PID \"%s\": %m", context->meta[META_ARGV_PID]);
if (pidref_is_set(&context->pidref)) {
if (context->pidref.pid != parsed_pid)
- return log_error_errno(r, "Passed PID " PID_FMT " does not match passed " PID_FMT ": %m", parsed_pid, context->pidref.pid);
+ return log_error_errno(r, "Passed PID " PID_FMT " does not match passed " PID_FMT ": %m",
+ parsed_pid, context->pidref.pid);
} else {
r = pidref_set_pid(&context->pidref, parsed_pid);
if (r < 0)
@@ -1158,7 +1159,8 @@ static int process_socket(int fd) {
* that's permissible for the final two fds. Hence let's be strict on the
* first fd, but lenient on the other two. */
- if (!cmsg_find(&mh, SOL_SOCKET, SCM_RIGHTS, (socklen_t) -1) && state != STATE_PAYLOAD) /* no fds, and already got the first fd → we are done */
+ if (!cmsg_find(&mh, SOL_SOCKET, SCM_RIGHTS, (socklen_t) -1) && state != STATE_PAYLOAD)
+ /* No fds, and already got the first fd → we are done. */
break;
cmsg_close_all(&mh);
@@ -1350,7 +1352,7 @@ static int gather_pid_metadata_from_argv(
}
/* Cache some of the process metadata we collected so far and that we'll need to
- * access soon */
+ * access soon. */
return context_parse_iovw(context, iovw);
}
@@ -1465,12 +1467,12 @@ static int gather_pid_metadata_from_procfs(struct iovec_wrapper *iovw, Context *
if (get_process_environ(pid, &t) >= 0)
(void) iovw_put_string_field_free(iovw, "COREDUMP_ENVIRON=", t);
- /* Now that we have parsed info from /proc/ ensure the pidfd is still valid before continuing */
+ /* Now that we have parsed info from /proc/ ensure the pidfd is still valid before continuing. */
r = pidref_verify(&context->pidref);
if (r < 0)
return log_error_errno(r, "PIDFD validation failed: %m");
- /* we successfully acquired all metadata */
+ /* We successfully acquired all metadata. */
return context_parse_iovw(context, iovw);
}
@@ -1826,12 +1828,12 @@ static int process_kernel(int argc, char* argv[]) {
log_warning_errno(r, "Failed to access the mount tree of a container, ignoring: %m");
}
- /* If this is PID 1 disable coredump collection, we'll unlikely be able to process
+ /* If this is PID 1, disable coredump collection, we'll unlikely be able to process
* it later on.
*
* FIXME: maybe we should disable coredumps generation from the beginning and
- * re-enable it only when we know it's either safe (ie we're not running OOM) or
- * it's not pid1 ? */
+ * re-enable it only when we know it's either safe (i.e. we're not running OOM) or
+ * it's not PID 1 ? */
if (context.is_pid1) {
log_notice("Due to PID 1 having crashed coredump collection will now be turned off.");
disable_coredumps();

View File

@@ -0,0 +1,94 @@
From 213ca1ba422418ebf37d4e518704e97b691e8f13 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Zbigniew=20J=C4=99drzejewski-Szmek?= <zbyszek@in.waw.pl>
Date: Mon, 26 May 2025 12:04:44 +0200
Subject: [PATCH] coredump: get rid of _META_MANDATORY_MAX
No functional change. This change is done in preparation for future changes.
Currently, the list of fields which are received on the command line is a
strict subset of the fields which are always expected to be received on a
socket. But when we add new kernel args in the future, we'll have two
non-overlapping sets and this approach will not work. Get rid of the variable
and enumerate the required fields. This set will never change, so this is
actually more maintainable.
The comment with the hint about where to add new fields is swapped with
_META_ARGV_MAX. The new order is more correct.
(cherry picked from commit 49f1f2d4a7612bbed5211a73d11d6a94fbe3bb69)
Related: RHEL-104135
---
src/coredump/coredump.c | 29 +++++++++++++++++++++--------
1 file changed, 21 insertions(+), 8 deletions(-)
diff --git a/src/coredump/coredump.c b/src/coredump/coredump.c
index c96b59b2f5..ac1e1cb9d3 100644
--- a/src/coredump/coredump.c
+++ b/src/coredump/coredump.c
@@ -92,7 +92,7 @@ assert_cc(JOURNAL_SIZE_MAX <= DATA_SIZE_MAX);
#define MOUNT_TREE_ROOT "/run/systemd/mount-rootfs"
-enum {
+typedef enum {
/* We use these as array indexes for our process metadata cache.
*
* The first indices of the cache stores the same metadata as the ones passed by the kernel via
@@ -108,9 +108,9 @@ enum {
_META_ARGV_REQUIRED,
/* The fields below were added to kernel/core_pattern at later points, so they might be missing. */
META_ARGV_HOSTNAME = _META_ARGV_REQUIRED, /* %h: hostname */
- _META_ARGV_MAX,
/* If new fields are added, they should be added here, to maintain compatibility
* with callers which don't know about the new fields. */
+ _META_ARGV_MAX,
/* The following indexes are cached for a couple of special fields we use (and
* thereby need to be retrieved quickly) for naming coredump files, and attaching
@@ -118,16 +118,15 @@ enum {
* environment. */
META_COMM = _META_ARGV_MAX,
- _META_MANDATORY_MAX,
/* The rest are similar to the previous ones except that we won't fail if one of
* them is missing in a message sent over the socket. */
- META_EXE = _META_MANDATORY_MAX,
+ META_EXE,
META_UNIT,
META_PROC_AUXV,
_META_MAX
-};
+} meta_argv_t;
static const char * const meta_field_names[_META_MAX] = {
[META_ARGV_PID] = "COREDUMP_PID=",
@@ -1224,10 +1223,24 @@ static int process_socket(int fd) {
if (r < 0)
return r;
- /* Make sure we received at least all fields we need. */
- for (int i = 0; i < _META_MANDATORY_MAX; i++)
+ /* Make sure we received all the expected fields. We support being called by an *older*
+ * systemd-coredump from the outside, so we require only the basic set of fields that
+ * was being sent when the support for sending to containers over a socket was added
+ * in a108c43e36d3ceb6e34efe37c014fc2cda856000. */
+ meta_argv_t i;
+ FOREACH_ARGUMENT(i,
+ META_ARGV_PID,
+ META_ARGV_UID,
+ META_ARGV_GID,
+ META_ARGV_SIGNAL,
+ META_ARGV_TIMESTAMP,
+ META_ARGV_RLIMIT,
+ META_ARGV_HOSTNAME,
+ META_COMM)
if (!context.meta[i])
- return log_error_errno(SYNTHETIC_ERRNO(EINVAL), "A mandatory argument (%i) has not been sent, aborting.", i);
+ return log_error_errno(SYNTHETIC_ERRNO(EINVAL),
+ "Mandatory argument %s not received on socket, aborting.",
+ meta_field_names[i]);
return submit_coredump(&context, &iovw, input_fd);
}

View File

@@ -0,0 +1,155 @@
From ca6244bc21757a799cbc81090cda3a85aa183325 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Zbigniew=20J=C4=99drzejewski-Szmek?= <zbyszek@in.waw.pl>
Date: Tue, 29 Apr 2025 14:47:59 +0200
Subject: [PATCH] coredump: use %d in kernel core pattern
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
The kernel provides %d which is documented as
"dump mode—same as value returned by prctl(2) PR_GET_DUMPABLE".
We already query /proc/pid/auxv for this information, but unfortunately this
check is subject to a race, because the crashed process may be replaced by an
attacker before we read this data, for example replacing a SUID process that
was killed by a signal with another process that is not SUID, tricking us into
making the coredump of the original process readable by the attacker.
With this patch, we effectively add one more check to the list of conditions
that need to be satisfied if we are to make the coredump accessible to the user.
Reported-by: Qualys Security Advisory <qsa@qualys.com>
(cherry picked from commit 0c49e0049b7665bb7769a13ef346fef92e1ad4d6)
Related: RHEL-104135
---
man/systemd-coredump.xml | 12 ++++++++++++
src/coredump/coredump.c | 21 ++++++++++++++++++---
sysctl.d/50-coredump.conf.in | 2 +-
test/units/TEST-87-AUX-UTILS-VM.coredump.sh | 5 +++++
4 files changed, 36 insertions(+), 4 deletions(-)
diff --git a/man/systemd-coredump.xml b/man/systemd-coredump.xml
index 737b80de9a..0f5ccf12f9 100644
--- a/man/systemd-coredump.xml
+++ b/man/systemd-coredump.xml
@@ -292,6 +292,18 @@ COREDUMP_FILENAME=/var/lib/systemd/coredump/core.Web….552351.….zst
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><varname>COREDUMP_DUMPABLE=</varname></term>
+
+ <listitem><para>The <constant>PR_GET_DUMPABLE</constant> field as reported by the kernel, see
+ <citerefentry
+ project='man-pages'><refentrytitle>prctl</refentrytitle><manvolnum>2</manvolnum></citerefentry>.
+ </para>
+
+ <xi:include href="version-info.xml" xpointer="v258"/>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><varname>COREDUMP_OPEN_FDS=</varname></term>
diff --git a/src/coredump/coredump.c b/src/coredump/coredump.c
index ac1e1cb9d3..19d4d02437 100644
--- a/src/coredump/coredump.c
+++ b/src/coredump/coredump.c
@@ -108,6 +108,7 @@ typedef enum {
_META_ARGV_REQUIRED,
/* The fields below were added to kernel/core_pattern at later points, so they might be missing. */
META_ARGV_HOSTNAME = _META_ARGV_REQUIRED, /* %h: hostname */
+ META_ARGV_DUMPABLE, /* %d: as set by the kernel */
/* If new fields are added, they should be added here, to maintain compatibility
* with callers which don't know about the new fields. */
_META_ARGV_MAX,
@@ -136,6 +137,7 @@ static const char * const meta_field_names[_META_MAX] = {
[META_ARGV_TIMESTAMP] = "COREDUMP_TIMESTAMP=",
[META_ARGV_RLIMIT] = "COREDUMP_RLIMIT=",
[META_ARGV_HOSTNAME] = "COREDUMP_HOSTNAME=",
+ [META_ARGV_DUMPABLE] = "COREDUMP_DUMPABLE=",
[META_COMM] = "COREDUMP_COMM=",
[META_EXE] = "COREDUMP_EXE=",
[META_UNIT] = "COREDUMP_UNIT=",
@@ -146,6 +148,7 @@ typedef struct Context {
PidRef pidref;
uid_t uid;
gid_t gid;
+ unsigned dumpable;
int signo;
uint64_t rlimit;
bool is_pid1;
@@ -433,14 +436,16 @@ static int grant_user_access(int core_fd, const Context *context) {
if (r < 0)
return r;
- /* We allow access if we got all the data and at_secure is not set and
- * the uid/gid matches euid/egid. */
+ /* We allow access if dumpable on the command line was exactly 1, we got all the data,
+ * at_secure is not set, and the uid/gid match euid/egid. */
bool ret =
+ context->dumpable == 1 &&
at_secure == 0 &&
uid != UID_INVALID && euid != UID_INVALID && uid == euid &&
gid != GID_INVALID && egid != GID_INVALID && gid == egid;
- log_debug("Will %s access (uid="UID_FMT " euid="UID_FMT " gid="GID_FMT " egid="GID_FMT " at_secure=%s)",
+ log_debug("Will %s access (dumpable=%u uid="UID_FMT " euid="UID_FMT " gid="GID_FMT " egid="GID_FMT " at_secure=%s)",
ret ? "permit" : "restrict",
+ context->dumpable,
uid, euid, gid, egid, yes_no(at_secure));
return ret;
}
@@ -1083,6 +1088,16 @@ static int context_parse_iovw(Context *context, struct iovec_wrapper *iovw) {
if (r < 0)
log_warning_errno(r, "Failed to parse resource limit \"%s\", ignoring: %m", context->meta[META_ARGV_RLIMIT]);
+ /* The value is set to contents of /proc/sys/fs/suid_dumpable, which we set to 2,
+ * if the process is marked as not dumpable, see PR_SET_DUMPABLE(2const). */
+ if (context->meta[META_ARGV_DUMPABLE]) {
+ r = safe_atou(context->meta[META_ARGV_DUMPABLE], &context->dumpable);
+ if (r < 0)
+ return log_error_errno(r, "Failed to parse dumpable field \"%s\": %m", context->meta[META_ARGV_DUMPABLE]);
+ if (context->dumpable > 2)
+ log_notice("Got unexpected %%d/dumpable value %u.", context->dumpable);
+ }
+
unit = context->meta[META_UNIT];
context->is_pid1 = streq(context->meta[META_ARGV_PID], "1") || streq_ptr(unit, SPECIAL_INIT_SCOPE);
context->is_journald = streq_ptr(unit, SPECIAL_JOURNALD_SERVICE);
diff --git a/sysctl.d/50-coredump.conf.in b/sysctl.d/50-coredump.conf.in
index 90c080bdfe..a550c87258 100644
--- a/sysctl.d/50-coredump.conf.in
+++ b/sysctl.d/50-coredump.conf.in
@@ -13,7 +13,7 @@
# the core dump.
#
# See systemd-coredump(8) and core(5).
-kernel.core_pattern=|{{LIBEXECDIR}}/systemd-coredump %P %u %g %s %t %c %h
+kernel.core_pattern=|{{LIBEXECDIR}}/systemd-coredump %P %u %g %s %t %c %h %d
# Allow 16 coredumps to be dispatched in parallel by the kernel.
# We collect metadata from /proc/%P/, and thus need to make sure the crashed
diff --git a/test/units/TEST-87-AUX-UTILS-VM.coredump.sh b/test/units/TEST-87-AUX-UTILS-VM.coredump.sh
index a170223f37..0d7bed5609 100755
--- a/test/units/TEST-87-AUX-UTILS-VM.coredump.sh
+++ b/test/units/TEST-87-AUX-UTILS-VM.coredump.sh
@@ -196,12 +196,17 @@ journalctl -b -n 1 --output=export --output-fields=MESSAGE,COREDUMP COREDUMP_EXE=
/usr/lib/systemd/systemd-coredump --backtrace $$ 0 0 6 1679509900 12345
journalctl -b -n 1 --output=export --output-fields=MESSAGE,COREDUMP COREDUMP_EXE="/usr/bin/test-dump" |
/usr/lib/systemd/systemd-coredump --backtrace $$ 0 0 6 1679509901 12345 mymachine
+journalctl -b -n 1 --output=export --output-fields=MESSAGE,COREDUMP COREDUMP_EXE="/usr/bin/test-dump" |
+ /usr/lib/systemd/systemd-coredump --backtrace $$ 0 0 6 1679509902 12345 youmachine 1
# Wait a bit for the coredumps to get processed
timeout 30 bash -c "while [[ \$(coredumpctl list -q --no-legend $$ | wc -l) -lt 2 ]]; do sleep 1; done"
coredumpctl info $$
coredumpctl info COREDUMP_TIMESTAMP=1679509900000000
coredumpctl info COREDUMP_TIMESTAMP=1679509901000000
coredumpctl info COREDUMP_HOSTNAME="mymachine"
+coredumpctl info COREDUMP_TIMESTAMP=1679509902000000
+coredumpctl info COREDUMP_HOSTNAME="youmachine"
+coredumpctl info COREDUMP_DUMPABLE="1"
# This used to cause a stack overflow
systemd-run -t --property CoredumpFilter=all ls /tmp
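
For context, the effect of the new check is easy to demonstrate outside of systemd. Below is a minimal, illustrative sketch of a core_pattern pipe helper, not the systemd-coredump implementation (which parses the field with safe_atou() into the Context struct shown above). It assumes the argument order from the sysctl fragment above, so %d, when the kernel provides it, arrives as argv[8], and it only considers handing the dump to the crashing user when that value is exactly 1:

    /* Illustrative sketch only. Compile with: cc -o dump-helper dump-helper.c */
    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[]) {
        /* Pattern assumed: "|helper %P %u %g %s %t %c %h %d", so %d is argv[8].
         * Kernels that do not know a trailing specifier may pass fewer arguments
         * or an empty string; treat both as "not dumpable". */
        if (argc < 9 || argv[8][0] == '\0') {
            fprintf(stderr, "No %%d argument, treating the process as not dumpable.\n");
            return 0;
        }

        errno = 0;
        char *end = NULL;
        unsigned long dumpable = strtoul(argv[8], &end, 10);
        if (errno != 0 || end == argv[8] || *end != '\0') {
            fprintf(stderr, "Cannot parse dumpable value \"%s\".\n", argv[8]);
            return 1;
        }

        if (dumpable == 1)
            printf("dumpable=1: user access to the core may be considered.\n");
        else
            printf("dumpable=%lu: keeping the core root-only.\n", dumpable);
        return 0;
    }

With fs.suid_dumpable set to 2, protected processes reach the helper with %d=2, so a sketch like this keeps their dumps root-only, mirroring the intent of the grant_user_access() change above.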

View File

@ -0,0 +1,52 @@
From c4112c747b8efc7d2704daddedcc1d0816580359 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Zbigniew=20J=C4=99drzejewski-Szmek?= <zbyszek@in.waw.pl>
Date: Mon, 5 May 2025 15:48:40 +0200
Subject: [PATCH] coredump: also stop forwarding non-dumpable processes
See the comment in the patch for details.
Suggested-by: Qualys Security Advisory <qsa@qualys.com>
(cherry picked from commit 8fc7b2a211eb13ef1a94250b28e1c79cab8bdcb9)
Related: RHEL-104135
---
src/coredump/coredump.c | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/src/coredump/coredump.c b/src/coredump/coredump.c
index 19d4d02437..048eb53546 100644
--- a/src/coredump/coredump.c
+++ b/src/coredump/coredump.c
@@ -1560,10 +1560,21 @@ static int receive_ucred(int transport_fd, struct ucred *ret_ucred) {
return 0;
}
-static int can_forward_coredump(pid_t pid) {
+static int can_forward_coredump(Context *context, pid_t pid) {
_cleanup_free_ char *cgroup = NULL, *path = NULL, *unit = NULL;
int r;
+ assert(context);
+
+ /* We don't use %F/pidfd to pin down the crashed process yet. We need to avoid a situation where the
+ * attacker crashes a SUID process or a root daemon and quickly replaces it with a namespaced process
+ * and we forward the initial part of the coredump to the attacker, inside the namespace.
+ *
+ * TODO: relax this check when %F is implemented and used.
+ */
+ if (context->dumpable != 1)
+ return false;
+
r = cg_pid_get_path(SYSTEMD_CGROUP_CONTROLLER, pid, &cgroup);
if (r < 0)
return r;
@@ -1607,7 +1618,7 @@ static int forward_coredump_to_container(Context *context) {
if (r < 0)
return log_debug_errno(r, "Failed to get namespace leader: %m");
- r = can_forward_coredump(leader_pid);
+ r = can_forward_coredump(context, leader_pid);
if (r < 0)
return log_debug_errno(r, "Failed to check if coredump can be forwarded: %m");
if (r == 0)
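
For orientation only, since the patch itself relies on systemd's cgroup lookup here and on in_same_namespace() later in the series: the "namespaced process" concern boils down to whether the PID in question lives in a different PID namespace than systemd-coredump. A self-contained way to ask that question is to compare the nsfs inodes behind /proc/<pid>/ns/pid, roughly as in this sketch:

    /* Illustrative sketch, not systemd code. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    static int in_foreign_pidns(pid_t pid, bool *ret) {
        char path[64];
        struct stat a, b;

        snprintf(path, sizeof(path), "/proc/%d/ns/pid", (int) pid);
        if (stat(path, &a) < 0 || stat("/proc/self/ns/pid", &b) < 0)
            return -1; /* errno left set by stat() */

        /* The ns/ symlinks resolve to nsfs inodes; a differing inode (or device)
         * means the target runs in a different PID namespace than we do. */
        *ret = a.st_ino != b.st_ino || a.st_dev != b.st_dev;
        return 0;
    }

Without a pidfd such a classification is still racy against PID reuse, which is exactly why this patch refuses to forward unless dumpable is 1; the following patches remove the race by pinning the process with %F.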

View File

@ -0,0 +1,42 @@
From 06872611dc55c15942a554b9c2a684ee4979ad00 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Zbigniew=20J=C4=99drzejewski-Szmek?= <zbyszek@in.waw.pl>
Date: Mon, 26 May 2025 15:24:04 +0200
Subject: [PATCH] coredump: get rid of a bogus assertion
The check looks plausible, but when I started checking whether it needs
to be lowered for the recent changes, I realized that it doesn't make
much sense.
context_parse_iovw() is called from a few places, e.g.:
- process_socket(), where the other side controls the contents of the
message. We already do other checks on the correctness of the message
and this assert is not needed.
- gather_pid_metadata_from_argv(), which is called after
inserting MESSAGE_ID= and PRIORITY= into the array, so there is no
direct relation between _META_ARGV_MAX and the number of args in the
iovw.
- gather_pid_metadata_from_procfs(), where we insert a bazillion fields,
but without any relation to _META_ARGV_MAX.
Since we already separately check if the required stuff was set, drop this
misleading check.
(cherry picked from commit 13902e025321242b1d95c6d8b4e482b37f58cdef)
Related: RHEL-104135
---
src/coredump/coredump.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/src/coredump/coredump.c b/src/coredump/coredump.c
index 048eb53546..88cd1c394d 100644
--- a/src/coredump/coredump.c
+++ b/src/coredump/coredump.c
@@ -1027,7 +1027,6 @@ static int context_parse_iovw(Context *context, struct iovec_wrapper *iovw) {
assert(context);
assert(iovw);
- assert(iovw->count >= _META_ARGV_MAX);
/* Converts the data in the iovec array iovw into separate fields. Fills in context->meta[] (for
* which no memory is allocated, it just contains direct pointers into the iovec array memory). */

View File

@ -0,0 +1,153 @@
From bf1c8601385ea5b066557ae8def99f80d876f213 Mon Sep 17 00:00:00 2001
From: Luca Boccassi <luca.boccassi@gmail.com>
Date: Sun, 13 Apr 2025 22:10:36 +0100
Subject: [PATCH] coredump: add support for new %F PIDFD specifier
A new core_pattern specifier, %F, was added to provide the usermode helper
process with a PIDFD referring to the crashed process.
This removes all possible race conditions, ensuring only the
crashed process gets inspected by systemd-coredump.
(cherry picked from commit 868d95577ec9f862580ad365726515459be582fc)
Related: RHEL-104135
---
man/systemd-coredump.xml | 11 +++++++
src/coredump/coredump.c | 60 ++++++++++++++++++++++++++++++++++--
sysctl.d/50-coredump.conf.in | 2 +-
3 files changed, 70 insertions(+), 3 deletions(-)
diff --git a/man/systemd-coredump.xml b/man/systemd-coredump.xml
index 0f5ccf12f9..185497125c 100644
--- a/man/systemd-coredump.xml
+++ b/man/systemd-coredump.xml
@@ -192,6 +192,17 @@ COREDUMP_FILENAME=/var/lib/systemd/coredump/core.Web….552351.….zst
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><varname>COREDUMP_BY_PIDFD=</varname></term>
+ <listitem><para>If the crashed process was analyzed using a PIDFD provided by the kernel (requires
+ kernel v6.16) then this field will be present and set to <literal>1</literal>. If this field is
+ not set, then the crashed process was analyzed via a PID, which is known to be subject to race
+ conditions.</para>
+
+ <xi:include href="version-info.xml" xpointer="v258"/>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><varname>COREDUMP_TIMESTAMP=</varname></term>
<listitem><para>The time of the crash as reported by the kernel (in μs since the epoch).</para>
diff --git a/src/coredump/coredump.c b/src/coredump/coredump.c
index 88cd1c394d..940eb44528 100644
--- a/src/coredump/coredump.c
+++ b/src/coredump/coredump.c
@@ -109,6 +109,7 @@ typedef enum {
/* The fields below were added to kernel/core_pattern at later points, so they might be missing. */
META_ARGV_HOSTNAME = _META_ARGV_REQUIRED, /* %h: hostname */
META_ARGV_DUMPABLE, /* %d: as set by the kernel */
+ META_ARGV_PIDFD, /* %F: pidfd of the process, since v6.16 */
/* If new fields are added, they should be added here, to maintain compatibility
* with callers which don't know about the new fields. */
_META_ARGV_MAX,
@@ -138,6 +139,7 @@ static const char * const meta_field_names[_META_MAX] = {
[META_ARGV_RLIMIT] = "COREDUMP_RLIMIT=",
[META_ARGV_HOSTNAME] = "COREDUMP_HOSTNAME=",
[META_ARGV_DUMPABLE] = "COREDUMP_DUMPABLE=",
+ [META_ARGV_PIDFD] = "COREDUMP_BY_PIDFD=",
[META_COMM] = "COREDUMP_COMM=",
[META_EXE] = "COREDUMP_EXE=",
[META_UNIT] = "COREDUMP_UNIT=",
@@ -1341,7 +1343,8 @@ static int gather_pid_metadata_from_argv(
Context *context,
int argc, char **argv) {
- int r;
+ _cleanup_(pidref_done) PidRef local_pidref = PIDREF_NULL;
+ int r, kernel_fd = -EBADF;
assert(iovw);
assert(context);
@@ -1373,6 +1376,47 @@ static int gather_pid_metadata_from_argv(
t = buf;
}
+ if (i == META_ARGV_PID) {
+ /* Store this so that we can check whether the core will be forwarded to a container
+ * even when the kernel doesn't provide a pidfd. Can be dropped once baseline is
+ * >= v6.16. */
+ r = pidref_set_pidstr(&local_pidref, t);
+ if (r < 0)
+ return log_error_errno(r, "Failed to initialize pidref from pid %s: %m", t);
+ }
+
+ if (i == META_ARGV_PIDFD) {
+ /* If the current kernel doesn't support the %F specifier (which resolves to a
+ * pidfd), but we included it in the core_pattern expression, we'll receive an empty
+ * string here. Deal with that gracefully. */
+ if (isempty(t))
+ continue;
+
+ assert(!pidref_is_set(&context->pidref));
+ assert(kernel_fd < 0);
+
+ kernel_fd = parse_fd(t);
+ if (kernel_fd < 0)
+ return log_error_errno(kernel_fd, "Failed to parse pidfd \"%s\": %m", t);
+
+ r = pidref_set_pidfd(&context->pidref, kernel_fd);
+ if (r < 0)
+ return log_error_errno(r, "Failed to initialize pidref from pidfd %d: %m", kernel_fd);
+
+ /* If there are containers involved with different versions of the code they might
+ * not be using pidfds, so it would be wrong to set the metadata, skip it. */
+ r = in_same_namespace(/* pid1 = */ 0, context->pidref.pid, NAMESPACE_PID);
+ if (r < 0)
+ log_debug_errno(r, "Failed to check pidns of crashing process, ignoring: %m");
+ if (r <= 0)
+ continue;
+
+ /* We don't print the fd number in the journal as it's meaningless, but we still
+ * record that the parsing was done with a kernel-provided fd as it means it's safe
+ * from races, which is valuable information to provide in the journal record. */
+ t = "1";
+ }
+
r = iovw_put_string_field(iovw, meta_field_names[i], t);
if (r < 0)
return r;
@@ -1380,7 +1424,19 @@ static int gather_pid_metadata_from_argv(
/* Cache some of the process metadata we collected so far and that we'll need to
* access soon. */
- return context_parse_iovw(context, iovw);
+ r = context_parse_iovw(context, iovw);
+ if (r < 0)
+ return r;
+
+ /* If the kernel didn't give us a PIDFD, then use the one derived from the
+ * PID immediately, given we have it. */
+ if (!pidref_is_set(&context->pidref))
+ context->pidref = TAKE_PIDREF(local_pidref);
+
+ /* Close the kernel-provided FD as the last thing after everything else succeeded */
+ kernel_fd = safe_close(kernel_fd);
+
+ return 0;
}
static int gather_pid_metadata_from_procfs(struct iovec_wrapper *iovw, Context *context) {
diff --git a/sysctl.d/50-coredump.conf.in b/sysctl.d/50-coredump.conf.in
index a550c87258..fe8f7670b0 100644
--- a/sysctl.d/50-coredump.conf.in
+++ b/sysctl.d/50-coredump.conf.in
@@ -13,7 +13,7 @@
# the core dump.
#
# See systemd-coredump(8) and core(5).
-kernel.core_pattern=|{{LIBEXECDIR}}/systemd-coredump %P %u %g %s %t %c %h %d
+kernel.core_pattern=|{{LIBEXECDIR}}/systemd-coredump %P %u %g %s %t %c %h %d %F
# Allow 16 coredumps to be dispatched in parallel by the kernel.
# We collect metadata from /proc/%P/, and thus need to make sure the crashed
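
To make the mechanism concrete: with %F in core_pattern the kernel opens a pidfd to the crashing process and hands the usermode helper the number of that already-inherited descriptor as an additional argument. Below is a minimal, illustrative sketch, not the patch's code, and it assumes for brevity that the pidfd number is the helper's first argument instead of trailing the full %P ... %d list:

    /* Illustrative sketch only. */
    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(int argc, char *argv[]) {
        if (argc < 2 || argv[1][0] == '\0') {
            /* Kernels without %F support expand it to an empty string. */
            fprintf(stderr, "No pidfd passed, falling back to PID-based handling.\n");
            return 0;
        }

        char *end = NULL;
        long fd = strtol(argv[1], &end, 10);
        if (end == argv[1] || *end != '\0' || fd < 0) {
            fprintf(stderr, "Cannot parse pidfd argument \"%s\".\n", argv[1]);
            return 1;
        }

        /* pidfd_send_signal(2) with signal 0 performs existence and permission
         * checks without delivering anything. Since the pidfd pins the identity
         * of the process, later PID reuse cannot redirect this check. */
        if (syscall(SYS_pidfd_send_signal, (int) fd, 0, NULL, 0) < 0)
            fprintf(stderr, "Process behind pidfd %ld is gone or inaccessible: %s\n",
                    fd, strerror(errno));
        else
            printf("pidfd %ld still refers to the crashed process.\n", fd);
        return 0;
    }

The empty-string fallback mirrors the isempty() check in gather_pid_metadata_from_argv() above, which keeps the pattern usable on kernels that predate %F.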

View File

@ -0,0 +1,53 @@
From 0838984f8cd8959c11fa7ff115510911d0a28890 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Zbigniew=20J=C4=99drzejewski-Szmek?= <zbyszek@in.waw.pl>
Date: Tue, 27 May 2025 10:44:32 +0200
Subject: [PATCH] coredump: when %F/pidfd is used, again allow forwarding to
containers
(cherry picked from commit e6a8687b939ab21854f12f59a3cce703e32768cf)
Related: RHEL-104135
---
src/coredump/coredump.c | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/src/coredump/coredump.c b/src/coredump/coredump.c
index 940eb44528..67abc20ec5 100644
--- a/src/coredump/coredump.c
+++ b/src/coredump/coredump.c
@@ -155,6 +155,7 @@ typedef struct Context {
uint64_t rlimit;
bool is_pid1;
bool is_journald;
+ bool got_pidfd;
int mount_tree_fd;
/* These point into external memory, are not owned by this object */
@@ -1403,6 +1404,8 @@ static int gather_pid_metadata_from_argv(
if (r < 0)
return log_error_errno(r, "Failed to initialize pidref from pidfd %d: %m", kernel_fd);
+ context->got_pidfd = 1;
+
/* If there are containers involved with different versions of the code they might
* not be using pidfds, so it would be wrong to set the metadata, skip it. */
r = in_same_namespace(/* pid1 = */ 0, context->pidref.pid, NAMESPACE_PID);
@@ -1621,13 +1624,11 @@ static int can_forward_coredump(Context *context, pid_t pid) {
assert(context);
- /* We don't use %F/pidfd to pin down the crashed process yet. We need to avoid a situation where the
- * attacker crashes a SUID process or a root daemon and quickly replaces it with a namespaced process
- * and we forward the initial part of the coredump to the attacker, inside the namespace.
- *
- * TODO: relax this check when %F is implemented and used.
- */
- if (context->dumpable != 1)
+ /* We need to avoid a situation where the attacker crashes a SUID process or a root daemon and
+ * quickly replaces it with a namespaced process and we forward the coredump to the attacker, into
+ * the namespace. With %F/pidfd we can reliably check the namespace of the original process, hence we
+ * can allow forwarding. */
+ if (!context->got_pidfd && context->dumpable != 1)
return false;
r = cg_pid_get_path(SYSTEMD_CGROUP_CONTROLLER, pid, &cgroup);

View File

@ -0,0 +1,82 @@
From 7d1e71ab4edfde63ac5516dcbbb972379fae17dd Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Zbigniew=20J=C4=99drzejewski-Szmek?= <zbyszek@in.waw.pl>
Date: Tue, 27 May 2025 20:32:30 +0200
Subject: [PATCH] coredump: introduce an enum to wrap dumpable constants
Two constants are described in the man page, but are not defined by a header.
The third constant is described in the kernel docs. Use explicit values to
show that those values are defined externally.
(cherry picked from commit 76e0ab49c47965877c19772a2b3bf55f6417ca39)
Related: RHEL-104135
---
src/coredump/coredump.c | 10 +++++-----
src/shared/coredump-util.h | 7 +++++++
2 files changed, 12 insertions(+), 5 deletions(-)
diff --git a/src/coredump/coredump.c b/src/coredump/coredump.c
index 67abc20ec5..7bde2f5196 100644
--- a/src/coredump/coredump.c
+++ b/src/coredump/coredump.c
@@ -442,7 +442,7 @@ static int grant_user_access(int core_fd, const Context *context) {
/* We allow access if dumpable on the command line was exactly 1, we got all the data,
* at_secure is not set, and the uid/gid match euid/egid. */
bool ret =
- context->dumpable == 1 &&
+ context->dumpable == SUID_DUMP_USER &&
at_secure == 0 &&
uid != UID_INVALID && euid != UID_INVALID && uid == euid &&
gid != GID_INVALID && egid != GID_INVALID && gid == egid;
@@ -1090,13 +1090,13 @@ static int context_parse_iovw(Context *context, struct iovec_wrapper *iovw) {
if (r < 0)
log_warning_errno(r, "Failed to parse resource limit \"%s\", ignoring: %m", context->meta[META_ARGV_RLIMIT]);
- /* The value is set to contents of /proc/sys/fs/suid_dumpable, which we set to 2,
+ /* The value is set to contents of /proc/sys/fs/suid_dumpable, which we set to SUID_DUMP_SAFE (2),
* if the process is marked as not dumpable, see PR_SET_DUMPABLE(2const). */
if (context->meta[META_ARGV_DUMPABLE]) {
r = safe_atou(context->meta[META_ARGV_DUMPABLE], &context->dumpable);
if (r < 0)
return log_error_errno(r, "Failed to parse dumpable field \"%s\": %m", context->meta[META_ARGV_DUMPABLE]);
- if (context->dumpable > 2)
+ if (context->dumpable > SUID_DUMP_SAFE)
log_notice("Got unexpected %%d/dumpable value %u.", context->dumpable);
}
@@ -1628,7 +1628,7 @@ static int can_forward_coredump(Context *context, pid_t pid) {
* quickly replaces it with a namespaced process and we forward the coredump to the attacker, into
* the namespace. With %F/pidfd we can reliably check the namespace of the original process, hence we
* can allow forwarding. */
- if (!context->got_pidfd && context->dumpable != 1)
+ if (!context->got_pidfd && context->dumpable != SUID_DUMP_USER)
return false;
r = cg_pid_get_path(SYSTEMD_CGROUP_CONTROLLER, pid, &cgroup);
@@ -2016,7 +2016,7 @@ static int run(int argc, char *argv[]) {
log_set_target_and_open(LOG_TARGET_KMSG);
/* Make sure we never enter a loop */
- (void) prctl(PR_SET_DUMPABLE, 0);
+ (void) prctl(PR_SET_DUMPABLE, SUID_DUMP_DISABLE);
/* Ignore all parse errors */
(void) parse_config();
diff --git a/src/shared/coredump-util.h b/src/shared/coredump-util.h
index 4f54bb94c0..73c74c98c7 100644
--- a/src/shared/coredump-util.h
+++ b/src/shared/coredump-util.h
@@ -25,6 +25,13 @@ typedef enum CoredumpFilter {
/* The kernel doesn't like UINT64_MAX and returns ERANGE, use UINT32_MAX to support future new flags */
#define COREDUMP_FILTER_MASK_ALL UINT32_MAX
+typedef enum SuidDumpMode {
+ SUID_DUMP_DISABLE = 0, /* PR_SET_DUMPABLE(2const) */
+ SUID_DUMP_USER = 1, /* PR_SET_DUMPABLE(2const) */
+ SUID_DUMP_SAFE = 2, /* https://www.kernel.org/doc/html/latest/admin-guide/sysctl/fs.html#suid-dumpable */
+ _SUID_DUMP_MODE_MAX,
+} SuidDumpMode;
+
const char* coredump_filter_to_string(CoredumpFilter i) _const_;
CoredumpFilter coredump_filter_from_string(const char *s) _pure_;
int coredump_filter_mask_from_string(const char *s, uint64_t *ret);
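
The enum only names values defined elsewhere: 0 and 1 come from PR_SET_DUMPABLE(2const), and 2 is the fs.suid_dumpable mode that, per the comment updated above, systemd configures for protected processes. A small standalone sketch (hypothetical helper, not part of the patch) that reads the sysctl and labels it with the same numbers:

    /* Illustrative sketch only. */
    #include <stdio.h>

    int main(void) {
        FILE *f = fopen("/proc/sys/fs/suid_dumpable", "r");
        if (!f) {
            perror("fopen(/proc/sys/fs/suid_dumpable)");
            return 1;
        }

        int mode;
        if (fscanf(f, "%d", &mode) != 1) {
            fclose(f);
            fprintf(stderr, "Unexpected sysctl contents.\n");
            return 1;
        }
        fclose(f);

        const char *desc =
            mode == 0 ? "SUID_DUMP_DISABLE: protected (e.g. set-uid) processes are not dumped" :
            mode == 1 ? "SUID_DUMP_USER: cores are owned by the dumping user" :
            mode == 2 ? "SUID_DUMP_SAFE: protected processes dump too, but not user-readable" :
                        "unexpected value";
        printf("fs.suid_dumpable=%d (%s)\n", mode, desc);
        return 0;
    }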

View File

@ -0,0 +1,115 @@
From 8973e33905369c319d8db07d9a453fc3c99099ef Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Zbigniew=20J=C4=99drzejewski-Szmek?= <zbyszek@in.waw.pl>
Date: Wed, 28 May 2025 18:31:13 +0200
Subject: [PATCH] Define helper to call PR_SET_DUMPABLE
(cherry picked from commit 9ce8e3e449def92c75ada41b7d10c5bc3946be77)
Related: RHEL-104135
---
src/coredump/coredump.c | 3 +--
src/shared/coredump-util.c | 7 +++++++
src/shared/coredump-util.h | 2 ++
src/shared/elf-util.c | 4 ++--
src/shared/tests.c | 1 +
5 files changed, 13 insertions(+), 4 deletions(-)
diff --git a/src/coredump/coredump.c b/src/coredump/coredump.c
index 7bde2f5196..caec4bb76c 100644
--- a/src/coredump/coredump.c
+++ b/src/coredump/coredump.c
@@ -3,7 +3,6 @@
#include <errno.h>
#include <stdio.h>
#include <sys/mount.h>
-#include <sys/prctl.h>
#include <sys/statvfs.h>
#include <sys/auxv.h>
#include <sys/xattr.h>
@@ -2016,7 +2015,7 @@ static int run(int argc, char *argv[]) {
log_set_target_and_open(LOG_TARGET_KMSG);
/* Make sure we never enter a loop */
- (void) prctl(PR_SET_DUMPABLE, SUID_DUMP_DISABLE);
+ (void) set_dumpable(SUID_DUMP_DISABLE);
/* Ignore all parse errors */
(void) parse_config();
diff --git a/src/shared/coredump-util.c b/src/shared/coredump-util.c
index 805503f366..0050e133c4 100644
--- a/src/shared/coredump-util.c
+++ b/src/shared/coredump-util.c
@@ -1,14 +1,21 @@
/* SPDX-License-Identifier: LGPL-2.1-or-later */
#include <elf.h>
+#include <sys/prctl.h>
#include "coredump-util.h"
+#include "errno-util.h"
#include "extract-word.h"
#include "fileio.h"
#include "string-table.h"
#include "unaligned.h"
#include "virt.h"
+int set_dumpable(SuidDumpMode mode) {
+ /* Cast mode explicitly to long, because prctl wants longs but is varargs. */
+ return RET_NERRNO(prctl(PR_SET_DUMPABLE, (long) mode));
+}
+
static const char *const coredump_filter_table[_COREDUMP_FILTER_MAX] = {
[COREDUMP_FILTER_PRIVATE_ANONYMOUS] = "private-anonymous",
[COREDUMP_FILTER_SHARED_ANONYMOUS] = "shared-anonymous",
diff --git a/src/shared/coredump-util.h b/src/shared/coredump-util.h
index 73c74c98c7..b18cb33c84 100644
--- a/src/shared/coredump-util.h
+++ b/src/shared/coredump-util.h
@@ -32,6 +32,8 @@ typedef enum SuidDumpMode {
_SUID_DUMP_MODE_MAX,
} SuidDumpMode;
+int set_dumpable(SuidDumpMode mode);
+
const char* coredump_filter_to_string(CoredumpFilter i) _const_;
CoredumpFilter coredump_filter_from_string(const char *s) _pure_;
int coredump_filter_mask_from_string(const char *s, uint64_t *ret);
diff --git a/src/shared/elf-util.c b/src/shared/elf-util.c
index a3ff1fd3fb..ff8818de27 100644
--- a/src/shared/elf-util.c
+++ b/src/shared/elf-util.c
@@ -6,12 +6,12 @@
#include <elfutils/libdwelf.h>
#include <elfutils/libdwfl.h>
#include <libelf.h>
-#include <sys/prctl.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <unistd.h>
#include "alloc-util.h"
+#include "coredump-util.h"
#include "dlfcn-util.h"
#include "elf-util.h"
#include "errno-util.h"
@@ -825,7 +825,7 @@ int parse_elf_object(int fd, const char *executable, const char *root, bool fork
if (r == 0) {
/* We want to avoid loops, given this can be called from systemd-coredump */
if (fork_disable_dump) {
- r = RET_NERRNO(prctl(PR_SET_DUMPABLE, 0));
+ r = set_dumpable(SUID_DUMP_DISABLE);
if (r < 0)
report_errno_and_exit(error_pipe[1], r);
}
diff --git a/src/shared/tests.c b/src/shared/tests.c
index 50b30ca17d..88031e90d9 100644
--- a/src/shared/tests.c
+++ b/src/shared/tests.c
@@ -16,6 +16,7 @@
#include "bus-wait-for-jobs.h"
#include "cgroup-setup.h"
#include "cgroup-util.h"
+#include "coredump-util.h"
#include "env-file.h"
#include "env-util.h"
#include "fd-util.h"

View File

@ -0,0 +1,25 @@
From f370e6bdbd9fb01e331ff1850f6e9d5be51a15aa Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Zbigniew=20J=C4=99drzejewski-Szmek?= <zbyszek@in.waw.pl>
Date: Fri, 6 Jun 2025 17:03:46 +0200
Subject: [PATCH] coredump: fix 0-passed-as-pointer warning
(cherry picked from commit 8ec2e177b01339ee940efd323361971acf027cc9)
Related: RHEL-104135
---
src/coredump/coredump.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/coredump/coredump.c b/src/coredump/coredump.c
index caec4bb76c..412411bff7 100644
--- a/src/coredump/coredump.c
+++ b/src/coredump/coredump.c
@@ -217,7 +217,7 @@ static int parse_config(void) {
#if HAVE_DWFL_SET_SYSROOT
{ "Coredump", "EnterNamespace", config_parse_bool, 0, &arg_enter_namespace },
#else
- { "Coredump", "EnterNamespace", config_parse_warn_compat, DISABLED_CONFIGURATION, 0 },
+ { "Coredump", "EnterNamespace", config_parse_warn_compat, DISABLED_CONFIGURATION, NULL },
#endif
{}
};

View File

@ -48,7 +48,7 @@ Url: https://systemd.io
# Allow users to specify the version and release when building the rpm by
# setting the %%version_override and %%release_override macros.
Version: %{?version_override}%{!?version_override:257}
Release: 16%{?dist}
Release: 17%{?dist}
%global stable %(c="%version"; [ "$c" = "${c#*.*}" ]; echo $?)
@ -562,6 +562,38 @@ Patch0449: 0449-test-restarting-elapsed-timer-shouldn-t-trigger-the-.patch
Patch0450: 0450-test-check-the-next-elapse-timer-timestamp-after-des.patch
Patch0451: 0451-timer-don-t-run-service-immediately-after-restart-of.patch
Patch0452: 0452-test-store-and-compare-just-the-property-value.patch
Patch0453: 0453-pam_systemd-honor-session-class-provided-via-PAM-env.patch
Patch0454: 0454-udev-net_id-introduce-naming-scheme-for-RHEL-10.2.patch
Patch0455: 0455-udev-net_id-introduce-naming-scheme-for-RHEL-9.8.patch
Patch0456: 0456-test-split-VM-only-subtests-from-TEST-74-AUX-UTILS-t.patch
Patch0457: 0457-core-transaction-first-drop-unmergable-jobs-for-anch.patch
Patch0458: 0458-test-add-test-case-for-issue-38765.patch
Patch0459: 0459-strv-add-strv_equal_ignore_order-helper.patch
Patch0460: 0460-pam-minor-coding-style-tweaks.patch
Patch0461: 0461-user-record-add-helper-that-checks-if-a-provided-use.patch
Patch0462: 0462-user-record-add-support-for-alias-user-names-to-user.patch
Patch0463: 0463-pam_systemd_home-use-right-field-name-in-error-messa.patch
Patch0464: 0464-pam_systemd_home-support-login-with-alias-names-user.patch
Patch0465: 0465-homed-support-user-record-aliases.patch
Patch0466: 0466-homectl-add-support-for-creating-users-with-alias-na.patch
Patch0467: 0467-test-add-test-for-homed-alias-and-realm-user-resolut.patch
Patch0468: 0468-update-TODO.patch
Patch0469: 0469-udev-create-symlinks-for-s390-PTP-devices.patch
Patch0470: 0470-test-build-the-crashing-test-binary-outside-of-the-t.patch
Patch0471: 0471-test-exclude-test-stacktrace-not-symbolized-from-the.patch
Patch0472: 0472-mkosi-install-test-dependencies-for-EnterNamespace-t.patch
Patch0473: 0473-coredump-verify-pidfd-after-parsing-data-in-usermode.patch
Patch0474: 0474-coredump-restore-compatibility-with-older-patterns.patch
Patch0475: 0475-coredump-wrap-long-lines-fix-grammar-in-comments.patch
Patch0476: 0476-coredump-get-rid-of-_META_MANDATORY_MAX.patch
Patch0477: 0477-coredump-use-d-in-kernel-core-pattern.patch
Patch0478: 0478-coredump-also-stop-forwarding-non-dumpable-processes.patch
Patch0479: 0479-coredump-get-rid-of-a-bogus-assertion.patch
Patch0480: 0480-coredump-add-support-for-new-F-PIDFD-specifier.patch
Patch0481: 0481-coredump-when-F-pidfd-is-used-again-allow-forwarding.patch
Patch0482: 0482-coredump-introduce-an-enum-to-wrap-dumpable-constant.patch
Patch0483: 0483-Define-helper-to-call-PR_SET_DUMPABLE.patch
Patch0484: 0484-coredump-fix-0-passed-as-pointer-warning.patch
# Downstream-only patches (90009999)
%endif
@ -1508,6 +1540,40 @@ rm -f .file-list-*
rm -f %{name}.lang
%changelog
* Wed Nov 05 2025 systemd maintenance team <systemd-maint@redhat.com> - 257-17
- pam_systemd: honor session class provided via PAM environment (RHEL-109832)
- udev/net_id: introduce naming scheme for RHEL-10.2 (RHEL-72813)
- udev/net_id: introduce naming scheme for RHEL-9.8 (RHEL-72813)
- test: split VM-only subtests from TEST-74-AUX-UTILS to new VM-only test (RHEL-112205)
- core/transaction: first drop unmergable jobs for anchor jobs (RHEL-112205)
- test: add test case for issue #38765 (RHEL-112205)
- strv: add strv_equal_ignore_order() helper (RHEL-109902)
- pam: minor coding style tweaks (RHEL-109902)
- user-record: add helper that checks if a provided user name matches a record (RHEL-109902)
- user-record: add support for alias user names to user record (RHEL-109902)
- pam_systemd_home: use right field name in error message (RHEL-109902)
- pam_systemd_home: support login with alias names + user names with realms (RHEL-109902)
- homed: support user record aliases (RHEL-109902)
- homectl: add support for creating users with alias names (RHEL-109902)
- test: add test for homed alias and realm user resolution (RHEL-109902)
- update TODO (RHEL-109902)
- udev: create symlinks for s390 PTP devices (RHEL-120177)
- test: build the crashing test binary outside of the test (RHEL-113920)
- test: exclude test-stacktrace(-not)?-symbolized from the coredump check (RHEL-113920)
- mkosi: install test dependencies for EnterNamespace= test (RHEL-113920)
- coredump: verify pidfd after parsing data in usermode helper (RHEL-104135)
- coredump: restore compatibility with older patterns (RHEL-104135)
- coredump: wrap long lines, fix grammar in comments (RHEL-104135)
- coredump: get rid of _META_MANDATORY_MAX (RHEL-104135)
- coredump: use %d in kernel core pattern (RHEL-104135)
- coredump: also stop forwarding non-dumpable processes (RHEL-104135)
- coredump: get rid of a bogus assertion (RHEL-104135)
- coredump: add support for new %F PIDFD specifier (RHEL-104135)
- coredump: when %F/pidfd is used, again allow forwarding to containers (RHEL-104135)
- coredump: introduce an enum to wrap dumpable constants (RHEL-104135)
- Define helper to call PR_SET_DUMPABLE (RHEL-104135)
- coredump: fix 0-passed-as-pointer warning (RHEL-104135)
* Thu Oct 02 2025 systemd maintenance team <systemd-maint@redhat.com> - 257-16
- test: rename TEST-53-ISSUE-16347 to TEST-53-TIMER (RHEL-118216)
- test: restarting elapsed timer shouldn't trigger the corresponding service (RHEL-118216)