Much-belated cleanup

- remove all references to cgroups v1. That code never worked,
  and "cgroups v2" in test names was misleading because it
  implied that an alternative exists.
- refactor podman remote and local tests
- clean up docs
- Ansible bitrot cleanup ("this is deprecated, use that")

Tested using 1minutetip; even so, this is a big change and we
need to be prepared for fallout in the next Bodhi update.

Signed-off-by: Ed Santiago <santiago@redhat.com>
Author: Ed Santiago
Date: 2024-02-01 06:33:04 -07:00
Committed by: lsm5
Parent: aa55adeec9
Commit: 67da5bfbb7
6 changed files with 17 additions and 127 deletions

@@ -1,21 +1,14 @@
 I'm sorry. The playbooks here are a much-too-complicated way of saying:
 
-    - test podman (root and rootless) under cgroups v2
-    - reboot into cgroups v1
-    - repeat the same podman tests
-We can't use standard-test-basic any more because, tl;dr, that has to
-be the last stanza in the playbook and it doesn't offer any mechanism
-for running a reboot in the middle of tests. (I actually found a way
-but it was even uglier than this approach).
+    - test podman (root and rootless)
+    - same, with podman-remote
 
 The starting point is tests.yml . From there:
 
 tests.yml
 \- test_podman.yml
    |- roles/rootless_user_ready/
-   \- test_podman_cgroups_vn.yml   (runs twice: cgroups v2, v1)
-      |- roles/set_cgroups/
-      \- run_podman_tests.yml      (once for local, once for remote)
-         \- roles/run_bats_tests/  (runs tests: root, rootless)
+   \- run_podman_tests.yml      (once for local, once for remote)
+      \- roles/run_bats_tests/  (runs tests: root, rootless)
 
 Principal result is the file 'artifacts/test.log'. It will contain
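
(tests.yml itself is not touched by this commit, so the entry point named
above is not shown. A minimal sketch of what such a Standard Test Interface
entry point typically looks like; names and layout here are assumptions,
not the actual file:

# Hypothetical tests.yml entry point -- illustration only, not in this diff
- hosts: localhost
  tags: classic
  vars:
    artifacts: ./artifacts
  tasks:
    - include_tasks: test_podman.yml
)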

@@ -4,7 +4,7 @@
   copy: dest=/tmp/test.log content='' force=yes mode=0666
 
 - name: execute tests
-  include: run_one_test.yml
+  include_tasks: run_one_test.yml
   with_items: "{{ tests }}"
   loop_control:
     loop_var: test
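
This one-line change is the "Ansible bitrot" item from the commit message:
bare 'include:' was deprecated in Ansible 2.4 and has since been removed
from ansible-core, so task files must be pulled in with include_tasks
(dynamic) or import_tasks (static). A minimal sketch of why the dynamic
form is the right replacement here:

# include_tasks is evaluated at runtime, so it works inside loops and
# conditionals: each entry of 'tests' is bound to the loop_var 'test'
# for one inclusion of run_one_test.yml. import_tasks is resolved at
# parse time and would not accept per-item loop variables this way.
- name: execute tests
  include_tasks: run_one_test.yml
  with_items: "{{ tests }}"
  loop_control:
    loop_var: test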

@@ -1,73 +0,0 @@
----
-# Check the CURRENT cgroup level; we get this from /proc/cmdline
-- name: check current kernel options
-  shell: fgrep systemd.unified_cgroup_hierarchy=0 /proc/cmdline
-  register: result
-  ignore_errors: true
-
-- name: determine current cgroups | assume v2
-  set_fact: current_cgroups=2
-
-- name: determine current cgroups | looks like v1
-  set_fact: current_cgroups=1
-  when: result is succeeded
-
-- debug:
-    msg: "want: v{{ want_cgroups }} actual: v{{ current_cgroups }}"
-
-- name: grubenv, pre-edit, cat
-  shell: cat /boot/grub2/grubenv
-  register: grubenv
-
-- name: grubenv, pre-edit, show
-  debug:
-    msg: "{{ grubenv.stdout_lines }}"
-
-# Update grubenv file to reflect the desired cgroup level
-- name: remove cgroup option from kernel flags
-  shell:
-    cmd: sed -i -e "s/^\(kernelopts=.*\)systemd\.unified_cgroup_hierarchy=.\(.*\)/\1 \2/" /boot/grub2/grubenv
-
-- name: add it with the desired value
-  shell:
-    cmd: sed -i -e "s/^\(kernelopts=.*\)/\1 systemd.unified_cgroup_hierarchy=0/" /boot/grub2/grubenv
-  when: want_cgroups == 1
-
-- name: grubenv, post-edit, cat
-  shell: cat /boot/grub2/grubenv
-  register: grubenv
-
-- name: grubenv, post-edit, show
-  debug:
-    msg: "post: {{ grubenv.stdout_lines }}"
-
-# If want != have, reboot
-- name: reboot and wait
-  block:
-    - name: reboot
-      reboot:
-        reboot_timeout: 900
-      ignore_errors: yes
-    - name: wait and reconnect
-      wait_for_connection:
-        timeout: 900
-  when: want_cgroups|int != current_cgroups|int
-
-- set_fact:
-    expected_fstype:
-      - none
-      - tmpfs
-      - cgroup2fs
-
-- name: confirm cgroups setting
-  shell: stat -f -c "%T" /sys/fs/cgroup
-  register: fstype
-
-- debug:
-    msg: "stat(/sys/fs/cgroup) = {{ fstype.stdout }}"
-
-- name: system cgroups is the expected type
-  assert:
-    that:
-      - fstype.stdout == expected_fstype[want_cgroups|int]
-    fail_msg: "stat(/sys/fs/cgroup) = {{ fstype.stdout }} (expected {{ expected_fstype[want_cgroups|int] }})"
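
With the reboot-into-v1 machinery above gone, nothing re-checks the host's
cgroup setup before the tests run. If such a guard is ever wanted again, a
minimal sketch (not part of this commit) could reuse the fstype check from
the deleted role:

# Hypothetical guard, reusing the check from the deleted set_cgroups role:
# on a cgroups-v2 host, /sys/fs/cgroup is a cgroup2fs mount
- name: confirm host is on cgroups v2
  shell: stat -f -c "%T" /sys/fs/cgroup
  register: fstype

- name: fail early if the host is not on cgroups v2
  assert:
    that:
      - fstype.stdout == "cgroup2fs"
    fail_msg: "stat(/sys/fs/cgroup) = {{ fstype.stdout }} (expected cgroup2fs)"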

@@ -1,21 +1,22 @@
 ---
-# Requires: 'want_cgroups' variable set to 1 or 2
-- include_role:
-    name: set_cgroups
+- name: "podman-remote | install"
+  dnf: name="podman-remote" state=installed
+  when: podman_bin == "podman-remote"
+
 - include_role:
     name: run_bats_tests
   vars:
     tests:
       # Yes, this is horrible duplication, but trying to refactor in ansible
       # yields even more horrible unreadable code. This is the lesser evil.
-      - name: podman root cgroupsv{{ want_cgroups }}
+      - name: "{{ podman_bin }} root"
         package: podman
         environment:
-          PODMAN: /usr/bin/podman
+          PODMAN: /usr/bin/{{ podman_bin }}
           QUADLET: /usr/libexec/podman/quadlet
-      - name: podman rootless cgroupsv{{ want_cgroups }}
+      - name: "{{ podman_bin }} rootless"
         package: podman
         environment:
-          PODMAN: /usr/bin/podman
+          PODMAN: /usr/bin/{{ podman_bin }}
           QUADLET: /usr/libexec/podman/quadlet
         become: true

@@ -24,24 +24,12 @@
 - name: clear test results (results.yml)
   local_action: copy content="results:\n" dest={{ artifacts }}/results.yml
 
-# These are the actual tests: set cgroups vN, then run root/rootless tests.
-#
-# FIXME FIXME FIXME: 2020-05-21: 'loop' should be '2, 1' but there's some
-# nightmarish bug in CI wherein reboots hang forever. There's a bug open[1]
-# but it seems dead. Without a working reboot, there's no way to test v1.
-# [1] https://redhat.service-now.com/surl.do?n=PNT0808530
-# I'm leaving this as a 'loop' in (foolish? vain?) hope that the bug will
-# be fixed. Let's revisit this after, say, 2020-08. If the bug persists
-# then let's just revert the entire cgroups v1 change, and go back to
-# using standard-test-basic.
-- name: set cgroups and run podman tests
-  include_tasks: test_podman_cgroups_vn.yml
-  loop: [ 2 ]
+# These are the actual tests.
+- name: test podman
+  include_tasks: run_podman_tests.yml
+  loop: [ podman, podman-remote ]
   loop_control:
-    loop_var: want_cgroups
-
-- name: test podman-remote
-  include_tasks: test_podman_remote.yml
+    loop_var: podman_bin
 
 - name: test toolbox
   include_tasks: test_toolbox.yml
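
The loop above is how podman and podman-remote now share one task file:
each list entry is bound to podman_bin for one pass through
run_podman_tests.yml. Traced out (names from the diff; ordering assumed):

# Pass 1: podman_bin == "podman"
#   -> bats tests run with PODMAN=/usr/bin/podman
# Pass 2: podman_bin == "podman-remote"
#   -> the dnf task installs podman-remote (guarded by
#      when: podman_bin == "podman-remote"), then the same
#      bats tests run with PODMAN=/usr/bin/podman-remote
- name: test podman
  include_tasks: run_podman_tests.yml
  loop: [ podman, podman-remote ]
  loop_control:
    loop_var: podman_bin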

@@ -1,19 +0,0 @@
----
-- name: "podman-remote | install"
-  dnf: name="podman-remote" state=installed
-
-- include_role:
-    name: run_bats_tests
-  vars:
-    tests:
-      - name: podman-remote root
-        package: podman
-        environment:
-          PODMAN: /usr/bin/podman-remote
-          QUADLET: /usr/libexec/podman/quadlet
-      - name: podman-remote rootless
-        package: podman
-        environment:
-          PODMAN: /usr/bin/podman-remote
-          QUADLET: /usr/libexec/podman/quadlet
-        become: true