Previously the sync in dump_fs was problematic: it always returns
success according to man 2 sync. So it cannot detect errors such as the
dump target being full and not all of the vmcore data having been
written back to the disk, which leaves the vmcore incomplete while a
misleading "saving vmcore complete" message is logged.
In this patch, we use "sync -f vmcore" instead, which returns an error
if syncfs on the dump target fails. In this way, vmcore sync related
failures, such as a failed autoextend of an lvm2 thin pool, can be
detected and handled properly.
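A minimal sketch of the idea (the dump path variable is illustrative):
```
# sync only the filesystem that holds the vmcore and propagate errors,
# instead of a plain "sync" which always reports success
if ! sync -f "$VMCORE_DUMP_PATH/vmcore"; then
    echo "kdump: failed to sync vmcore to the dump target" >&2
    exit 1
fi
```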
Signed-off-by: Tao Liu <ltao@redhat.com>
Reviewed-by: Coiby Xu <coxu@redhat.com>
This patch adds virtiofs support to kexec-tools by introducing a new
option for /etc/kdump.conf:
virtiofs myfs
where myfs is an arbitrary tag name specified on the qemu command line
via "-device vhost-user-fs-pci,tag=myfs".
The patch covers the following cases:
1) Dumping the VM's vmcore to a virtiofs shared directory (an
illustrative config snippet follows below);
2) The VM's rootfs is a virtiofs shared directory and the vmcore is
dumped to one of its subdirectories, such as /var/crash;
3) The combination of cases 1 & 2: the VM's rootfs is a virtiofs shared
directory and the vmcore is dumped to another virtiofs shared
directory.
Cases 2 & 3 need dracut >= 057, otherwise the VM cannot boot from a
virtiofs shared rootfs; that is not an issue of kexec-tools though.
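An illustrative /etc/kdump.conf snippet for case 1, assuming the guest
was started with "-device vhost-user-fs-pci,tag=myfs" (the collector
line is the usual default):
```
virtiofs myfs
path /var/crash
core_collector makedumpfile -l --message-level 7 -d 31
```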
Reviewed-by: Philipp Rudo <prudo@redhat.com>
Signed-off-by: Tao Liu <ltao@redhat.com>
Since commit c5bdd2d8f1 ("kdump-lib: use non-debug kernels first"), the
non-debug kernel is preferred over the debug variant as the dump
capture kernel, to reduce memory consumption. This works fine for kdump
as the capture kernel is loaded using kexec.
In case of fadump, the regular boot loader is used to load the capture
kernel, so the default (booted) kernel needs to be used as the capture
kernel as well. But with commit c5bdd2d8f1, the initrd of a different
kernel is made dump-capture capable, breaking fadump's ability to
capture the dump properly. Fix this by sticking with the debug variant,
i.e. the currently booted kernel, in case of fadump.
Fixes: c5bdd2d8f1 ("kdump-lib: use non-debug kernels first")
Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
Acked-by: Lichen Liu <lichliu@redhat.com>
Acked-by: Coiby Xu <coxu@redhat.com>
When kdump is configured with an NFS location and the remote directory
does not exist, kdump.service fails with a confusing error message.
kdumpctl[2172]: kdump: Dump path "/tmp/mkdumprd.ftWhOF/target/dumps"
does not exist in dump target "10.111.113.2:/srv/kdump"
We just need to print the remote sub-directory "dumps" in such a case,
because "/tmp/mkdumprd.ftWhOF/target" is only the local temporary mount
point.
Signed-off-by: Lichen Liu <lichliu@redhat.com>
Reviewed-by: Coiby Xu<coxu@redhat.com>
Decrease the risk of leaking information that could potentially be used
to exploit the crash further (think of the location of keys).
Signed-off-by: Lichen Liu <lichliu@redhat.com>
Acked-by: Coiby Xu <coxu@redhat.com>
These kdump.sysconfig.* files are almost identical, differing only in a
few parameters, so just use a simple script to generate them upon
packaging. This should make them easier to maintain: updating a comment
or parameter for a certain arch can be done in one place.
There are only some comment or empty-option differences compared with
the generated versions, because some arches' sysconfig files were not
up to date; this actually fixes that issue. I used the following script
to check these differences:
# for arch in aarch64 i386 ppc64 ppc64le s390x x86_64; do
./gen-kdump-sysconfig.sh $arch > kdump.sysconfig.$arch.new
git checkout HEAD^ kdump.sysconfig.$arch &>/dev/null
echo "$arch:"
diff kdump.sysconfig.$arch kdump.sysconfig.$arch.new; echo ""
done; git reset;
Signed-off-by: Kairui Song <kasong@tencent.com>
Reviewed-by: Philipp Rudo <prudo@redhat.com>
So fedpkg will fetch the sources that match the given Fedora version.
Signed-off-by: Coiby Xu <coxu@redhat.com>
Reviewed-by: Philipp Rudo <prudo@redhat.com>
Newer versions of qemu-img require specifying the backing format for
the backing file, otherwise they abort.
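For example (image names are made up), the backing format now has to be
passed explicitly with -F:
```
qemu-img create -f qcow2 -F qcow2 -b fedora-base.qcow2 test-overlay.qcow2
```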
Signed-off-by: Coiby Xu <coxu@redhat.com>
Reviewed-by: Philipp Rudo <prudo@redhat.com>
Fedora 33 and 34 Cloud Base Images have only one partition with the
following directory structure,
.
├── bin -> usr/bin
├── boot
├── dev
├── etc
├── home
├── root
By comparison, Fedora 35, 36 and 37 Cloud Base Images have multiple
partitions. The root partition, which is the last partition, has the
following directory structure,
.
├── home
└── root
├── bin -> usr/bin
├── boot
├── dev
├── etc
├── home
├── root
and the 2nd partition is the boot partition.
This patch addresses the above changes by mounting {LAST_PARTITION}/root
to TEMP_ROOT and mounting SECOND_PARTITION to TEMP_ROOT/boot, so the
test image can be built successfully.
Signed-off-by: Coiby Xu <coxu@redhat.com>
Reviewed-by: Philipp Rudo <prudo@redhat.com>
s390x doesn't use GRUB. To make sure the boot entries are updated, call
zipl after running grubby.
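A minimal sketch of the sequence (the crashkernel value is
illustrative):
```
# update the kernel command line in the boot entries first
grubby --update-kernel=ALL --args="crashkernel=2G-64G:256M,64G-:512M"
# on s390x the boot record must be rewritten for the change to take effect
[[ $(uname -m) == s390x ]] && zipl
```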
Suggested-by: smitterl@redhat.com
Reviewed-by: Philipp Rudo <prudo@redhat.com>
Signed-off-by: Coiby Xu <coxu@redhat.com>
"grubby --zipl" only takes effect when setting default kernel. It's
useless to add "--zipl" when updating kernel command line. Also rename
_update_grub to _update_kernel_cmdline since s390x doesn't use GRUB.
Reviewed-by: Philipp Rudo <prudo@redhat.com>
Signed-off-by: Coiby Xu <coxu@redhat.com>
Resolves: bz2104534
When running "kdumpctl reset-crashkernel --kernel=ALL" on s390x,
sed: can't read /etc/default/grub: No such file or directory
sed: can't read /etc/default/grub: No such file or directory
This happens because s390x doesn't use the grub bootloader and
/etc/default/grub doesn't exist.
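A minimal sketch of the guard (the sed expression and variable are
illustrative):
```
# only touch /etc/default/grub where it actually exists (not on s390x)
if [[ -f /etc/default/grub ]]; then
    sed -i "s/crashkernel=[^ \"]*/crashkernel=$new_crashkernel/" /etc/default/grub
fi
```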
Reported-by: smitterl@redhat.com
Reviewed-by: Philipp Rudo <prudo@redhat.com>
Signed-off-by: Coiby Xu <coxu@redhat.com>
Resolves: bz2092012
According to the ostree team [1], the existence of /run/ostree-booted
> is the most stable way to signal/check that a system has been
> booted in ostree-style. It is also used by rpm-ostree at
> compose/install time in the sandboxed environment where scriptlets run,
> in order to signal that the package is being installed/composed into
> an ostree commit (i.e. not directly on a live system). See
> 8ddf5f40d9/src/libpriv/rpmostree-scripts.cxx (L350-L353)
> for reference.
By checking for the existence of /run/ostree-booted, we can skip trying
to update the kernel cmdline at OSTree compose time.
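A minimal sketch of the guard (the function name is made up):
```
reset_crashkernel_after_update() {
    # /run/ostree-booted exists on booted OSTree systems and inside the
    # rpm-ostree compose sandbox; skip the kernel cmdline update there
    [[ -f /run/ostree-booted ]] && return 0
    # ...otherwise update crashkernel= via grubby as usual
}
```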
[1] https://bugzilla.redhat.com/show_bug.cgi?id=2092012#c3
Reported-by: Luca BRUNO <lucab@redhat.com>
Suggested-by: Luca BRUNO <lucab@redhat.com>
Fixes: 0adb0f4 ("try to reset kernel crashkernel when kexec-tools updates the default crashkernel value")
Signed-off-by: Coiby Xu <coxu@redhat.com>
Reviewed-by: Philipp Rudo <prudo@redhat.com>
Acked-by: Timothée Ravier <siosm@fedoraproject.org>
Resolves: bz2089871
Currently, kexec-tools can't be updated using virt-customize because an
older version of kdumpctl can't acquire the instance lock for the
get-default-crashkernel subcommand. The reason is that /var/lock is
linked to /run/lock, which doesn't exist in the case of virt-customize.
This patch fixes this problem by using /tmp/kdump.lock as the lock
file if /run/lock doesn't exist.
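A rough sketch of the fallback, assuming flock-based locking (file
names are illustrative):
```
if [[ -d /run/lock ]]; then
    _lock_file=/run/lock/kdump
else
    # virt-customize's sandbox has no /run/lock, fall back to /tmp
    _lock_file=/tmp/kdump.lock
fi
exec 9>"$_lock_file"
flock -n 9 || exit 1
```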
Notes:
1. The lock file is now created in /run/lock instead of /var/run/lock
since Fedora has adopted /run [1] since F15.
2. The %pre scriptlet now always returns success so that package
updates won't be blocked.
[1] https://fedoraproject.org/wiki/Features/var-run-tmpfs
Fixes: 0adb0f4 ("try to reset kernel crashkernel when kexec-tools updates the default crashkernel value")
Reported-by: Nicolas Hicher <nhicher@redhat.com>
Suggested-by: Laszlo Ersek <lersek@redhat.com>
Suggested-by: Philipp Rudo <prudo@redhat.com>
Signed-off-by: Coiby Xu <coxu@redhat.com>
Reviewed-by: Philipp Rudo <prudo@redhat.com>
Currently, kdump may fail on some aws aarch64 platforms. The observed
failure is:
[ 79.145089] printk: console [ttyS0] disabled
Then the system no longer responds, and after a reboot there is no
vmcore generated under /var/crash/. See [1] for more detail.
In short, it is caused by the irqpoll policy plus some unknown acpi
issue: the serial device gets hot-removed as a pci device.
In more detail, the irqpoll policy iterates over all interrupt
handlers; if an interrupt line is shared, its handlers are dispatched.
The acpi handler acpi_irq() is on a shared interrupt line, so it is
called. But for some unknown reason the acpi hardware regs hold a wrong
state, and the acpi driver decides that a hot-remove event happened on
a pci slot, which finally removes the pci serial device.
Tackle this issue by removing the irqpoll parameter on the aws aarch64
platform, until the real root cause in acpi is found and resolved.
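A rough sketch of the workaround; detecting the AWS aarch64 platform
via DMI is an assumption here, and KDUMP_COMMANDLINE_APPEND is the
sysconfig variable that normally carries irqpoll:
```
if [[ $(uname -m) == aarch64 ]] &&
   grep -qi amazon /sys/class/dmi/id/bios_vendor 2>/dev/null; then
    # drop irqpoll so acpi_irq() is not spuriously dispatched in the
    # capture kernel
    KDUMP_COMMANDLINE_APPEND=${KDUMP_COMMANDLINE_APPEND//irqpoll/}
fi
```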
[1]: https://bugzilla.redhat.com/show_bug.cgi?id=2080468#c0
Signed-off-by: Pingfan Liu <piliu@redhat.com>
Acked-by: Coiby Xu <coxu@redhat.com>
Resolves: bz2106645
The code of commit 163c02970e took effect in RHEL first and was later
pulled into Fedora. However, Fedora doesn't ship 40-redhat.rules in its
systemd-udev package, so with that commit applied a false positive
warning message is always seen, as shown below. Fix it by checking
whether 40-redhat.rules exists before handling it. With this change,
the false warning is gone.
[root@ ~]# kdumpctl restart
kdump: kexec: unloaded kdump kernel
kdump: Stopping kdump: [OK]
kdump: No kdump initial ramdisk found.
kdump: Rebuilding /boot/initramfs-5.19.0-rc6+kdump.img
sed: can't read /var/tmp/dracut.NnAV2g/initramfs/usr/lib/udev/rules.d/40-redhat.rules: No such file or directory
kdump: kexec: loaded kdump kernel
kdump: Starting kdump: [OK]
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Pingfan Liu <piliu@redhat.com>
The CoreOS kernel is not in the standard locations; add /boot/ostree/*
to boot_dirlist so the vmlinuz can be found.
Signed-off-by: Lichen Liu <lichliu@redhat.com>
Acked-by: Coiby Xu <coxu@redhat.com>
Currently $boot_img can get bad data when running on a platform that
doesn't set BOOT_IMAGE in the kernel command line. For example:
- s390x Fedora CoreOS machine:
```
[root@cosa-devsh ~]# sed "s/^BOOT_IMAGE=\((\S*)\)\?\(\S*\) .*/\2/" /proc/cmdline
mitigations=auto,nosmt ignition.platform.id=qemu ostree=/ostree/boot.0/fedora-coreos/2a72567ac8f7ed678c3ac89408f795e6ccd4e97b41e14af5f471b6a807e858b9/0 root=UUID=2a88436a-3b6b-4706-b33a-b8270bd87cde rw rootflags=prjquota boot=UUID=f4b2eaa5-9317-4798-85cf-308c477fee4c crashkernel=600M
```
whereas on a platform that uses GRUB we get:
- x86_64 Fedora CoreOS machine:
```
[root@cosa-devsh ~]# sed "s/^BOOT_IMAGE=\((\S*)\)\?\(\S*\) .*/\2/" /proc/cmdline
/ostree/fedora-coreos-af4f6cc7b9ff486cfa647680b180e989c72c8eed03a34a42e7328e49332bd20e/vmlinuz-5.18.5-200.fc36.x86_64
```
We should change the setting of the boot_img variable such that it will
be empty if BOOT_IMAGE doesn't exist.
With this change on the s390x machine:
```
[root@cosa-devsh ~]# grep -P -o '^BOOT_IMAGE=(\S+)' /proc/cmdline | sed "s/^BOOT_IMAGE=\((\S*)\)\?\(\S*\)/\2/"
[root@cosa-devsh ~]#
```
This change mattered much more before c5bdd2d, which changed the
following line from [[ -n $boot_img ]] to
[[ "$boot_img" == *"$kdump_kernelver" ]]. Still, I think this change
has merit.
Signed-off-by: Dusty Mabe <dusty@dustymabe.com>
Acked-by: Coiby Xu <coxu@redhat.com>
There are many variants of OSTree based systems these days, so we
should probably refer to the class of systems as "OSTree based
systems". Also, Atomic Host is dead.
Signed-off-by: Dusty Mabe <dusty@dustymabe.com>
Acked-by: Coiby Xu <coxu@redhat.com>
On RHEL9 and Fedora, the arm64 platform only supports a 4KB page size,
so the reserved memory size can be aligned with that of x86_64.
Introduce a new formula for 4KB arm64, which is based on the x86_64 one
plus an extra 64MB.
Signed-off-by: Pingfan Liu <piliu@redhat.com>
Acked-by: Baoquan He <bhe@redhat.com>
Kdump uses the currently running kernel as the default, but when the
currently running kernel is a debug kernel, it consumes more memory,
which may cause out-of-memory failures and a failed vmcore collection.
Now we try to use non-debug kernels first if possible.
Also extract the logic of determining KDUMP_KERNEL from
prepare_kdump_bootinfo into a function, which returns KDUMP_KERNEL
given a kernel version.
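A rough sketch of the preference, assuming debug kernels are identified
by a "+debug" suffix (the helper name is made up):
```
_get_kdump_kernel_path() {
    local _ver=$1
    # prefer the lighter non-debug variant when it is installed
    if [[ $_ver == *+debug && -e /boot/vmlinuz-${_ver%+debug} ]]; then
        _ver=${_ver%+debug}
    fi
    echo "/boot/vmlinuz-$_ver"
}
```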
Signed-off-by: Lichen Liu <lichliu@redhat.com>
Acked-by: Coiby Xu <coxu@redhat.com>
For CoreOS based systems we use Ignition for provisioning machines
in the initramfs on first boot. We trigger Ignition right now by
the presence of `ignition.firstboot` in the kernel command line. The
kernel argument is only present on first boot so after a reboot it
no longer is in the kernel command line.
If a kernel crash happens before the first reboot of a machine we
want the `ignition.firstboot` kernel argument to be removed and not
passed on to the crash kernel.
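A minimal sketch of dropping the argument while assembling the crash
kernel command line:
```
# remove ignition.firstboot so the capture kernel does not re-run Ignition
cmdline=$(sed 's/\bignition\.firstboot\b//g' /proc/cmdline)
```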
This patch rewrites get_recommend_size to get rid of the following
limitations (a rough sketch of the new matching follows the list):
1. it only supports crashkernel ranges sorted in increasing order
2. the first entry of crashkernel has to be a single digit in gigabytes
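A rough sketch of order-independent range matching; crashkernel syntax
like "1G-4G:192M,4G-64G:256M,64G-:512M" is assumed, and _to_bytes is a
made-up helper that converts sizes such as "4G" into plain bytes:
```
get_recommend_size() {
    local _mem _entry _range _size _start _end
    _mem=$(_to_bytes "$1")              # total system memory
    for _entry in ${2//,/ }; do         # iterate the crashkernel ranges
        _range=${_entry%%:*}
        _size=${_entry#*:}
        _start=$(_to_bytes "${_range%-*}")
        _end=${_range#*-}               # empty for open-ended ranges
        (( _mem < _start )) && continue
        [[ -n $_end ]] && (( _mem >= $(_to_bytes "$_end") )) && continue
        echo "$_size"
        return 0
    done
    echo 0M
}
```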
Suggested-by: Philipp Rudo <prudo@redhat.com>
Signed-off-by: Coiby Xu <coxu@redhat.com>
Reviewed-by: Philipp Rudo <prudo@redhat.com>
Recently it was found that 'kdumpctl estimate' returned 512M in a case
where the system reserves 1024M of kdump memory. This happens because
the ranges in /proc/iomem are inclusive. For example, "0-1: System RAM"
means 2 bytes of system memory rather than 1 byte. Fix this error by
adding one more byte.
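A small illustration of the off-by-one using gawk (not the actual kdump
code):
```
# sum up System RAM; each "start-end" range is inclusive, hence the +1
awk '/System RAM/ {
         split($1, r, "-")
         total += strtonum("0x" r[2]) - strtonum("0x" r[1]) + 1
     }
     END { printf "%d\n", total }' /proc/iomem
```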
Notes:
1. The function has been simplified as well.
2. PROC_IOMEM is defined as /proc/iomem for the sake of unit tests.
Reported-by: Ruowen Qin <ruqin@redhat.com>
Fixes: 1813189 ("kdump-lib.sh: introduce functions to return recommened mem size")
Suggested-by: Philipp Rudo <prudo@redhat.com>
Signed-off-by: Coiby Xu <coxu@redhat.com>
Reviewed-by: Philipp Rudo <prudo@redhat.com>
Make log saving the last step of kdump.sh so it can catch more info;
for example, the output of the post.d hooks is now covered by the log.
Signed-off-by: Kairui Song <kasong@tencent.com>
Reviewed-by: Philipp Rudo <prudo@redhat.com>
1. yum is deprecated so use dnf instead
2. use the "kdumpctl reset-crashkernel --fadump=on" API
Reviewed-by: Philipp Rudo <prudo@redhat.com>
Signed-off-by: Coiby Xu <coxu@redhat.com>
1. yum is deprecated so use dnf instead
2. use the "kdumpctl reset-crashkernel" API
3. ask the users to refer to crashkernel-howto.txt for setting custom
crashkernel value
4. fix a typo
Philipp Rudo <prudo@redhat.com>
Signed-off-by: Coiby Xu <coxu@redhat.com>
1. clean up left crashkernel.default
2. fix a few typos and grammar mistakes
3. ask the users to refer to `man kdumpctl` for reset-crashkernel
Philipp Rudo <prudo@redhat.com>
Signed-off-by: Coiby Xu <coxu@redhat.com>
This test prevents the mistake of adding an option to kdump.conf
without changing check_config, as was the case with commit 73ced7f
("introduce the auto_reset_crashkernel option to kdump.conf").
Reviewed-by: Philipp Rudo <prudo@redhat.com>
Signed-off-by: Coiby Xu <coxu@redhat.com>
kdump_get_conf_val retrieves a config value defined in kdump.conf, and
it also supports sed-style regexes like
"ext[234]\|xfs\|btrfs\|minix\|raw\|nfs\|ssh".
Reviewed-by: Philipp Rudo <prudo@redhat.com>
Signed-off-by: Coiby Xu <coxu@redhat.com>
This commit adds a relatively thorough test suite for
kdumpctl reset-crashkernel [--fadump=[on|off|nocma]] [--kernel=path_to_kernel] [--reboot]
as implemented in commit 140da74 ("rewrite reset_crashkernel to support
fadump and to used by RPM scriptlet").
grubby has a few options that help with testing:
- --no-etc-grub-update, don't update /etc/default/grub
- --bad-image-okay, don't check the validity of the image
- --env, specify custom grub2 environment block file to avoid modifying
the default /boot/grub2/grubenv
- --bls-directory, specify custom BootLoaderSpec config files to avoid
modifying the default /boot/loader/entries
So the grubby called by kdumpctl is mocked as
@grubby --grub2 --no-etc-grub-update --bad-image-okay --env=$SPEC_TEST_DIR/env_temp -b $SPEC_TEST_DIR/boot_load_entries "$@"
in the tests. To be able to call the actual grubby in the mock function [1],
ShellSpec provides the following command
$ shellspec --gen-bin @grubby
to generate spec/support/bins/@grubby which is used to call the actual grubby.
kdumpctl has implemented its own version of updating /etc/default/grub
in _update_kernel_cmdline_in_grub_etc_default. To avoid writing to
/etc/default/grub, this function is mocked to output its name and the
arguments it received, similar to Python unittest's assert_called_with.
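Roughly, the two mocks might look like this in the spec file (the exact
form is an assumption based on ShellSpec's Mock syntax):
```
Mock grubby
  @grubby --grub2 --no-etc-grub-update --bad-image-okay \
          --env="$SPEC_TEST_DIR/env_temp" -b "$SPEC_TEST_DIR/boot_load_entries" "$@"
End

Mock _update_kernel_cmdline_in_grub_etc_default
  echo "_update_kernel_cmdline_in_grub_etc_default $*"
End
```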
[1] https://github.com/shellspec/shellspec#execute-the-actual-command-within-a-mock-function
Reviewed-by: Philipp Rudo <prudo@redhat.com>
Signed-off-by: Coiby Xu <coxu@redhat.com>
AfterAll is an example group hook [1] which is run after the group's
tests have been executed. Use this hook to clean up the files created
by mktemp.
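Roughly (the group name and directory variable are illustrative):
```
Describe 'kdumpctl reset-crashkernel'
  SPEC_TEST_DIR=$(mktemp -d)
  remove_spec_test_dir() { rm -rf "$SPEC_TEST_DIR"; }
  AfterAll 'remove_spec_test_dir'
End
```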
[1] https://github.com/shellspec/shellspec#beforeall-afterall---example-group-hook
Reviewed-by: Philipp Rudo <prudo@redhat.com>
Signed-off-by: Coiby Xu <coxu@redhat.com>
Currently there are two issues with unit-testing the functions defined
in kdumpctl and other shell scripts after sourcing them,
- kdumpctl would call main, which requires root permission and would
create the single-instance lock (/var/lock/kdump)
- kdumpctl and other shell scripts directly source files under /usr/lib/kdump/
When ShellSpec loads a script via "Include", it defines the __SOURCED__
variable. By making use of __SOURCED__, we can
1. let kdumpctl not call main when kdumpctl is "Include"d by ShellSpec
(sketched below)
2. instruct kdumpctl and kdump-lib.sh to source the files in the repo
when running ShellSpec tests
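A minimal sketch of point 1, at the bottom of kdumpctl:
```
# __SOURCED__ is only set when ShellSpec "Include"s the script, so main
# still runs for normal invocations
if [[ -z ${__SOURCED__:-} ]]; then
    main "$@"
fi
```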
Note coverage/ is added to .gitignore because ShellSpec generates code
coverage results in this folder.
Reviewed-by: Philipp Rudo <prudo@redhat.com>
Signed-off-by: Coiby Xu <coxu@redhat.com>
Make use of the new ${OPT[]} array and simplify local_fs_dump_target to
remove one more file operation.
While at it, rename local_fs_dump_target to is_local_target.
Signed-off-by: Philipp Rudo <prudo@redhat.com>
Reviewed-by: Tao Liu <ltao@redhat.com>
Reviewed-by: Coiby Xu <coxu@redhat.com>
With the introduction of ${OPT[fstype]} this call to kdump_get_conf_val
can be removed now as well.
Signed-off-by: Philipp Rudo <prudo@redhat.com>
Reviewed-by: Tao Liu <ltao@redhat.com>
Reviewed-by: Coiby Xu <coxu@redhat.com>
The variable is only used for ssh dump targets. Furthermore it is
identical to the value stored in ${OPT[_target]}. Thus drop DUMP_TARGET and
use ${OPT[_target]} instead.
In order to be able to distinguish between the different target types
introduce the internal ${OPT[_fstype]}.
Signed-off-by: Philipp Rudo <prudo@redhat.com>
Reviewed-by: Tao Liu <ltao@redhat.com>
Reviewed-by: Coiby Xu <coxu@redhat.com>