When no target is specified explicitly in /etc/kdump.conf, the behavior
of "path" has changed: kdump now checks whether a separate file system
is mounted on any tier of "path" and takes the relevant action. Update
the path section of kdump.conf and its man page accordingly.
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
When no target is specified explicitly in /etc/kdump.conf, the behavior
of "path" has changed: kdump now checks whether a separate file system
is mounted on any tier of "path" and takes the relevant action. Update
the path section accordingly.
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
If the default target is a separate disk, the related information needs
to be stored in the /etc/kdump.conf inside the kdump initramfs. This
includes the disk info, which helps deduce the dump_code and the path
the vmcore will be written to.
v5->v7:
No v6 for this patch. Just use the newly introduced function
is_fs_type_nfs in default_dump_target_install_conf().
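A rough sketch of what such a function could look like (helper and
variable names are illustrative, not the exact implementation; ${initdir}
is dracut's staging directory):

default_dump_target_install_conf()
{
    local _path=${SAVE_PATH:-/var/crash}
    local _target _mntpoint _fstype

    # nothing to do if the user configured a dump target explicitly
    is_user_configured_dump_target && return

    _target=$(findmnt -n -o SOURCE --target "$_path")
    _mntpoint=$(findmnt -n -o TARGET --target "$_path")

    # if "path" lives on the root file system, no separate disk info to store
    [ "$_mntpoint" = "/" ] && return

    _fstype=$(get_fs_type_from_target "$_target")

    # record the deduced target plus the path relative to its mount point
    # in the kdump.conf that is packed into the initramfs
    echo "$_fstype $_target" >> "${initdir}/etc/kdump.conf"
    echo "path ${_path#$_mntpoint}" >> "${initdir}/etc/kdump.conf"
}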
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
When the user does not specify a dump target explicitly, it is better to
dump to the "path" specified. That means after the dump, when the user
enters the 1st kernel again, the vmcore can be found in "path". If that
path is in the root fs, the vmcore is stored in the root fs. If a
separate disk is mounted on any tier of "path", we dump the vmcore into
the remaining part of "path" on that separate disk.
E.g. in kdump.conf:
path /mnt/nfs
and in the mount info:
/dev/vdb on /mnt type ext4 (rw,relatime,seclabel,data=ordered)
Then the vmcore will be saved under /nfs on /dev/vdb.
In this patch, pass the mount info to dracut in this case, i.e. when a
separate disk is mounted on any tier of "path".
Meanwhile, introduce a function in kdump-lib.sh to check whether any
target is specified, as sketched below.
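A minimal sketch of that check, using the helper names mentioned in the
version notes below (not the exact code):

is_fs_dump_target()
{
    # any local file system target configured in /etc/kdump.conf?
    grep -E -q "^(ext[234]|xfs|btrfs|minix)[[:space:]]" /etc/kdump.conf
}

is_ssh_dump_target()
{
    grep -q "^ssh[[:space:]]" /etc/kdump.conf
}

is_nfs_dump_target()
{
    grep -q "^nfs[[:space:]]" /etc/kdump.conf
}

is_raw_dump_target()
{
    grep -q "^raw[[:space:]]" /etc/kdump.conf
}

is_user_configured_dump_target()
{
    is_fs_dump_target || is_ssh_dump_target || is_nfs_dump_target || is_raw_dump_target
}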
v4->v5:
Per Vivek's comment, add a helper function is_fs_dump_target.
Then is_user_configured_dump_target is rewritten using these helper
functions.
v5->v7:
No v6 for this patch. Just use the newly introduced function
is_fs_type_nfs in handle_default_dump_target.
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Formerly kdump created the directory specified in "path" if it did not
exist. Now change the behavior so that the user takes charge of "path"
and must make sure it has been created, especially when a separate disk
is mounted on this "path".
Also introduce 2 helper functions to check the existence of the path.
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
These utility functions will be shared by several files; they are all
operations related to mounting.
Meanwhile, define DEFAULT_PATH="/var/crash".
v5->v6:
Since nfs4 has become the default NFS version in RHEL7 and its fstype is
nfs4, change the implementation of get_fs_type_from_target(): whatever
fstype findmnt returns, just echo nfs as the fstype for all NFS
versions.
v6->v7:
Introduce is_fs_type_nfs to check whether the fstype is nfs or nfs4, per
Vivek's idea.
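A sketch of the two helpers, assuming findmnt is available (not
necessarily the exact code):

is_fs_type_nfs()
{
    # nfs4 is the default NFS version on RHEL7, so treat it the same as nfs
    [ "$1" = "nfs" ] || [ "$1" = "nfs4" ]
}

get_fs_type_from_target()
{
    local _fstype
    _fstype=$(findmnt -f -n -r -o FSTYPE "$1")
    # report plain "nfs" for every NFS version
    if is_fs_type_nfs "$_fstype"; then
        echo nfs
    else
        echo "$_fstype"
    fi
}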
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
kdump-dep-generator is a systemd generator, used to write out kdump
service dependencies.
Currently it's only useful for the ssh dump case. In the ssh dump case,
it writes out a dependency so that kdump.service "Wants"
network-online.target:
# ls -l /run/systemd/generator/kdump.service.wants/
[..] network-online.target -> /usr/lib/systemd/system/network-online.target
So kdump.service will pull in network-online.target and delay its start
until network-online.target is reached.
In the future, we could use the generator to write out kdump.service
dynamically and get rid of the statically defined kdump.service
altogether.
v1->v2:
Vivek: do not use a hardcoded runtime generator path; use what systemd passes in.
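A minimal sketch of such a generator (the config check and paths are
illustrative):

#!/bin/bash
# systemd passes three output directories; the first one is the
# normal-priority generator directory we should write into.
normal_dir=$1
wants_dir=$normal_dir/kdump.service.wants

# only the ssh dump case needs the network to be online
if grep -q "^ssh[[:space:]]" /etc/kdump.conf; then
    mkdir -p "$wants_dir"
    ln -sf /usr/lib/systemd/system/network-online.target "$wants_dir/"
fi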
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Adds two new options to kdump.conf to be able to configure fence_kdump
support for generic clusters:
fence_kdump_args <arg(s)>
- Command line arguments for fence_kdump_send (it can contain all
valid arguments except hosts to send notification to)
fence_kdump_nodes <node(s)>
- List of cluster node(s) separated by space to send fence_kdump
notification to (this option is mandatory to enable fence_kdump)
The generic cluster fence_kdump configuration takes precedence over the
older method of fence_kdump configuration for Pacemaker clusters. This
means that if fence_kdump is configured using the above options in
kdump.conf, the old Pacemaker configuration is not used even if it
exists.
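For example, in /etc/kdump.conf (node names and argument values are
illustrative only):

fence_kdump_nodes node1.example.com node2.example.com
fence_kdump_args -p 7410 -f auto -c 0 -i 10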
Bug-Url: https://bugzilla.redhat.com/1078134
Signed-off-by: Martin Perina <mperina@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Adds a get_option_value() function to retrieve the value of a specified
option from /etc/kdump.conf.
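A sketch of what the helper might look like (the actual implementation
may differ):

get_option_value()
{
    # print the value of the last occurrence of option $1 in /etc/kdump.conf,
    # i.e. everything after the option name on that line
    grep "^$1[[:space:]]" /etc/kdump.conf | tail -1 | sed "s/^$1[[:space:]]*//"
}

So, for example, get_option_value fence_kdump_nodes would print the
configured node list.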
Bug-Url: https://bugzilla.redhat.com/1078134
Signed-off-by: Martin Perina <mperina@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Renames check_fence_kdump to check_pcs_fence_kdump to clearly identify
that this method checks only Pacemaker configuration of fence_kdump.
Bug-Url: https://bugzilla.redhat.com/1078134
Signed-off-by: Martin Perina <mperina@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Renames kdump_check_fence_kdump to kdump_configure_fence_kdump to
clearly identify what this function does.
Bug-Url: https://bugzilla.redhat.com/1078134
Signed-off-by: Martin Perina <mperina@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Renames is_fence_kdump to is_pcs_fence_kdump to identify that this
method should be used to detect fence_kdump configuration only in
Pacemaker clusters.
Bug-Url: https://bugzilla.redhat.com/1078134
Signed-off-by: Martin Perina <mperina@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Renames FENCE_KDUMP_NODES variable to FENCE_KDUMP_NODES_FILE to
distinguish it from values read from fence_kdump_nodes option in
kdump.conf (introduced in following patches).
Bug-Url: https://bugzilla.redhat.com/1078134
Signed-off-by: Martin Perina <mperina@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Renames FENCE_KDUMP_CONFIG variable to FENCE_KDUMP_CONFIG_FILE to
distinguish it from values read from fence_kdump_args option in
kdump.conf (introduced in following patches).
Bug-Url: https://bugzilla.redhat.com/1078134
Signed-off-by: Martin Perina <mperina@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Add a README file which explains the process for kexec-tools patch
inclusion and where one should post patches for review. This should help
remove some confusion.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Now, when a dump target is not specified, a separate disk cannot be
mounted on "path", e.g. /var/crash. However, if a target is specified,
mkdumprd should go ahead and not fail, whatever the default failure
action is set to.
In check_block_dump_target(), checking only the disk case is not
complete; NFS and ssh need to be filtered too. So introduce
is_user_configured_dump_target to check this.
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
In an extreme case vmcore-dmesg will overflow. Upstream has fixed the
problem, so simply backport it.
Signed-off-by: Arthur Zou <zzou@redhat.com>
Acked-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
When the kdump service is started, /sbin/mkdumprd checks whether there
is enough free space on the ssh server using the "df -P" command.
However, a slight difference in the first line of the "df -P" output
between English and Japanese environments causes a problem.
-----
# LANG=en_us.utf8 df -P | head -1
Filesystem 1024-blocks Used Available Capacity Mount on
# LANG=ja_JP.utf8 df -P | head -1
ファイルシス 1024-ブロック 使用 使用可 容量 マウント位置
-----
Because the header has 7 words in the English output and 6 in the
Japanese one, the subsequent awk command could not correctly locate the
free-space field and resulted in an error.
One way to fix it would be "df -P /var/crash | tail -1", but restricted
remote shells do not support pipes. So fix this by printing field NF-2
in the awk script instead.
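A sketch of the idea (the target path and ssh destination are
illustrative; the pipe runs locally, so a restricted remote shell only
has to execute df itself):

# the Available column is always the third field from the end of the data
# line, no matter how many words the localized header contains
avail=$(ssh kdump@server df -P /var/crash | awk 'END { print $(NF-2) }')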
Signed-off-by: Dave Young <dyoung@redhat.com>
Acked-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Let's omit the resume module when building the kdump initramfs, because:
- kdump doesn't want to resume
- it would pull in the swap device dependencies
Tested on Fedora 20. This change doesn't break anything.
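In mkdumprd terms the change roughly amounts to passing the module to
dracut's omit list, e.g. (arguments are illustrative only):

dracut --hostonly --omit "resume" /boot/initramfs-$(uname -r)kdump.img $(uname -r)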
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
kdump now dumps the vmcore to the root partition by default, into the
SAVE_PATH directory, e.g. /var/crash. This is problematic when another
disk is mounted on /var or /var/crash, because the saved vmcore will
be hidden after the dump in the 1st kernel. This also has the potential
of blindly filling the root file system without a clue as to why.
Now fix this by failing the loading of the kdump kernel if the dump
target defaults to the root fs while a different disk is mounted on the
save path.
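A rough sketch of the new check (function and variable names are
illustrative):

check_default_dump_target()
{
    local _path=${SAVE_PATH:-/var/crash}
    local _path_dev _root_dev

    # only relevant when the user did not configure a target explicitly
    is_user_configured_dump_target && return 0

    _path_dev=$(findmnt -n -o SOURCE --target "$_path")
    _root_dev=$(findmnt -n -o SOURCE --target /)

    if [ "$_path_dev" != "$_root_dev" ]; then
        echo "Dump path $_path is on a separate disk but no dump target is specified in /etc/kdump.conf."
        return 1
    fi
    return 0
}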
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Move the invocation of check_resettable() together with all the other
function invocations. This makes the flow of the script clearer and
more readable.
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
In the function get_block_dump_target(), the code blocks that get the
user-configured dump disk and the root fs device can be reused
elsewhere. Now abstract and wrap them into 2 new functions:
get_user_configured_dump_disk()
get_root_fs_device()
And put them into kdump-lib.sh.
Meanwhile, change get_block_dump_target() accordingly.
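A rough sketch of the two new helpers (not the exact code):

get_user_configured_dump_disk()
{
    # the block device the user configured via a filesystem or raw line
    awk '$1 ~ /^(ext[234]|xfs|btrfs|minix|raw)$/ { print $2; exit }' /etc/kdump.conf
}

get_root_fs_device()
{
    findmnt -f -n -o SOURCE /
}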
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
The old implementation in installkernel() would not return success when
the added wdt module was not iTCO_wdt, because the return value came
from the comparison. This is not correct and caused kdump loading to
fail.
Fix this by moving the actual wdt module installation to the right
place.
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
== Version 2 ==
Addresses Vivek's review comments:
1. Don't force numeric in the awk script snippet.
2. Command line processing is moved from load_kernel to a new function,
"prepare_cmdline". This new function is responsible for setting up
the command line passed to KEXEC.
3. A new function, "append_cmdline", is added to append an {argument,value}
pair to the command line if the argument is not already present.
== Version 1 ==
A recent patch (https://lkml.org/lkml/2014/1/15/42) enables multiple
processors in the crash kernel.
To do this safely the crash kernel needs to know which CPU was the 1st
kernel BSP (bootstrap processor) so that the crash kernel will NOT send
the BSP an INIT. If the crash kernel sends an INIT to the 1st kernel
BSP, some systems may reset or hang.
The EFI spec doesn't require that any particular processor be chosen as
the BSP, and the CPU (and its apic id) can change from one boot to the
next. Hence it is desirable to automate the selection of the CPU to
disable if the system would otherwise panic.
This patch updates the kdumpctl script to get the "initial apicid"
of CPU 0 in the first kernel and pass it as the
"disable_cpu_apicid=" argument to kexec if it wasn't explicitly
set in KDUMP_COMMANDLINE_APPEND in /etc/sysconfig/kdump.
CPU 0 is chosen as it is the processor that executes the OS
initialization code and hence was the BSP, per the x86 SDM
(Vol 3a, Section 8.4).
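A sketch of how kdumpctl could read that value from /proc/cpuinfo (the
function name and exact awk are illustrative, not necessarily the
patch's code):

get_bootcpu_apicid()
{
    awk '
        $1 == "processor"             { cpu = $NF }
        cpu == 0 && /^initial apicid/ { print $NF; exit }
    ' /proc/cpuinfo
}

# appended later, unless the user already set it in KDUMP_COMMANDLINE_APPEND:
# KDUMP_COMMANDLINE_APPEND="$KDUMP_COMMANDLINE_APPEND disable_cpu_apicid=$(get_bootcpu_apicid)"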
See associated Red Hat Bugzilla(s) for additional background material:
https://bugzilla.redhat.com/show_bug.cgi?id=1059031
https://bugzilla.redhat.com/show_bug.cgi?id=980621
Signed-off-by: Jerry Hoemann <jerry.hoemann@hp.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Description:
Currently kdumpctl fails to create the kdump initramfs and start the
kdump service when the dump target is encrypted. This restriction is
too strict.
Resolution:
Just warn the user that an encrypted device is in the dump path and
that the second kernel will wait on the console for the password to be
entered.
Signed-off-by: arthur <zzou@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Description of problem:
Previously, selinux in enforcing mode could prevent ssh-keygen from
generating keys. Support in the selinux policy for allowing applications
to use ssh-keygen to generate ssh keys was added in
selinux-policy-3.7.19-126.el6_2.6.
Solution:
Because the context for ssh key generation has been added, the keys can
now be generated without flipping from enforcing to permissive mode.
This patch removes the selinux code which switched between enforcing
and permissive modes.
Signed-off-by: arthur <zzou@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Move the code that checks /sys/kernel/kexec_crash_loaded into the
function check_kdump_feasibility(), and rename status() to
check_current_kdump_status() so these two functions become clearer.
Clean up the kdumpctl status path as well.
Tested with a kernel without CONFIG_KEXEC.
Tested by running kdumpctl start twice.
Signed-off-by: Dave Young <dyoung@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Kdump does not support secure boot yet, so let's state that it is not
supported at the beginning of the service start function.
In this patch, the secure boot status is checked via the efivars, per a
suggestion from pjones; see the code comments for the details.
Tested on Fedora 19 + qemu OVMF with secure boot enabled.
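A sketch of the efivars check (the variable path and byte layout are as
I understand them; treat this as illustrative, not the patch's exact
code):

is_secure_boot_enforced()
{
    local var

    for var in /sys/firmware/efi/efivars/SecureBoot-*; do
        [ -f "$var" ] || continue
        # the first 4 bytes are the variable attributes, the 5th byte is
        # the SecureBoot state: 1 means enabled
        if [ "$(hexdump -v -e '/1 "%d "' "$var" | awk '{ print $5 }')" = "1" ]; then
            return 0
        fi
    done
    return 1
}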
Signed-off-by: Dave Young <dyoung@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
When a watchdog is enabled in the 1st kernel, the crash dump in the
kdump kernel will be interrupted if the watchdog times out. Since some
wdt drivers can stop the watchdog when the driver is loaded,
e.g. iTCO_wdt, this benefits the crash dump.
Add the watchdog driver that is active in the system to the initramfs;
loading it can stop the watchdog.
For now, put this addition in 99kdumpbase.
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
In the ssh dump case, we use a random seed to feed /dev/urandom. Since
the systemd random-seed file could change location, it's better that we
create our own random-seed file.
The discussion is listed below for future reference:
https://lists.fedoraproject.org/pipermail/kexec/2014-January/000340.html
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
This is a backport of the following upstream commit:
commit 4404368
Author: WANG Chao <chaowang@redhat.com>
Date: Wed Dec 18 22:34:43 2013 +0900
[PATCH] memset() in cyclic bitmap initialization introduce segment fault.
We are using memset() to improve performance when creating the 1st and
2nd bitmaps. After rounding up pfn_start and rounding down pfn_end, it's
possible that pfn_start_roundup is greater than pfn_end_round. A
segmentation fault could happen in that case because memset is taking
roughly the value of (pfn_end_round >> 3 - pfn_start_roundup >> 3),
which is negative, as its third argument.
So we can skip the memset if start is greater than end. It's safe
because we will set the bits for the rounded-up part and also the
rounded-down part.
Actually this happens on my EFI virtual machine:
cat /proc/iomem:
00000000-00000fff : reserved
00001000-0009ffff : System RAM
000a0000-000bffff : PCI Bus 0000:00
000f0000-000fffff : System ROM
00100000-3d162017 : System RAM
01000000-015cab9b : Kernel code
015cab9c-019beb3f : Kernel data
01b4f000-01da9fff : Kernel bss
30000000-37ffffff : Crash kernel
3d162018-3d171e57 : System RAM
3d171e58-3d172017 : System RAM
3d172018-3d17ae57 : System RAM
3d17ae58-3dc10fff : System RAM
3dc11000-3dc18fff : reserved
3dc19000-3dc41fff : System RAM
3dc42000-3ddcefff : reserved
3ddcf000-3f7fefff : System RAM
3f7ff000-3f856fff : reserved
[..]
gdb ./makedumpfile core
(gdb) bt full
[..]
#1 0x000000000042775d in create_1st_bitmap_cyclic () at makedumpfile.c:4543
i = 0x5
pfn = 0x3d190
phys_start = 0x3d18ee58
phys_end = 0x3d18f018
pfn_start = 0x3d18e
pfn_end = 0x3d18f
pfn_start_roundup = 0x3d190
pfn_end_round = 0x3d188
pfn_start_byte = 0x7a32
pfn_end_byte = 0x7a31
[..]
(gdb) list makedumpfile.c:4543
4538 return FALSE;
4539
4540 pfn_start_byte = (pfn_start_roundup - info->cyclic_start_pfn) >> 3;
4541 pfn_end_byte = (pfn_end_round - info->cyclic_start_pfn) >> 3;
4542
4543 memset(info->partial_bitmap2 + pfn_start_byte,
4544 0xff,
4545 pfn_end_byte - pfn_start_byte);
4546
4547 for (pfn = pfn_end_round; pfn < pfn_end; ++pfn)
Signed-off-by: WANG Chao <chaowang@redhat.com>
This patch fixes segmentation fault issues on systems with a very small
memory map range (less than 8 pages).
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
When the kdump kernel boots, it is booted with memmap= parameters, which
are added to the e820 map. Then ACPI is initialized and the kernel
traverses the ACPI namespace to find entries for memory devices to
hot-add. This adds page table information, and the kexec/kdump kernel
runs out of memory.
So in the kdump kernel, memory hotplug must always be disabled; only the
exact map is trusted. Now add the kernel parameter acpi_no_memhotplug
to the kdump kernel cmdline.
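In kdumpctl this boils down to appending the parameter when building the
2nd kernel command line, roughly like the following sketch (following
the prepare_cmdline convention mentioned earlier; not the exact code):

prepare_cmdline()
{
    local cmdline="$KDUMP_COMMANDLINE $KDUMP_COMMANDLINE_APPEND"

    # never trust hot-plugged ACPI memory device entries in the kdump
    # kernel; only the exact memmap= ranges are valid
    cmdline="$cmdline acpi_no_memhotplug"
    echo "$cmdline"
}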
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
In the remote dump case, if fence kdump is configured, chances are
that the same network interface will be set up more than once: once for
the network dump, and again for fence kdump. The result is that we end
up with two or more duplicate ip= configurations in 40ip.conf.
These are exact duplicates; however, dracut will refuse to continue and
raises a fatal error if there is duplicate configuration for the same
interface. So we have to avoid adding these duplicates.
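A sketch of the dedup (file path and variable names are illustrative):

_ip_conf="${initdir}/etc/cmdline.d/40ip.conf"
_ip_opts="ip=${_ipaddr}::${_gateway}:${_netmask}::${_netdev}:none"

# only add the line if an identical one is not already there, so dracut
# never sees duplicate configuration for the same interface
grep -q -x -F "$_ip_opts" "$_ip_conf" 2>/dev/null || echo "$_ip_opts" >> "$_ip_conf"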
Signed-off-by: WANG Chao <chaowang@redhat.com>
Tested-by: Zhi Zou <zzou@redhat.com>
Tested-by: Marek Grac <mgrac@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
This patch sets up the fence kdump environment when building the kdump
initrd:
1. Check whether it is a cluster and fence_kdump is configured.
2. Get all the nodes in the cluster and pass them to 2nd kernel via
/etc/fence_kdump_nodes
3. Set up the network interface which will be used by the fence kdump
notifier in the 2nd kernel.
4. Install fence kdump notifier (/usr/libexec/fence_kdump_send) to
initrd.
Signed-off-by: WANG Chao <chaowang@redhat.com>
Tested-by: Zhi Zou <zzou@redhat.com>
Tested-by: Marek Grac <mgrac@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
In the 2nd kernel, to prevent the crashed system from being fenced off,
a fence kdump message must be sent to the other nodes in the cluster
periodically before the dumping process.
We preserve every node's name in /etc/fence_kdump_nodes in the
initrd, so we parse this file and notify those nodes.
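A sketch of the notifier step in the 2nd kernel (argument handling and
variable names are illustrative):

if [ -f /etc/fence_kdump_nodes ]; then
    read -r nodes < /etc/fence_kdump_nodes
    # keep notifying the other cluster nodes in the background so they do
    # not fence us off while the dump is running
    /usr/libexec/fence_kdump_send $FENCE_KDUMP_ARGS $nodes &
fi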
Signed-off-by: WANG Chao <chaowang@redhat.com>
Tested-by: Zhi Zou <zzou@redhat.com>
Tested-by: Marek Grac <mgrac@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
If the system is configured for fence kdump, we need to rebuild the
kdump initramfs when the cluster or fence_kdump config is newer.
In RHEL7, the cluster config is no longer kept locally but stored
remotely. Fortunately we can use the pcs tool to retrieve the XML-based
config and parse the last-changed time from it.
/etc/sysconfig/fence_kdump is used to configure runtime arguments to
fence_kdump_send, so we have to pass those arguments to the 2nd kernel.
When the cluster config or /etc/sysconfig/fence_kdump is newer than the
local kdump initramfs, we must rebuild the initramfs to adapt to changes
in the cluster.
For example:
Detected change(s) the following file(s):
cluster-cib /etc/sysconfig/fence_kdump
Rebuilding /boot/initramfs-xxxkdump.img
[..]
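A rough sketch of the staleness check (the cib-last-written attribute
and the variable names are assumptions, not the patch's exact code):

cib_time=$(date -d "$(pcs cluster cib | sed -n 's/.*cib-last-written="\([^"]*\)".*/\1/p')" +%s)
conf_time=$(stat -c %Y /etc/sysconfig/fence_kdump 2>/dev/null || echo 0)
image_time=$(stat -c %Y "$TARGET_INITRD")

if [ "$cib_time" -gt "$image_time" ] || [ "$conf_time" -gt "$image_time" ]; then
    echo "Detected change(s) in cluster-cib or /etc/sysconfig/fence_kdump"
    # trigger a rebuild of the kdump initramfs here
    rebuild_initramfs=1
fi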
Signed-off-by: WANG Chao <chaowang@redhat.com>
Tested-by: Zhi Zou <zzou@redhat.com>
Tested-by: Marek Grac <mgrac@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>