We hit a problem where eth0 ends up being eth1 and eth1 being eth0 between
the 1st and 2nd kernel. Because we pass ifname=eth0:$mac to force the NIC to
be named eth0, and "eth0" is already taken by the other NIC, udev fails to
bring up the NIC we want, so kdump fails.
Kernel-assigned network interface names are not persistent. So if the first
kernel is using kernel-assigned interface names, force the second kernel to
use "kdump-" prefixed names.
For ethX, we put the "kdump-" prefix in front of it, so in the 2nd kernel
ethX will be named "kdump-ethX", which avoids the naming conflict.
We only need to rename ethernet devices; bridge, vlan, bond and team device
names are never prefixed, because those names are assigned by userspace when
the devices are created.
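A minimal sketch of the renaming idea (the helper name and details here are
illustrative, not necessarily the actual module-setup.sh code):

    # Prefix only kernel-assigned ethX names; leave userspace-created
    # devices (bridge, vlan, bond, team) untouched.
    kdump_setup_ifname() {
        local _ifname=$1
        if [[ $_ifname == eth* ]]; then
            echo "kdump-$_ifname"
        else
            echo "$_ifname"
        fi
    }

    # Used when building the 2nd kernel command line, e.g.
    #   ifname=$(kdump_setup_ifname eth0):$mac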
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Handle the different types of underlying device for vlan. For each type, the
appropriate options are written to vlan.conf in its control path.
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
This cleanup patch removes the unnecessary "function" keyword everywhere in
the kdumpctl script. It also corrects a couple of typos in the script.
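For illustration, the change simply drops the bash-only keyword (the function
body here is just an example, not real kdumpctl code):

    # before
    function start()
    {
        echo "starting kdump"
    }

    # after
    start()
    {
        echo "starting kdump"
    }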
Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Vivek suggested we display a message while waiting for the lock, because the
wait could be long and the user would otherwise have no idea what's going on.
So we will repeat the following message every 5 seconds while waiting:
"Another app is currently holding the kdump lock; waiting for it to exit..."
Thanks to Vivek for providing a more comprehensive message.
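A minimal sketch of the waiting loop, assuming the lock file lives at
/var/lock/kdump (the actual kdumpctl code may differ):

    # Hold fd 9 on the lock file and retry the lock every 5 seconds.
    exec 9>/var/lock/kdump
    while ! flock -n 9; do
        echo "Another app is currently holding the kdump lock; waiting for it to exit..."
        sleep 5
    done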
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
This is a backport of the following upstream commit.
commit 0b732828091a545185ad13d0b2e6800600788d61
Author: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
Date: Tue Jun 10 13:57:29 2014 +0900
[PATCH 3/3] Stop maximizing the bitmap buffer to reduce the risk of OOM.
We tried to maximize the bitmap buffer to get the best performance,
but the performance degradation caused by multi-cycle processing
looks very small according to the benchmark on 2TB memory:
https://lkml.org/lkml/2013/3/26/914
This result means we don't need to make an effort to maximize the
bitmap buffer; doing so only increases the risk of OOM.
This patch sets a small fixed value (4MB) as a safety limit, which should
be safer and still sufficient in most cases.
Signed-off-by: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
This is a backport of the following upstream commit.
commit 2648a8f7caa63e3ec82fd4bce471cec0a895b704
Author: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
Date: Mon Jun 9 17:48:30 2014 +0900
[PATCH 2/3] Move counting pfn_memhole for cyclic mode.
In cyclic mode, memory holes are checked in initialize_2nd_bitmap_cyclic()
in both the kdump path and the ELF path, so pfn_memhole should be
counted there.
Signed-off-by: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
This is a backport of the following upstream commit.
commit 16b94ab7fad6744d8b77f2b26838f220307e3118
Author: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
Date: Mon Jun 9 17:44:43 2014 +0900
[PATCH 1/3] Remove the 1st bitmap buffer from the ELF path in cyclic mode.
Since commit 363d53fc8, we can create the 2nd bitmap without creating the
1st bitmap, so we don't need to create the 1st bitmap in cyclic mode in the
ELF path, as it isn't used. Thus, we can use the whole bitmap buffer solely
for the 2nd bitmap, like the kdump path does.
Signed-off-by: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
This is a backport of the following upstream commit. It fixes freeing the
wrong bitmap, which could increase the risk of OOM when the system is
already on the edge of OOM.
commit 0e7b1a6e3c1919c9222b662d458637ddf802dd04
Author: Arthur Zou <zzou@redhat.com>
Date: Wed May 7 17:54:16 2014 +0900
[PATCH v3] Fix free bitmap_buffer_cyclic error.
Description:
In create_dump_bitmap() and write_kdump_pages_and_bitmap_cyclic(),
what should be freed is info->partial_bitmap instead of info->bitmap.
Solution:
Add two functions to free the cyclic bitmap buffers: info->partial_bitmap1
is freed by free_bitmap1_buffer_cyclic() and info->partial_bitmap2 by
free_bitmap2_buffer_cyclic(). At the same time, remove those frees of
partial_bitmap1 and partial_bitmap2 at the end of main(), because both have
already been freed after the dump file has been written out, so there is no
need to free them again at the end of main.
Signed-off-by: Arthur Zou <zzou@redhat.com>
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
This is a backport of the following upstream commit. A later backported
commit depends on it.
commit 9dc6440c63320066bc6344c6e3ca3c3af88bcc42
Author: Petr Tesarik <ptesarik@suse.cz>
Date: Thu Apr 24 10:58:43 2014 +0900
[PATCH v3] Introduce the mdf_pfn_t type.
Replace unsigned long long with mdf_pfn_t where:
a. the variable denotes a PFN
b. the variable is a number of pages
The number of pages is converted to a mdf_pfn_t, because it is a result
of subtracting two PFNs or incremented in a loop over a range of PFNs,
so it can get as large as a PFN.
Note: The mdf_ (i.e. makedumpfile) prefix is used to prevent possible
conflicts with other software that defines a pfn_t type.
Signed-off-by: Petr Tesarik <ptesarik@suse.cz>
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Backport from the following commit from upstream makedumpfile:
commit 45fc42c
Author: WANG Chao <chaowang@redhat.com>
Date: Tue Jun 10 14:11:27 2014 +0900
[PATCH] Fix Makefile for eppic_makedumpfile.so build.
When libeppic isn't installed in a standard location, building
eppic_makedumpfile.so with -leppic directly doesn't work.
Add LDFLAGS to the build arguments, so that one can pass LDFLAGS="-Ldir
-Idir" to tell where to search for the libeppic library and its header
files.
For example, if the eppic source is installed at the same directory level
as makedumpfile, like this:
makedumpfile
|--- arch
+--- eppic_scripts
eppic
|--- applications
+--- libeppic
After compiling libeppic, one can use the following command to build
eppic_makedumpfile.so:
make LDFLAGS="-I../eppic/libeppic -L../eppic/libeppic" eppic_makedumpfile.so
Signed-off-by: WANG Chao <chaowang@redhat.com>
With this patch, we no longer need a Fedora-specific patch for building
eppic_makedumpfile.so.
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
NetworkManager changed the format of ifcfg-device files. They may define
static IP addresses with the following format:
IPADDR0=192.168.122.100
PREFIX0=24
There may be up to 255 IP addresses for a network device - each with a unique
number tagged to the end of IPADDR and PREFIX.
Prior to this fix, kdump only handled static IP addresses defined with
IPADDR=192.168.122.100
PREFIX=24
i.e. without the number.
The solution is to use "ip" commands to find the correct network information.
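For example (the exact commands used in the fix may differ, and eth0 is only a
placeholder), the live configuration can be read straight from the kernel:

    ip -4 addr show dev eth0     # current IPv4 address(es) and prefix length
    ip -4 route list dev eth0    # routes via eth0, including a default gateway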
Tested with both static and dynamic IP addresses.
v2: Fixed a local variable that was set incorrectly
v3: Fix iscsi case
Signed-off-by: Marc Milgram <mmilgram@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Acked-by: Baoquan He <bhe@redhat.com>
Acked-by: WANG Chao <chaowang@redhat.com>
Rename the subpackage kdump-anaconda-addon to kexec-tools-anaconda-addon
to keep consistency and make fedpkg builds happy.
Every time fedpkg builds a new release, the package version number should
increase, but kdump-anaconda-addon just kept the same version. So rename it
to kexec-tools-anaconda-addon, where kexec-tools- is the default prefix, and
use the default top-level version for its version.
At the same time, rename the kdump-anaconda-addon directory to anaconda-addon
to make it more standard, and use the current date instead of a version number
as the suffix of the kdump-anaconda-addon tarball, just like kexec-tools-po did.
Signed-off-by: Arthur Zou <zzou@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
makedumpfile_eppic.so (provided by kexec-tools-eppic) is built against
makedumpfile (provided by kexec-tools). kexec-tools-eppic must depend on
the same version.release of kexec-tools, otherwise there could be an ABI
compatibility issue.
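A typical way to express such a dependency in the spec file would be something
along these lines (shown only as a sketch of the idea):

    Requires: %{name} = %{version}-%{release}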
Signed-off-by: WANG Chao <chaowang@redhat.com>
Merge kdump-anaconda-addon.pot into the po files of the original
firstboot so that we can reuse most of the translations. There are
still three sentences that have no translation.
Signed-off-by: Arthur Zou <zzou@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Currently this work is done by firstboot. Move it to an anaconda addon
so it can be configured during system installation.
Signed-off-by: Arthur Zou <zzou@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
During debugging of another problem, issues were noted with the kdump udev
rules. The kdump service is restarted on memory add and remove events.
These are the wrong events for these types of devices and result in an overly
aggressive restarting of the kdump service.
There are four udev events to consider, "add", "remove", "online", and
"offline". The remove event is a complete removal from the system -- neither
the hardware nor the kernel know about the hardware; it has been physically
removed. The add event is associated with hardware being physically added to
the system. The kernel has some limited knowledge of the device, however,
it is not available for the kernel to use until it is brought online. Online
events refer to the device being available for the kernel to use. Opposite
to that is the offline event, which occurs when a device is no longer in
use by the kernel.
Note that in all four events the kernel *may* have some remaining information
stored about the device.
In the case of memory hotplug, kdump should be restarted when a memory module
is onlined or offlined. This is because the memory is not in use by the
kernel until the memory is onlined, and it is unused when the memory is
offlined.
Making these modifications results in smooth service on systems that do
heavy memory onlining and offlining.
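Roughly, the intent is for the memory rules to look something like this
(the exact rule file and wording in the patch may differ):

    SUBSYSTEM=="memory", ACTION=="online", RUN+="/usr/bin/systemctl try-restart kdump.service"
    SUBSYSTEM=="memory", ACTION=="offline", RUN+="/usr/bin/systemctl try-restart kdump.service"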
Cc: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Prarit Bhargava <prarit@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
In the current systemd implementation, a nofail mount will not block
local-fs.target, which means our kdump.sh (in dracut-pre-pivot.service)
can't wait for nofail mounts, and kdump.sh could run earlier than the
nofail mount happens.
For the short term, let's stop passing nofail to mount. As for sysroot.mount,
since we explicitly specify waiting for it, "nofail" isn't a problem.
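For reference, "nofail" is the mount option that makes systemd treat the mount
as optional; illustrative fstab-style entries only (kdump generates its own
mount info):

    # with "nofail": local-fs.target does not wait for this mount
    /dev/vdb1  /var/crash  ext4  defaults,nofail  0 0
    # without "nofail": the mount is ordered before local-fs.target
    /dev/vdb1  /var/crash  ext4  defaults  0 0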
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
In case no target is specified explicitly in /etc/kdump.conf, the behavior
of path changes: a check needs to be made to see whether a separate file
system is mounted on any tier of 'path', and the relevant action taken.
The path section of kdump.conf and its man page needs to be updated
accordingly.
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
In case no target is specified explicitly in /etc/kdump.conf, the behavior
of path changes: a check needs to be made to see whether a separate file
system is mounted on any tier of 'path', and the relevant action taken.
The path section needs to be updated accordingly.
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
If the default target is a separate disk, the related information needs to
be stored in the /etc/kdump.conf of the kdump initramfs. This includes the
disk info, which helps deduce the dump_code and the path the vmcore will be
written to.
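As a hypothetical example, if /var/crash lives on a separate ext4 disk, the
kdump.conf copied into the initramfs might end up carrying lines like:

    ext4 /dev/vdb1
    path /var/crash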
v5->v7:
No v6 for this patch. Just use newly introduced function
is_fs_type_nfs in default_dump_target_install_conf().
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
When the user does not specify a dump target explicitly, it's better to
dump to the specified "path". That means after the dump, once the user is
back in the 1st kernel, the vmcore can be found in the "path". If that path
is in the root fs, the vmcore is stored in the root fs. If a separate disk
is mounted on any tier of "path", we dump the vmcore into the remaining
part of "path" on that separate disk.
E.g. with this in kdump.conf:
path /mnt/nfs
and this in the mount info:
/dev/vdb on /mnt type ext4 (rw,relatime,seclabel,data=ordered)
the vmcore will be saved under /nfs on /dev/vdb.
In this patch, pass the mount info to dracut when a separate disk is mounted
on any tier of "path".
Meanwhile, introduce a function in kdump-lib.sh to check whether any target
is specified.
v4->v5:
Per Vivek's comment, add a helper function is_fs_dump_target.
Then is_user_configured_dump_target is rewritten using these helper
functions (a sketch is shown after these notes).
v5->v7:
No v6 for this patch. Just use newly introduced function
is_fs_type_nfs in handle_default_dump_target.
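A rough sketch of how these helpers could look (only is_fs_dump_target and
is_user_configured_dump_target are names from this patch; everything else is
illustrative):

    is_fs_dump_target() {
        grep -qE "^(ext[234]|xfs|btrfs|minix)[[:space:]]" /etc/kdump.conf
    }

    is_user_configured_dump_target() {
        # the raw/ssh/nfs check stands in for whatever kdump-lib.sh provides
        grep -qE "^(raw|ssh|nfs)[[:space:]]" /etc/kdump.conf || is_fs_dump_target
    }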
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Formerly, kdump would create the directory specified in "path" if it did not
exist. Now change the behavior so that the user takes charge of "path" and
makes sure it has been created, especially when a separate disk is mounted
on this "path".
Also introduce 2 helper functions to check for the existence of the path.
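A sketch of what such a check might look like (the function name here is
hypothetical, not the patch's actual helper):

    # Fail early instead of silently creating the directory for the user.
    check_dump_path_exists() {
        local _path=$1
        if [ ! -d "$_path" ]; then
            echo "Dump path $_path does not exist." >&2
            return 1
        fi
    }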
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
These utility functions will be shared by several files; they are all
mount-related operations.
Meanwhile, define DEFAULT_PATH="/var/crash".
v5->v6:
Since nfs4 is the default nfs version in rhel7 and its fstype is nfs4,
change the implementation of get_fs_type_from_target(): whatever fstype
findmnt returns, just echo nfs as the fstype for all nfs versions.
v6->v7:
Introduce is_fs_type_nfs to check if fstype is nfs or nfs4 per Vivek's
idea.
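A sketch consistent with the notes above (the real kdump-lib.sh code may
differ in detail):

    DEFAULT_PATH="/var/crash"

    get_fs_type_from_target() {
        findmnt -k -f -n -r -o FSTYPE "$1"
    }

    is_fs_type_nfs() {
        [ "$1" = "nfs" ] || [ "$1" = "nfs4" ]
    }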
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
kdump-dep-generator is a systemd generator, used to write out kdump
service dependencies.
Currently it's only useful for the ssh dump case, where it writes out a
dependency so that kdump.service "Wants" network-online.target:
# ls -l /run/systemd/generator/kdump.service.wants/
[..] network-online.target -> /usr/lib/systemd/system/network-online.target
So kdump.service will pull in network-online.target and delay its start
until network-online.target is reached.
In the future, we could use the generator to write out kdump.service
dynamically and get rid of the statically defined kdump.service entirely.
v1->v2:
Vivek: don't use a hardcoded runtime generator path; use what systemd passes in.
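A stripped-down sketch of such a generator (the ssh-dump check and details
are purely illustrative):

    #!/bin/sh
    # systemd passes three output directories; $1 is the normal-priority one.
    normal_dir=$1

    if grep -q "^ssh " /etc/kdump.conf 2>/dev/null; then
        mkdir -p "$normal_dir/kdump.service.wants"
        ln -sf /usr/lib/systemd/system/network-online.target \
            "$normal_dir/kdump.service.wants/network-online.target"
    fi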
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Adds two new options to kdump.conf to be able to configure fence_kdump
support for generic clusters:
fence_kdump_args <arg(s)>
- Command line arguments for fence_kdump_send (it can contain all
valid arguments except hosts to send notification to)
fence_kdump_nodes <node(s)>
- List of cluster node(s) separated by space to send fence_kdump
notification to (this option is mandatory to enable fence_kdump)
The generic cluster fence_kdump configuration takes precedence over the older
method of fence_kdump configuration for Pacemaker clusters. This means that
if fence_kdump is configured using the above options in kdump.conf, the old
Pacemaker configuration is not used even if it exists.
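For illustration, a kdump.conf snippet using the new options might look like
this (node names and argument values are only examples):

    fence_kdump_nodes node1.example.com node2.example.com
    fence_kdump_args -p 7410 -f auto -c 0 -i 10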
Bug-Url: https://bugzilla.redhat.com/1078134
Signed-off-by: Martin Perina <mperina@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>