If a user-configured target is used, the path should be treated as the
absolute path within the dump target, and the user should be fully aware
of the path structure within the target device. The adjust_bind_mount_path
call here makes it very hard to control the behavior.
Especially if it's a cross-device bind mount, this will likely create an
invalid path in the target. And for the Atomic case, the adjust_bind_mount_path
call here assumes the user will always pass the root device as the explicitly
configured dump target, which is not true.
If a user-configured target device is used, the path is always the
absolute path inside the given target. If the user doesn't know the path
structure on the target device, they should either use the path-based
config, or carefully examine the target device before using it as a
dump target.
Signed-off-by: Kairui Song <kasong@redhat.com>
Acked-by: Lianbo Jiang <lijiang@redhat.com>
This commit removes almost all of the special workarounds for Atomic and
treats all bind mounts in any environment equally.
Use a helper, get_bind_mount_directory_from_path, to get the bind mount
source path of a given path.
The is_atomic function is now only used to determine the right /boot path
for the Atomic/Silverblue environment.
Also remove get_mntpoint_from_path(); it was the only function that never
ignored bind mounts, and it has no callers after this cleanup.
Signed-off-by: Kairui Song <kasong@redhat.com>
Acked-by: Lianbo Jiang <lijiang@redhat.com>
In /etc/hosts, an alias name can appear in the 2nd column, regardless of the
recommendation.
E.g. the following format is valid although not recommended:
cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.22.21 fastvm-rhel-7-6-21 fastvm-rhel-7-6-21.localdomain
192.168.22.22 fastvm-rhel-7-6-22 fastvm-rhel-7-6-22.localdomain
192.168.22.21 node1_hb
192.168.22.22 node2_hb
So filter out both the 2nd and 3rd columns for matching.
Signed-off-by: Pingfan Liu <piliu@redhat.com>
Acked-by: Kairui Song <kasong@redhat.com>
Process substitution is not POSIX-standard syntax, so if bash is configured
to strictly follow POSIX, this will fail.
Just use a POSIX-friendly syntax instead.
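For illustration only (a generic sketch, not the exact hunk from the patch;
some_command is a placeholder), the kind of rewrite involved looks like:
# non-POSIX process substitution:
while read _line; do echo "$_line"; done < <(some_command)
# POSIX-friendly alternative (note the pipe runs the loop in a subshell):
some_command | while read _line; do echo "$_line"; done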
Fixes: bz1708321
Signed-off-by: Kairui Song <kasong@redhat.com>
Acked-by: Lianbo Jiang <lijiang@redhat.com>
This helps remove redundant spaces and trailing comments in the installed
kdump.conf; currently the installed kdump.conf always contains extra empty
lines.
Signed-off-by: Kairui Song <kasong@redhat.com>
Acked-by: Lianbo Jiang <lijiang@redhat.com>
The latest dracut release stopped creating the
$systemdsystemunitdir/initrd.target.wants directory for us, so ensure it
exists before creating the symlink.
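A minimal sketch of the idea in module-setup.sh (the unit name here is only
an example, not necessarily the one the patch links):
mkdir -p "$initdir/$systemdsystemunitdir/initrd.target.wants"
ln -sf ../kdump-capture.service \
    "$initdir/$systemdsystemunitdir/initrd.target.wants/kdump-capture.service"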
Signed-off-by: Kairui Song <kasong@redhat.com>
Tested-and-Reviewed-by: Bhupesh Sharma <bhsharma@redhat.com>
Currently, while trying to save the vmcore via a VLAN over an ethernet
interface, the kdump kernel fails with a "network unreachable" message.
This is because mkdumprd produces a VLAN config that does not get an
IP address for the VLAN on the ethernet device.
Fix this via this patch.
Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com>
Acked-by: Kairui Song <kasong@redhat.com>
The dracut initqueue may quit immediately and won't trigger any hook if
there is no "finished" hook still pending (a finished hook is deleted
once it returns 0).
This issue started to appear with the latest dracut: the latest dracut uses
network-manager to configure the network, the network-manager module only
installs a "settled" hook, and we didn't install any other hook. So NFS/SSH
dump will fail. iSCSI dump works because the dracut iscsi module installs a
"finished" hook to detect whether the iscsi target is up.
So for NFS/SSH we keep the initqueue running until the host successfully
gets a valid IP address, which means the network is ready.
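A minimal sketch of the idea (hook path, file name and the exact readiness
check are assumptions, not the literal patch):
# module-setup.sh: keep the dracut initqueue looping until $_netdev has an address
cat << EOF > "${initdir}/lib/dracut/hooks/initqueue/finished/wait-${_netdev}.sh"
[ -n "\$(ip -o addr show dev ${_netdev} scope global 2>/dev/null)" ]
EOF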
Signed-off-by: Kairui Song <kasong@redhat.com>
Acked-by: Pingfan Liu <piliu@redhat.com>
sed and awk are heavily used everywhere in the code, but they are not
explicitly installed by the kdump dracut module. If the dracut modules stop
installing them (which has already happened with the latest dracut
upstream), kdump will break.
Signed-off-by: Kairui Song <kasong@redhat.com>
Acked-by: Pingfan Liu <piliu@redhat.com>
For ssh/nfs dump, kdump need the 'ip' tool to get the host ip address
for naming the vmcore. But kdump-module-setup.sh never installed this
tool. kdump-module-setup.sh worked so far as dracut network module will
help install it.
After dracut changed to use 35network-manager for network setup, "ip"
command won't be installed in second kernel by default. So need to
ensure "ip" is installed when installing kdump dracut module.
Signed-off-by: Kairui Song <kasong@redhat.com>
Acked-by: Pingfan Liu <piliu@redhat.com>
This helps deduplicate the code. Use a single function instead of
repeating the same logic.
Signed-off-by: Kairui Song <kasong@redhat.com>
Acked-by: Pingfan Liu <piliu@redhat.com>
By default the kernel has vm.zone_reclaim_mode = 0, and large page
allocations might fail because the kernel is very conservative about
memory reclaiming. If the page allocation failure is not handled
carefully it could lead to more serious problems.
This issue can be reproduced with the following steps:
- Fill up the page cache using:
# dd if=/dev/urandom of=/test bs=1M count=1300
- Now the memory is filled with write cache:
# free -m
total used free shared buff/cache available
Mem: 1790 184 132 2 1473 1348
Swap: 2119 7 2112
- Insert a module which simply calls "kmalloc(SZ_1M, GFP_KERNEL)"
512 times (note: vmalloc doesn't have this problem):
# insmod debug_module.ko
- Got following allocation failure:
insmod: page allocation failure: order:8, mode:0x40cc0(GFP_KERNEL|__GFP_COMP), nodemask=(null),cpuset=/,mems_allowed=0
- Clean up and repeat with vm.zone_reclaim_mode = 3; OOM is not
observed.
In the kdump kernel there is usually only one online CPU and limited memory,
so we set vm.zone_reclaim_mode = 3 to let the kernel reclaim memory more
aggressively and avoid such issues.
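A minimal sketch of how the setting can be baked into the kdump initramfs
(the file location is an assumption, not the exact patch):
# module-setup.sh install(): make the capture kernel reclaim aggressively
mkdir -p "${initdir}/etc/sysctl.d"
echo "vm.zone_reclaim_mode = 3" > "${initdir}/etc/sysctl.d/99-zone-reclaim.conf"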
Signed-off-by: Kairui Song <kasong@redhat.com>
Acked-by: Pingfan Liu <piliu@redhat.com>
Merge kdump_setup_netdev into kdump_install_net.
kdump_install_net is a wrapper around kdump_setup_netdev, and it does the
following three extra things:
1. Sanitize and resolve the hostname
2. Resolve the route to the destination
3. Set the default gateway once
There is currently only one caller of kdump_setup_netdev, the iscsi
network setup code, and it does 1 and 2 by itself. And there should
only be one default gateway in the kdump environment, so applying 3 here is
fine.
The comment on kdump_install_net is wrong and obsolete; update the
comment too.
Just merge kdump_setup_netdev into kdump_install_net and always use
kdump_install_net instead.
Signed-off-by: Kairui Song <kasong@redhat.com>
Acked-by: Pingfan Liu <piliu@redhat.com>
In commit a431a7e354 (module-setup: fix 99kdumpbase network dependency),
the statement for the OR operation is still wrong.
The OR condition should be: if a || b
Signed-off-by: Pingfan Liu <piliu@redhat.com>
Acked-by: Kairui Song <kasong@redhat.com>
Bond options in ifcfg are space separated, while dracut expects them to be
comma separated, so they have to be parsed and converted during initramfs
building.
The current parsing and conversion pattern is flawed; for example:
" downdelay=0 miimon=100 mode=802.3ad updelay=0 "
is converted to:
":,downdelay=0 miimon=100 mode=802.3ad updelay=0 "
but should be:
":downdelay=0,miimon=100,mode=802.3ad,updelay=0"
So fix this issue by using a simpler but more robust method for processing
the options.
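For illustration only (not the exact hunk), the conversion can be done with
standard tools:
_bond_opts=" downdelay=0 miimon=100 mode=802.3ad updelay=0 "
# squeeze/trim whitespace, then turn the separators into commas
echo "$_bond_opts" | xargs | tr ' ' ','
# -> downdelay=0,miimon=100,mode=802.3ad,updelay=0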
Signed-off-by: Kairui Song <kasong@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
localhost is filtered out in the is_pcs_fence_kdump case; do the same in the
is_generic_fence_kdump case.
Signed-off-by: Pingfan Liu <piliu@redhat.com>
Acked-by: Kairui Song <kasong@redhat.com>
'hostname -A' cannot get the aliases, and 'hostname -a' is deprecated,
so we should do it ourselves.
The parsing is based on the format of /etc/hosts, i.e.
IP_address canonical_hostname [aliases...]
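A minimal sketch of such parsing (a hypothetical snippet, not the function
added by the patch):
# print every alias listed after the canonical hostname for this host
awk -v host="$(hostname)" \
    '$0 !~ /^#/ && $2 == host { for (i = 3; i <= NF; i++) print $i }' /etc/hosts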
Signed-off-by: Pingfan Liu <piliu@redhat.com>
Acked-by: Kairui Song <kasong@redhat.com>
fadump alters the normal boot initramfs, and we don't want a normal
boot to forward and drop the journalctl logs.
Signed-off-by: Kairui Song <kasong@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
The current code only excludes the hostname, but localhost can have aliases in
/etc/hosts. All of those aliases should be excluded from the fence_kdump nodes
to avoid deadlock issues.
Signed-off-by: Pingfan Liu <piliu@redhat.com>
Acked-by: Kairui Song <kasong@redhat.com>
The squash module is used to save memory. For fadump this is not necessary;
it may slow down the build and make it more fragile.
The fadump initramfs is used for normal boot as well, and although the squash
module is capable of being used for a generic normal boot, there are cases
where it doesn't work well. So disable it and make fadump more robust.
Signed-off-by: Kairui Song <kasong@redhat.com>
Tested-by: Hari Bathini <hbathini@linux.ibm.com>
Acked-by: Dave Young <dyoung@redhat.com>
Don't use any log storage; forward to the console directly. This makes the
console output more useful and also saves more memory. On a freshly
installed Fedora 30 it saved ~5M of memory, and the amount of log being
printed to the console is still acceptable.
Signed-off-by: Kairui Song <kasong@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
When reading kdump configs, a single parsing pass should be enough; this
saves a lot of duplicated stripping calls and speeds up loading.
It speeds up building by about 2 seconds and reloading by 0.1 seconds in my
tests.
Signed-off-by: Kairui Song <kasong@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
When someone is using a minimal kernel without the squash kernel modules
installed, including the squash dracut module will either fail to build or
fail to boot the initramfs.
As kdump always builds the image for one single kernel, we can safely use
modprobe to check whether a module is already built in, or exists and is
loadable for the kernel we are using for the kdump image, and not include
the squash module if they are missing. Everything will still work just
fine without the squash module.
We do the check in the kdump dracut module, not in the squash dracut module,
because the kdump dracut module can leverage the KDUMP_KERNELVER variable
to know which kernel it should check against, while the squash dracut module
may be used to build a generic image.
And we only check the kernel module dependencies; other binary
dependencies are either well checked or well declared in dracut.
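A hypothetical sketch of the kind of check described (helper usage and the
module name are assumptions, not the literal patch):
# only add the squash dracut module when the target kernel can provide squashfs
if modprobe -S "$KDUMP_KERNELVER" --dry-run squashfs &>/dev/null; then
    dracut_args="$dracut_args --add squash"
fi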
Signed-off-by: Kairui Song <kasong@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Currently we still don't support multipath routes. When parsing a multipath
route, kdumpctl will wrongly take 'nexthop' as the destination address
and raise errors in the second kernel.
When a multipath route is in use, the ip route output looks like this:
$ /sbin/ip route show
default via 192.168.122.1 dev ens1 proto dhcp metric 100
192.168.122.0/24 dev ens1 proto kernel scope link src 192.168.122.161 metric 100
192.168.122.8
nexthop via 192.168.122.1 dev ens1 weight 50
nexthop via 192.168.122.2 dev ens1 weight 5
As we don't care about HA/performance, simply use the nexthop with the
highest weight and ignore the rest.
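For illustration (not the exact patch), picking the heaviest nexthop can be
done like this:
/sbin/ip route show | \
    awk '/^[[:space:]]*nexthop/ { if ($NF+0 > w) { w = $NF+0; line = $0 } } END { print line }'
# -> nexthop via 192.168.122.1 dev ens1 weight 50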
Signed-off-by: Kairui Song <kasong@redhat.com>
Acked-by: Baoquan He <bhe@redhat.com>
In dracut-049, a new squash module was introduced. It can reduce the
memory usage of the kdump initramfs in the capture kernel, which helps a lot
in lowering the risk of OOM failure.
Tested with the latest Rawhide with NFS, SSH and local dump.
We decide whether to include the DRM modules by testing whether there are any
drm entries in sysfs. But there is an exception for Hyper-V: the DRM module
takes care of Hyper-V's framebuffer driver as well, but hyperv_fb will
not create any drm entry. So currently we get a black screen on
Hyper-V guests.
Fix this by detecting Hyper-V's special entry as well.
Signed-off-by: Kairui Song <kasong@redhat.com>
This commit basically reverts commit c755499fad
and makes use of the newly introduced tri-state hostonly mode.
The following dracut commits merged multipath-hostonly into the multipath
module and introduced a tri-state hostonly mode.
commit 35e86ac117acbfd699f371f163cdda9db0ebc047
Author: Kairui Song <kasong@redhat.com>
Date: Thu Jul 5 16:20:04 2018 +0800
Merge 90-multipath-hostonly and 90-multipath
commit a695250ec7db21359689e50733c6581a8d211215
Author: Kairui Song <kasong@redhat.com>
Date: Wed Jul 4 17:21:37 2018 +0800
Introduce tri-state hostonly mode
The multipath-hostonly module was introduced only for kdump, because kdump
needs a stricter hostonly policy for multipath devices to save memory.
Now the multipath module provides the behavior we want when the hostonly
mode is set to strict.
Kdump always uses _proto=dhcp for both ipv4 and ipv6. But for ipv6
the dhcp address assignment is not like ipv4; there are different ways
to do it, stateless and stateful, see the document below:
https://fedoraproject.org/wiki/IPv6Guide
In the stateless case, the kernel can do the address assignment and dracut
uses _proto=auto6; for the stateful case, dracut uses _proto=dhcp6.
But it is hard to decide whether stateless or stateful takes effect,
hence dracut introduced the ip=either6 option, which tries both of these
methods automatically for us. For details, refer to dracut:
commit 67354ee 40network: introduce ip=either6 option
We did not see bug reports before because in most auto6 cases the
kernel assigns the ip address before dhclient, so kdump just happened to work.
Signed-off-by: Pingfan Liu <piliu@redhat.com>
When using fence_kdump, module-setup will create a kdump.conf with
fence_kdump_nodes. The node names come from the cluster xml, which may
use hostname aliases. Later, in the kdump stage, "fence_kdump_send alias_1
alias_2" sends out notifications to the peers. Hence it requires /etc/hosts
and nsswitch.conf to make the aliases work.
Signed-off-by: Pingfan Liu <piliu@redhat.com>
This reverts commit 2f4149f276.
It has not been proved right to determine auto6 or dhcp6 in the 1st kernel;
Pingfan is working on a dracut fix to do some fallback in the 2nd kernel
initramfs.
So revert this commit.
Kdump always uses _proto=dhcp for both ipv4 and ipv6. But for ipv6
the dhcp address assignment is not like ipv4; there are different ways
to do it, stateless and stateful, see the document below:
https://fedoraproject.org/wiki/IPv6Guide
In the stateless case, the kernel can do the address assignment and dracut
uses _proto=auto6; for the stateful case, dracut uses _proto=dhcp6.
We did not see bug reports before because in most auto6 cases the
kernel assigns the ip address before dhclient, so kdump just happened to work.
Here we prefer auto6 if possible, and we assume the host uses auto6 if
/proc/sys/net/ipv6/conf/$netdev/autoconf is enabled.
Signed-off-by: Pingfan Liu <piliu@redhat.com>
Due to the following commit in dracut, which splits out hostonly modules
commit 5ce7cc7337a4c769b223152c083914f2052aa348
Author: Harald Hoyer <harald@redhat.com>
Date: Mon Jul 10 13:28:40 2017 +0200
add 90multipath-hostonly module
hardcoding the wwid of the drives in the initramfs causes problems
when the drives are cloned to a system with the same hardware, but
different disk wwid's
https://bugzilla.redhat.com/show_bug.cgi?id=1457311
So kdump should decide whether to include the hostonly module.
The multipath-hostonly module helps kdump include only the needed mpath
devices, so that the 2nd kernel uses less memory.
---- The performance -----
before this patch
[root@localhost ~]# time kdumpctl start
Detected change(s) in the following file(s):
/etc/kdump.conf
Rebuilding /boot/initramfs-4.13.9-300.fc27.x86_64kdump.img
kexec: loaded kdump kernel
Starting kdump: [OK]
real 0m12.485s
user 0m10.096s
sys 0m1.887s
after this patch
[root@localhost ~]# time kdumpctl start
Detected change(s) in the following file(s):
/etc/kdump.conf
Rebuilding /boot/initramfs-4.13.9-300.fc27.x86_64kdump.img
kexec: loaded kdump kernel
Starting kdump: [OK]
real 0m15.839s
user 0m13.015s
sys 0m1.853s
Signed-off-by: Pingfan Liu <piliu@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
This commit eliminates a redundant kdump_get_mac_addr call in
kdump_setup_netdev.
Signed-off-by: Ziyue Yang <ziyang@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
After adding "--hostonly-cmdline", besides 99kdump, 95iscsi
also generates iscsi-related cmdlines. IOW, we have duplicate
software iscsi cmdlines.
The software iscsi cmdline generated by 95iscsi doesn't work, so we
remove 95iscsi's and use 99kdump's, which has been well tested.
We can switch to using 95iscsi when possible in the future.
Signed-off-by: Xunlei Pang <xlpang@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Currently, we print the error message at the very beginning; as a result,
on a pure-hardware (all-offload) iscsi machine with many iscsi partitions,
we suffer from too much noise as follows:
iscsiadm: No records found
Unable to find iscsi record for
/sys/devices/pci0000:00/0000:00:06.0/0000:07:00.0/0000:08:01.1/host0/session1
iscsiadm: No records found
Unable to find iscsi record for
/sys/devices/pci0000:00/0000:00:06.0/0000:07:00.0/0000:08:01.3/host1/session2
iscsiadm: No records found
Unable to find iscsi record for
/sys/devices/pci0000:00/0000:00:06.0/0000:07:00.0/0000:08:01.1/host0/session1
iscsiadm: No records found
Unable to find iscsi record for
/sys/devices/pci0000:00/0000:00:06.0/0000:07:00.0/0000:08:01.3/host1/session2
iscsiadm: No records found
Unable to find iscsi record for
/sys/devices/pci0000:00/0000:00:06.0/0000:07:00.0/0000:08:01.1/host0/session1
iscsiadm: No records found
Unable to find iscsi record for
/sys/devices/pci0000:00/0000:00:06.0/0000:07:00.0/0000:08:01.3/host1/session2
iscsiadm: No records found
Unable to find iscsi record for
/sys/devices/pci0000:00/0000:00:06.0/0000:07:00.0/0000:08:01.1/host0/session1
iscsiadm: No records found
Unable to find iscsi record for
/sys/devices/pci0000:00/0000:00:06.0/0000:07:00.0/0000:08:01.3/host1/session2
iscsiadm: No records found
Unable to find iscsi record for
/sys/devices/pci0000:00/0000:00:06.0/0000:07:00.0/0000:08:01.1/host0/session1
iscsiadm: No records found
Unable to find iscsi record for
/sys/devices/pci0000:00/0000:00:06.0/0000:07:00.0/0000:08:01.3/host1/session2
kexec: loaded kdump kernel
Starting kdump: [OK]
There's no need to see these very early error messages; we can
remove the error output, which is actually normal for pure
hardware iscsi.
As for unexpected errors, we keep the error output in the
succeeding kdump_iscsi_get_rec_val() calls by not appending
"2>/dev/null".
Signed-off-by: Xunlei Pang <xlpang@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Currently, systemd uses 90s as the default mount unit timeout
(see "man 5 systemd-system.conf" for "DefaultTimeoutStartSec").
In some cases, although it works well in the 1st kernel, it's not
enough under kdump and results in mount timeouts, which in turn
result in kdump dumping failures.
We've met several such issues, so we decided to enlarge this default
value a little for kdump.
We know that dracut has a default initqueue timeout of 180s
("rd.retry"), so we finalized a slightly larger value, 300s, as kdump's
default timeout if there is no explicit "DefaultTimeoutStartSec=X"
specified by users.
"DefaultTimeoutStartSec=X" can be overridden by the individual mount
option "x-systemd.device-timeout=X"; users can specify their own
values as needed.
This patch achieves the purpose by creating a dedicated conf file
"/etc/systemd/system.conf.d/kdump.conf" with the content
"DefaultTimeoutStartSec=300s". This relies on the fact that all
the conf files are parsed by systemd and the last parsed one
wins if there are duplicate definitions.
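A sketch of the drop-in that ends up in the initramfs (the [Manager] section
header is required by systemd for system.conf.d drop-ins; treat the snippet
as illustrative, not the literal patch):
mkdir -p "${initdir}/etc/systemd/system.conf.d"
cat > "${initdir}/etc/systemd/system.conf.d/kdump.conf" << EOF
[Manager]
DefaultTimeoutStartSec=300s
EOF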
Signed-off-by: Xunlei Pang <xlpang@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
I noticed that the network is still enabled for local dumping,
as shown by the following kdump boot messages on my test machine
using a local disk as the dump target:
tg3.c:v3.137 (May 11, 2014)
tg3 0000:02:00.0 eth2: Tigon3 [partno(BCM95720) rev
(PCI Express) MAC address c8:1f:66:c9:35:0d
tg3 0000:02:00.0 eth2: attached PHY is 5720C
After some debugging, I found it is due to a misuse in the code below:
if [ is_generic_fence_kdump -o is_pcs_fence_kdump ]; then
_dep="$_dep network"
fi
The "if" condition always results in "true", and should be
changed as follows:
if is_generic_fence_kdump -o is_pcs_fence_kdump; then
_dep="$_dep network"
fi
After this, the network won't be involved in non-network dumping.
For dumps that require the network, such as nfs/ssh/iscsi/fcoe/etc.,
dracut will add the network module accordingly. The kdump initramfs size
can be reduced from 24MB to 17MB on some real hardware,
and from 19MB to 14MB on my kvm guest. Moreover, it avoids the
network (driver) initialization, thereby saving us more memory.
Signed-off-by: Xunlei Pang <xlpang@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
The /sys/modules/*/drivers sysfs entries do not exist anymore on newer
kernels, which means that the DRM modules would never be included.
Instead, check whether there is any device with a "drm" sysfs directory to
decide whether DRM modules need to be included.
Acked-by: Dave Young <dyoung@redhat.com>
Since the current dracut in Fedora already supports not always
mounting the root device, we can remove "root=X" from the command
line directly, and always get the dump target specified in
"/etc/kdump.conf" and mount it. If the dump target is located
on the root filesystem, we will add the root mount info explicitly
from the kdump side instead of from the dracut side.
For example, in the case of nfs/ssh/usb/raw/etc. (non-root) dumping,
kdump will not mount the unnecessary root fs after this change.
This patch removes "root=X" via "KDUMP_COMMANDLINE_REMOVE"
(if "default dump_to_rootfs" is specified, "root=X" is not removed),
and mounts non-root targets under "/kdumproot"; the root target
stays under "/sysroot" (to align with systemd's sysroot.mount).
After removing "root=X", we now add the root fs mount information
explicitly from the kdump side.
check_dump_fs_modified() is changed a little to avoid a rebuild when
the dump target is root, since we now add the root fs mount explicitly.
Signed-off-by: Xunlei Pang <xlpang@redhat.com>
Acked-by: Pratyush Anand <panand@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
We met a problem where the kdump emergency service failed to
start when the dump target timed out (we passed "rd.timeout=30"
to kdump); it reported "Transaction is destructive" messages:
[ TIME ] Timed out waiting for device dev-mapper-fedora\x2droot.device.
[DEPEND] Dependency failed for Initrd Root Device.
[ SKIP ] Ordering cycle found, skipping System Initialization
[DEPEND] Dependency failed for /sysroot.
[DEPEND] Dependency failed for Initrd Root File System.
[DEPEND] Dependency failed for Reload Configuration from the Real Root.
[ SKIP ] Ordering cycle found, skipping System Initialization
[ SKIP ] Ordering cycle found, skipping Initrd Default Target
[DEPEND] Dependency failed for File System Check on /dev/mapper/fedora-root.
[ OK ] Reached target Initrd File Systems.
[ OK ] Stopped dracut pre-udev hook.
[ OK ] Stopped dracut cmdline hook.
Starting Setup Virtual Console...
Starting Kdump Emergency...
[ OK ] Reached target Initrd Default Target.
[ OK ] Stopped dracut initqueue hook.
Failed to start kdump-error-handler.service: Transaction is destructive.
See system logs and 'systemctl status kdump-error-handler.service' for details.
[FAILED] Failed to start Kdump Emergency.
See 'systemctl status emergency.service' for details.
[DEPEND] Dependency failed for Emergency Mode.
This is because, in case of a root failure, initrd-root-fs.target
triggers the systemd emergency target, which requires the systemd
emergency service (actually kdump-emergency.service under kdump); our
kdump-emergency.service then starts kdump-error-handler.service with
"systemctl isolate" (see 99kdumpbase/kdump-emergency.service, which
replaces systemd's under kdump).
This leads to two contradictory systemd jobs being queued as one
atomic transaction:
job 1) the emergency service gets started by initrd-root-fs.target
job 2) the emergency service gets stopped due to "systemctl isolate"
thereby throwing "Transaction is destructive".
In order to solve this, we can utilize "IgnoreOnIsolate=yes" for both
kdump-emergency.service and kdump-emergency.target. Units with
"IgnoreOnIsolate=yes" won't be stopped when isolating another unit;
they can keep going as expected when triggered by a failure.
We add a kdump-emergency.target dedicated to kdump in the same way
as was done for kdump-emergency.service (i.e. it replaces systemd's
emergency.target under kdump), and add "IgnoreOnIsolate=yes" to both
of them.
Signed-off-by: Xunlei Pang <xlpang@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Acked-by: Pratyush Anand <panand@redhat.com>
[bhe: improve the patch log about "IgnoreOnIsolate"]
We replace "reserved_memory = XXXX"(default value is 8192) with
"reserved_memory = 1024" in /etc/lvm/lvm.conf used by "lvm2", it
can save 7MB peak memory consumption, so lower the possibility of
OOM under kdump.
For kdump, we don't have too many lvm targets, lvm2 locates in the
RAM(rootfs), so don't need that much memory, as discussed with lvm
people, they agreed that we use 1MB under kdump as long as there
are not that many lvm targets invloved.
We modify /etc/lvm/lvm.conf when "99kdumpbase" install() is executed,
because it is parsed after "90lvm" by dracut.
We add the code unconditionally with &>/dev/null to ignore errors, it
doesn't matter in case of "lvm" not included(i.e. there is no lvm.conf).
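A minimal sketch of the edit done in 99kdumpbase's install() (the sed pattern
is an assumption, not the literal patch):
sed -i -e 's/^\([[:space:]]*reserved_memory[[:space:]]*=\).*/\1 1024/' \
    "${initdir}/etc/lvm/lvm.conf" &>/dev/null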
Signed-off-by: Xunlei Pang <xlpang@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
The kdump_to_udev_name function name causes confusion, so change it to
kdump_get_persistent_dev, which sounds better.
Signed-off-by: Dave Young <dyoung@redhat.com>
Reviewed-by: Xunlei Pang <xlpang@redhat.com>
Although we use by-id in mkdumprd as the persistent policy for dump target
checking, it ends up not being used in the kdump 2nd kernel, because we call
the dracut function in module-setup.sh without a persistent policy specified,
which means kdump will copy the default "by-uuid" dev name.
Although by-uuid usually works, it is still better to fix this, as a raw disk
uuid makes no sense.
Also, there is no need to call the bind mount adjust function for raw dumps,
so add another switch case for raw dumps and clean up the functions, using
short variable names to keep the code shorter.
Signed-off-by: Dave Young <dyoung@redhat.com>
Reviewed-by: Xunlei Pang <xlpang@redhat.com>
When a remote dump target is specified, the kdump dracut module prefixes
'kdump-' to the network interface name (ifname), as kernel-assigned names
are not persistent. In fadump mode, the kdump dracut module is added to
the default initrd, which adds the 'kdump-' prefix to the ifname of
the production kernel itself. If fadump mode is disabled after this,
the kdump dracut module picks up the ifname that is already prefixed with
'kdump-' in the production kernel and adds another 'kdump-' to it,
making the ifname something like kdump-kdump-eth0 for the kdump kernel.
Eventually, the kdump kernel fails with the traces below:
dracut-initqueue[246]: RTNETLINK answers: Network is unreachable
dracut-initqueue[246]: arping: Device kdump-kdump-eth0 not available.
The ip command shows the below:
kdump:/# ip addr show kdump-kdump-eth0
2: kdump-kdump-eth: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 \
qdisc pfifo_fast state UNKNOWN qlen 1000
link/ether 22:82:87:7b:98:02 brd ff:ff:ff:ff:ff:ff
inet6 2002:903:15f:550:2082:87ff:fe7b:9802/64 scope global \
mngtmpaddr dynamic
valid_lft 2591890sec preferred_lft 604690sec
inet6 fe80::2082:87ff:fe7b:9802/64 scope link
valid_lft forever preferred_lft forever
kdump:/#
The trailing 0 of kdump-kdump-eth0 is missing in the ifname, probably
truncated due to the ifname length limit while setting it.
This patch fixes this by not adding the 'kdump-' prefix when it is
already present in the ifname.
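A sketch of the guard (helper and variable names are assumptions, not the
literal patch):
# only prefix kernel-named ethX interfaces that are not already prefixed
case "$_netdev" in
    kdump-*) _ifname="$_netdev" ;;
    eth*)    _ifname="kdump-$_netdev" ;;
    *)       _ifname="$_netdev" ;;
esac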
Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
Acked-by: Dave Young <dyoung@redhat.com>
There are some complaints about nfs kdump: users must mount
nfs beforehand, which may cause some overhead on the nfs server.
For example, with thousands of diskless clients deployed with
nfs dumping, each time a client boots up it triggers a kdump
rebuild and thus an nfs mount, resulting in thousands
of nfs requests concurrently imposed on the same nfs server.
We introduce a new way of specifying mount information via the
already existing "dracut_args" directive (to avoid adding extra
directives to /etc/kdump.conf); we will skip all the filesystem
mounting and checking for it. So it can be used in the
above-mentioned nfs scenario to avoid severe nfs server overhead.
Specifically, if any "--mount" information is specified via
"dracut_args" in /etc/kdump.conf, always use it as the final mount
without any validation (mounting or checking mount options,
fs size, etc.), so users are expected to ensure its correctness.
NOTE:
- Only one mount target is allowed via "dracut_args" globally.
- Dracut will create <mountpoint> if it doesn't exist in the kdump kernel;
<mountpoint> must be specified as an absolute path.
- Users should do a test first and ensure it works, because kdump does
not prepare the mount or check all the validity.
Reviewed-by: Pratyush Anand <panand@redhat.com>
Suggested-by: Dave Young <dyoung@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Signed-off-by: Xunlei Pang <xlpang@redhat.com>
Now dracut takes care of adding the module for an active watchdog, so we do
not need to specifically pass the iTCO_wdt and lpc_ich modules via
rd.driver.pre here.
Signed-off-by: Pratyush Anand <panand@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
There are several kinds of iSCSI modes RHEL currently supports:
- Pure hardware iSCSI
- iBFT iSCSI
- Pure software iSCSI
Except for the 1st one, where the firmware takes care of everything to
make it behave like a local disk, both the iBFT and pure software iSCSI
modes need to pass information to the kdump kernel so they can be
configured correctly.
Currently kdump treats iBFT mode as software iSCSI and collects
the related information to set up software iSCSI in the 2nd kernel,
even though dracut can detect and collect the information to set up iBFT
iSCSI for the 2nd kernel. This brings up 2 problems:
1) Redundant information about the related iSCSI is collected: one copy
by kdump, the other from dracut.
2) The 2 configurations in the 2nd kernel for a given session in the 1st
kernel could contain two "ip=xxx" cmdline options. This will cause a
cmdline handling error in dracut.
The 1st one is not critical, while the 2nd is. In order to avoid the above
2 problems, kdump needs to detect iBFT mode iSCSI and leave it to dracut.
This is what is done in this patch.
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Xunlei Pang <xlpang@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
The ifcfg file name for <netif> under "/etc/sysconfig/network-scripts/"
may not be "ifcfg-<netif>". For example, for "enp0s25" we are able to
generate its ifcfg as "/etc/sysconfig/network-scripts/ifcfg-enp0s25test"
via network-manager. If we always assume "ifcfg-<netif>" is there, we will
get the wrong result in some cases.
The issue can be resolved by using the new get_ifcfg_filename() introduced
by PATCH "kdump-lib: Add get_ifcfg_filename() to get the proper ifcfg file",
so we hereby change all the "ifcfg-<netif>" users to use get_ifcfg_filename().
Signed-off-by: Xunlei Pang <xlpang@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
dracut places the config in a random path while generating the
initramfs. Remove the duplicate prefix path ${initdir}.
Signed-off-by: Minfei Huang <mhuang@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
The system may have multiple default route entries. The following is an
example showing the details:
# ip -6 route list dev eth0
2620:52:0:1040::/64 proto kernel metric 256 expires 2591978sec
fe80::/64 proto kernel metric 256
default via fe80:52:0:1040::1 proto ra metric 1024 expires 1778sec hoplimit 64
default via fe80:52:0:1040::2 proto ra metric 1024 expires 1778sec hoplimit 64
Choose the first matching entry.
Signed-off-by: Minfei Huang <mhuang@redhat.com>
Acked-by: Baoquan He <bhe@redhat.com>
Dracut will die in the situation where dracut detects that dhcp should be
used to set up the ip address, but kdump passes a static ip address to it.
In commit 7ea50dc7a3, we started to use the 'permanent' option to get the
ip address in kdump_static_ip. If the network is set up statically, we will
get the ip address; otherwise we get none.
In commit c994a80698, which is used to support the ipv6 protocol, I missed
the 'permanent' option.
This is not a fixing patch; it just pulls something back to make kdump work
as before.
Signed-off-by: Minfei Huang <mhuang@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Due to the different formats of the ipv4 and ipv6 protocols, quote the
ipv6 address with brackets "[]" so that dracut can recognize it.
Signed-off-by: Minfei Huang <mhuang@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Kdump will parse the hostname to get the ip address if a hostname is
specified in /etc/kdump.conf. We currently get the ip address (ipv4 or ipv6,
depending on the DNS server) by using "getent hosts".
It is more reasonable to get all of the ip addresses (both ipv4 and ipv6)
which point to the hostname by using "getent ahosts", and to prefer the
ipv4 address if both the ipv4 and ipv6 addresses work.
The reason why we choose ipv4 as the preferred address is to solve the
issue where kdump fails to connect to the hostname machine (resolved as an
ipv6 address) because the DNS server is an ipv4 address in the 2nd kernel.
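A sketch of the lookup with the ipv4 preference (variable names assumed, not
the literal patch):
_addrs=$(getent ahosts "$_hostname" | awk '{ print $1 }' | sort -u)
# ipv6 addresses contain ':', so prefer a line without it
_addr=$(echo "$_addrs" | grep -v ':' | head -n 1)
[ -z "$_addr" ] && _addr=$(echo "$_addrs" | head -n 1)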
Signed-off-by: Minfei Huang <mhuang@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Previously, kdump saved the route to set up the network route in the 2nd
kernel for the ipv4 protocol. To support the ipv6 protocol, make kdump fetch
the correct nexthop, since the returned format is different.
In order to enhance kdump to support ipv6, support static ip for the
ipv6 protocol, which ipv4 already supports.
Introduce a new lib function get_remote_host, which is used to factor out
the ip address (ipv4 or ipv6) and hostname in /etc/kdump.conf.
Introduce a new lib function is_ipv6_address, which is used to determine
whether the passed ip address is ipv4 or ipv6.
Introduce a new lib function is_hostname, which is used to confirm
whether the passed parameter is a hostname, not an ip address.
Introduce a new function get_ip_route_field, which is used to factor out
the specified field in the ip route info.
Due to the different formats of the ipv4 and ipv6 protocols, quote the
ipv6 address with brackets "[]" so that dracut can recognize it.
Signed-off-by: Minfei Huang <mhuang@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
For now, kdump uses an ipv4 address for the dump directory, and it works if
ipv4 is enabled.
Once kdump starts to support the ipv6 protocol, we may set up only an ipv6
address. Modify the code to make kdump work with either the ipv4 or the
ipv6 protocol.
Signed-off-by: Minfei Huang <mhuang@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Currently kdump ignores the DNS config in /etc/resolv.conf when it
generates the initramfs. Most users are not concerned about this issue,
because they never use deployment tools, like Puppet, to configure machine
environments.
It is more convenient to pick up the DNS config from /etc/resolv.conf for
people who use deployment tools to configure machines concurrently.
Signed-off-by: Minfei Huang <mhuang@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Acked-by: Baoquan He <bhe@redhat.com>
We have added *wdt drivers to the kdump initramfs, but to improve this
further we can do the following:
(1) Load wdt drivers as early as possible so that we can save time before
the wdt timeout. Some drivers, like iTCO_wdt, can stop the watchdog during
driver initialization, so this gives kdump a better chance.
It can save time especially when some drivers take a long time to init, like
some storage and networking cards.
(2) Add only the wdt drivers in use to the kdump initrd instead of adding *wdt.
The wdt driver layer needs a change so that we can get the proper driver name
from /dev/watchdog. The question here is: are we sure the 1st kernel uses
/dev/watchdog instead of /dev/watchdog1? It needs more investigation.
(3) In case a driver cannot stop the watchdog (nowayout?) during module_init,
we need to load it as early as possible and kick the watchdog. Likely we can
use systemd's default watchdog functionality.
This patch addresses (1), specifically for iTCO_wdt. We have only tested
iTCO_wdt, thus this patch only adds this driver; other drivers need to be
investigated later to see whether they work this way.
Signed-off-by: Dave Young <dyoung@redhat.com>
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Minfei Huang <mhuang@redhat.com>
The ipv6 patchset is still under review; the commit was mistakenly
merged earlier, so let's revert it.
Revert "dracut-kdump: Use proper the known hosts entry in the file known_hosts"
This reverts commit 63476302aa.
Conflicts:
kdump-lib.sh
Signed-off-by: Minfei Huang <mhuang@redhat.com>
Signed-off-by: Dave Young <dyoung@redhat.com>
Kdump will dump the vmcore into an incorrect target directory if the target
is bind mounted.
As commented in the previous patch, we can construct the real path on
Atomic, which contains two parts: the bind mounted path and the
specified dump path. Then replace the path with the real path in
/etc/kdump.conf.
findmnt can find the real path for nfs even when the path is bind mounted,
so nfs works well with a bind-mounted path.
Signed-off-by: Minfei Huang <mhuang@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Acked-by: Baoquan He <bhe@redhat.com>
Currently kdump cannot parse the path correctly if the path contains a
duplicated "/". The following is an example to explain it in detail (the
directory /mnt is a mount point on which a block device is mounted):
path //mnt/var/crash
Then the warning is raised:
Force rebuild /boot/initramfs-3.19.1kdump.img
Rebuilding /boot/initramfs-3.19.1kdump.img
df: ‘/mnt///mnt/var/crash’: No such file or directory
/sbin/mkdumprd: line 239: [: -lt: unary operator expected
kexec: loaded kdump kernel
Starting kdump: [OK]
In the above case, kdump fails to check the fs size due to the incorrect
path.
In the kdump code flow, we cut out the mount point (/mnt) from the
path (//mnt/var/crash). But the mount point cannot match the path because
of the duplicated "/".
To fix it, strip the duplicated "/" first.
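For illustration, stripping the duplicated "/" can be as simple as:
_path=$(echo "//mnt/var/crash" | tr -s /)
# -> /mnt/var/crash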
Signed-off-by: Minfei Huang <mhuang@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Acked-by: Baoquan He <bhe@redhat.com>
Harald warned that it's dangerous to use /tmp/*$$* in shell scripts of dracut
modules.
Quoting him below:
***************************
This can be exploited so easily and used to overwrite e.g. /etc/shadow.
The only thing you have to do is waiting until the next time the kdump
initramfs is generated on a kernel update.
If at all, please use "$initdir/tmp/" because $initdir is a mktemp generated
directory with a non-guessable name!
**************************
So clean this up in this patch.
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Steve found a bug: when a disk is mounted on /var and no path is specified
in /etc/kdump.conf, the vmcore is dumped into /var/crash on that disk,
not /crash on that disk.
This is because, when writing the parsed path into /tmp/$$-kdump.conf
in default_dump_target_install_conf() of mkdumprd, the sed command below
is used. So if no path is specified at all, this sed command won't
add it to /tmp/$$-kdump.conf. Then in the 2nd kernel the default
path, namely "/var/crash", is taken if there is no path in /etc/kdump.conf
in the 2nd kernel.
sed -i -e "s#$_save_path#$_path#" /tmp/$$-kdump.conf
According to Dave Young's suggestion, erase the old path line and then
insert the parsed path. This fixes it.
v2->v3:
erase the old path line and then insert the parsed path.
sed -i -e "s#^path[[:space:]]\+$_save_path##" /tmp/$$-kdump.conf
echo "path $_path" >> /tmp/$$-kdump.conf
v3->v4:
Change the sed pattern, erase lines starting with "path" and then
insert the parsed path.
sed -i -e "s#^path.*##" /tmp/$$-kdump.conf
echo "path $_path" >> /tmp/$$-kdump.conf
v4->v5:
Chaowang suggested using sed command d to remove the whole line
like below:
sed -i "/^path/d" /tmp/$$-kdump.conf
echo "path $_path" >> /tmp/$$-kdump.conf
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: WANG Chao <chaowang@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Chao pointed out that it's better to use get_option_value to get
a specific config value.
Also, there's a potential risk when using the sed command below to
do the replacement:
sed -i -e "s#$_save_path#$_path#" /tmp/$$-kdump.conf
Say the user configures kdump.conf as follows. Then sed may
replace "/var/crash/post.sh" with something else, depending on the
mount point.
kdump_post /var/crash/post.sh
path /var/crash
So clean these up in this patch.
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: WANG Chao <chaowang@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Certain kernel parameters, like min_free_kbytes, can be configured at runtime
using sysctl. While this is useful in the first kernel, it can lead to
unnecessary failures, like OOM, in the kdump kernel. This patch enforces
default values for all sysctl parameters in the kdump kernel by removing the
sysctl.conf and sysctl.d/* files.
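A sketch of the removal in the kdump dracut module (paths follow the message
above; treat the snippet as illustrative, not the literal patch):
rm -f "${initdir}/etc/sysctl.conf"
rm -rf "${initdir}/etc/sysctl.d"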
Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
Acked-by: Dave Young <dyoung@redhat.com>
Acked-by: Baoquan He <bhe@redhat.com>
Acked-by: WANG Chao <chaowang@redhat.com>
Once you log in using ssh, ssh stores the known hosts entry in the
local ~/.ssh/known_hosts. From then on, we can log in using ssh automatically.
ssh checks the ~/.ssh/known_hosts entries when you want to log in to the
target if the option StrictHostKeyChecking=yes/ask is set in the config or
on the command line; the default value of StrictHostKeyChecking is ask.
And kdump's use of ssh appends the option StrictHostKeyChecking=yes on the
command line.
We can use the following ip to connect to the peer machine if ipv6 is
enabled:
fe80::5054:ff:fe48:ca80%eth0
Obviously, the above ip contains the ethX.
Kdump adds the prefix "kdump-" before ethX to avoid a netdevice name
conflict in case the netdevice is named ethX in the 2nd kernel. So the
ip address changes to fe80::5054:ff:fe48:ca80%kdump-eth0.
Kdump would have to log in to the target manually in the 2nd kernel, because
of the option StrictHostKeyChecking=yes and the missing known hosts entry
in the local ~/.ssh/known_hosts. Hence dumping the core will fail.
In order to log in automatically using ssh, we should add the prefix
"kdump-" before ethX in the local ~/.ssh/known_hosts.
Signed-off-by: Minfei Huang <mhuang@redhat.com>
For ethX, setting up the network in the 2nd kernel may fail because the
mapping between ethernet device names and MACs changes.
Commit ba7660f37e fixed this issue by adding the prefix "kdump-" before
ethX. But the network fails to work in static route mode because of this
commit.
Here is the config which is used to set up the static route:
rd.route=192.168.201.215:192.168.200.137:eth1
Obviously, the static route config contains the ethX. But the network
device is named kdump-ethX in the 2nd kernel, so the static route config
will fail to apply. To fix it, we should identify the network device.
Add the prefix "kdump-" before the ethX in the static route config so it is
set up successfully in the 2nd kernel.
Signed-off-by: Minfei Huang <mhuang@redhat.com>
Acked-by: WANG Chao <chaowang@redhat.com>
Acked-by: Baoquan He <bhe@redhat.com>
It is annoying that intermediate results are shown in the terminal. Do not
print anything to standard output; use "grep -q".
Signed-off-by: Minfei Huang <mhuang@redhat.com>
Acked-by: Baoquan He <bhe@redhat.com>
Acked-by: WANG Chao <chaowang@redhat.com>
Previously, to solve static route issues, all routes which go
through a specific dev were saved in the 1st kernel and then added
in the 2nd kernel. Because we use the search pattern below, an exception
can happen:
/sbin/ip route show | grep -v default | grep "^[[:digit:]].*via.* $_netdev"
That exception is a corner case which happens when 2 machines are connected
directly by cable and the 2 network interfaces are configured in
different network subnets. E.g. there are 2 machines A and B:
A:ens10 < ------ > B:ens9
A:ens10 inet 192.168.100.111/24 scope global ens10
route that needs to be added on A:
192.168.110.0/24 dev ens10
B:ens9 inet 192.168.110.222/24 scope global ens9
route that needs to be added on B:
192.168.100.0/24 dev ens9
Now if A wants to dump to B, the route "192.168.110.0/24 dev ens10"
has to be saved and added in the 2nd kernel.
So in this patch the "ip route get to $target" command is executed, so an
exact route for reaching that target can be obtained. With this, static
routes work and the corner case is fixed too.
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Marc Milgram <mmilgram@redhat.com>
Acked-by: WANG Chao <chaowang@redhat.com>
In case of iscsi boot, the kernel cmdline will contain an ip=xxx kernel
parameter for dracut to set up the iscsi root in the initramfs. For example:
"root=xxx ip=192.168.3.26:::255.255.255.0:localhost.localdomain:eno19:none ..."
dracut doesn't allow duplicate ip configurations for the same network card.
dracut will not ignore either of the duplicates; instead, it refuses to
continue:
[ 15.876306] dracut: FATAL: For argument 'ip=192.168.3.26:::255.255.255.0:localhost.localdomain:eno19:none'\n
Duplication configurations for 'eno19'
[ 16.055513] dracut: Refusing to continue ev argument for multiple ip= lines
That's why in our code we don't add a duplicate ip conf when handling
the same network card a second time. But we never considered the case
where an ip conf is already present on the kernel cmdline for some special
purpose, for example, iscsi boot.
Now we also look up /proc/cmdline for an ip conf. If it exists, we use the
existing one. The existing one should work out of the box, because dracut
will handle it in the second kernel just like it does in the first kernel.
That said, the network card will be brought up and the root disk will be
mounted under /sysroot.
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
A node could be referenced by its short hostname (hostname -s) in the cluster
configuration:
[root@virt-068 /]# pcs status nodes
Pacemaker Nodes:
Online: virt-066 virt-067 virt-068
Standby:
Offline:
We didn't know this before. Martin noticed the kdump failure and provided
this fix. Thanks to Martin.
Signed-off-by: WANG Chao <chaowang@redhat.com>
Tested-by: Martin Juricek <mjuricek@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
If a target address is not local and its route is different from the
default gateway, a specific route to this target address needs to be
added. E.g. the target is 192.168.200.222:
sh> ip route show
default via 192.168.122.1 dev eth0 proto static metric 1024
192.168.200.0/24 via 192.168.100.222 dev ens10 proto static metric 1
In this patch, get the route to the specific target address and store
it as cmdline, namely /etc/cmdline.d/45-route-static.conf. The
route fields are separated by colons, as below, so the stored
route can be parsed when the kdump kernel boots up:
192.168.200.0/24:192.168.100.222:ens10
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
This patch introduces a new kdump-capture.service which is used to run
kdump.sh.
kdump-capture.service has OnFailure=emergency.target and
OnFailureIsolate=yes set. When kdump.sh fails, the kdump emergency
service will be triggered and we enter the error handling path.
In the 2nd kernel, the default target for systemd is initrd.target, so we
put kdump-capture.service in initrd.target.wants/; by that, the system
starts kdump-capture as part of the boot process.
kdump.sh used to run in the dracut-pre-pivot hook. Now kdump-capture.service
is placed after dracut-pre-pivot.service, and the other dependencies are all
copied from dracut-pre-pivot.service. So the start point of
kdump.sh will be almost the same as it used to be.
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Currently, upon failure, the kdump script might not be called at all, and it
might not be able to execute the default action. This results in a hang.
That is because we disable the emergency shell and rely on kdump.sh being
invoked through the dracut-pre-pivot hook. But it can happen that we never
call into the dracut-pre-pivot hook, because certain systemd targets could
not be reached due to failures in their dependencies. In those cases the
error handling code does not run and the system hangs. For example:
sysroot-var-crash.mount --> initrd-root-fs.target --> initrd.target \
--> dracut-pre-pivot.service --> kdump.sh
If the /sysroot/var/crash mount fails, initrd-root-fs.target will not be
reached. Then initrd.target will not be reached,
dracut-pre-pivot.service won't run, and finally kdump.sh won't run.
To solve this problem, we need to separate the error handling code from
the dracut-pre-pivot hook, so that every time a failure shows up, the
separated code can be called by the emergency service.
By default systemd provides an emergency service which drops us into a
shell upon every critical failure. It's very convenient for us to
re-use the systemd emergency framework, because we don't have to
touch the other parts of systemd. We can use our own script instead of
the default one.
This new scheme overrides the emergency shell and replaces it with the kdump
error handling code, and this code will do the error handling as needed.
Now we no longer rely on the dracut-pre-pivot hook always running. Instead,
whenever an error happens that is serious enough for the emergency shell
to run, the kdump error handler will run.
dracut-emergency is also replaced by the kdump error handler, and it's
enabled again all the way down. So all failures (including systemd
and dracut ones) in the 2nd kernel can be captured and trigger the kdump
error handler.
dracut-initqueue is a special case: it calls "systemctl start
emergency" directly, not via "OnFailure=emergency". In case of failure,
emergency is started, but not in isolation mode, which means
dracut-initqueue is still running. On the other hand, emergency will
call dracut-initqueue again when the default action is dump_to_rootfs.
systemd would block on the last dracut-initqueue, waiting for the first
instance to exit, which leaves us hanging. It looks like the following:
dracut-initqueue (running)
--> call dracut-emergency:
--> dracut-emergency (running)
--> kdump-error-handler.sh (running)
--> call dracut-initqueue:
--> blocking and waiting for the original instance to exit.
To fix this, I'd like to introduce a wrapper emergency service. This
emergency service replaces both the systemd and the dracut emergency,
and does nothing but isolate to the real kdump error handler
service:
dracut-initqueue (running)
--> call dracut-emergency:
--> dracut-emergency isolate to kdump-error-handler.service
--> dracut-emergency and dracut-initqueue will both be stopped
and kdump-error-handler.service will run kdump-error-handler.sh.
In a normal failure case, this still works:
foo.service fails
--> trigger emergency.service
--> emergency.service isolates to kdump-error-handler.service
--> kdump-error-handler.service will run kdump-error-handler.sh
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Extract functions from kdump.sh, and construct kdump-lib-initramfs.sh as
the kdump common functions/variables library.
kdump-lib-initramfs.sh will include kdump-lib.sh, because it will use
functions from there. IOW, kdump-lib-initramfs.sh will be a superset
of kdump-lib.sh.
So after this cleanup:
- scripts running in the 1st kernel only have to include kdump-lib.sh
- scripts running in the 2nd kernel only have to include kdump-lib-initramfs.sh
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
We met a problem where eth0 ends up being eth1 and eth1 being eth0
between the 1st and 2nd kernel. Because we pass ifname=eth0:$mac to force
it to be named eth0, and since "eth0" is already taken by the other NIC,
udev fails to bring up the NIC we want, and thus kdump fails.
Kernel-assigned network interface names are not persistent. So if the first
kernel is using kernel-assigned interface names, force it to use
"kdump-" prefixed names in the second kernel.
For ethX, we put the prefix "kdump-" before it, so in the 2nd kernel ethX
will be named "kdump-ethX". That way we can avoid the naming conflict.
We only need to change the ethernet card names; that means we never prefix
bridge, vlan, bond or team device names, because those names are assigned
when they're created by userspace.
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
We handle different types of devices for vlan. For each type, it should
write different options to vlan.conf in each control path.
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
NetworkManager changed the format of ifcfg-device files. They may define
static IP addresses with the following format:
IPADDR0=192.168.122.100
PREFIX0=24
There may be up to 255 ip addresses for a network device, each with a unique
number tagged onto the end of IPADDR and PREFIX.
Prior to this fix, kdump only handled static ip addresses defined with
IPADDR=192.168.122.100
PREFIX=24
i.e. without the number.
The solution is to use "ip" commands to find the correct network information.
Tested with both static and dynamic IP addresses.
v2: Fixed a local variable that was set incorrectly
v3: Fix iscsi case
Signed-off-by: Marc Milgram <mmilgram@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Acked-by: Baoquan He <bhe@redhat.com>
Acked-by: WANG Chao <chaowang@redhat.com>
If the default target is a separate disk, the related information needs to
be stored in the /etc/kdump.conf of the kdump initramfs. This includes the
disk info, which will help deduce the dump code and the path which the
vmcore will be written into.
v5->v7:
No v6 for this patch. Just use the newly introduced function
is_fs_type_nfs in default_dump_target_install_conf().
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Adds two new options to kdump.conf to be able to configure fence_kdump
support for generic clusters:
fence_kdump_args <arg(s)>
- Command line arguments for fence_kdump_send (it can contain all
valid arguments except hosts to send notification to)
fence_kdump_nodes <node(s)>
- List of cluster node(s) separated by space to send fence_kdump
notification to (this option is mandatory to enable fence_kdump)
The generic cluster fence_kdump configuration takes precedence over the older
method of fence_kdump configuration for Pacemaker clusters. This means
that if fence_kdump is configured using the above options in kdump.conf, the
old Pacemaker configuration is not used even if it exists.
Bug-Url: https://bugzilla.redhat.com/1078134
Signed-off-by: Martin Perina <mperina@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Renames kdump_check_fence_kdump to kdump_configure_fence_kdump to clearly
identify what this function does.
Bug-Url: https://bugzilla.redhat.com/1078134
Signed-off-by: Martin Perina <mperina@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Renames is_fence_kdump to is_pcs_fence_kdump to identify that this
method should be used to detect fence_kdump configuration only in
Pacemaker clusters.
Bug-Url: https://bugzilla.redhat.com/1078134
Signed-off-by: Martin Perina <mperina@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Renames FENCE_KDUMP_NODES variable to FENCE_KDUMP_NODES_FILE to
distinguish it from values read from fence_kdump_nodes option in
kdump.conf (introduced in following patches).
Bug-Url: https://bugzilla.redhat.com/1078134
Signed-off-by: Martin Perina <mperina@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Renames FENCE_KDUMP_CONFIG variable to FENCE_KDUMP_CONFIG_FILE to
distinguish it from values read from fence_kdump_args option in
kdump.conf (introduced in following patches).
Bug-Url: https://bugzilla.redhat.com/1078134
Signed-off-by: Martin Perina <mperina@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
The old implementation in installkernel() does not return success when
the added wdt module is not iTCO_wdt; the return value comes from that
comparison. This is not correct and causes the kdump load to fail.
By moving the wdt module insertion to the right place, this is
fixed.
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
When a watchdog is enabled in the 1st kernel, the crash dump in the kdump
kernel will be interrupted if the watchdog times out. Since some
wdt drivers can stop the watchdog when the driver is loaded,
e.g. iTCO_wdt, this can benefit crash dump.
Add the watchdog driver which is active in the system to the initramfs;
loading it can stop the watchdog.
For now, put this addition in 99kdumpbase.
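A minimal sketch of the idea, assuming the name of the active watchdog driver
has already been determined (the detection logic in the actual patch may differ):

    # in the 99kdumpbase module-setup.sh installkernel() hook
    _active_wdt="iTCO_wdt"    # example value; the real code detects the loaded driver
    instmods "$_active_wdt"   # dracut helper that pulls the module into the initramfs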
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
In ssh dump, we use a random-seed to feed /dev/urandom. Since the systemd
random-seed file could change location, it's better to create our
own random-seed.
The discussion is listed below for future reference:
https://lists.fedoraproject.org/pipermail/kexec/2014-January/000340.html
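A minimal sketch of creating such a seed while building the initramfs (the file
name and size are assumptions):

    # in the kdump dracut module install(); $initdir is provided by dracut
    dd if=/dev/urandom of="$initdir/etc/kdump-random-seed" bs=512 count=1 2>/dev/null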
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
In the remote dump case, if fence kdump is also configured, chances are
that the same network interface will be set up more than once:
once for network dump, and again for fence kdump. The result
is two or more duplicate ip= configurations in 40ip.conf.
These are exact duplicates, yet dracut will refuse to continue and
raise a fatal error if there are duplicate configurations for the same
interface. So we have to avoid adding these duplicates.
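One way to avoid the duplicates is to check the generated file before appending
(variable names and the ip= form are illustrative, not the actual patch):

    _ip_conf="ip=$_netdev:dhcp"
    _conf_file="$initdir/etc/cmdline.d/40ip.conf"
    # append only if this exact line is not already present
    grep -q -F -x -- "$_ip_conf" "$_conf_file" 2>/dev/null || echo "$_ip_conf" >> "$_conf_file"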
Signed-off-by: WANG Chao <chaowang@redhat.com>
Tested-by: Zhi Zou <zzou@redhat.com>
Tested-by: Marek Grac <mgrac@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
This patch is used to set up the fence kdump environment when building the kdump
initrd:
1. Check if it's a cluster and fence_kdump is configured.
2. Get all the nodes in the cluster and pass them to the 2nd kernel via
/etc/fence_kdump_nodes
3. Set up the network interface which will be used by the fence kdump notifier in
the 2nd kernel.
4. Install fence kdump notifier (/usr/libexec/fence_kdump_send) to
initrd.
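A minimal sketch of steps 2 and 4, assuming the node list has already been
collected into $nodes (names are illustrative):

    # step 2: pass the cluster nodes to the 2nd kernel
    echo "$nodes" > "$initdir/etc/fence_kdump_nodes"
    # step 4: install the notifier binary into the initrd
    inst /usr/libexec/fence_kdump_send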
Signed-off-by: WANG Chao <chaowang@redhat.com>
Tested-by: Zhi Zou <zzou@redhat.com>
Tested-by: Marek Grac <mgrac@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
From: Wade Mealing <wmealing@redhat.com>
The RHEL 5 release of mkdumprd allowed for comments in the kdump config
file as shown below:
net 192.168.1.1 # this is the comment part
This patch strips them out during processing, but leaves the configuration
file in its original condition.
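For example, the trailing comment can be stripped while reading the config (one
possible expression; the exact one used by the patch may differ):

    # drop everything from '#' to end of line, plus trailing whitespace
    sed -e 's/#.*$//' -e 's/[[:space:]]*$//' /etc/kdump.conf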
Signed-off-by: Wade Mealing <wmealing@redhat.com>
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Currently in the whole kdump framework, we have some common functions
used not only across the mkdumprd and dracut contexts, but also across the 1st
and 2nd kernels. We defined these functions in each script, which
is obviously not ideal.
So let's introduce kdump-lib.sh for the shared functions and put it
at /lib/kdump/kdump-lib.sh.
It starts small, as you can see, only 3 functions are extracted. But in
the future more and more common functions can be added.
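Each consumer then just sources the shared library:

    . /lib/kdump/kdump-lib.sh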
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
In kdump_setup_bridge/bond/team(), we use _dev as a global variable.
That causes the following issue when the network is br0 over bond0:
-> kdump_setup_bridge br0: _dev is set to "bond0" as a brif
-> kdump_setup_bond bond0: _dev is modified to be "eth0" as a bond slave
-> (jump back) kdump_setup_bridge br0: we really need _dev to be
"bond0", not "eth0".
_dev must be a local variable because it is used in multiple places.
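A sketch of the fix (not the actual function body): declare the variable local in
each function so nested calls can't clobber it.

    kdump_setup_bond() {
        local _netdev="$1"
        local _dev              # local, so the caller's _dev is not modified
        for _dev in $(cat "/sys/class/net/$_netdev/bonding/slaves"); do
            echo "setting up bond slave $_dev"
        done
    }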
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Chaowang measured the selinux load_policy memory usage; it needs ~50M.
That's too much in the kdump 2nd kernel and causes more OOMs than before.
Here is the findings from Vivek:
- If we don't load policy or don't do restorecon, the kernel automatically
uses the label for files specified by the file
/sys/fs/selinux/initial_contexts/file
On my system this value is "system_u:object_r:file_t:s0". The kernel
enforces this label on a file if it is not labeled. That's the reason
you see the above label on the vmcore file when the selinux policy was not
loaded in the second kernel or restorecon was not done.
Note: I did some testing with rhel6 and there I also see the file_t context.
Not sure why that's the case.
- Relabeling of the root file system during boot happens if the file
/.autorelabel is present. This file is touched by the systemd service
fedora-autorelabel-mark.service, and this file comes from the initscripts
package.
So if this service thinks the system was booted with selinux disabled,
it will put this file on root, and the next time the system boots with
selinux enabled, relabeling is enforced by the fedora-autorelabel.service
service.
- In our case relabeling is not happening after saving the vmcore because
there does not seem to be any fedora-autorelabel-mark.service running
from the initramfs context. It looks like this service runs after switching
to the real root.
Aug 08 10:44:13 vm9-f19 systemd[1]: Started Mark the need to relabel after reboot.
- The selinux policy is now loaded by systemd after the root switch has taken
place.
Aug 08 10:44:10 vm9-f19 systemd[1]: Successfully loaded SELinux policy in 357.693ms.
So now we know why selinux relabeling is not taking place: the
systemd service which marks the file system for autorelabeling
does not run from the initramfs context.
And it might not make sense to run this service from the initramfs context before
switch root. In general it makes sense to first switch to root, load the
selinux policy if needed and then check whether to mark this filesystem
for relabel or not. Ideally root is mounted read-only before that. It is
just that we break this rule for kdump. So as long as we make sure we
relabel files created by kdump after booting back, things should be fine.
Since we will relabel the vmcore dir after reboot, let's remove
the selinux dracut module dependency to avoid load_policy in the 2nd kernel.
If in the future load_policy memory usage shrinks to an acceptable level,
or there's a better solution, we can add selinux load_policy back later.
Signed-off-by: Dave Young <dyoung@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Currently in the initrd, the hardware clock is always assumed to use UTC time
format and the system time zone is also UTC. Thus the system time isn't correct
if the hw clock is localtime or we're using another time zone in the real root.
To fix this, install /etc/adjtime and /etc/localtime to initrd.
Previously, this functionality was implemented in dracut base module:
commit 77364fd
Author: WANG Chao <chaowang@redhat.com>
base: setup correct system time and time zone in initrd
But some people complained that a normal boot initrd needs to be rebuilt
every time the time zone is changed. So let's fix it on our side.
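A minimal sketch of the corresponding install step in the kdump dracut module,
using dracut's inst helper:

    # module-setup.sh install()
    inst /etc/adjtime
    inst /etc/localtime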
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Currently when action_on_fail is enabled, the emergency_shell won't be called
either. In kdump, even though the user specifies the default action as emergency_shell,
dracut still skips it. Now change the implementation of action_on_fail to depend
on a file which is created by kdump when making the kdump initrd and removed
at the beginning of kdump. This solves the explicit emergency_shell problem.
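A minimal sketch of the flag-file approach (the file name is an assumption, not
necessarily what the patch uses):

    # mkdumprd: create the flag inside the kdump initrd
    touch "$initdir/etc/kdump_action_on_fail"

    # kdump.sh: remove the flag at the beginning of the dump
    rm -f /etc/kdump_action_on_fail

    # dracut failure handling: skip the emergency shell only while the flag exists
    [ -f /etc/kdump_action_on_fail ] && return 0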
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Acked-by: WANG Chao <chaowang@redhat.com>
When a directory is empty, echo * will output "*", not an empty string. That's
not intended.
Also it looks a little bit nicer now.
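For example:

    $ mkdir /tmp/empty && cd /tmp/empty
    $ echo *
    *

In bash, "shopt -s nullglob" is one way to make an unmatched glob expand to
nothing, though the patch may take a different approach.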
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Baoquan He <bhe@redhat.com>
The kernel has exported the mac address for each interface; we can get it
directly instead of parsing the output of "ip address show".
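For example:

    cat /sys/class/net/eth0/address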
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Baoquan He <bhe@redhat.com>
Currently we pass the runtime mac addr to the 2nd kernel to set up the bonding
interface. But the bonding master will modify its slaves' mac addr, so an
incorrect mac addr is passed to the 2nd kernel. Thus dracut in the 2nd kernel
can't find the expected slaves and bonding will fail.
Fix this issue by using the perm address.
Tested in a Fedora 19 KVM guest configured with bonding over two slaves.
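The permanent (factory) address of a slave is unaffected by the bonding master and
can be read, for example, with ethtool (output is illustrative):

    ethtool -P eth0
    Permanent address: 52:54:00:12:34:56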
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Baoquan He <bhe@redhat.com>