kdump-error-handler.sh does nothing except call three functions, so it
can easily be merged into kdump.sh by using a parameter to run the
error handling routine.
kdump-lib-initramfs.sh was created to hold the three shared functions
and related code, so by merging these two files, kdump-lib-initramfs.sh
can be simplified considerably.
Follow-up commits will clean up kdump-lib-initramfs.sh.
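A minimal sketch of the idea (the parameter name and the helper function
are hypothetical, not the exact code):
    # at the top of kdump.sh: run only the error handling routine when asked
    if [ "$1" = "--error-handler" ]; then
        do_error_handling    # hypothetical helper wrapping the three functions
        exit $?
    fi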
Signed-off-by: Kairui Song <kasong@redhat.com>
Acked-by: Philipp Rudo <prudo@redhat.com>
Add a helper `kdump_read_conf` to replace read_strip_comments.
`kdump_read_conf` does a few more things:
- remove trailing spaces.
- format the content, collapsing duplicated spaces between name and value.
- read from KDUMP_CONFIG_FILE (/etc/kdump.conf) directly, to avoid pasting
the "/etc/kdump.conf" path everywhere in the code.
- check whether the config file exists, just in case.
Also unify the environment variable: KDUMP_CONFIG_FILE now stands for
the default config location.
This helps avoid some shell pitfalls with spaces when reading the config.
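A minimal sketch of what such a helper could look like (the sed
expression is illustrative, not necessarily the exact code):
    kdump_read_conf()
    {
        # bail out quietly if the config file is missing
        [ -f "$KDUMP_CONFIG_FILE" ] || return

        # strip comments, trailing/leading spaces, and collapse the gap
        # between option name and value into a single space
        sed -n -e "s/#.*//;s/\s*$//;s/^\s*//;s/\(\S\+\)\s*\(.*\)/\1 \2/p" "$KDUMP_CONFIG_FILE"
    }
Callers can then simply do `kdump_read_conf | while read _opt _val; do ...; done`
without worrying about comments or stray whitespace.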
Signed-off-by: Kairui Song <kasong@redhat.com>
Acked-by: Philipp Rudo <prudo@redhat.com>
Previously, when dumping the vmcore to a remote machine through ssh,
the files were created remotely with permissions derived from the
default umask value, which made them accessible to anyone on the
remote machine.
This patch fixes the security issue by setting a customized umask value
before the files are created on the remote machine.
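A minimal sketch of the idea (variable names are illustrative): set a
restrictive umask as part of the remote command, so files created on the
dump server are not world-readable.
    # create the remote dump directory and write the core with umask 0077
    ssh -q $SSH_OPTIONS "$REMOTE_HOST" "umask 0077 && mkdir -p $SAVE_PATH"
    $CORE_COLLECTOR /proc/vmcore | ssh -q $SSH_OPTIONS "$REMOTE_HOST" \
        "umask 0077 && dd of=$SAVE_PATH/vmcore-incomplete"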
Signed-off-by: Tao Liu <ltao@redhat.com>
Acked-by: Kairui Song <kasong@redhat.com>
Currently, kdump fails to save the vmcore when using scp over IPv6.
The reason is that scp requires IPv6 addresses to be enclosed in
square brackets, while ssh does not.
Let's enclose the IPv6 address in square brackets for the scp dump.
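A minimal sketch of the quoting (helper and variable names are
illustrative):
    # scp requires IPv6 literals to be wrapped in brackets; ssh does not
    if is_ipv6_address "$remote_host"; then
        remote_host="[$remote_host]"
    fi
    scp -q $SSH_OPTIONS "$DUMP_PATH/vmcore" "$remote_user@$remote_host:$SAVE_PATH/"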
Signed-off-by: Lianbo Jiang <lijiang@redhat.com>
Acked-by: Pingfan Liu <piliu@redhat.com>
Currently, if saving the vmcore fails, the final failure information is
not saved to kexec-dmesg.log, because the log is saved before the final
message is printed, so that message (marked with '^^^' below) never
makes it into the log file (kexec-dmesg.log).
For example:
[1] console log:
[ 3.589967] kdump[453]: saving vmcore-dmesg.txt to /sysroot//var/crash/127.0.0.1-2020-11-26-14:19:17/
[ 3.627261] kdump[458]: saving vmcore-dmesg.txt complete
[ 3.633923] kdump[460]: saving vmcore
[ 3.661020] kdump[465]: saving vmcore failed
^^^^^^^^^^^^^^^^^^^^
[2] kexec-dmesg.log:
Nov 26 14:19:17 kvm-06-guest25.hv2.lab.eng.bos.redhat.com kdump[453]: saving vmcore-dmesg.txt to /sysroot//var/crash/127.0.0.1-2020-11-26-14:19:17/
Nov 26 14:19:17 kvm-06-guest25.hv2.lab.eng.bos.redhat.com kdump[458]: saving vmcore-dmesg.txt complete
Nov 26 14:19:17 kvm-06-guest25.hv2.lab.eng.bos.redhat.com kdump[460]: saving vmcore
Let's improve it in order to avoid the loss of important information.
Signed-off-by: Lianbo Jiang <lijiang@redhat.com>
Acked-by: Kairui Song <kasong@redhat.com>
Let's use the logger in the second kernel and collect its kernel ring
buffer (dmesg).
Signed-off-by: Lianbo Jiang <lijiang@redhat.com>
Acked-by: Kairui Song <kasong@redhat.com>
The following scenario is observed:
kdump: kdump_pre script exited with non-zero status!
[ 5.104841] systemd[1]: Shutting down.
[ 5.122162] printk: systemd-shutdow: 27 output lines suppressed due to ratelimiting
kdump: dump target is /dev/mapper/rhel_hpe--dl380pgen8--02--vm--12-root
kdump: saving to /sysroot//var/crash/127.0.0.1-2020-06-27-03:55:01/
kdump: saving vmcore-dmesg.txt
kdump: saving vmcore-dmesg.txt complete
kdump: saving vmcore
Checking for memory holes : [ 0.0 %] / Checking for memory holes : [100.0 %] | [ 5.516573] systemd-shutdown[1]: Syncing filesystems and block devices.
[ 5.519515] systemd-shutdown[1]: Sending SIGTERM to remaining processes...
It is caused by the following script fragment:
if [ $? -ne 0 ]; then
echo "kdump: kdump_pre script exited with non-zero status!"
do_final_action
fi
When do_final_action runs, a systemd service is forked to do the reboot;
then the subshell returns and the parent continues to execute. Place an
"exit 1" there to stop execution and mark the kdump service as failed.
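With the fix, the fragment would look roughly like this:
    if [ $? -ne 0 ]; then
        echo "kdump: kdump_pre script exited with non-zero status!"
        do_final_action
        # do_final_action only schedules the reboot via systemd; stop here
        # so the kdump service is marked as failed instead of continuing
        exit 1
    fi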
Signed-off-by: Pingfan Liu <piliu@redhat.com>
Acked-by: Kairui Song <kasong@redhat.com>
Commit 61e0169 changed the definition of the dump_fs function, so we
need to do a mount target conversion before calling it.
Signed-off-by: Kairui Song <kasong@redhat.com>
This patch executes the binary and script files in /etc/kdump/{pre.d,post.d},
just like the kdump_pre and kdump_post directives written in /etc/kdump.conf.
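A minimal sketch of how one of those directories could be processed (the
loop body is illustrative):
    # run every executable dropped into /etc/kdump/pre.d, mirroring kdump_pre
    for _file in /etc/kdump/pre.d/*; do
        if [ -x "$_file" ]; then
            "$_file"
            [ $? -ne 0 ] && echo "kdump: $_file exited with non-zero status!"
        fi
    done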
Signed-off-by: Shinichi Onitsuka <onitsuka.shinic@fujitsu.com>
Acked-by: Pingfan Liu <piliu@redhat.com>
This reverts commit cee618593c.
Upstream dracut now provides a way to mark the network as mandatory by
appending the "rd.neednet" parameter, so we should use that instead.
Signed-off-by: Kairui Song <kasong@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
With FADump support added on the POWERNV platform, enable the scripts to
capture /proc/vmcore. Also, if CONFIG_OPAL_CORE is enabled, OPAL core
is preserved and exported on POWERNV platform. So, offload OPAL core,
if it is available.
Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
Acked-by: Kairui Song <kasong@redhat.com>
The dracut initqueue may quit immediately and won't trigger any hook if
there is no "finished" hook still pending (a finished hook is deleted
once it returns 0).
This issue started to appear with the latest dracut, which uses
NetworkManager to configure the network; the network-manager module only
installs a "settled" hook, and we didn't install any other hook. So
NFS/SSH dumps will fail. iSCSI dumps still work because the dracut iscsi
module installs a "finished" hook to detect whether the iSCSI target is up.
So for NFS/SSH we keep the initqueue running until the host successfully
gets a valid IP address, which means the network is ready.
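A minimal sketch of such a "finished" hook (the $kdumpnic variable
holding the dump interface name is an assumption): the hook keeps
returning non-zero, and therefore keeps the initqueue main loop running,
until the interface has a global address.
    # initqueue "finished" hook: succeed only once an address is assigned
    [ -n "$(ip -o addr show dev "$kdumpnic" scope global 2>/dev/null)" ]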
Signed-off-by: Kairui Song <kasong@redhat.com>
Acked-by: Pingfan Liu <piliu@redhat.com>
When reading the kdump config, a single parsing pass should be enough;
this saves a lot of duplicated stripping calls and speeds up the total
load time.
It saves about 2 seconds when building and about 0.1 second on reload in
my tests.
Signed-off-by: Kairui Song <kasong@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
There are some complaints about NFS kdump requiring users to mount the
NFS share beforehand, which may cause some overhead on the NFS server.
For example, with thousands of diskless clients deployed with NFS
dumping, every time a client boots up it triggers a kdump rebuild and
thus an NFS mount, resulting in thousands of NFS requests concurrently
imposed on the same NFS server.
We introduce a new way of specifying mount information via the
already-existing "dracut_args" directive (so we avoid adding extra
directives to /etc/kdump.conf), and we skip all the filesystem mounting
and checking for it. So it can be used in the above-mentioned NFS
scenario to avoid severe NFS server overhead.
Specifically, if there is any "--mount" information specified via
"dracut_args" in /etc/kdump.conf, always use it as the final mount
without any validation (no mounting or checking of mount options, fs
size, etc.), so users are expected to ensure its correctness.
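For example, a hypothetical /etc/kdump.conf entry using dracut's
"--mount <device> <mountpoint> <fstype> [options]" syntax might look like:
    dracut_args --mount "192.168.1.1:/share /var/crash nfs defaults"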
NOTE:
- Only one mount target is allowed via "dracut_args" globally.
- Dracut will create <mountpoint> if it doesn't exist in the kdump kernel;
<mountpoint> must be specified as an absolute path.
- Users should test this first and ensure it works, because kdump does
not prepare the mount or check its validity.
Reviewed-by: Pratyush Anand <panand@redhat.com>
Suggested-by: Dave Young <dyoung@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Signed-off-by: Xunlei Pang <xlpang@redhat.com>
For now, kdump uses the IPv4 address as the dump directory, and it works
if IPv4 is enabled.
Once kdump starts to support the IPv6 protocol, we may set up only the
IPv6 address. Modify the code to make kdump work with either the IPv4
or the IPv6 protocol.
Signed-off-by: Minfei Huang <mhuang@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
The IPv6 patchset is still under review; the commit was previously
merged by mistake, so let's revert it.
Revert "dracut-kdump: Use proper the known hosts entry in the file known_hosts"
This reverts commit 63476302aa.
Conflicts:
kdump-lib.sh
Signed-off-by: Minfei Huang <mhuang@redhat.com>
Signed-off-by: Dave Young <dyoung@redhat.com>
This reverts commit f4c45236bf, since that commit changes the behaviour
of kdump_post. That is not good.
Signed-off-by: Baoquan He <bhe@redhat.com>
A user complains that the kdump_post script doesn't execute after a
mount failure. This happens because a mount failure triggers
kdump-error-handler.service, which then starts kdump-error-handler.sh.
However, kdump-error-handler.sh doesn't execute kdump_post.
Hence add it in this patch.
Naturally, the do_kdump_post function needs to be moved into
kdump-lib-initramfs.sh to become a common function.
v1->v2:
Add a return value to do_kdump_post when invoked from kdump-error-handler.sh.
And call do_kdump_post before do_default_action; otherwise it may not
execute if the default action is reboot/poweroff/halt.
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Acked-by: Meifei Huang <mhuang@redhat.com>
Once we log in using ssh, ssh stores the known hosts entry in the local
~/.ssh/known_hosts. From then on, we can log in via ssh automatically.
ssh checks the ~/.ssh/known_hosts entry when logging in to the target if
the option StrictHostKeyChecking=yes/ask is set in the config or on the
command line; the default value of StrictHostKeyChecking is ask.
And kdump, when using ssh, appends the option StrictHostKeyChecking=yes
on the command line.
If IPv6 is enabled, we can use the following IP to connect to the peer
machine:
fe80::5054:ff:fe48:ca80%eth0
Obviously, the IP above contains the ethX interface name.
Kdump adds the prefix "kdump-" to ethX to avoid a netdevice name clash
in case a netdevice in the 2nd kernel is also named ethX. So the IP
address changes to fe80::5054:ff:fe48:ca80%kdump-eth0.
Kdump would have to log in to the target manually in the 2nd kernel,
because of the option StrictHostKeyChecking=yes and the missing known
hosts entry in the local ~/.ssh/known_hosts. Hence dumping the core
will fail.
In order to log in automatically via ssh, we should add the prefix
"kdump-" before ethX in the local ~/.ssh/known_hosts.
Signed-off-by: Minfei Huang <mhuang@redhat.com>
In the ssh or raw dump case, if the user does not specify "core_collector"
in kdump.conf, kdump will fail, because the global DEFAULT_CORE_COLLECTOR
variable isn't applied to CORE_COLLECTOR. Fix that and clean up the
duplicated code in kdump.sh.
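A minimal sketch of the fix (variable names taken from the description):
    # fall back to the default when kdump.conf does not set core_collector
    if [ -z "$CORE_COLLECTOR" ]; then
        CORE_COLLECTOR="$DEFAULT_CORE_COLLECTOR"
    fi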
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
With fadump support, the dracut-kdump.sh script is installed into the
default initrd to capture the vmcore generated by a firmware-assisted
dump. Thus in the fadump case, the same initrd is used for normal boot
as well as for boot after a system crash. Hence a device node, added by
firmware when the system crashes, is checked to identify whether it is a
normal boot or a boot after a crash, to determine whether or not to
capture the vmcore. While testing fadump on the Fedora 21 alpha, it was
observed that vmcore capture is initiated even during a normal boot, in
spite of this check, with the below error:
"kdump.sh[451]: /bin/kdump.sh: line 5: return: can only `return'
from a function or sourced script"
The below patch tries to fix this issue.
Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
Acked-by: Dave Young <dyoung@redhat.com>
Acked-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
This patch introduces a new kdump-capture.service, which is used to run
kdump.sh.
kdump-capture.service has OnFailure=emergency.target and
OnFailureIsolate=yes set. When kdump.sh fails, the kdump emergency
service will be triggered and enter the error handling path.
In the 2nd kernel, the default systemd target is initrd.target, so we
put kdump-capture.service in initrd.target.wants/; that way, the system
will start kdump-capture as part of the boot process.
kdump.sh used to run in the dracut-pre-pivot hook. Now kdump-capture.service
is placed after dracut-pre-pivot.service and other dependencies are all
copied from dracut-pre-pivot.service. So the start point of
kdump.sh will be almost the same as it used to be.
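A sketch of the relevant parts of such a unit (the OnFailure/isolate and
ordering directives come from the description above; the description
text, ExecStart path, and remaining dependencies are illustrative):
    [Unit]
    Description=Kdump Vmcore Save Service
    After=dracut-pre-pivot.service
    OnFailure=emergency.target
    OnFailureIsolate=yes

    [Service]
    Type=oneshot
    ExecStart=/bin/kdump.sh
    StandardError=syslog+console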
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Currently, upon failure the kdump script might not be called at all, so
it might not be able to execute the default action, which results in a
hang. This is because we disable the emergency shell and rely on
kdump.sh being invoked through the dracut-pre-pivot hook. But it might
happen that we never call into the dracut-pre-pivot hook, because
certain systemd targets cannot be reached due to failures in their
dependencies. In those cases the error handling code does not run and
the system hangs. For example:
sysroot-var-crash.mount --> initrd-root-fs.target --> initrd.target \
--> dracut-pre-pivot.service --> kdump.sh
If the /sysroot/var/crash mount fails, initrd-root-fs.target will not be
reached. Then initrd.target will not be reached, dracut-pre-pivot.service
won't run, and finally kdump.sh won't run.
To solve this problem, we need to separate the error handling code from
dracut-pre-pivot hook, and every time when a failure shows up, the
separated code can be called by the emergency service.
By default systemd provides an emergency service which will drop us into
shell every time upon a critical failure. It's very convenient for us to
re-use the framework of systemd emergency, because we don't have to
touch the other parts of systemd. We can use our own script instead of
the default one.
This new scheme will override the emergency shell and replace it with
the kdump error handling code. And this code will do the error handling
as needed.
Now we no longer rely on the dracut-pre-pivot hook always running.
Instead, whenever an error happens that is serious enough for the
emergency shell to run, the kdump error handler will run.
dracut-emergency is also replaced by the kdump error handler and is
enabled again all the way down. So every failure (including systemd and
dracut failures) in the 2nd kernel can be captured and trigger the kdump
error handler.
dracut-initqueue is a special case: it calls "systemctl start
emergency" directly, not via "OnFailure=emergency". In case of failure,
emergency is started, but not in isolation mode, which means
dracut-initqueue is still running. On the other hand, emergency will
call dracut-initqueue again when the default action is dump_to_rootfs.
systemd would then block on the second dracut-initqueue, waiting for the
first instance to exit, which leaves us hung. It looks like the following:
dracut-initqueue (running)
--> call dracut-emergency:
--> dracut-emergency (running)
--> kdump-error-handler.sh (running)
--> call dracut-initqueue:
--> blocking and waiting for the original instance to exit.
To fix this, I'd like to introduce a wrapper emergency service. This
emergency service will replace both the systemd and the dracut emergency
service. And this service does nothing but isolate to the real kdump
error handler service:
dracut-initqueue (running)
--> call dracut-emergency:
--> dracut-emergency isolate to kdump-error-handler.service
--> dracut-emergency and dracut-initqueue will both be stopped
and kdump-error-handler.service will run kdump-error-handler.sh.
In a normal failure case, this still works:
foo.service fails
--> trigger emergency.service
--> emergency.service isolates to kdump-error-handler.service
--> kdump-error-handler.service will run kdump-error-handler.sh
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Extract functions from kdump.sh, and construct kdump-lib-initramfs.sh
as a common library of kdump functions and variables.
kdump-lib-initramfs.sh will include kdump-lib.sh, because it will use
the functions from there. IOW, kdump-lib-initramfs.sh will be a superset
of kdump-lib.sh.
So after this cleanup:
- scripts running in 1st kernel only have to include kdump-lib.sh
- scripts running in 2nd kernel only have to include kdump-lib-initramfs.sh
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Recently somebody reported an issue where vmcore-dmesg.txt was saved
successfully but saving the vmcore afterwards failed due to lack of
space on disk. The system rebooted, but after reboot there was nothing
on disk, not even vmcore-dmesg.txt.
Issue a sync after saving vmcore-dmesg.txt to solve this issue.
I think this is happening because we are doing "reboot -f" instead of
going through the systemd reboot path. Anyway, doing a sync now should
take care of this.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Acked-by: WANG Chao <chaowang@redhat.com>
The dracut-kdump.sh script is responsible for capturing the vmcore
during the second kernel boot. Currently this script gets installed into
the kdump initrd as part of the kdumpbase dracut module.
With fadump support, the 'dracut-kdump.sh' script also gets installed
into the default initrd to capture the vmcore generated by a
firmware-assisted dump. Thus in the fadump case, the same initrd is
going to be used for normal boot as well as for boot after a system
crash. Hence a check is required to see whether it is a normal boot or
a boot after a crash.
A new node, "ibm,kernel-dump", is added to the device tree by firmware
to notify the kernel that it is booting after a crash. The patch below
adds a check for this node before executing the steps to capture the
vmcore. This check helps bypass the vmcore capture steps during a
normal boot.
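A minimal sketch of the check in dracut-kdump.sh (the exact path of the
exported node under /proc/device-tree is an assumption of this sketch):
    # proceed with vmcore capture only when firmware added the crash node
    if [ -f /proc/device-tree/rtas/ibm,kernel-dump ]; then
        echo "kdump: firmware-assisted dump detected, saving vmcore"
        # ... the usual vmcore capture steps run here ...
    fi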
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Adds two new options to kdump.conf to be able to configure fence_kdump
support for generic clusters:
fence_kdump_args <arg(s)>
- Command line arguments for fence_kdump_send (it can contain all
valid arguments except hosts to send notification to)
fence_kdump_nodes <node(s)>
- List of cluster node(s) separated by space to send fence_kdump
notification to (this option is mandatory to enable fence_kdump)
The generic cluster fence_kdump configuration takes precedence over the
older method of fence_kdump configuration for Pacemaker clusters. This
means that if fence_kdump is configured using the above options in
kdump.conf, the old Pacemaker configuration is not used even if it
exists.
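For example, a hypothetical kdump.conf snippet for a two-node generic
cluster could look like (arguments and host names are illustrative):
    fence_kdump_args -p 7410 -f auto -c 0 -i 10
    fence_kdump_nodes node1.example.com node2.example.com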
Bug-Url: https://bugzilla.redhat.com/1078134
Signed-off-by: Martin Perina <mperina@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Renames the FENCE_KDUMP_NODES variable to FENCE_KDUMP_NODES_FILE to
distinguish it from the values read from the fence_kdump_nodes option in
kdump.conf (introduced in the following patches).
Bug-Url: https://bugzilla.redhat.com/1078134
Signed-off-by: Martin Perina <mperina@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Renames the FENCE_KDUMP_CONFIG variable to FENCE_KDUMP_CONFIG_FILE to
distinguish it from the values read from the fence_kdump_args option in
kdump.conf (introduced in the following patches).
Bug-Url: https://bugzilla.redhat.com/1078134
Signed-off-by: Martin Perina <mperina@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
In the 2nd kernel, to prevent the crashed system from being fenced off,
a fence_kdump message must be sent to the other nodes in the cluster
periodically before the dumping process.
We preserve every node's name in /etc/fence_kdump_nodes in the initrd,
so we parse this file and notify those nodes.
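A minimal sketch of the notification step (the nodes file path comes
from the description; variable names are illustrative):
    # notify all other cluster nodes so they do not fence us while dumping
    if [ -f /etc/fence_kdump_nodes ]; then
        fence_kdump_send $FENCE_KDUMP_ARGS $(cat /etc/fence_kdump_nodes) &
    fi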
Signed-off-by: WANG Chao <chaowang@redhat.com>
Tested-by: Zhi Zou <zzou@redhat.com>
Tested-by: Marek Grac <mgrac@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
lzo is proven to be faster than zlib; on machines with a large amount of
memory it dramatically shortens the time needed to save the vmcore.
Let's switch to lzo as the default compression method for makedumpfile.
The drawback is that lzo has a slightly lower compression ratio than
zlib. But for most users, speed/time is a more serious concern than
vmcore size, so I think defaulting to lzo will benefit most users.
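With this change, the shipped default core_collector line uses
makedumpfile's lzo flag (-l) instead of the zlib flag (-c); the other
options here are illustrative, e.g.:
    core_collector makedumpfile -l --message-level 1 -d 31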
v1->v2: update kdump.conf.5 [DaveY]
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Description:
Currently we only add memdebug code before the different dracut
hooks, i.e. pre-udev, pre-pivot, etc. Adding memdebug to kdump.sh before
capturing the vmcore is also good for debugging.
Solution:
Add make_trace_mem before saving the vmcore.
Signed-off-by: arthur <zzou@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
From: Wade Mealing <wmealing@redhat.com>
The RHEL 5 release of mkdumprd allowed for comments in the kdump config
file as shown below:
net 192.168.1.1 # this is the comment part
This patch strips them out during processing, but leaves the configuration
file in its original condition.
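A minimal sketch of the idea (the helper name matches the
read_strip_comments helper referenced earlier in this log; the body is
illustrative): strip comments while reading, leaving the file on disk
untouched.
    read_strip_comments()
    {
        # drop everything after '#' on each line; the file itself is unchanged
        sed -e 's/#.*$//' "$1"
    }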
Signed-off-by: Wade Mealing <wmealing@redhat.com>
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Currently, across the whole kdump framework, we have some common
functions used not only in the mkdumprd and dracut contexts, but also in
the 1st and 2nd kernels. We define these functions in each script, which
is obviously not decent.
So let's introduce kdump-lib.sh for the shared functions and put it at
/lib/kdump/kdump-lib.sh.
It starts small: as you can see, only 3 functions are extracted. But in
the future more and more common functions can be added.
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
do_dump() takes care of the dump procedure. It errors out if saving the
vmcore fails.
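A minimal sketch of the behaviour (variable names and message text are
illustrative):
    do_dump()
    {
        local _ret

        # run the configured core collector and propagate its exit status
        $CORE_COLLECTOR /proc/vmcore "$DUMP_PATH/vmcore"
        _ret=$?
        [ $_ret -ne 0 ] && echo "kdump: saving vmcore failed"
        return $_ret
    }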
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Currently, when action_on_fail is enabled, the emergency_shell won't be
called either: in kdump, even if the user specifies the default action
as emergency_shell, dracut still skips it. Now change the implementation
of action_on_fail to depend on a file which is created by kdump when
making the kdump initrd and removed at the beginning of kdump. This
solves the explicit emergency_shell problem.
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Acked-by: WANG Chao <chaowang@redhat.com>
Currently in kdump.sh, we redirect stdout to stderr, because the dracut
pre-pivot service (which kdump.sh runs within) only outputs stderr to
the console. That behavior is defined in dracut-pre-pivot.service:
[Service]
...
StandardInput=null
StandardOutput=syslog
StandardError=syslog+console
...
But during testing, it has been observed that systemd buffers stderr,
first records it to syslog (and its own journal), and then copies the
logs to /dev/console. That behavior is somewhat unexpected for our kdump
script: we may have suppressed stdout/stderr output that hadn't been
written to /dev/console yet before we ran a forced reboot.
With this change, redirecting stdout/stderr to /dev/console, kdump.sh
will output everything immediately to the console, not cached or hidden
by systemd.
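A minimal sketch of the change at the top of kdump.sh:
    # write everything straight to the console instead of going through
    # systemd's syslog/journal buffering
    exec >/dev/console 2>&1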
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
dump_to_rootfs is a special case of dump_fs. It's better to merge them
together to clean up the code.
Now the dump_fs() function takes two types of $1: a mount point like
/sysroot or a dump target device like /dev/mapper/vg-lv_kdump.
v2: remove -F option in makedumpfile case from Vivek
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
After dumping the vmcore, explicitly commit the changed cache to disk;
otherwise, if umount fails, chances are we'll end up with an incomplete
vmcore.
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
When using makedumpfile as the core_collector, makedumpfile shows its
own progress bar, which mixes with the monitor_dd_progress output and
causes confusion.
In this patch, only call monitor_dd_progress when the core_collector is
not makedumpfile.
Signed-off-by: Dave Young <dyoung@redhat.com>
Acked-by: WANG Chao <chaowang@redhat.com>
Currently the filesystem is unmounted right after saving the vmcore.
Therefore the vmcore isn't directly accessible in a kdump_post script or
shell. This patch moves the umount operation down to the very end, right
before kdump exits.
The patch adds a global variable MOUNTS to keep track of which
filesystems are used, and unmounts these filesystems in
do_default_action() and do_final_action().
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Certain dracut modules will mount filesystems under the real root
(/sysroot/ or $NEWROOT/). Thus the root fs cannot be unmounted by
`umount /sysroot/`. We should use `umount -R /sysroot/` to recursively
unmount root and its submounts.
v2: do the same for dump_fs() from Baoquan
Signed-off-by: WANG Chao <chaowang@redhat.com>
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Replace $1/$2 with local variable names in dump_raw() and dump_ssh()
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
When makedumpfile fails, it can still generate an invalid vmcore. It's
better to suffix these invalid vmcore files with "-incomplete", as we do
in RHEL 6.
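A minimal sketch of the idea (paths are illustrative): write to the
suffixed name first and only rename on success.
    # the "-incomplete" suffix survives unless makedumpfile finished cleanly
    $CORE_COLLECTOR /proc/vmcore "$DUMP_PATH/vmcore-incomplete"
    if [ $? -eq 0 ]; then
        mv "$DUMP_PATH/vmcore-incomplete" "$DUMP_PATH/vmcore"
    fi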
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
set -x is removed, so we'll have little output about the dumping
progress. It's best to output some messages at the top level to let the
user know what's going on.
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Currently "set -x" is specified in dracut-kdump.sh and I see the script
execution commands by default on console while testing with F19. That's
not right. This should be done only if user asked for it. Remove it.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Acked-by: WANG Chao <chaowang@redhat.com>
In dracut-kdump.sh, kdump does not unmount the rootfs after
dump_to_rootfs, the way dump_fs does. And in kdump, the FINAL_ACTION is
"reboot -f", so no umount action is taken.
Even though "sync" has been executed, it's safer to unmount the rootfs.
Anyway, there is no harm in unmounting.
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
In kdump.conf, the space character is used as the delimiter by default.
In kdump_install_conf of dracut-module-setup.sh, if core_collector is
specified with a tab delimiter, the tool may not be copied into the
kdump initrd.
E.g., core_collector scp -v
And in dump_ssh of dracut-kdump.sh, dumping will fail because of the tab
character in core_collector.
Change the code to allow the tab character as a delimiter when
specifying core_collector.
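A minimal sketch of the parsing change (the grep pattern is
illustrative): accept either a space or a tab after the option name.
    # old: grep "^core_collector " /etc/kdump.conf
    # new: allow a space or a tab as the delimiter
    grep "^core_collector[[:space:]]" /etc/kdump.conf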
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>