Resolves: rhbz#1131169
Zbigniew (a systemd developer) pointed out that our udev rules should be
installed under /usr/lib/, not /etc/, because /etc is reserved for
sysadmins and packages should install their defaults into /usr/lib.
As advised here:
http://www.freedesktop.org/software/systemd/man/udev.html#Rules%20Files
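For example (file names here are illustrative, not necessarily the ones the
package ships), the rules would now be installed like:
  install -m 0644 98-kexec.rules /usr/lib/udev/rules.d/98-kexec.rules
instead of into /etc/udev/rules.d/, which stays reserved for local admin
overrides.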
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
This patch introduces a new kdump-capture.service, which is used to run
kdump.sh.
kdump-capture.service has OnFailure=emergency.target and
OnFailureIsolate=yes set. When kdump.sh fails, the kdump emergency
service will be triggered and enter the error handling path.
In the 2nd kernel, the default systemd target is initrd.target, so we
put kdump-capture.service in initrd.target.wants/; that way the system
will start kdump-capture as part of the boot process.
kdump.sh used to run in the dracut-pre-pivot hook. Now kdump-capture.service
is placed after dracut-pre-pivot.service, and its other dependencies are all
copied from dracut-pre-pivot.service, so the start point of kdump.sh will be
almost the same as it used to be.
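A minimal sketch of such a unit, based only on the directives described
above (anything else, such as Type= and the kdump.sh path, is an assumption,
and the real unit also carries the full set of dependencies copied from
dracut-pre-pivot.service):
  [Unit]
  Description=Kdump Vmcore Save Service
  After=dracut-pre-pivot.service
  OnFailure=emergency.target
  OnFailureIsolate=yes

  [Service]
  Type=oneshot
  ExecStart=/bin/kdump.sh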
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Currently, upon failure the kdump script might not be called at all, so it
cannot execute the default action, and the result is a hang. This is
because we disable the emergency shell and rely on kdump.sh being invoked
through the dracut-pre-pivot hook. But it can happen that we never reach
the dracut-pre-pivot hook, because certain systemd targets cannot be
reached due to failures in their dependencies. In those cases the error
handling code does not run and the system hangs. For example:
sysroot-var-crash.mount --> initrd-root-fs.target --> initrd.target \
--> dracut-pre-pivot.service --> kdump.sh
If the /sysroot/var/crash mount fails, initrd-root-fs.target will not be
reached, so initrd.target will not be reached, dracut-pre-pivot.service
will not run, and finally kdump.sh will not run.
To solve this problem, we need to separate the error handling code from
the dracut-pre-pivot hook so that, whenever a failure shows up, the
separated code can be called by the emergency service.
By default systemd provides an emergency service which drops us into a
shell upon every critical failure. It is very convenient for us to reuse
this systemd emergency framework, because we don't have to touch the other
parts of systemd: we can simply use our own script instead of the default
one.
This new scheme overrides the emergency shell and replaces it with the
kdump error handling code, which performs the error handling as needed.
We no longer rely on the dracut-pre-pivot hook always running; instead,
whenever an error occurs that is serious enough for the emergency shell to
run, the kdump error handler will run.
dracut-emergency is also replaced by the kdump error handler and is
enabled again all the way down, so every failure (whether in systemd or
dracut) in the 2nd kernel can be captured and trigger the kdump error
handler.
dracut-initqueue is a special case: it calls "systemctl start
emergency" directly, not via "OnFailure=emergency". In case of failure,
emergency is started, but not in isolation mode, which means
dracut-initqueue keeps running. On the other hand, emergency will call
dracut-initqueue again when the default action is dump_to_rootfs. systemd
then blocks on that second dracut-initqueue, waiting for the first
instance to exit, which leaves us hung. It looks like the following:
dracut-initqueue (running)
--> call dracut-emergency:
--> dracut-emergency (running)
--> kdump-error-handler.sh (running)
--> call dracut-initqueue:
--> blocking and waiting for the original instance to exit.
To fix this, I'd like to introduce a wrapper emergency service. This
emergency service replaces both the systemd and the dracut emergency
services, and it does nothing but isolate to the real kdump error handler
service:
dracut-initqueue (running)
--> call dracut-emergency:
--> dracut-emergency isolate to kdump-error-handler.service
--> dracut-emergency and dracut-initqueue will both be stopped
and kdump-error-handler.service will run kdump-error-handler.sh.
In a normal failure case, this still works:
foo.service fails
--> trigger emergency.service
--> emergency.service isolates to kdump-error-handler.service
--> kdump-error-handler.service will run kdump-error-handler.sh
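A minimal sketch of the wrapper and the chain it sets up (unit and script
names follow the description above; the exact directives are assumptions,
not the final unit):
  # emergency.service, shipped in the kdump initramfs, replacing both the
  # systemd and the dracut emergency services
  [Unit]
  Description=Kdump Emergency
  DefaultDependencies=no

  [Service]
  Type=oneshot
  ExecStart=/usr/bin/systemctl --no-block isolate kdump-error-handler.service
kdump-error-handler.service then runs kdump-error-handler.sh, which performs
the configured default action.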
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Extract functions from kdump.sh and construct kdump-lib-initramfs.sh as a
library of common kdump functions and variables.
kdump-lib-initramfs.sh will include kdump-lib.sh, because it uses
functions from there. In other words, kdump-lib-initramfs.sh will be a
superset of kdump-lib.sh.
So after this cleanup:
- scripts running in 1st kernel only have to include kdump-lib.sh
- scripts running in 2nd kernel only have to include kdump-lib-initramfs.sh
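For example (paths are illustrative), a script running in the 2nd kernel
would start with:
  . /lib/kdump-lib-initramfs.sh   # this in turn sources kdump-lib.sh
while a script running in the 1st kernel keeps sourcing kdump-lib.sh
directly.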
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
This is a backport of the following upstream commit.
commit 0b732828091a545185ad13d0b2e6800600788d61
Author: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
Date: Tue Jun 10 13:57:29 2014 +0900
[PATCH 3/3] Stop maximizing the bitmap buffer to reduce the risk of OOM.
We tried to maximize the bitmap buffer to get the best performance,
but the performance degradation caused by multi-cycle processing
looks very small according to the benchmark on 2TB memory:
https://lkml.org/lkml/2013/3/26/914
This result means we don't need to make an effort to maximize the
bitmap buffer; doing so would just increase the risk of OOM.
This patch sets a small fixed value (4MB) as a safety limit,
which should be safer and sufficient in most cases.
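The idea is roughly the following (identifier names and the surrounding
code are illustrative, not the actual makedumpfile change):
  #define BITMAP_BUFFER_LIMIT (4 * 1024 * 1024)   /* 4 MB safety limit */

  /* clamp the per-cycle bitmap buffer instead of maximizing it */
  if (bitmap_buffer_size > BITMAP_BUFFER_LIMIT)
          bitmap_buffer_size = BITMAP_BUFFER_LIMIT;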
Signed-off-by: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
This is a backport of the following upstream commit.
commit 2648a8f7caa63e3ec82fd4bce471cec0a895b704
Author: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
Date: Mon Jun 9 17:48:30 2014 +0900
[PATCH 2/3] Move counting pfn_memhole for cyclic mode.
In cyclic mode, memory holes are checked in initialize_2nd_bitmap_cyclic()
in both the kdump path and the ELF path, so pfn_memhole should be
counted there.
Signed-off-by: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
This is a backport of the following upstream commit.
commit 16b94ab7fad6744d8b77f2b26838f220307e3118
Author: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
Date: Mon Jun 9 17:44:43 2014 +0900
[PATCH 1/3] Remove the 1st bitmap buffer from the ELF path in cyclic mode.
We can create the 2nd bitmap without creating the 1st bitmap by commit
363d53fc8, so we don't need to create the 1st bitmap in cyclic mode
in the ELF path since it isn't used. Thus, we can use the whole bitmap
buffer only for the 2nd bitmap like the kdump path.
Signed-off-by: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
This is a backport of the following upstream commit. It fixes freeing the
wrong bitmap, which could increase the risk of OOM when the system is
already on the edge of OOM.
commit 0e7b1a6e3c1919c9222b662d458637ddf802dd04
Author: Arthur Zou <zzou@redhat.com>
Date: Wed May 7 17:54:16 2014 +0900
[PATCH v3] Fix free bitmap_buffer_cyclic error.
Description:
In create_dump_bitmap() and write_kdump_pages_and_bitmap_cyclic(),
what should be freed is info->partial_bitmap instead of info->bitmap.
Solution:
Add two functions to free the bitmap_buffer_cyclic. info->partial_bitmap1
is freed by free_bitmap1_buffer_cyclic(). info->partial_bitmap2 is
freed by free_bitmap2_buffer_cyclic(). At the same time, remove
those frees of partial_bitmap1 and partial_bitmap2 at the end
of main(), because partial_bitmap1 and partial_bitmap2 have already been
freed after the dump file has been written out, so there is no need to
free them again at the end of main().
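A minimal sketch of the two helpers (based only on the names and fields
mentioned above; the real makedumpfile functions may differ):
  static void
  free_bitmap1_buffer_cyclic(void)
  {
          free(info->partial_bitmap1);
          info->partial_bitmap1 = NULL;
  }

  static void
  free_bitmap2_buffer_cyclic(void)
  {
          free(info->partial_bitmap2);
          info->partial_bitmap2 = NULL;
  }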
Signed-off-by: Arthur Zou <zzou@redhat.com>
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
This is a backport of the following upstream commit. A later backported
commit depends on it.
commit 9dc6440c63320066bc6344c6e3ca3c3af88bcc42
Author: Petr Tesarik <ptesarik@suse.cz>
Date: Thu Apr 24 10:58:43 2014 +0900
[PATCH v3] Introduce the mdf_pfn_t type.
Replace unsigned long long with mdf_pfn_t where:
a. the variable denotes a PFN
b. the variable is a number of pages
The number of pages is converted to an mdf_pfn_t because it is either the
result of subtracting two PFNs or is incremented in a loop over a range of
PFNs, so it can get as large as a PFN.
Note: The mdf_ (i.e. makedumpfile) prefix is used to prevent possible
conflicts with other software that defines a pfn_t type.
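The new type is essentially a typedef over the old representation, for
example:
  typedef unsigned long long mdf_pfn_t;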
Signed-off-by: Petr Tesarik <ptesarik@suse.cz>
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Backport from the following commit from upstream makedumpfile:
commit 45fc42c
Author: WANG Chao <chaowang@redhat.com>
Date: Tue Jun 10 14:11:27 2014 +0900
[PATCH] Fix Makefile for eppic_makedumpfile.so build.
When libeppic isn't installed in a standard location, building
eppic_makedumpfile.so with -leppic directly doesn't work.
Add LDFLAGS to the build arguments, so that one can pass LDFLAGS="-Ldir
-Idir" to tell the build where to search for the libeppic library and its
header files.
For example, if the eppic source is installed at the same directory level
as makedumpfile, like the following:
makedumpfile
|--- arch
+--- eppic_scripts
eppic
|--- applications
+--- libeppic
After compiling libeppic, one can use the following command to build
eppic_makedumpfile.so:
make LDFLAGS="-I../eppic/libeppic -L../eppic/libeppic" eppic_makedumpfile.so
Signed-off-by: WANG Chao <chaowang@redhat.com>
With this patch, we don't need to use a Fedora-specific patch to build
eppic_makedumpfile.so.
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Rename the subpackage kdump-anaconda-addon to kexec-tools-anaconda-addon
to keep things consistent and make fedpkg builds happy.
Every time fedpkg builds a new release, the package version number should
increase, but kdump-anaconda-addon just keeps the same version, so let's
rename it to kexec-tools-anaconda-addon, where kexec-tools- is the default
prefix. For the version, let's use the default top-level version.
At the same time, rename the kdump-anaconda-addon directory to anaconda-addon
to make it more standard, and use the current date instead of a version number
as the suffix of the kdump-anaconda-addon tarball, just as kexec-tools-po does.
Signed-off-by: Arthur Zou <zzou@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
eppic_makedumpfile.so (provided by kexec-tools-eppic) is built against
makedumpfile (provided by kexec-tools). kexec-tools-eppic must depend on
the same version-release of kexec-tools, otherwise there could be an ABI
compatibility issue.
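In the spec file this is expressed with an exact version-release
dependency, something like:
  Requires: %{name} = %{version}-%{release}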
Signed-off-by: WANG Chao <chaowang@redhat.com>
kdump-dep-generator is a systemd generator used to write out kdump
service dependencies.
Currently it is only useful for the ssh dump case. For ssh dumps, it
writes out a dependency by which kdump.service "Wants"
network-online.target:
# ls -l /run/systemd/generator/kdump.service.wants/
[..] network-online.target -> /usr/lib/systemd/system/network-online.target
So kdump.service will pull in network-online.target and delay its start
until network-online.target is reached.
In the future, we could use the generator to write out kdump.service
dynamically and get rid of the statically defined kdump.service entirely.
v1->v2:
Vivek: don't use a hardcoded runtime generator path; use the path systemd
passes in.
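A rough sketch of what the generator does for the ssh case (the config
check and paths are simplified and illustrative):
  #!/bin/sh
  # systemd passes the generator output directories as arguments;
  # use the normal directory it passes in rather than a hardcoded path.
  dest_dir="$1"

  # illustrative check: only the ssh dump target needs the network
  if grep -q "^ssh " /etc/kdump.conf; then
      mkdir -p "$dest_dir/kdump.service.wants"
      ln -sf /usr/lib/systemd/system/network-online.target \
          "$dest_dir/kdump.service.wants/network-online.target"
  fi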
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
In extreme cases vmcore-dmesg will overflow. Upstream has fixed this
problem, so simply backport the fix.
Signed-off-by: Arthur Zou <zzou@redhat.com>
Acked-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
This is a backport of the following upstream commit:
commit 4404368
Author: WANG Chao <chaowang@redhat.com>
Date: Wed Dec 18 22:34:43 2013 +0900
[PATCH] memset() in cyclic bitmap initialization introduce segment fault.
We are using memset() to improve performance when creating the 1st and 2nd
bitmaps. After rounding up pfn_start and rounding down pfn_end, it is
possible that pfn_start_roundup is greater than pfn_end_round. A
segmentation fault could happen in that case because memset() takes roughly
the value of (pfn_end_round >> 3 - pfn_start_roundup >> 3), which is
negative, as its third argument.
So we can skip the memset if start is greater than end. That is safe
because we set the bits for the rounded-up part and the rounded-down part
individually.
Actually this happens on my EFI virtual machine:
cat /proc/iomem:
00000000-00000fff : reserved
00001000-0009ffff : System RAM
000a0000-000bffff : PCI Bus 0000:00
000f0000-000fffff : System ROM
00100000-3d162017 : System RAM
01000000-015cab9b : Kernel code
015cab9c-019beb3f : Kernel data
01b4f000-01da9fff : Kernel bss
30000000-37ffffff : Crash kernel
3d162018-3d171e57 : System RAM
3d171e58-3d172017 : System RAM
3d172018-3d17ae57 : System RAM
3d17ae58-3dc10fff : System RAM
3dc11000-3dc18fff : reserved
3dc19000-3dc41fff : System RAM
3dc42000-3ddcefff : reserved
3ddcf000-3f7fefff : System RAM
3f7ff000-3f856fff : reserved
[..]
gdb ./makedumpfile core
(gdb) bt full
[..]
#1 0x000000000042775d in create_1st_bitmap_cyclic () at makedumpfile.c:4543
i = 0x5
pfn = 0x3d190
phys_start = 0x3d18ee58
phys_end = 0x3d18f018
pfn_start = 0x3d18e
pfn_end = 0x3d18f
pfn_start_roundup = 0x3d190
pfn_end_round = 0x3d188
pfn_start_byte = 0x7a32
pfn_end_byte = 0x7a31
[..]
(gdb) list makedumpfile.c:4543
4538 return FALSE;
4539
4540 pfn_start_byte = (pfn_start_roundup - info->cyclic_start_pfn) >> 3;
4541 pfn_end_byte = (pfn_end_round - info->cyclic_start_pfn) >> 3;
4542
4543 memset(info->partial_bitmap2 + pfn_start_byte,
4544 0xff,
4545 pfn_end_byte - pfn_start_byte);
4546
4547 for (pfn = pfn_end_round; pfn < pfn_end; ++pfn)
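The fix is essentially to skip the memset() when the rounded-up start is
greater than the rounded-down end (a sketch; the actual patch may differ
in detail):
  if (pfn_start_roundup <= pfn_end_round)
          memset(info->partial_bitmap2 + pfn_start_byte,
                 0xff,
                 pfn_end_byte - pfn_start_byte);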
Signed-off-by: WANG Chao <chaowang@redhat.com>
This patch fixes segmentation fault issues on systems with a very small
memory map range (less than 8 pages).
Signed-off-by: WANG Chao <chaowang@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Backport from upstream.
commit 20ecc0827e7837c52f3903638a59959f8bf17f9e
Author: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
Date: Tue Nov 5 00:29:35 2013 +0900
[PATCH v2] Improve progress information for huge memory system.
On system with huge memory, percentage in progress information is
updated at very slow interval, because 1 percent on 1 TiB memory is
about 10 GiB, which looks like as if system has freezed. Then,
confused users might get tempted to push a reset button to recover the
system. We want to avoid such situation as much as possible.
Signed-off-by: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>