pull in some SLUB fixes from Mel Gorman for testing

Kyle McMartin 2011-05-11 16:20:10 -04:00
parent 9b1caead9a
commit a7e4f1ccd5
4 changed files with 197 additions and 0 deletions

kernel.spec

@@ -711,6 +711,10 @@ Patch12205: runtime_pm_fixups.patch
Patch12303: dmar-disable-when-ricoh-multifunction.patch
+Patch12400: mm-slub-do-not-wake-kswapd-for-slubs-speculative-high-order-allocations.patch
+Patch12401: mm-slub-do-not-take-expensive-steps-for-slubs-speculative-high-order-allocations.patch
+Patch12402: mm-slub-default-slub_max_order-to-0.patch
%endif
BuildRoot: %{_tmppath}/kernel-%{KVERREL}-root
@@ -1317,6 +1321,10 @@ ApplyPatch acpi_reboot.patch
# rhbz#605888
ApplyPatch dmar-disable-when-ricoh-multifunction.patch
+ApplyPatch mm-slub-do-not-wake-kswapd-for-slubs-speculative-high-order-allocations.patch
+ApplyPatch mm-slub-do-not-take-expensive-steps-for-slubs-speculative-high-order-allocations.patch
+ApplyPatch mm-slub-default-slub_max_order-to-0.patch
# END OF PATCH APPLICATIONS
%endif
@@ -1925,6 +1933,9 @@ fi
# and build.
%changelog
+* Wed May 11 2011 Kyle McMartin <kmcmartin@redhat.com>
+- Pull in some SLUB fixes from Mel Gorman for testing.
* Tue May 09 2011 Kyle McMartin <kmcmartin@redhat.com> 2.6.39-0.rc7.git0.0
- Linux 2.6.39-rc7

mm-slub-default-slub_max_order-to-0.patch

@@ -0,0 +1,67 @@
From owner-linux-mm@kvack.org Wed May 11 11:35:30 2011
From: Mel Gorman <mgorman@suse.de>
To: Andrew Morton <akpm@linux-foundation.org>
Subject: [PATCH 3/3] mm: slub: Default slub_max_order to 0
Date: Wed, 11 May 2011 16:29:33 +0100
Message-Id: <1305127773-10570-4-git-send-email-mgorman@suse.de>
To avoid locking and per-cpu overhead, SLUB optimistically uses
high-order allocations up to order-3 by default and falls back to
lower allocations if they fail. While care is taken that the caller
and kswapd take no unusual steps in response to this, there are
further consequences, such as shrinkers having to free more objects
before any memory is released. There is anecdotal evidence that
significant time is being spent looping in shrinkers with
insufficient progress being made (https://lkml.org/lkml/2011/4/28/361),
keeping kswapd awake.

SLUB is now the default allocator, and some bug reports have been
pinned down to SLUB using high orders during operations like copying
large amounts of data. SLUB's use of high orders benefits
applications that are sized to memory appropriately, but this does
not necessarily apply to large file servers or desktops. This patch
causes SLUB to use order-0 pages by default, as SLAB does. There is
further evidence that this keeps kswapd's CPU usage lower
(https://lkml.org/lkml/2011/5/10/383).
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
Documentation/vm/slub.txt | 2 +-
mm/slub.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/Documentation/vm/slub.txt b/Documentation/vm/slub.txt
index 07375e7..778e9fa 100644
--- a/Documentation/vm/slub.txt
+++ b/Documentation/vm/slub.txt
@@ -117,7 +117,7 @@ can be influenced by kernel parameters:
slub_min_objects=x (default 4)
slub_min_order=x (default 0)
-slub_max_order=x (default 1)
+slub_max_order=x (default 0)
slub_min_objects allows to specify how many objects must at least fit
into one slab in order for the allocation order to be acceptable.
diff --git a/mm/slub.c b/mm/slub.c
index 1071723..23a4789 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2198,7 +2198,7 @@ EXPORT_SYMBOL(kmem_cache_free);
* take the list_lock.
*/
static int slub_min_order;
-static int slub_max_order = PAGE_ALLOC_COSTLY_ORDER;
+static int slub_max_order;
static int slub_min_objects;
/*
--
1.7.3.4
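
The one-line default flip above is easiest to read next to the fallback it
tunes: SLUB first attempts a speculative high-order allocation, clamped by
slub_max_order, and only drops to the minimum order if that fails. Below is a
minimal userspace sketch of that two-step scheme (plain C with illustrative
names such as allocate_slab_sim; not kernel code), showing that with the new
default of 0 the first attempt is already order-0.

#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096UL

static int slub_min_order = 0;
static int slub_max_order = 0;	/* was PAGE_ALLOC_COSTLY_ORDER (3) before this patch */

/*
 * Editorial sketch of SLUB's two-step slab allocation: try the
 * preferred (speculative) order first, fall back to the minimum
 * order if that attempt fails.
 */
static void *allocate_slab_sim(int preferred_order)
{
	int order = preferred_order > slub_max_order ? slub_max_order
						     : preferred_order;
	void *slab = malloc(PAGE_SIZE << order);	/* speculative try */
	if (!slab)
		slab = malloc(PAGE_SIZE << slub_min_order);	/* guaranteed fallback */
	return slab;
}

int main(void)
{
	void *slab = allocate_slab_sim(3);
	printf("first try is order-%d; allocation %s\n",
	       slub_max_order, slab ? "succeeded" : "failed");
	free(slab);
	return 0;
}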

mm-slub-do-not-take-expensive-steps-for-slubs-speculative-high-order-allocations.patch

@@ -0,0 +1,73 @@
From owner-linux-mm@kvack.org Wed May 11 11:29:53 2011
From: Mel Gorman <mgorman@suse.de>
To: Andrew Morton <akpm@linux-foundation.org>
Subject: [PATCH 2/3] mm: slub: Do not take expensive steps for SLUBs speculative high-order allocations
Date: Wed, 11 May 2011 16:29:32 +0100
Message-Id: <1305127773-10570-3-git-send-email-mgorman@suse.de>
To avoid locking and per-cpu overhead, SLUB optimistically uses
high-order allocations and falls back to lower allocations if they
fail. However, by simply trying to allocate, the caller can enter
compaction or reclaim, both of which are likely to cost more than the
benefit of using high-order pages in SLUB. On a desktop system, two
users report that the system is getting stalled, with kswapd using
large amounts of CPU.

This patch prevents SLUB from taking any expensive steps when trying
to use high-order allocations. Instead, it is expected to fall back
to smaller orders more aggressively. Testing from users was somewhat
inconclusive on how much this helped, but local tests showed it made
a positive difference. It makes sense that falling back to order-0
allocations is faster than entering compaction or direct reclaim.

Signed-off-by: Mel Gorman <mgorman@suse.de>
---
mm/page_alloc.c | 3 ++-
mm/slub.c | 3 ++-
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9f8a97b..057f1e2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1972,6 +1972,7 @@ gfp_to_alloc_flags(gfp_t gfp_mask)
{
int alloc_flags = ALLOC_WMARK_MIN | ALLOC_CPUSET;
const gfp_t wait = gfp_mask & __GFP_WAIT;
+ const gfp_t can_wake_kswapd = !(gfp_mask & __GFP_NO_KSWAPD);
/* __GFP_HIGH is assumed to be the same as ALLOC_HIGH to save a branch. */
BUILD_BUG_ON(__GFP_HIGH != (__force gfp_t) ALLOC_HIGH);
@@ -1984,7 +1985,7 @@ gfp_to_alloc_flags(gfp_t gfp_mask)
*/
alloc_flags |= (__force int) (gfp_mask & __GFP_HIGH);
- if (!wait) {
+ if (!wait && can_wake_kswapd) {
/*
* Not worth trying to allocate harder for
* __GFP_NOMEMALLOC even if it can't schedule.
diff --git a/mm/slub.c b/mm/slub.c
index 98c358d..1071723 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1170,7 +1170,8 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
* Let the initial higher-order allocation fail under memory pressure
* so we fall-back to the minimum order allocation.
*/
- alloc_gfp = (flags | __GFP_NOWARN | __GFP_NORETRY | __GFP_NO_KSWAPD) & ~__GFP_NOFAIL;
+ alloc_gfp = (flags | __GFP_NOWARN | __GFP_NORETRY | __GFP_NO_KSWAPD) &
+ ~(__GFP_NOFAIL | __GFP_WAIT);
page = alloc_slab_page(alloc_gfp, node, oo);
if (unlikely(!page)) {
--
1.7.3.4
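
The two hunks above cooperate: allocate_slab() now strips __GFP_WAIT from the
speculative attempt so it cannot enter direct reclaim or compaction, while
gfp_to_alloc_flags() stops granting the allocate-harder boost to non-waiting
allocations that also declined to wake kswapd. A minimal standalone sketch of
the mask arithmetic follows; the flag values are made up for illustration and
are not the kernel's real ones.

#include <stdio.h>

typedef unsigned int gfp_t;

/* Illustrative flag values only; the kernel's real masks differ. */
#define __GFP_WAIT	0x01u	/* may sleep: direct reclaim/compaction */
#define __GFP_NOWARN	0x02u
#define __GFP_NORETRY	0x04u
#define __GFP_NOFAIL	0x08u
#define __GFP_NO_KSWAPD	0x10u

int main(void)
{
	gfp_t flags = __GFP_WAIT | __GFP_NOFAIL;	/* caller's mask */

	/*
	 * The speculative high-order attempt: no warnings, no retries,
	 * no kswapd wakeup, and (after this patch) no sleeping in
	 * direct reclaim or compaction either.
	 */
	gfp_t alloc_gfp = (flags | __GFP_NOWARN | __GFP_NORETRY |
			   __GFP_NO_KSWAPD) & ~(__GFP_NOFAIL | __GFP_WAIT);

	printf("caller 0x%02x -> speculative 0x%02x\n", flags, alloc_gfp);
	printf("may sleep: %s, wakes kswapd: %s\n",
	       (alloc_gfp & __GFP_WAIT) ? "yes" : "no",
	       (alloc_gfp & __GFP_NO_KSWAPD) ? "no" : "yes");
	return 0;
}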

mm-slub-do-not-wake-kswapd-for-slubs-speculative-high-order-allocations.patch

@@ -0,0 +1,46 @@
From owner-linux-mm@kvack.org Wed May 11 11:29:50 2011
From: Mel Gorman <mgorman@suse.de>
To: Andrew Morton <akpm@linux-foundation.org>
Subject: [PATCH 1/3] mm: slub: Do not wake kswapd for SLUBs speculative high-order allocations
Date: Wed, 11 May 2011 16:29:31 +0100
Message-Id: <1305127773-10570-2-git-send-email-mgorman@suse.de>
To avoid locking and per-cpu overhead, SLUB optimistically uses
high-order allocations and falls back to lower allocations if they
fail. However, merely attempting the allocation wakes kswapd to start
reclaiming at that order. On a desktop system, two users report that
the system is getting locked up, with kswapd using large amounts of
CPU. Using SLAB instead of SLUB made this problem go away.

This patch prevents kswapd from being woken up for high-order
allocations. Testing indicated that with this patch applied, the
system was much harder to hang, and even when it did hang, it
eventually recovered.
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
mm/slub.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 9d2e5e4..98c358d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1170,7 +1170,7 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
* Let the initial higher-order allocation fail under memory pressure
* so we fall-back to the minimum order allocation.
*/
- alloc_gfp = (flags | __GFP_NOWARN | __GFP_NORETRY) & ~__GFP_NOFAIL;
+ alloc_gfp = (flags | __GFP_NOWARN | __GFP_NORETRY | __GFP_NO_KSWAPD) & ~__GFP_NOFAIL;
page = alloc_slab_page(alloc_gfp, node, oo);
if (unlikely(!page)) {
--
1.7.3.4
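
For reference, __GFP_NO_KSWAPD acts as a simple gate in the page allocator's
slow path: kswapd is woken for the requested order unless the caller opts
out, which is what the description above relies on. A minimal sketch of that
gate, again with illustrative flag values rather than the kernel's real ones:

#include <stdio.h>

typedef unsigned int gfp_t;

#define __GFP_NOWARN	0x02u
#define __GFP_NORETRY	0x04u
#define __GFP_NO_KSWAPD	0x10u

/* The slow path wakes kswapd unless the caller opted out. */
static int would_wake_kswapd(gfp_t gfp_mask)
{
	return !(gfp_mask & __GFP_NO_KSWAPD);
}

int main(void)
{
	gfp_t before = __GFP_NOWARN | __GFP_NORETRY;	/* old alloc_gfp */
	gfp_t after  = before | __GFP_NO_KSWAPD;	/* with this patch */

	printf("before: wake kswapd? %s\n", would_wake_kswapd(before) ? "yes" : "no");
	printf("after:  wake kswapd? %s\n", would_wake_kswapd(after)  ? "yes" : "no");
	return 0;
}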