* Tue Nov 12 2019 Danilo Cesar Lemes de Paula <ddepaula@redhat.com> - 4.1.0-14.el8

- kvm-blockdev-reduce-aio_context-locked-sections-in-bitma.patch [bz#1756413]
- kvm-qapi-implement-block-dirty-bitmap-remove-transaction.patch [bz#1756413]
- kvm-iotests-test-bitmap-moving-inside-254.patch [bz#1756413]
- kvm-spapr-xive-skip-partially-initialized-vCPUs-in-prese.patch [bz#1754710]
- kvm-nbd-Grab-aio-context-lock-in-more-places.patch [bz#1741094]
- kvm-tests-Use-iothreads-during-iotest-223.patch [bz#1741094]
- Resolves: bz#1741094
  ([Upstream]Incremental backup: Qemu coredump when expose an active bitmap via pull mode(data plane enable))
- Resolves: bz#1754710
  (qemu core dumped when hotpluging vcpus)
- Resolves: bz#1756413
  (backport support for transactionable block-dirty-bitmap-remove for incremental backup support)
Commit 32a3ac0fa9 (parent 1eb8acbee7), authored by Danilo C. L. de Paula, 2019-11-12 01:37:10 +0000. 7 changed files with 970 additions and 1 deletion.


@@ -0,0 +1,122 @@
From 107ad619739795199df98c56d0ad4db14fec3722 Mon Sep 17 00:00:00 2001
From: John Snow <jsnow@redhat.com>
Date: Fri, 27 Sep 2019 20:18:44 +0100
Subject: [PATCH 1/6] blockdev: reduce aio_context locked sections in bitmap
add/remove
RH-Author: John Snow <jsnow@redhat.com>
Message-id: <20190927201846.6823-2-jsnow@redhat.com>
Patchwork-id: 90908
O-Subject: [RHEL-AV-8.1.0 qemu-kvm PATCH 1/3] blockdev: reduce aio_context locked sections in bitmap add/remove
Bugzilla: 1756413
RH-Acked-by: Maxim Levitsky <mlevitsk@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Stefano Garzarella <sgarzare@redhat.com>
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Commit 0a6c86d024c52 returned these locks back to add/remove
functionality, to protect from intersection of persistent bitmap
related IO with other IO. But the other bitmap-related functions called
here are unrelated to that problem, and there is no need to keep those
calls inside the critical sections.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190708220502.12977-2-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
(cherry picked from commit 2899f41eef2806cf8eb119811c9d6fcf15ce80f6)
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
blockdev.c | 30 +++++++++++++-----------------
1 file changed, 13 insertions(+), 17 deletions(-)
diff --git a/blockdev.c b/blockdev.c
index 4d141e9..0124825 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -2811,7 +2811,6 @@ void qmp_block_dirty_bitmap_add(const char *node, const char *name,
{
BlockDriverState *bs;
BdrvDirtyBitmap *bitmap;
- AioContext *aio_context = NULL;
if (!name || name[0] == '\0') {
error_setg(errp, "Bitmap name cannot be empty");
@@ -2847,16 +2846,20 @@ void qmp_block_dirty_bitmap_add(const char *node, const char *name,
}
if (persistent) {
- aio_context = bdrv_get_aio_context(bs);
+ AioContext *aio_context = bdrv_get_aio_context(bs);
+ bool ok;
+
aio_context_acquire(aio_context);
- if (!bdrv_can_store_new_dirty_bitmap(bs, name, granularity, errp)) {
- goto out;
+ ok = bdrv_can_store_new_dirty_bitmap(bs, name, granularity, errp);
+ aio_context_release(aio_context);
+ if (!ok) {
+ return;
}
}
bitmap = bdrv_create_dirty_bitmap(bs, granularity, name, errp);
if (bitmap == NULL) {
- goto out;
+ return;
}
if (disabled) {
@@ -2864,10 +2867,6 @@ void qmp_block_dirty_bitmap_add(const char *node, const char *name,
}
bdrv_dirty_bitmap_set_persistence(bitmap, persistent);
- out:
- if (aio_context) {
- aio_context_release(aio_context);
- }
}
void qmp_block_dirty_bitmap_remove(const char *node, const char *name,
@@ -2875,8 +2874,6 @@ void qmp_block_dirty_bitmap_remove(const char *node, const char *name,
{
BlockDriverState *bs;
BdrvDirtyBitmap *bitmap;
- Error *local_err = NULL;
- AioContext *aio_context = NULL;
bitmap = block_dirty_bitmap_lookup(node, name, &bs, errp);
if (!bitmap || !bs) {
@@ -2889,20 +2886,19 @@ void qmp_block_dirty_bitmap_remove(const char *node, const char *name,
}
if (bdrv_dirty_bitmap_get_persistence(bitmap)) {
- aio_context = bdrv_get_aio_context(bs);
+ AioContext *aio_context = bdrv_get_aio_context(bs);
+ Error *local_err = NULL;
+
aio_context_acquire(aio_context);
bdrv_remove_persistent_dirty_bitmap(bs, name, &local_err);
+ aio_context_release(aio_context);
if (local_err != NULL) {
error_propagate(errp, local_err);
- goto out;
+ return;
}
}
bdrv_release_dirty_bitmap(bs, bitmap);
- out:
- if (aio_context) {
- aio_context_release(aio_context);
- }
}
/**
--
1.8.3.1
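The restructuring in this patch follows a simple pattern: hold the lock only around the one call that needs it, capture the result, release, and only then branch, so early-return paths never need a conditional unlock. A minimal sketch of that pattern (hypothetical names; Python's `threading.Lock` standing in for the AioContext lock):

```python
import threading

lock = threading.Lock()   # stand-in for the per-node AioContext lock
stored_names = set()      # stand-in for on-disk bitmap state guarded by the lock

def can_store_new_bitmap(name):
    # Must be called with the lock held, like bdrv_can_store_new_dirty_bitmap().
    return name not in stored_names

def bitmap_add(name, persistent):
    if persistent:
        # Narrow critical section: hold the lock only for the check, capture
        # the result, and release before any early return -- no conditional
        # 'out:' unlock path is needed.
        with lock:
            ok = can_store_new_bitmap(name)
        if not ok:
            return None
    # Bitmap creation itself is unrelated to the lock-protected I/O.
    return {"name": name, "persistent": persistent}
```

The earlier shape, a single `out:` label with a conditional release, is exactly what the patch removes.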


@@ -0,0 +1,209 @@
From b15fa18e724e356bd889f0566d512daedb9a09dc Mon Sep 17 00:00:00 2001
From: John Snow <jsnow@redhat.com>
Date: Fri, 27 Sep 2019 20:18:46 +0100
Subject: [PATCH 3/6] iotests: test bitmap moving inside 254
RH-Author: John Snow <jsnow@redhat.com>
Message-id: <20190927201846.6823-4-jsnow@redhat.com>
Patchwork-id: 90910
O-Subject: [RHEL-AV-8.1.0 qemu-kvm PATCH 3/3] iotests: test bitmap moving inside 254
Bugzilla: 1756413
RH-Acked-by: Maxim Levitsky <mlevitsk@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Stefano Garzarella <sgarzare@redhat.com>
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Test persistent bitmap copying with and without removal of the
original bitmap.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190708220502.12977-4-jsnow@redhat.com
[Edited comment "bitmap1" --> "bitmap2" as per review. --js]
Signed-off-by: John Snow <jsnow@redhat.com>
(cherry picked from commit 3f7b2fa8cd476fe871ce1d996c640317730752a0)
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
tests/qemu-iotests/254 | 30 +++++++++++++++--
tests/qemu-iotests/254.out | 82 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 110 insertions(+), 2 deletions(-)
diff --git a/tests/qemu-iotests/254 b/tests/qemu-iotests/254
index 8edba91..09584f3 100755
--- a/tests/qemu-iotests/254
+++ b/tests/qemu-iotests/254
@@ -1,6 +1,6 @@
#!/usr/bin/env python
#
-# Test external snapshot with bitmap copying.
+# Test external snapshot with bitmap copying and moving.
#
# Copyright (c) 2019 Virtuozzo International GmbH. All rights reserved.
#
@@ -32,6 +32,10 @@ vm = iotests.VM().add_drive(disk, opts='node-name=base')
vm.launch()
vm.qmp_log('block-dirty-bitmap-add', node='drive0', name='bitmap0')
+vm.qmp_log('block-dirty-bitmap-add', node='drive0', name='bitmap1',
+ persistent=True)
+vm.qmp_log('block-dirty-bitmap-add', node='drive0', name='bitmap2',
+ persistent=True)
vm.hmp_qemu_io('drive0', 'write 0 512K')
@@ -39,16 +43,38 @@ vm.qmp_log('transaction', indent=2, actions=[
{'type': 'blockdev-snapshot-sync',
'data': {'device': 'drive0', 'snapshot-file': top,
'snapshot-node-name': 'snap'}},
+
+ # copy non-persistent bitmap0
{'type': 'block-dirty-bitmap-add',
'data': {'node': 'snap', 'name': 'bitmap0'}},
{'type': 'block-dirty-bitmap-merge',
'data': {'node': 'snap', 'target': 'bitmap0',
- 'bitmaps': [{'node': 'base', 'name': 'bitmap0'}]}}
+ 'bitmaps': [{'node': 'base', 'name': 'bitmap0'}]}},
+
+ # copy persistent bitmap1, original will be saved to base image
+ {'type': 'block-dirty-bitmap-add',
+ 'data': {'node': 'snap', 'name': 'bitmap1', 'persistent': True}},
+ {'type': 'block-dirty-bitmap-merge',
+ 'data': {'node': 'snap', 'target': 'bitmap1',
+ 'bitmaps': [{'node': 'base', 'name': 'bitmap1'}]}},
+
+ # move persistent bitmap2, original will be removed and not saved
+ # to base image
+ {'type': 'block-dirty-bitmap-add',
+ 'data': {'node': 'snap', 'name': 'bitmap2', 'persistent': True}},
+ {'type': 'block-dirty-bitmap-merge',
+ 'data': {'node': 'snap', 'target': 'bitmap2',
+ 'bitmaps': [{'node': 'base', 'name': 'bitmap2'}]}},
+ {'type': 'block-dirty-bitmap-remove',
+ 'data': {'node': 'base', 'name': 'bitmap2'}}
], filters=[iotests.filter_qmp_testfiles])
result = vm.qmp('query-block')['return'][0]
log("query-block: device = {}, node-name = {}, dirty-bitmaps:".format(
result['device'], result['inserted']['node-name']))
log(result['dirty-bitmaps'], indent=2)
+log("\nbitmaps in backing image:")
+log(result['inserted']['image']['backing-image']['format-specific'] \
+ ['data']['bitmaps'], indent=2)
vm.shutdown()
diff --git a/tests/qemu-iotests/254.out b/tests/qemu-iotests/254.out
index d7394cf..d185c05 100644
--- a/tests/qemu-iotests/254.out
+++ b/tests/qemu-iotests/254.out
@@ -1,5 +1,9 @@
{"execute": "block-dirty-bitmap-add", "arguments": {"name": "bitmap0", "node": "drive0"}}
{"return": {}}
+{"execute": "block-dirty-bitmap-add", "arguments": {"name": "bitmap1", "node": "drive0", "persistent": true}}
+{"return": {}}
+{"execute": "block-dirty-bitmap-add", "arguments": {"name": "bitmap2", "node": "drive0", "persistent": true}}
+{"return": {}}
{
"execute": "transaction",
"arguments": {
@@ -31,6 +35,55 @@
"target": "bitmap0"
},
"type": "block-dirty-bitmap-merge"
+ },
+ {
+ "data": {
+ "name": "bitmap1",
+ "node": "snap",
+ "persistent": true
+ },
+ "type": "block-dirty-bitmap-add"
+ },
+ {
+ "data": {
+ "bitmaps": [
+ {
+ "name": "bitmap1",
+ "node": "base"
+ }
+ ],
+ "node": "snap",
+ "target": "bitmap1"
+ },
+ "type": "block-dirty-bitmap-merge"
+ },
+ {
+ "data": {
+ "name": "bitmap2",
+ "node": "snap",
+ "persistent": true
+ },
+ "type": "block-dirty-bitmap-add"
+ },
+ {
+ "data": {
+ "bitmaps": [
+ {
+ "name": "bitmap2",
+ "node": "base"
+ }
+ ],
+ "node": "snap",
+ "target": "bitmap2"
+ },
+ "type": "block-dirty-bitmap-merge"
+ },
+ {
+ "data": {
+ "name": "bitmap2",
+ "node": "base"
+ },
+ "type": "block-dirty-bitmap-remove"
}
]
}
@@ -44,9 +97,38 @@ query-block: device = drive0, node-name = snap, dirty-bitmaps:
"busy": false,
"count": 524288,
"granularity": 65536,
+ "name": "bitmap2",
+ "persistent": true,
+ "recording": true,
+ "status": "active"
+ },
+ {
+ "busy": false,
+ "count": 524288,
+ "granularity": 65536,
+ "name": "bitmap1",
+ "persistent": true,
+ "recording": true,
+ "status": "active"
+ },
+ {
+ "busy": false,
+ "count": 524288,
+ "granularity": 65536,
"name": "bitmap0",
"persistent": false,
"recording": true,
"status": "active"
}
]
+
+bitmaps in backing image:
+[
+ {
+ "flags": [
+ "auto"
+ ],
+ "granularity": 65536,
+ "name": "bitmap1"
+ }
+]
--
1.8.3.1
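The transaction exercised by this test moves a bitmap with three grouped actions: add a new bitmap on the snapshot node, merge the old one into it, and remove the original so it is never written to the base image. A small hypothetical helper (not part of the iotest) that builds the same QMP action list:

```python
def bitmap_move_actions(src_node, dst_node, name):
    """Build the 'transaction' actions that move a persistent dirty bitmap
    from src_node to dst_node: add, merge, then remove the original so it
    is not flushed to the source image."""
    return [
        {'type': 'block-dirty-bitmap-add',
         'data': {'node': dst_node, 'name': name, 'persistent': True}},
        {'type': 'block-dirty-bitmap-merge',
         'data': {'node': dst_node, 'target': name,
                  'bitmaps': [{'node': src_node, 'name': name}]}},
        {'type': 'block-dirty-bitmap-remove',
         'data': {'node': src_node, 'name': name}},
    ]
```

Passing the returned list as `actions` to `vm.qmp_log('transaction', ...)` would reproduce the bitmap2 portion of the test above.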


@@ -0,0 +1,200 @@
From 7cf87a669fa0dd580013b0ca5e4510f12aff2319 Mon Sep 17 00:00:00 2001
From: Eric Blake <eblake@redhat.com>
Date: Wed, 9 Oct 2019 14:10:07 +0100
Subject: [PATCH 5/6] nbd: Grab aio context lock in more places
RH-Author: Eric Blake <eblake@redhat.com>
Message-id: <20191009141008.24439-2-eblake@redhat.com>
Patchwork-id: 91353
O-Subject: [RHEL-AV-8.1.1 qemu-kvm PATCH 1/2] nbd: Grab aio context lock in more places
Bugzilla: 1741094
RH-Acked-by: John Snow <jsnow@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Acked-by: Stefano Garzarella <sgarzare@redhat.com>
When iothreads are in use, the failure to grab the aio context results
in an assertion failure during blk_unref, when trying to unlock a
mutex that was not locked. In short, all calls to nbd_export_put need
to be made within the correct aio context. But since nbd_export_put
can recursively reach itself via
nbd_export_close, and recursively grabbing the context would deadlock,
we can't do the context grab directly in those functions, but must do
so in their callers.
Hoist the use of the correct aio_context from nbd_export_new() to its
caller qmp_nbd_server_add(). Then tweak qmp_nbd_server_remove(),
nbd_eject_notifier(), and nbd_export_close_all() to grab the right
context, so that all callers within qemu now own the context before
nbd_export_put() can call blk_unref().
Remaining uses in qemu-nbd don't matter (since that use case does not
support iothreads).
Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190917023917.32226-1-eblake@redhat.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>
(cherry picked from commit 61bc846d8c58535af6884b637a4005dd6111ea95)
Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
blockdev-nbd.c | 14 ++++++++++++--
include/block/nbd.h | 1 +
nbd/server.c | 22 ++++++++++++++++++----
3 files changed, 31 insertions(+), 6 deletions(-)
diff --git a/blockdev-nbd.c b/blockdev-nbd.c
index 06041a2..bed9370 100644
--- a/blockdev-nbd.c
+++ b/blockdev-nbd.c
@@ -152,6 +152,7 @@ void qmp_nbd_server_add(const char *device, bool has_name, const char *name,
BlockBackend *on_eject_blk;
NBDExport *exp;
int64_t len;
+ AioContext *aio_context;
if (!nbd_server) {
error_setg(errp, "NBD server not running");
@@ -174,11 +175,13 @@ void qmp_nbd_server_add(const char *device, bool has_name, const char *name,
return;
}
+ aio_context = bdrv_get_aio_context(bs);
+ aio_context_acquire(aio_context);
len = bdrv_getlength(bs);
if (len < 0) {
error_setg_errno(errp, -len,
"Failed to determine the NBD export's length");
- return;
+ goto out;
}
if (!has_writable) {
@@ -192,13 +195,16 @@ void qmp_nbd_server_add(const char *device, bool has_name, const char *name,
writable ? 0 : NBD_FLAG_READ_ONLY,
NULL, false, on_eject_blk, errp);
if (!exp) {
- return;
+ goto out;
}
/* The list of named exports has a strong reference to this export now and
* our only way of accessing it is through nbd_export_find(), so we can drop
* the strong reference that is @exp. */
nbd_export_put(exp);
+
+ out:
+ aio_context_release(aio_context);
}
void qmp_nbd_server_remove(const char *name,
@@ -206,6 +212,7 @@ void qmp_nbd_server_remove(const char *name,
Error **errp)
{
NBDExport *exp;
+ AioContext *aio_context;
if (!nbd_server) {
error_setg(errp, "NBD server not running");
@@ -222,7 +229,10 @@ void qmp_nbd_server_remove(const char *name,
mode = NBD_SERVER_REMOVE_MODE_SAFE;
}
+ aio_context = nbd_export_aio_context(exp);
+ aio_context_acquire(aio_context);
nbd_export_remove(exp, mode, errp);
+ aio_context_release(aio_context);
}
void qmp_nbd_server_stop(Error **errp)
diff --git a/include/block/nbd.h b/include/block/nbd.h
index bb9f5bc..82f9b9e 100644
--- a/include/block/nbd.h
+++ b/include/block/nbd.h
@@ -335,6 +335,7 @@ void nbd_export_put(NBDExport *exp);
BlockBackend *nbd_export_get_blockdev(NBDExport *exp);
+AioContext *nbd_export_aio_context(NBDExport *exp);
NBDExport *nbd_export_find(const char *name);
void nbd_export_close_all(void);
diff --git a/nbd/server.c b/nbd/server.c
index ea0353a..81f8217 100644
--- a/nbd/server.c
+++ b/nbd/server.c
@@ -1460,7 +1460,12 @@ static void blk_aio_detach(void *opaque)
static void nbd_eject_notifier(Notifier *n, void *data)
{
NBDExport *exp = container_of(n, NBDExport, eject_notifier);
+ AioContext *aio_context;
+
+ aio_context = exp->ctx;
+ aio_context_acquire(aio_context);
nbd_export_close(exp);
+ aio_context_release(aio_context);
}
NBDExport *nbd_export_new(BlockDriverState *bs, uint64_t dev_offset,
@@ -1479,12 +1484,11 @@ NBDExport *nbd_export_new(BlockDriverState *bs, uint64_t dev_offset,
* NBD exports are used for non-shared storage migration. Make sure
* that BDRV_O_INACTIVE is cleared and the image is ready for write
* access since the export could be available before migration handover.
+ * ctx was acquired in the caller.
*/
assert(name);
ctx = bdrv_get_aio_context(bs);
- aio_context_acquire(ctx);
bdrv_invalidate_cache(bs, NULL);
- aio_context_release(ctx);
/* Don't allow resize while the NBD server is running, otherwise we don't
* care what happens with the node. */
@@ -1492,7 +1496,7 @@ NBDExport *nbd_export_new(BlockDriverState *bs, uint64_t dev_offset,
if ((nbdflags & NBD_FLAG_READ_ONLY) == 0) {
perm |= BLK_PERM_WRITE;
}
- blk = blk_new(bdrv_get_aio_context(bs), perm,
+ blk = blk_new(ctx, perm,
BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
BLK_PERM_WRITE | BLK_PERM_GRAPH_MOD);
ret = blk_insert_bs(blk, bs, errp);
@@ -1549,7 +1553,7 @@ NBDExport *nbd_export_new(BlockDriverState *bs, uint64_t dev_offset,
}
exp->close = close;
- exp->ctx = blk_get_aio_context(blk);
+ exp->ctx = ctx;
blk_add_aio_context_notifier(blk, blk_aio_attached, blk_aio_detach, exp);
if (on_eject_blk) {
@@ -1582,6 +1586,12 @@ NBDExport *nbd_export_find(const char *name)
return NULL;
}
+AioContext *
+nbd_export_aio_context(NBDExport *exp)
+{
+ return exp->ctx;
+}
+
void nbd_export_close(NBDExport *exp)
{
NBDClient *client, *next;
@@ -1676,9 +1686,13 @@ BlockBackend *nbd_export_get_blockdev(NBDExport *exp)
void nbd_export_close_all(void)
{
NBDExport *exp, *next;
+ AioContext *aio_context;
QTAILQ_FOREACH_SAFE(exp, &exports, next, next) {
+ aio_context = exp->ctx;
+ aio_context_acquire(aio_context);
nbd_export_close(exp);
+ aio_context_release(aio_context);
}
}
--
1.8.3.1
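The rule this patch establishes, that the caller rather than the close/put helpers owns the context lock, can be sketched like this (hypothetical names; a non-reentrant `threading.Lock` standing in for the AioContext lock):

```python
import threading

ctx_lock = threading.Lock()  # non-reentrant, like the discipline the patch enforces

def export_close(exports, name):
    # Must be called with ctx_lock held: dropping the last reference here
    # is what, in QEMU, ends in blk_unref() touching the locked context.
    exports.discard(name)

def server_remove(exports, name):
    # The caller, not export_close(), acquires the lock, exactly once.
    with ctx_lock:
        export_close(exports, name)

def close_all(exports):
    # Per-export acquire/release, as in nbd_export_close_all().
    for name in list(exports):
        with ctx_lock:
            export_close(exports, name)
```

Had `export_close()` tried to take `ctx_lock` itself, `server_remove()` would deadlock on the non-reentrant lock, which is why the acquisition is hoisted to the callers.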


@@ -0,0 +1,274 @@
From fd8ecebf0c0632e473bcb8bb08dc8311a5530dcf Mon Sep 17 00:00:00 2001
From: John Snow <jsnow@redhat.com>
Date: Fri, 27 Sep 2019 20:18:45 +0100
Subject: [PATCH 2/6] qapi: implement block-dirty-bitmap-remove transaction
action
RH-Author: John Snow <jsnow@redhat.com>
Message-id: <20190927201846.6823-3-jsnow@redhat.com>
Patchwork-id: 90911
O-Subject: [RHEL-AV-8.1.0 qemu-kvm PATCH 2/3] qapi: implement block-dirty-bitmap-remove transaction action
Bugzilla: 1756413
RH-Acked-by: Maxim Levitsky <mlevitsk@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Stefano Garzarella <sgarzare@redhat.com>
It is used to do transactional movement of the bitmap (which is
possible in conjunction with the merge command). Transactional bitmap
movement is needed in external snapshot scenarios, when we don't
want to leave a copy of the bitmap in the base image.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190708220502.12977-3-jsnow@redhat.com
[Edited "since" version to 4.2 --js]
Signed-off-by: John Snow <jsnow@redhat.com>
(cherry picked from commit c4e4b0fa598ddc9cee6ba7a06899ce0a8dae6c61)
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
block.c | 2 +-
block/dirty-bitmap.c | 15 ++++----
blockdev.c | 79 ++++++++++++++++++++++++++++++++++++++----
include/block/dirty-bitmap.h | 2 +-
migration/block-dirty-bitmap.c | 2 +-
qapi/transaction.json | 2 ++
6 files changed, 85 insertions(+), 17 deletions(-)
diff --git a/block.c b/block.c
index cbd8da5..92a3e9f 100644
--- a/block.c
+++ b/block.c
@@ -5334,7 +5334,7 @@ static void coroutine_fn bdrv_co_invalidate_cache(BlockDriverState *bs,
for (bm = bdrv_dirty_bitmap_next(bs, NULL); bm;
bm = bdrv_dirty_bitmap_next(bs, bm))
{
- bdrv_dirty_bitmap_set_migration(bm, false);
+ bdrv_dirty_bitmap_skip_store(bm, false);
}
ret = refresh_total_sectors(bs, bs->total_sectors);
diff --git a/block/dirty-bitmap.c b/block/dirty-bitmap.c
index 95a9c2a..a308e1f 100644
--- a/block/dirty-bitmap.c
+++ b/block/dirty-bitmap.c
@@ -48,10 +48,9 @@ struct BdrvDirtyBitmap {
bool inconsistent; /* bitmap is persistent, but inconsistent.
It cannot be used at all in any way, except
a QMP user can remove it. */
- bool migration; /* Bitmap is selected for migration, it should
- not be stored on the next inactivation
- (persistent flag doesn't matter until next
- invalidation).*/
+ bool skip_store; /* We are either migrating or deleting this
+ * bitmap; it should not be stored on the next
+ * inactivation. */
QLIST_ENTRY(BdrvDirtyBitmap) list;
};
@@ -757,16 +756,16 @@ void bdrv_dirty_bitmap_set_inconsistent(BdrvDirtyBitmap *bitmap)
}
/* Called with BQL taken. */
-void bdrv_dirty_bitmap_set_migration(BdrvDirtyBitmap *bitmap, bool migration)
+void bdrv_dirty_bitmap_skip_store(BdrvDirtyBitmap *bitmap, bool skip)
{
qemu_mutex_lock(bitmap->mutex);
- bitmap->migration = migration;
+ bitmap->skip_store = skip;
qemu_mutex_unlock(bitmap->mutex);
}
bool bdrv_dirty_bitmap_get_persistence(BdrvDirtyBitmap *bitmap)
{
- return bitmap->persistent && !bitmap->migration;
+ return bitmap->persistent && !bitmap->skip_store;
}
bool bdrv_dirty_bitmap_inconsistent(const BdrvDirtyBitmap *bitmap)
@@ -778,7 +777,7 @@ bool bdrv_has_changed_persistent_bitmaps(BlockDriverState *bs)
{
BdrvDirtyBitmap *bm;
QLIST_FOREACH(bm, &bs->dirty_bitmaps, list) {
- if (bm->persistent && !bm->readonly && !bm->migration) {
+ if (bm->persistent && !bm->readonly && !bm->skip_store) {
return true;
}
}
diff --git a/blockdev.c b/blockdev.c
index 0124825..800b3dc 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -2134,6 +2134,51 @@ static void block_dirty_bitmap_merge_prepare(BlkActionState *common,
errp);
}
+static BdrvDirtyBitmap *do_block_dirty_bitmap_remove(
+ const char *node, const char *name, bool release,
+ BlockDriverState **bitmap_bs, Error **errp);
+
+static void block_dirty_bitmap_remove_prepare(BlkActionState *common,
+ Error **errp)
+{
+ BlockDirtyBitmap *action;
+ BlockDirtyBitmapState *state = DO_UPCAST(BlockDirtyBitmapState,
+ common, common);
+
+ if (action_check_completion_mode(common, errp) < 0) {
+ return;
+ }
+
+ action = common->action->u.block_dirty_bitmap_remove.data;
+
+ state->bitmap = do_block_dirty_bitmap_remove(action->node, action->name,
+ false, &state->bs, errp);
+ if (state->bitmap) {
+ bdrv_dirty_bitmap_skip_store(state->bitmap, true);
+ bdrv_dirty_bitmap_set_busy(state->bitmap, true);
+ }
+}
+
+static void block_dirty_bitmap_remove_abort(BlkActionState *common)
+{
+ BlockDirtyBitmapState *state = DO_UPCAST(BlockDirtyBitmapState,
+ common, common);
+
+ if (state->bitmap) {
+ bdrv_dirty_bitmap_skip_store(state->bitmap, false);
+ bdrv_dirty_bitmap_set_busy(state->bitmap, false);
+ }
+}
+
+static void block_dirty_bitmap_remove_commit(BlkActionState *common)
+{
+ BlockDirtyBitmapState *state = DO_UPCAST(BlockDirtyBitmapState,
+ common, common);
+
+ bdrv_dirty_bitmap_set_busy(state->bitmap, false);
+ bdrv_release_dirty_bitmap(state->bs, state->bitmap);
+}
+
static void abort_prepare(BlkActionState *common, Error **errp)
{
error_setg(errp, "Transaction aborted using Abort action");
@@ -2211,6 +2256,12 @@ static const BlkActionOps actions[] = {
.commit = block_dirty_bitmap_free_backup,
.abort = block_dirty_bitmap_restore,
},
+ [TRANSACTION_ACTION_KIND_BLOCK_DIRTY_BITMAP_REMOVE] = {
+ .instance_size = sizeof(BlockDirtyBitmapState),
+ .prepare = block_dirty_bitmap_remove_prepare,
+ .commit = block_dirty_bitmap_remove_commit,
+ .abort = block_dirty_bitmap_remove_abort,
+ },
/* Where are transactions for MIRROR, COMMIT and STREAM?
* Although these blockjobs use transaction callbacks like the backup job,
* these jobs do not necessarily adhere to transaction semantics.
@@ -2869,20 +2920,21 @@ void qmp_block_dirty_bitmap_add(const char *node, const char *name,
bdrv_dirty_bitmap_set_persistence(bitmap, persistent);
}
-void qmp_block_dirty_bitmap_remove(const char *node, const char *name,
- Error **errp)
+static BdrvDirtyBitmap *do_block_dirty_bitmap_remove(
+ const char *node, const char *name, bool release,
+ BlockDriverState **bitmap_bs, Error **errp)
{
BlockDriverState *bs;
BdrvDirtyBitmap *bitmap;
bitmap = block_dirty_bitmap_lookup(node, name, &bs, errp);
if (!bitmap || !bs) {
- return;
+ return NULL;
}
if (bdrv_dirty_bitmap_check(bitmap, BDRV_BITMAP_BUSY | BDRV_BITMAP_RO,
errp)) {
- return;
+ return NULL;
}
if (bdrv_dirty_bitmap_get_persistence(bitmap)) {
@@ -2892,13 +2944,28 @@ void qmp_block_dirty_bitmap_remove(const char *node, const char *name,
aio_context_acquire(aio_context);
bdrv_remove_persistent_dirty_bitmap(bs, name, &local_err);
aio_context_release(aio_context);
+
if (local_err != NULL) {
error_propagate(errp, local_err);
- return;
+ return NULL;
}
}
- bdrv_release_dirty_bitmap(bs, bitmap);
+ if (release) {
+ bdrv_release_dirty_bitmap(bs, bitmap);
+ }
+
+ if (bitmap_bs) {
+ *bitmap_bs = bs;
+ }
+
+ return release ? NULL : bitmap;
+}
+
+void qmp_block_dirty_bitmap_remove(const char *node, const char *name,
+ Error **errp)
+{
+ do_block_dirty_bitmap_remove(node, name, true, NULL, errp);
}
/**
diff --git a/include/block/dirty-bitmap.h b/include/block/dirty-bitmap.h
index 62682eb..a21d54a 100644
--- a/include/block/dirty-bitmap.h
+++ b/include/block/dirty-bitmap.h
@@ -83,7 +83,7 @@ void bdrv_dirty_bitmap_set_inconsistent(BdrvDirtyBitmap *bitmap);
void bdrv_dirty_bitmap_set_busy(BdrvDirtyBitmap *bitmap, bool busy);
void bdrv_merge_dirty_bitmap(BdrvDirtyBitmap *dest, const BdrvDirtyBitmap *src,
HBitmap **backup, Error **errp);
-void bdrv_dirty_bitmap_set_migration(BdrvDirtyBitmap *bitmap, bool migration);
+void bdrv_dirty_bitmap_skip_store(BdrvDirtyBitmap *bitmap, bool skip);
/* Functions that require manual locking. */
void bdrv_dirty_bitmap_lock(BdrvDirtyBitmap *bitmap);
diff --git a/migration/block-dirty-bitmap.c b/migration/block-dirty-bitmap.c
index 4a896a0..d650ba4 100644
--- a/migration/block-dirty-bitmap.c
+++ b/migration/block-dirty-bitmap.c
@@ -326,7 +326,7 @@ static int init_dirty_bitmap_migration(void)
/* unset migration flags here, to not roll back it */
QSIMPLEQ_FOREACH(dbms, &dirty_bitmap_mig_state.dbms_list, entry) {
- bdrv_dirty_bitmap_set_migration(dbms->bitmap, true);
+ bdrv_dirty_bitmap_skip_store(dbms->bitmap, true);
}
if (QSIMPLEQ_EMPTY(&dirty_bitmap_mig_state.dbms_list)) {
diff --git a/qapi/transaction.json b/qapi/transaction.json
index 95edb78..0590dbc 100644
--- a/qapi/transaction.json
+++ b/qapi/transaction.json
@@ -45,6 +45,7 @@
#
# - @abort: since 1.6
# - @block-dirty-bitmap-add: since 2.5
+# - @block-dirty-bitmap-remove: since 4.2
# - @block-dirty-bitmap-clear: since 2.5
# - @block-dirty-bitmap-enable: since 4.0
# - @block-dirty-bitmap-disable: since 4.0
@@ -61,6 +62,7 @@
'data': {
'abort': 'Abort',
'block-dirty-bitmap-add': 'BlockDirtyBitmapAdd',
+ 'block-dirty-bitmap-remove': 'BlockDirtyBitmap',
'block-dirty-bitmap-clear': 'BlockDirtyBitmap',
'block-dirty-bitmap-enable': 'BlockDirtyBitmap',
'block-dirty-bitmap-disable': 'BlockDirtyBitmap',
--
1.8.3.1
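The prepare/commit/abort split this patch adds can be modeled outside QEMU as follows (a hypothetical sketch, not the real BlkActionState API): prepare marks the bitmap busy and excludes it from storage, abort rolls those flags back, and only commit actually deletes it.

```python
class Bitmap:
    def __init__(self, name):
        self.name = name
        self.busy = False        # hidden from other users during the transaction
        self.skip_store = False  # not flushed to the image meanwhile

def remove_prepare(state, bitmaps, name):
    bm = bitmaps.get(name)
    if bm is None or bm.busy:
        raise ValueError(name)   # stand-in for the errp failure path
    bm.busy = True
    bm.skip_store = True
    state['bitmap'] = bm

def remove_abort(state):
    bm = state.get('bitmap')
    if bm:                       # undo only what prepare did
        bm.busy = False
        bm.skip_store = False

def remove_commit(state, bitmaps):
    bm = state['bitmap']
    bm.busy = False
    del bitmaps[bm.name]         # deletion happens only at commit time

def run_transaction(bitmaps, names):
    states = []
    try:
        for n in names:
            st = {}
            remove_prepare(st, bitmaps, n)
            states.append(st)
    except ValueError:
        for st in states:        # any failure aborts every prepared action
            remove_abort(st)
        return False
    for st in states:
        remove_commit(st, bitmaps)
    return True
```

Because nothing is deleted until every action's prepare has succeeded, a failed action in the same transaction leaves all bitmaps exactly as they were.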


@@ -0,0 +1,65 @@
From 3a7d0411addca79192ed60939f55ec019c27a72a Mon Sep 17 00:00:00 2001
From: David Gibson <dgibson@redhat.com>
Date: Tue, 8 Oct 2019 05:08:36 +0100
Subject: [PATCH 4/6] spapr/xive: skip partially initialized vCPUs in presenter
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: David Gibson <dgibson@redhat.com>
Message-id: <20191008050836.11479-1-dgibson@redhat.com>
Patchwork-id: 90994
O-Subject: [RHEL-AV-8.1.1 qemu-kvm PATCH] spapr/xive: skip partially initialized vCPUs in presenter
Bugzilla: 1754710
RH-Acked-by: Laurent Vivier <lvivier@redhat.com>
RH-Acked-by: Thomas Huth <thuth@redhat.com>
RH-Acked-by: Philippe Mathieu-Daudé <philmd@redhat.com>
From: Cédric Le Goater <clg@kaod.org>
When vCPUs are hotplugged, they are added to the QEMU CPU list before
being fully realized. This can crash the XIVE presenter because the
'tctx' pointer is not necessarily initialized when looking for a
matching target.
These vCPUs are not valid targets for the presenter. Skip them.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20191001085722.32755-1-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
(cherry picked from commit 627fa61746f70f7c799f08e9048bb6a482402138)
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1754710
Branch: rhel-av-8.1.1
Brew: https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=23900462
Testing: Could no longer reproduce bug with brewed qemu
Signed-off-by: David Gibson <dgibson@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
hw/intc/xive.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index da148e9..8f639f6 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -1345,6 +1345,14 @@ static bool xive_presenter_match(XiveRouter *xrtr, uint8_t format,
int ring;
/*
+ * Skip partially initialized vCPUs. This can happen when
+ * vCPUs are hotplugged.
+ */
+ if (!tctx) {
+ continue;
+ }
+
+ /*
* HW checks that the CPU is enabled in the Physical Thread
* Enable Register (PTER).
*/
--
1.8.3.1
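The guard this patch adds amounts to filtering out list entries whose thread context is not yet set, rather than dereferencing them. A tiny hypothetical sketch of the same scan:

```python
def find_match(cpus, target):
    """Scan the vCPU list for a matching thread context, skipping vCPUs
    that are on the list but not yet fully realized (tctx still unset)."""
    for cpu in cpus:
        tctx = cpu.get('tctx')
        if tctx is None:
            continue  # partially initialized (hotplugged) vCPU: never a target
        if tctx == target:
            return cpu
    return None
```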


@@ -0,0 +1,73 @@
From c03d23733166328e70f98504d7dfaa528e889633 Mon Sep 17 00:00:00 2001
From: Eric Blake <eblake@redhat.com>
Date: Wed, 9 Oct 2019 14:10:08 +0100
Subject: [PATCH 6/6] tests: Use iothreads during iotest 223
RH-Author: Eric Blake <eblake@redhat.com>
Message-id: <20191009141008.24439-3-eblake@redhat.com>
Patchwork-id: 91355
O-Subject: [RHEL-AV-8.1.1 qemu-kvm PATCH 2/2] tests: Use iothreads during iotest 223
Bugzilla: 1741094
RH-Acked-by: John Snow <jsnow@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Acked-by: Stefano Garzarella <sgarzare@redhat.com>
Doing so catches the bugs we just fixed, where NBD was not properly
using the correct contexts.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190920220729.31801-1-eblake@redhat.com>
(cherry picked from commit 506902c6fa80210b002e30ff33794bfc718b15c6)
Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
tests/qemu-iotests/223 | 6 ++++--
tests/qemu-iotests/223.out | 1 +
2 files changed, 5 insertions(+), 2 deletions(-)
diff --git a/tests/qemu-iotests/223 b/tests/qemu-iotests/223
index cc48e78..2ba3d81 100755
--- a/tests/qemu-iotests/223
+++ b/tests/qemu-iotests/223
@@ -2,7 +2,7 @@
#
# Test reading dirty bitmap over NBD
#
-# Copyright (C) 2018 Red Hat, Inc.
+# Copyright (C) 2018-2019 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -109,7 +109,7 @@ echo
echo "=== End dirty bitmaps, and start serving image over NBD ==="
echo
-_launch_qemu 2> >(_filter_nbd)
+_launch_qemu -object iothread,id=io0 2> >(_filter_nbd)
# Intentionally provoke some errors as well, to check error handling
silent=
@@ -117,6 +117,8 @@ _send_qemu_cmd $QEMU_HANDLE '{"execute":"qmp_capabilities"}' "return"
_send_qemu_cmd $QEMU_HANDLE '{"execute":"blockdev-add",
"arguments":{"driver":"qcow2", "node-name":"n",
"file":{"driver":"file", "filename":"'"$TEST_IMG"'"}}}' "return"
+_send_qemu_cmd $QEMU_HANDLE '{"execute":"x-blockdev-set-iothread",
+ "arguments":{"node-name":"n", "iothread":"io0"}}' "return"
_send_qemu_cmd $QEMU_HANDLE '{"execute":"block-dirty-bitmap-disable",
"arguments":{"node":"n", "name":"b"}}' "return"
_send_qemu_cmd $QEMU_HANDLE '{"execute":"nbd-server-add",
diff --git a/tests/qemu-iotests/223.out b/tests/qemu-iotests/223.out
index d5201b2..90cc4b6 100644
--- a/tests/qemu-iotests/223.out
+++ b/tests/qemu-iotests/223.out
@@ -27,6 +27,7 @@ wrote 2097152/2097152 bytes at offset 2097152
{"return": {}}
{"return": {}}
{"return": {}}
+{"return": {}}
{"error": {"class": "GenericError", "desc": "NBD server not running"}}
{"return": {}}
{"error": {"class": "GenericError", "desc": "NBD server already running"}}
--
1.8.3.1


@@ -67,7 +67,7 @@ Obsoletes: %1-rhev
Summary: QEMU is a machine emulator and virtualizer
Name: qemu-kvm
Version: 4.1.0
Release: 13%{?dist}
Release: 14%{?dist}
# Epoch because we pushed a qemu-1.0 package. AIUI this can't ever be dropped
Epoch: 15
License: GPLv2 and GPLv2+ and CC-BY
@@ -220,6 +220,18 @@ Patch67: kvm-qemu-iotests-Add-test-for-bz-1745922.patch
Patch68: kvm-nbd-server-attach-client-channel-to-the-export-s-Aio.patch
# For bz#1744955 - Qemu hang when block resize a qcow2 image
Patch69: kvm-virtio-blk-schedule-virtio_notify_config-to-run-on-m.patch
# For bz#1756413 - backport support for transactionable block-dirty-bitmap-remove for incremental backup support
Patch70: kvm-blockdev-reduce-aio_context-locked-sections-in-bitma.patch
# For bz#1756413 - backport support for transactionable block-dirty-bitmap-remove for incremental backup support
Patch71: kvm-qapi-implement-block-dirty-bitmap-remove-transaction.patch
# For bz#1756413 - backport support for transactionable block-dirty-bitmap-remove for incremental backup support
Patch72: kvm-iotests-test-bitmap-moving-inside-254.patch
# For bz#1754710 - qemu core dumped when hotpluging vcpus
Patch73: kvm-spapr-xive-skip-partially-initialized-vCPUs-in-prese.patch
# For bz#1741094 - [Upstream]Incremental backup: Qemu coredump when expose an active bitmap via pull mode(data plane enable)
Patch74: kvm-nbd-Grab-aio-context-lock-in-more-places.patch
# For bz#1741094 - [Upstream]Incremental backup: Qemu coredump when expose an active bitmap via pull mode(data plane enable)
Patch75: kvm-tests-Use-iothreads-during-iotest-223.patch
BuildRequires: wget
BuildRequires: rpm-build
@@ -1161,6 +1173,20 @@ useradd -r -u 107 -g qemu -G kvm -d / -s /sbin/nologin \
%changelog
* Tue Nov 12 2019 Danilo Cesar Lemes de Paula <ddepaula@redhat.com> - 4.1.0-14.el8
- kvm-blockdev-reduce-aio_context-locked-sections-in-bitma.patch [bz#1756413]
- kvm-qapi-implement-block-dirty-bitmap-remove-transaction.patch [bz#1756413]
- kvm-iotests-test-bitmap-moving-inside-254.patch [bz#1756413]
- kvm-spapr-xive-skip-partially-initialized-vCPUs-in-prese.patch [bz#1754710]
- kvm-nbd-Grab-aio-context-lock-in-more-places.patch [bz#1741094]
- kvm-tests-Use-iothreads-during-iotest-223.patch [bz#1741094]
- Resolves: bz#1741094
([Upstream]Incremental backup: Qemu coredump when expose an active bitmap via pull mode(data plane enable))
- Resolves: bz#1754710
(qemu core dumped when hotpluging vcpus)
- Resolves: bz#1756413
(backport support for transactionable block-dirty-bitmap-remove for incremental backup support)
* Fri Sep 27 2019 Danilo Cesar Lemes de Paula <ddepaula@redhat.com> - 4.1.0-13.el8
- kvm-nbd-server-attach-client-channel-to-the-export-s-Aio.patch [bz#1748253]
- kvm-virtio-blk-schedule-virtio_notify_config-to-run-on-m.patch [bz#1744955]