* Thu Feb 17 2022 Miroslav Rezanina <mrezanin@redhat.com> - 6.2.0-9

- kvm-block-Lock-AioContext-for-drain_end-in-blockdev-reop.patch [bz#2046659]
- kvm-iotests-Test-blockdev-reopen-with-iothreads-and-thro.patch [bz#2046659]
- kvm-block-nbd-Delete-reconnect-delay-timer-when-done.patch [bz#2033626]
- kvm-block-nbd-Assert-there-are-no-timers-when-closed.patch [bz#2033626]
- kvm-iotests.py-Add-QemuStorageDaemon-class.patch [bz#2033626]
- kvm-iotests-281-Test-lingering-timers.patch [bz#2033626]
- kvm-block-nbd-Move-s-ioc-on-AioContext-change.patch [bz#2033626]
- kvm-iotests-281-Let-NBD-connection-yield-in-iothread.patch [bz#2033626]
- Resolves: bz#2046659
  (qemu crash after execute blockdev-reopen with iothread)
- Resolves: bz#2033626
  (Qemu core dump when start guest with nbd node or do block jobs to nbd node)
Commit ed795e95d8 (parent 0daf0004a7)
Author: Miroslav Rezanina <mrezanin@redhat.com>
Date: 2022-02-17 01:48:18 -05:00
10 changed files with 792 additions and 2 deletions

@@ -0,0 +1,63 @@
From 7b973b9cb7b890eaf9a31c99f5c272b513322ac1 Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Thu, 3 Feb 2022 15:05:33 +0100
Subject: [PATCH 1/8] block: Lock AioContext for drain_end in blockdev-reopen
RH-Author: Kevin Wolf <kwolf@redhat.com>
RH-MergeRequest: 73: block: Lock AioContext for drain_end in blockdev-reopen
RH-Commit: [1/2] db25e999152b0e4f09decade1ac76b9f56cd9706 (kmwolf/centos-qemu-kvm)
RH-Bugzilla: 2046659
RH-Acked-by: Sergio Lopez <None>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Acked-by: Hanna Reitz <hreitz@redhat.com>
bdrv_subtree_drained_end() requires the caller to hold the AioContext
lock for the drained node. Not doing this for nodes outside of the main
AioContext leads to crashes when AIO_WAIT_WHILE() needs to wait and
tries to temporarily release the lock.
Fixes: 3908b7a8994fa5ef7a89aa58cd5a02fc58141592
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2046659
Reported-by: Qing Wang <qinwang@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20220203140534.36522-2-kwolf@redhat.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit aba8205be0707b9d108e32254e186ba88107a869)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
blockdev.c | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/blockdev.c b/blockdev.c
index b35072644e..565f6a81fd 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3562,6 +3562,7 @@ void qmp_blockdev_reopen(BlockdevOptionsList *reopen_list, Error **errp)
{
BlockReopenQueue *queue = NULL;
GSList *drained = NULL;
+ GSList *p;
/* Add each one of the BDS that we want to reopen to the queue */
for (; reopen_list != NULL; reopen_list = reopen_list->next) {
@@ -3611,7 +3612,15 @@ void qmp_blockdev_reopen(BlockdevOptionsList *reopen_list, Error **errp)
fail:
bdrv_reopen_queue_free(queue);
- g_slist_free_full(drained, (GDestroyNotify) bdrv_subtree_drained_end);
+ for (p = drained; p; p = p->next) {
+ BlockDriverState *bs = p->data;
+ AioContext *ctx = bdrv_get_aio_context(bs);
+
+ aio_context_acquire(ctx);
+ bdrv_subtree_drained_end(bs);
+ aio_context_release(ctx);
+ }
+ g_slist_free(drained);
}
void qmp_blockdev_del(const char *node_name, Error **errp)
--
2.27.0

@@ -0,0 +1,52 @@
From 76b03619435d0b2f0125ee7aa5c94f2b889247de Mon Sep 17 00:00:00 2001
From: Hanna Reitz <hreitz@redhat.com>
Date: Fri, 4 Feb 2022 12:10:08 +0100
Subject: [PATCH 4/8] block/nbd: Assert there are no timers when closed
RH-Author: Hanna Reitz <hreitz@redhat.com>
RH-MergeRequest: 74: block/nbd: Handle AioContext changes
RH-Commit: [2/6] 56903457ca35d9c596aeb6827a48f80e8eabd66a (hreitz/qemu-kvm-c-9-s)
RH-Bugzilla: 2033626
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Acked-by: Eric Blake <eblake@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Our two timers must not remain armed beyond nbd_clear_bdrvstate(), or
they will access freed data when they fire.
This patch is separate from the patches that actually fix the issue
(HEAD^^ and HEAD^) so that you can run the associated regression iotest
(281) on a configuration that reproducibly exposes the bug.
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
(cherry picked from commit 8a39c381e5e407d2fe5500324323f90a8540fa90)
Conflict:
- block/nbd.c: open_timer was introduced after the 6.2 release (for
  nbd's @open-timeout parameter), and has not been backported, so drop
  the assertion that it is NULL
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
---
block/nbd.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/block/nbd.c b/block/nbd.c
index b8e5a9b4cc..aab20125d8 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -108,6 +108,9 @@ static void nbd_clear_bdrvstate(BlockDriverState *bs)
yank_unregister_instance(BLOCKDEV_YANK_INSTANCE(bs->node_name));
+ /* Must not leave timers behind that would access freed data */
+ assert(!s->reconnect_delay_timer);
+
object_unref(OBJECT(s->tlscreds));
qapi_free_SocketAddress(s->saddr);
s->saddr = NULL;
--
2.27.0

@@ -0,0 +1,54 @@
From eeb4683ad8c40a03a4e91463ec1d1b651974b744 Mon Sep 17 00:00:00 2001
From: Hanna Reitz <hreitz@redhat.com>
Date: Fri, 4 Feb 2022 12:10:06 +0100
Subject: [PATCH 3/8] block/nbd: Delete reconnect delay timer when done
RH-Author: Hanna Reitz <hreitz@redhat.com>
RH-MergeRequest: 74: block/nbd: Handle AioContext changes
RH-Commit: [1/6] 34f92910b6ffd256d781109a2b39737fc6ab449c (hreitz/qemu-kvm-c-9-s)
RH-Bugzilla: 2033626
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Acked-by: Eric Blake <eblake@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
We start the reconnect delay timer to cancel the reconnection attempt
after a while. Once nbd_co_do_establish_connection() has returned, this
attempt is over, and we no longer need the timer.
Delete it before returning from nbd_reconnect_attempt(), so that it does
not persist beyond the I/O request that was paused for reconnecting; we
do not want it to fire in a drained section, because all sort of things
can happen in such a section (e.g. the AioContext might be changed, and
we do not want the timer to fire in the wrong context; or the BDS might
even be deleted, and so the timer CB would access already-freed data).
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
(cherry picked from commit 3ce1fc16bad9c3f8b7b10b451a224d6d76e5c551)
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
---
block/nbd.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/block/nbd.c b/block/nbd.c
index 5ef462db1b..b8e5a9b4cc 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -353,6 +353,13 @@ static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState *s)
}
nbd_co_do_establish_connection(s->bs, NULL);
+
+ /*
+ * The reconnect attempt is done (maybe successfully, maybe not), so
+ * we no longer need this timer. Delete it so it will not outlive
+ * this I/O request (so draining removes all timers).
+ */
+ reconnect_delay_timer_del(s);
}
static coroutine_fn int nbd_receive_replies(BDRVNBDState *s, uint64_t handle)
--
2.27.0

@@ -0,0 +1,107 @@
From 6d9d86cc4e6149d4c0793e8ceb65dab7535a4561 Mon Sep 17 00:00:00 2001
From: Hanna Reitz <hreitz@redhat.com>
Date: Fri, 4 Feb 2022 12:10:11 +0100
Subject: [PATCH 7/8] block/nbd: Move s->ioc on AioContext change
RH-Author: Hanna Reitz <hreitz@redhat.com>
RH-MergeRequest: 74: block/nbd: Handle AioContext changes
RH-Commit: [5/6] b3c1eb21ac70d64fdac6094468a72cfbe50a30a9 (hreitz/qemu-kvm-c-9-s)
RH-Bugzilla: 2033626
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Acked-by: Eric Blake <eblake@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
s->ioc must always be attached to the NBD node's AioContext. If that
context changes, s->ioc must be attached to the new context.
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2033626
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
(cherry picked from commit e15f3a66c830e3fce99c9d56c493c2f7078a1225)
Conflict:
- block/nbd.c: open_timer was added after the 6.2 release, so we need
  not (and cannot) assert it is NULL here.
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
---
block/nbd.c | 41 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 41 insertions(+)
diff --git a/block/nbd.c b/block/nbd.c
index aab20125d8..a3896c7f5f 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -2003,6 +2003,38 @@ static void nbd_cancel_in_flight(BlockDriverState *bs)
nbd_co_establish_connection_cancel(s->conn);
}
+static void nbd_attach_aio_context(BlockDriverState *bs,
+ AioContext *new_context)
+{
+ BDRVNBDState *s = bs->opaque;
+
+ /*
+ * The reconnect_delay_timer is scheduled in I/O paths when the
+ * connection is lost, to cancel the reconnection attempt after a
+ * given time. Once this attempt is done (successfully or not),
+ * nbd_reconnect_attempt() ensures the timer is deleted before the
+ * respective I/O request is resumed.
+ * Since the AioContext can only be changed when a node is drained,
+ * the reconnect_delay_timer cannot be active here.
+ */
+ assert(!s->reconnect_delay_timer);
+
+ if (s->ioc) {
+ qio_channel_attach_aio_context(s->ioc, new_context);
+ }
+}
+
+static void nbd_detach_aio_context(BlockDriverState *bs)
+{
+ BDRVNBDState *s = bs->opaque;
+
+ assert(!s->reconnect_delay_timer);
+
+ if (s->ioc) {
+ qio_channel_detach_aio_context(s->ioc);
+ }
+}
+
static BlockDriver bdrv_nbd = {
.format_name = "nbd",
.protocol_name = "nbd",
@@ -2026,6 +2058,9 @@ static BlockDriver bdrv_nbd = {
.bdrv_dirname = nbd_dirname,
.strong_runtime_opts = nbd_strong_runtime_opts,
.bdrv_cancel_in_flight = nbd_cancel_in_flight,
+
+ .bdrv_attach_aio_context = nbd_attach_aio_context,
+ .bdrv_detach_aio_context = nbd_detach_aio_context,
};
static BlockDriver bdrv_nbd_tcp = {
@@ -2051,6 +2086,9 @@ static BlockDriver bdrv_nbd_tcp = {
.bdrv_dirname = nbd_dirname,
.strong_runtime_opts = nbd_strong_runtime_opts,
.bdrv_cancel_in_flight = nbd_cancel_in_flight,
+
+ .bdrv_attach_aio_context = nbd_attach_aio_context,
+ .bdrv_detach_aio_context = nbd_detach_aio_context,
};
static BlockDriver bdrv_nbd_unix = {
@@ -2076,6 +2114,9 @@ static BlockDriver bdrv_nbd_unix = {
.bdrv_dirname = nbd_dirname,
.strong_runtime_opts = nbd_strong_runtime_opts,
.bdrv_cancel_in_flight = nbd_cancel_in_flight,
+
+ .bdrv_attach_aio_context = nbd_attach_aio_context,
+ .bdrv_detach_aio_context = nbd_detach_aio_context,
};
static void bdrv_nbd_init(void)
--
2.27.0

@@ -0,0 +1,108 @@
From 06583ce33fab2976157461ac4503d6f8eeb59e75 Mon Sep 17 00:00:00 2001
From: Hanna Reitz <hreitz@redhat.com>
Date: Fri, 4 Feb 2022 12:10:12 +0100
Subject: [PATCH 8/8] iotests/281: Let NBD connection yield in iothread
RH-Author: Hanna Reitz <hreitz@redhat.com>
RH-MergeRequest: 74: block/nbd: Handle AioContext changes
RH-Commit: [6/6] 632b9ef5177a80d1c0c00121e1acc37272076d3e (hreitz/qemu-kvm-c-9-s)
RH-Bugzilla: 2033626
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Acked-by: Eric Blake <eblake@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Put an NBD block device into an I/O thread, and then read data from it,
hoping that the NBD connection will yield during that read. When it
does, the coroutine must be reentered in the block device's I/O thread,
which will only happen if the NBD block driver attaches the connection's
QIOChannel to the new AioContext. It did not do that after 4ddb5d2fde
("block/nbd: drop connection_co") and prior to "block/nbd: Move s->ioc
on AioContext change", which would cause an assertion failure.
To improve our chances of yielding, the NBD server is throttled to
reading 64 kB/s, and the NBD client reads 128 kB, so it should yield at
some point.
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
(cherry picked from commit 8cfbe929e8c26050f0a4580a1606a370a947d4ce)
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
---
tests/qemu-iotests/281 | 28 +++++++++++++++++++++++++---
tests/qemu-iotests/281.out | 4 ++--
2 files changed, 27 insertions(+), 5 deletions(-)
diff --git a/tests/qemu-iotests/281 b/tests/qemu-iotests/281
index 13c588be75..b2ead7f388 100755
--- a/tests/qemu-iotests/281
+++ b/tests/qemu-iotests/281
@@ -253,8 +253,9 @@ class TestYieldingAndTimers(iotests.QMPTestCase):
self.create_nbd_export()
# Simple VM with an NBD block device connected to the NBD export
- # provided by the QSD
+ # provided by the QSD, and an (initially unused) iothread
self.vm = iotests.VM()
+ self.vm.add_object('iothread,id=iothr')
self.vm.add_blockdev('nbd,node-name=nbd,server.type=unix,' +
f'server.path={self.sock},export=exp,' +
'reconnect-delay=1')
@@ -293,19 +294,40 @@ class TestYieldingAndTimers(iotests.QMPTestCase):
# thus not see the error, and so the test will pass.)
time.sleep(2)
+ def test_yield_in_iothread(self):
+ # Move the NBD node to the I/O thread; the NBD block driver should
+ # attach the connection's QIOChannel to that thread's AioContext, too
+ result = self.vm.qmp('x-blockdev-set-iothread',
+ node_name='nbd', iothread='iothr')
+ self.assert_qmp(result, 'return', {})
+
+ # Do some I/O that will be throttled by the QSD, so that the network
+ # connection hopefully will yield here. When it is resumed, it must
+ # then be resumed in the I/O thread's AioContext.
+ result = self.vm.qmp('human-monitor-command',
+ command_line='qemu-io nbd "read 0 128K"')
+ self.assert_qmp(result, 'return', '')
+
def create_nbd_export(self):
assert self.qsd is None
- # Simple NBD export of a null-co BDS
+ # Export a throttled null-co BDS: Reads are throttled (max 64 kB/s),
+ # writes are not.
self.qsd = QemuStorageDaemon(
+ '--object',
+ 'throttle-group,id=thrgr,x-bps-read=65536,x-bps-read-max=65536',
+
'--blockdev',
'null-co,node-name=null,read-zeroes=true',
+ '--blockdev',
+ 'throttle,node-name=thr,file=null,throttle-group=thrgr',
+
'--nbd-server',
f'addr.type=unix,addr.path={self.sock}',
'--export',
- 'nbd,id=exp,node-name=null,name=exp,writable=true'
+ 'nbd,id=exp,node-name=thr,name=exp,writable=true'
)
def stop_nbd_export(self):
diff --git a/tests/qemu-iotests/281.out b/tests/qemu-iotests/281.out
index 914e3737bd..3f8a935a08 100644
--- a/tests/qemu-iotests/281.out
+++ b/tests/qemu-iotests/281.out
@@ -1,5 +1,5 @@
-.....
+......
----------------------------------------------------------------------
-Ran 5 tests
+Ran 6 tests
OK
--
2.27.0
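
For orientation, the new test case boils down to the following sequence (a minimal sketch based on the diff above; it assumes the throttled 'exp' export is already being served on the socket by the QemuStorageDaemon from create_nbd_export(), and it omits the test's assertions):

    import os
    import iotests

    # Assumed: a QemuStorageDaemon is serving the throttled 'exp' export on
    # this socket, as set up by create_nbd_export() in the test class above.
    sock = os.path.join(iotests.sock_dir, 'nbd.sock')

    vm = iotests.VM()
    vm.add_object('iothread,id=iothr')
    vm.add_blockdev('nbd,node-name=nbd,server.type=unix,'
                    f'server.path={sock},export=exp,reconnect-delay=1')
    vm.launch()

    # Move the NBD node into the iothread; block/nbd must re-attach s->ioc
    # to the new AioContext ("block/nbd: Move s->ioc on AioContext change")
    # so that the read below is resumed in the right context.
    vm.qmp('x-blockdev-set-iothread', node_name='nbd', iothread='iothr')

    # A 128 KiB read through the 64 KiB/s throttle should make the
    # connection coroutine yield at least once.
    vm.qmp('human-monitor-command', command_line='qemu-io nbd "read 0 128K"')

    vm.shutdown()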

@@ -0,0 +1,174 @@
From 3d2d7a46713d362d2ff5137841e689593da976a3 Mon Sep 17 00:00:00 2001
From: Hanna Reitz <hreitz@redhat.com>
Date: Fri, 4 Feb 2022 12:10:10 +0100
Subject: [PATCH 6/8] iotests/281: Test lingering timers
RH-Author: Hanna Reitz <hreitz@redhat.com>
RH-MergeRequest: 74: block/nbd: Handle AioContext changes
RH-Commit: [4/6] d228ba3fcdfaab2d54dd5b023688a1c055cce2c2 (hreitz/qemu-kvm-c-9-s)
RH-Bugzilla: 2033626
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Acked-by: Eric Blake <eblake@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Prior to "block/nbd: Delete reconnect delay timer when done" and
"block/nbd: Delete open timer when done", both of those timers would
remain scheduled even after successfully (re-)connecting to the server,
and they would not even be deleted when the BDS is deleted.
This test constructs exactly this situation:
(1) Configure an @open-timeout, so the open timer is armed, and
(2) Configure a @reconnect-delay and trigger a reconnect situation
(which succeeds immediately), so the reconnect delay timer is armed.
Then we immediately delete the BDS, and sleep for longer than the
@open-timeout and @reconnect-delay. Prior to said patches, this caused
one (or both) of the timer CBs to access already-freed data.
Accessing freed data may or may not crash, so this test can produce
false successes, but I do not know how to show the problem in a better
or more reliable way. If you run this test on "block/nbd: Assert there
are no timers when closed" and without the fix patches mentioned above,
you should reliably see an assertion failure.
(But all other tests that use the reconnect delay timer (264 and 277)
will fail in that configuration, too; as will nbd-reconnect-on-open,
which uses the open timer.)
Remove this test from the quick group because of the two second sleep
this patch introduces.
(I decided to put this test case into 281, because the main bug this
series addresses is in the interaction of the NBD block driver and I/O
threads, which is precisely the scope of 281. The test case for that
other bug will also be put into the test class added here.
Also, excuse the test class's name, I couldn't come up with anything
better. The "yield" part will make sense two patches from now.)
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
(cherry picked from commit eaf1e85d4ddefdbd197f393fa9c5acc7ba8133b0)
Conflict:
- @open-timeout was introduced after the 6.2 release, and has not been
  backported. Consequently, there is no open_timer, and we can (and
  must) drop the respective parts of the test here.
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
---
tests/qemu-iotests/281 | 73 ++++++++++++++++++++++++++++++++++++--
tests/qemu-iotests/281.out | 4 +--
2 files changed, 73 insertions(+), 4 deletions(-)
diff --git a/tests/qemu-iotests/281 b/tests/qemu-iotests/281
index 956698083f..13c588be75 100755
--- a/tests/qemu-iotests/281
+++ b/tests/qemu-iotests/281
@@ -1,5 +1,5 @@
#!/usr/bin/env python3
-# group: rw quick
+# group: rw
#
# Test cases for blockdev + IOThread interactions
#
@@ -20,8 +20,9 @@
#
import os
+import time
import iotests
-from iotests import qemu_img
+from iotests import qemu_img, QemuStorageDaemon
image_len = 64 * 1024 * 1024
@@ -243,6 +244,74 @@ class TestBlockdevBackupAbort(iotests.QMPTestCase):
# Hangs on failure, we expect this error.
self.assert_qmp(result, 'error/class', 'GenericError')
+# Test for RHBZ#2033626
+class TestYieldingAndTimers(iotests.QMPTestCase):
+ sock = os.path.join(iotests.sock_dir, 'nbd.sock')
+ qsd = None
+
+ def setUp(self):
+ self.create_nbd_export()
+
+ # Simple VM with an NBD block device connected to the NBD export
+ # provided by the QSD
+ self.vm = iotests.VM()
+ self.vm.add_blockdev('nbd,node-name=nbd,server.type=unix,' +
+ f'server.path={self.sock},export=exp,' +
+ 'reconnect-delay=1')
+
+ self.vm.launch()
+
+ def tearDown(self):
+ self.stop_nbd_export()
+ self.vm.shutdown()
+
+ def test_timers_with_blockdev_del(self):
+ # Stop and restart the NBD server, and do some I/O on the client to
+ # trigger a reconnect and start the reconnect delay timer
+ self.stop_nbd_export()
+ self.create_nbd_export()
+
+ result = self.vm.qmp('human-monitor-command',
+ command_line='qemu-io nbd "write 0 512"')
+ self.assert_qmp(result, 'return', '')
+
+ # Reconnect is done, so the reconnect delay timer should be gone.
+ # (But there used to be a bug where it remained active, for which this
+ # is a regression test.)
+
+ # Delete the BDS to see whether the timer is gone. If it is not,
+ # it will remain active, fire later, and then access freed data.
+ # (Or, with "block/nbd: Assert there are no timers when closed"
+ # applied, the assertion added in that patch will fail.)
+ result = self.vm.qmp('blockdev-del', node_name='nbd')
+ self.assert_qmp(result, 'return', {})
+
+ # Give the timer some time to fire (it has a timeout of 1 s).
+ # (Sleeping in an iotest may ring some alarm bells, but note that if
+ # the timing is off here, the test will just always pass. If we kill
+ # the VM too early, then we just kill the timer before it can fire,
+ # thus not see the error, and so the test will pass.)
+ time.sleep(2)
+
+ def create_nbd_export(self):
+ assert self.qsd is None
+
+ # Simple NBD export of a null-co BDS
+ self.qsd = QemuStorageDaemon(
+ '--blockdev',
+ 'null-co,node-name=null,read-zeroes=true',
+
+ '--nbd-server',
+ f'addr.type=unix,addr.path={self.sock}',
+
+ '--export',
+ 'nbd,id=exp,node-name=null,name=exp,writable=true'
+ )
+
+ def stop_nbd_export(self):
+ self.qsd.stop()
+ self.qsd = None
+
if __name__ == '__main__':
iotests.main(supported_fmts=['qcow2'],
supported_protocols=['file'])
diff --git a/tests/qemu-iotests/281.out b/tests/qemu-iotests/281.out
index 89968f35d7..914e3737bd 100644
--- a/tests/qemu-iotests/281.out
+++ b/tests/qemu-iotests/281.out
@@ -1,5 +1,5 @@
-....
+.....
----------------------------------------------------------------------
-Ran 4 tests
+Ran 5 tests
OK
--
2.27.0

@@ -0,0 +1,106 @@
From 37593348e7d95580fb2b0009dcb026c07367f1f8 Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Thu, 3 Feb 2022 15:05:34 +0100
Subject: [PATCH 2/8] iotests: Test blockdev-reopen with iothreads and
throttling
RH-Author: Kevin Wolf <kwolf@redhat.com>
RH-MergeRequest: 73: block: Lock AioContext for drain_end in blockdev-reopen
RH-Commit: [2/2] d19d5fa9efa4813ece75708436891041754ab910 (kmwolf/centos-qemu-kvm)
RH-Bugzilla: 2046659
RH-Acked-by: Sergio Lopez <None>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Acked-by: Hanna Reitz <hreitz@redhat.com>
The 'throttle' block driver implements .bdrv_co_drain_end, so
blockdev-reopen will have to wait for it to complete in the polling
loop at the end of qmp_blockdev_reopen(). This makes AIO_WAIT_WHILE()
release the AioContext lock, which causes a crash if the lock hasn't
correctly been taken.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20220203140534.36522-3-kwolf@redhat.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit ee810602376125ca0e0afd6b7c715e13740978ea)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
tests/qemu-iotests/245 | 36 +++++++++++++++++++++++++++++++++---
tests/qemu-iotests/245.out | 4 ++--
2 files changed, 35 insertions(+), 5 deletions(-)
diff --git a/tests/qemu-iotests/245 b/tests/qemu-iotests/245
index 24ac43f70e..8cbed7821b 100755
--- a/tests/qemu-iotests/245
+++ b/tests/qemu-iotests/245
@@ -1138,12 +1138,13 @@ class TestBlockdevReopen(iotests.QMPTestCase):
self.assertEqual(self.get_node('hd1'), None)
self.assert_qmp(self.get_node('hd2'), 'ro', True)
- def run_test_iothreads(self, iothread_a, iothread_b, errmsg = None):
- opts = hd_opts(0)
+ def run_test_iothreads(self, iothread_a, iothread_b, errmsg = None,
+ opts_a = None, opts_b = None):
+ opts = opts_a or hd_opts(0)
result = self.vm.qmp('blockdev-add', conv_keys = False, **opts)
self.assert_qmp(result, 'return', {})
- opts2 = hd_opts(2)
+ opts2 = opts_b or hd_opts(2)
result = self.vm.qmp('blockdev-add', conv_keys = False, **opts2)
self.assert_qmp(result, 'return', {})
@@ -1194,6 +1195,35 @@ class TestBlockdevReopen(iotests.QMPTestCase):
def test_iothreads_switch_overlay(self):
self.run_test_iothreads('', 'iothread0')
+ def test_iothreads_with_throttling(self):
+ # Create a throttle-group object
+ opts = { 'qom-type': 'throttle-group', 'id': 'group0',
+ 'limits': { 'iops-total': 1000 } }
+ result = self.vm.qmp('object-add', conv_keys = False, **opts)
+ self.assert_qmp(result, 'return', {})
+
+ # Options with a throttle filter between format and protocol
+ opts = [
+ {
+ 'driver': iotests.imgfmt,
+ 'node-name': f'hd{idx}',
+ 'file' : {
+ 'node-name': f'hd{idx}-throttle',
+ 'driver': 'throttle',
+ 'throttle-group': 'group0',
+ 'file': {
+ 'driver': 'file',
+ 'node-name': f'hd{idx}-file',
+ 'filename': hd_path[idx],
+ },
+ },
+ }
+ for idx in (0, 2)
+ ]
+
+ self.run_test_iothreads('iothread0', 'iothread0', None,
+ opts[0], opts[1])
+
if __name__ == '__main__':
iotests.activate_logging()
iotests.main(supported_fmts=["qcow2"],
diff --git a/tests/qemu-iotests/245.out b/tests/qemu-iotests/245.out
index 4eced19294..a4e04a3266 100644
--- a/tests/qemu-iotests/245.out
+++ b/tests/qemu-iotests/245.out
@@ -17,8 +17,8 @@ read 1/1 bytes at offset 262152
read 1/1 bytes at offset 262160
1 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
-...............
+................
----------------------------------------------------------------------
-Ran 25 tests
+Ran 26 tests
OK
--
2.27.0

@@ -0,0 +1,92 @@
From c21502a220d107261c9a8627158f357489d86543 Mon Sep 17 00:00:00 2001
From: Hanna Reitz <hreitz@redhat.com>
Date: Fri, 4 Feb 2022 12:10:09 +0100
Subject: [PATCH 5/8] iotests.py: Add QemuStorageDaemon class
RH-Author: Hanna Reitz <hreitz@redhat.com>
RH-MergeRequest: 74: block/nbd: Handle AioContext changes
RH-Commit: [3/6] 5da1cda4d025c1bd7029ed8071b4ccf25459a878 (hreitz/qemu-kvm-c-9-s)
RH-Bugzilla: 2033626
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Acked-by: Eric Blake <eblake@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
This is a rather simple class that allows creating a QSD instance
running in the background and stopping it when no longer needed.
The __del__ handler is a safety net for when something goes so wrong in
a test that e.g. the tearDown() method is not called (e.g. setUp()
launches the QSD, but then launching a VM fails). We do not want the
QSD to continue running after the test has failed, so __del__() will
take care to kill it.
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
(cherry picked from commit 091dc7b2b5553a529bff9a7bf9ad3bc85bc5bdcd)
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
---
tests/qemu-iotests/iotests.py | 40 +++++++++++++++++++++++++++++++++++
1 file changed, 40 insertions(+)
diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index 83bfedb902..a51b5ce8cd 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -72,6 +72,8 @@
qemu_prog = os.environ.get('QEMU_PROG', 'qemu')
qemu_opts = os.environ.get('QEMU_OPTIONS', '').strip().split(' ')
+qsd_prog = os.environ.get('QSD_PROG', 'qemu-storage-daemon')
+
gdb_qemu_env = os.environ.get('GDB_OPTIONS')
qemu_gdb = []
if gdb_qemu_env:
@@ -312,6 +314,44 @@ def cmd(self, cmd):
return self._read_output()
+class QemuStorageDaemon:
+ def __init__(self, *args: str, instance_id: str = 'a'):
+ assert '--pidfile' not in args
+ self.pidfile = os.path.join(test_dir, f'qsd-{instance_id}-pid')
+ all_args = [qsd_prog] + list(args) + ['--pidfile', self.pidfile]
+
+ # Cannot use with here, we want the subprocess to stay around
+ # pylint: disable=consider-using-with
+ self._p = subprocess.Popen(all_args)
+ while not os.path.exists(self.pidfile):
+ if self._p.poll() is not None:
+ cmd = ' '.join(all_args)
+ raise RuntimeError(
+ 'qemu-storage-daemon terminated with exit code ' +
+ f'{self._p.returncode}: {cmd}')
+
+ time.sleep(0.01)
+
+ with open(self.pidfile, encoding='utf-8') as f:
+ self._pid = int(f.read().strip())
+
+ assert self._pid == self._p.pid
+
+ def stop(self, kill_signal=15):
+ self._p.send_signal(kill_signal)
+ self._p.wait()
+ self._p = None
+
+ try:
+ os.remove(self.pidfile)
+ except OSError:
+ pass
+
+ def __del__(self):
+ if self._p is not None:
+ self.stop(kill_signal=9)
+
+
def qemu_nbd(*args):
'''Run qemu-nbd in daemon mode and return the parent's exit code'''
return subprocess.call(qemu_nbd_args + ['--fork'] + list(args))
--
2.27.0
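
As a quick reference, a minimal usage sketch of the new helper, mirroring the 281 test case earlier in this commit (socket path, node and export names are taken from that test):

    import os
    import iotests
    from iotests import QemuStorageDaemon

    sock = os.path.join(iotests.sock_dir, 'nbd.sock')

    # Start a QSD in the background, exporting a null-co node over NBD.
    # __init__ waits for the pid file to appear (or for the QSD to exit
    # early, in which case it raises a RuntimeError).
    qsd = QemuStorageDaemon(
        '--blockdev', 'null-co,node-name=null,read-zeroes=true',
        '--nbd-server', f'addr.type=unix,addr.path={sock}',
        '--export', 'nbd,id=exp,node-name=null,name=exp,writable=true')

    # ... connect an NBD client (e.g. a QEMU blockdev) to the socket ...

    qsd.stop()  # SIGTERM by default; waits and removes the pid file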

@@ -144,7 +144,7 @@ Obsoletes: %{name}-block-iscsi <= %{version} \
Summary: QEMU is a machine emulator and virtualizer
Name: qemu-kvm
Version: 6.2.0
Release: 8%{?rcrel}%{?dist}%{?cc_suffix}
Release: 9%{?rcrel}%{?dist}%{?cc_suffix}
# Epoch because we pushed a qemu-1.0 package. AIUI this can't ever be dropped
# Epoch 15 used for RHEL 8
# Epoch 17 used for RHEL 9 (due to release versioning offset in RHEL 8.5)
@@ -251,6 +251,22 @@ Patch53: kvm-block-backend-prevent-dangling-BDS-pointers-across-a.patch
Patch54: kvm-iotests-stream-error-on-reset-New-test.patch
# For bz#2042481 - [aarch64] Launch guest with "default-bus-bypass-iommu=off,iommu=smmuv3" and "iommu_platform=on", guest hangs after system_reset
Patch55: kvm-hw-arm-smmuv3-Fix-device-reset.patch
# For bz#2046659 - qemu crash after execute blockdev-reopen with iothread
Patch56: kvm-block-Lock-AioContext-for-drain_end-in-blockdev-reop.patch
# For bz#2046659 - qemu crash after execute blockdev-reopen with iothread
Patch57: kvm-iotests-Test-blockdev-reopen-with-iothreads-and-thro.patch
# For bz#2033626 - Qemu core dump when start guest with nbd node or do block jobs to nbd node
Patch58: kvm-block-nbd-Delete-reconnect-delay-timer-when-done.patch
# For bz#2033626 - Qemu core dump when start guest with nbd node or do block jobs to nbd node
Patch59: kvm-block-nbd-Assert-there-are-no-timers-when-closed.patch
# For bz#2033626 - Qemu core dump when start guest with nbd node or do block jobs to nbd node
Patch60: kvm-iotests.py-Add-QemuStorageDaemon-class.patch
# For bz#2033626 - Qemu core dump when start guest with nbd node or do block jobs to nbd node
Patch61: kvm-iotests-281-Test-lingering-timers.patch
# For bz#2033626 - Qemu core dump when start guest with nbd node or do block jobs to nbd node
Patch62: kvm-block-nbd-Move-s-ioc-on-AioContext-change.patch
# For bz#2033626 - Qemu core dump when start guest with nbd node or do block jobs to nbd node
Patch63: kvm-iotests-281-Let-NBD-connection-yield-in-iothread.patch
# Source-git patches
@@ -1309,6 +1325,20 @@ useradd -r -u 107 -g qemu -G kvm -d / -s /sbin/nologin \
%endif
%changelog
* Thu Feb 17 2022 Miroslav Rezanina <mrezanin@redhat.com> - 6.2.0-9
- kvm-block-Lock-AioContext-for-drain_end-in-blockdev-reop.patch [bz#2046659]
- kvm-iotests-Test-blockdev-reopen-with-iothreads-and-thro.patch [bz#2046659]
- kvm-block-nbd-Delete-reconnect-delay-timer-when-done.patch [bz#2033626]
- kvm-block-nbd-Assert-there-are-no-timers-when-closed.patch [bz#2033626]
- kvm-iotests.py-Add-QemuStorageDaemon-class.patch [bz#2033626]
- kvm-iotests-281-Test-lingering-timers.patch [bz#2033626]
- kvm-block-nbd-Move-s-ioc-on-AioContext-change.patch [bz#2033626]
- kvm-iotests-281-Let-NBD-connection-yield-in-iothread.patch [bz#2033626]
- Resolves: bz#2046659
(qemu crash after execute blockdev-reopen with iothread)
- Resolves: bz#2033626
(Qemu core dump when start guest with nbd node or do block jobs to nbd node)
* Mon Feb 14 2022 Miroslav Rezanina <mrezanin@redhat.com> - 6.2.0-8
- kvm-numa-Enable-numa-for-SGX-EPC-sections.patch [bz#2033708]
- kvm-numa-Support-SGX-numa-in-the-monitor-and-Libvirt-int.patch [bz#2033708]

@@ -3,4 +3,8 @@ elf:
exclude_path: (.*s390-ccw.img.*)|(.*s390-netboot.img.*)
inspections:
badfuncs: off
annocheck:
- hardened: --skip-cf-protection --skip-property-note --ignore-unknown --verbose
ignore:
- /usr/share/qemu-kvm/s390-ccw.img
- /usr/share/qemu-kvm/s390-netboot.img on s390x