* Mon Feb 12 2024 Miroslav Rezanina <mrezanin@redhat.com> - 8.2.0-5

- kvm-hv-balloon-use-get_min_alignment-to-express-32-GiB-a.patch [RHEL-20341]
- kvm-memory-device-reintroduce-memory-region-size-check.patch [RHEL-20341]
- kvm-block-backend-Allow-concurrent-context-changes.patch [RHEL-24593]
- kvm-scsi-Await-request-purging.patch [RHEL-24593]
- kvm-string-output-visitor-show-structs-as-omitted.patch [RHEL-17369 RHEL-20764 RHEL-7356]
- kvm-string-output-visitor-Fix-pseudo-struct-handling.patch [RHEL-17369 RHEL-20764 RHEL-7356]
- kvm-qdev-properties-alias-all-object-class-properties.patch [RHEL-17369 RHEL-20764 RHEL-7356]
- kvm-qdev-add-IOThreadVirtQueueMappingList-property-type.patch [RHEL-17369 RHEL-20764 RHEL-7356]
- kvm-virtio-blk-add-iothread-vq-mapping-parameter.patch [RHEL-17369 RHEL-20764 RHEL-7356]
- kvm-virtio-blk-Fix-potential-nullpointer-read-access-in-.patch [RHEL-17369 RHEL-20764 RHEL-7356]
- kvm-iotests-add-filter_qmp_generated_node_ids.patch [RHEL-17369 RHEL-20764 RHEL-7356]
- kvm-iotests-port-141-to-Python-for-reliable-QMP-testing.patch [RHEL-17369 RHEL-20764 RHEL-7356]
- kvm-monitor-only-run-coroutine-commands-in-qemu_aio_cont.patch [RHEL-17369 RHEL-20764 RHEL-7356]
- kvm-virtio-blk-move-dataplane-code-into-virtio-blk.c.patch [RHEL-17369 RHEL-20764 RHEL-7356]
- kvm-virtio-blk-rename-dataplane-create-destroy-functions.patch [RHEL-17369 RHEL-20764 RHEL-7356]
- kvm-virtio-blk-rename-dataplane-to-ioeventfd.patch [RHEL-17369 RHEL-20764 RHEL-7356]
- kvm-virtio-blk-restart-s-rq-reqs-in-vq-AioContexts.patch [RHEL-17369 RHEL-20764 RHEL-7356]
- kvm-virtio-blk-tolerate-failure-to-set-BlockBackend-AioC.patch [RHEL-17369 RHEL-20764 RHEL-7356]
- kvm-virtio-blk-always-set-ioeventfd-during-startup.patch [RHEL-17369 RHEL-20764 RHEL-7356]
- kvm-tests-unit-Bump-test-replication-timeout-to-60-secon.patch [RHEL-17369 RHEL-20764 RHEL-7356]
- kvm-iotests-iothreads-stream-Use-the-right-TimeoutError.patch [RHEL-17369 RHEL-20764 RHEL-7356]
- kvm-virtio-mem-default-enable-dynamic-memslots.patch [RHEL-24045]
- Resolves: RHEL-20341
  (memory-device size alignment check invalid in QEMU 8.2)
- Resolves: RHEL-24593
  (qemu crash blk_get_aio_context(BlockBackend *): Assertion `ctx == blk->ctx' when repeatedly hotplug/unplug disk)
- Resolves: RHEL-17369
  ([nfv virt][rt][post-copy migration] qemu-kvm: ../block/qcow2.c:5263: ImageInfoSpecific *qcow2_get_specific_info(BlockDriverState *, Error **): Assertion `false' failed.)
- Resolves: RHEL-20764
  ([qemu-kvm] Enable qemu multiqueue block layer support)
- Resolves: RHEL-7356
  ([qemu-kvm] no response with QMP command device_add when repeatedly hotplug/unplug virtio disks [RHEL-9])
- Resolves: RHEL-24045
  (QEMU: default-enable dynamically using multiple memslots for virtio-mem)
Commit 6b2ce2692f (parent 1285325f55) by Miroslav Rezanina, 2024-02-12 01:37:48 -05:00
23 changed files with 5702 additions and 1 deletion

kvm-block-backend-Allow-concurrent-context-changes.patch

@@ -0,0 +1,104 @@
From afa842e9fdf6e1d6e5d5785679a22779632142bd Mon Sep 17 00:00:00 2001
From: Hanna Czenczek <hreitz@redhat.com>
Date: Fri, 2 Feb 2024 15:47:54 +0100
Subject: [PATCH 03/22] block-backend: Allow concurrent context changes
RH-Author: Hanna Czenczek <hreitz@redhat.com>
RH-MergeRequest: 222: Allow concurrent BlockBackend context changes
RH-Jira: RHEL-24593
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Commit: [1/2] 9e1b535f60f7afa94a0817dc3e71136e41631c71 (hreitz/qemu-kvm-c-9-s)
Since AioContext locks have been removed, a BlockBackend's AioContext
may really change at any time (only exception is that it is often
confined to a drained section, as noted in this patch). Therefore,
blk_get_aio_context() cannot rely on its root node's context always
matching that of the BlockBackend.
In practice, whether they match does not matter anymore anyway: Requests
can be sent to BDSs from any context, so anyone who requests the BB's
context should have no reason to require the root node to have the same
context. Therefore, we can and should remove the assertion to that
effect.
In addition, because the context can be set and queried from different
threads concurrently, it has to be accessed with atomic operations.
Buglink: https://issues.redhat.com/browse/RHEL-19381
Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
Message-ID: <20240202144755.671354-2-hreitz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit ad893672027ffe26db498947d70cde6d4f58a111)
---
block/block-backend.c | 22 +++++++++++-----------
1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/block/block-backend.c b/block/block-backend.c
index 209eb07528..9c4de79e6b 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -44,7 +44,7 @@ struct BlockBackend {
char *name;
int refcnt;
BdrvChild *root;
- AioContext *ctx;
+ AioContext *ctx; /* access with atomic operations only */
DriveInfo *legacy_dinfo; /* null unless created by drive_new() */
QTAILQ_ENTRY(BlockBackend) link; /* for block_backends */
QTAILQ_ENTRY(BlockBackend) monitor_link; /* for monitor_block_backends */
@@ -2414,22 +2414,22 @@ void blk_op_unblock_all(BlockBackend *blk, Error *reason)
}
}
+/**
+ * Return BB's current AioContext. Note that this context may change
+ * concurrently at any time, with one exception: If the BB has a root node
+ * attached, its context will only change through bdrv_try_change_aio_context(),
+ * which creates a drained section. Therefore, incrementing such a BB's
+ * in-flight counter will prevent its context from changing.
+ */
AioContext *blk_get_aio_context(BlockBackend *blk)
{
- BlockDriverState *bs;
IO_CODE();
if (!blk) {
return qemu_get_aio_context();
}
- bs = blk_bs(blk);
- if (bs) {
- AioContext *ctx = bdrv_get_aio_context(blk_bs(blk));
- assert(ctx == blk->ctx);
- }
-
- return blk->ctx;
+ return qatomic_read(&blk->ctx);
}
int blk_set_aio_context(BlockBackend *blk, AioContext *new_context,
@@ -2442,7 +2442,7 @@ int blk_set_aio_context(BlockBackend *blk, AioContext *new_context,
GLOBAL_STATE_CODE();
if (!bs) {
- blk->ctx = new_context;
+ qatomic_set(&blk->ctx, new_context);
return 0;
}
@@ -2471,7 +2471,7 @@ static void blk_root_set_aio_ctx_commit(void *opaque)
AioContext *new_context = s->new_ctx;
ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
- blk->ctx = new_context;
+ qatomic_set(&blk->ctx, new_context);
if (tgm->throttle_state) {
throttle_group_detach_aio_context(tgm);
throttle_group_attach_aio_context(tgm, new_context);
--
2.39.3
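
As a minimal standalone sketch of the pattern this patch adopts, here is the same
idea written with plain C11 atomics rather than QEMU's qatomic_read()/qatomic_set()
wrappers; the Ctx and Backend types and the helper names below are illustrative
assumptions, not QEMU API:

#include <stdatomic.h>
#include <stdio.h>

typedef struct Ctx { const char *name; } Ctx;

typedef struct Backend {
    _Atomic(Ctx *) ctx;                 /* access with atomic operations only */
} Backend;

static Ctx *backend_get_ctx(Backend *b)
{
    return atomic_load(&b->ctx);        /* cf. qatomic_read(&blk->ctx) */
}

static void backend_set_ctx(Backend *b, Ctx *new_ctx)
{
    atomic_store(&b->ctx, new_ctx);     /* cf. qatomic_set(&blk->ctx, ...) */
}

int main(void)
{
    Ctx main_ctx = { "main-loop" }, io_ctx = { "iothread0" };
    Backend blk;

    atomic_init(&blk.ctx, &main_ctx);
    printf("ctx = %s\n", backend_get_ctx(&blk)->name);

    backend_set_ctx(&blk, &io_ctx);     /* may happen from another thread */
    printf("ctx = %s\n", backend_get_ctx(&blk)->name);
    return 0;
}

The point of the change above is exactly this discipline: every reader and writer
of blk->ctx goes through an atomic access, so no lock is needed just to query the
context.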

kvm-hv-balloon-use-get_min_alignment-to-express-32-GiB-a.patch

@@ -0,0 +1,94 @@
From a5b4eec5f456b1ca3fe753e1d76f96cf3f8914ef Mon Sep 17 00:00:00 2001
From: David Hildenbrand <david@redhat.com>
Date: Wed, 17 Jan 2024 14:55:53 +0100
Subject: [PATCH 01/22] hv-balloon: use get_min_alignment() to express 32 GiB
alignment
RH-Author: David Hildenbrand <david@redhat.com>
RH-MergeRequest: 221: memory-device: reintroduce memory region size check
RH-Jira: RHEL-20341
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Acked-by: Igor Mammedov <imammedo@redhat.com>
RH-Commit: [1/2] cbe092fe549552928270892253b31cd8fe199825
https://issues.redhat.com/browse/RHEL-20341
Let's implement the get_min_alignment() callback for memory devices, and
copy for the device memory region the alignment of the host memory
region. This mimics what virtio-mem does, and allows for re-introducing
proper alignment checks for the memory region size (where we don't care
about additional device requirements) in memory device core.
Message-ID: <20240117135554.787344-2-david@redhat.com>
Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
(cherry picked from commit f77c5f38f49c71bc14cf1019ac92b0b95f572414)
Signed-off-by: David Hildenbrand <david@redhat.com>
---
hw/hyperv/hv-balloon.c | 37 +++++++++++++++++++++----------------
1 file changed, 21 insertions(+), 16 deletions(-)
diff --git a/hw/hyperv/hv-balloon.c b/hw/hyperv/hv-balloon.c
index 66f297c1d7..0829c495b0 100644
--- a/hw/hyperv/hv-balloon.c
+++ b/hw/hyperv/hv-balloon.c
@@ -1476,22 +1476,7 @@ static void hv_balloon_ensure_mr(HvBalloon *balloon)
balloon->mr = g_new0(MemoryRegion, 1);
memory_region_init(balloon->mr, OBJECT(balloon), TYPE_HV_BALLOON,
memory_region_size(hostmem_mr));
-
- /*
- * The VM can indicate an alignment up to 32 GiB. Memory device core can
- * usually only handle/guarantee 1 GiB alignment. The user will have to
- * specify a larger maxmem eventually.
- *
- * The memory device core will warn the user in case maxmem might have to be
- * increased and will fail plugging the device if there is not sufficient
- * space after alignment.
- *
- * TODO: we could do the alignment ourselves in a slightly bigger region.
- * But this feels better, although the warning might be annoying. Maybe
- * we can optimize that in the future (e.g., with such a device on the
- * cmdline place/size the device memory region differently.
- */
- balloon->mr->align = MAX(32 * GiB, memory_region_get_alignment(hostmem_mr));
+ balloon->mr->align = memory_region_get_alignment(hostmem_mr);
}
static void hv_balloon_free_mr(HvBalloon *balloon)
@@ -1653,6 +1638,25 @@ static MemoryRegion *hv_balloon_md_get_memory_region(MemoryDeviceState *md,
return balloon->mr;
}
+static uint64_t hv_balloon_md_get_min_alignment(const MemoryDeviceState *md)
+{
+ /*
+ * The VM can indicate an alignment up to 32 GiB. Memory device core can
+ * usually only handle/guarantee 1 GiB alignment. The user will have to
+ * specify a larger maxmem eventually.
+ *
+ * The memory device core will warn the user in case maxmem might have to be
+ * increased and will fail plugging the device if there is not sufficient
+ * space after alignment.
+ *
+ * TODO: we could do the alignment ourselves in a slightly bigger region.
+ * But this feels better, although the warning might be annoying. Maybe
+ * we can optimize that in the future (e.g., with such a device on the
+ * cmdline place/size the device memory region differently.
+ */
+ return 32 * GiB;
+}
+
static void hv_balloon_md_fill_device_info(const MemoryDeviceState *md,
MemoryDeviceInfo *info)
{
@@ -1765,5 +1769,6 @@ static void hv_balloon_class_init(ObjectClass *klass, void *data)
mdc->get_memory_region = hv_balloon_md_get_memory_region;
mdc->decide_memslots = hv_balloon_decide_memslots;
mdc->get_memslots = hv_balloon_get_memslots;
+ mdc->get_min_alignment = hv_balloon_md_get_min_alignment;
mdc->fill_device_info = hv_balloon_md_fill_device_info;
}
--
2.39.3
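
Conceptually, after this patch the effective address alignment for the device
becomes the larger of the backend region's natural alignment and the device's
reported minimum. A hedged sketch of that combination (effective_align() is an
illustrative stand-in, not the actual memory-device core code):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define MiB (1024ULL * 1024)
#define GiB (1024ULL * MiB)

/* Illustrative: pick the stricter of the two alignment requirements */
static uint64_t effective_align(uint64_t region_align, uint64_t min_align)
{
    return region_align > min_align ? region_align : min_align;
}

int main(void)
{
    /* hv-balloon backed by 2M THP: region align 2M, device minimum 32G */
    printf("0x%" PRIx64 "\n", effective_align(2 * MiB, 32 * GiB));  /* 32G */
    return 0;
}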

kvm-iotests-add-filter_qmp_generated_node_ids.patch

@@ -0,0 +1,49 @@
From a9be663beaace1c31d75ca353e5d3bb0657a4f6c Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Thu, 18 Jan 2024 09:48:21 -0500
Subject: [PATCH 11/22] iotests: add filter_qmp_generated_node_ids()
RH-Author: Stefan Hajnoczi <stefanha@redhat.com>
RH-MergeRequest: 219: virtio-blk: add iothread-vq-mapping parameter
RH-Jira: RHEL-17369 RHEL-20764 RHEL-7356
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Acked-by: Hanna Czenczek <hreitz@redhat.com>
RH-Commit: [7/17] 8dd20acc5b1e992294ed422e80897a9c221940dd (stefanha/centos-stream-qemu-kvm)
Add a filter function for QMP responses that contain QEMU's
automatically generated node ids. The ids change between runs and must
be masked in the reference output.
The next commit will use this new function.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20240118144823.1497953-2-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit da62b507a20510d819bcfbe8f5e573409b954006)
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
tests/qemu-iotests/iotests.py | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index e5c5798c71..ea48af4a7b 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -651,6 +651,13 @@ def _filter(_key, value):
def filter_generated_node_ids(msg):
return re.sub("#block[0-9]+", "NODE_NAME", msg)
+def filter_qmp_generated_node_ids(qmsg):
+ def _filter(_key, value):
+ if is_str(value):
+ return filter_generated_node_ids(value)
+ return value
+ return filter_qmp(qmsg, _filter)
+
def filter_img_info(output: str, filename: str,
drop_child_info: bool = True) -> str:
lines = []
--
2.39.3

kvm-iotests-iothreads-stream-Use-the-right-TimeoutError.patch

@@ -0,0 +1,49 @@
From 453da839a7d81896d03b827a95c1991a60740dc5 Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Thu, 25 Jan 2024 16:21:50 +0100
Subject: [PATCH 21/22] iotests/iothreads-stream: Use the right TimeoutError
RH-Author: Stefan Hajnoczi <stefanha@redhat.com>
RH-MergeRequest: 219: virtio-blk: add iothread-vq-mapping parameter
RH-Jira: RHEL-17369 RHEL-20764 RHEL-7356
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Acked-by: Hanna Czenczek <hreitz@redhat.com>
RH-Commit: [17/17] ca5a512ccccb668089b726d7499562d1e294c828 (stefanha/centos-stream-qemu-kvm)
Since Python 3.11 asyncio.TimeoutError is an alias for TimeoutError, but
in older versions it's not. We really have to catch asyncio.TimeoutError
here, otherwise a slow test run will fail (as has happened multiple
times on CI recently).
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20240125152150.42389-1-kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit c9c0b37ff4c11b712b21efabe8e5381d223d0295)
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
tests/qemu-iotests/tests/iothreads-stream | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/tests/qemu-iotests/tests/iothreads-stream b/tests/qemu-iotests/tests/iothreads-stream
index 503f221f16..231195b5e8 100755
--- a/tests/qemu-iotests/tests/iothreads-stream
+++ b/tests/qemu-iotests/tests/iothreads-stream
@@ -18,6 +18,7 @@
#
# Creator/Owner: Kevin Wolf <kwolf@redhat.com>
+import asyncio
import iotests
iotests.script_initialize(supported_fmts=['qcow2'],
@@ -69,6 +70,6 @@ with iotests.FilePath('disk1.img') as base1_path, \
# The test is done once both jobs are gone
if finished == 2:
break
- except TimeoutError:
+ except asyncio.TimeoutError:
pass
vm.cmd('query-jobs')
--
2.39.3

kvm-iotests-port-141-to-Python-for-reliable-QMP-testing.patch

@@ -0,0 +1,592 @@
From 70efc3bbf1f7d7b1b0c2475d9ce3bb70cc9d1cc7 Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Thu, 18 Jan 2024 09:48:22 -0500
Subject: [PATCH 12/22] iotests: port 141 to Python for reliable QMP testing
RH-Author: Stefan Hajnoczi <stefanha@redhat.com>
RH-MergeRequest: 219: virtio-blk: add iothread-vq-mapping parameter
RH-Jira: RHEL-17369 RHEL-20764 RHEL-7356
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Acked-by: Hanna Czenczek <hreitz@redhat.com>
RH-Commit: [8/17] 0783f536508916feac4b4c39e41c22c24a2e52e7 (stefanha/centos-stream-qemu-kvm)
The common.qemu bash functions allow tests to interact with the QMP
monitor of a QEMU process. I spent two days trying to update 141 when
the order of the test output changed, but found it would still fail
occassionally because printf() and QMP events race with synchronous QMP
communication.
I gave up and ported 141 to the existing Python API for QMP tests. The
Python API is less affected by the order in which QEMU prints output
because it does not print all QMP traffic by default.
The next commit changes the order in which QMP messages are received.
Make 141 reliable first.
Cc: Hanna Czenczek <hreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20240118144823.1497953-3-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 9ee2dd4c22a3639c5462b3fc20df60c005c3de64)
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
tests/qemu-iotests/141 | 307 ++++++++++++++++---------------------
tests/qemu-iotests/141.out | 200 ++++++------------------
2 files changed, 176 insertions(+), 331 deletions(-)
diff --git a/tests/qemu-iotests/141 b/tests/qemu-iotests/141
index a37030ee17..a7d3985a02 100755
--- a/tests/qemu-iotests/141
+++ b/tests/qemu-iotests/141
@@ -1,9 +1,12 @@
-#!/usr/bin/env bash
+#!/usr/bin/env python3
# group: rw auto quick
#
# Test case for ejecting BDSs with block jobs still running on them
#
-# Copyright (C) 2016 Red Hat, Inc.
+# Originally written in bash by Hanna Czenczek, ported to Python by Stefan
+# Hajnoczi.
+#
+# Copyright Red Hat
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -19,177 +22,129 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
-# creator
-owner=hreitz@redhat.com
-
-seq="$(basename $0)"
-echo "QA output created by $seq"
-
-status=1 # failure is the default!
-
-_cleanup()
-{
- _cleanup_qemu
- _cleanup_test_img
- for img in "$TEST_DIR"/{b,m,o}.$IMGFMT; do
- _rm_test_img "$img"
- done
-}
-trap "_cleanup; exit \$status" 0 1 2 3 15
-
-# get standard environment, filters and checks
-. ./common.rc
-. ./common.filter
-. ./common.qemu
-
-# Needs backing file and backing format support
-_supported_fmt qcow2 qed
-_supported_proto file
-_supported_os Linux
-
-
-test_blockjob()
-{
- _send_qemu_cmd $QEMU_HANDLE \
- "{'execute': 'blockdev-add',
- 'arguments': {
- 'node-name': 'drv0',
- 'driver': '$IMGFMT',
- 'file': {
- 'driver': 'file',
- 'filename': '$TEST_IMG'
- }}}" \
- 'return'
-
- # If "$2" is an event, we may or may not see it before the
- # {"return": {}}. Therefore, filter the {"return": {}} out both
- # here and in the next command. (Naturally, if we do not see it
- # here, we will see it before the next command can be executed,
- # so it will appear in the next _send_qemu_cmd's output.)
- _send_qemu_cmd $QEMU_HANDLE \
- "$1" \
- "$2" \
- | _filter_img_create | _filter_qmp_empty_return
-
- # We want this to return an error because the block job is still running
- _send_qemu_cmd $QEMU_HANDLE \
- "{'execute': 'blockdev-del',
- 'arguments': {'node-name': 'drv0'}}" \
- 'error' | _filter_generated_node_ids | _filter_qmp_empty_return
-
- _send_qemu_cmd $QEMU_HANDLE \
- "{'execute': 'block-job-cancel',
- 'arguments': {'device': 'job0'}}" \
- "$3"
-
- _send_qemu_cmd $QEMU_HANDLE \
- "{'execute': 'blockdev-del',
- 'arguments': {'node-name': 'drv0'}}" \
- 'return'
-}
-
-
-TEST_IMG="$TEST_DIR/b.$IMGFMT" _make_test_img 1M
-TEST_IMG="$TEST_DIR/m.$IMGFMT" _make_test_img -b "$TEST_DIR/b.$IMGFMT" -F $IMGFMT 1M
-_make_test_img -b "$TEST_DIR/m.$IMGFMT" 1M -F $IMGFMT
-
-_launch_qemu -nodefaults
-
-_send_qemu_cmd $QEMU_HANDLE \
- "{'execute': 'qmp_capabilities'}" \
- 'return'
-
-echo
-echo '=== Testing drive-backup ==='
-echo
-
-# drive-backup will not send BLOCK_JOB_READY by itself, and cancelling the job
-# will consequently result in BLOCK_JOB_CANCELLED being emitted.
-
-test_blockjob \
- "{'execute': 'drive-backup',
- 'arguments': {'job-id': 'job0',
- 'device': 'drv0',
- 'target': '$TEST_DIR/o.$IMGFMT',
- 'format': '$IMGFMT',
- 'sync': 'none'}}" \
- 'return' \
- '"status": "null"'
-
-echo
-echo '=== Testing drive-mirror ==='
-echo
-
-# drive-mirror will send BLOCK_JOB_READY basically immediately, and cancelling
-# the job will consequently result in BLOCK_JOB_COMPLETED being emitted.
-
-test_blockjob \
- "{'execute': 'drive-mirror',
- 'arguments': {'job-id': 'job0',
- 'device': 'drv0',
- 'target': '$TEST_DIR/o.$IMGFMT',
- 'format': '$IMGFMT',
- 'sync': 'none'}}" \
- 'BLOCK_JOB_READY' \
- '"status": "null"'
-
-echo
-echo '=== Testing active block-commit ==='
-echo
-
-# An active block-commit will send BLOCK_JOB_READY basically immediately, and
-# cancelling the job will consequently result in BLOCK_JOB_COMPLETED being
-# emitted.
-
-test_blockjob \
- "{'execute': 'block-commit',
- 'arguments': {'job-id': 'job0', 'device': 'drv0'}}" \
- 'BLOCK_JOB_READY' \
- '"status": "null"'
-
-echo
-echo '=== Testing non-active block-commit ==='
-echo
-
-# Give block-commit something to work on, otherwise it would be done
-# immediately, send a BLOCK_JOB_COMPLETED and ejecting the BDS would work just
-# fine without the block job still running.
-
-$QEMU_IO -c 'write 0 1M' "$TEST_DIR/m.$IMGFMT" | _filter_qemu_io
-
-test_blockjob \
- "{'execute': 'block-commit',
- 'arguments': {'job-id': 'job0',
- 'device': 'drv0',
- 'top': '$TEST_DIR/m.$IMGFMT',
- 'speed': 1}}" \
- 'return' \
- '"status": "null"'
-
-echo
-echo '=== Testing block-stream ==='
-echo
-
-# Give block-stream something to work on, otherwise it would be done
-# immediately, send a BLOCK_JOB_COMPLETED and ejecting the BDS would work just
-# fine without the block job still running.
-
-$QEMU_IO -c 'write 0 1M' "$TEST_DIR/b.$IMGFMT" | _filter_qemu_io
-
-# With some data to stream (and @speed set to 1), block-stream will not complete
-# until we send the block-job-cancel command.
-
-test_blockjob \
- "{'execute': 'block-stream',
- 'arguments': {'job-id': 'job0',
- 'device': 'drv0',
- 'speed': 1}}" \
- 'return' \
- '"status": "null"'
-
-_cleanup_qemu
-
-# success, all done
-echo "*** done"
-rm -f $seq.full
-status=0
+import iotests
+
+# Common filters to mask values that vary in the test output
+QMP_FILTERS = [iotests.filter_qmp_testfiles, \
+ iotests.filter_qmp_imgfmt]
+
+
+class TestCase:
+ def __init__(self, name, vm, image_path, cancel_event):
+ self.name = name
+ self.vm = vm
+ self.image_path = image_path
+ self.cancel_event = cancel_event
+
+ def __enter__(self):
+ iotests.log(f'=== Testing {self.name} ===')
+ self.vm.qmp_log('blockdev-add', \
+ node_name='drv0', \
+ driver=iotests.imgfmt, \
+ file={'driver': 'file', 'filename': self.image_path}, \
+ filters=QMP_FILTERS)
+
+ def __exit__(self, *exc_details):
+ # This is expected to fail because the job still exists
+ self.vm.qmp_log('blockdev-del', node_name='drv0', \
+ filters=[iotests.filter_qmp_generated_node_ids])
+
+ self.vm.qmp_log('block-job-cancel', device='job0')
+ event = self.vm.event_wait(self.cancel_event)
+ iotests.log(event, filters=[iotests.filter_qmp_event])
+
+ # This time it succeeds
+ self.vm.qmp_log('blockdev-del', node_name='drv0')
+
+ # Separate test cases in output
+ iotests.log('')
+
+
+def main() -> None:
+ with iotests.FilePath('bottom', 'middle', 'top', 'target') as \
+ (bottom_path, middle_path, top_path, target_path), \
+ iotests.VM() as vm:
+
+ iotests.log('Creating bottom <- middle <- top backing file chain...')
+ IMAGE_SIZE='1M'
+ iotests.qemu_img_create('-f', iotests.imgfmt, bottom_path, IMAGE_SIZE)
+ iotests.qemu_img_create('-f', iotests.imgfmt, \
+ '-F', iotests.imgfmt, \
+ '-b', bottom_path, \
+ middle_path, \
+ IMAGE_SIZE)
+ iotests.qemu_img_create('-f', iotests.imgfmt, \
+ '-F', iotests.imgfmt, \
+ '-b', middle_path, \
+ top_path, \
+ IMAGE_SIZE)
+
+ iotests.log('Starting VM...')
+ vm.add_args('-nodefaults')
+ vm.launch()
+
+ # drive-backup will not send BLOCK_JOB_READY by itself, and cancelling
+ # the job will consequently result in BLOCK_JOB_CANCELLED being
+ # emitted.
+ with TestCase('drive-backup', vm, top_path, 'BLOCK_JOB_CANCELLED'):
+ vm.qmp_log('drive-backup', \
+ job_id='job0', \
+ device='drv0', \
+ target=target_path, \
+ format=iotests.imgfmt, \
+ sync='none', \
+ filters=QMP_FILTERS)
+
+ # drive-mirror will send BLOCK_JOB_READY basically immediately, and
+ # cancelling the job will consequently result in BLOCK_JOB_COMPLETED
+ # being emitted.
+ with TestCase('drive-mirror', vm, top_path, 'BLOCK_JOB_COMPLETED'):
+ vm.qmp_log('drive-mirror', \
+ job_id='job0', \
+ device='drv0', \
+ target=target_path, \
+ format=iotests.imgfmt, \
+ sync='none', \
+ filters=QMP_FILTERS)
+ event = vm.event_wait('BLOCK_JOB_READY')
+ assert event is not None # silence mypy
+ iotests.log(event, filters=[iotests.filter_qmp_event])
+
+ # An active block-commit will send BLOCK_JOB_READY basically
+ # immediately, and cancelling the job will consequently result in
+ # BLOCK_JOB_COMPLETED being emitted.
+ with TestCase('active block-commit', vm, top_path, \
+ 'BLOCK_JOB_COMPLETED'):
+ vm.qmp_log('block-commit', \
+ job_id='job0', \
+ device='drv0')
+ event = vm.event_wait('BLOCK_JOB_READY')
+ assert event is not None # silence mypy
+ iotests.log(event, filters=[iotests.filter_qmp_event])
+
+ # Give block-commit something to work on, otherwise it would be done
+ # immediately, send a BLOCK_JOB_COMPLETED and ejecting the BDS would
+ # work just fine without the block job still running.
+ iotests.qemu_io(middle_path, '-c', f'write 0 {IMAGE_SIZE}')
+ with TestCase('non-active block-commit', vm, top_path, \
+ 'BLOCK_JOB_CANCELLED'):
+ vm.qmp_log('block-commit', \
+ job_id='job0', \
+ device='drv0', \
+ top=middle_path, \
+ speed=1, \
+ filters=[iotests.filter_qmp_testfiles])
+
+ # Give block-stream something to work on, otherwise it would be done
+ # immediately, send a BLOCK_JOB_COMPLETED and ejecting the BDS would
+ # work just fine without the block job still running.
+ iotests.qemu_io(bottom_path, '-c', f'write 0 {IMAGE_SIZE}')
+ with TestCase('block-stream', vm, top_path, 'BLOCK_JOB_CANCELLED'):
+ vm.qmp_log('block-stream', \
+ job_id='job0', \
+ device='drv0', \
+ speed=1)
+
+if __name__ == '__main__':
+ iotests.script_main(main, supported_fmts=['qcow2', 'qed'],
+ supported_protocols=['file'])
diff --git a/tests/qemu-iotests/141.out b/tests/qemu-iotests/141.out
index 63203d9944..91b7ba50af 100644
--- a/tests/qemu-iotests/141.out
+++ b/tests/qemu-iotests/141.out
@@ -1,179 +1,69 @@
-QA output created by 141
-Formatting 'TEST_DIR/b.IMGFMT', fmt=IMGFMT size=1048576
-Formatting 'TEST_DIR/m.IMGFMT', fmt=IMGFMT size=1048576 backing_file=TEST_DIR/b.IMGFMT backing_fmt=IMGFMT
-Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1048576 backing_file=TEST_DIR/m.IMGFMT backing_fmt=IMGFMT
-{'execute': 'qmp_capabilities'}
-{"return": {}}
-
+Creating bottom <- middle <- top backing file chain...
+Starting VM...
=== Testing drive-backup ===
-
-{'execute': 'blockdev-add',
- 'arguments': {
- 'node-name': 'drv0',
- 'driver': 'IMGFMT',
- 'file': {
- 'driver': 'file',
- 'filename': 'TEST_DIR/t.IMGFMT'
- }}}
-{"return": {}}
-{'execute': 'drive-backup',
-'arguments': {'job-id': 'job0',
-'device': 'drv0',
-'target': 'TEST_DIR/o.IMGFMT',
-'format': 'IMGFMT',
-'sync': 'none'}}
-Formatting 'TEST_DIR/o.IMGFMT', fmt=IMGFMT size=1048576 backing_file=TEST_DIR/t.IMGFMT backing_fmt=IMGFMT
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "job0"}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job0"}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "paused", "id": "job0"}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job0"}}
-{'execute': 'blockdev-del',
- 'arguments': {'node-name': 'drv0'}}
+{"execute": "blockdev-add", "arguments": {"driver": "IMGFMT", "file": {"driver": "file", "filename": "TEST_DIR/PID-top"}, "node-name": "drv0"}}
+{"return": {}}
+{"execute": "drive-backup", "arguments": {"device": "drv0", "format": "IMGFMT", "job-id": "job0", "sync": "none", "target": "TEST_DIR/PID-target"}}
+{"return": {}}
+{"execute": "blockdev-del", "arguments": {"node-name": "drv0"}}
{"error": {"class": "GenericError", "desc": "Node 'drv0' is busy: node is used as backing hd of 'NODE_NAME'"}}
-{'execute': 'block-job-cancel',
- 'arguments': {'device': 'job0'}}
+{"execute": "block-job-cancel", "arguments": {"device": "job0"}}
{"return": {}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "job0"}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_CANCELLED", "data": {"device": "job0", "len": 1048576, "offset": 0, "speed": 0, "type": "backup"}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "job0"}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "job0"}}
-{'execute': 'blockdev-del',
- 'arguments': {'node-name': 'drv0'}}
+{"data": {"device": "job0", "len": 1048576, "offset": 0, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_CANCELLED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"execute": "blockdev-del", "arguments": {"node-name": "drv0"}}
{"return": {}}
=== Testing drive-mirror ===
-
-{'execute': 'blockdev-add',
- 'arguments': {
- 'node-name': 'drv0',
- 'driver': 'IMGFMT',
- 'file': {
- 'driver': 'file',
- 'filename': 'TEST_DIR/t.IMGFMT'
- }}}
-{"return": {}}
-{'execute': 'drive-mirror',
-'arguments': {'job-id': 'job0',
-'device': 'drv0',
-'target': 'TEST_DIR/o.IMGFMT',
-'format': 'IMGFMT',
-'sync': 'none'}}
-Formatting 'TEST_DIR/o.IMGFMT', fmt=IMGFMT size=1048576 backing_file=TEST_DIR/t.IMGFMT backing_fmt=IMGFMT
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "job0"}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job0"}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "job0"}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_READY", "data": {"device": "job0", "len": 0, "offset": 0, "speed": 0, "type": "mirror"}}
-{'execute': 'blockdev-del',
- 'arguments': {'node-name': 'drv0'}}
+{"execute": "blockdev-add", "arguments": {"driver": "IMGFMT", "file": {"driver": "file", "filename": "TEST_DIR/PID-top"}, "node-name": "drv0"}}
+{"return": {}}
+{"execute": "drive-mirror", "arguments": {"device": "drv0", "format": "IMGFMT", "job-id": "job0", "sync": "none", "target": "TEST_DIR/PID-target"}}
+{"return": {}}
+{"data": {"device": "job0", "len": 0, "offset": 0, "speed": 0, "type": "mirror"}, "event": "BLOCK_JOB_READY", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"execute": "blockdev-del", "arguments": {"node-name": "drv0"}}
{"error": {"class": "GenericError", "desc": "Node 'drv0' is busy: block device is in use by block job: mirror"}}
-{'execute': 'block-job-cancel',
- 'arguments': {'device': 'job0'}}
+{"execute": "block-job-cancel", "arguments": {"device": "job0"}}
{"return": {}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "job0"}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "job0"}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "job0", "len": 0, "offset": 0, "speed": 0, "type": "mirror"}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "job0"}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "job0"}}
-{'execute': 'blockdev-del',
- 'arguments': {'node-name': 'drv0'}}
+{"data": {"device": "job0", "len": 0, "offset": 0, "speed": 0, "type": "mirror"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"execute": "blockdev-del", "arguments": {"node-name": "drv0"}}
{"return": {}}
=== Testing active block-commit ===
-
-{'execute': 'blockdev-add',
- 'arguments': {
- 'node-name': 'drv0',
- 'driver': 'IMGFMT',
- 'file': {
- 'driver': 'file',
- 'filename': 'TEST_DIR/t.IMGFMT'
- }}}
-{"return": {}}
-{'execute': 'block-commit',
-'arguments': {'job-id': 'job0', 'device': 'drv0'}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "job0"}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job0"}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "job0"}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_READY", "data": {"device": "job0", "len": 0, "offset": 0, "speed": 0, "type": "commit"}}
-{'execute': 'blockdev-del',
- 'arguments': {'node-name': 'drv0'}}
+{"execute": "blockdev-add", "arguments": {"driver": "IMGFMT", "file": {"driver": "file", "filename": "TEST_DIR/PID-top"}, "node-name": "drv0"}}
+{"return": {}}
+{"execute": "block-commit", "arguments": {"device": "drv0", "job-id": "job0"}}
+{"return": {}}
+{"data": {"device": "job0", "len": 0, "offset": 0, "speed": 0, "type": "commit"}, "event": "BLOCK_JOB_READY", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"execute": "blockdev-del", "arguments": {"node-name": "drv0"}}
{"error": {"class": "GenericError", "desc": "Node 'drv0' is busy: block device is in use by block job: commit"}}
-{'execute': 'block-job-cancel',
- 'arguments': {'device': 'job0'}}
+{"execute": "block-job-cancel", "arguments": {"device": "job0"}}
{"return": {}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "job0"}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "job0"}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "job0", "len": 0, "offset": 0, "speed": 0, "type": "commit"}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "job0"}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "job0"}}
-{'execute': 'blockdev-del',
- 'arguments': {'node-name': 'drv0'}}
+{"data": {"device": "job0", "len": 0, "offset": 0, "speed": 0, "type": "commit"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"execute": "blockdev-del", "arguments": {"node-name": "drv0"}}
{"return": {}}
=== Testing non-active block-commit ===
-
-wrote 1048576/1048576 bytes at offset 0
-1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
-{'execute': 'blockdev-add',
- 'arguments': {
- 'node-name': 'drv0',
- 'driver': 'IMGFMT',
- 'file': {
- 'driver': 'file',
- 'filename': 'TEST_DIR/t.IMGFMT'
- }}}
-{"return": {}}
-{'execute': 'block-commit',
-'arguments': {'job-id': 'job0',
-'device': 'drv0',
-'top': 'TEST_DIR/m.IMGFMT',
-'speed': 1}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "job0"}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job0"}}
-{'execute': 'blockdev-del',
- 'arguments': {'node-name': 'drv0'}}
+{"execute": "blockdev-add", "arguments": {"driver": "IMGFMT", "file": {"driver": "file", "filename": "TEST_DIR/PID-top"}, "node-name": "drv0"}}
+{"return": {}}
+{"execute": "block-commit", "arguments": {"device": "drv0", "job-id": "job0", "speed": 1, "top": "TEST_DIR/PID-middle"}}
+{"return": {}}
+{"execute": "blockdev-del", "arguments": {"node-name": "drv0"}}
{"error": {"class": "GenericError", "desc": "Node 'drv0' is busy: block device is in use by block job: commit"}}
-{'execute': 'block-job-cancel',
- 'arguments': {'device': 'job0'}}
+{"execute": "block-job-cancel", "arguments": {"device": "job0"}}
{"return": {}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "job0"}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_CANCELLED", "data": {"device": "job0", "len": 1048576, "offset": 524288, "speed": 1, "type": "commit"}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "job0"}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "job0"}}
-{'execute': 'blockdev-del',
- 'arguments': {'node-name': 'drv0'}}
+{"data": {"device": "job0", "len": 1048576, "offset": 524288, "speed": 1, "type": "commit"}, "event": "BLOCK_JOB_CANCELLED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"execute": "blockdev-del", "arguments": {"node-name": "drv0"}}
{"return": {}}
=== Testing block-stream ===
-
-wrote 1048576/1048576 bytes at offset 0
-1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
-{'execute': 'blockdev-add',
- 'arguments': {
- 'node-name': 'drv0',
- 'driver': 'IMGFMT',
- 'file': {
- 'driver': 'file',
- 'filename': 'TEST_DIR/t.IMGFMT'
- }}}
-{"return": {}}
-{'execute': 'block-stream',
-'arguments': {'job-id': 'job0',
-'device': 'drv0',
-'speed': 1}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "job0"}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job0"}}
-{'execute': 'blockdev-del',
- 'arguments': {'node-name': 'drv0'}}
+{"execute": "blockdev-add", "arguments": {"driver": "IMGFMT", "file": {"driver": "file", "filename": "TEST_DIR/PID-top"}, "node-name": "drv0"}}
+{"return": {}}
+{"execute": "block-stream", "arguments": {"device": "drv0", "job-id": "job0", "speed": 1}}
+{"return": {}}
+{"execute": "blockdev-del", "arguments": {"node-name": "drv0"}}
{"error": {"class": "GenericError", "desc": "Node 'drv0' is busy: block device is in use by block job: stream"}}
-{'execute': 'block-job-cancel',
- 'arguments': {'device': 'job0'}}
+{"execute": "block-job-cancel", "arguments": {"device": "job0"}}
{"return": {}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "job0"}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_CANCELLED", "data": {"device": "job0", "len": 1048576, "offset": 524288, "speed": 1, "type": "stream"}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "job0"}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "job0"}}
-{'execute': 'blockdev-del',
- 'arguments': {'node-name': 'drv0'}}
+{"data": {"device": "job0", "len": 1048576, "offset": 524288, "speed": 1, "type": "stream"}, "event": "BLOCK_JOB_CANCELLED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"execute": "blockdev-del", "arguments": {"node-name": "drv0"}}
{"return": {}}
-*** done
+
--
2.39.3

kvm-memory-device-reintroduce-memory-region-size-check.patch

@@ -0,0 +1,112 @@
From 633c6a52ac88526534466ae311522fe5447bcf91 Mon Sep 17 00:00:00 2001
From: David Hildenbrand <david@redhat.com>
Date: Wed, 17 Jan 2024 14:55:54 +0100
Subject: [PATCH 02/22] memory-device: reintroduce memory region size check
RH-Author: David Hildenbrand <david@redhat.com>
RH-MergeRequest: 221: memory-device: reintroduce memory region size check
RH-Jira: RHEL-20341
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Acked-by: Igor Mammedov <imammedo@redhat.com>
RH-Commit: [2/2] e9ff2339b0c07c3f48f5834c9c80cd6d4cbc8f71
JIRA: https://issues.redhat.com/browse/RHEL-20341
We used to check that the memory region size is a multiple of the overall
requested address alignment for the device memory address.
We removed that check, because there are cases (i.e., hv-balloon) where
devices unconditionally request an address alignment that has a very large
alignment (i.e., 32 GiB), but the actual memory device size might not be
a multiple of that alignment.
However, this change:
(a) allows for some practically impossible DIMM sizes, like "1GB+1 byte".
(b) allows for DIMMs that partially cover hugetlb pages, previously
reported in [1].
Both scenarios don't make any sense: we might even waste memory.
So let's reintroduce that check, but only check that the
memory region size is a multiple of the memory region alignment (i.e.,
page size, huge page size), but not any additional memory device
requirements communicated using md->get_min_alignment().
The following examples now fail again as expected:
(a) 1M with 2M THP
qemu-system-x86_64 -m 4g,maxmem=16g,slots=1 -S -nodefaults -nographic \
-object memory-backend-ram,id=mem1,size=1M \
-device pc-dimm,id=dimm1,memdev=mem1
-> backend memory size must be multiple of 0x200000
(b) 1G+1byte
qemu-system-x86_64 -m 4g,maxmem=16g,slots=1 -S -nodefaults -nographic \
-object memory-backend-ram,id=mem1,size=1073741825B \
-device pc-dimm,id=dimm1,memdev=mem1
-> backend memory size must be multiple of 0x200000
(c) Unaligned hugetlb size (2M)
qemu-system-x86_64 -m 4g,maxmem=16g,slots=1 -S -nodefaults -nographic \
-object memory-backend-file,id=mem1,mem-path=/dev/hugepages/tmp,size=511M \
-device pc-dimm,id=dimm1,memdev=mem1
-> backend memory size must be multiple of 0x200000
(d) Unaligned hugetlb size (1G)
qemu-system-x86_64 -m 4g,maxmem=16g,slots=1 -S -nodefaults -nographic \
-object memory-backend-file,id=mem1,mem-path=/dev/hugepages1G/tmp,size=2047M \
-device pc-dimm,id=dimm1,memdev=mem1
-> backend memory size must be multiple of 0x40000000
Note that this fix depends on a hv-balloon change to communicate its
additional alignment requirements using get_min_alignment() instead of
through the memory region.
[1] https://lkml.kernel.org/r/f77d641d500324525ac036fe1827b3070de75fc1.1701088320.git.mprivozn@redhat.com
Message-ID: <20240117135554.787344-3-david@redhat.com>
Reported-by: Zhenyu Zhang <zhenyzha@redhat.com>
Reported-by: Michal Privoznik <mprivozn@redhat.com>
Fixes: eb1b7c4bd413 ("memory-device: Drop size alignment check")
Tested-by: Zhenyu Zhang <zhenyzha@redhat.com>
Tested-by: Mario Casquero <mcasquer@redhat.com>
Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
(cherry picked from commit 540a1abbf0b243e4cfb4333c5d30a041f7080ba4)
Signed-off-by: David Hildenbrand <david@redhat.com>
---
hw/mem/memory-device.c | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/hw/mem/memory-device.c b/hw/mem/memory-device.c
index a1b1af26bc..e098585cda 100644
--- a/hw/mem/memory-device.c
+++ b/hw/mem/memory-device.c
@@ -374,6 +374,20 @@ void memory_device_pre_plug(MemoryDeviceState *md, MachineState *ms,
goto out;
}
+ /*
+ * We always want the memory region size to be multiples of the memory
+ * region alignment: for example, DIMMs with 1G+1byte size don't make
+ * any sense. Note that we don't check that the size is multiples
+ * of any additional alignment requirements the memory device might
+ * have when it comes to the address in physical address space.
+ */
+ if (!QEMU_IS_ALIGNED(memory_region_size(mr),
+ memory_region_get_alignment(mr))) {
+ error_setg(errp, "backend memory size must be multiple of 0x%"
+ PRIx64, memory_region_get_alignment(mr));
+ return;
+ }
+
if (legacy_align) {
align = *legacy_align;
} else {
--
2.39.3
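
The reintroduced check is a power-of-two mask test. A small self-contained
example of the arithmetic behind the failing cases listed above (IS_ALIGNED
mirrors the shape of QEMU's QEMU_IS_ALIGNED macro; the rest is illustrative):

#include <stdint.h>
#include <stdio.h>

/* align must be a power of two, as page and huge page sizes are */
#define IS_ALIGNED(sz, align) (((sz) & ((align) - 1)) == 0)

int main(void)
{
    uint64_t thp = 0x200000;  /* 2M transparent huge page */

    printf("%d\n", IS_ALIGNED(1 * 1024 * 1024ULL, thp));    /* 0: case (a) */
    printf("%d\n", IS_ALIGNED(1073741825ULL, thp));         /* 0: case (b) */
    printf("%d\n", IS_ALIGNED(511 * 1024 * 1024ULL, thp));  /* 0: case (c) */
    printf("%d\n", IS_ALIGNED(4 * 1024 * 1024ULL, thp));    /* 1: accepted */
    return 0;
}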

kvm-monitor-only-run-coroutine-commands-in-qemu_aio_cont.patch
(file diff suppressed because it is too large)

kvm-qdev-add-IOThreadVirtQueueMappingList-property-type.patch

@@ -0,0 +1,167 @@
From f1e82fe5076b4030d385dfa49b8284899386114d Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Wed, 20 Dec 2023 08:47:54 -0500
Subject: [PATCH 08/22] qdev: add IOThreadVirtQueueMappingList property type
RH-Author: Stefan Hajnoczi <stefanha@redhat.com>
RH-MergeRequest: 219: virtio-blk: add iothread-vq-mapping parameter
RH-Jira: RHEL-17369 RHEL-20764 RHEL-7356
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Acked-by: Hanna Czenczek <hreitz@redhat.com>
RH-Commit: [4/17] 817aa1339da8ed3814730473342ba045e66d5b51 (stefanha/centos-stream-qemu-kvm)
virtio-blk and virtio-scsi devices will need a way to specify the
mapping between IOThreads and virtqueues. At the moment all virtqueues
are assigned to a single IOThread or the main loop. This single thread
can be a CPU bottleneck, so it is necessary to allow finer-grained
assignment to spread the load.
Introduce DEFINE_PROP_IOTHREAD_VQ_MAPPING_LIST() so devices can take a
parameter that maps virtqueues to IOThreads. The command-line syntax for
this new property is as follows:
--device '{"driver":"foo","iothread-vq-mapping":[{"iothread":"iothread0","vqs":[0,1,2]},...]}'
IOThreads are specified by name and virtqueues are specified by 0-based
index.
It will be common to simply assign virtqueues round-robin across a set
of IOThreads. A convenient syntax that does not require specifying
individual virtqueue indices is available:
--device '{"driver":"foo","iothread-vq-mapping":[{"iothread":"iothread0"},{"iothread":"iothread1"},...]}'
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20231220134755.814917-4-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit cf03a152c5d749fd0083bfe540df9524f1d2ff1d)
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/core/qdev-properties-system.c | 46 +++++++++++++++++++++++++++++
include/hw/qdev-properties-system.h | 5 ++++
qapi/virtio.json | 29 ++++++++++++++++++
3 files changed, 80 insertions(+)
diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
index 73cced4626..1a396521d5 100644
--- a/hw/core/qdev-properties-system.c
+++ b/hw/core/qdev-properties-system.c
@@ -18,6 +18,7 @@
#include "qapi/qapi-types-block.h"
#include "qapi/qapi-types-machine.h"
#include "qapi/qapi-types-migration.h"
+#include "qapi/qapi-visit-virtio.h"
#include "qapi/qmp/qerror.h"
#include "qemu/ctype.h"
#include "qemu/cutils.h"
@@ -1160,3 +1161,48 @@ const PropertyInfo qdev_prop_cpus390entitlement = {
.set = qdev_propinfo_set_enum,
.set_default_value = qdev_propinfo_set_default_value_enum,
};
+
+/* --- IOThreadVirtQueueMappingList --- */
+
+static void get_iothread_vq_mapping_list(Object *obj, Visitor *v,
+ const char *name, void *opaque, Error **errp)
+{
+ IOThreadVirtQueueMappingList **prop_ptr =
+ object_field_prop_ptr(obj, opaque);
+
+ visit_type_IOThreadVirtQueueMappingList(v, name, prop_ptr, errp);
+}
+
+static void set_iothread_vq_mapping_list(Object *obj, Visitor *v,
+ const char *name, void *opaque, Error **errp)
+{
+ IOThreadVirtQueueMappingList **prop_ptr =
+ object_field_prop_ptr(obj, opaque);
+ IOThreadVirtQueueMappingList *list;
+
+ if (!visit_type_IOThreadVirtQueueMappingList(v, name, &list, errp)) {
+ return;
+ }
+
+ qapi_free_IOThreadVirtQueueMappingList(*prop_ptr);
+ *prop_ptr = list;
+}
+
+static void release_iothread_vq_mapping_list(Object *obj,
+ const char *name, void *opaque)
+{
+ IOThreadVirtQueueMappingList **prop_ptr =
+ object_field_prop_ptr(obj, opaque);
+
+ qapi_free_IOThreadVirtQueueMappingList(*prop_ptr);
+ *prop_ptr = NULL;
+}
+
+const PropertyInfo qdev_prop_iothread_vq_mapping_list = {
+ .name = "IOThreadVirtQueueMappingList",
+ .description = "IOThread virtqueue mapping list [{\"iothread\":\"<id>\", "
+ "\"vqs\":[1,2,3,...]},...]",
+ .get = get_iothread_vq_mapping_list,
+ .set = set_iothread_vq_mapping_list,
+ .release = release_iothread_vq_mapping_list,
+};
diff --git a/include/hw/qdev-properties-system.h b/include/hw/qdev-properties-system.h
index 91f7a2452d..06c359c190 100644
--- a/include/hw/qdev-properties-system.h
+++ b/include/hw/qdev-properties-system.h
@@ -24,6 +24,7 @@ extern const PropertyInfo qdev_prop_off_auto_pcibar;
extern const PropertyInfo qdev_prop_pcie_link_speed;
extern const PropertyInfo qdev_prop_pcie_link_width;
extern const PropertyInfo qdev_prop_cpus390entitlement;
+extern const PropertyInfo qdev_prop_iothread_vq_mapping_list;
#define DEFINE_PROP_PCI_DEVFN(_n, _s, _f, _d) \
DEFINE_PROP_SIGNED(_n, _s, _f, _d, qdev_prop_pci_devfn, int32_t)
@@ -82,4 +83,8 @@ extern const PropertyInfo qdev_prop_cpus390entitlement;
DEFINE_PROP_SIGNED(_n, _s, _f, _d, qdev_prop_cpus390entitlement, \
CpuS390Entitlement)
+#define DEFINE_PROP_IOTHREAD_VQ_MAPPING_LIST(_name, _state, _field) \
+ DEFINE_PROP(_name, _state, _field, qdev_prop_iothread_vq_mapping_list, \
+ IOThreadVirtQueueMappingList *)
+
#endif
diff --git a/qapi/virtio.json b/qapi/virtio.json
index e6dcee7b83..19c7c36e36 100644
--- a/qapi/virtio.json
+++ b/qapi/virtio.json
@@ -928,3 +928,32 @@
'data': { 'path': 'str', 'queue': 'uint16', '*index': 'uint16' },
'returns': 'VirtioQueueElement',
'features': [ 'unstable' ] }
+
+##
+# @IOThreadVirtQueueMapping:
+#
+# Describes the subset of virtqueues assigned to an IOThread.
+#
+# @iothread: the id of IOThread object
+#
+# @vqs: an optional array of virtqueue indices that will be handled by this
+# IOThread. When absent, virtqueues are assigned round-robin across all
+# IOThreadVirtQueueMappings provided. Either all IOThreadVirtQueueMappings
+# must have @vqs or none of them must have it.
+#
+# Since: 9.0
+##
+
+{ 'struct': 'IOThreadVirtQueueMapping',
+ 'data': { 'iothread': 'str', '*vqs': ['uint16'] } }
+
+##
+# @DummyVirtioForceArrays:
+#
+# Not used by QMP; hack to let us use IOThreadVirtQueueMappingList internally
+#
+# Since: 9.0
+##
+
+{ 'struct': 'DummyVirtioForceArrays',
+ 'data': { 'unused-iothread-vq-mapping': ['IOThreadVirtQueueMapping'] } }
--
2.39.3
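
When the "vqs" arrays are omitted (the second command-line form shown in the
message above), virtqueues are spread round-robin across the listed IOThreads.
A hedged sketch of that assignment, with illustrative names rather than the
actual virtio-blk validation code:

#include <stdio.h>

int main(void)
{
    /* mapping entries as they would appear in iothread-vq-mapping */
    const char *iothreads[] = { "iothread0", "iothread1", "iothread2" };
    int num_iothreads = 3;
    int num_vqs = 8;

    /* virtqueue i is handled by mapping entry i modulo the entry count */
    for (int i = 0; i < num_vqs; i++) {
        printf("vq %d -> %s\n", i, iothreads[i % num_iothreads]);
    }
    return 0;
}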

kvm-qdev-properties-alias-all-object-class-properties.patch

@@ -0,0 +1,85 @@
From 4251aab5b2beb68d1800cd4a329361ff6f57c430 Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Wed, 20 Dec 2023 08:47:52 -0500
Subject: [PATCH 07/22] qdev-properties: alias all object class properties
RH-Author: Stefan Hajnoczi <stefanha@redhat.com>
RH-MergeRequest: 219: virtio-blk: add iothread-vq-mapping parameter
RH-Jira: RHEL-17369 RHEL-20764 RHEL-7356
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Acked-by: Hanna Czenczek <hreitz@redhat.com>
RH-Commit: [3/17] bc5d0aafe4645dacf9277904a2b20760d6e676e1 (stefanha/centos-stream-qemu-kvm)
qdev_alias_all_properties() aliases a DeviceState's qdev properties onto
an Object. This is used for VirtioPCIProxy types so that --device
virtio-blk-pci has properties of its embedded --device virtio-blk-device
object.
Currently this function is implemented using qdev properties. Change the
function to use QOM object class properties instead. This works because
qdev properties create QOM object class properties, but it also catches
any QOM object class-only properties that have no qdev properties.
This change ensures that properties of devices are shown with --device
foo,\? even if they are QOM object class properties.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20231220134755.814917-2-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 350147a871a545ab56b4a1062c8485635d9ffc24)
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/core/qdev-properties.c | 18 ++++++++++--------
include/hw/qdev-properties.h | 4 ++--
2 files changed, 12 insertions(+), 10 deletions(-)
diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index 840006e953..7d6fa726fd 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -1076,16 +1076,18 @@ void device_class_set_props(DeviceClass *dc, Property *props)
void qdev_alias_all_properties(DeviceState *target, Object *source)
{
ObjectClass *class;
- Property *prop;
+ ObjectPropertyIterator iter;
+ ObjectProperty *prop;
class = object_get_class(OBJECT(target));
- do {
- DeviceClass *dc = DEVICE_CLASS(class);
- for (prop = dc->props_; prop && prop->name; prop++) {
- object_property_add_alias(source, prop->name,
- OBJECT(target), prop->name);
+ object_class_property_iter_init(&iter, class);
+ while ((prop = object_property_iter_next(&iter))) {
+ if (object_property_find(source, prop->name)) {
+ continue; /* skip duplicate properties */
}
- class = object_class_get_parent(class);
- } while (class != object_class_by_name(TYPE_DEVICE));
+
+ object_property_add_alias(source, prop->name,
+ OBJECT(target), prop->name);
+ }
}
diff --git a/include/hw/qdev-properties.h b/include/hw/qdev-properties.h
index 25743a29a0..09aa04ca1e 100644
--- a/include/hw/qdev-properties.h
+++ b/include/hw/qdev-properties.h
@@ -230,8 +230,8 @@ void qdev_property_add_static(DeviceState *dev, Property *prop);
* @target: Device which has properties to be aliased
* @source: Object to add alias properties to
*
- * Add alias properties to the @source object for all qdev properties on
- * the @target DeviceState.
+ * Add alias properties to the @source object for all properties on the @target
+ * DeviceState.
*
* This is useful when @target is an internal implementation object
* owned by @source, and you want to expose all the properties of that
--
2.39.3

kvm-scsi-Await-request-purging.patch

@@ -0,0 +1,124 @@
From 94d6458a58239b52394d58b6880509041186d5a8 Mon Sep 17 00:00:00 2001
From: Hanna Czenczek <hreitz@redhat.com>
Date: Fri, 2 Feb 2024 15:47:55 +0100
Subject: [PATCH 04/22] scsi: Await request purging
RH-Author: Hanna Czenczek <hreitz@redhat.com>
RH-MergeRequest: 222: Allow concurrent BlockBackend context changes
RH-Jira: RHEL-24593
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Commit: [2/2] 35a89273cab0af8f999881e67d359fe1328363a0 (hreitz/qemu-kvm-c-9-s)
scsi_device_for_each_req_async() currently does not provide any way to
be awaited. One of its callers is scsi_device_purge_requests(), which
therefore currently does not guarantee that all requests are fully
settled when it returns.
We want all requests to be settled, because scsi_device_purge_requests()
is called through the unrealize path, including the one invoked by
virtio_scsi_hotunplug() through qdev_simple_device_unplug_cb(), which
most likely assumes that all SCSI requests are done then.
In fact, scsi_device_purge_requests() already contains a blk_drain(),
but this will not fully await scsi_device_for_each_req_async(), only the
I/O requests it potentially cancels (not the non-I/O requests).
However, we can have scsi_device_for_each_req_async() increment the BB
in-flight counter, and have scsi_device_for_each_req_async_bh()
decrement it when it is done. This way, the blk_drain() will fully
await all SCSI requests to be purged.
This also removes the need for scsi_device_for_each_req_async_bh() to
double-check the current context and potentially re-schedule itself,
should it now differ from the BB's context: Changing a BB's AioContext
with a root node is done through bdrv_try_change_aio_context(), which
creates a drained section. With this patch, we keep the BB in-flight
counter elevated throughout, so we know the BB's context cannot change.
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
Message-ID: <20240202144755.671354-3-hreitz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 1604c0493193273e4eac547f86fbd2845e7f9af4)
---
hw/scsi/scsi-bus.c | 30 +++++++++++++++++++++---------
1 file changed, 21 insertions(+), 9 deletions(-)
diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
index 5b08cbf60a..b1bf8e6433 100644
--- a/hw/scsi/scsi-bus.c
+++ b/hw/scsi/scsi-bus.c
@@ -120,17 +120,13 @@ static void scsi_device_for_each_req_async_bh(void *opaque)
SCSIRequest *next;
/*
- * If the AioContext changed before this BH was called then reschedule into
- * the new AioContext before accessing ->requests. This can happen when
- * scsi_device_for_each_req_async() is called and then the AioContext is
- * changed before BHs are run.
+ * The BB cannot have changed contexts between this BH being scheduled and
+ * now: BBs' AioContexts, when they have a node attached, can only be
+ * changed via bdrv_try_change_aio_context(), in a drained section. While
+ * we have the in-flight counter incremented, that drain must block.
*/
ctx = blk_get_aio_context(s->conf.blk);
- if (ctx != qemu_get_current_aio_context()) {
- aio_bh_schedule_oneshot(ctx, scsi_device_for_each_req_async_bh,
- g_steal_pointer(&data));
- return;
- }
+ assert(ctx == qemu_get_current_aio_context());
QTAILQ_FOREACH_SAFE(req, &s->requests, next, next) {
data->fn(req, data->fn_opaque);
@@ -138,11 +134,16 @@ static void scsi_device_for_each_req_async_bh(void *opaque)
/* Drop the reference taken by scsi_device_for_each_req_async() */
object_unref(OBJECT(s));
+
+ /* Paired with blk_inc_in_flight() in scsi_device_for_each_req_async() */
+ blk_dec_in_flight(s->conf.blk);
}
/*
* Schedule @fn() to be invoked for each enqueued request in device @s. @fn()
* runs in the AioContext that is executing the request.
+ * Keeps the BlockBackend's in-flight counter incremented until everything is
+ * done, so draining it will settle all scheduled @fn() calls.
*/
static void scsi_device_for_each_req_async(SCSIDevice *s,
void (*fn)(SCSIRequest *, void *),
@@ -163,6 +164,8 @@ static void scsi_device_for_each_req_async(SCSIDevice *s,
*/
object_ref(OBJECT(s));
+ /* Paired with blk_dec_in_flight() in scsi_device_for_each_req_async_bh() */
+ blk_inc_in_flight(s->conf.blk);
aio_bh_schedule_oneshot(blk_get_aio_context(s->conf.blk),
scsi_device_for_each_req_async_bh,
data);
@@ -1728,11 +1731,20 @@ static void scsi_device_purge_one_req(SCSIRequest *req, void *opaque)
scsi_req_cancel_async(req, NULL);
}
+/**
+ * Cancel all requests, and block until they are deleted.
+ */
void scsi_device_purge_requests(SCSIDevice *sdev, SCSISense sense)
{
scsi_device_for_each_req_async(sdev, scsi_device_purge_one_req, NULL);
+ /*
+ * Await all the scsi_device_purge_one_req() calls scheduled by
+ * scsi_device_for_each_req_async(), and all I/O requests that were
+ * cancelled this way, but may still take a bit of time to settle.
+ */
blk_drain(sdev->conf.blk);
+
scsi_device_set_ua(sdev, sense);
}
--
2.39.3


@@ -0,0 +1,190 @@
From c5f9e92cd49a2171a5b0223cafd7fab3f45edb82 Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Tue, 9 Jan 2024 19:17:17 +0100
Subject: [PATCH 06/22] string-output-visitor: Fix (pseudo) struct handling
RH-Author: Stefan Hajnoczi <stefanha@redhat.com>
RH-MergeRequest: 219: virtio-blk: add iothread-vq-mapping parameter
RH-Jira: RHEL-17369 RHEL-20764 RHEL-7356
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Acked-by: Hanna Czenczek <hreitz@redhat.com>
RH-Commit: [2/17] 84e226f161680dd61b6635e213203d062c1aa556 (stefanha/centos-stream-qemu-kvm)
Commit ff32bb53 tried to get minimal struct support into the string
output visitor by just making it return "<omitted>". Unfortunately, it
forgot that the caller will still make more visitor calls for the
content of the struct.
If the struct is contained in a list, such as IOThreadVirtQueueMapping,
in the better case its fields show up as separate list entries. In the
worse case, it contains another list, and the string output visitor
doesn't support nested lists and asserts that this doesn't happen. So as
soon as the optional "vqs" field in IOThreadVirtQueueMapping is
specified, we get a crash.
This can be reproduced with the following command line:
echo "info qtree" | ./qemu-system-x86_64 \
-object iothread,id=t0 \
-blockdev null-co,node-name=disk \
-device '{"driver": "virtio-blk-pci", "drive": "disk",
"iothread-vq-mapping": [{"iothread": "t0", "vqs": [0]}]}' \
-monitor stdio
Fix the problem by counting the nesting level of structs and ignoring
any visitor calls for values (apart from start/end_struct) while we're
not on the top level.
Lists nested directly within lists remain unimplemented, as we don't
currently have a use case for them.
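Concretely, every scalar print_type_*() callback gains the same early-out; the bool case, mirroring the guards added throughout the diff below:

static bool print_type_bool(Visitor *v, const char *name, bool *obj,
                            Error **errp)
{
    StringOutputVisitor *sov = to_sov(v);

    if (sov->struct_nesting) {
        /* Inside a struct: swallow the value; end_struct() emits
         * "<omitted>" once nesting drops back to zero. */
        return true;
    }

    string_output_set(sov, g_strdup(*obj ? "true" : "false"));
    return true;
}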
Fixes: ff32bb53476539d352653f4ed56372dced73a388
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2069
Reported-by: Aihua Liang <aliang@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20240109181717.42493-1-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 014b99a8e41c8cd1e895137654b44dec5430122c)
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
qapi/string-output-visitor.c | 46 ++++++++++++++++++++++++++++++++++++
1 file changed, 46 insertions(+)
diff --git a/qapi/string-output-visitor.c b/qapi/string-output-visitor.c
index f0c1dea89e..5115536b15 100644
--- a/qapi/string-output-visitor.c
+++ b/qapi/string-output-visitor.c
@@ -65,6 +65,7 @@ struct StringOutputVisitor
} range_start, range_end;
GList *ranges;
void *list; /* Only needed for sanity checking the caller */
+ unsigned int struct_nesting;
};
static StringOutputVisitor *to_sov(Visitor *v)
@@ -144,6 +145,10 @@ static bool print_type_int64(Visitor *v, const char *name, int64_t *obj,
StringOutputVisitor *sov = to_sov(v);
GList *l;
+ if (sov->struct_nesting) {
+ return true;
+ }
+
switch (sov->list_mode) {
case LM_NONE:
string_output_append(sov, *obj);
@@ -231,6 +236,10 @@ static bool print_type_size(Visitor *v, const char *name, uint64_t *obj,
uint64_t val;
char *out, *psize;
+ if (sov->struct_nesting) {
+ return true;
+ }
+
if (!sov->human) {
out = g_strdup_printf("%"PRIu64, *obj);
string_output_set(sov, out);
@@ -250,6 +259,11 @@ static bool print_type_bool(Visitor *v, const char *name, bool *obj,
Error **errp)
{
StringOutputVisitor *sov = to_sov(v);
+
+ if (sov->struct_nesting) {
+ return true;
+ }
+
string_output_set(sov, g_strdup(*obj ? "true" : "false"));
return true;
}
@@ -260,6 +274,10 @@ static bool print_type_str(Visitor *v, const char *name, char **obj,
StringOutputVisitor *sov = to_sov(v);
char *out;
+ if (sov->struct_nesting) {
+ return true;
+ }
+
if (sov->human) {
out = *obj ? g_strdup_printf("\"%s\"", *obj) : g_strdup("<null>");
} else {
@@ -273,6 +291,11 @@ static bool print_type_number(Visitor *v, const char *name, double *obj,
Error **errp)
{
StringOutputVisitor *sov = to_sov(v);
+
+ if (sov->struct_nesting) {
+ return true;
+ }
+
string_output_set(sov, g_strdup_printf("%.17g", *obj));
return true;
}
@@ -283,6 +306,10 @@ static bool print_type_null(Visitor *v, const char *name, QNull **obj,
StringOutputVisitor *sov = to_sov(v);
char *out;
+ if (sov->struct_nesting) {
+ return true;
+ }
+
if (sov->human) {
out = g_strdup("<null>");
} else {
@@ -295,6 +322,9 @@ static bool print_type_null(Visitor *v, const char *name, QNull **obj,
static bool start_struct(Visitor *v, const char *name, void **obj,
size_t size, Error **errp)
{
+ StringOutputVisitor *sov = to_sov(v);
+
+ sov->struct_nesting++;
return true;
}
@@ -302,6 +332,10 @@ static void end_struct(Visitor *v, void **obj)
{
StringOutputVisitor *sov = to_sov(v);
+ if (--sov->struct_nesting) {
+ return;
+ }
+
/* TODO actually print struct fields */
string_output_set(sov, g_strdup("<omitted>"));
}
@@ -312,6 +346,10 @@ start_list(Visitor *v, const char *name, GenericList **list, size_t size,
{
StringOutputVisitor *sov = to_sov(v);
+ if (sov->struct_nesting) {
+ return true;
+ }
+
/* we can't traverse a list in a list */
assert(sov->list_mode == LM_NONE);
/* We don't support visits without a list */
@@ -329,6 +367,10 @@ static GenericList *next_list(Visitor *v, GenericList *tail, size_t size)
StringOutputVisitor *sov = to_sov(v);
GenericList *ret = tail->next;
+ if (sov->struct_nesting) {
+ return ret;
+ }
+
if (ret && !ret->next) {
sov->list_mode = LM_END;
}
@@ -339,6 +381,10 @@ static void end_list(Visitor *v, void **obj)
{
StringOutputVisitor *sov = to_sov(v);
+ if (sov->struct_nesting) {
+ return;
+ }
+
assert(sov->list == obj);
assert(sov->list_mode == LM_STARTED ||
sov->list_mode == LM_END ||
--
2.39.3


@@ -0,0 +1,90 @@
From fb2069be402ec1322834c555714f0e993778cc9d Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Tue, 12 Dec 2023 08:49:34 -0500
Subject: [PATCH 05/22] string-output-visitor: show structs as "<omitted>"
RH-Author: Stefan Hajnoczi <stefanha@redhat.com>
RH-MergeRequest: 219: virtio-blk: add iothread-vq-mapping parameter
RH-Jira: RHEL-17369 RHEL-20764 RHEL-7356
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Acked-by: Hanna Czenczek <hreitz@redhat.com>
RH-Commit: [1/17] 0c08e8237d28fbdbdbc7576d4c17d2eeeb413c2a (stefanha/centos-stream-qemu-kvm)
StringOutputVisitor crashes when it visits a struct because
->start_struct() is NULL.
Show "<omitted>" instead of crashing. This is necessary because the
virtio-blk-pci iothread-vq-mapping parameter that I'd like to introduce
soon is a list of IOThreadMapping structs.
This patch is a quick fix to solve the crash, but the long-term solution
is replacing StringOutputVisitor with something that can handle the full
gamut of values in QEMU.
Cc: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20231212134934.500289-1-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit ff32bb53476539d352653f4ed56372dced73a388)
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
include/qapi/string-output-visitor.h | 6 +++---
qapi/string-output-visitor.c | 16 ++++++++++++++++
2 files changed, 19 insertions(+), 3 deletions(-)
diff --git a/include/qapi/string-output-visitor.h b/include/qapi/string-output-visitor.h
index 268dfe9986..b1ee473b30 100644
--- a/include/qapi/string-output-visitor.h
+++ b/include/qapi/string-output-visitor.h
@@ -26,9 +26,9 @@ typedef struct StringOutputVisitor StringOutputVisitor;
* If everything else succeeds, pass @result to visit_complete() to
* collect the result of the visit.
*
- * The string output visitor does not implement support for visiting
- * QAPI structs, alternates, null, or arbitrary QTypes. It also
- * requires a non-null list argument to visit_start_list().
+ * The string output visitor does not implement support for alternates, null,
+ * or arbitrary QTypes. Struct fields are not shown. It also requires a
+ * non-null list argument to visit_start_list().
*/
Visitor *string_output_visitor_new(bool human, char **result);
diff --git a/qapi/string-output-visitor.c b/qapi/string-output-visitor.c
index c0cb72dbe4..f0c1dea89e 100644
--- a/qapi/string-output-visitor.c
+++ b/qapi/string-output-visitor.c
@@ -292,6 +292,20 @@ static bool print_type_null(Visitor *v, const char *name, QNull **obj,
return true;
}
+static bool start_struct(Visitor *v, const char *name, void **obj,
+ size_t size, Error **errp)
+{
+ return true;
+}
+
+static void end_struct(Visitor *v, void **obj)
+{
+ StringOutputVisitor *sov = to_sov(v);
+
+ /* TODO actually print struct fields */
+ string_output_set(sov, g_strdup("<omitted>"));
+}
+
static bool
start_list(Visitor *v, const char *name, GenericList **list, size_t size,
Error **errp)
@@ -379,6 +393,8 @@ Visitor *string_output_visitor_new(bool human, char **result)
v->visitor.type_str = print_type_str;
v->visitor.type_number = print_type_number;
v->visitor.type_null = print_type_null;
+ v->visitor.start_struct = start_struct;
+ v->visitor.end_struct = end_struct;
v->visitor.start_list = start_list;
v->visitor.next_list = next_list;
v->visitor.end_list = end_list;
--
2.39.3


@@ -0,0 +1,46 @@
From bbe64d706b3cb8b10ecd22bd71cf76b21eea257f Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Thu, 25 Jan 2024 17:58:03 +0100
Subject: [PATCH 20/22] tests/unit: Bump test-replication timeout to 60 seconds
RH-Author: Stefan Hajnoczi <stefanha@redhat.com>
RH-MergeRequest: 219: virtio-blk: add iothread-vq-mapping parameter
RH-Jira: RHEL-17369 RHEL-20764 RHEL-7356
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Acked-by: Hanna Czenczek <hreitz@redhat.com>
RH-Commit: [16/17] 200768aedee44d10aa8d199b92a9c17a9002fc3f (stefanha/centos-stream-qemu-kvm)
We're seeing timeouts for this test on CI runs (specifically for
ubuntu-20.04-s390x-all). It doesn't fail consistently, but even the
successful runs take about 27 or 28 seconds, which is not very far from
the 30 seconds timeout.
Bump the timeout a bit to make failure less likely even on this CI host.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20240125165803.48373-1-kwolf@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 63b18312d14ac984acaf13c7c55d9baa2d61496e)
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
tests/unit/meson.build | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/tests/unit/meson.build b/tests/unit/meson.build
index a05d471090..28db6adea8 100644
--- a/tests/unit/meson.build
+++ b/tests/unit/meson.build
@@ -173,7 +173,8 @@ test_env.set('G_TEST_BUILDDIR', meson.current_build_dir())
slow_tests = {
'test-crypto-tlscredsx509': 45,
- 'test-crypto-tlssession': 45
+ 'test-crypto-tlssession': 45,
+ 'test-replication': 60,
}
foreach test_name, extra: tests
--
2.39.3


@@ -0,0 +1,47 @@
From 376df80fbba5a9bb0ec43cad083cde9de59128d7 Mon Sep 17 00:00:00 2001
From: Stefan Weil via <qemu-trivial@nongnu.org>
Date: Sun, 24 Dec 2023 12:43:14 +0100
Subject: [PATCH 10/22] virtio-blk: Fix potential nullpointer read access in
virtio_blk_data_plane_destroy
RH-Author: Stefan Hajnoczi <stefanha@redhat.com>
RH-MergeRequest: 219: virtio-blk: add iothread-vq-mapping parameter
RH-Jira: RHEL-17369 RHEL-20764 RHEL-7356
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Acked-by: Hanna Czenczek <hreitz@redhat.com>
RH-Commit: [6/17] 460005fc7719b2e1dd577dfe75d18537ab2b8d06 (stefanha/centos-stream-qemu-kvm)
Fixes: CID 1532828
Fixes: b6948ab01d ("virtio-blk: add iothread-vq-mapping parameter")
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(cherry picked from commit d819fc9516a4ec71e37a6c9edfcd285b7f98c2dc)
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/block/dataplane/virtio-blk.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 6debd4401e..97a302cf49 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -152,7 +152,7 @@ bool virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *conf,
void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
{
VirtIOBlock *vblk;
- VirtIOBlkConf *conf = s->conf;
+ VirtIOBlkConf *conf;
if (!s) {
return;
@@ -160,6 +160,7 @@ void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
vblk = VIRTIO_BLK(s->vdev);
assert(!vblk->dataplane_started);
+ conf = s->conf;
if (conf->iothread_vq_mapping_list) {
IOThreadVirtQueueMappingList *node;
--
2.39.3


@@ -0,0 +1,464 @@
From 733fc13f65286c849ad6618be89df450f8bc5f7e Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Wed, 20 Dec 2023 08:47:55 -0500
Subject: [PATCH 09/22] virtio-blk: add iothread-vq-mapping parameter
RH-Author: Stefan Hajnoczi <stefanha@redhat.com>
RH-MergeRequest: 219: virtio-blk: add iothread-vq-mapping parameter
RH-Jira: RHEL-17369 RHEL-20764 RHEL-7356
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Acked-by: Hanna Czenczek <hreitz@redhat.com>
RH-Commit: [5/17] c371fe62376c4eb54da88272a5966cec28404224 (stefanha/centos-stream-qemu-kvm)
Add the iothread-vq-mapping parameter to assign virtqueues to IOThreads.
Store the vq:AioContext mapping in the new struct
VirtIOBlockDataPlane->vq_aio_context[] field and refactor the code to
use the per-vq AioContext instead of the BlockDriverState's AioContext.
Reimplement --device virtio-blk-pci,iothread= and non-IOThread mode by
assigning all virtqueues to the IOThread and main loop's AioContext in
vq_aio_context[], respectively.
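The new apply_vq_mapping() helper in the diff below assigns virtqueues round-robin when a mapping entry omits the optional "vqs" list: iothread t gets vqs t, t + num_iothreads, t + 2 * num_iothreads, and so on. A tiny standalone illustration of that stride (the queue and thread counts are made up; this is not QEMU code):

#include <stdio.h>

int main(void)
{
    const unsigned num_queues = 8, num_iothreads = 3;

    /* Same loop shape as apply_vq_mapping()'s round-robin arm */
    for (unsigned t = 0; t < num_iothreads; t++) {
        printf("iothread %u:", t);
        for (unsigned vq = t; vq < num_queues; vq += num_iothreads) {
            printf(" vq%u", vq);
        }
        printf("\n");
    }
    return 0;
}

This prints vqs 0/3/6 for iothread 0, 1/4/7 for iothread 1, and 2/5 for iothread 2.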
The comment in struct VirtIOBlockDataPlane about EventNotifiers is
stale. Remove it.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20231220134755.814917-5-stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit b6948ab01df068bef591868c22d1f873d2d05cde)
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/block/dataplane/virtio-blk.c | 155 ++++++++++++++++++++++++--------
hw/block/dataplane/virtio-blk.h | 3 +
hw/block/virtio-blk.c | 92 ++++++++++++++++---
include/hw/virtio/virtio-blk.h | 2 +
4 files changed, 202 insertions(+), 50 deletions(-)
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 7bbbd981ad..6debd4401e 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -32,13 +32,11 @@ struct VirtIOBlockDataPlane {
VirtIOBlkConf *conf;
VirtIODevice *vdev;
- /* Note that these EventNotifiers are assigned by value. This is
- * fine as long as you do not call event_notifier_cleanup on them
- * (because you don't own the file descriptor or handle; you just
- * use it).
+ /*
+ * The AioContext for each virtqueue. The BlockDriverState will use the
+ * first element as its AioContext.
*/
- IOThread *iothread;
- AioContext *ctx;
+ AioContext **vq_aio_context;
};
/* Raise an interrupt to signal guest, if necessary */
@@ -47,6 +45,45 @@ void virtio_blk_data_plane_notify(VirtIOBlockDataPlane *s, VirtQueue *vq)
virtio_notify_irqfd(s->vdev, vq);
}
+/* Generate vq:AioContext mappings from a validated iothread-vq-mapping list */
+static void
+apply_vq_mapping(IOThreadVirtQueueMappingList *iothread_vq_mapping_list,
+ AioContext **vq_aio_context, uint16_t num_queues)
+{
+ IOThreadVirtQueueMappingList *node;
+ size_t num_iothreads = 0;
+ size_t cur_iothread = 0;
+
+ for (node = iothread_vq_mapping_list; node; node = node->next) {
+ num_iothreads++;
+ }
+
+ for (node = iothread_vq_mapping_list; node; node = node->next) {
+ IOThread *iothread = iothread_by_id(node->value->iothread);
+ AioContext *ctx = iothread_get_aio_context(iothread);
+
+ /* Released in virtio_blk_data_plane_destroy() */
+ object_ref(OBJECT(iothread));
+
+ if (node->value->vqs) {
+ uint16List *vq;
+
+ /* Explicit vq:IOThread assignment */
+ for (vq = node->value->vqs; vq; vq = vq->next) {
+ vq_aio_context[vq->value] = ctx;
+ }
+ } else {
+ /* Round-robin vq:IOThread assignment */
+ for (unsigned i = cur_iothread; i < num_queues;
+ i += num_iothreads) {
+ vq_aio_context[i] = ctx;
+ }
+ }
+
+ cur_iothread++;
+ }
+}
+
/* Context: QEMU global mutex held */
bool virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *conf,
VirtIOBlockDataPlane **dataplane,
@@ -58,7 +95,7 @@ bool virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *conf,
*dataplane = NULL;
- if (conf->iothread) {
+ if (conf->iothread || conf->iothread_vq_mapping_list) {
if (!k->set_guest_notifiers || !k->ioeventfd_assign) {
error_setg(errp,
"device is incompatible with iothread "
@@ -86,13 +123,24 @@ bool virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *conf,
s = g_new0(VirtIOBlockDataPlane, 1);
s->vdev = vdev;
s->conf = conf;
+ s->vq_aio_context = g_new(AioContext *, conf->num_queues);
+
+ if (conf->iothread_vq_mapping_list) {
+ apply_vq_mapping(conf->iothread_vq_mapping_list, s->vq_aio_context,
+ conf->num_queues);
+ } else if (conf->iothread) {
+ AioContext *ctx = iothread_get_aio_context(conf->iothread);
+ for (unsigned i = 0; i < conf->num_queues; i++) {
+ s->vq_aio_context[i] = ctx;
+ }
- if (conf->iothread) {
- s->iothread = conf->iothread;
- object_ref(OBJECT(s->iothread));
- s->ctx = iothread_get_aio_context(s->iothread);
+ /* Released in virtio_blk_data_plane_destroy() */
+ object_ref(OBJECT(conf->iothread));
} else {
- s->ctx = qemu_get_aio_context();
+ AioContext *ctx = qemu_get_aio_context();
+ for (unsigned i = 0; i < conf->num_queues; i++) {
+ s->vq_aio_context[i] = ctx;
+ }
}
*dataplane = s;
@@ -104,6 +152,7 @@ bool virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *conf,
void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
{
VirtIOBlock *vblk;
+ VirtIOBlkConf *conf = s->conf;
if (!s) {
return;
@@ -111,9 +160,21 @@ void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
vblk = VIRTIO_BLK(s->vdev);
assert(!vblk->dataplane_started);
- if (s->iothread) {
- object_unref(OBJECT(s->iothread));
+
+ if (conf->iothread_vq_mapping_list) {
+ IOThreadVirtQueueMappingList *node;
+
+ for (node = conf->iothread_vq_mapping_list; node; node = node->next) {
+ IOThread *iothread = iothread_by_id(node->value->iothread);
+ object_unref(OBJECT(iothread));
+ }
+ }
+
+ if (conf->iothread) {
+ object_unref(OBJECT(conf->iothread));
}
+
+ g_free(s->vq_aio_context);
g_free(s);
}
@@ -177,19 +238,13 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
trace_virtio_blk_data_plane_start(s);
- r = blk_set_aio_context(s->conf->conf.blk, s->ctx, &local_err);
+ r = blk_set_aio_context(s->conf->conf.blk, s->vq_aio_context[0],
+ &local_err);
if (r < 0) {
error_report_err(local_err);
goto fail_aio_context;
}
- /* Kick right away to begin processing requests already in vring */
- for (i = 0; i < nvqs; i++) {
- VirtQueue *vq = virtio_get_queue(s->vdev, i);
-
- event_notifier_set(virtio_queue_get_host_notifier(vq));
- }
-
/*
* These fields must be visible to the IOThread when it processes the
* virtqueue, otherwise it will think dataplane has not started yet.
@@ -206,8 +261,12 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
if (!blk_in_drain(s->conf->conf.blk)) {
for (i = 0; i < nvqs; i++) {
VirtQueue *vq = virtio_get_queue(s->vdev, i);
+ AioContext *ctx = s->vq_aio_context[i];
- virtio_queue_aio_attach_host_notifier(vq, s->ctx);
+ /* Kick right away to begin processing requests already in vring */
+ event_notifier_set(virtio_queue_get_host_notifier(vq));
+
+ virtio_queue_aio_attach_host_notifier(vq, ctx);
}
}
return 0;
@@ -236,23 +295,18 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
*
* Context: BH in IOThread
*/
-static void virtio_blk_data_plane_stop_bh(void *opaque)
+static void virtio_blk_data_plane_stop_vq_bh(void *opaque)
{
- VirtIOBlockDataPlane *s = opaque;
- unsigned i;
-
- for (i = 0; i < s->conf->num_queues; i++) {
- VirtQueue *vq = virtio_get_queue(s->vdev, i);
- EventNotifier *host_notifier = virtio_queue_get_host_notifier(vq);
+ VirtQueue *vq = opaque;
+ EventNotifier *host_notifier = virtio_queue_get_host_notifier(vq);
- virtio_queue_aio_detach_host_notifier(vq, s->ctx);
+ virtio_queue_aio_detach_host_notifier(vq, qemu_get_current_aio_context());
- /*
- * Test and clear notifier after disabling event, in case poll callback
- * didn't have time to run.
- */
- virtio_queue_host_notifier_read(host_notifier);
- }
+ /*
+ * Test and clear notifier after disabling event, in case poll callback
+ * didn't have time to run.
+ */
+ virtio_queue_host_notifier_read(host_notifier);
}
/* Context: QEMU global mutex held */
@@ -279,7 +333,12 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
trace_virtio_blk_data_plane_stop(s);
if (!blk_in_drain(s->conf->conf.blk)) {
- aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s);
+ for (i = 0; i < nvqs; i++) {
+ VirtQueue *vq = virtio_get_queue(s->vdev, i);
+ AioContext *ctx = s->vq_aio_context[i];
+
+ aio_wait_bh_oneshot(ctx, virtio_blk_data_plane_stop_vq_bh, vq);
+ }
}
/*
@@ -322,3 +381,23 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
s->stopping = false;
}
+
+void virtio_blk_data_plane_detach(VirtIOBlockDataPlane *s)
+{
+ VirtIODevice *vdev = VIRTIO_DEVICE(s->vdev);
+
+ for (uint16_t i = 0; i < s->conf->num_queues; i++) {
+ VirtQueue *vq = virtio_get_queue(vdev, i);
+ virtio_queue_aio_detach_host_notifier(vq, s->vq_aio_context[i]);
+ }
+}
+
+void virtio_blk_data_plane_attach(VirtIOBlockDataPlane *s)
+{
+ VirtIODevice *vdev = VIRTIO_DEVICE(s->vdev);
+
+ for (uint16_t i = 0; i < s->conf->num_queues; i++) {
+ VirtQueue *vq = virtio_get_queue(vdev, i);
+ virtio_queue_aio_attach_host_notifier(vq, s->vq_aio_context[i]);
+ }
+}
diff --git a/hw/block/dataplane/virtio-blk.h b/hw/block/dataplane/virtio-blk.h
index 5e18bb99ae..1a806fe447 100644
--- a/hw/block/dataplane/virtio-blk.h
+++ b/hw/block/dataplane/virtio-blk.h
@@ -28,4 +28,7 @@ void virtio_blk_data_plane_notify(VirtIOBlockDataPlane *s, VirtQueue *vq);
int virtio_blk_data_plane_start(VirtIODevice *vdev);
void virtio_blk_data_plane_stop(VirtIODevice *vdev);
+void virtio_blk_data_plane_detach(VirtIOBlockDataPlane *s);
+void virtio_blk_data_plane_attach(VirtIOBlockDataPlane *s);
+
#endif /* HW_DATAPLANE_VIRTIO_BLK_H */
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index ec9ed09a6a..46e73b2c96 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -1151,6 +1151,7 @@ static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
return;
}
}
+
virtio_blk_handle_vq(s, vq);
}
@@ -1463,6 +1464,68 @@ static int virtio_blk_load_device(VirtIODevice *vdev, QEMUFile *f,
return 0;
}
+static bool
+validate_iothread_vq_mapping_list(IOThreadVirtQueueMappingList *list,
+ uint16_t num_queues, Error **errp)
+{
+ g_autofree unsigned long *vqs = bitmap_new(num_queues);
+ g_autoptr(GHashTable) iothreads =
+ g_hash_table_new(g_str_hash, g_str_equal);
+
+ for (IOThreadVirtQueueMappingList *node = list; node; node = node->next) {
+ const char *name = node->value->iothread;
+ uint16List *vq;
+
+ if (!iothread_by_id(name)) {
+ error_setg(errp, "IOThread \"%s\" object does not exist", name);
+ return false;
+ }
+
+ if (!g_hash_table_add(iothreads, (gpointer)name)) {
+ error_setg(errp,
+ "duplicate IOThread name \"%s\" in iothread-vq-mapping",
+ name);
+ return false;
+ }
+
+ if (node != list) {
+ if (!!node->value->vqs != !!list->value->vqs) {
+ error_setg(errp, "either all items in iothread-vq-mapping "
+ "must have vqs or none of them must have it");
+ return false;
+ }
+ }
+
+ for (vq = node->value->vqs; vq; vq = vq->next) {
+ if (vq->value >= num_queues) {
+ error_setg(errp, "vq index %u for IOThread \"%s\" must be "
+ "less than num_queues %u in iothread-vq-mapping",
+ vq->value, name, num_queues);
+ return false;
+ }
+
+ if (test_and_set_bit(vq->value, vqs)) {
+ error_setg(errp, "cannot assign vq %u to IOThread \"%s\" "
+ "because it is already assigned", vq->value, name);
+ return false;
+ }
+ }
+ }
+
+ if (list->value->vqs) {
+ for (uint16_t i = 0; i < num_queues; i++) {
+ if (!test_bit(i, vqs)) {
+ error_setg(errp,
+ "missing vq %u IOThread assignment in iothread-vq-mapping",
+ i);
+ return false;
+ }
+ }
+ }
+
+ return true;
+}
+
static void virtio_resize_cb(void *opaque)
{
VirtIODevice *vdev = opaque;
@@ -1487,34 +1550,24 @@ static void virtio_blk_resize(void *opaque)
static void virtio_blk_drained_begin(void *opaque)
{
VirtIOBlock *s = opaque;
- VirtIODevice *vdev = VIRTIO_DEVICE(opaque);
- AioContext *ctx = blk_get_aio_context(s->conf.conf.blk);
if (!s->dataplane || !s->dataplane_started) {
return;
}
- for (uint16_t i = 0; i < s->conf.num_queues; i++) {
- VirtQueue *vq = virtio_get_queue(vdev, i);
- virtio_queue_aio_detach_host_notifier(vq, ctx);
- }
+ virtio_blk_data_plane_detach(s->dataplane);
}
/* Resume virtqueue ioeventfd processing after drain */
static void virtio_blk_drained_end(void *opaque)
{
VirtIOBlock *s = opaque;
- VirtIODevice *vdev = VIRTIO_DEVICE(opaque);
- AioContext *ctx = blk_get_aio_context(s->conf.conf.blk);
if (!s->dataplane || !s->dataplane_started) {
return;
}
- for (uint16_t i = 0; i < s->conf.num_queues; i++) {
- VirtQueue *vq = virtio_get_queue(vdev, i);
- virtio_queue_aio_attach_host_notifier(vq, ctx);
- }
+ virtio_blk_data_plane_attach(s->dataplane);
}
static const BlockDevOps virtio_block_ops = {
@@ -1600,6 +1653,19 @@ static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
return;
}
+ if (conf->iothread_vq_mapping_list) {
+ if (conf->iothread) {
+ error_setg(errp, "iothread and iothread-vq-mapping properties "
+ "cannot be set at the same time");
+ return;
+ }
+
+ if (!validate_iothread_vq_mapping_list(conf->iothread_vq_mapping_list,
+ conf->num_queues, errp)) {
+ return;
+ }
+ }
+
s->config_size = virtio_get_config_size(&virtio_blk_cfg_size_params,
s->host_features);
virtio_init(vdev, VIRTIO_ID_BLOCK, s->config_size);
@@ -1702,6 +1768,8 @@ static Property virtio_blk_properties[] = {
DEFINE_PROP_BOOL("seg-max-adjust", VirtIOBlock, conf.seg_max_adjust, true),
DEFINE_PROP_LINK("iothread", VirtIOBlock, conf.iothread, TYPE_IOTHREAD,
IOThread *),
+ DEFINE_PROP_IOTHREAD_VQ_MAPPING_LIST("iothread-vq-mapping", VirtIOBlock,
+ conf.iothread_vq_mapping_list),
DEFINE_PROP_BIT64("discard", VirtIOBlock, host_features,
VIRTIO_BLK_F_DISCARD, true),
DEFINE_PROP_BOOL("report-discard-granularity", VirtIOBlock,
diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
index 9881009c22..5e4091e4da 100644
--- a/include/hw/virtio/virtio-blk.h
+++ b/include/hw/virtio/virtio-blk.h
@@ -21,6 +21,7 @@
#include "sysemu/block-backend.h"
#include "sysemu/block-ram-registrar.h"
#include "qom/object.h"
+#include "qapi/qapi-types-virtio.h"
#define TYPE_VIRTIO_BLK "virtio-blk-device"
OBJECT_DECLARE_SIMPLE_TYPE(VirtIOBlock, VIRTIO_BLK)
@@ -37,6 +38,7 @@ struct VirtIOBlkConf
{
BlockConf conf;
IOThread *iothread;
+ IOThreadVirtQueueMappingList *iothread_vq_mapping_list;
char *serial;
uint32_t request_merging;
uint16_t num_queues;
--
2.39.3


@@ -0,0 +1,63 @@
From 22730552442003e81c8c508c3e7ebacf647e4e75 Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Fri, 19 Jan 2024 08:57:48 -0500
Subject: [PATCH 19/22] virtio-blk: always set ioeventfd during startup
RH-Author: Stefan Hajnoczi <stefanha@redhat.com>
RH-MergeRequest: 219: virtio-blk: add iothread-vq-mapping parameter
RH-Jira: RHEL-17369 RHEL-20764 RHEL-7356
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Acked-by: Hanna Czenczek <hreitz@redhat.com>
RH-Commit: [15/17] 5f7142aeaa54fda41bd5c4fd3222fd8e3e18f370 (stefanha/centos-stream-qemu-kvm)
When starting ioeventfd it is common practice to set the event notifier
so that the ioeventfd handler is triggered to run immediately. There may
be no requests waiting to be processed, but the idea is that if a
request snuck in then we guarantee that it will be detected.
One scenario where self-triggering the ioeventfd is necessary is when
virtio_blk_handle_output() is called from a vCPU thread before the
VIRTIO Device Status transitions to DRIVER_OK. In that case we need to
self-trigger the ioeventfd so that the kick handled by the vCPU thread
causes the vq AioContext thread to take over handling the request(s).
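On POSIX hosts, event_notifier_set() essentially amounts to a write to an eventfd, so the "self-trigger" idea can be pictured with this standalone, Linux-only sketch (plain eventfd(2) calls, not QEMU APIs):

#include <inttypes.h>
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/eventfd.h>

int main(void)
{
    int fd = eventfd(0, 0);
    uint64_t val = 1, out;

    if (fd < 0) {
        return 1;
    }

    /* Self-trigger: guarantee the handler observes at least one event,
     * even if no guest kick landed in the startup race window */
    if (write(fd, &val, sizeof(val)) != sizeof(val)) {
        return 1;
    }

    /* Handler side: the pending event is seen immediately */
    if (read(fd, &out, sizeof(out)) != sizeof(out)) {
        return 1;
    }
    printf("handler woke up, counter=%" PRIu64 "\n", out);

    close(fd);
    return 0;
}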
Fixes: b6948ab01df0 ("virtio-blk: add iothread-vq-mapping parameter")
Reported-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20240119135748.270944-7-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit d3f6f294aeadd5f88caf0155e4360808c95b3146)
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/block/virtio-blk.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 81de06c9f6..0b9100b746 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -1809,14 +1809,14 @@ static int virtio_blk_start_ioeventfd(VirtIODevice *vdev)
smp_wmb(); /* paired with aio_notify_accept() on the read side */
/* Get this show started by hooking up our callbacks */
- if (!blk_in_drain(s->conf.conf.blk)) {
- for (i = 0; i < nvqs; i++) {
- VirtQueue *vq = virtio_get_queue(vdev, i);
- AioContext *ctx = s->vq_aio_context[i];
+ for (i = 0; i < nvqs; i++) {
+ VirtQueue *vq = virtio_get_queue(vdev, i);
+ AioContext *ctx = s->vq_aio_context[i];
- /* Kick right away to begin processing requests already in vring */
- event_notifier_set(virtio_queue_get_host_notifier(vq));
+ /* Kick right away to begin processing requests already in vring */
+ event_notifier_set(virtio_queue_get_host_notifier(vq));
+ if (!blk_in_drain(s->conf.conf.blk)) {
virtio_queue_aio_attach_host_notifier(vq, ctx);
}
}
--
2.39.3

File diff suppressed because it is too large


@@ -0,0 +1,117 @@
From 71257c2f320f1511de1e275779cf4b90effc1f02 Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Fri, 19 Jan 2024 08:57:44 -0500
Subject: [PATCH 15/22] virtio-blk: rename dataplane create/destroy functions
RH-Author: Stefan Hajnoczi <stefanha@redhat.com>
RH-MergeRequest: 219: virtio-blk: add iothread-vq-mapping parameter
RH-Jira: RHEL-17369 RHEL-20764 RHEL-7356
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Acked-by: Hanna Czenczek <hreitz@redhat.com>
RH-Commit: [11/17] 60e7016d5f3e4e9e89945578279b12f812f85ddf (stefanha/centos-stream-qemu-kvm)
virtio_blk_data_plane_create() and virtio_blk_data_plane_destroy() actually
manage s->vq_aio_context[] rather than any dataplane-specific state.
As a prerequisite to using s->vq_aio_context[] in all code paths (even
when dataplane is not used), rename these functions to reflect that they
just manage s->vq_aio_context and call them regardless of whether or not
dataplane is in use.
Note that virtio-blk supports running with -device
virtio-blk-pci,ioeventfd=off where the vCPU thread enters the device
emulation code. In this mode ioeventfd is not used for virtqueue
processing. However, we still want to initialize s->vq_aio_context[] to
qemu_aio_context in that case since I/O completion callbacks will be
invoked in the main loop thread.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20240119135748.270944-3-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 57bc2658935778d1ae0edbcd4402763da8c7bae2)
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/block/virtio-blk.c | 23 ++++++++++++-----------
1 file changed, 12 insertions(+), 11 deletions(-)
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index cb623069f8..4d6f9377c6 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -1608,7 +1608,7 @@ apply_vq_mapping(IOThreadVirtQueueMappingList *iothread_vq_mapping_list,
IOThread *iothread = iothread_by_id(node->value->iothread);
AioContext *ctx = iothread_get_aio_context(iothread);
- /* Released in virtio_blk_data_plane_destroy() */
+ /* Released in virtio_blk_vq_aio_context_cleanup() */
object_ref(OBJECT(iothread));
if (node->value->vqs) {
@@ -1631,7 +1631,7 @@ apply_vq_mapping(IOThreadVirtQueueMappingList *iothread_vq_mapping_list,
}
/* Context: BQL held */
-static bool virtio_blk_data_plane_create(VirtIOBlock *s, Error **errp)
+static bool virtio_blk_vq_aio_context_init(VirtIOBlock *s, Error **errp)
{
VirtIODevice *vdev = VIRTIO_DEVICE(s);
VirtIOBlkConf *conf = &s->conf;
@@ -1659,11 +1659,6 @@ static bool virtio_blk_data_plane_create(VirtIOBlock *s, Error **errp)
return false;
}
}
- /* Don't try if transport does not support notifiers. */
- if (!virtio_device_ioeventfd_enabled(vdev)) {
- s->dataplane_disabled = true;
- return false;
- }
s->vq_aio_context = g_new(AioContext *, conf->num_queues);
@@ -1676,7 +1671,7 @@ static bool virtio_blk_data_plane_create(VirtIOBlock *s, Error **errp)
s->vq_aio_context[i] = ctx;
}
- /* Released in virtio_blk_data_plane_destroy() */
+ /* Released in virtio_blk_vq_aio_context_cleanup() */
object_ref(OBJECT(conf->iothread));
} else {
AioContext *ctx = qemu_get_aio_context();
@@ -1689,7 +1684,7 @@ static bool virtio_blk_data_plane_create(VirtIOBlock *s, Error **errp)
}
/* Context: BQL held */
-static void virtio_blk_data_plane_destroy(VirtIOBlock *s)
+static void virtio_blk_vq_aio_context_cleanup(VirtIOBlock *s)
{
VirtIOBlkConf *conf = &s->conf;
@@ -2015,7 +2010,13 @@ static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
virtio_add_queue(vdev, conf->queue_size, virtio_blk_handle_output);
}
qemu_coroutine_inc_pool_size(conf->num_queues * conf->queue_size / 2);
- virtio_blk_data_plane_create(s, &err);
+
+ /* Don't start dataplane if transport does not support notifiers. */
+ if (!virtio_device_ioeventfd_enabled(vdev)) {
+ s->dataplane_disabled = true;
+ }
+
+ virtio_blk_vq_aio_context_init(s, &err);
if (err != NULL) {
error_propagate(errp, err);
for (i = 0; i < conf->num_queues; i++) {
@@ -2052,7 +2053,7 @@ static void virtio_blk_device_unrealize(DeviceState *dev)
blk_drain(s->blk);
del_boot_device_lchs(dev, "/disk@0,0");
- virtio_blk_data_plane_destroy(s);
+ virtio_blk_vq_aio_context_cleanup(s);
for (i = 0; i < conf->num_queues; i++) {
virtio_del_queue(vdev, i);
}
--
2.39.3


@@ -0,0 +1,307 @@
From ba80cdcd5604b9b9efc4682ade9828ab74ebf5e6 Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Fri, 19 Jan 2024 08:57:45 -0500
Subject: [PATCH 16/22] virtio-blk: rename dataplane to ioeventfd
RH-Author: Stefan Hajnoczi <stefanha@redhat.com>
RH-MergeRequest: 219: virtio-blk: add iothread-vq-mapping parameter
RH-Jira: RHEL-17369 RHEL-20764 RHEL-7356
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Acked-by: Hanna Czenczek <hreitz@redhat.com>
RH-Commit: [12/17] 4230005e0d1b4629fe4540f1f63cd705e58618da (stefanha/centos-stream-qemu-kvm)
The dataplane code is really about using ioeventfd. It's used both for
IOThreads (what we think of as dataplane) and for the core virtio-pci
code's ioeventfd feature (which is enabled by default and used when no
IOThread has been specified). Rename the code to reflect this.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20240119135748.270944-4-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 3cdaf3dd4a4ca94ebabe7eab23b432f1a6c547cc)
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/block/virtio-blk.c | 78 +++++++++++++++++-----------------
include/hw/virtio/virtio-blk.h | 8 ++--
2 files changed, 43 insertions(+), 43 deletions(-)
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 4d6f9377c6..08c566946a 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -64,7 +64,7 @@ static void virtio_blk_req_complete(VirtIOBlockReq *req, unsigned char status)
iov_discard_undo(&req->inhdr_undo);
iov_discard_undo(&req->outhdr_undo);
virtqueue_push(req->vq, &req->elem, req->in_len);
- if (s->dataplane_started && !s->dataplane_disabled) {
+ if (s->ioeventfd_started && !s->ioeventfd_disabled) {
virtio_notify_irqfd(vdev, req->vq);
} else {
virtio_notify(vdev, req->vq);
@@ -1141,12 +1141,12 @@ static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
{
VirtIOBlock *s = (VirtIOBlock *)vdev;
- if (!s->dataplane_disabled && !s->dataplane_started) {
+ if (!s->ioeventfd_disabled && !s->ioeventfd_started) {
/* Some guests kick before setting VIRTIO_CONFIG_S_DRIVER_OK so start
- * dataplane here instead of waiting for .set_status().
+ * ioeventfd here instead of waiting for .set_status().
*/
virtio_device_start_ioeventfd(vdev);
- if (!s->dataplane_disabled) {
+ if (!s->ioeventfd_disabled) {
return;
}
}
@@ -1213,7 +1213,7 @@ static void virtio_blk_reset(VirtIODevice *vdev)
VirtIOBlockReq *req;
/* Dataplane has stopped... */
- assert(!s->dataplane_started);
+ assert(!s->ioeventfd_started);
/* ...but requests may still be in flight. */
blk_drain(s->blk);
@@ -1380,7 +1380,7 @@ static void virtio_blk_set_status(VirtIODevice *vdev, uint8_t status)
VirtIOBlock *s = VIRTIO_BLK(vdev);
if (!(status & (VIRTIO_CONFIG_S_DRIVER | VIRTIO_CONFIG_S_DRIVER_OK))) {
- assert(!s->dataplane_started);
+ assert(!s->ioeventfd_started);
}
if (!(status & VIRTIO_CONFIG_S_DRIVER_OK)) {
@@ -1545,7 +1545,7 @@ static void virtio_blk_resize(void *opaque)
aio_bh_schedule_oneshot(qemu_get_aio_context(), virtio_resize_cb, vdev);
}
-static void virtio_blk_data_plane_detach(VirtIOBlock *s)
+static void virtio_blk_ioeventfd_detach(VirtIOBlock *s)
{
VirtIODevice *vdev = VIRTIO_DEVICE(s);
@@ -1555,7 +1555,7 @@ static void virtio_blk_data_plane_detach(VirtIOBlock *s)
}
}
-static void virtio_blk_data_plane_attach(VirtIOBlock *s)
+static void virtio_blk_ioeventfd_attach(VirtIOBlock *s)
{
VirtIODevice *vdev = VIRTIO_DEVICE(s);
@@ -1570,8 +1570,8 @@ static void virtio_blk_drained_begin(void *opaque)
{
VirtIOBlock *s = opaque;
- if (s->dataplane_started) {
- virtio_blk_data_plane_detach(s);
+ if (s->ioeventfd_started) {
+ virtio_blk_ioeventfd_detach(s);
}
}
@@ -1580,8 +1580,8 @@ static void virtio_blk_drained_end(void *opaque)
{
VirtIOBlock *s = opaque;
- if (s->dataplane_started) {
- virtio_blk_data_plane_attach(s);
+ if (s->ioeventfd_started) {
+ virtio_blk_ioeventfd_attach(s);
}
}
@@ -1651,11 +1651,11 @@ static bool virtio_blk_vq_aio_context_init(VirtIOBlock *s, Error **errp)
}
/*
- * If dataplane is (re-)enabled while the guest is running there could
+ * If ioeventfd is (re-)enabled while the guest is running there could
* be block jobs that can conflict.
*/
if (blk_op_is_blocked(conf->conf.blk, BLOCK_OP_TYPE_DATAPLANE, errp)) {
- error_prepend(errp, "cannot start virtio-blk dataplane: ");
+ error_prepend(errp, "cannot start virtio-blk ioeventfd: ");
return false;
}
}
@@ -1688,7 +1688,7 @@ static void virtio_blk_vq_aio_context_cleanup(VirtIOBlock *s)
{
VirtIOBlkConf *conf = &s->conf;
- assert(!s->dataplane_started);
+ assert(!s->ioeventfd_started);
if (conf->iothread_vq_mapping_list) {
IOThreadVirtQueueMappingList *node;
@@ -1708,7 +1708,7 @@ static void virtio_blk_vq_aio_context_cleanup(VirtIOBlock *s)
}
/* Context: BQL held */
-static int virtio_blk_data_plane_start(VirtIODevice *vdev)
+static int virtio_blk_start_ioeventfd(VirtIODevice *vdev)
{
VirtIOBlock *s = VIRTIO_BLK(vdev);
BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(s)));
@@ -1718,11 +1718,11 @@ static int virtio_blk_data_plane_start(VirtIODevice *vdev)
Error *local_err = NULL;
int r;
- if (s->dataplane_started || s->dataplane_starting) {
+ if (s->ioeventfd_started || s->ioeventfd_starting) {
return 0;
}
- s->dataplane_starting = true;
+ s->ioeventfd_starting = true;
/* Set up guest notifier (irq) */
r = k->set_guest_notifiers(qbus->parent, nvqs, true);
@@ -1773,14 +1773,14 @@ static int virtio_blk_data_plane_start(VirtIODevice *vdev)
/*
* These fields must be visible to the IOThread when it processes the
- * virtqueue, otherwise it will think dataplane has not started yet.
+ * virtqueue, otherwise it will think ioeventfd has not started yet.
*
- * Make sure ->dataplane_started is false when blk_set_aio_context() is
+ * Make sure ->ioeventfd_started is false when blk_set_aio_context() is
* called above so that draining does not cause the host notifier to be
* detached/attached prematurely.
*/
- s->dataplane_starting = false;
- s->dataplane_started = true;
+ s->ioeventfd_starting = false;
+ s->ioeventfd_started = true;
smp_wmb(); /* paired with aio_notify_accept() on the read side */
/* Get this show started by hooking up our callbacks */
@@ -1812,8 +1812,8 @@ static int virtio_blk_data_plane_start(VirtIODevice *vdev)
fail_host_notifiers:
k->set_guest_notifiers(qbus->parent, nvqs, false);
fail_guest_notifiers:
- s->dataplane_disabled = true;
- s->dataplane_starting = false;
+ s->ioeventfd_disabled = true;
+ s->ioeventfd_starting = false;
return -ENOSYS;
}
@@ -1821,7 +1821,7 @@ static int virtio_blk_data_plane_start(VirtIODevice *vdev)
*
* Context: BH in IOThread
*/
-static void virtio_blk_data_plane_stop_vq_bh(void *opaque)
+static void virtio_blk_ioeventfd_stop_vq_bh(void *opaque)
{
VirtQueue *vq = opaque;
EventNotifier *host_notifier = virtio_queue_get_host_notifier(vq);
@@ -1836,7 +1836,7 @@ static void virtio_blk_data_plane_stop_vq_bh(void *opaque)
}
/* Context: BQL held */
-static void virtio_blk_data_plane_stop(VirtIODevice *vdev)
+static void virtio_blk_stop_ioeventfd(VirtIODevice *vdev)
{
VirtIOBlock *s = VIRTIO_BLK(vdev);
BusState *qbus = qdev_get_parent_bus(DEVICE(s));
@@ -1844,24 +1844,24 @@ static void virtio_blk_data_plane_stop(VirtIODevice *vdev)
unsigned i;
unsigned nvqs = s->conf.num_queues;
- if (!s->dataplane_started || s->dataplane_stopping) {
+ if (!s->ioeventfd_started || s->ioeventfd_stopping) {
return;
}
/* Better luck next time. */
- if (s->dataplane_disabled) {
- s->dataplane_disabled = false;
- s->dataplane_started = false;
+ if (s->ioeventfd_disabled) {
+ s->ioeventfd_disabled = false;
+ s->ioeventfd_started = false;
return;
}
- s->dataplane_stopping = true;
+ s->ioeventfd_stopping = true;
if (!blk_in_drain(s->conf.conf.blk)) {
for (i = 0; i < nvqs; i++) {
VirtQueue *vq = virtio_get_queue(vdev, i);
AioContext *ctx = s->vq_aio_context[i];
- aio_wait_bh_oneshot(ctx, virtio_blk_data_plane_stop_vq_bh, vq);
+ aio_wait_bh_oneshot(ctx, virtio_blk_ioeventfd_stop_vq_bh, vq);
}
}
@@ -1886,10 +1886,10 @@ static void virtio_blk_data_plane_stop(VirtIODevice *vdev)
}
/*
- * Set ->dataplane_started to false before draining so that host notifiers
+ * Set ->ioeventfd_started to false before draining so that host notifiers
* are not detached/attached anymore.
*/
- s->dataplane_started = false;
+ s->ioeventfd_started = false;
/* Wait for virtio_blk_dma_restart_bh() and in flight I/O to complete */
blk_drain(s->conf.conf.blk);
@@ -1903,7 +1903,7 @@ static void virtio_blk_data_plane_stop(VirtIODevice *vdev)
/* Clean up guest notifier (irq) */
k->set_guest_notifiers(qbus->parent, nvqs, false);
- s->dataplane_stopping = false;
+ s->ioeventfd_stopping = false;
}
static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
@@ -2011,9 +2011,9 @@ static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
}
qemu_coroutine_inc_pool_size(conf->num_queues * conf->queue_size / 2);
- /* Don't start dataplane if transport does not support notifiers. */
+ /* Don't start ioeventfd if transport does not support notifiers. */
if (!virtio_device_ioeventfd_enabled(vdev)) {
- s->dataplane_disabled = true;
+ s->ioeventfd_disabled = true;
}
virtio_blk_vq_aio_context_init(s, &err);
@@ -2137,8 +2137,8 @@ static void virtio_blk_class_init(ObjectClass *klass, void *data)
vdc->reset = virtio_blk_reset;
vdc->save = virtio_blk_save_device;
vdc->load = virtio_blk_load_device;
- vdc->start_ioeventfd = virtio_blk_data_plane_start;
- vdc->stop_ioeventfd = virtio_blk_data_plane_stop;
+ vdc->start_ioeventfd = virtio_blk_start_ioeventfd;
+ vdc->stop_ioeventfd = virtio_blk_stop_ioeventfd;
}
static const TypeInfo virtio_blk_info = {
diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
index fecffdc303..833a9a344f 100644
--- a/include/hw/virtio/virtio-blk.h
+++ b/include/hw/virtio/virtio-blk.h
@@ -60,10 +60,10 @@ struct VirtIOBlock {
unsigned short sector_mask;
bool original_wce;
VMChangeStateEntry *change;
- bool dataplane_disabled;
- bool dataplane_started;
- bool dataplane_starting;
- bool dataplane_stopping;
+ bool ioeventfd_disabled;
+ bool ioeventfd_started;
+ bool ioeventfd_starting;
+ bool ioeventfd_stopping;
/*
* The AioContext for each virtqueue. The BlockDriverState will use the
--
2.39.3


@@ -0,0 +1,106 @@
From 9311035821b3fea3f78c7f06ddb8a3861584f907 Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Fri, 19 Jan 2024 08:57:46 -0500
Subject: [PATCH 17/22] virtio-blk: restart s->rq reqs in vq AioContexts
RH-Author: Stefan Hajnoczi <stefanha@redhat.com>
RH-MergeRequest: 219: virtio-blk: add iothread-vq-mapping parameter
RH-Jira: RHEL-17369 RHEL-20764 RHEL-7356
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Acked-by: Hanna Czenczek <hreitz@redhat.com>
RH-Commit: [13/17] cf5ad0352a78458ffc7588f967963f62b267fd64 (stefanha/centos-stream-qemu-kvm)
A virtio-blk device with the iothread-vq-mapping parameter has
per-virtqueue AioContexts. It is not thread-safe to process s->rq
requests in the BlockBackend AioContext since that may be different from
the virtqueue's AioContext to which this request belongs. The code
currently races and could crash.
Adapt virtio_blk_dma_restart_cb() to first split s->rq into per-vq lists
and then schedule a BH in each vq's AioContext as necessary. This way
requests are safely processed in their vq's AioContext.
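The split step itself is plain linked-list partitioning. A standalone illustration, where Req stands in for VirtIOBlockReq and vq_idx for virtio_get_queue_index(rq->vq); the per-list BH scheduling is omitted:

#include <stdio.h>

typedef struct Req {
    unsigned vq_idx;
    struct Req *next;
} Req;

int main(void)
{
    enum { NUM_QUEUES = 4 };
    Req reqs[] = { {2, NULL}, {0, NULL}, {2, NULL}, {1, NULL} };
    Req *all = NULL;
    Req *per_vq[NUM_QUEUES] = { NULL };

    /* Build the device-wide list, like s->rq */
    for (unsigned i = 0; i < sizeof(reqs) / sizeof(reqs[0]); i++) {
        reqs[i].next = all;
        all = &reqs[i];
    }

    /* Partition into per-vq lists, as the patch does before scheduling
     * one BH per non-empty list in that vq's AioContext */
    while (all) {
        Req *next = all->next;
        all->next = per_vq[all->vq_idx];
        per_vq[all->vq_idx] = all;
        all = next;
    }

    for (unsigned i = 0; i < NUM_QUEUES; i++) {
        printf("vq %u: ", i);
        for (Req *r = per_vq[i]; r; r = r->next) {
            printf("req ");
        }
        printf("\n");
    }
    return 0;
}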
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20240119135748.270944-5-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 71ee0cdd14cc01a8b51aa4e9577dd0a1bb2f8e19)
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/block/virtio-blk.c | 44 ++++++++++++++++++++++++++++++++-----------
1 file changed, 33 insertions(+), 11 deletions(-)
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 08c566946a..f48ce5cbb8 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -1156,16 +1156,11 @@ static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
static void virtio_blk_dma_restart_bh(void *opaque)
{
- VirtIOBlock *s = opaque;
+ VirtIOBlockReq *req = opaque;
+ VirtIOBlock *s = req->dev; /* we're called with at least one request */
- VirtIOBlockReq *req;
MultiReqBuffer mrb = {};
- WITH_QEMU_LOCK_GUARD(&s->rq_lock) {
- req = s->rq;
- s->rq = NULL;
- }
-
while (req) {
VirtIOBlockReq *next = req->next;
if (virtio_blk_handle_request(req, &mrb)) {
@@ -1195,16 +1190,43 @@ static void virtio_blk_dma_restart_cb(void *opaque, bool running,
RunState state)
{
VirtIOBlock *s = opaque;
+ uint16_t num_queues = s->conf.num_queues;
if (!running) {
return;
}
- /* Paired with dec in virtio_blk_dma_restart_bh() */
- blk_inc_in_flight(s->conf.conf.blk);
+ /* Split the device-wide s->rq request list into per-vq request lists */
+ g_autofree VirtIOBlockReq **vq_rq = g_new0(VirtIOBlockReq *, num_queues);
+ VirtIOBlockReq *rq;
+
+ WITH_QEMU_LOCK_GUARD(&s->rq_lock) {
+ rq = s->rq;
+ s->rq = NULL;
+ }
+
+ while (rq) {
+ VirtIOBlockReq *next = rq->next;
+ uint16_t idx = virtio_get_queue_index(rq->vq);
+
+ rq->next = vq_rq[idx];
+ vq_rq[idx] = rq;
+ rq = next;
+ }
- aio_bh_schedule_oneshot(blk_get_aio_context(s->conf.conf.blk),
- virtio_blk_dma_restart_bh, s);
+ /* Schedule a BH to submit the requests in each vq's AioContext */
+ for (uint16_t i = 0; i < num_queues; i++) {
+ if (!vq_rq[i]) {
+ continue;
+ }
+
+ /* Paired with dec in virtio_blk_dma_restart_bh() */
+ blk_inc_in_flight(s->conf.conf.blk);
+
+ aio_bh_schedule_oneshot(s->vq_aio_context[i],
+ virtio_blk_dma_restart_bh,
+ vq_rq[i]);
+ }
}
static void virtio_blk_reset(VirtIODevice *vdev)
--
2.39.3


@@ -0,0 +1,72 @@
From 282cebc22987958d11efc76e4f6ddb9601e709d9 Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Fri, 19 Jan 2024 08:57:47 -0500
Subject: [PATCH 18/22] virtio-blk: tolerate failure to set BlockBackend
AioContext
RH-Author: Stefan Hajnoczi <stefanha@redhat.com>
RH-MergeRequest: 219: virtio-blk: add iothread-vq-mapping parameter
RH-Jira: RHEL-17369 RHEL-20764 RHEL-7356
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Acked-by: Hanna Czenczek <hreitz@redhat.com>
RH-Commit: [14/17] edb113ce9fea0c1a88ae7b5d61c35c1981e6993f (stefanha/centos-stream-qemu-kvm)
We no longer rely on setting the AioContext since the block layer
IO_CODE APIs can be called from any thread. Now it's just a hint to help
block jobs and other operations co-locate themselves in a thread with
the guest I/O requests. Keep going if setting the AioContext fails.
Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20240119135748.270944-6-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit ea0736d7f84ead109a6b701427991828f97724c3)
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/block/virtio-blk.c | 19 +++++--------------
1 file changed, 5 insertions(+), 14 deletions(-)
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index f48ce5cbb8..81de06c9f6 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -1786,11 +1786,14 @@ static int virtio_blk_start_ioeventfd(VirtIODevice *vdev)
memory_region_transaction_commit();
+ /*
+ * Try to change the AioContext so that block jobs and other operations can
+ * co-locate their activity in the same AioContext. If it fails, nevermind.
+ */
r = blk_set_aio_context(s->conf.conf.blk, s->vq_aio_context[0],
&local_err);
if (r < 0) {
- error_report_err(local_err);
- goto fail_aio_context;
+ warn_report_err(local_err);
}
/*
@@ -1819,18 +1822,6 @@ static int virtio_blk_start_ioeventfd(VirtIODevice *vdev)
}
return 0;
- fail_aio_context:
- memory_region_transaction_begin();
-
- for (i = 0; i < nvqs; i++) {
- virtio_bus_set_host_notifier(VIRTIO_BUS(qbus), i, false);
- }
-
- memory_region_transaction_commit();
-
- for (i = 0; i < nvqs; i++) {
- virtio_bus_cleanup_host_notifier(VIRTIO_BUS(qbus), i);
- }
fail_host_notifiers:
k->set_guest_notifiers(qbus->parent, nvqs, false);
fail_guest_notifiers:
--
2.39.3


@@ -0,0 +1,70 @@
From 94bccae527f1ab8328cc7692532046d700e2ca71 Mon Sep 17 00:00:00 2001
From: David Hildenbrand <david@redhat.com>
Date: Mon, 5 Feb 2024 19:27:07 +0100
Subject: [PATCH 22/22] virtio-mem: default-enable "dynamic-memslots"
RH-Author: David Hildenbrand <david@redhat.com>
RH-MergeRequest: 220: virtio-mem: default-enable "dynamic-memslots"
RH-Jira: RHEL-24045
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [1/1] d9a60acd7de1d8703ea3ca938e388e19f31f5347
JIRA: https://issues.redhat.com/browse/RHEL-24045
Upstream: RHEL only
We only support selected vhost-user devices in combination with
virtio-mem in RHEL. One device that works well is virtiofsd; devices that
are currently incompatible include DPDK and SPDK.
The vhost devices we support must be compatible with the dynamic-memslots
feature (i.e., support at least 509 memslots and support dynamically adding/
removing memslots), such that setting "dynamic-memslots=on" will work as
expected and not make certain QEMU command lines or hotplug of vhost-user
devices bail out.
Let's set "dynamic-memslots=on" starting with RHEL 9.4, so we
get the benefits (i.e., reduced metadata consumption in KVM, majority of
unplugged memory being inaccessible) as default.
When wanting to run virtio-mem with incompatible vhost-user devices, it
might just work (if the vhost-user device is created before the
virtio-mem device), or the feature can be manually disabled by
specifying "dynamic-memslots=off".
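On the command line, opting back out might look like the sketch below; only the dynamic-memslots property comes from this change, while the memory backend, size, and IDs are illustrative:

qemu-system-x86_64 ... \
    -object memory-backend-ram,id=mem0,size=8G \
    -device virtio-mem-pci,id=vmem0,memdev=mem0,dynamic-memslots=off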
Signed-off-by: David Hildenbrand <david@redhat.com>
---
hw/core/machine.c | 2 ++
hw/virtio/virtio-mem.c | 3 ++-
2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/hw/core/machine.c b/hw/core/machine.c
index 446601ee30..309f6ba685 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -78,6 +78,8 @@ GlobalProperty hw_compat_rhel_9_4[] = {
{ "vfio-pci-nohotplug", "x-ramfb-migrate", "off" },
/* hw_compat_rhel_9_4 from hw_compat_8_1 */
{ "igb", "x-pcie-flr-init", "off" },
+ /* hw_compat_rhel_9_4 jira RHEL-24045 */
+ { "virtio-mem", "dynamic-memslots", "off" },
};
const size_t hw_compat_rhel_9_4_len = G_N_ELEMENTS(hw_compat_rhel_9_4);
diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
index 75ee38aa46..00ca91e8fe 100644
--- a/hw/virtio/virtio-mem.c
+++ b/hw/virtio/virtio-mem.c
@@ -1696,8 +1696,9 @@ static Property virtio_mem_properties[] = {
#endif
DEFINE_PROP_BOOL(VIRTIO_MEM_EARLY_MIGRATION_PROP, VirtIOMEM,
early_migration, true),
+ /* RHEL: default-enable "dynamic-memslots" (jira RHEL-24045) */
DEFINE_PROP_BOOL(VIRTIO_MEM_DYNAMIC_MEMSLOTS_PROP, VirtIOMEM,
- dynamic_memslots, false),
+ dynamic_memslots, true),
DEFINE_PROP_END_OF_LIST(),
};
--
2.39.3


@@ -149,7 +149,7 @@ Obsoletes: %{name}-block-ssh <= %{epoch}:%{version} \
Summary: QEMU is a machine emulator and virtualizer
Name: qemu-kvm
Version: 8.2.0
-Release: 4%{?rcrel}%{?dist}%{?cc_suffix}
+Release: 5%{?rcrel}%{?dist}%{?cc_suffix}
# Epoch because we pushed a qemu-1.0 package. AIUI this can't ever be dropped
# Epoch 15 used for RHEL 8
# Epoch 17 used for RHEL 9 (due to release versioning offset in RHEL 8.5)
@@ -458,6 +458,84 @@ Patch116: kvm-include-ui-rect.h-fix-qemu_rect_init-mis-assignment.patch
Patch117: kvm-virtio-gpu-block-migration-of-VMs-with-blob-true.patch
# For RHEL-21293 - [emulated igb] Failed to set up TRIGGER eventfd signaling for interrupt INTX-0: VFIO_DEVICE_SET_IRQS failure: Invalid argument
Patch118: kvm-vfio-pci-Clear-MSI-X-IRQ-index-always.patch
# For RHEL-20341 - memory-device size alignment check invalid in QEMU 8.2
Patch119: kvm-hv-balloon-use-get_min_alignment-to-express-32-GiB-a.patch
# For RHEL-20341 - memory-device size alignment check invalid in QEMU 8.2
Patch120: kvm-memory-device-reintroduce-memory-region-size-check.patch
# For RHEL-24593 - qemu crash blk_get_aio_context(BlockBackend *): Assertion `ctx == blk->ctx' when repeatedly hotplug/unplug disk
Patch121: kvm-block-backend-Allow-concurrent-context-changes.patch
# For RHEL-24593 - qemu crash blk_get_aio_context(BlockBackend *): Assertion `ctx == blk->ctx' when repeatedly hotplug/unplug disk
Patch122: kvm-scsi-Await-request-purging.patch
# For RHEL-17369 - [nfv virt][rt][post-copy migration] qemu-kvm: ../block/qcow2.c:5263: ImageInfoSpecific *qcow2_get_specific_info(BlockDriverState *, Error **): Assertion `false' failed.
# For RHEL-20764 - [qemu-kvm] Enable qemu multiqueue block layer support
# For RHEL-7356 - [qemu-kvm] no response with QMP command device_add when repeatedly hotplug/unplug virtio disks [RHEL-9]
Patch123: kvm-string-output-visitor-show-structs-as-omitted.patch
# For RHEL-17369 - [nfv virt][rt][post-copy migration] qemu-kvm: ../block/qcow2.c:5263: ImageInfoSpecific *qcow2_get_specific_info(BlockDriverState *, Error **): Assertion `false' failed.
# For RHEL-20764 - [qemu-kvm] Enable qemu multiqueue block layer support
# For RHEL-7356 - [qemu-kvm] no response with QMP command device_add when repeatedly hotplug/unplug virtio disks [RHEL-9]
Patch124: kvm-string-output-visitor-Fix-pseudo-struct-handling.patch
# For RHEL-17369 - [nfv virt][rt][post-copy migration] qemu-kvm: ../block/qcow2.c:5263: ImageInfoSpecific *qcow2_get_specific_info(BlockDriverState *, Error **): Assertion `false' failed.
# For RHEL-20764 - [qemu-kvm] Enable qemu multiqueue block layer support
# For RHEL-7356 - [qemu-kvm] no response with QMP command device_add when repeatedly hotplug/unplug virtio disks [RHEL-9]
Patch125: kvm-qdev-properties-alias-all-object-class-properties.patch
# For RHEL-17369 - [nfv virt][rt][post-copy migration] qemu-kvm: ../block/qcow2.c:5263: ImageInfoSpecific *qcow2_get_specific_info(BlockDriverState *, Error **): Assertion `false' failed.
# For RHEL-20764 - [qemu-kvm] Enable qemu multiqueue block layer support
# For RHEL-7356 - [qemu-kvm] no response with QMP command device_add when repeatedly hotplug/unplug virtio disks [RHEL-9]
Patch126: kvm-qdev-add-IOThreadVirtQueueMappingList-property-type.patch
# For RHEL-17369 - [nfv virt][rt][post-copy migration] qemu-kvm: ../block/qcow2.c:5263: ImageInfoSpecific *qcow2_get_specific_info(BlockDriverState *, Error **): Assertion `false' failed.
# For RHEL-20764 - [qemu-kvm] Enable qemu multiqueue block layer support
# For RHEL-7356 - [qemu-kvm] no response with QMP command device_add when repeatedly hotplug/unplug virtio disks [RHEL-9]
Patch127: kvm-virtio-blk-add-iothread-vq-mapping-parameter.patch
# For RHEL-17369 - [nfv virt][rt][post-copy migration] qemu-kvm: ../block/qcow2.c:5263: ImageInfoSpecific *qcow2_get_specific_info(BlockDriverState *, Error **): Assertion `false' failed.
# For RHEL-20764 - [qemu-kvm] Enable qemu multiqueue block layer support
# For RHEL-7356 - [qemu-kvm] no response with QMP command device_add when repeatedly hotplug/unplug virtio disks [RHEL-9]
Patch128: kvm-virtio-blk-Fix-potential-nullpointer-read-access-in-.patch
# For RHEL-17369 - [nfv virt][rt][post-copy migration] qemu-kvm: ../block/qcow2.c:5263: ImageInfoSpecific *qcow2_get_specific_info(BlockDriverState *, Error **): Assertion `false' failed.
# For RHEL-20764 - [qemu-kvm] Enable qemu multiqueue block layer support
# For RHEL-7356 - [qemu-kvm] no response with QMP command device_add when repeatedly hotplug/unplug virtio disks [RHEL-9]
Patch129: kvm-iotests-add-filter_qmp_generated_node_ids.patch
# For RHEL-17369 - [nfv virt][rt][post-copy migration] qemu-kvm: ../block/qcow2.c:5263: ImageInfoSpecific *qcow2_get_specific_info(BlockDriverState *, Error **): Assertion `false' failed.
# For RHEL-20764 - [qemu-kvm] Enable qemu multiqueue block layer support
# For RHEL-7356 - [qemu-kvm] no response with QMP command device_add when repeatedly hotplug/unplug virtio disks [RHEL-9]
Patch130: kvm-iotests-port-141-to-Python-for-reliable-QMP-testing.patch
# For RHEL-17369 - [nfv virt][rt][post-copy migration] qemu-kvm: ../block/qcow2.c:5263: ImageInfoSpecific *qcow2_get_specific_info(BlockDriverState *, Error **): Assertion `false' failed.
# For RHEL-20764 - [qemu-kvm] Enable qemu multiqueue block layer support
# For RHEL-7356 - [qemu-kvm] no response with QMP command device_add when repeatedly hotplug/unplug virtio disks [RHEL-9]
Patch131: kvm-monitor-only-run-coroutine-commands-in-qemu_aio_cont.patch
# For RHEL-17369 - [nfv virt][rt][post-copy migration] qemu-kvm: ../block/qcow2.c:5263: ImageInfoSpecific *qcow2_get_specific_info(BlockDriverState *, Error **): Assertion `false' failed.
# For RHEL-20764 - [qemu-kvm] Enable qemu multiqueue block layer support
# For RHEL-7356 - [qemu-kvm] no response with QMP command device_add when repeatedly hotplug/unplug virtio disks [RHEL-9]
Patch132: kvm-virtio-blk-move-dataplane-code-into-virtio-blk.c.patch
# For RHEL-17369 - [nfv virt][rt][post-copy migration] qemu-kvm: ../block/qcow2.c:5263: ImageInfoSpecific *qcow2_get_specific_info(BlockDriverState *, Error **): Assertion `false' failed.
# For RHEL-20764 - [qemu-kvm] Enable qemu multiqueue block layer support
# For RHEL-7356 - [qemu-kvm] no response with QMP command device_add when repeatedly hotplug/unplug virtio disks [RHEL-9]
Patch133: kvm-virtio-blk-rename-dataplane-create-destroy-functions.patch
# For RHEL-17369 - [nfv virt][rt][post-copy migration] qemu-kvm: ../block/qcow2.c:5263: ImageInfoSpecific *qcow2_get_specific_info(BlockDriverState *, Error **): Assertion `false' failed.
# For RHEL-20764 - [qemu-kvm] Enable qemu multiqueue block layer support
# For RHEL-7356 - [qemu-kvm] no response with QMP command device_add when repeatedly hotplug/unplug virtio disks [RHEL-9]
Patch134: kvm-virtio-blk-rename-dataplane-to-ioeventfd.patch
# For RHEL-17369 - [nfv virt][rt][post-copy migration] qemu-kvm: ../block/qcow2.c:5263: ImageInfoSpecific *qcow2_get_specific_info(BlockDriverState *, Error **): Assertion `false' failed.
# For RHEL-20764 - [qemu-kvm] Enable qemu multiqueue block layer support
# For RHEL-7356 - [qemu-kvm] no response with QMP command device_add when repeatedly hotplug/unplug virtio disks [RHEL-9]
Patch135: kvm-virtio-blk-restart-s-rq-reqs-in-vq-AioContexts.patch
# For RHEL-17369 - [nfv virt][rt][post-copy migration] qemu-kvm: ../block/qcow2.c:5263: ImageInfoSpecific *qcow2_get_specific_info(BlockDriverState *, Error **): Assertion `false' failed.
# For RHEL-20764 - [qemu-kvm] Enable qemu multiqueue block layer support
# For RHEL-7356 - [qemu-kvm] no response with QMP command device_add when repeatedly hotplug/unplug virtio disks [RHEL-9]
Patch136: kvm-virtio-blk-tolerate-failure-to-set-BlockBackend-AioC.patch
# For RHEL-17369 - [nfv virt][rt][post-copy migration] qemu-kvm: ../block/qcow2.c:5263: ImageInfoSpecific *qcow2_get_specific_info(BlockDriverState *, Error **): Assertion `false' failed.
# For RHEL-20764 - [qemu-kvm] Enable qemu multiqueue block layer support
# For RHEL-7356 - [qemu-kvm] no response with QMP command device_add when repeatedly hotplug/unplug virtio disks [RHEL-9]
Patch137: kvm-virtio-blk-always-set-ioeventfd-during-startup.patch
# For RHEL-17369 - [nfv virt][rt][post-copy migration] qemu-kvm: ../block/qcow2.c:5263: ImageInfoSpecific *qcow2_get_specific_info(BlockDriverState *, Error **): Assertion `false' failed.
# For RHEL-20764 - [qemu-kvm] Enable qemu multiqueue block layer support
# For RHEL-7356 - [qemu-kvm] no response with QMP command device_add when repeatedly hotplug/unplug virtio disks [RHEL-9]
Patch138: kvm-tests-unit-Bump-test-replication-timeout-to-60-secon.patch
# For RHEL-17369 - [nfv virt][rt][post-copy migration] qemu-kvm: ../block/qcow2.c:5263: ImageInfoSpecific *qcow2_get_specific_info(BlockDriverState *, Error **): Assertion `false' failed.
# For RHEL-20764 - [qemu-kvm] Enable qemu multiqueue block layer support
# For RHEL-7356 - [qemu-kvm] no response with QMP command device_add when repeatedly hotplug/unplug virtio disks [RHEL-9]
Patch139: kvm-iotests-iothreads-stream-Use-the-right-TimeoutError.patch
# For RHEL-24045 - QEMU: default-enable dynamically using multiple memslots for virtio-mem
Patch140: kvm-virtio-mem-default-enable-dynamic-memslots.patch
%if %{have_clang}
BuildRequires: clang
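
Patches 123 through 139 are the virtio-blk multiqueue series tracked by RHEL-17369, RHEL-20764 and RHEL-7356; the user-visible piece is the iothread-vq-mapping property that Patch127 adds to virtio-blk. A minimal sketch of the intended use (disk path, object IDs and queue count are invented for the example; the JSON form of -device is needed because the property takes a list):

    # Spread four virtio-blk virtqueues across two IOThreads.
    # Paths and IDs are illustrative, not package defaults.
    qemu-kvm \
      -object iothread,id=iot0 \
      -object iothread,id=iot1 \
      -blockdev file,filename=/var/lib/libvirt/images/disk.img,node-name=disk0 \
      -device '{"driver": "virtio-blk-pci", "drive": "disk0", "num-queues": 4,
        "iothread-vq-mapping": [{"iothread": "iot0"}, {"iothread": "iot1"}]}'

When no explicit "vqs" lists are given in the mapping entries, virtqueues are assigned to the listed IOThreads round-robin, so the four queues above alternate between iot0 and iot1.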
@@ -1519,6 +1597,42 @@ useradd -r -u 107 -g qemu -G kvm -d / -s /sbin/nologin \
 %endif
 
 %changelog
+* Mon Feb 12 2024 Miroslav Rezanina <mrezanin@redhat.com> - 8.2.0-5
+- kvm-hv-balloon-use-get_min_alignment-to-express-32-GiB-a.patch [RHEL-20341]
+- kvm-memory-device-reintroduce-memory-region-size-check.patch [RHEL-20341]
+- kvm-block-backend-Allow-concurrent-context-changes.patch [RHEL-24593]
+- kvm-scsi-Await-request-purging.patch [RHEL-24593]
+- kvm-string-output-visitor-show-structs-as-omitted.patch [RHEL-17369 RHEL-20764 RHEL-7356]
+- kvm-string-output-visitor-Fix-pseudo-struct-handling.patch [RHEL-17369 RHEL-20764 RHEL-7356]
+- kvm-qdev-properties-alias-all-object-class-properties.patch [RHEL-17369 RHEL-20764 RHEL-7356]
+- kvm-qdev-add-IOThreadVirtQueueMappingList-property-type.patch [RHEL-17369 RHEL-20764 RHEL-7356]
+- kvm-virtio-blk-add-iothread-vq-mapping-parameter.patch [RHEL-17369 RHEL-20764 RHEL-7356]
+- kvm-virtio-blk-Fix-potential-nullpointer-read-access-in-.patch [RHEL-17369 RHEL-20764 RHEL-7356]
+- kvm-iotests-add-filter_qmp_generated_node_ids.patch [RHEL-17369 RHEL-20764 RHEL-7356]
+- kvm-iotests-port-141-to-Python-for-reliable-QMP-testing.patch [RHEL-17369 RHEL-20764 RHEL-7356]
+- kvm-monitor-only-run-coroutine-commands-in-qemu_aio_cont.patch [RHEL-17369 RHEL-20764 RHEL-7356]
+- kvm-virtio-blk-move-dataplane-code-into-virtio-blk.c.patch [RHEL-17369 RHEL-20764 RHEL-7356]
+- kvm-virtio-blk-rename-dataplane-create-destroy-functions.patch [RHEL-17369 RHEL-20764 RHEL-7356]
+- kvm-virtio-blk-rename-dataplane-to-ioeventfd.patch [RHEL-17369 RHEL-20764 RHEL-7356]
+- kvm-virtio-blk-restart-s-rq-reqs-in-vq-AioContexts.patch [RHEL-17369 RHEL-20764 RHEL-7356]
+- kvm-virtio-blk-tolerate-failure-to-set-BlockBackend-AioC.patch [RHEL-17369 RHEL-20764 RHEL-7356]
+- kvm-virtio-blk-always-set-ioeventfd-during-startup.patch [RHEL-17369 RHEL-20764 RHEL-7356]
+- kvm-tests-unit-Bump-test-replication-timeout-to-60-secon.patch [RHEL-17369 RHEL-20764 RHEL-7356]
+- kvm-iotests-iothreads-stream-Use-the-right-TimeoutError.patch [RHEL-17369 RHEL-20764 RHEL-7356]
+- kvm-virtio-mem-default-enable-dynamic-memslots.patch [RHEL-24045]
+- Resolves: RHEL-20341
+  (memory-device size alignment check invalid in QEMU 8.2)
+- Resolves: RHEL-24593
+  (qemu crash blk_get_aio_context(BlockBackend *): Assertion `ctx == blk->ctx' when repeatedly hotplug/unplug disk)
+- Resolves: RHEL-17369
+  ([nfv virt][rt][post-copy migration] qemu-kvm: ../block/qcow2.c:5263: ImageInfoSpecific *qcow2_get_specific_info(BlockDriverState *, Error **): Assertion `false' failed.)
+- Resolves: RHEL-20764
+  ([qemu-kvm] Enable qemu multiqueue block layer support)
+- Resolves: RHEL-7356
+  ([qemu-kvm] no response with QMP command device_add when repeatedly hotplug/unplug virtio disks [RHEL-9])
+- Resolves: RHEL-24045
+  (QEMU: default-enable dynamically using multiple memslots for virtio-mem)
+
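
Once this build is installed, the entry added above becomes the top of the package changelog and can be checked with plain rpm (hypothetical host output, truncated):

    $ rpm -q --changelog qemu-kvm | head -n 3
    * Mon Feb 12 2024 Miroslav Rezanina <mrezanin@redhat.com> - 8.2.0-5
    - kvm-hv-balloon-use-get_min_alignment-to-express-32-GiB-a.patch [RHEL-20341]
    - kvm-memory-device-reintroduce-memory-region-size-check.patch [RHEL-20341]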
 * Tue Jan 30 2024 Miroslav Rezanina <mrezanin@redhat.com> - 8.2.0-4
 - kvm-vfio-pci-Clear-MSI-X-IRQ-index-always.patch [RHEL-21293]
 - Resolves: RHEL-21293