* Thu May 19 2022 Miroslav Rezanina <mrezanin@redhat.com> - 7.0.0-4

- kvm-qapi-machine.json-Add-cluster-id.patch [bz#2041823]
- kvm-qtest-numa-test-Specify-CPU-topology-in-aarch64_numa.patch [bz#2041823]
- kvm-hw-arm-virt-Consider-SMP-configuration-in-CPU-topolo.patch [bz#2041823]
- kvm-qtest-numa-test-Correct-CPU-and-NUMA-association-in-.patch [bz#2041823]
- kvm-hw-arm-virt-Fix-CPU-s-default-NUMA-node-ID.patch [bz#2041823]
- kvm-hw-acpi-aml-build-Use-existing-CPU-topology-to-build.patch [bz#2041823]
- kvm-coroutine-Rename-qemu_coroutine_inc-dec_pool_size.patch [bz#2079938]
- kvm-coroutine-Revert-to-constant-batch-size.patch [bz#2079938]
- kvm-virtio-scsi-fix-ctrl-and-event-handler-functions-in-.patch [bz#2079347]
- kvm-virtio-scsi-don-t-waste-CPU-polling-the-event-virtqu.patch [bz#2079347]
- kvm-virtio-scsi-clean-up-virtio_scsi_handle_event_vq.patch [bz#2079347]
- kvm-virtio-scsi-clean-up-virtio_scsi_handle_ctrl_vq.patch [bz#2079347]
- kvm-virtio-scsi-clean-up-virtio_scsi_handle_cmd_vq.patch [bz#2079347]
- kvm-virtio-scsi-move-request-related-items-from-.h-to-.c.patch [bz#2079347]
- kvm-Revert-virtio-scsi-Reject-scsi-cd-if-data-plane-enab.patch [bz#1995710]
- kvm-migration-Fix-operator-type.patch [bz#2064530]
- Resolves: bz#2041823
  ([aarch64][numa] When there are at least 6 Numa nodes serial log shows 'arch topology borken')
- Resolves: bz#2079938
  (qemu coredump when boot with multi disks (qemu) failed to set up stack guard page: Cannot allocate memory)
- Resolves: bz#2079347
  (Guest boot blocked when scsi disks using same iothread and 100% CPU consumption)
- Resolves: bz#1995710
  (RFE: Allow virtio-scsi CD-ROM media change with IOThreads)
- Resolves: bz#2064530
  (Rebuild qemu-kvm with clang-14)
Commit: 550d33ded2 (parent 0b5c35c425)
Author: Miroslav Rezanina <mrezanin@redhat.com>
Date: 2022-05-19 08:13:20 -04:00
17 changed files with 1627 additions and 1 deletions

From 733acef2caea0758edd74fb634b095ce09bf5914 Mon Sep 17 00:00:00 2001
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Date: Mon, 9 May 2022 03:46:23 -0400
Subject: [PATCH 15/16] Revert "virtio-scsi: Reject scsi-cd if data plane
enabled [RHEL only]"
RH-Author: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RH-MergeRequest: 91: Revert "virtio-scsi: Reject scsi-cd if data plane enabled [RHEL only]"
RH-Commit: [1/1] 1af55d792bc9166e5c86272afe8093c76ab41bb4 (eesposit/qemu-kvm)
RH-Bugzilla: 1995710
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
This reverts commit 4e17b1126e.
Over time, AioContext usage and coverage have increased, and the block
backend is now capable of handling an AioContext change upon eject and
insert. Therefore the above downstream-only commit is no longer
necessary and can be safely reverted.
X-downstream-only: true
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
hw/scsi/virtio-scsi.c | 9 ---------
1 file changed, 9 deletions(-)
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 2450c9438c..db54d104be 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -937,15 +937,6 @@ static void virtio_scsi_hotplug(HotplugHandler *hotplug_dev, DeviceState *dev,
AioContext *old_context;
int ret;
- /* XXX: Remove this check once block backend is capable of handling
- * AioContext change upon eject/insert.
- * s->ctx is NULL if ioeventfd is off, s->ctx is qemu_get_aio_context() if
- * data plane is not used, both cases are safe for scsi-cd. */
- if (s->ctx && s->ctx != qemu_get_aio_context() &&
- object_dynamic_cast(OBJECT(dev), "scsi-cd")) {
- error_setg(errp, "scsi-cd is not supported by data plane");
- return;
- }
if (s->ctx && !s->dataplane_fenced) {
if (blk_op_is_blocked(sd->conf.blk, BLOCK_OP_TYPE_DATAPLANE, errp)) {
return;
--
2.31.1
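With this revert applied, a configuration like the following is expected to
accept a scsi-cd on an IOThread-backed virtio-scsi controller instead of
failing with "scsi-cd is not supported by data plane". This is only an
illustrative sketch; the image paths and IDs are placeholders, not taken
from the patch:

  qemu-system-x86_64 -M q35 -accel kvm -m 2G \
    -object iothread,id=iothread0 \
    -device virtio-scsi-pci,id=scsi0,iothread=iothread0 \
    -blockdev file,filename=disk.img,node-name=disk0 \
    -device scsi-hd,drive=disk0 \
    -blockdev file,filename=install.iso,node-name=cd0,read-only=on \
    -device scsi-cd,drive=cd0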

From e3cb8849862a9f0dd20f2913d540336a037d43c7 Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Tue, 10 May 2022 17:10:19 +0200
Subject: [PATCH 07/16] coroutine: Rename qemu_coroutine_inc/dec_pool_size()
RH-Author: Kevin Wolf <kwolf@redhat.com>
RH-MergeRequest: 87: coroutine: Fix crashes due to too large pool batch size
RH-Commit: [1/2] 6389b11f70225f221784c270d9b90c1ea43ca8fb (kmwolf/centos-qemu-kvm)
RH-Bugzilla: 2079938
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Acked-by: Stefano Garzarella <sgarzare@redhat.com>
It's true that these functions currently affect the batch size in which
coroutines are reused (i.e. moved from the global release pool to the
allocation pool of a specific thread), but this is a bug and will be
fixed in a separate patch.
In fact, the comment in the header file already just promises that it
influences the pool size, so reflect this in the name of the functions.
As a nice side effect, the shorter function name makes some line
wrapping unnecessary.
Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20220510151020.105528-2-kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 98e3ab35054b946f7c2aba5408822532b0920b53)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
hw/block/virtio-blk.c | 6 ++----
include/qemu/coroutine.h | 6 +++---
util/qemu-coroutine.c | 4 ++--
3 files changed, 7 insertions(+), 9 deletions(-)
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 540c38f829..6a1cc41877 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -1215,8 +1215,7 @@ static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
for (i = 0; i < conf->num_queues; i++) {
virtio_add_queue(vdev, conf->queue_size, virtio_blk_handle_output);
}
- qemu_coroutine_increase_pool_batch_size(conf->num_queues * conf->queue_size
- / 2);
+ qemu_coroutine_inc_pool_size(conf->num_queues * conf->queue_size / 2);
virtio_blk_data_plane_create(vdev, conf, &s->dataplane, &err);
if (err != NULL) {
error_propagate(errp, err);
@@ -1253,8 +1252,7 @@ static void virtio_blk_device_unrealize(DeviceState *dev)
for (i = 0; i < conf->num_queues; i++) {
virtio_del_queue(vdev, i);
}
- qemu_coroutine_decrease_pool_batch_size(conf->num_queues * conf->queue_size
- / 2);
+ qemu_coroutine_dec_pool_size(conf->num_queues * conf->queue_size / 2);
qemu_del_vm_change_state_handler(s->change);
blockdev_mark_auto_del(s->blk);
virtio_cleanup(vdev);
diff --git a/include/qemu/coroutine.h b/include/qemu/coroutine.h
index c828a95ee0..5b621d1295 100644
--- a/include/qemu/coroutine.h
+++ b/include/qemu/coroutine.h
@@ -334,12 +334,12 @@ void coroutine_fn yield_until_fd_readable(int fd);
/**
* Increase coroutine pool size
*/
-void qemu_coroutine_increase_pool_batch_size(unsigned int additional_pool_size);
+void qemu_coroutine_inc_pool_size(unsigned int additional_pool_size);
/**
- * Devcrease coroutine pool size
+ * Decrease coroutine pool size
*/
-void qemu_coroutine_decrease_pool_batch_size(unsigned int additional_pool_size);
+void qemu_coroutine_dec_pool_size(unsigned int additional_pool_size);
#include "qemu/lockable.h"
diff --git a/util/qemu-coroutine.c b/util/qemu-coroutine.c
index c03b2422ff..faca0ca97c 100644
--- a/util/qemu-coroutine.c
+++ b/util/qemu-coroutine.c
@@ -205,12 +205,12 @@ AioContext *coroutine_fn qemu_coroutine_get_aio_context(Coroutine *co)
return co->ctx;
}
-void qemu_coroutine_increase_pool_batch_size(unsigned int additional_pool_size)
+void qemu_coroutine_inc_pool_size(unsigned int additional_pool_size)
{
qatomic_add(&pool_batch_size, additional_pool_size);
}
-void qemu_coroutine_decrease_pool_batch_size(unsigned int removing_pool_size)
+void qemu_coroutine_dec_pool_size(unsigned int removing_pool_size)
{
qatomic_sub(&pool_batch_size, removing_pool_size);
}
--
2.31.1

From 345107bfd5537b51f34aaeb97d6161858bb6feee Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Tue, 10 May 2022 17:10:20 +0200
Subject: [PATCH 08/16] coroutine: Revert to constant batch size
RH-Author: Kevin Wolf <kwolf@redhat.com>
RH-MergeRequest: 87: coroutine: Fix crashes due to too large pool batch size
RH-Commit: [2/2] 8a8a39af873854cdc8333d1a70f3479a97c3ec7a (kmwolf/centos-qemu-kvm)
RH-Bugzilla: 2079938
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Acked-by: Stefano Garzarella <sgarzare@redhat.com>
Commit 4c41c69e changed the way the coroutine pool is sized because for
virtio-blk devices with a large queue size and heavy I/O, it was just
too small and caused coroutines to be deleted and reallocated soon
afterwards. The change made the size dynamic based on the number of
queues and the queue size of virtio-blk devices.
There are two important numbers here: Slightly simplified, when a
coroutine terminates, it is generally stored in the global release pool
up to a certain pool size, and if the pool is full, it is freed.
Conversely, when allocating a new coroutine, the coroutines in the
release pool are reused if the pool already has reached a certain
minimum size (the batch size), otherwise we allocate new coroutines.
The problem after commit 4c41c69e is that it not only increases the
maximum pool size (which is the intended effect), but also the batch
size for reusing coroutines (which is a bug). It means that in cases
with many devices and/or a large queue size (which defaults to the
number of vcpus for virtio-blk-pci), many thousand coroutines could be
sitting in the release pool without being reused.
This is not only a waste of memory and allocations, but it actually
makes the QEMU process likely to hit the vm.max_map_count limit on
Linux, because each coroutine requires two mappings (its stack and the
guard page for the stack). When the limit is hit, mprotect() starts to
fail with ENOMEM, causing QEMU to abort() in qemu_alloc_stack().
In order to fix the problem, change the batch size back to 64 to avoid
uselessly accumulating coroutines in the release pool, but keep the
dynamic maximum pool size so that coroutines aren't freed too early
in heavy I/O scenarios.
Note that this fix doesn't strictly make it impossible to hit the limit,
but this would only happen if most of the coroutines are actually in use
at the same time, not just sitting in a pool. This is the same behaviour
as we already had before commit 4c41c69e. Fully preventing this would
require allowing qemu_coroutine_create() to return an error, but it
doesn't seem to be a scenario that people hit in practice.
Cc: qemu-stable@nongnu.org
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2079938
Fixes: 4c41c69e05fe28c0f95f8abd2ebf407e95a4f04b
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20220510151020.105528-3-kwolf@redhat.com>
Tested-by: Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 9ec7a59b5aad4b736871c378d30f5ef5ec51cb52)
Conflicts:
util/qemu-coroutine.c
Trivial merge conflict because we don't have commit ac387a08 downstream.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
util/qemu-coroutine.c | 22 ++++++++++++++--------
1 file changed, 14 insertions(+), 8 deletions(-)
diff --git a/util/qemu-coroutine.c b/util/qemu-coroutine.c
index faca0ca97c..804f672e0a 100644
--- a/util/qemu-coroutine.c
+++ b/util/qemu-coroutine.c
@@ -20,14 +20,20 @@
#include "qemu/coroutine_int.h"
#include "block/aio.h"
-/** Initial batch size is 64, and is increased on demand */
+/**
+ * The minimal batch size is always 64, coroutines from the release_pool are
+ * reused as soon as there are 64 coroutines in it. The maximum pool size starts
+ * with 64 and is increased on demand so that coroutines are not deleted even if
+ * they are not immediately reused.
+ */
enum {
- POOL_INITIAL_BATCH_SIZE = 64,
+ POOL_MIN_BATCH_SIZE = 64,
+ POOL_INITIAL_MAX_SIZE = 64,
};
/** Free list to speed up creation */
static QSLIST_HEAD(, Coroutine) release_pool = QSLIST_HEAD_INITIALIZER(pool);
-static unsigned int pool_batch_size = POOL_INITIAL_BATCH_SIZE;
+static unsigned int pool_max_size = POOL_INITIAL_MAX_SIZE;
static unsigned int release_pool_size;
static __thread QSLIST_HEAD(, Coroutine) alloc_pool = QSLIST_HEAD_INITIALIZER(pool);
static __thread unsigned int alloc_pool_size;
@@ -51,7 +57,7 @@ Coroutine *qemu_coroutine_create(CoroutineEntry *entry, void *opaque)
if (CONFIG_COROUTINE_POOL) {
co = QSLIST_FIRST(&alloc_pool);
if (!co) {
- if (release_pool_size > qatomic_read(&pool_batch_size)) {
+ if (release_pool_size > POOL_MIN_BATCH_SIZE) {
/* Slow path; a good place to register the destructor, too. */
if (!coroutine_pool_cleanup_notifier.notify) {
coroutine_pool_cleanup_notifier.notify = coroutine_pool_cleanup;
@@ -88,12 +94,12 @@ static void coroutine_delete(Coroutine *co)
co->caller = NULL;
if (CONFIG_COROUTINE_POOL) {
- if (release_pool_size < qatomic_read(&pool_batch_size) * 2) {
+ if (release_pool_size < qatomic_read(&pool_max_size) * 2) {
QSLIST_INSERT_HEAD_ATOMIC(&release_pool, co, pool_next);
qatomic_inc(&release_pool_size);
return;
}
- if (alloc_pool_size < qatomic_read(&pool_batch_size)) {
+ if (alloc_pool_size < qatomic_read(&pool_max_size)) {
QSLIST_INSERT_HEAD(&alloc_pool, co, pool_next);
alloc_pool_size++;
return;
@@ -207,10 +213,10 @@ AioContext *coroutine_fn qemu_coroutine_get_aio_context(Coroutine *co)
void qemu_coroutine_inc_pool_size(unsigned int additional_pool_size)
{
- qatomic_add(&pool_batch_size, additional_pool_size);
+ qatomic_add(&pool_max_size, additional_pool_size);
}
void qemu_coroutine_dec_pool_size(unsigned int removing_pool_size)
{
- qatomic_sub(&pool_batch_size, removing_pool_size);
+ qatomic_sub(&pool_max_size, removing_pool_size);
}
--
2.31.1
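To make the failure mode concrete, here is a rough back-of-the-envelope
calculation; the device count, queue settings and the vm.max_map_count
default are illustrative assumptions, not values taken from the patch:

  # pre-fix behaviour: num_queues * queue_size / 2 per virtio-blk device
  # also raised the reuse batch size
  8 devices * 16 queues * 1024 entries / 2 = 65536 pooled coroutines

  # two mappings per idle coroutine (stack + guard page)
  65536 * 2 = 131072 mappings  >  vm.max_map_count default of 65530

  # result: mprotect() fails with ENOMEM and QEMU abort()s in qemu_alloc_stack()

With the batch size pinned back at 64, coroutines beyond the maximum pool
size are freed instead of accumulating, so the mapping count stays
proportional to the number of coroutines actually in flight.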

From 8a12049e97149056f61f7748d9869606d282d16e Mon Sep 17 00:00:00 2001
From: Gavin Shan <gshan@redhat.com>
Date: Wed, 11 May 2022 18:01:35 +0800
Subject: [PATCH 06/16] hw/acpi/aml-build: Use existing CPU topology to build
PPTT table
RH-Author: Gavin Shan <gshan@redhat.com>
RH-MergeRequest: 86: hw/arm/virt: Fix the default CPU topology
RH-Commit: [6/6] 53fa376531c204cf706cc1a7a0499019756106cb (gwshan/qemu-rhel-9)
RH-Bugzilla: 2041823
RH-Acked-by: Eric Auger <eric.auger@redhat.com>
RH-Acked-by: Cornelia Huck <cohuck@redhat.com>
RH-Acked-by: Andrew Jones <drjones@redhat.com>
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2041823
When the PPTT table is built, the CPU topology is re-calculated, but
it's unnecessary because the CPU topology has already been populated in
virt_possible_cpu_arch_ids() on the arm/virt machine.
This reworks build_pptt() to avoid the recalculation by reusing the
existing IDs in ms->possible_cpus. Currently, the only user of
build_pptt() is the arm/virt machine.
Signed-off-by: Gavin Shan <gshan@redhat.com>
Tested-by: Yanan Wang <wangyanan55@huawei.com>
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Message-id: 20220503140304.855514-7-gshan@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit ae9141d4a3265553503bf07d3574b40f84615a34)
Signed-off-by: Gavin Shan <gshan@redhat.com>
---
hw/acpi/aml-build.c | 111 +++++++++++++++++++-------------------------
1 file changed, 48 insertions(+), 63 deletions(-)
diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index 4086879ebf..e6bfac95c7 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -2002,86 +2002,71 @@ void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
const char *oem_id, const char *oem_table_id)
{
MachineClass *mc = MACHINE_GET_CLASS(ms);
- GQueue *list = g_queue_new();
- guint pptt_start = table_data->len;
- guint parent_offset;
- guint length, i;
- int uid = 0;
- int socket;
+ CPUArchIdList *cpus = ms->possible_cpus;
+ int64_t socket_id = -1, cluster_id = -1, core_id = -1;
+ uint32_t socket_offset = 0, cluster_offset = 0, core_offset = 0;
+ uint32_t pptt_start = table_data->len;
+ int n;
AcpiTable table = { .sig = "PPTT", .rev = 2,
.oem_id = oem_id, .oem_table_id = oem_table_id };
acpi_table_begin(&table, table_data);
- for (socket = 0; socket < ms->smp.sockets; socket++) {
- g_queue_push_tail(list,
- GUINT_TO_POINTER(table_data->len - pptt_start));
- build_processor_hierarchy_node(
- table_data,
- /*
- * Physical package - represents the boundary
- * of a physical package
- */
- (1 << 0),
- 0, socket, NULL, 0);
- }
-
- if (mc->smp_props.clusters_supported) {
- length = g_queue_get_length(list);
- for (i = 0; i < length; i++) {
- int cluster;
-
- parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
- for (cluster = 0; cluster < ms->smp.clusters; cluster++) {
- g_queue_push_tail(list,
- GUINT_TO_POINTER(table_data->len - pptt_start));
- build_processor_hierarchy_node(
- table_data,
- (0 << 0), /* not a physical package */
- parent_offset, cluster, NULL, 0);
- }
+ /*
+ * This works with the assumption that cpus[n].props.*_id has been
+ * sorted from top to down levels in mc->possible_cpu_arch_ids().
+ * Otherwise, the unexpected and duplicated containers will be
+ * created.
+ */
+ for (n = 0; n < cpus->len; n++) {
+ if (cpus->cpus[n].props.socket_id != socket_id) {
+ assert(cpus->cpus[n].props.socket_id > socket_id);
+ socket_id = cpus->cpus[n].props.socket_id;
+ cluster_id = -1;
+ core_id = -1;
+ socket_offset = table_data->len - pptt_start;
+ build_processor_hierarchy_node(table_data,
+ (1 << 0), /* Physical package */
+ 0, socket_id, NULL, 0);
}
- }
- length = g_queue_get_length(list);
- for (i = 0; i < length; i++) {
- int core;
-
- parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
- for (core = 0; core < ms->smp.cores; core++) {
- if (ms->smp.threads > 1) {
- g_queue_push_tail(list,
- GUINT_TO_POINTER(table_data->len - pptt_start));
- build_processor_hierarchy_node(
- table_data,
- (0 << 0), /* not a physical package */
- parent_offset, core, NULL, 0);
- } else {
- build_processor_hierarchy_node(
- table_data,
- (1 << 1) | /* ACPI Processor ID valid */
- (1 << 3), /* Node is a Leaf */
- parent_offset, uid++, NULL, 0);
+ if (mc->smp_props.clusters_supported) {
+ if (cpus->cpus[n].props.cluster_id != cluster_id) {
+ assert(cpus->cpus[n].props.cluster_id > cluster_id);
+ cluster_id = cpus->cpus[n].props.cluster_id;
+ core_id = -1;
+ cluster_offset = table_data->len - pptt_start;
+ build_processor_hierarchy_node(table_data,
+ (0 << 0), /* Not a physical package */
+ socket_offset, cluster_id, NULL, 0);
}
+ } else {
+ cluster_offset = socket_offset;
}
- }
- length = g_queue_get_length(list);
- for (i = 0; i < length; i++) {
- int thread;
+ if (ms->smp.threads == 1) {
+ build_processor_hierarchy_node(table_data,
+ (1 << 1) | /* ACPI Processor ID valid */
+ (1 << 3), /* Node is a Leaf */
+ cluster_offset, n, NULL, 0);
+ } else {
+ if (cpus->cpus[n].props.core_id != core_id) {
+ assert(cpus->cpus[n].props.core_id > core_id);
+ core_id = cpus->cpus[n].props.core_id;
+ core_offset = table_data->len - pptt_start;
+ build_processor_hierarchy_node(table_data,
+ (0 << 0), /* Not a physical package */
+ cluster_offset, core_id, NULL, 0);
+ }
- parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
- for (thread = 0; thread < ms->smp.threads; thread++) {
- build_processor_hierarchy_node(
- table_data,
+ build_processor_hierarchy_node(table_data,
(1 << 1) | /* ACPI Processor ID valid */
(1 << 2) | /* Processor is a Thread */
(1 << 3), /* Node is a Leaf */
- parent_offset, uid++, NULL, 0);
+ core_offset, n, NULL, 0);
}
}
- g_queue_free(list);
acpi_table_end(linker, &table);
}
--
2.31.1
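As an illustration of the reworked loop, this is the sequence of PPTT nodes
it emits for an assumed -smp 6,sockets=2,clusters=1,cores=3,threads=1
configuration with clusters_supported enabled (a sketch derived from the
hunk above, not output from a real run):

  n=0: new socket 0 -> package node; new cluster 0 -> cluster node; leaf for CPU 0
  n=1: same socket/cluster                                          -> leaf for CPU 1
  n=2: same socket/cluster                                          -> leaf for CPU 2
  n=3: new socket 1 -> package node; new cluster 0 -> cluster node; leaf for CPU 3
  n=4: same socket/cluster                                          -> leaf for CPU 4
  n=5: same socket/cluster                                          -> leaf for CPU 5

Because threads=1, each CPU becomes a leaf node directly under its cluster,
with the CPU index n used as the ACPI processor ID.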

From 3b05d3464945295112b5d02d142422f524a52054 Mon Sep 17 00:00:00 2001
From: Gavin Shan <gshan@redhat.com>
Date: Wed, 11 May 2022 18:01:35 +0800
Subject: [PATCH 03/16] hw/arm/virt: Consider SMP configuration in CPU topology
RH-Author: Gavin Shan <gshan@redhat.com>
RH-MergeRequest: 86: hw/arm/virt: Fix the default CPU topology
RH-Commit: [3/6] 7125b41f038c2b1cb33377d0ef1222f1ea42b648 (gwshan/qemu-rhel-9)
RH-Bugzilla: 2041823
RH-Acked-by: Eric Auger <eric.auger@redhat.com>
RH-Acked-by: Cornelia Huck <cohuck@redhat.com>
RH-Acked-by: Andrew Jones <drjones@redhat.com>
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2041823
Currently, the SMP configuration isn't considered when the CPU
topology is populated. In this case, it's impossible to provide
the default CPU-to-NUMA mapping or association based on the socket
ID of the given CPU.
This takes the SMP configuration into account when the CPU topology
is populated. The die ID for the given CPU isn't assigned since it's
not supported on the arm/virt machine. Besides, the SMP configuration
used in qtest/numa-test/aarch64_numa_cpu() is corrected to avoid a
test failure.
Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 20220503140304.855514-4-gshan@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit c9ec4cb5e4936f980889e717524e73896b0200ed)
Signed-off-by: Gavin Shan <gshan@redhat.com>
---
hw/arm/virt.c | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index 8be12e121d..a87c8d396a 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -2553,6 +2553,7 @@ static const CPUArchIdList *virt_possible_cpu_arch_ids(MachineState *ms)
int n;
unsigned int max_cpus = ms->smp.max_cpus;
VirtMachineState *vms = VIRT_MACHINE(ms);
+ MachineClass *mc = MACHINE_GET_CLASS(vms);
if (ms->possible_cpus) {
assert(ms->possible_cpus->len == max_cpus);
@@ -2566,8 +2567,20 @@ static const CPUArchIdList *virt_possible_cpu_arch_ids(MachineState *ms)
ms->possible_cpus->cpus[n].type = ms->cpu_type;
ms->possible_cpus->cpus[n].arch_id =
virt_cpu_mp_affinity(vms, n);
+
+ assert(!mc->smp_props.dies_supported);
+ ms->possible_cpus->cpus[n].props.has_socket_id = true;
+ ms->possible_cpus->cpus[n].props.socket_id =
+ n / (ms->smp.clusters * ms->smp.cores * ms->smp.threads);
+ ms->possible_cpus->cpus[n].props.has_cluster_id = true;
+ ms->possible_cpus->cpus[n].props.cluster_id =
+ (n / (ms->smp.cores * ms->smp.threads)) % ms->smp.clusters;
+ ms->possible_cpus->cpus[n].props.has_core_id = true;
+ ms->possible_cpus->cpus[n].props.core_id =
+ (n / ms->smp.threads) % ms->smp.cores;
ms->possible_cpus->cpus[n].props.has_thread_id = true;
- ms->possible_cpus->cpus[n].props.thread_id = n;
+ ms->possible_cpus->cpus[n].props.thread_id =
+ n % ms->smp.threads;
}
return ms->possible_cpus;
}
--
2.31.1
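As a worked example of the formulas introduced above (using the
-smp 6,sockets=2,clusters=1,cores=3,threads=1 configuration that appears
later in this series), CPU index n=4 is assigned:

  socket_id  = n / (clusters * cores * threads)   = 4 / (1 * 3 * 1) = 1
  cluster_id = (n / (cores * threads)) % clusters = (4 / 3) % 1     = 0
  core_id    = (n / threads) % cores              = (4 / 1) % 3     = 1
  thread_id  = n % threads                        = 4 % 1           = 0

so CPU#4 is the second core of the second socket, which is the layout the
PPTT and default-NUMA patches in this series build on.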

From 14e49ad3b98f01c1ad6fe456469d40a96a43dc3c Mon Sep 17 00:00:00 2001
From: Gavin Shan <gshan@redhat.com>
Date: Wed, 11 May 2022 18:01:35 +0800
Subject: [PATCH 05/16] hw/arm/virt: Fix CPU's default NUMA node ID
RH-Author: Gavin Shan <gshan@redhat.com>
RH-MergeRequest: 86: hw/arm/virt: Fix the default CPU topology
RH-Commit: [5/6] 5336f62bc0c53c0417db1d71ef89544907bc28c0 (gwshan/qemu-rhel-9)
RH-Bugzilla: 2041823
RH-Acked-by: Eric Auger <eric.auger@redhat.com>
RH-Acked-by: Cornelia Huck <cohuck@redhat.com>
RH-Acked-by: Andrew Jones <drjones@redhat.com>
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2041823
When CPU-to-NUMA association isn't explicitly provided by users,
the default one is given by mc->get_default_cpu_node_id(). However,
the CPU topology isn't fully considered in the default association
and this causes CPU topology broken warnings when booting a Linux guest.
For example, the following warning messages are observed when the
Linux guest is booted with the command line below.
/home/gavin/sandbox/qemu.main/build/qemu-system-aarch64 \
-accel kvm -machine virt,gic-version=host \
-cpu host \
-smp 6,sockets=2,cores=3,threads=1 \
-m 1024M,slots=16,maxmem=64G \
-object memory-backend-ram,id=mem0,size=128M \
-object memory-backend-ram,id=mem1,size=128M \
-object memory-backend-ram,id=mem2,size=128M \
-object memory-backend-ram,id=mem3,size=128M \
-object memory-backend-ram,id=mem4,size=128M \
-object memory-backend-ram,id=mem5,size=384M \
-numa node,nodeid=0,memdev=mem0 \
-numa node,nodeid=1,memdev=mem1 \
-numa node,nodeid=2,memdev=mem2 \
-numa node,nodeid=3,memdev=mem3 \
-numa node,nodeid=4,memdev=mem4 \
-numa node,nodeid=5,memdev=mem5
:
alternatives: patching kernel code
BUG: arch topology borken
the CLS domain not a subset of the MC domain
<the above error log repeats>
BUG: arch topology borken
the DIE domain not a subset of the NODE domain
With the current implementation of mc->get_default_cpu_node_id(),
CPU#0 to CPU#5 are associated with NODE#0 to NODE#5 respectively.
That's incorrect because CPU#0/1/2 should be associated with the same
NUMA node, as they sit in the same socket.
This fixes the issue by considering the socket ID when the default
CPU-to-NUMA association is provided in virt_possible_cpu_arch_ids().
With this applied, no more CPU topology broken warnings are seen
from the Linux guest. The 6 CPUs are associated with NODE#0/1, but
there are no CPUs associated with NODE#2/3/4/5.
Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
Message-id: 20220503140304.855514-6-gshan@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 4c18bc192386dfbca530e7f550e0992df657818a)
Signed-off-by: Gavin Shan <gshan@redhat.com>
---
hw/arm/virt.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index a87c8d396a..95d012d6eb 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -2545,7 +2545,9 @@ virt_cpu_index_to_props(MachineState *ms, unsigned cpu_index)
static int64_t virt_get_default_cpu_node_id(const MachineState *ms, int idx)
{
- return idx % ms->numa_state->num_nodes;
+ int64_t socket_id = ms->possible_cpus->cpus[idx].props.socket_id;
+
+ return socket_id % ms->numa_state->num_nodes;
}
static const CPUArchIdList *virt_possible_cpu_arch_ids(MachineState *ms)
--
2.31.1
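Spelling out the resulting default mapping for the reproducer above
(6 CPUs, sockets=2,cores=3,threads=1, and 6 NUMA nodes); the per-CPU
breakdown is added here only for clarity and mirrors what the commit
message describes:

  CPU#0..2  -> socket 0 -> socket_id % num_nodes = 0 % 6 -> NODE#0
  CPU#3..5  -> socket 1 -> socket_id % num_nodes = 1 % 6 -> NODE#1
  NODE#2..5 -> no CPUs associated (memory-only nodes)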

From 4bd48e784ae0c38c89f1a944b06c997fd28c4d37 Mon Sep 17 00:00:00 2001
From: Miroslav Rezanina <mrezanin@redhat.com>
Date: Thu, 19 May 2022 04:15:33 -0400
Subject: [PATCH 16/16] migration: Fix operator type
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Miroslav Rezanina <mrezanin@redhat.com>
RH-MergeRequest: 92: Fix build using clang 14
RH-Commit: [1/1] ad9980e64cf2e39085d68f1ff601444bf2afe228 (mrezanin/centos-src-qemu-kvm)
RH-Bugzilla: 2064530
RH-Acked-by: Daniel P. Berrangé <berrange@redhat.com>
RH-Acked-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Clang spotted an & that should have been an &&; fix it.
Reported by: David Binderman / https://gitlab.com/dcb
Fixes: 65dacaa04fa ("migration: introduce save_normal_page()")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/963
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20220406102515.96320-1-dgilbert@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
(cherry picked from commit f912ec5b2d65644116ff496b58d7c9145c19e4c0)
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
---
migration/ram.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/migration/ram.c b/migration/ram.c
index 3532f64ecb..0ef4bd63eb 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1289,7 +1289,7 @@ static int save_normal_page(RAMState *rs, RAMBlock *block, ram_addr_t offset,
offset | RAM_SAVE_FLAG_PAGE));
if (async) {
qemu_put_buffer_async(rs->f, buf, TARGET_PAGE_SIZE,
- migrate_release_ram() &
+ migrate_release_ram() &&
migration_in_postcopy());
} else {
qemu_put_buffer(rs->f, buf, TARGET_PAGE_SIZE);
--
2.31.1
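A small standalone illustration of the difference between the two operators
(not QEMU code; both QEMU functions involved return bool, so the visible
effect of the fix is mainly correctness of intent, short-circuit evaluation,
and silencing the diagnostic clang 14 reports for bitwise operators on
boolean operands, which presumably fails the build when warnings are
treated as errors):

  #include <stdbool.h>
  #include <stdio.h>

  static bool lhs(void) { puts("lhs evaluated"); return false; }
  static bool rhs(void) { puts("rhs evaluated"); return true; }

  int main(void)
  {
      bool bitwise = lhs() & rhs();   /* always evaluates both operands */
      bool logical = lhs() && rhs();  /* short-circuits after lhs() returns false */
      printf("bitwise=%d logical=%d\n", bitwise, logical);  /* both print 0 */
      return 0;
  }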

From e97c563f7146098119839aa146a6f25070eb7148 Mon Sep 17 00:00:00 2001
From: Gavin Shan <gshan@redhat.com>
Date: Wed, 11 May 2022 18:01:02 +0800
Subject: [PATCH 01/16] qapi/machine.json: Add cluster-id
RH-Author: Gavin Shan <gshan@redhat.com>
RH-MergeRequest: 86: hw/arm/virt: Fix the default CPU topology
RH-Commit: [1/6] 44d7d83008c6d28485ae44f7cced792f4987b919 (gwshan/qemu-rhel-9)
RH-Bugzilla: 2041823
RH-Acked-by: Eric Auger <eric.auger@redhat.com>
RH-Acked-by: Cornelia Huck <cohuck@redhat.com>
RH-Acked-by: Andrew Jones <drjones@redhat.com>
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2041823
This adds cluster-id to the CPU instance properties, which will be used
by the arm/virt machine. Besides, the cluster-id is also verified or
dumped in various spots:
* hw/core/machine.c::machine_set_cpu_numa_node() to associate
CPU with its NUMA node.
* hw/core/machine.c::machine_numa_finish_cpu_init() to record
CPU slots with no NUMA mapping set.
* hw/core/machine-hmp-cmds.c::hmp_hotpluggable_cpus() to dump
cluster-id.
Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 20220503140304.855514-2-gshan@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 1dcf7001d4bae651129d46d5628b29e93a411d0b)
Signed-off-by: Gavin Shan <gshan@redhat.com>
---
hw/core/machine-hmp-cmds.c | 4 ++++
hw/core/machine.c | 16 ++++++++++++++++
qapi/machine.json | 6 ++++--
3 files changed, 24 insertions(+), 2 deletions(-)
diff --git a/hw/core/machine-hmp-cmds.c b/hw/core/machine-hmp-cmds.c
index 4e2f319aeb..5cb5eecbfc 100644
--- a/hw/core/machine-hmp-cmds.c
+++ b/hw/core/machine-hmp-cmds.c
@@ -77,6 +77,10 @@ void hmp_hotpluggable_cpus(Monitor *mon, const QDict *qdict)
if (c->has_die_id) {
monitor_printf(mon, " die-id: \"%" PRIu64 "\"\n", c->die_id);
}
+ if (c->has_cluster_id) {
+ monitor_printf(mon, " cluster-id: \"%" PRIu64 "\"\n",
+ c->cluster_id);
+ }
if (c->has_core_id) {
monitor_printf(mon, " core-id: \"%" PRIu64 "\"\n", c->core_id);
}
diff --git a/hw/core/machine.c b/hw/core/machine.c
index dffc3ef4ab..168f4de910 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -890,6 +890,11 @@ void machine_set_cpu_numa_node(MachineState *machine,
return;
}
+ if (props->has_cluster_id && !slot->props.has_cluster_id) {
+ error_setg(errp, "cluster-id is not supported");
+ return;
+ }
+
if (props->has_socket_id && !slot->props.has_socket_id) {
error_setg(errp, "socket-id is not supported");
return;
@@ -909,6 +914,11 @@ void machine_set_cpu_numa_node(MachineState *machine,
continue;
}
+ if (props->has_cluster_id &&
+ props->cluster_id != slot->props.cluster_id) {
+ continue;
+ }
+
if (props->has_die_id && props->die_id != slot->props.die_id) {
continue;
}
@@ -1203,6 +1213,12 @@ static char *cpu_slot_to_string(const CPUArchId *cpu)
}
g_string_append_printf(s, "die-id: %"PRId64, cpu->props.die_id);
}
+ if (cpu->props.has_cluster_id) {
+ if (s->len) {
+ g_string_append_printf(s, ", ");
+ }
+ g_string_append_printf(s, "cluster-id: %"PRId64, cpu->props.cluster_id);
+ }
if (cpu->props.has_core_id) {
if (s->len) {
g_string_append_printf(s, ", ");
diff --git a/qapi/machine.json b/qapi/machine.json
index d25a481ce4..4c417e32a5 100644
--- a/qapi/machine.json
+++ b/qapi/machine.json
@@ -868,10 +868,11 @@
# @node-id: NUMA node ID the CPU belongs to
# @socket-id: socket number within node/board the CPU belongs to
# @die-id: die number within socket the CPU belongs to (since 4.1)
-# @core-id: core number within die the CPU belongs to
+# @cluster-id: cluster number within die the CPU belongs to (since 7.1)
+# @core-id: core number within cluster the CPU belongs to
# @thread-id: thread number within core the CPU belongs to
#
-# Note: currently there are 5 properties that could be present
+# Note: currently there are 6 properties that could be present
# but management should be prepared to pass through other
# properties with device_add command to allow for future
# interface extension. This also requires the filed names to be kept in
@@ -883,6 +884,7 @@
'data': { '*node-id': 'int',
'*socket-id': 'int',
'*die-id': 'int',
+ '*cluster-id': 'int',
'*core-id': 'int',
'*thread-id': 'int'
}
--
2.31.1
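A hedged usage sketch of the new property (the IDs and node layout are
illustrative; memory backends and the rest of the command line are
omitted): once cluster-id exists, a CPU-to-NUMA assignment can be
expressed at cluster granularity, e.g.

  -smp 4,sockets=1,clusters=2,cores=2,threads=1 \
  -numa node,nodeid=0 -numa node,nodeid=1 \
  -numa cpu,node-id=0,socket-id=0,cluster-id=0,core-id=0,thread-id=0 \
  -numa cpu,node-id=1,socket-id=0,cluster-id=1,core-id=0,thread-id=0

and "info hotpluggable-cpus" in the HMP monitor then prints a cluster-id
line for each slot, per the hmp_hotpluggable_cpus() hunk above.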

From a039ed652e6d2f5edcef9d5d1d3baec17ce7f929 Mon Sep 17 00:00:00 2001
From: Gavin Shan <gshan@redhat.com>
Date: Wed, 11 May 2022 18:01:35 +0800
Subject: [PATCH 04/16] qtest/numa-test: Correct CPU and NUMA association in
aarch64_numa_cpu()
RH-Author: Gavin Shan <gshan@redhat.com>
RH-MergeRequest: 86: hw/arm/virt: Fix the default CPU topology
RH-Commit: [4/6] 64e9908a179eb4fb586d662f70f275a81808e50c (gwshan/qemu-rhel-9)
RH-Bugzilla: 2041823
RH-Acked-by: Eric Auger <eric.auger@redhat.com>
RH-Acked-by: Cornelia Huck <cohuck@redhat.com>
RH-Acked-by: Andrew Jones <drjones@redhat.com>
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2041823
In aarch64_numa_cpu(), the CPU and NUMA association is something
like below. Two threads in the same core/cluster/socket are
associated with two individual NUMA nodes, which is unrealistic, as
Igor Mammedov mentioned. We don't expect the association to break the
NUMA-to-socket boundary, which matches the real world.
  NUMA-node    socket    cluster    core    thread
  --------------------------------------------------
      0           0         0         0        0
      1           0         0         0        1
This corrects the topology for CPUs and their association with
NUMA nodes. After this patch is applied, the CPU and NUMA
association becomes something like below, which looks realistic.
Besides, socket/cluster/core/thread IDs are all checked when
the NUMA node IDs are verified. It helps to check if the CPU
topology is properly populated or not.
  NUMA-node    socket    cluster    core    thread
  --------------------------------------------------
      0           1         0         0        0
      1           0         0         0        0
Suggested-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Gavin Shan <gshan@redhat.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 20220503140304.855514-5-gshan@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit e280ecb39bc1629f74ea5479d464fd1608dc8f76)
Signed-off-by: Gavin Shan <gshan@redhat.com>
---
tests/qtest/numa-test.c | 18 ++++++++++++------
1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/tests/qtest/numa-test.c b/tests/qtest/numa-test.c
index aeda8c774c..32e35daaae 100644
--- a/tests/qtest/numa-test.c
+++ b/tests/qtest/numa-test.c
@@ -224,17 +224,17 @@ static void aarch64_numa_cpu(const void *data)
g_autofree char *cli = NULL;
cli = make_cli(data, "-machine "
- "smp.cpus=2,smp.sockets=1,smp.clusters=1,smp.cores=1,smp.threads=2 "
+ "smp.cpus=2,smp.sockets=2,smp.clusters=1,smp.cores=1,smp.threads=1 "
"-numa node,nodeid=0,memdev=ram -numa node,nodeid=1 "
- "-numa cpu,node-id=1,thread-id=0 "
- "-numa cpu,node-id=0,thread-id=1");
+ "-numa cpu,node-id=0,socket-id=1,cluster-id=0,core-id=0,thread-id=0 "
+ "-numa cpu,node-id=1,socket-id=0,cluster-id=0,core-id=0,thread-id=0");
qts = qtest_init(cli);
cpus = get_cpus(qts, &resp);
g_assert(cpus);
while ((e = qlist_pop(cpus))) {
QDict *cpu, *props;
- int64_t thread, node;
+ int64_t socket, cluster, core, thread, node;
cpu = qobject_to(QDict, e);
g_assert(qdict_haskey(cpu, "props"));
@@ -242,12 +242,18 @@ static void aarch64_numa_cpu(const void *data)
g_assert(qdict_haskey(props, "node-id"));
node = qdict_get_int(props, "node-id");
+ g_assert(qdict_haskey(props, "socket-id"));
+ socket = qdict_get_int(props, "socket-id");
+ g_assert(qdict_haskey(props, "cluster-id"));
+ cluster = qdict_get_int(props, "cluster-id");
+ g_assert(qdict_haskey(props, "core-id"));
+ core = qdict_get_int(props, "core-id");
g_assert(qdict_haskey(props, "thread-id"));
thread = qdict_get_int(props, "thread-id");
- if (thread == 0) {
+ if (socket == 0 && cluster == 0 && core == 0 && thread == 0) {
g_assert_cmpint(node, ==, 1);
- } else if (thread == 1) {
+ } else if (socket == 1 && cluster == 0 && core == 0 && thread == 0) {
g_assert_cmpint(node, ==, 0);
} else {
g_assert(false);
--
2.31.1

From 66f3928b40991d8467a3da086688f73d061886c8 Mon Sep 17 00:00:00 2001
From: Gavin Shan <gshan@redhat.com>
Date: Wed, 11 May 2022 18:01:35 +0800
Subject: [PATCH 02/16] qtest/numa-test: Specify CPU topology in
aarch64_numa_cpu()
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Gavin Shan <gshan@redhat.com>
RH-MergeRequest: 86: hw/arm/virt: Fix the default CPU topology
RH-Commit: [2/6] b851e7ad59e057825392ddf75e9040cc102a0385 (gwshan/qemu-rhel-9)
RH-Bugzilla: 2041823
RH-Acked-by: Eric Auger <eric.auger@redhat.com>
RH-Acked-by: Cornelia Huck <cohuck@redhat.com>
RH-Acked-by: Andrew Jones <drjones@redhat.com>
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2041823
The CPU topology isn't enabled on the arm/virt machine yet, but we're
going to do that in the next patch. Once the CPU topology is enabled by
the next patch, "thread-id=1" becomes invalid because the CPU core is
preferred on the arm/virt machine. It means these two CPUs have 0/1
as their core IDs, but their thread IDs are both 0. This triggers a
test failure, as the following message indicates:
[14/21 qemu:qtest+qtest-aarch64 / qtest-aarch64/numa-test ERROR
1.48s killed by signal 6 SIGABRT
>>> G_TEST_DBUS_DAEMON=/home/gavin/sandbox/qemu.main/tests/dbus-vmstate-daemon.sh \
QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon \
QTEST_QEMU_BINARY=./qemu-system-aarch64 \
QTEST_QEMU_IMG=./qemu-img MALLOC_PERTURB_=83 \
/home/gavin/sandbox/qemu.main/build/tests/qtest/numa-test --tap -k
――――――――――――――――――――――――――――――――――――――――――――――
stderr:
qemu-system-aarch64: -numa cpu,node-id=0,thread-id=1: no match found
This fixes the issue by providing a comprehensive SMP configuration
in aarch64_numa_cpu(). The SMP configuration isn't used before
the CPU topology is enabled in the next patch.
Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
Message-id: 20220503140304.855514-3-gshan@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit ac7199a2523ce2ccf8e685087a5d177eeca89b09)
Signed-off-by: Gavin Shan <gshan@redhat.com>
---
tests/qtest/numa-test.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/tests/qtest/numa-test.c b/tests/qtest/numa-test.c
index 90bf68a5b3..aeda8c774c 100644
--- a/tests/qtest/numa-test.c
+++ b/tests/qtest/numa-test.c
@@ -223,7 +223,8 @@ static void aarch64_numa_cpu(const void *data)
QTestState *qts;
g_autofree char *cli = NULL;
- cli = make_cli(data, "-machine smp.cpus=2 "
+ cli = make_cli(data, "-machine "
+ "smp.cpus=2,smp.sockets=1,smp.clusters=1,smp.cores=1,smp.threads=2 "
"-numa node,nodeid=0,memdev=ram -numa node,nodeid=1 "
"-numa cpu,node-id=1,thread-id=0 "
"-numa cpu,node-id=0,thread-id=1");
--
2.31.1

From 975af1b9f1811e113e1babd928ae70f8e4ebefb5 Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Tue, 17 May 2022 09:28:19 +0100
Subject: [PATCH 13/16] virtio-scsi: clean up virtio_scsi_handle_cmd_vq()
RH-Author: Stefan Hajnoczi <stefanha@redhat.com>
RH-MergeRequest: 88: virtio-scsi: fix 100% CPU consumption with IOThreads
RH-Commit: [5/6] 27b0225783fa9bbb8fe5ee692bd3f0a888d49d07 (stefanha/centos-stream-qemu-kvm)
RH-Bugzilla: 2079347
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Acked-by: Stefano Garzarella <sgarzare@redhat.com>
virtio_scsi_handle_cmd_vq() is only called from hw/scsi/virtio-scsi.c
now and its return value is no longer used. Remove the function
prototype from virtio-scsi.h and drop the return value.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20220427143541.119567-6-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit ad482b57ef841b2d4883c5079d20ba44ff5e4b3e)
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/scsi/virtio-scsi.c | 5 +----
include/hw/virtio/virtio-scsi.h | 1 -
2 files changed, 1 insertion(+), 5 deletions(-)
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index a47033d91d..df5ff8bab7 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -685,12 +685,11 @@ static void virtio_scsi_handle_cmd_req_submit(VirtIOSCSI *s, VirtIOSCSIReq *req)
scsi_req_unref(sreq);
}
-bool virtio_scsi_handle_cmd_vq(VirtIOSCSI *s, VirtQueue *vq)
+static void virtio_scsi_handle_cmd_vq(VirtIOSCSI *s, VirtQueue *vq)
{
VirtIOSCSIReq *req, *next;
int ret = 0;
bool suppress_notifications = virtio_queue_get_notification(vq);
- bool progress = false;
QTAILQ_HEAD(, VirtIOSCSIReq) reqs = QTAILQ_HEAD_INITIALIZER(reqs);
@@ -700,7 +699,6 @@ bool virtio_scsi_handle_cmd_vq(VirtIOSCSI *s, VirtQueue *vq)
}
while ((req = virtio_scsi_pop_req(s, vq))) {
- progress = true;
ret = virtio_scsi_handle_cmd_req_prepare(s, req);
if (!ret) {
QTAILQ_INSERT_TAIL(&reqs, req, next);
@@ -725,7 +723,6 @@ bool virtio_scsi_handle_cmd_vq(VirtIOSCSI *s, VirtQueue *vq)
QTAILQ_FOREACH_SAFE(req, &reqs, next, next) {
virtio_scsi_handle_cmd_req_submit(s, req);
}
- return progress;
}
static void virtio_scsi_handle_cmd(VirtIODevice *vdev, VirtQueue *vq)
diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scsi.h
index 44dc3b81ec..2497530064 100644
--- a/include/hw/virtio/virtio-scsi.h
+++ b/include/hw/virtio/virtio-scsi.h
@@ -151,7 +151,6 @@ void virtio_scsi_common_realize(DeviceState *dev,
Error **errp);
void virtio_scsi_common_unrealize(DeviceState *dev);
-bool virtio_scsi_handle_cmd_vq(VirtIOSCSI *s, VirtQueue *vq);
void virtio_scsi_init_req(VirtIOSCSI *s, VirtQueue *vq, VirtIOSCSIReq *req);
void virtio_scsi_free_req(VirtIOSCSIReq *req);
void virtio_scsi_push_event(VirtIOSCSI *s, SCSIDevice *dev,
--
2.31.1

From c6e16a7a5a18ec2bc4f8a6f5cc1c887e18b16cdf Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Tue, 17 May 2022 09:28:12 +0100
Subject: [PATCH 12/16] virtio-scsi: clean up virtio_scsi_handle_ctrl_vq()
RH-Author: Stefan Hajnoczi <stefanha@redhat.com>
RH-MergeRequest: 88: virtio-scsi: fix 100% CPU consumption with IOThreads
RH-Commit: [4/6] ca3751b7bfad5163c5b1c81b8525936a848d42ea (stefanha/centos-stream-qemu-kvm)
RH-Bugzilla: 2079347
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Acked-by: Stefano Garzarella <sgarzare@redhat.com>
virtio_scsi_handle_ctrl_vq() is only called from hw/scsi/virtio-scsi.c
now and its return value is no longer used. Remove the function
prototype from virtio-scsi.h and drop the return value.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20220427143541.119567-5-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 73b3b49f1880f236b4d0ffd7efb00280c05a5fab)
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/scsi/virtio-scsi.c | 5 +----
include/hw/virtio/virtio-scsi.h | 1 -
2 files changed, 1 insertion(+), 5 deletions(-)
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index dd2185b943..a47033d91d 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -460,16 +460,13 @@ static void virtio_scsi_handle_ctrl_req(VirtIOSCSI *s, VirtIOSCSIReq *req)
}
}
-bool virtio_scsi_handle_ctrl_vq(VirtIOSCSI *s, VirtQueue *vq)
+static void virtio_scsi_handle_ctrl_vq(VirtIOSCSI *s, VirtQueue *vq)
{
VirtIOSCSIReq *req;
- bool progress = false;
while ((req = virtio_scsi_pop_req(s, vq))) {
- progress = true;
virtio_scsi_handle_ctrl_req(s, req);
}
- return progress;
}
/*
diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scsi.h
index 5957597825..44dc3b81ec 100644
--- a/include/hw/virtio/virtio-scsi.h
+++ b/include/hw/virtio/virtio-scsi.h
@@ -152,7 +152,6 @@ void virtio_scsi_common_realize(DeviceState *dev,
void virtio_scsi_common_unrealize(DeviceState *dev);
bool virtio_scsi_handle_cmd_vq(VirtIOSCSI *s, VirtQueue *vq);
-bool virtio_scsi_handle_ctrl_vq(VirtIOSCSI *s, VirtQueue *vq);
void virtio_scsi_init_req(VirtIOSCSI *s, VirtQueue *vq, VirtIOSCSIReq *req);
void virtio_scsi_free_req(VirtIOSCSIReq *req);
void virtio_scsi_push_event(VirtIOSCSI *s, SCSIDevice *dev,
--
2.31.1

From 019d5a0ca5d13f837a59b9e2815e2fd7ac120807 Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Tue, 17 May 2022 09:28:06 +0100
Subject: [PATCH 11/16] virtio-scsi: clean up virtio_scsi_handle_event_vq()
RH-Author: Stefan Hajnoczi <stefanha@redhat.com>
RH-MergeRequest: 88: virtio-scsi: fix 100% CPU consumption with IOThreads
RH-Commit: [3/6] f8dbc4c1991c61e4cf8dea50942c3cd509c9c4bd (stefanha/centos-stream-qemu-kvm)
RH-Bugzilla: 2079347
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Acked-by: Stefano Garzarella <sgarzare@redhat.com>
virtio_scsi_handle_event_vq() is only called from hw/scsi/virtio-scsi.c
now and its return value is no longer used. Remove the function
prototype from virtio-scsi.h and drop the return value.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20220427143541.119567-4-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 37ce2de95169dacab3fb53d11bd4509b9c2e3a4c)
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/scsi/virtio-scsi.c | 4 +---
include/hw/virtio/virtio-scsi.h | 1 -
2 files changed, 1 insertion(+), 4 deletions(-)
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 7b69eeed64..dd2185b943 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -856,13 +856,11 @@ void virtio_scsi_push_event(VirtIOSCSI *s, SCSIDevice *dev,
virtio_scsi_complete_req(req);
}
-bool virtio_scsi_handle_event_vq(VirtIOSCSI *s, VirtQueue *vq)
+static void virtio_scsi_handle_event_vq(VirtIOSCSI *s, VirtQueue *vq)
{
if (s->events_dropped) {
virtio_scsi_push_event(s, NULL, VIRTIO_SCSI_T_NO_EVENT, 0);
- return true;
}
- return false;
}
static void virtio_scsi_handle_event(VirtIODevice *vdev, VirtQueue *vq)
diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scsi.h
index 543681bc18..5957597825 100644
--- a/include/hw/virtio/virtio-scsi.h
+++ b/include/hw/virtio/virtio-scsi.h
@@ -151,7 +151,6 @@ void virtio_scsi_common_realize(DeviceState *dev,
Error **errp);
void virtio_scsi_common_unrealize(DeviceState *dev);
-bool virtio_scsi_handle_event_vq(VirtIOSCSI *s, VirtQueue *vq);
bool virtio_scsi_handle_cmd_vq(VirtIOSCSI *s, VirtQueue *vq);
bool virtio_scsi_handle_ctrl_vq(VirtIOSCSI *s, VirtQueue *vq);
void virtio_scsi_init_req(VirtIOSCSI *s, VirtQueue *vq, VirtIOSCSIReq *req);
--
2.31.1

From 1b609b2af303fb6498b2ef94ac4f2e900dc8c1b2 Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Tue, 17 May 2022 09:27:45 +0100
Subject: [PATCH 10/16] virtio-scsi: don't waste CPU polling the event
virtqueue
RH-Author: Stefan Hajnoczi <stefanha@redhat.com>
RH-MergeRequest: 88: virtio-scsi: fix 100% CPU consumption with IOThreads
RH-Commit: [2/6] 7e613d9b9fa8ceb668c78cb3ce7ebe1d73a004b5 (stefanha/centos-stream-qemu-kvm)
RH-Bugzilla: 2079347
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Acked-by: Stefano Garzarella <sgarzare@redhat.com>
The virtio-scsi event virtqueue is not emptied by its handler function.
This is typical for rx virtqueues where the device uses buffers when
some event occurs (e.g. a packet is received, an error condition
happens, etc).
Polling non-empty virtqueues wastes CPU cycles. We are not waiting for
new buffers to become available, we are waiting for an event to occur,
so it's a misuse of CPU resources to poll for buffers.
Introduce the new virtio_queue_aio_attach_host_notifier_no_poll() API,
which is identical to virtio_queue_aio_attach_host_notifier() except
that it does not poll the virtqueue.
Before this patch the following command-line consumed 100% CPU in the
IOThread polling and calling virtio_scsi_handle_event():
$ qemu-system-x86_64 -M accel=kvm -m 1G -cpu host \
--object iothread,id=iothread0 \
--device virtio-scsi-pci,iothread=iothread0 \
--blockdev file,filename=test.img,aio=native,cache.direct=on,node-name=drive0 \
--device scsi-hd,drive=drive0
After this patch CPU is no longer wasted.
Reported-by: Nir Soffer <nsoffer@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Tested-by: Nir Soffer <nsoffer@redhat.com>
Message-id: 20220427143541.119567-3-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 38738f7dbbda90fbc161757b7f4be35b52205552)
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/scsi/virtio-scsi-dataplane.c | 2 +-
hw/virtio/virtio.c | 13 +++++++++++++
include/hw/virtio/virtio.h | 1 +
3 files changed, 15 insertions(+), 1 deletion(-)
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index 29575cbaf6..8bb6e6acfc 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -138,7 +138,7 @@ int virtio_scsi_dataplane_start(VirtIODevice *vdev)
aio_context_acquire(s->ctx);
virtio_queue_aio_attach_host_notifier(vs->ctrl_vq, s->ctx);
- virtio_queue_aio_attach_host_notifier(vs->event_vq, s->ctx);
+ virtio_queue_aio_attach_host_notifier_no_poll(vs->event_vq, s->ctx);
for (i = 0; i < vs->conf.num_queues; i++) {
virtio_queue_aio_attach_host_notifier(vs->cmd_vqs[i], s->ctx);
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 9d637e043e..67a873f54a 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -3534,6 +3534,19 @@ void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
virtio_queue_host_notifier_aio_poll_end);
}
+/*
+ * Same as virtio_queue_aio_attach_host_notifier() but without polling. Use
+ * this for rx virtqueues and similar cases where the virtqueue handler
+ * function does not pop all elements. When the virtqueue is left non-empty
+ * polling consumes CPU cycles and should not be used.
+ */
+void virtio_queue_aio_attach_host_notifier_no_poll(VirtQueue *vq, AioContext *ctx)
+{
+ aio_set_event_notifier(ctx, &vq->host_notifier, true,
+ virtio_queue_host_notifier_read,
+ NULL, NULL);
+}
+
void virtio_queue_aio_detach_host_notifier(VirtQueue *vq, AioContext *ctx)
{
aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL, NULL, NULL);
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index b31c4507f5..b62a35fdca 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -317,6 +317,7 @@ EventNotifier *virtio_queue_get_host_notifier(VirtQueue *vq);
void virtio_queue_set_host_notifier_enabled(VirtQueue *vq, bool enabled);
void virtio_queue_host_notifier_read(EventNotifier *n);
void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx);
+void virtio_queue_aio_attach_host_notifier_no_poll(VirtQueue *vq, AioContext *ctx);
void virtio_queue_aio_detach_host_notifier(VirtQueue *vq, AioContext *ctx);
VirtQueue *virtio_vector_first_queue(VirtIODevice *vdev, uint16_t vector);
VirtQueue *virtio_vector_next_queue(VirtQueue *vq);
--
2.31.1
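One way to verify the behaviour described in this patch (the monitoring
command is a suggestion, not part of the patch): start the guest with the
reproducer command line from the commit message and watch the per-thread
CPU usage of the QEMU process. Before the fix the IOThread spins at ~100%;
afterwards it stays idle until an event is actually pushed.

  top -H -p <qemu-pid>     # -H shows individual threads, including iothread0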

From 5aaf33dbbbc89d58a52337985641723b9ee13541 Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Wed, 27 Apr 2022 15:35:36 +0100
Subject: [PATCH 09/16] virtio-scsi: fix ctrl and event handler functions in
dataplane mode
RH-Author: Stefan Hajnoczi <stefanha@redhat.com>
RH-MergeRequest: 88: virtio-scsi: fix 100% CPU consumption with IOThreads
RH-Commit: [1/6] 3087889041b960f14a6b3893243f78523a78f637 (stefanha/centos-stream-qemu-kvm)
RH-Bugzilla: 2079347
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Acked-by: Stefano Garzarella <sgarzare@redhat.com>
Commit f34e8d8b8d48d73f36a67b6d5e492ef9784b5012 ("virtio-scsi: prepare
virtio_scsi_handle_cmd for dataplane") prepared the virtio-scsi cmd
virtqueue handler function to be used in both the dataplane and
non-dataplane code paths.
It failed to convert the ctrl and event virtqueue handler functions,
which are not designed to be called from the dataplane code path but
will be since the ioeventfd is set up for those virtqueues when
dataplane starts.
Convert the ctrl and event virtqueue handler functions now so they
operate correctly when called from the dataplane code path. Avoid code
duplication by extracting this code into a helper function.
Fixes: f34e8d8b8d48d73f36a67b6d5e492ef9784b5012 ("virtio-scsi: prepare virtio_scsi_handle_cmd for dataplane")
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20220427143541.119567-2-stefanha@redhat.com
[Fixed s/by used/be used/ typo pointed out by Michael Tokarev
<mjt@tls.msk.ru>.
--Stefan]
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 2f743ef6366c2df4ef51ef3ae318138cdc0125ab)
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/scsi/virtio-scsi.c | 42 +++++++++++++++++++++++++++---------------
1 file changed, 27 insertions(+), 15 deletions(-)
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 7f6da33a8a..7b69eeed64 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -472,16 +472,32 @@ bool virtio_scsi_handle_ctrl_vq(VirtIOSCSI *s, VirtQueue *vq)
return progress;
}
+/*
+ * If dataplane is configured but not yet started, do so now and return true on
+ * success.
+ *
+ * Dataplane is started by the core virtio code but virtqueue handler functions
+ * can also be invoked when a guest kicks before DRIVER_OK, so this helper
+ * function helps us deal with manually starting ioeventfd in that case.
+ */
+static bool virtio_scsi_defer_to_dataplane(VirtIOSCSI *s)
+{
+ if (!s->ctx || s->dataplane_started) {
+ return false;
+ }
+
+ virtio_device_start_ioeventfd(&s->parent_obj.parent_obj);
+ return !s->dataplane_fenced;
+}
+
static void virtio_scsi_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
{
VirtIOSCSI *s = (VirtIOSCSI *)vdev;
- if (s->ctx) {
- virtio_device_start_ioeventfd(vdev);
- if (!s->dataplane_fenced) {
- return;
- }
+ if (virtio_scsi_defer_to_dataplane(s)) {
+ return;
}
+
virtio_scsi_acquire(s);
virtio_scsi_handle_ctrl_vq(s, vq);
virtio_scsi_release(s);
@@ -720,12 +736,10 @@ static void virtio_scsi_handle_cmd(VirtIODevice *vdev, VirtQueue *vq)
/* use non-QOM casts in the data path */
VirtIOSCSI *s = (VirtIOSCSI *)vdev;
- if (s->ctx && !s->dataplane_started) {
- virtio_device_start_ioeventfd(vdev);
- if (!s->dataplane_fenced) {
- return;
- }
+ if (virtio_scsi_defer_to_dataplane(s)) {
+ return;
}
+
virtio_scsi_acquire(s);
virtio_scsi_handle_cmd_vq(s, vq);
virtio_scsi_release(s);
@@ -855,12 +869,10 @@ static void virtio_scsi_handle_event(VirtIODevice *vdev, VirtQueue *vq)
{
VirtIOSCSI *s = VIRTIO_SCSI(vdev);
- if (s->ctx) {
- virtio_device_start_ioeventfd(vdev);
- if (!s->dataplane_fenced) {
- return;
- }
+ if (virtio_scsi_defer_to_dataplane(s)) {
+ return;
}
+
virtio_scsi_acquire(s);
virtio_scsi_handle_event_vq(s, vq);
virtio_scsi_release(s);
--
2.31.1

From 6603f216dbc07a1d221b1665409cfec6cc9960e2 Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Tue, 17 May 2022 09:28:26 +0100
Subject: [PATCH 14/16] virtio-scsi: move request-related items from .h to .c
RH-Author: Stefan Hajnoczi <stefanha@redhat.com>
RH-MergeRequest: 88: virtio-scsi: fix 100% CPU consumption with IOThreads
RH-Commit: [6/6] ecdf5289abd04062c85c5ed8e577a5249684a3b0 (stefanha/centos-stream-qemu-kvm)
RH-Bugzilla: 2079347
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Acked-by: Stefano Garzarella <sgarzare@redhat.com>
There is no longer a need to expose the request and related APIs in
virtio-scsi.h since there are no callers outside virtio-scsi.c.
Note the block comment in VirtIOSCSIReq has been adjusted to meet the
coding style.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20220427143541.119567-7-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 3dc584abeef0e1277c2de8c1c1974cb49444eb0a)
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/scsi/virtio-scsi.c | 45 ++++++++++++++++++++++++++++++---
include/hw/virtio/virtio-scsi.h | 40 -----------------------------
2 files changed, 41 insertions(+), 44 deletions(-)
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index df5ff8bab7..2450c9438c 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -29,6 +29,43 @@
#include "hw/virtio/virtio-access.h"
#include "trace.h"
+typedef struct VirtIOSCSIReq {
+ /*
+ * Note:
+ * - fields up to resp_iov are initialized by virtio_scsi_init_req;
+ * - fields starting at vring are zeroed by virtio_scsi_init_req.
+ */
+ VirtQueueElement elem;
+
+ VirtIOSCSI *dev;
+ VirtQueue *vq;
+ QEMUSGList qsgl;
+ QEMUIOVector resp_iov;
+
+ union {
+ /* Used for two-stage request submission */
+ QTAILQ_ENTRY(VirtIOSCSIReq) next;
+
+ /* Used for cancellation of request during TMFs */
+ int remaining;
+ };
+
+ SCSIRequest *sreq;
+ size_t resp_size;
+ enum SCSIXferMode mode;
+ union {
+ VirtIOSCSICmdResp cmd;
+ VirtIOSCSICtrlTMFResp tmf;
+ VirtIOSCSICtrlANResp an;
+ VirtIOSCSIEvent event;
+ } resp;
+ union {
+ VirtIOSCSICmdReq cmd;
+ VirtIOSCSICtrlTMFReq tmf;
+ VirtIOSCSICtrlANReq an;
+ } req;
+} VirtIOSCSIReq;
+
static inline int virtio_scsi_get_lun(uint8_t *lun)
{
return ((lun[2] << 8) | lun[3]) & 0x3FFF;
@@ -45,7 +82,7 @@ static inline SCSIDevice *virtio_scsi_device_get(VirtIOSCSI *s, uint8_t *lun)
return scsi_device_get(&s->bus, 0, lun[1], virtio_scsi_get_lun(lun));
}
-void virtio_scsi_init_req(VirtIOSCSI *s, VirtQueue *vq, VirtIOSCSIReq *req)
+static void virtio_scsi_init_req(VirtIOSCSI *s, VirtQueue *vq, VirtIOSCSIReq *req)
{
VirtIODevice *vdev = VIRTIO_DEVICE(s);
const size_t zero_skip =
@@ -58,7 +95,7 @@ void virtio_scsi_init_req(VirtIOSCSI *s, VirtQueue *vq, VirtIOSCSIReq *req)
memset((uint8_t *)req + zero_skip, 0, sizeof(*req) - zero_skip);
}
-void virtio_scsi_free_req(VirtIOSCSIReq *req)
+static void virtio_scsi_free_req(VirtIOSCSIReq *req)
{
qemu_iovec_destroy(&req->resp_iov);
qemu_sglist_destroy(&req->qsgl);
@@ -801,8 +838,8 @@ static void virtio_scsi_reset(VirtIODevice *vdev)
s->events_dropped = false;
}
-void virtio_scsi_push_event(VirtIOSCSI *s, SCSIDevice *dev,
- uint32_t event, uint32_t reason)
+static void virtio_scsi_push_event(VirtIOSCSI *s, SCSIDevice *dev,
+ uint32_t event, uint32_t reason)
{
VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
VirtIOSCSIReq *req;
diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scsi.h
index 2497530064..abdda2cbd0 100644
--- a/include/hw/virtio/virtio-scsi.h
+++ b/include/hw/virtio/virtio-scsi.h
@@ -94,42 +94,6 @@ struct VirtIOSCSI {
uint32_t host_features;
};
-typedef struct VirtIOSCSIReq {
- /* Note:
- * - fields up to resp_iov are initialized by virtio_scsi_init_req;
- * - fields starting at vring are zeroed by virtio_scsi_init_req.
- * */
- VirtQueueElement elem;
-
- VirtIOSCSI *dev;
- VirtQueue *vq;
- QEMUSGList qsgl;
- QEMUIOVector resp_iov;
-
- union {
- /* Used for two-stage request submission */
- QTAILQ_ENTRY(VirtIOSCSIReq) next;
-
- /* Used for cancellation of request during TMFs */
- int remaining;
- };
-
- SCSIRequest *sreq;
- size_t resp_size;
- enum SCSIXferMode mode;
- union {
- VirtIOSCSICmdResp cmd;
- VirtIOSCSICtrlTMFResp tmf;
- VirtIOSCSICtrlANResp an;
- VirtIOSCSIEvent event;
- } resp;
- union {
- VirtIOSCSICmdReq cmd;
- VirtIOSCSICtrlTMFReq tmf;
- VirtIOSCSICtrlANReq an;
- } req;
-} VirtIOSCSIReq;
-
static inline void virtio_scsi_acquire(VirtIOSCSI *s)
{
if (s->ctx) {
@@ -151,10 +115,6 @@ void virtio_scsi_common_realize(DeviceState *dev,
Error **errp);
void virtio_scsi_common_unrealize(DeviceState *dev);
-void virtio_scsi_init_req(VirtIOSCSI *s, VirtQueue *vq, VirtIOSCSIReq *req);
-void virtio_scsi_free_req(VirtIOSCSIReq *req);
-void virtio_scsi_push_event(VirtIOSCSI *s, SCSIDevice *dev,
- uint32_t event, uint32_t reason);
void virtio_scsi_dataplane_setup(VirtIOSCSI *s, Error **errp);
int virtio_scsi_dataplane_start(VirtIODevice *s);
--
2.31.1
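
The header change above is a plain encapsulation cleanup: once no caller outside virtio-scsi.c touches VirtIOSCSIReq, the struct definition and its init/free/push helpers become file-local (static) and drop out of the installed header. A generic sketch of the same pattern, using hypothetical counter.h/counter.c names rather than anything from QEMU:

/* counter.h - public interface; the implementation type is not exposed */
#ifndef COUNTER_H
#define COUNTER_H
void counter_bump_and_print(void);
#endif

/* counter.c - struct and helpers are private to this translation unit */
#include <stdio.h>
#include "counter.h"

typedef struct Counter {
    int value;                        /* hidden from other .c files */
} Counter;

static void counter_inc(Counter *c)  /* static: no longer part of the API */
{
    c->value++;
}

void counter_bump_and_print(void)
{
    static Counter c;                 /* one private instance for the demo */
    counter_inc(&c);
    printf("count = %d\n", c.value);
}

Callers include only counter.h, so the private struct and its helpers can change without touching any other translation unit.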

View File

@@ -151,7 +151,7 @@ Obsoletes: %{name}-block-ssh <= %{epoch}:%{version} \
Summary: QEMU is a machine emulator and virtualizer
Name: qemu-kvm
Version: 7.0.0
-Release: 3%{?rcrel}%{?dist}%{?cc_suffix}
+Release: 4%{?rcrel}%{?dist}%{?cc_suffix}
# Epoch because we pushed a qemu-1.0 package. AIUI this can't ever be dropped
# Epoch 15 used for RHEL 8
# Epoch 17 used for RHEL 9 (due to release versioning offset in RHEL 8.5)
@@ -208,6 +208,38 @@ Patch26: kvm-Enable-virtio-iommu-pci-on-aarch64.patch
Patch27: kvm-sysemu-tpm-Add-a-stub-function-for-TPM_IS_CRB.patch
# For bz#2037612 - [Win11][tpm][QL41112 PF] vfio_listener_region_add received unaligned region
Patch28: kvm-vfio-common-remove-spurious-tpm-crb-cmd-misalignment.patch
# For bz#2041823 - [aarch64][numa] When there are at least 6 Numa nodes serial log shows 'arch topology borken'
Patch29: kvm-qapi-machine.json-Add-cluster-id.patch
# For bz#2041823 - [aarch64][numa] When there are at least 6 Numa nodes serial log shows 'arch topology borken'
Patch30: kvm-qtest-numa-test-Specify-CPU-topology-in-aarch64_numa.patch
# For bz#2041823 - [aarch64][numa] When there are at least 6 Numa nodes serial log shows 'arch topology borken'
Patch31: kvm-hw-arm-virt-Consider-SMP-configuration-in-CPU-topolo.patch
# For bz#2041823 - [aarch64][numa] When there are at least 6 Numa nodes serial log shows 'arch topology borken'
Patch32: kvm-qtest-numa-test-Correct-CPU-and-NUMA-association-in-.patch
# For bz#2041823 - [aarch64][numa] When there are at least 6 Numa nodes serial log shows 'arch topology borken'
Patch33: kvm-hw-arm-virt-Fix-CPU-s-default-NUMA-node-ID.patch
# For bz#2041823 - [aarch64][numa] When there are at least 6 Numa nodes serial log shows 'arch topology borken'
Patch34: kvm-hw-acpi-aml-build-Use-existing-CPU-topology-to-build.patch
# For bz#2079938 - qemu coredump when boot with multi disks (qemu) failed to set up stack guard page: Cannot allocate memory
Patch35: kvm-coroutine-Rename-qemu_coroutine_inc-dec_pool_size.patch
# For bz#2079938 - qemu coredump when boot with multi disks (qemu) failed to set up stack guard page: Cannot allocate memory
Patch36: kvm-coroutine-Revert-to-constant-batch-size.patch
# For bz#2079347 - Guest boot blocked when scsi disks using same iothread and 100% CPU consumption
Patch37: kvm-virtio-scsi-fix-ctrl-and-event-handler-functions-in-.patch
# For bz#2079347 - Guest boot blocked when scsi disks using same iothread and 100% CPU consumption
Patch38: kvm-virtio-scsi-don-t-waste-CPU-polling-the-event-virtqu.patch
# For bz#2079347 - Guest boot blocked when scsi disks using same iothread and 100% CPU consumption
Patch39: kvm-virtio-scsi-clean-up-virtio_scsi_handle_event_vq.patch
# For bz#2079347 - Guest boot blocked when scsi disks using same iothread and 100% CPU consumption
Patch40: kvm-virtio-scsi-clean-up-virtio_scsi_handle_ctrl_vq.patch
# For bz#2079347 - Guest boot blocked when scsi disks using same iothread and 100% CPU consumption
Patch41: kvm-virtio-scsi-clean-up-virtio_scsi_handle_cmd_vq.patch
# For bz#2079347 - Guest boot blocked when scsi disks using same iothread and 100% CPU consumption
Patch42: kvm-virtio-scsi-move-request-related-items-from-.h-to-.c.patch
# For bz#1995710 - RFE: Allow virtio-scsi CD-ROM media change with IOThreads
Patch43: kvm-Revert-virtio-scsi-Reject-scsi-cd-if-data-plane-enab.patch
# For bz#2064530 - Rebuild qemu-kvm with clang-14
Patch44: kvm-migration-Fix-operator-type.patch
# Source-git patches
@@ -1243,6 +1275,34 @@ useradd -r -u 107 -g qemu -G kvm -d / -s /sbin/nologin \
%endif
%changelog
* Thu May 19 2022 Miroslav Rezanina <mrezanin@redhat.com> - 7.0.0-4
- kvm-qapi-machine.json-Add-cluster-id.patch [bz#2041823]
- kvm-qtest-numa-test-Specify-CPU-topology-in-aarch64_numa.patch [bz#2041823]
- kvm-hw-arm-virt-Consider-SMP-configuration-in-CPU-topolo.patch [bz#2041823]
- kvm-qtest-numa-test-Correct-CPU-and-NUMA-association-in-.patch [bz#2041823]
- kvm-hw-arm-virt-Fix-CPU-s-default-NUMA-node-ID.patch [bz#2041823]
- kvm-hw-acpi-aml-build-Use-existing-CPU-topology-to-build.patch [bz#2041823]
- kvm-coroutine-Rename-qemu_coroutine_inc-dec_pool_size.patch [bz#2079938]
- kvm-coroutine-Revert-to-constant-batch-size.patch [bz#2079938]
- kvm-virtio-scsi-fix-ctrl-and-event-handler-functions-in-.patch [bz#2079347]
- kvm-virtio-scsi-don-t-waste-CPU-polling-the-event-virtqu.patch [bz#2079347]
- kvm-virtio-scsi-clean-up-virtio_scsi_handle_event_vq.patch [bz#2079347]
- kvm-virtio-scsi-clean-up-virtio_scsi_handle_ctrl_vq.patch [bz#2079347]
- kvm-virtio-scsi-clean-up-virtio_scsi_handle_cmd_vq.patch [bz#2079347]
- kvm-virtio-scsi-move-request-related-items-from-.h-to-.c.patch [bz#2079347]
- kvm-Revert-virtio-scsi-Reject-scsi-cd-if-data-plane-enab.patch [bz#1995710]
- kvm-migration-Fix-operator-type.patch [bz#2064530]
- Resolves: bz#2041823
([aarch64][numa] When there are at least 6 Numa nodes serial log shows 'arch topology borken')
- Resolves: bz#2079938
(qemu coredump when boot with multi disks (qemu) failed to set up stack guard page: Cannot allocate memory)
- Resolves: bz#2079347
(Guest boot blocked when scsi disks using same iothread and 100% CPU consumption)
- Resolves: bz#1995710
(RFE: Allow virtio-scsi CD-ROM media change with IOThreads)
- Resolves: bz#2064530
  (Rebuild qemu-kvm with clang-14)

* Thu May 12 2022 Miroslav Rezanina <mrezanin@redhat.com> - 7.0.0-3
- kvm-hw-arm-virt-Remove-the-dtb-kaslr-seed-machine-option.patch [bz#2046029]
- kvm-hw-arm-virt-Fix-missing-initialization-in-instance-c.patch [bz#2046029]