qemu-kvm/kvm-virtio-scsi-don-t-waste-CPU-polling-the-event-virtqu.patch

* Thu May 19 2022 Miroslav Rezanina <mrezanin@redhat.com> - 7.0.0-4
- kvm-qapi-machine.json-Add-cluster-id.patch [bz#2041823]
- kvm-qtest-numa-test-Specify-CPU-topology-in-aarch64_numa.patch [bz#2041823]
- kvm-hw-arm-virt-Consider-SMP-configuration-in-CPU-topolo.patch [bz#2041823]
- kvm-qtest-numa-test-Correct-CPU-and-NUMA-association-in-.patch [bz#2041823]
- kvm-hw-arm-virt-Fix-CPU-s-default-NUMA-node-ID.patch [bz#2041823]
- kvm-hw-acpi-aml-build-Use-existing-CPU-topology-to-build.patch [bz#2041823]
- kvm-coroutine-Rename-qemu_coroutine_inc-dec_pool_size.patch [bz#2079938]
- kvm-coroutine-Revert-to-constant-batch-size.patch [bz#2079938]
- kvm-virtio-scsi-fix-ctrl-and-event-handler-functions-in-.patch [bz#2079347]
- kvm-virtio-scsi-don-t-waste-CPU-polling-the-event-virtqu.patch [bz#2079347]
- kvm-virtio-scsi-clean-up-virtio_scsi_handle_event_vq.patch [bz#2079347]
- kvm-virtio-scsi-clean-up-virtio_scsi_handle_ctrl_vq.patch [bz#2079347]
- kvm-virtio-scsi-clean-up-virtio_scsi_handle_cmd_vq.patch [bz#2079347]
- kvm-virtio-scsi-move-request-related-items-from-.h-to-.c.patch [bz#2079347]
- kvm-Revert-virtio-scsi-Reject-scsi-cd-if-data-plane-enab.patch [bz#1995710]
- kvm-migration-Fix-operator-type.patch [bz#2064530]
- Resolves: bz#2041823
  ([aarch64][numa] When there are at least 6 Numa nodes serial log shows 'arch topology borken')
- Resolves: bz#2079938
  (qemu coredump when boot with multi disks (qemu) failed to set up stack guard page: Cannot allocate memory)
- Resolves: bz#2079347
  (Guest boot blocked when scsi disks using same iothread and 100% CPU consumption)
- Resolves: bz#1995710
  (RFE: Allow virtio-scsi CD-ROM media change with IOThreads)
- Resolves: bz#2064530
  (Rebuild qemu-kvm with clang-14)

From 1b609b2af303fb6498b2ef94ac4f2e900dc8c1b2 Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Tue, 17 May 2022 09:27:45 +0100
Subject: [PATCH 10/16] virtio-scsi: don't waste CPU polling the event
virtqueue
RH-Author: Stefan Hajnoczi <stefanha@redhat.com>
RH-MergeRequest: 88: virtio-scsi: fix 100% CPU consumption with IOThreads
RH-Commit: [2/6] 7e613d9b9fa8ceb668c78cb3ce7ebe1d73a004b5 (stefanha/centos-stream-qemu-kvm)
RH-Bugzilla: 2079347
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Acked-by: Stefano Garzarella <sgarzare@redhat.com>

The virtio-scsi event virtqueue is not emptied by its handler function.
This is typical for rx virtqueues where the device uses buffers when
some event occurs (e.g. a packet is received, an error condition
happens, etc).

Polling non-empty virtqueues wastes CPU cycles. We are not waiting for
new buffers to become available, we are waiting for an event to occur,
so it's a misuse of CPU resources to poll for buffers.
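
To make the failure mode concrete, here is a minimal, self-contained
sketch (plain C, not QEMU code; queued_buffers, vq_buffers_available and
handle_event are invented stand-ins) of a poll loop whose callback keeps
reporting a virtqueue as non-empty while the handler never consumes its
buffers:

    /* Toy model of busy-polling a virtqueue that is never emptied. */
    #include <stdbool.h>
    #include <stdio.h>

    static int queued_buffers = 4;  /* event buffers pre-posted by the guest */

    /* Poll callback: reports "work available" while buffers sit in the queue. */
    static bool vq_buffers_available(void)
    {
        return queued_buffers > 0;
    }

    /* Event-queue handler: buffers are only used when an event occurs, so in
     * the common case it returns without popping anything. */
    static void handle_event(void)
    {
    }

    int main(void)
    {
        /* Bounded here for demonstration; the real poll loop keeps going and
         * burns a whole CPU in the IOThread. */
        for (int i = 0; i < 5; i++) {
            if (vq_buffers_available()) {
                handle_event();
                printf("poll %d: queue still non-empty, spinning\n", i);
            }
        }
        return 0;
    }
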
Introduce the new virtio_queue_aio_attach_host_notifier_no_poll() API,
which is identical to virtio_queue_aio_attach_host_notifier() except
that it does not poll the virtqueue.
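
In terms of the event-loop registration the difference is just which
callbacks are passed to aio_set_event_notifier(). The _no_poll body is
shown in the diff below; the polling variant is paraphrased here from
the surrounding code, so treat its io_poll/io_poll_ready arguments as a
sketch rather than the exact upstream source:

    /* Polling variant (sketch of virtio_queue_aio_attach_host_notifier):
     * io_poll/io_poll_ready let the AioContext busy-poll the virtqueue
     * between guest notifications. */
    aio_set_event_notifier(ctx, &vq->host_notifier, true,
                           virtio_queue_host_notifier_read,
                           virtio_queue_host_notifier_aio_poll,
                           virtio_queue_host_notifier_aio_poll_ready);

    /* No-poll variant added by this patch: same read handler, NULL poll
     * callbacks, so the event loop blocks on the host notifier instead of
     * spinning. */
    aio_set_event_notifier(ctx, &vq->host_notifier, true,
                           virtio_queue_host_notifier_read,
                           NULL, NULL);
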
Before this patch the following command-line consumed 100% CPU in the
IOThread polling and calling virtio_scsi_handle_event():

  $ qemu-system-x86_64 -M accel=kvm -m 1G -cpu host \
        --object iothread,id=iothread0 \
        --device virtio-scsi-pci,iothread=iothread0 \
        --blockdev file,filename=test.img,aio=native,cache.direct=on,node-name=drive0 \
        --device scsi-hd,drive=drive0

After this patch CPU is no longer wasted.

Reported-by: Nir Soffer <nsoffer@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Tested-by: Nir Soffer <nsoffer@redhat.com>
Message-id: 20220427143541.119567-3-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 38738f7dbbda90fbc161757b7f4be35b52205552)
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/scsi/virtio-scsi-dataplane.c | 2 +-
hw/virtio/virtio.c | 13 +++++++++++++
include/hw/virtio/virtio.h | 1 +
3 files changed, 15 insertions(+), 1 deletion(-)
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index 29575cbaf6..8bb6e6acfc 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -138,7 +138,7 @@ int virtio_scsi_dataplane_start(VirtIODevice *vdev)
aio_context_acquire(s->ctx);
virtio_queue_aio_attach_host_notifier(vs->ctrl_vq, s->ctx);
- virtio_queue_aio_attach_host_notifier(vs->event_vq, s->ctx);
+ virtio_queue_aio_attach_host_notifier_no_poll(vs->event_vq, s->ctx);
for (i = 0; i < vs->conf.num_queues; i++) {
virtio_queue_aio_attach_host_notifier(vs->cmd_vqs[i], s->ctx);
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 9d637e043e..67a873f54a 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -3534,6 +3534,19 @@ void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
virtio_queue_host_notifier_aio_poll_end);
}
+/*
+ * Same as virtio_queue_aio_attach_host_notifier() but without polling. Use
+ * this for rx virtqueues and similar cases where the virtqueue handler
+ * function does not pop all elements. When the virtqueue is left non-empty
+ * polling consumes CPU cycles and should not be used.
+ */
+void virtio_queue_aio_attach_host_notifier_no_poll(VirtQueue *vq, AioContext *ctx)
+{
+ aio_set_event_notifier(ctx, &vq->host_notifier, true,
+ virtio_queue_host_notifier_read,
+ NULL, NULL);
+}
+
void virtio_queue_aio_detach_host_notifier(VirtQueue *vq, AioContext *ctx)
{
aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL, NULL, NULL);
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index b31c4507f5..b62a35fdca 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -317,6 +317,7 @@ EventNotifier *virtio_queue_get_host_notifier(VirtQueue *vq);
void virtio_queue_set_host_notifier_enabled(VirtQueue *vq, bool enabled);
void virtio_queue_host_notifier_read(EventNotifier *n);
void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx);
+void virtio_queue_aio_attach_host_notifier_no_poll(VirtQueue *vq, AioContext *ctx);
void virtio_queue_aio_detach_host_notifier(VirtQueue *vq, AioContext *ctx);
VirtQueue *virtio_vector_first_queue(VirtIODevice *vdev, uint16_t vector);
VirtQueue *virtio_vector_next_queue(VirtQueue *vq);
--
2.31.1