import qemu-kvm-6.2.0-11.el9_0.5

CentOS Sources 2022-09-20 07:44:28 -04:00 committed by Stepan Oksanichenko
parent 9d4d37bd2f
commit 1371de00cb
10 changed files with 732 additions and 1 deletion


@@ -0,0 +1,50 @@
From 0cd0c916715c43f71cf249bafa2829b42aa67267 Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Thu, 9 Jun 2022 17:47:12 +0100
Subject: [PATCH 2/2] linux-aio: explain why max batch is checked in
laio_io_unplug()
RH-Author: Stefan Hajnoczi <stefanha@redhat.com>
RH-MergeRequest: 210: linux-aio: fix unbalanced plugged counter in laio_io_unplug()
RH-Bugzilla: 2109569
RH-Acked-by: Hanna Reitz <hreitz@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Acked-by: Stefano Garzarella <sgarzare@redhat.com>
RH-Acked-by: Alberto Faria <None>
RH-Commit: [2/2] caed03e006e8004d3c0670b24e4454a94274d7d9
It may not be obvious why laio_io_unplug() checks max batch. I discussed
this with Stefano and have added a comment summarizing the reason.
Cc: Stefano Garzarella <sgarzare@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Message-id: 20220609164712.1539045-3-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 99b969fbe105117f5af6060d3afef40ca39cc9c1)
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
block/linux-aio.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/block/linux-aio.c b/block/linux-aio.c
index 77f17ad596..85650c4222 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -362,6 +362,12 @@ void laio_io_unplug(BlockDriverState *bs, LinuxAioState *s,
assert(s->io_q.plugged);
s->io_q.plugged--;
+ /*
+ * Why max batch checking is performed here:
+ * Another BDS may have queued requests with a higher dev_max_batch and
+ * therefore in_queue could now exceed our dev_max_batch. Re-check the max
+ * batch so we can honor our device's dev_max_batch.
+ */
if (s->io_q.in_queue >= laio_max_batch(s, dev_max_batch) ||
(!s->io_q.plugged &&
!s->io_q.blocked && !QSIMPLEQ_EMPTY(&s->io_q.pending))) {
--
2.31.1
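
For context on the hunk above: laio_max_batch(s, dev_max_batch) clamps the
AioContext-wide batch limit by the per-device dev_max_batch, which is why
re-checking it in laio_io_unplug() restores this device's smaller limit after
another BDS has grown in_queue. Below is a minimal, self-contained sketch of
that clamping logic, assuming illustrative constants; the real values and code
live in block/linux-aio.c.

    #include <stdint.h>
    #include <stdio.h>

    #define MAX_EVENTS        128  /* illustrative io_setup() queue depth */
    #define DEFAULT_MAX_BATCH  32  /* illustrative engine-wide default */

    /* Smaller of two limits, where 0 means "no limit set". */
    static uint32_t min_non_zero(uint32_t a, uint32_t b)
    {
        if (a == 0) {
            return b;
        }
        if (b == 0) {
            return a;
        }
        return a < b ? a : b;
    }

    /* Sketch of the laio_max_batch() idea: start from the context-wide
     * limit, clamp by the device's dev_max_batch, then by free slots. */
    static uint32_t max_batch_sketch(uint32_t ctx_max_batch,
                                     uint64_t dev_max_batch,
                                     unsigned in_flight)
    {
        uint32_t max_batch = ctx_max_batch ? ctx_max_batch : DEFAULT_MAX_BATCH;
        max_batch = min_non_zero((uint32_t)dev_max_batch, max_batch);
        max_batch = min_non_zero(MAX_EVENTS - in_flight, max_batch);
        return max_batch;
    }

    int main(void)
    {
        /* A device capped at 8 keeps its cap even when the shared
         * context would allow 32 queued requests. */
        printf("%u\n", max_batch_sketch(0, 8, 0)); /* prints 8 */
        return 0;
    }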


@@ -0,0 +1,57 @@
From 9c5a68878b3c6ec16c94dfcfe388a830df8deb2f Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Thu, 9 Jun 2022 17:47:11 +0100
Subject: [PATCH 1/2] linux-aio: fix unbalanced plugged counter in
laio_io_unplug()
RH-Author: Stefan Hajnoczi <stefanha@redhat.com>
RH-MergeRequest: 210: linux-aio: fix unbalanced plugged counter in laio_io_unplug()
RH-Bugzilla: 2109569
RH-Acked-by: Hanna Reitz <hreitz@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Acked-by: Stefano Garzarella <sgarzare@redhat.com>
RH-Acked-by: Alberto Faria <None>
RH-Commit: [1/2] bc1fa9b401cffb712f09935aba861d1a0bf74421
Every laio_io_plug() call has a matching laio_io_unplug() call. There is
a plugged counter that tracks the number of levels of plugging and
allows for nesting.
The plugged counter must reflect the balance between laio_io_plug() and
laio_io_unplug() calls accurately. Otherwise I/O stalls occur since
io_submit(2) calls are skipped while plugged.
Reported-by: Nikolay Tenev <nt@storpool.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Message-id: 20220609164712.1539045-2-stefanha@redhat.com
Cc: Stefano Garzarella <sgarzare@redhat.com>
Fixes: 68d7946648 ("linux-aio: add `dev_max_batch` parameter to laio_io_unplug()")
[Stefano Garzarella suggested adding a Fixes tag.
--Stefan]
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit f387cac5af030a58ac5a0dacf64cab5e5a4fe5c7)
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
block/linux-aio.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/block/linux-aio.c b/block/linux-aio.c
index f53ae72e21..77f17ad596 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -360,8 +360,10 @@ void laio_io_unplug(BlockDriverState *bs, LinuxAioState *s,
uint64_t dev_max_batch)
{
assert(s->io_q.plugged);
+ s->io_q.plugged--;
+
if (s->io_q.in_queue >= laio_max_batch(s, dev_max_batch) ||
- (--s->io_q.plugged == 0 &&
+ (!s->io_q.plugged &&
!s->io_q.blocked && !QSIMPLEQ_EMPTY(&s->io_q.pending))) {
ioq_submit(s);
}
--
2.31.1
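
To see why the pre-fix code leaked the counter: in C the right operand of ||
is evaluated only when the left one is false, so the old `--s->io_q.plugged`
embedded in the condition was skipped whenever the batch-full check fired
first. A toy reproduction of that short-circuit, with the QEMU state reduced
to a bare counter:

    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    static unsigned plugged = 1;  /* stands in for s->io_q.plugged */

    /* Old control flow: the decrement lives inside the || expression,
     * so it never runs when batch_full is already true. */
    static void old_unplug(bool batch_full)
    {
        assert(plugged);
        if (batch_full || --plugged == 0) {
            /* ioq_submit(s) would run here */
        }
    }

    int main(void)
    {
        old_unplug(true);                   /* batch full: decrement skipped */
        printf("plugged = %u\n", plugged);  /* still 1 -> unbalanced */
        return 0;
    }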


@@ -0,0 +1,56 @@
From 1e3faef7048c8d36c9e3f004c7e08d96b30d055f Mon Sep 17 00:00:00 2001
From: Si-Wei Liu <si-wei.liu@oracle.com>
Date: Fri, 6 May 2022 19:28:15 -0700
Subject: [PATCH 4/7] vhost-net: fix improper cleanup in vhost_net_start
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Cindy Lu <lulu@redhat.com>
RH-MergeRequest: 204: vdpa: sync the Multiqueue fixes for vhost-vDPA
RH-Commit: [4/7] 31575b626fd5b381a4640e4f2608033bb141dc62
RH-Bugzilla: 2095795
RH-Acked-by: Laurent Vivier <lvivier@redhat.com>
RH-Acked-by: Eugenio Pérez <eperezma@redhat.com>
RH-Acked-by: Jason Wang <jasowang@redhat.com>
vhost_net_start() missed a corresponding stop_one() upon error from
vhost_set_vring_enable(). While at it, make the error handling for
err_start more robust. No real issue was found due to this, though.
Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <1651890498-24478-5-git-send-email-si-wei.liu@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 6f3910b5eee00b8cc959e94659c0d524c482a418)
Signed-off-by: Cindy Lu <lulu@redhat.com>
---
hw/net/vhost_net.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 30379d2ca4..d6d7c51f62 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -381,6 +381,7 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
r = vhost_set_vring_enable(peer, peer->vring_enable);
if (r < 0) {
+ vhost_net_stop_one(get_vhost_net(peer), dev);
goto err_start;
}
}
@@ -390,7 +391,8 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
err_start:
while (--i >= 0) {
- peer = qemu_get_peer(ncs , i);
+ peer = qemu_get_peer(ncs, i < data_queue_pairs ?
+ i : n->max_queue_pairs);
vhost_net_stop_one(get_vhost_net(peer), dev);
}
e = k->set_guest_notifiers(qbus->parent, total_notifiers, false);
--
2.31.1
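
The subtle part of the err_start fix is the index mapping: the first
data_queue_pairs slots of ncs[] hold the data-queue peers, while the control
vq's peer sits at slot n->max_queue_pairs even when fewer data queues are
active, so unwinding with a plain `i` would stop the wrong peer. A
hypothetical helper (not part of QEMU; names mirror the hunk) capturing just
that arithmetic:

    #include <stdio.h>

    /* Hypothetical helper: map the unwind counter i to its ncs[] slot.
     * Slots 0..data_queue_pairs-1 are data queue pairs; the control vq
     * peer lives at slot max_queue_pairs. */
    static int unwind_peer_index(int i, int data_queue_pairs,
                                 int max_queue_pairs)
    {
        return i < data_queue_pairs ? i : max_queue_pairs;
    }

    int main(void)
    {
        /* 2 data queue pairs active out of a maximum of 4, plus a cvq:
         * unwinding i = 1, 0 hits the data peers, i = 2 hits slot 4. */
        for (int i = 2; i >= 0; i--) {
            printf("i=%d -> ncs[%d]\n", i, unwind_peer_index(i, 2, 4));
        }
        return 0;
    }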


@@ -0,0 +1,58 @@
From 4e7f13419c3c45563210e8aed01ebbdf0dd43a01 Mon Sep 17 00:00:00 2001
From: Si-Wei Liu <si-wei.liu@oracle.com>
Date: Fri, 6 May 2022 19:28:16 -0700
Subject: [PATCH 5/7] vhost-vdpa: backend feature should set only once
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Cindy Lu <lulu@redhat.com>
RH-MergeRequest: 204: vdpa: sync the Multiqueue fixes for vhost-vDPA
RH-Commit: [5/7] 338375ebeab833b8ddd7c7f501aa348f28953778
RH-Bugzilla: 2095795
RH-Acked-by: Laurent Vivier <lvivier@redhat.com>
RH-Acked-by: Eugenio Pérez <eperezma@redhat.com>
RH-Acked-by: Jason Wang <jasowang@redhat.com>
The vhost_vdpa_one_time_request() branch in
vhost_vdpa_set_backend_cap() incorrectly sends down
ioctls on vhost_devs with a non-zero index. This can
result in multiple VHOST_SET_BACKEND_FEATURES
ioctl calls sent down on the vhost-vdpa fd that is
shared between all these vhost_devs.
To fix it, send the ioctl down only once, via the first
vhost_dev (index 0). Toggling the polarity of the
vhost_vdpa_one_time_request() test does the trick.
Fixes: 4d191cfdc7de ("vhost-vdpa: classify one time request")
Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <1651890498-24478-6-git-send-email-si-wei.liu@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 6aee7e4233f6467f69531fcd352adff028f3f5ea)
Signed-off-by: Cindy Lu <lulu@redhat.com>
---
hw/virtio/vhost-vdpa.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 78da48a333..a9be24776a 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -525,7 +525,7 @@ static int vhost_vdpa_set_backend_cap(struct vhost_dev *dev)
features &= f;
- if (vhost_vdpa_one_time_request(dev)) {
+ if (!vhost_vdpa_one_time_request(dev)) {
r = vhost_vdpa_call(dev, VHOST_SET_BACKEND_FEATURES, &features);
if (r) {
return -EFAULT;
--
2.31.1
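
The polarity bug can be counted out directly: with N vhost_devs sharing one
vhost-vdpa fd, the old `if (vhost_vdpa_one_time_request(dev))` (true for every
index except 0) issued the ioctl N-1 times and never for the first device. A
toy tally of both predicates, under that assumption:

    #include <stdbool.h>
    #include <stdio.h>

    /* Count how many of ndevs vhost_devs would issue the shared
     * VHOST_SET_BACKEND_FEATURES ioctl under each predicate. */
    static int ioctls_sent(int ndevs, bool buggy)
    {
        int sent = 0;
        for (int index = 0; index < ndevs; index++) {
            bool one_time_request = index != 0;  /* old helper's result */
            if (buggy ? one_time_request : !one_time_request) {
                sent++;  /* vhost_vdpa_call(dev, ...) would happen here */
            }
        }
        return sent;
    }

    int main(void)
    {
        printf("buggy: %d ioctls, fixed: %d ioctl\n",
               ioctls_sent(4, true), ioctls_sent(4, false)); /* 3 vs 1 */
        return 0;
    }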


@@ -0,0 +1,123 @@
From 0074686ee2de7ffb06b4eb2f9c14a2f7dcea248b Mon Sep 17 00:00:00 2001
From: Si-Wei Liu <si-wei.liu@oracle.com>
Date: Fri, 6 May 2022 19:28:17 -0700
Subject: [PATCH 6/7] vhost-vdpa: change name and polarity for
vhost_vdpa_one_time_request()
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Cindy Lu <lulu@redhat.com>
RH-MergeRequest: 204: vdpa: sync the Multiqueue fixes for vhost-vDPA
RH-Commit: [6/7] 9cc673a62032fdf8c84e3d82ff504ae4f4100ecf
RH-Bugzilla: 2095795
RH-Acked-by: Laurent Vivier <lvivier@redhat.com>
RH-Acked-by: Eugenio Pérez <eperezma@redhat.com>
RH-Acked-by: Jason Wang <jasowang@redhat.com>
The name vhost_vdpa_one_time_request() was confusing. No
matter what it returns, its typical occurrence has always
been at requests that only need to be applied once. And
the name didn't suggest what it actually checks for.
Change it to vhost_vdpa_first_dev(), with the polarity flipped,
for better readability of the code. That way it reflects
what the check is really about.
This call is applicable to requests that perform an operation
only once, before queues are set up, usually at the beginning
of the caller function. Document that requirement in place.
Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Message-Id: <1651890498-24478-7-git-send-email-si-wei.liu@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit d71b0609fc04217e28d17009f04d74b08be6f466)
Signed-off-by: Cindy Lu <lulu@redhat.com>
---
hw/virtio/vhost-vdpa.c | 23 +++++++++++++++--------
1 file changed, 15 insertions(+), 8 deletions(-)
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index a9be24776a..38bbcb3c18 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -319,11 +319,18 @@ static void vhost_vdpa_get_iova_range(struct vhost_vdpa *v)
v->iova_range.last);
}
-static bool vhost_vdpa_one_time_request(struct vhost_dev *dev)
+/*
+ * The use of this function is for requests that only need to be
+ * applied once. Typically such request occurs at the beginning
+ * of operation, and before setting up queues. It should not be
+ * used for request that performs operation until all queues are
+ * set, which would need to check dev->vq_index_end instead.
+ */
+static bool vhost_vdpa_first_dev(struct vhost_dev *dev)
{
struct vhost_vdpa *v = dev->opaque;
- return v->index != 0;
+ return v->index == 0;
}
static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
@@ -351,7 +358,7 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
vhost_vdpa_get_iova_range(v);
- if (vhost_vdpa_one_time_request(dev)) {
+ if (!vhost_vdpa_first_dev(dev)) {
return 0;
}
@@ -468,7 +475,7 @@ static int vhost_vdpa_memslots_limit(struct vhost_dev *dev)
static int vhost_vdpa_set_mem_table(struct vhost_dev *dev,
struct vhost_memory *mem)
{
- if (vhost_vdpa_one_time_request(dev)) {
+ if (!vhost_vdpa_first_dev(dev)) {
return 0;
}
@@ -496,7 +503,7 @@ static int vhost_vdpa_set_features(struct vhost_dev *dev,
{
int ret;
- if (vhost_vdpa_one_time_request(dev)) {
+ if (!vhost_vdpa_first_dev(dev)) {
return 0;
}
@@ -525,7 +532,7 @@ static int vhost_vdpa_set_backend_cap(struct vhost_dev *dev)
features &= f;
- if (!vhost_vdpa_one_time_request(dev)) {
+ if (vhost_vdpa_first_dev(dev)) {
r = vhost_vdpa_call(dev, VHOST_SET_BACKEND_FEATURES, &features);
if (r) {
return -EFAULT;
@@ -670,7 +677,7 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
static int vhost_vdpa_set_log_base(struct vhost_dev *dev, uint64_t base,
struct vhost_log *log)
{
- if (vhost_vdpa_one_time_request(dev)) {
+ if (!vhost_vdpa_first_dev(dev)) {
return 0;
}
@@ -739,7 +746,7 @@ static int vhost_vdpa_get_features(struct vhost_dev *dev,
static int vhost_vdpa_set_owner(struct vhost_dev *dev)
{
- if (vhost_vdpa_one_time_request(dev)) {
+ if (!vhost_vdpa_first_dev(dev)) {
return 0;
}
--
2.31.1


@@ -0,0 +1,48 @@
From b140a9fdeaab84d4a2d8828604ffb6aa8367dcbe Mon Sep 17 00:00:00 2001
From: Si-Wei Liu <si-wei.liu@oracle.com>
Date: Fri, 6 May 2022 19:28:14 -0700
Subject: [PATCH 3/7] vhost-vdpa: fix improper cleanup in net_init_vhost_vdpa
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Cindy Lu <lulu@redhat.com>
RH-MergeRequest: 204: vdpa: sync the Multiqueue fixes for vhost-vDPA
RH-Commit: [3/7] 600138cb9945013179f5a3c14f52d637c4b9f6c7
RH-Bugzilla: 2095795
RH-Acked-by: Laurent Vivier <lvivier@redhat.com>
RH-Acked-by: Eugenio Pérez <eperezma@redhat.com>
RH-Acked-by: Jason Wang <jasowang@redhat.com>
... such that dangling net clients do not leak memory in case of
error.
Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <1651890498-24478-4-git-send-email-si-wei.liu@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 9bd055073e375c8a0d7ebce925e05d914d69fc7f)
Signed-off-by: Cindy Lu <lulu@redhat.com>
---
net/vhost-vdpa.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 25dd6dd975..814f704687 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -306,7 +306,9 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
err:
if (i) {
- qemu_del_net_client(ncs[0]);
+ for (i--; i >= 0; i--) {
+ qemu_del_net_client(ncs[i]);
+ }
}
qemu_close(vdpa_device_fd);
g_free(ncs);
--
2.31.1


@@ -0,0 +1,143 @@
From 370df65141aa7ca10c4eaca8e862580e50dead65 Mon Sep 17 00:00:00 2001
From: Si-Wei Liu <si-wei.liu@oracle.com>
Date: Fri, 6 May 2022 19:28:13 -0700
Subject: [PATCH 2/7] virtio-net: align ctrl_vq index for non-mq guest for
vhost_vdpa
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Cindy Lu <lulu@redhat.com>
RH-MergeRequest: 204: vdpa: sync the Multiqueue fixes for vhost-vDPA
RH-Commit: [2/7] bb12ad61fac82935ef1ca6e37da6da2f04e43d51
RH-Bugzilla: 2095795
RH-Acked-by: Laurent Vivier <lvivier@redhat.com>
RH-Acked-by: Eugenio Pérez <eperezma@redhat.com>
RH-Acked-by: Jason Wang <jasowang@redhat.com>
With an MQ-enabled vdpa device and a guest that doesn't support MQ,
e.g. booting vdpa with mq=on over OVMF (which uses a single vqp),
the assert failure below is seen:
../hw/virtio/vhost-vdpa.c:560: vhost_vdpa_get_vq_index: Assertion `idx >= dev->vq_index && idx < dev->vq_index + dev->nvqs' failed.
0 0x00007f8ce3ff3387 in raise () at /lib64/libc.so.6
1 0x00007f8ce3ff4a78 in abort () at /lib64/libc.so.6
2 0x00007f8ce3fec1a6 in __assert_fail_base () at /lib64/libc.so.6
3 0x00007f8ce3fec252 in () at /lib64/libc.so.6
4 0x0000558f52d79421 in vhost_vdpa_get_vq_index (dev=<optimized out>, idx=<optimized out>) at ../hw/virtio/vhost-vdpa.c:563
5 0x0000558f52d79421 in vhost_vdpa_get_vq_index (dev=<optimized out>, idx=<optimized out>) at ../hw/virtio/vhost-vdpa.c:558
6 0x0000558f52d7329a in vhost_virtqueue_mask (hdev=0x558f55c01800, vdev=0x558f568f91f0, n=2, mask=<optimized out>) at ../hw/virtio/vhost.c:1557
7 0x0000558f52c6b89a in virtio_pci_set_guest_notifier (d=d@entry=0x558f568f0f60, n=n@entry=2, assign=assign@entry=true, with_irqfd=with_irqfd@entry=false)
at ../hw/virtio/virtio-pci.c:974
8 0x0000558f52c6c0d8 in virtio_pci_set_guest_notifiers (d=0x558f568f0f60, nvqs=3, assign=true) at ../hw/virtio/virtio-pci.c:1019
9 0x0000558f52bf091d in vhost_net_start (dev=dev@entry=0x558f568f91f0, ncs=0x558f56937cd0, data_queue_pairs=data_queue_pairs@entry=1, cvq=cvq@entry=1)
at ../hw/net/vhost_net.c:361
10 0x0000558f52d4e5e7 in virtio_net_set_status (status=<optimized out>, n=0x558f568f91f0) at ../hw/net/virtio-net.c:289
11 0x0000558f52d4e5e7 in virtio_net_set_status (vdev=0x558f568f91f0, status=15 '\017') at ../hw/net/virtio-net.c:370
12 0x0000558f52d6c4b2 in virtio_set_status (vdev=vdev@entry=0x558f568f91f0, val=val@entry=15 '\017') at ../hw/virtio/virtio.c:1945
13 0x0000558f52c69eff in virtio_pci_common_write (opaque=0x558f568f0f60, addr=<optimized out>, val=<optimized out>, size=<optimized out>) at ../hw/virtio/virtio-pci.c:1292
14 0x0000558f52d15d6e in memory_region_write_accessor (mr=0x558f568f19d0, addr=20, value=<optimized out>, size=1, shift=<optimized out>, mask=<optimized out>, attrs=...)
at ../softmmu/memory.c:492
15 0x0000558f52d127de in access_with_adjusted_size (addr=addr@entry=20, value=value@entry=0x7f8cdbffe748, size=size@entry=1, access_size_min=<optimized out>, access_size_max=<optimized out>, access_fn=0x558f52d15cf0 <memory_region_write_accessor>, mr=0x558f568f19d0, attrs=...) at ../softmmu/memory.c:554
16 0x0000558f52d157ef in memory_region_dispatch_write (mr=mr@entry=0x558f568f19d0, addr=20, data=<optimized out>, op=<optimized out>, attrs=attrs@entry=...)
at ../softmmu/memory.c:1504
17 0x0000558f52d078e7 in flatview_write_continue (fv=fv@entry=0x7f8accbc3b90, addr=addr@entry=103079215124, attrs=..., ptr=ptr@entry=0x7f8ce6300028, len=len@entry=1, addr1=<optimized out>, l=<optimized out>, mr=0x558f568f19d0) at /home/opc/qemu-upstream/include/qemu/host-utils.h:165
18 0x0000558f52d07b06 in flatview_write (fv=0x7f8accbc3b90, addr=103079215124, attrs=..., buf=0x7f8ce6300028, len=1) at ../softmmu/physmem.c:2822
19 0x0000558f52d0b36b in address_space_write (as=<optimized out>, addr=<optimized out>, attrs=..., buf=buf@entry=0x7f8ce6300028, len=<optimized out>)
at ../softmmu/physmem.c:2914
20 0x0000558f52d0b3da in address_space_rw (as=<optimized out>, addr=<optimized out>, attrs=...,
attrs@entry=..., buf=buf@entry=0x7f8ce6300028, len=<optimized out>, is_write=<optimized out>) at ../softmmu/physmem.c:2924
21 0x0000558f52dced09 in kvm_cpu_exec (cpu=cpu@entry=0x558f55c2da60) at ../accel/kvm/kvm-all.c:2903
22 0x0000558f52dcfabd in kvm_vcpu_thread_fn (arg=arg@entry=0x558f55c2da60) at ../accel/kvm/kvm-accel-ops.c:49
23 0x0000558f52f9f04a in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:556
24 0x00007f8ce4392ea5 in start_thread () at /lib64/libpthread.so.0
25 0x00007f8ce40bb9fd in clone () at /lib64/libc.so.6
The assert failure is caused by the vhost_dev index for the ctrl vq
not being aligned with the actual one in use by the guest.
Upon multiqueue feature negotiation in virtio_net_set_multiqueue(),
if the guest doesn't support multiqueue, the guest vq layout shrinks
to a single queue pair, consisting of 3 vqs in total (rx, tx and ctrl).
This results in ctrl_vq taking a different vhost_dev group index than
the default. We can map the vq to the correct vhost_dev group by
checking whether MQ is supported by the guest and successfully
negotiated. Since the MQ feature is only present along with CTRL_VQ,
we ensure that index 2 refers to the control vq only while MQ is not
supported by the guest.
Fixes: 22288fe ("virtio-net: vhost control virtqueue support")
Suggested-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <1651890498-24478-3-git-send-email-si-wei.liu@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 68b0a6395f36a8f48f56f46d05f30be2067598b0)
Signed-off-by: Cindy Lu <lulu@redhat.com>
---
hw/net/virtio-net.c | 33 +++++++++++++++++++++++++++++++--
1 file changed, 31 insertions(+), 2 deletions(-)
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index ec045c3f41..f118379bb4 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -14,6 +14,7 @@
#include "qemu/osdep.h"
#include "qemu/atomic.h"
#include "qemu/iov.h"
+#include "qemu/log.h"
#include "qemu/main-loop.h"
#include "qemu/module.h"
#include "hw/virtio/virtio.h"
@@ -3163,8 +3164,22 @@ static NetClientInfo net_virtio_info = {
static bool virtio_net_guest_notifier_pending(VirtIODevice *vdev, int idx)
{
VirtIONet *n = VIRTIO_NET(vdev);
- NetClientState *nc = qemu_get_subqueue(n->nic, vq2q(idx));
+ NetClientState *nc;
assert(n->vhost_started);
+ if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_MQ) && idx == 2) {
+ /* Must guard against invalid features and bogus queue index
+ * from being set by malicious guest, or penetrated through
+ * buggy migration stream.
+ */
+ if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "%s: bogus vq index ignored\n", __func__);
+ return false;
+ }
+ nc = qemu_get_subqueue(n->nic, n->max_queue_pairs);
+ } else {
+ nc = qemu_get_subqueue(n->nic, vq2q(idx));
+ }
return vhost_net_virtqueue_pending(get_vhost_net(nc->peer), idx);
}
@@ -3172,8 +3187,22 @@ static void virtio_net_guest_notifier_mask(VirtIODevice *vdev, int idx,
bool mask)
{
VirtIONet *n = VIRTIO_NET(vdev);
- NetClientState *nc = qemu_get_subqueue(n->nic, vq2q(idx));
+ NetClientState *nc;
assert(n->vhost_started);
+ if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_MQ) && idx == 2) {
+ /* Must guard against invalid features and bogus queue index
+ * from being set by malicious guest, or penetrated through
+ * buggy migration stream.
+ */
+ if (!virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "%s: bogus vq index ignored\n", __func__);
+ return;
+ }
+ nc = qemu_get_subqueue(n->nic, n->max_queue_pairs);
+ } else {
+ nc = qemu_get_subqueue(n->nic, vq2q(idx));
+ }
vhost_net_virtqueue_mask(get_vhost_net(nc->peer),
vdev, idx, mask);
}
--
2.31.1
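
The index arithmetic behind this fix: virtio-net maps a virtqueue index to a
queue pair by halving it, so for a non-MQ guest (vq 0 = rx0, vq 1 = tx0,
vq 2 = ctrl) the ctrl vq's index 2 lands on a second data pair the guest never
negotiated, while its NetClientState actually sits at n->max_queue_pairs. A
worked example of the mismatch; vq2q() mirrors the one-liner in
hw/net/virtio-net.c:

    #include <stdio.h>

    /* Same mapping as virtio-net's vq2q(): each queue pair owns an
     * (rx, tx) couple of virtqueues. */
    static int vq2q(int queue_index)
    {
        return queue_index / 2;
    }

    int main(void)
    {
        int max_queue_pairs = 4;  /* device side, created with mq=on */

        /* Non-MQ guest layout: vq 0 = rx0, vq 1 = tx0, vq 2 = ctrl. */
        printf("vq2q(2) = %d (a data pair the guest never set up)\n",
               vq2q(2));
        printf("ctrl vq NetClientState slot = %d\n", max_queue_pairs);
        return 0;
    }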


@@ -0,0 +1,109 @@
From 6182990c1327658c417280a557d16191f70c91b7 Mon Sep 17 00:00:00 2001
From: Si-Wei Liu <si-wei.liu@oracle.com>
Date: Fri, 6 May 2022 19:28:18 -0700
Subject: [PATCH 7/7] virtio-net: don't handle mq request in userspace handler
for vhost-vdpa
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Cindy Lu <lulu@redhat.com>
RH-MergeRequest: 204: vdpa: sync the Multiqueue fixes for vhost-vDPA
RH-Commit: [7/7] 2e636b805ab3f365b1f26fbdac7a7d0ade62508d
RH-Bugzilla: 2095795
RH-Acked-by: Laurent Vivier <lvivier@redhat.com>
RH-Acked-by: Eugenio Pérez <eperezma@redhat.com>
RH-Acked-by: Jason Wang <jasowang@redhat.com>
virtio_queue_host_notifier_read() tends to read the pending event
left behind on the ioeventfd in the vhost_net_stop() path, and
attempts to handle outstanding kicks from the userspace vq handler.
However, in the ctrl_vq handler, virtio_net_handle_mq() has a
recursive call into virtio_net_set_status(), which may lead to a
segmentation fault, as shown in the stack trace below:
0 0x000055f800df1780 in qdev_get_parent_bus (dev=0x0) at ../hw/core/qdev.c:376
1 0x000055f800c68ad8 in virtio_bus_device_iommu_enabled (vdev=vdev@entry=0x0) at ../hw/virtio/virtio-bus.c:331
2 0x000055f800d70d7f in vhost_memory_unmap (dev=<optimized out>) at ../hw/virtio/vhost.c:318
3 0x000055f800d70d7f in vhost_memory_unmap (dev=<optimized out>, buffer=0x7fc19bec5240, len=2052, is_write=1, access_len=2052) at ../hw/virtio/vhost.c:336
4 0x000055f800d71867 in vhost_virtqueue_stop (dev=dev@entry=0x55f8037ccc30, vdev=vdev@entry=0x55f8044ec590, vq=0x55f8037cceb0, idx=0) at ../hw/virtio/vhost.c:1241
5 0x000055f800d7406c in vhost_dev_stop (hdev=hdev@entry=0x55f8037ccc30, vdev=vdev@entry=0x55f8044ec590) at ../hw/virtio/vhost.c:1839
6 0x000055f800bf00a7 in vhost_net_stop_one (net=0x55f8037ccc30, dev=0x55f8044ec590) at ../hw/net/vhost_net.c:315
7 0x000055f800bf0678 in vhost_net_stop (dev=dev@entry=0x55f8044ec590, ncs=0x55f80452bae0, data_queue_pairs=data_queue_pairs@entry=7, cvq=cvq@entry=1)
at ../hw/net/vhost_net.c:423
8 0x000055f800d4e628 in virtio_net_set_status (status=<optimized out>, n=0x55f8044ec590) at ../hw/net/virtio-net.c:296
9 0x000055f800d4e628 in virtio_net_set_status (vdev=vdev@entry=0x55f8044ec590, status=15 '\017') at ../hw/net/virtio-net.c:370
10 0x000055f800d534d8 in virtio_net_handle_ctrl (iov_cnt=<optimized out>, iov=<optimized out>, cmd=0 '\000', n=0x55f8044ec590) at ../hw/net/virtio-net.c:1408
11 0x000055f800d534d8 in virtio_net_handle_ctrl (vdev=0x55f8044ec590, vq=0x7fc1a7e888d0) at ../hw/net/virtio-net.c:1452
12 0x000055f800d69f37 in virtio_queue_host_notifier_read (vq=0x7fc1a7e888d0) at ../hw/virtio/virtio.c:2331
13 0x000055f800d69f37 in virtio_queue_host_notifier_read (n=n@entry=0x7fc1a7e8894c) at ../hw/virtio/virtio.c:3575
14 0x000055f800c688e6 in virtio_bus_cleanup_host_notifier (bus=<optimized out>, n=n@entry=14) at ../hw/virtio/virtio-bus.c:312
15 0x000055f800d73106 in vhost_dev_disable_notifiers (hdev=hdev@entry=0x55f8035b51b0, vdev=vdev@entry=0x55f8044ec590)
at ../../../include/hw/virtio/virtio-bus.h:35
16 0x000055f800bf00b2 in vhost_net_stop_one (net=0x55f8035b51b0, dev=0x55f8044ec590) at ../hw/net/vhost_net.c:316
17 0x000055f800bf0678 in vhost_net_stop (dev=dev@entry=0x55f8044ec590, ncs=0x55f80452bae0, data_queue_pairs=data_queue_pairs@entry=7, cvq=cvq@entry=1)
at ../hw/net/vhost_net.c:423
18 0x000055f800d4e628 in virtio_net_set_status (status=<optimized out>, n=0x55f8044ec590) at ../hw/net/virtio-net.c:296
19 0x000055f800d4e628 in virtio_net_set_status (vdev=0x55f8044ec590, status=15 '\017') at ../hw/net/virtio-net.c:370
20 0x000055f800d6c4b2 in virtio_set_status (vdev=0x55f8044ec590, val=<optimized out>) at ../hw/virtio/virtio.c:1945
21 0x000055f800d11d9d in vm_state_notify (running=running@entry=false, state=state@entry=RUN_STATE_SHUTDOWN) at ../softmmu/runstate.c:333
22 0x000055f800d04e7a in do_vm_stop (state=state@entry=RUN_STATE_SHUTDOWN, send_stop=send_stop@entry=false) at ../softmmu/cpus.c:262
23 0x000055f800d04e99 in vm_shutdown () at ../softmmu/cpus.c:280
24 0x000055f800d126af in qemu_cleanup () at ../softmmu/runstate.c:812
25 0x000055f800ad5b13 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at ../softmmu/main.c:51
For now, temporarily disable handling the MQ request from the ctrl_vq
userspace handler to avoid the recursive virtio_net_set_status()
call. Some rework is needed to allow changing the number of
queues without going through a full virtio_net_set_status cycle,
particularly for the vhost-vdpa backend.
This patch will need to be reverted as soon as future patches that
handle the change of #queues in userspace are merged.
Fixes: 402378407db ("vhost-vdpa: multiqueue support")
Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <1651890498-24478-8-git-send-email-si-wei.liu@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 2a7888cc3aa31faee839fa5dddad354ff8941f4c)
Signed-off-by: Cindy Lu <lulu@redhat.com>
---
hw/net/virtio-net.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index f118379bb4..7e172ef829 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -1373,6 +1373,7 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
{
VirtIODevice *vdev = VIRTIO_DEVICE(n);
uint16_t queue_pairs;
+ NetClientState *nc = qemu_get_queue(n->nic);
virtio_net_disable_rss(n);
if (cmd == VIRTIO_NET_CTRL_MQ_HASH_CONFIG) {
@@ -1404,6 +1405,18 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
return VIRTIO_NET_ERR;
}
+ /* Avoid changing the number of queue_pairs for vdpa device in
+ * userspace handler. A future fix is needed to handle the mq
+ * change in userspace handler with vhost-vdpa. Let's disable
+ * the mq handling from userspace for now and only allow get
+ * done through the kernel. Ripples may be seen when falling
+ * back to userspace, but without doing it qemu process would
+ * crash on a recursive entry to virtio_net_set_status().
+ */
+ if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
+ return VIRTIO_NET_ERR;
+ }
+
n->curr_queue_pairs = queue_pairs;
/* stop the backend before changing the number of queue_pairs to avoid handling a
* disabled queue */
--
2.31.1


@@ -0,0 +1,52 @@
From b956af02efde25f458205cb5bc2c389409564e3f Mon Sep 17 00:00:00 2001
From: Si-Wei Liu <si-wei.liu@oracle.com>
Date: Fri, 6 May 2022 19:28:12 -0700
Subject: [PATCH 1/7] virtio-net: setup vhost_dev and notifiers for cvq only
when feature is negotiated
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Cindy Lu <lulu@redhat.com>
RH-MergeRequest: 204: vdpa: sync the Multiqueue fixes for vhost-vDPA
RH-Commit: [1/7] 4e1e54bbf5d91a590a61e3fee1100716b50837ee
RH-Bugzilla: 2095795
RH-Acked-by: Laurent Vivier <lvivier@redhat.com>
RH-Acked-by: Eugenio Pérez <eperezma@redhat.com>
RH-Acked-by: Jason Wang <jasowang@redhat.com>
When the control virtqueue feature is absent or not negotiated,
vhost_net_start() still tries to set up vhost_dev and install
vhost notifiers for the control virtqueue, which results in
erroneous ioctl calls with an incorrect queue index being sent
down to the driver. Do that only when needed.
Fixes: 22288fe ("virtio-net: vhost control virtqueue support")
Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <1651890498-24478-2-git-send-email-si-wei.liu@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit aa8581945a13712ff3eed0ad3ba7a9664fc1604b)
Signed-off-by: Cindy Lu <lulu@redhat.com>
---
hw/net/virtio-net.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index e1f4748831..ec045c3f41 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -244,7 +244,8 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
VirtIODevice *vdev = VIRTIO_DEVICE(n);
NetClientState *nc = qemu_get_queue(n->nic);
int queue_pairs = n->multiqueue ? n->max_queue_pairs : 1;
- int cvq = n->max_ncs - n->max_queue_pairs;
+ int cvq = virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ) ?
+ n->max_ncs - n->max_queue_pairs : 0;
if (!get_vhost_net(nc->peer)) {
return;
--
2.31.1
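
The one-line change reads most easily as a count: n->max_ncs includes one
extra NetClientState when the device exposes a control queue, so
max_ncs - max_queue_pairs is 1 at the device level, but it must collapse to 0
unless the guest actually negotiated VIRTIO_NET_F_CTRL_VQ. A small sketch of
that computation, with plain parameters standing in for the VirtIONet fields:

    #include <stdbool.h>
    #include <stdio.h>

    /* cvq count after the fix: device capability alone is not enough;
     * the guest must also have negotiated VIRTIO_NET_F_CTRL_VQ. */
    static int cvq_count(bool ctrl_vq_negotiated, int max_ncs,
                         int max_queue_pairs)
    {
        return ctrl_vq_negotiated ? max_ncs - max_queue_pairs : 0;
    }

    int main(void)
    {
        /* 4 data queue pairs plus a control queue => max_ncs = 5. */
        printf("negotiated: %d, not negotiated: %d\n",
               cvq_count(true, 5, 4), cvq_count(false, 5, 4)); /* 1, 0 */
        return 0;
    }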


@@ -151,7 +151,7 @@ Obsoletes: %{name}-block-ssh <= %{epoch}:%{version} \
Summary: QEMU is a machine emulator and virtualizer
Name: qemu-kvm
Version: 6.2.0
-Release: 11%{?rcrel}%{?dist}%{?cc_suffix}.3
+Release: 11%{?rcrel}%{?dist}%{?cc_suffix}.5
# Epoch because we pushed a qemu-1.0 package. AIUI this can't ever be dropped
# Epoch 15 used for RHEL 8
# Epoch 17 used for RHEL 9 (due to release versioning offset in RHEL 8.5)
@@ -326,6 +326,24 @@ Patch85: kvm-RHEL-disable-seqpacket-for-vhost-vsock-device-in-rhe.patch
Patch86: kvm-virtio-net-fix-map-leaking-on-error-during-receive.patch
# For bz#2075640 - CVE-2022-26354 qemu-kvm: QEMU: vhost-vsock: missing virtqueue detach on error can lead to memory leak [rhel-9] [rhel-9.0.0.z]
Patch87: kvm-vhost-vsock-detach-the-virqueue-element-in-case-of-e.patch
+# For bz#2095795 - PXE boot crash qemu when using multiqueue vDPA [rhel-9.0.0.z]
+Patch88: kvm-virtio-net-setup-vhost_dev-and-notifiers-for-cvq-onl.patch
+# For bz#2095795 - PXE boot crash qemu when using multiqueue vDPA [rhel-9.0.0.z]
+Patch89: kvm-virtio-net-align-ctrl_vq-index-for-non-mq-guest-for-.patch
+# For bz#2095795 - PXE boot crash qemu when using multiqueue vDPA [rhel-9.0.0.z]
+Patch90: kvm-vhost-vdpa-fix-improper-cleanup-in-net_init_vhost_vd.patch
+# For bz#2095795 - PXE boot crash qemu when using multiqueue vDPA [rhel-9.0.0.z]
+Patch91: kvm-vhost-net-fix-improper-cleanup-in-vhost_net_start.patch
+# For bz#2095795 - PXE boot crash qemu when using multiqueue vDPA [rhel-9.0.0.z]
+Patch92: kvm-vhost-vdpa-backend-feature-should-set-only-once.patch
+# For bz#2095795 - PXE boot crash qemu when using multiqueue vDPA [rhel-9.0.0.z]
+Patch93: kvm-vhost-vdpa-change-name-and-polarity-for-vhost_vdpa_o.patch
+# For bz#2095795 - PXE boot crash qemu when using multiqueue vDPA [rhel-9.0.0.z]
+Patch94: kvm-virtio-net-don-t-handle-mq-request-in-userspace-hand.patch
+# For bz#2109569 - Stalled IO Operations in VM [rhel-9.0.0.z]
+Patch95: kvm-linux-aio-fix-unbalanced-plugged-counter-in-laio_io_.patch
+# For bz#2109569 - Stalled IO Operations in VM [rhel-9.0.0.z]
+Patch96: kvm-linux-aio-explain-why-max-batch-is-checked-in-laio_i.patch
# Source-git patches
@@ -1376,6 +1394,23 @@ useradd -r -u 107 -g qemu -G kvm -d / -s /sbin/nologin \
%endif
%changelog
+* Tue Aug 30 2022 Miroslav Rezanina <mrezanin@redhat.com> - 6.2.0-11.el9_0.5
+- kvm-linux-aio-fix-unbalanced-plugged-counter-in-laio_io_.patch [bz#2109569]
+- kvm-linux-aio-explain-why-max-batch-is-checked-in-laio_i.patch [bz#2109569]
+- Resolves: bz#2109569
+  (Stalled IO Operations in VM [rhel-9.0.0.z])
+
+* Fri Aug 05 2022 Miroslav Rezanina <mrezanin@redhat.com> - 6.2.0-11.el9_0.4
+- kvm-virtio-net-setup-vhost_dev-and-notifiers-for-cvq-onl.patch [bz#2095795]
+- kvm-virtio-net-align-ctrl_vq-index-for-non-mq-guest-for-.patch [bz#2095795]
+- kvm-vhost-vdpa-fix-improper-cleanup-in-net_init_vhost_vd.patch [bz#2095795]
+- kvm-vhost-net-fix-improper-cleanup-in-vhost_net_start.patch [bz#2095795]
+- kvm-vhost-vdpa-backend-feature-should-set-only-once.patch [bz#2095795]
+- kvm-vhost-vdpa-change-name-and-polarity-for-vhost_vdpa_o.patch [bz#2095795]
+- kvm-virtio-net-don-t-handle-mq-request-in-userspace-hand.patch [bz#2095795]
+- Resolves: bz#2095795
+  (PXE boot crash qemu when using multiqueue vDPA [rhel-9.0.0.z])
+
* Mon May 09 2022 Miroslav Rezanina <mrezanin@redhat.com> - 6.2.0-11.el9_0.3
- kvm-RHEL-disable-seqpacket-for-vhost-vsock-device-in-rhe.patch [bz#2071102]
- kvm-virtio-net-fix-map-leaking-on-error-during-receive.patch [bz#2075635]