* Fri Aug 26 2022 Miroslav Rezanina <mrezanin@redhat.com> - 7.0.0-12

- kvm-scsi-generic-Fix-emulated-block-limits-VPD-page.patch [bz#2120275]
- kvm-vhost-Get-vring-base-from-vq-not-svq.patch [bz#2114060]
- kvm-vdpa-Skip-the-maps-not-in-the-iova-tree.patch [bz#2114060]
- kvm-vdpa-do-not-save-failed-dma-maps-in-SVQ-iova-tree.patch [bz#2114060]
- kvm-util-Return-void-on-iova_tree_remove.patch [bz#2114060]
- kvm-util-accept-iova_tree_remove_parameter-by-value.patch [bz#2114060]
- kvm-vdpa-Remove-SVQ-vring-from-iova_tree-at-shutdown.patch [bz#2114060]
- kvm-vdpa-Make-SVQ-vring-unmapping-return-void.patch [bz#2114060]
- kvm-vhost-Always-store-new-kick-fd-on-vhost_svq_set_svq_.patch [bz#2114060]
- kvm-vdpa-Use-ring-hwaddr-at-vhost_vdpa_svq_unmap_ring.patch [bz#2114060]
- kvm-vhost-stop-transfer-elem-ownership-in-vhost_handle_g.patch [bz#2114060]
- kvm-vhost-use-SVQ-element-ndescs-instead-of-opaque-data-.patch [bz#2114060]
- kvm-vhost-Delete-useless-read-memory-barrier.patch [bz#2114060]
- kvm-vhost-Do-not-depend-on-NULL-VirtQueueElement-on-vhos.patch [bz#2114060]
- kvm-vhost_net-Add-NetClientInfo-start-callback.patch [bz#2114060]
- kvm-vhost_net-Add-NetClientInfo-stop-callback.patch [bz#2114060]
- kvm-vdpa-add-net_vhost_vdpa_cvq_info-NetClientInfo.patch [bz#2114060]
- kvm-vdpa-Move-command-buffers-map-to-start-of-net-device.patch [bz#2114060]
- kvm-vdpa-extract-vhost_vdpa_net_cvq_add-from-vhost_vdpa_.patch [bz#2114060]
- kvm-vhost_net-add-NetClientState-load-callback.patch [bz#2114060]
- kvm-vdpa-Add-virtio-net-mac-address-via-CVQ-at-start.patch [bz#2114060]
- kvm-vdpa-Delete-CVQ-migration-blocker.patch [bz#2114060]
- kvm-virtio-scsi-fix-race-in-virtio_scsi_dataplane_start.patch [bz#2099541]
- Resolves: bz#2120275
  (Wrong max_sectors_kb and Maximum transfer length on the pass-through device [rhel-9.1])
- Resolves: bz#2114060
  (vDPA state restore support through control virtqueue in Qemu)
- Resolves: bz#2099541
  (qemu coredump with error Assertion `qemu_mutex_iothread_locked()' failed when repeatly hotplug/unplug disks in pause status)
Commit 716a3942b3 (parent 85d5f0ed1b) by Miroslav Rezanina, 2022-08-26 03:05:53 -04:00. 24 changed files with 2199 additions and 1 deletion.

From e5360c1e76fee8b8dcbcba7efbb1e36f0b48ac40 Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Mon, 22 Aug 2022 14:53:20 +0200
Subject: [PATCH 01/23] scsi-generic: Fix emulated block limits VPD page
RH-Author: Kevin Wolf <kwolf@redhat.com>
RH-MergeRequest: 115: scsi-generic: Fix emulated block limits VPD page
RH-Bugzilla: 2120275
RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RH-Acked-by: Hanna Reitz <hreitz@redhat.com>
RH-Acked-by: Paolo Bonzini <pbonzini@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Commit: [1/1] 336ba583311a80beeadd1900336056404f63211a (kmwolf/centos-qemu-kvm)
Commits 01ef8185b80 and 24b36e9813e updated the way the maximum
transfer length is calculated when patching the block limits VPD page in an
INQUIRY response.
The same updates also need to be made for the case where the host device
does not support the block limits VPD page at all and we emulate the
whole page.
Without this fix, on host block devices a maximum transfer length of
(INT_MAX - sector_size) bytes is advertised to the guest, resulting in
I/O errors when a request that exceeds the host limits is made by the
guest. (Prior to commit 24b36e9813e, this code path would use the
max_transfer value from the host instead of INT_MAX, but still miss the
fix from 01ef8185b80 where max_transfer is also capped to max_iov
host pages, so it would be less wrong, but still wrong.)
Cc: qemu-stable@nongnu.org
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2096251
Fixes: 01ef8185b809af9d287e1a03a3f9d8ea8231118a
Fixes: 24b36e9813ec15da7db62e3b3621730710c5f020
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20220822125320.48257-1-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 51e15194b0a091e5c40aab2eb234a1d36c5c58ee)
Resolved conflict: qemu_real_host_page_size() is a getter function in
current upstream, but still just a public global variable downstream.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
hw/scsi/scsi-generic.c | 21 ++++++++++++++-------
1 file changed, 14 insertions(+), 7 deletions(-)
diff --git a/hw/scsi/scsi-generic.c b/hw/scsi/scsi-generic.c
index 0306ccc7b1..3742899839 100644
--- a/hw/scsi/scsi-generic.c
+++ b/hw/scsi/scsi-generic.c
@@ -147,6 +147,18 @@ static int execute_command(BlockBackend *blk,
return 0;
}
+static uint64_t calculate_max_transfer(SCSIDevice *s)
+{
+ uint64_t max_transfer = blk_get_max_hw_transfer(s->conf.blk);
+ uint32_t max_iov = blk_get_max_hw_iov(s->conf.blk);
+
+ assert(max_transfer);
+ max_transfer = MIN_NON_ZERO(max_transfer,
+ max_iov * qemu_real_host_page_size);
+
+ return max_transfer / s->blocksize;
+}
+
static int scsi_handle_inquiry_reply(SCSIGenericReq *r, SCSIDevice *s, int len)
{
uint8_t page, page_idx;
@@ -179,12 +191,7 @@ static int scsi_handle_inquiry_reply(SCSIGenericReq *r, SCSIDevice *s, int len)
(r->req.cmd.buf[1] & 0x01)) {
page = r->req.cmd.buf[2];
if (page == 0xb0) {
- uint64_t max_transfer = blk_get_max_hw_transfer(s->conf.blk);
- uint32_t max_iov = blk_get_max_hw_iov(s->conf.blk);
-
- assert(max_transfer);
- max_transfer = MIN_NON_ZERO(max_transfer, max_iov * qemu_real_host_page_size)
- / s->blocksize;
+ uint64_t max_transfer = calculate_max_transfer(s);
stl_be_p(&r->buf[8], max_transfer);
/* Also take care of the opt xfer len. */
stl_be_p(&r->buf[12],
@@ -230,7 +237,7 @@ static int scsi_generic_emulate_block_limits(SCSIGenericReq *r, SCSIDevice *s)
uint8_t buf[64];
SCSIBlockLimits bl = {
- .max_io_sectors = blk_get_max_transfer(s->conf.blk) / s->blocksize
+ .max_io_sectors = calculate_max_transfer(s),
};
memset(r->buf, 0, r->buflen);
--
2.31.1
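
For illustration, here is a standalone sketch of the capping arithmetic that
this patch centralizes in calculate_max_transfer(). It is not QEMU code: the
host limits are hypothetical example values, and MIN_NON_ZERO is
re-implemented locally.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Local stand-in for QEMU's MIN_NON_ZERO: smallest non-zero argument. */
    #define MIN_NON_ZERO(a, b) \
        ((a) == 0 ? (b) : ((b) == 0 ? (a) : ((a) < (b) ? (a) : (b))))

    int main(void)
    {
        uint64_t max_hw_transfer = 512 * 1024; /* host limit, bytes (example) */
        uint32_t max_hw_iov = 64;              /* host max segments (example) */
        uint64_t page_size = 4096;             /* host page size (example) */
        uint32_t blocksize = 512;              /* logical block size (example) */

        /* Cap by the iovec limit first, then convert bytes to blocks. */
        uint64_t max_transfer = MIN_NON_ZERO(max_hw_transfer,
                                             (uint64_t)max_hw_iov * page_size);
        printf("max transfer: %" PRIu64 " blocks\n", max_transfer / blocksize);
        return 0;
    }

With these example numbers the iovec limit (64 x 4 KiB = 256 KiB) is the
binding constraint, which is exactly the cap the emulated block limits path
was missing before this fix.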

From 74c829f82eafa8e42ae94f7ace55c8aaed3bb5f4 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Wed, 27 Apr 2022 17:49:31 +0200
Subject: [PATCH 05/23] util: Return void on iova_tree_remove
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 116: vdpa: Restore device state on destination
RH-Bugzilla: 2114060
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [4/21] 252287acca896eba7b5d2b62fc6247cfc565ba57 (eperezmartin/qemu-kvm)
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2114060
Upstream status: Merged
It always returns IOVA_OK, so nobody uses the return value.
Acked-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <20220427154931.3166388-1-eperezma@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
(cherry picked from commit 832fef7cc14d65f99d523f883ef384014e6476a7)
---
include/qemu/iova-tree.h | 4 +---
util/iova-tree.c | 4 +---
2 files changed, 2 insertions(+), 6 deletions(-)
diff --git a/include/qemu/iova-tree.h b/include/qemu/iova-tree.h
index c938fb0793..16bbfdf5f8 100644
--- a/include/qemu/iova-tree.h
+++ b/include/qemu/iova-tree.h
@@ -72,10 +72,8 @@ int iova_tree_insert(IOVATree *tree, const DMAMap *map);
* provided. The range does not need to be exactly what has inserted,
* all the mappings that are included in the provided range will be
* removed from the tree. Here map->translated_addr is meaningless.
- *
- * Return: 0 if succeeded, or <0 if error.
*/
-int iova_tree_remove(IOVATree *tree, const DMAMap *map);
+void iova_tree_remove(IOVATree *tree, const DMAMap *map);
/**
* iova_tree_find:
diff --git a/util/iova-tree.c b/util/iova-tree.c
index 6dff29c1f6..fee530a579 100644
--- a/util/iova-tree.c
+++ b/util/iova-tree.c
@@ -164,15 +164,13 @@ void iova_tree_foreach(IOVATree *tree, iova_tree_iterator iterator)
g_tree_foreach(tree->tree, iova_tree_traverse, iterator);
}
-int iova_tree_remove(IOVATree *tree, const DMAMap *map)
+void iova_tree_remove(IOVATree *tree, const DMAMap *map)
{
const DMAMap *overlap;
while ((overlap = iova_tree_find(tree, map))) {
g_tree_remove(tree->tree, overlap);
}
-
- return IOVA_OK;
}
/**
--
2.31.1

From 90697579eaf598614293d75f684d6e8c55f8ab9b Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Tue, 23 Aug 2022 20:20:04 +0200
Subject: [PATCH 06/23] util: accept iova_tree_remove_parameter by value
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 116: vdpa: Restore device state on destination
RH-Bugzilla: 2114060
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [5/21] ddaf052789e7ab3c67a77c038347113301587ffb (eperezmartin/qemu-kvm)
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2114060
Upstream status: git@github.com:jasowang/qemu.git net-next
It's convenient to call iova_tree_remove with a map returned from
iova_tree_find or iova_tree_find_iova. With the current code this is not
possible, since the entry would be freed and then searched for again
during removal.
Fix it by accepting the map by value, forcing a copy of the argument.
No Fixes tag is applied, since there is no such use at the moment.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit d69ba6677405de86b3b617fc7688b549f84cf013)
---
hw/i386/intel_iommu.c | 6 +++---
hw/virtio/vhost-iova-tree.c | 2 +-
hw/virtio/vhost-iova-tree.h | 2 +-
hw/virtio/vhost-vdpa.c | 6 +++---
include/qemu/iova-tree.h | 2 +-
net/vhost-vdpa.c | 4 ++--
util/iova-tree.c | 4 ++--
7 files changed, 13 insertions(+), 13 deletions(-)
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index c64aa81a83..6738cf0929 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -1157,7 +1157,7 @@ static int vtd_page_walk_one(IOMMUTLBEvent *event, vtd_page_walk_info *info)
return ret;
}
/* Drop any existing mapping */
- iova_tree_remove(as->iova_tree, &target);
+ iova_tree_remove(as->iova_tree, target);
/* Recover the correct type */
event->type = IOMMU_NOTIFIER_MAP;
entry->perm = cache_perm;
@@ -1170,7 +1170,7 @@ static int vtd_page_walk_one(IOMMUTLBEvent *event, vtd_page_walk_info *info)
trace_vtd_page_walk_one_skip_unmap(entry->iova, entry->addr_mask);
return 0;
}
- iova_tree_remove(as->iova_tree, &target);
+ iova_tree_remove(as->iova_tree, target);
}
trace_vtd_page_walk_one(info->domain_id, entry->iova,
@@ -3532,7 +3532,7 @@ static void vtd_address_space_unmap(VTDAddressSpace *as, IOMMUNotifier *n)
map.iova = n->start;
map.size = size;
- iova_tree_remove(as->iova_tree, &map);
+ iova_tree_remove(as->iova_tree, map);
}
static void vtd_address_space_unmap_all(IntelIOMMUState *s)
diff --git a/hw/virtio/vhost-iova-tree.c b/hw/virtio/vhost-iova-tree.c
index 55fed1fefb..1339a4de8b 100644
--- a/hw/virtio/vhost-iova-tree.c
+++ b/hw/virtio/vhost-iova-tree.c
@@ -104,7 +104,7 @@ int vhost_iova_tree_map_alloc(VhostIOVATree *tree, DMAMap *map)
* @iova_tree: The vhost iova tree
* @map: The map to remove
*/
-void vhost_iova_tree_remove(VhostIOVATree *iova_tree, const DMAMap *map)
+void vhost_iova_tree_remove(VhostIOVATree *iova_tree, DMAMap map)
{
iova_tree_remove(iova_tree->iova_taddr_map, map);
}
diff --git a/hw/virtio/vhost-iova-tree.h b/hw/virtio/vhost-iova-tree.h
index 6a4f24e0f9..4adfd79ff0 100644
--- a/hw/virtio/vhost-iova-tree.h
+++ b/hw/virtio/vhost-iova-tree.h
@@ -22,6 +22,6 @@ G_DEFINE_AUTOPTR_CLEANUP_FUNC(VhostIOVATree, vhost_iova_tree_delete);
const DMAMap *vhost_iova_tree_find_iova(const VhostIOVATree *iova_tree,
const DMAMap *map);
int vhost_iova_tree_map_alloc(VhostIOVATree *iova_tree, DMAMap *map);
-void vhost_iova_tree_remove(VhostIOVATree *iova_tree, const DMAMap *map);
+void vhost_iova_tree_remove(VhostIOVATree *iova_tree, DMAMap map);
#endif
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index cc15b7d8ee..39aa70f52d 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -238,7 +238,7 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
fail_map:
if (v->shadow_vqs_enabled) {
- vhost_iova_tree_remove(v->iova_tree, &mem_region);
+ vhost_iova_tree_remove(v->iova_tree, mem_region);
}
fail:
@@ -298,7 +298,7 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
return;
}
iova = result->iova;
- vhost_iova_tree_remove(v->iova_tree, result);
+ vhost_iova_tree_remove(v->iova_tree, *result);
}
vhost_vdpa_iotlb_batch_begin_once(v);
ret = vhost_vdpa_dma_unmap(v, iova, int128_get64(llsize));
@@ -942,7 +942,7 @@ static bool vhost_vdpa_svq_map_ring(struct vhost_vdpa *v, DMAMap *needle,
needle->perm == IOMMU_RO);
if (unlikely(r != 0)) {
error_setg_errno(errp, -r, "Cannot map region to device");
- vhost_iova_tree_remove(v->iova_tree, needle);
+ vhost_iova_tree_remove(v->iova_tree, *needle);
}
return r == 0;
diff --git a/include/qemu/iova-tree.h b/include/qemu/iova-tree.h
index 16bbfdf5f8..8528e5c98f 100644
--- a/include/qemu/iova-tree.h
+++ b/include/qemu/iova-tree.h
@@ -73,7 +73,7 @@ int iova_tree_insert(IOVATree *tree, const DMAMap *map);
* all the mappings that are included in the provided range will be
* removed from the tree. Here map->translated_addr is meaningless.
*/
-void iova_tree_remove(IOVATree *tree, const DMAMap *map);
+void iova_tree_remove(IOVATree *tree, DMAMap map);
/**
* iova_tree_find:
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 411e71e6c2..ba65736f83 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -244,7 +244,7 @@ static void vhost_vdpa_cvq_unmap_buf(struct vhost_vdpa *v, void *addr)
error_report("Device cannot unmap: %s(%d)", g_strerror(r), r);
}
- vhost_iova_tree_remove(tree, map);
+ vhost_iova_tree_remove(tree, *map);
}
static size_t vhost_vdpa_net_cvq_cmd_len(void)
@@ -297,7 +297,7 @@ static bool vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v,
return true;
dma_map_err:
- vhost_iova_tree_remove(v->iova_tree, &map);
+ vhost_iova_tree_remove(v->iova_tree, map);
return false;
}
diff --git a/util/iova-tree.c b/util/iova-tree.c
index fee530a579..536789797e 100644
--- a/util/iova-tree.c
+++ b/util/iova-tree.c
@@ -164,11 +164,11 @@ void iova_tree_foreach(IOVATree *tree, iova_tree_iterator iterator)
g_tree_foreach(tree->tree, iova_tree_traverse, iterator);
}
-void iova_tree_remove(IOVATree *tree, const DMAMap *map)
+void iova_tree_remove(IOVATree *tree, DMAMap map)
{
const DMAMap *overlap;
- while ((overlap = iova_tree_find(tree, map))) {
+ while ((overlap = iova_tree_find(tree, &map))) {
g_tree_remove(tree->tree, overlap);
}
}
--
2.31.1
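
For illustration, a minimal self-contained sketch (hypothetical array-backed
store, not QEMU's GTree) of why taking the argument by value is safe: the
caller can pass the very entry returned by a find, because the map is copied
before removal invalidates the backing storage.

    #include <stdint.h>
    #include <stdio.h>

    typedef struct { uint64_t iova; uint64_t size; } DMAMap;

    static DMAMap entries[8];
    static int n_entries;

    static const DMAMap *map_find(uint64_t iova)
    {
        for (int i = 0; i < n_entries; i++) {
            if (entries[i].iova == iova) {
                return &entries[i];
            }
        }
        return NULL;
    }

    /* By value: the caller's pointer may alias storage this call mutates. */
    static void map_remove(DMAMap map)
    {
        for (int i = 0; i < n_entries; i++) {
            if (entries[i].iova == map.iova) {
                entries[i] = entries[--n_entries]; /* invalidates pointers */
                return;
            }
        }
    }

    int main(void)
    {
        entries[n_entries++] = (DMAMap){ .iova = 0x1000, .size = 4096 };
        const DMAMap *found = map_find(0x1000);
        if (found) {
            map_remove(*found); /* *found is copied before removal mutates */
        }
        printf("entries left: %d\n", n_entries);
        return 0;
    }

A by-pointer remove that freed the entry would leave found dangling during
the removal's own internal lookup; the copy makes the call order irrelevant.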

From e1f9986cf77e4b2f16aca7b2523bc75bae0c4d3c Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Tue, 23 Aug 2022 20:30:36 +0200
Subject: [PATCH 21/23] vdpa: Add virtio-net mac address via CVQ at start
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 116: vdpa: Restore device state on destination
RH-Bugzilla: 2114060
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [20/21] a7920816d5faf7a0cfbb7c2731a48ddfc456b8d4 (eperezmartin/qemu-kvm)
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2114060
Upstream status: git@github.com:jasowang/qemu.git net-next
This is needed so the destination vdpa device sees the same state as
the one the guest set in the source.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit f34cd09b13855657a0d49c5ea6a1e37ba9dc2334)
---
net/vhost-vdpa.c | 40 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 40 insertions(+)
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index f09f044ec1..79ebda7de1 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -363,11 +363,51 @@ static ssize_t vhost_vdpa_net_cvq_add(VhostVDPAState *s, size_t out_len,
return vhost_svq_poll(svq);
}
+static int vhost_vdpa_net_load(NetClientState *nc)
+{
+ VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
+ const struct vhost_vdpa *v = &s->vhost_vdpa;
+ const VirtIONet *n;
+ uint64_t features;
+
+ assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
+
+ if (!v->shadow_vqs_enabled) {
+ return 0;
+ }
+
+ n = VIRTIO_NET(v->dev->vdev);
+ features = n->parent_obj.guest_features;
+ if (features & BIT_ULL(VIRTIO_NET_F_CTRL_MAC_ADDR)) {
+ const struct virtio_net_ctrl_hdr ctrl = {
+ .class = VIRTIO_NET_CTRL_MAC,
+ .cmd = VIRTIO_NET_CTRL_MAC_ADDR_SET,
+ };
+ char *cursor = s->cvq_cmd_out_buffer;
+ ssize_t dev_written;
+
+ memcpy(cursor, &ctrl, sizeof(ctrl));
+ cursor += sizeof(ctrl);
+ memcpy(cursor, n->mac, sizeof(n->mac));
+
+ dev_written = vhost_vdpa_net_cvq_add(s, sizeof(ctrl) + sizeof(n->mac),
+ sizeof(virtio_net_ctrl_ack));
+ if (unlikely(dev_written < 0)) {
+ return dev_written;
+ }
+
+ return *((virtio_net_ctrl_ack *)s->cvq_cmd_in_buffer) != VIRTIO_NET_OK;
+ }
+
+ return 0;
+}
+
static NetClientInfo net_vhost_vdpa_cvq_info = {
.type = NET_CLIENT_DRIVER_VHOST_VDPA,
.size = sizeof(VhostVDPAState),
.receive = vhost_vdpa_receive,
.start = vhost_vdpa_net_cvq_start,
+ .load = vhost_vdpa_net_load,
.stop = vhost_vdpa_net_cvq_stop,
.cleanup = vhost_vdpa_cleanup,
.has_vnet_hdr = vhost_vdpa_has_vnet_hdr,
--
2.31.1

From 896f7749c72afe988ab28ac6af77b9c53b685c03 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Tue, 23 Aug 2022 20:30:37 +0200
Subject: [PATCH 22/23] vdpa: Delete CVQ migration blocker
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 116: vdpa: Restore device state on destination
RH-Bugzilla: 2114060
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [21/21] 286f55177a132a8845c2912fb28cb4add472005a (eperezmartin/qemu-kvm)
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2114060
Upstream status: git@github.com:jasowang/qemu.git net-next
We can restore the device state in the destination via CVQ now. Remove
the migration blocker.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit fe2b0cd71cddbec4eaf6e325eaf357a4e72a469d)
---
hw/virtio/vhost-vdpa.c | 15 ---------------
include/hw/virtio/vhost-vdpa.h | 1 -
net/vhost-vdpa.c | 2 --
3 files changed, 18 deletions(-)
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 0bea1e1eb9..b61e313953 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -1031,13 +1031,6 @@ static bool vhost_vdpa_svqs_start(struct vhost_dev *dev)
return true;
}
- if (v->migration_blocker) {
- int r = migrate_add_blocker(v->migration_blocker, &err);
- if (unlikely(r < 0)) {
- return false;
- }
- }
-
for (i = 0; i < v->shadow_vqs->len; ++i) {
VirtQueue *vq = virtio_get_queue(dev->vdev, dev->vq_index + i);
VhostShadowVirtqueue *svq = g_ptr_array_index(v->shadow_vqs, i);
@@ -1080,10 +1073,6 @@ err:
vhost_svq_stop(svq);
}
- if (v->migration_blocker) {
- migrate_del_blocker(v->migration_blocker);
- }
-
return false;
}
@@ -1099,10 +1088,6 @@ static void vhost_vdpa_svqs_stop(struct vhost_dev *dev)
VhostShadowVirtqueue *svq = g_ptr_array_index(v->shadow_vqs, i);
vhost_vdpa_svq_unmap_rings(dev, svq);
}
-
- if (v->migration_blocker) {
- migrate_del_blocker(v->migration_blocker);
- }
}
static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index d10a89303e..1111d85643 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -35,7 +35,6 @@ typedef struct vhost_vdpa {
bool shadow_vqs_enabled;
/* IOVA mapping used by the Shadow Virtqueue */
VhostIOVATree *iova_tree;
- Error *migration_blocker;
GPtrArray *shadow_vqs;
const VhostShadowVirtqueueOps *shadow_vq_ops;
void *shadow_vq_ops_opaque;
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 79ebda7de1..f4f16583e4 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -555,8 +555,6 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
s->vhost_vdpa.shadow_vq_ops = &vhost_vdpa_net_svq_ops;
s->vhost_vdpa.shadow_vq_ops_opaque = s;
- error_setg(&s->vhost_vdpa.migration_blocker,
- "Migration disabled: vhost-vdpa uses CVQ.");
}
ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, queue_pair_index, nvqs);
if (ret) {
--
2.31.1

From 8e36feb4d3480b7c09d9dcbde18c9db1e8063f18 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Tue, 23 Aug 2022 20:20:06 +0200
Subject: [PATCH 08/23] vdpa: Make SVQ vring unmapping return void
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 116: vdpa: Restore device state on destination
RH-Bugzilla: 2114060
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [7/21] 3366340dc7ae65f83894f5d0da0d1e0f64713751 (eperezmartin/qemu-kvm)
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2114060
Upstream status: git@github.com:jasowang/qemu.git net-next
Nothing actually reads the return value, but an error in cleaning some
entries could cause device stop to abort, making a restart impossible.
Better to explicitly ignore the return value.
Reported-by: Lei Yang <leiyang@redhat.com>
Fixes: 34e3c94eda ("vdpa: Add custom IOTLB translations to SVQ")
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit bb5cf89ef2338ab6be946ede6821c3f61347eb1b)
---
hw/virtio/vhost-vdpa.c | 32 ++++++++++----------------------
1 file changed, 10 insertions(+), 22 deletions(-)
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index e5c264fb29..8eddf39f2a 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -882,7 +882,7 @@ static int vhost_vdpa_svq_set_fds(struct vhost_dev *dev,
/**
* Unmap a SVQ area in the device
*/
-static bool vhost_vdpa_svq_unmap_ring(struct vhost_vdpa *v,
+static void vhost_vdpa_svq_unmap_ring(struct vhost_vdpa *v,
const DMAMap *needle)
{
const DMAMap *result = vhost_iova_tree_find_iova(v->iova_tree, needle);
@@ -891,38 +891,33 @@ static bool vhost_vdpa_svq_unmap_ring(struct vhost_vdpa *v,
if (unlikely(!result)) {
error_report("Unable to find SVQ address to unmap");
- return false;
+ return;
}
size = ROUND_UP(result->size, qemu_real_host_page_size);
r = vhost_vdpa_dma_unmap(v, result->iova, size);
if (unlikely(r < 0)) {
error_report("Unable to unmap SVQ vring: %s (%d)", g_strerror(-r), -r);
- return false;
+ return;
}
vhost_iova_tree_remove(v->iova_tree, *result);
- return r == 0;
}
-static bool vhost_vdpa_svq_unmap_rings(struct vhost_dev *dev,
+static void vhost_vdpa_svq_unmap_rings(struct vhost_dev *dev,
const VhostShadowVirtqueue *svq)
{
DMAMap needle = {};
struct vhost_vdpa *v = dev->opaque;
struct vhost_vring_addr svq_addr;
- bool ok;
vhost_svq_get_vring_addr(svq, &svq_addr);
needle.translated_addr = svq_addr.desc_user_addr;
- ok = vhost_vdpa_svq_unmap_ring(v, &needle);
- if (unlikely(!ok)) {
- return false;
- }
+ vhost_vdpa_svq_unmap_ring(v, &needle);
needle.translated_addr = svq_addr.used_user_addr;
- return vhost_vdpa_svq_unmap_ring(v, &needle);
+ vhost_vdpa_svq_unmap_ring(v, &needle);
}
/**
@@ -1093,26 +1088,22 @@ err:
return false;
}
-static bool vhost_vdpa_svqs_stop(struct vhost_dev *dev)
+static void vhost_vdpa_svqs_stop(struct vhost_dev *dev)
{
struct vhost_vdpa *v = dev->opaque;
if (!v->shadow_vqs) {
- return true;
+ return;
}
for (unsigned i = 0; i < v->shadow_vqs->len; ++i) {
VhostShadowVirtqueue *svq = g_ptr_array_index(v->shadow_vqs, i);
- bool ok = vhost_vdpa_svq_unmap_rings(dev, svq);
- if (unlikely(!ok)) {
- return false;
- }
+ vhost_vdpa_svq_unmap_rings(dev, svq);
}
if (v->migration_blocker) {
migrate_del_blocker(v->migration_blocker);
}
- return true;
}
static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
@@ -1129,10 +1120,7 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
}
vhost_vdpa_set_vring_ready(dev);
} else {
- ok = vhost_vdpa_svqs_stop(dev);
- if (unlikely(!ok)) {
- return -1;
- }
+ vhost_vdpa_svqs_stop(dev);
vhost_vdpa_host_notifiers_uninit(dev, dev->nvqs);
}
--
2.31.1

From 70c72316c26e95cd18b4d46b83e78ba3a148212c Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Tue, 23 Aug 2022 20:30:33 +0200
Subject: [PATCH 18/23] vdpa: Move command buffers map to start of net device
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 116: vdpa: Restore device state on destination
RH-Bugzilla: 2114060
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [17/21] 7a9824fa618f5c2904648b50e3078474cd3987aa (eperezmartin/qemu-kvm)
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2114060
Upstream status: git@github.com:jasowang/qemu.git net-next
As this series will reuse these buffers to restore the device state at
the end of a migration (or at device start), allocate and map them only
once at device start so their map and unmap are not duplicated.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit d7d73dec14cebcebd8de774424795aeb821236c1)
---
net/vhost-vdpa.c | 123 ++++++++++++++++++++++-------------------------
1 file changed, 58 insertions(+), 65 deletions(-)
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 03e4cf1abc..17626feb8d 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -263,29 +263,20 @@ static size_t vhost_vdpa_net_cvq_cmd_page_len(void)
return ROUND_UP(vhost_vdpa_net_cvq_cmd_len(), qemu_real_host_page_size);
}
-/** Copy and map a guest buffer. */
-static bool vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v,
- const struct iovec *out_data,
- size_t out_num, size_t data_len, void *buf,
- size_t *written, bool write)
+/** Map CVQ buffer. */
+static int vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v, void *buf, size_t size,
+ bool write)
{
DMAMap map = {};
int r;
- if (unlikely(!data_len)) {
- qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid legnth of %s buffer\n",
- __func__, write ? "in" : "out");
- return false;
- }
-
- *written = iov_to_buf(out_data, out_num, 0, buf, data_len);
map.translated_addr = (hwaddr)(uintptr_t)buf;
- map.size = vhost_vdpa_net_cvq_cmd_page_len() - 1;
+ map.size = size - 1;
map.perm = write ? IOMMU_RW : IOMMU_RO,
r = vhost_iova_tree_map_alloc(v->iova_tree, &map);
if (unlikely(r != IOVA_OK)) {
error_report("Cannot map injected element");
- return false;
+ return r;
}
r = vhost_vdpa_dma_map(v, map.iova, vhost_vdpa_net_cvq_cmd_page_len(), buf,
@@ -294,50 +285,58 @@ static bool vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v,
goto dma_map_err;
}
- return true;
+ return 0;
dma_map_err:
vhost_iova_tree_remove(v->iova_tree, map);
- return false;
+ return r;
}
-/**
- * Copy the guest element into a dedicated buffer suitable to be sent to NIC
- *
- * @iov: [0] is the out buffer, [1] is the in one
- */
-static bool vhost_vdpa_net_cvq_map_elem(VhostVDPAState *s,
- VirtQueueElement *elem,
- struct iovec *iov)
+static int vhost_vdpa_net_cvq_start(NetClientState *nc)
{
- size_t in_copied;
- bool ok;
+ VhostVDPAState *s;
+ int r;
- iov[0].iov_base = s->cvq_cmd_out_buffer;
- ok = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, elem->out_sg, elem->out_num,
- vhost_vdpa_net_cvq_cmd_len(), iov[0].iov_base,
- &iov[0].iov_len, false);
- if (unlikely(!ok)) {
- return false;
+ assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
+
+ s = DO_UPCAST(VhostVDPAState, nc, nc);
+ if (!s->vhost_vdpa.shadow_vqs_enabled) {
+ return 0;
}
- iov[1].iov_base = s->cvq_cmd_in_buffer;
- ok = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, NULL, 0,
- sizeof(virtio_net_ctrl_ack), iov[1].iov_base,
- &in_copied, true);
- if (unlikely(!ok)) {
+ r = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer,
+ vhost_vdpa_net_cvq_cmd_page_len(), false);
+ if (unlikely(r < 0)) {
+ return r;
+ }
+
+ r = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, s->cvq_cmd_in_buffer,
+ vhost_vdpa_net_cvq_cmd_page_len(), true);
+ if (unlikely(r < 0)) {
vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer);
- return false;
}
- iov[1].iov_len = sizeof(virtio_net_ctrl_ack);
- return true;
+ return r;
+}
+
+static void vhost_vdpa_net_cvq_stop(NetClientState *nc)
+{
+ VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
+
+ assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
+
+ if (s->vhost_vdpa.shadow_vqs_enabled) {
+ vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer);
+ vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_in_buffer);
+ }
}
static NetClientInfo net_vhost_vdpa_cvq_info = {
.type = NET_CLIENT_DRIVER_VHOST_VDPA,
.size = sizeof(VhostVDPAState),
.receive = vhost_vdpa_receive,
+ .start = vhost_vdpa_net_cvq_start,
+ .stop = vhost_vdpa_net_cvq_stop,
.cleanup = vhost_vdpa_cleanup,
.has_vnet_hdr = vhost_vdpa_has_vnet_hdr,
.has_ufo = vhost_vdpa_has_ufo,
@@ -348,19 +347,17 @@ static NetClientInfo net_vhost_vdpa_cvq_info = {
* Do not forward commands not supported by SVQ. Otherwise, the device could
* accept it and qemu would not know how to update the device model.
*/
-static bool vhost_vdpa_net_cvq_validate_cmd(const struct iovec *out,
- size_t out_num)
+static bool vhost_vdpa_net_cvq_validate_cmd(const void *out_buf, size_t len)
{
struct virtio_net_ctrl_hdr ctrl;
- size_t n;
- n = iov_to_buf(out, out_num, 0, &ctrl, sizeof(ctrl));
- if (unlikely(n < sizeof(ctrl))) {
+ if (unlikely(len < sizeof(ctrl))) {
qemu_log_mask(LOG_GUEST_ERROR,
- "%s: invalid legnth of out buffer %zu\n", __func__, n);
+ "%s: invalid legnth of out buffer %zu\n", __func__, len);
return false;
}
+ memcpy(&ctrl, out_buf, sizeof(ctrl));
switch (ctrl.class) {
case VIRTIO_NET_CTRL_MAC:
switch (ctrl.cmd) {
@@ -392,10 +389,14 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
VhostVDPAState *s = opaque;
size_t in_len, dev_written;
virtio_net_ctrl_ack status = VIRTIO_NET_ERR;
- /* out and in buffers sent to the device */
- struct iovec dev_buffers[2] = {
- { .iov_base = s->cvq_cmd_out_buffer },
- { .iov_base = s->cvq_cmd_in_buffer },
+ /* Out buffer sent to both the vdpa device and the device model */
+ struct iovec out = {
+ .iov_base = s->cvq_cmd_out_buffer,
+ };
+ /* In buffer sent to the device */
+ const struct iovec dev_in = {
+ .iov_base = s->cvq_cmd_in_buffer,
+ .iov_len = sizeof(virtio_net_ctrl_ack),
};
/* in buffer used for device model */
const struct iovec in = {
@@ -405,17 +406,15 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
int r = -EINVAL;
bool ok;
- ok = vhost_vdpa_net_cvq_map_elem(s, elem, dev_buffers);
- if (unlikely(!ok)) {
- goto out;
- }
-
- ok = vhost_vdpa_net_cvq_validate_cmd(&dev_buffers[0], 1);
+ out.iov_len = iov_to_buf(elem->out_sg, elem->out_num, 0,
+ s->cvq_cmd_out_buffer,
+ vhost_vdpa_net_cvq_cmd_len());
+ ok = vhost_vdpa_net_cvq_validate_cmd(s->cvq_cmd_out_buffer, out.iov_len);
if (unlikely(!ok)) {
goto out;
}
- r = vhost_svq_add(svq, &dev_buffers[0], 1, &dev_buffers[1], 1, elem);
+ r = vhost_svq_add(svq, &out, 1, &dev_in, 1, elem);
if (unlikely(r != 0)) {
if (unlikely(r == -ENOSPC)) {
qemu_log_mask(LOG_GUEST_ERROR, "%s: No space on device queue\n",
@@ -435,13 +434,13 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
goto out;
}
- memcpy(&status, dev_buffers[1].iov_base, sizeof(status));
+ memcpy(&status, s->cvq_cmd_in_buffer, sizeof(status));
if (status != VIRTIO_NET_OK) {
goto out;
}
status = VIRTIO_NET_ERR;
- virtio_net_handle_ctrl_iov(svq->vdev, &in, 1, dev_buffers, 1);
+ virtio_net_handle_ctrl_iov(svq->vdev, &in, 1, &out, 1);
if (status != VIRTIO_NET_OK) {
error_report("Bad CVQ processing in model");
}
@@ -454,12 +453,6 @@ out:
}
vhost_svq_push_elem(svq, elem, MIN(in_len, sizeof(status)));
g_free(elem);
- if (dev_buffers[0].iov_base) {
- vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, dev_buffers[0].iov_base);
- }
- if (dev_buffers[1].iov_base) {
- vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, dev_buffers[1].iov_base);
- }
return r;
}
--
2.31.1
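
For illustration, a reduced sketch of the map-once lifecycle this patch
introduces, with stub map/unmap helpers standing in for the real vdpa DMA
calls (everything below is hypothetical, not QEMU code): buffers are mapped
in the start callback, reused for every command, and unmapped only in the
stop callback.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    static int live_maps; /* stub bookkeeping instead of real DMA maps */

    static int cvq_map(void *buf, size_t len, bool write)
    {
        (void)buf; (void)len; (void)write;
        live_maps++;
        return 0;
    }

    static void cvq_unmap(void *buf)
    {
        (void)buf;
        live_maps--;
    }

    typedef struct {
        void *out_buf; /* command written here, once per request */
        void *in_buf;  /* device ack read from here */
        bool mapped;
    } CvqState;

    static int cvq_start(CvqState *s, size_t len)
    {
        int r = cvq_map(s->out_buf, len, false);
        if (r < 0) {
            return r;
        }
        r = cvq_map(s->in_buf, len, true);
        if (r < 0) {
            cvq_unmap(s->out_buf); /* undo the first map on failure */
            return r;
        }
        s->mapped = true;
        return 0;
    }

    static void cvq_stop(CvqState *s)
    {
        if (s->mapped) {
            cvq_unmap(s->out_buf);
            cvq_unmap(s->in_buf);
            s->mapped = false;
        }
    }

    int main(void)
    {
        static char out[64], in[64];
        CvqState s = { .out_buf = out, .in_buf = in };
        if (cvq_start(&s, sizeof(out)) == 0) {
            /* many CVQ commands can be sent here without remapping */
            cvq_stop(&s);
        }
        printf("live maps at exit: %d\n", live_maps);
        return 0;
    }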

From 51c1e9cf1612727ec4c6e795576ae8fa0c0b2d4c Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Tue, 23 Aug 2022 20:20:05 +0200
Subject: [PATCH 07/23] vdpa: Remove SVQ vring from iova_tree at shutdown
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 116: vdpa: Restore device state on destination
RH-Bugzilla: 2114060
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [6/21] f72e67b9c90103151cbf86bff53e8f14b30f0e5b (eperezmartin/qemu-kvm)
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2114060
Upstream status: git@github.com:jasowang/qemu.git net-next
Although the device will be reset before usage, the right thing to do is
to clean it up.
Reported-by: Lei Yang <leiyang@redhat.com>
Fixes: 34e3c94eda ("vdpa: Add custom IOTLB translations to SVQ")
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit 0c45fa6c420ec3a1dd9ea9c40fa11bd943bb3be9)
---
hw/virtio/vhost-vdpa.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 39aa70f52d..e5c264fb29 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -896,6 +896,12 @@ static bool vhost_vdpa_svq_unmap_ring(struct vhost_vdpa *v,
size = ROUND_UP(result->size, qemu_real_host_page_size);
r = vhost_vdpa_dma_unmap(v, result->iova, size);
+ if (unlikely(r < 0)) {
+ error_report("Unable to unmap SVQ vring: %s (%d)", g_strerror(-r), -r);
+ return false;
+ }
+
+ vhost_iova_tree_remove(v->iova_tree, *result);
return r == 0;
}
--
2.31.1

From edde0b6a805085255bccc0ccdc3b9b6f81cef37b Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Tue, 23 Aug 2022 20:20:02 +0200
Subject: [PATCH 03/23] vdpa: Skip the maps not in the iova tree
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 116: vdpa: Restore device state on destination
RH-Bugzilla: 2114060
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [2/21] 73acd16375a17cdf4c58830386541dd3a1b18bf7 (eperezmartin/qemu-kvm)
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2114060
Upstream status: git@github.com:jasowang/qemu.git net-next
The next patch will skip registering in the iova tree the DMA maps that
the vdpa device rejects. We need to account for that here, or we will
cause a SIGSEGV when accessing result.
Reported-by: Lei Yang <leiyang@redhat.com>
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit a92ca0ffee5858636432a6059eb2790df1c9c77f)
---
hw/virtio/vhost-vdpa.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 96334ab5b6..aa7765c6bc 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -287,6 +287,10 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
};
result = vhost_iova_tree_find_iova(v->iova_tree, &mem_region);
+ if (!result) {
+ /* The memory listener map wasn't mapped */
+ return;
+ }
iova = result->iova;
vhost_iova_tree_remove(v->iova_tree, result);
}
--
2.31.1

From 89a67e0ce3e4c7b9f9b2d4cfb9fc5eeebc5643ac Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Tue, 23 Aug 2022 20:20:08 +0200
Subject: [PATCH 10/23] vdpa: Use ring hwaddr at vhost_vdpa_svq_unmap_ring
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 116: vdpa: Restore device state on destination
RH-Bugzilla: 2114060
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [9/21] 4420134d7be60fa8b04dc9a56566524bf8daddd4 (eperezmartin/qemu-kvm)
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2114060
Upstream status: git@github.com:jasowang/qemu.git net-next
Reduce code duplication.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit 5a92452da95b2edfbffdd42ddc2612a7d09a5db0)
---
hw/virtio/vhost-vdpa.c | 17 ++++++++---------
1 file changed, 8 insertions(+), 9 deletions(-)
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 8eddf39f2a..0bea1e1eb9 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -882,10 +882,12 @@ static int vhost_vdpa_svq_set_fds(struct vhost_dev *dev,
/**
* Unmap a SVQ area in the device
*/
-static void vhost_vdpa_svq_unmap_ring(struct vhost_vdpa *v,
- const DMAMap *needle)
+static void vhost_vdpa_svq_unmap_ring(struct vhost_vdpa *v, hwaddr addr)
{
- const DMAMap *result = vhost_iova_tree_find_iova(v->iova_tree, needle);
+ const DMAMap needle = {
+ .translated_addr = addr,
+ };
+ const DMAMap *result = vhost_iova_tree_find_iova(v->iova_tree, &needle);
hwaddr size;
int r;
@@ -907,17 +909,14 @@ static void vhost_vdpa_svq_unmap_ring(struct vhost_vdpa *v,
static void vhost_vdpa_svq_unmap_rings(struct vhost_dev *dev,
const VhostShadowVirtqueue *svq)
{
- DMAMap needle = {};
struct vhost_vdpa *v = dev->opaque;
struct vhost_vring_addr svq_addr;
vhost_svq_get_vring_addr(svq, &svq_addr);
- needle.translated_addr = svq_addr.desc_user_addr;
- vhost_vdpa_svq_unmap_ring(v, &needle);
+ vhost_vdpa_svq_unmap_ring(v, svq_addr.desc_user_addr);
- needle.translated_addr = svq_addr.used_user_addr;
- vhost_vdpa_svq_unmap_ring(v, &needle);
+ vhost_vdpa_svq_unmap_ring(v, svq_addr.used_user_addr);
}
/**
@@ -995,7 +994,7 @@ static bool vhost_vdpa_svq_map_rings(struct vhost_dev *dev,
ok = vhost_vdpa_svq_map_ring(v, &device_region, errp);
if (unlikely(!ok)) {
error_prepend(errp, "Cannot create vq device region: ");
- vhost_vdpa_svq_unmap_ring(v, &driver_region);
+ vhost_vdpa_svq_unmap_ring(v, driver_region.translated_addr);
}
addr->used_user_addr = device_region.iova;
--
2.31.1

From f92b0ef80b4889ae0beb0b2a026ec3892d576d79 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Tue, 23 Aug 2022 20:30:32 +0200
Subject: [PATCH 17/23] vdpa: add net_vhost_vdpa_cvq_info NetClientInfo
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 116: vdpa: Restore device state on destination
RH-Bugzilla: 2114060
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [16/21] c80c9fd89e81fc389e7d02e9d764331ab9fc7a0a (eperezmartin/qemu-kvm)
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2114060
Upstream status: git@github.com:jasowang/qemu.git net-next
The next patches will add a new info callback to restore the NIC status
through CVQ. Since only the CVQ vhost device needs it, create a new
NetClientInfo for it.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit 9d379453404303069f93f9b8163ae3805bcd8c2e)
---
net/vhost-vdpa.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index ba65736f83..03e4cf1abc 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -334,6 +334,16 @@ static bool vhost_vdpa_net_cvq_map_elem(VhostVDPAState *s,
return true;
}
+static NetClientInfo net_vhost_vdpa_cvq_info = {
+ .type = NET_CLIENT_DRIVER_VHOST_VDPA,
+ .size = sizeof(VhostVDPAState),
+ .receive = vhost_vdpa_receive,
+ .cleanup = vhost_vdpa_cleanup,
+ .has_vnet_hdr = vhost_vdpa_has_vnet_hdr,
+ .has_ufo = vhost_vdpa_has_ufo,
+ .check_peer_type = vhost_vdpa_check_peer_type,
+};
+
/**
* Do not forward commands not supported by SVQ. Otherwise, the device could
* accept it and qemu would not know how to update the device model.
@@ -475,7 +485,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
nc = qemu_new_net_client(&net_vhost_vdpa_info, peer, device,
name);
} else {
- nc = qemu_new_net_control_client(&net_vhost_vdpa_info, peer,
+ nc = qemu_new_net_control_client(&net_vhost_vdpa_cvq_info, peer,
device, name);
}
snprintf(nc->info_str, sizeof(nc->info_str), TYPE_VHOST_VDPA);
--
2.31.1

From 6d16102aca24bab16c846fe6457071f4466b8e35 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Tue, 23 Aug 2022 20:20:03 +0200
Subject: [PATCH 04/23] vdpa: do not save failed dma maps in SVQ iova tree
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 116: vdpa: Restore device state on destination
RH-Bugzilla: 2114060
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [3/21] f9bea39f7fa14c5ef0f85774cbad0ca3b52c4498 (eperezmartin/qemu-kvm)
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2114060
Upstream status: git@github.com:jasowang/qemu.git net-next
If a map fails for whatever reason, it must not be saved in the tree.
Otherwise, qemu will try to unmap it in cleanup, leading to more errors.
Fixes: 34e3c94eda ("vdpa: Add custom IOTLB translations to SVQ")
Reported-by: Lei Yang <leiyang@redhat.com>
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit 6cc2ec65382fde205511ac00a324995ce6ee8f28)
---
hw/virtio/vhost-vdpa.c | 20 +++++++++++++-------
1 file changed, 13 insertions(+), 7 deletions(-)
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index aa7765c6bc..cc15b7d8ee 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -174,6 +174,7 @@ static void vhost_vdpa_listener_commit(MemoryListener *listener)
static void vhost_vdpa_listener_region_add(MemoryListener *listener,
MemoryRegionSection *section)
{
+ DMAMap mem_region = {};
struct vhost_vdpa *v = container_of(listener, struct vhost_vdpa, listener);
hwaddr iova;
Int128 llend, llsize;
@@ -210,13 +211,13 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
llsize = int128_sub(llend, int128_make64(iova));
if (v->shadow_vqs_enabled) {
- DMAMap mem_region = {
- .translated_addr = (hwaddr)(uintptr_t)vaddr,
- .size = int128_get64(llsize) - 1,
- .perm = IOMMU_ACCESS_FLAG(true, section->readonly),
- };
+ int r;
- int r = vhost_iova_tree_map_alloc(v->iova_tree, &mem_region);
+ mem_region.translated_addr = (hwaddr)(uintptr_t)vaddr,
+ mem_region.size = int128_get64(llsize) - 1,
+ mem_region.perm = IOMMU_ACCESS_FLAG(true, section->readonly),
+
+ r = vhost_iova_tree_map_alloc(v->iova_tree, &mem_region);
if (unlikely(r != IOVA_OK)) {
error_report("Can't allocate a mapping (%d)", r);
goto fail;
@@ -230,11 +231,16 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
vaddr, section->readonly);
if (ret) {
error_report("vhost vdpa map fail!");
- goto fail;
+ goto fail_map;
}
return;
+fail_map:
+ if (v->shadow_vqs_enabled) {
+ vhost_iova_tree_remove(v->iova_tree, &mem_region);
+ }
+
fail:
/*
* On the initfn path, store the first error in the container so we
--
2.31.1

From 56f4bebc591893e590481617da7cd7ecffeb166d Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Tue, 23 Aug 2022 20:30:34 +0200
Subject: [PATCH 19/23] vdpa: extract vhost_vdpa_net_cvq_add from
vhost_vdpa_net_handle_ctrl_avail
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 116: vdpa: Restore device state on destination
RH-Bugzilla: 2114060
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [18/21] 08ab71dbf050f5c2e97c622d1915f71a56c135b8 (eperezmartin/qemu-kvm)
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2114060
Upstream status: git@github.com:jasowang/qemu.git net-next
So we can reuse it to inject state messages.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
--
v7:
* Remove double free error
v6:
* Do not assume in buffer sent to the device is sizeof(virtio_net_ctrl_ack)
v5:
* Do not use an artificial !NULL VirtQueueElement
* Use only out size instead of iovec dev_buffers for these functions.
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit d9afb1f0ee4d662ed67d3bc1220b943f7e4cfa6f)
---
net/vhost-vdpa.c | 59 +++++++++++++++++++++++++++++++-----------------
1 file changed, 38 insertions(+), 21 deletions(-)
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 17626feb8d..f09f044ec1 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -331,6 +331,38 @@ static void vhost_vdpa_net_cvq_stop(NetClientState *nc)
}
}
+static ssize_t vhost_vdpa_net_cvq_add(VhostVDPAState *s, size_t out_len,
+ size_t in_len)
+{
+ /* Buffers for the device */
+ const struct iovec out = {
+ .iov_base = s->cvq_cmd_out_buffer,
+ .iov_len = out_len,
+ };
+ const struct iovec in = {
+ .iov_base = s->cvq_cmd_in_buffer,
+ .iov_len = sizeof(virtio_net_ctrl_ack),
+ };
+ VhostShadowVirtqueue *svq = g_ptr_array_index(s->vhost_vdpa.shadow_vqs, 0);
+ int r;
+
+ r = vhost_svq_add(svq, &out, 1, &in, 1, NULL);
+ if (unlikely(r != 0)) {
+ if (unlikely(r == -ENOSPC)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: No space on device queue\n",
+ __func__);
+ }
+ return r;
+ }
+
+ /*
+ * We can poll here since we've had BQL from the time we sent the
+ * descriptor. Also, we need to take the answer before SVQ pulls by itself,
+ * when BQL is released
+ */
+ return vhost_svq_poll(svq);
+}
+
static NetClientInfo net_vhost_vdpa_cvq_info = {
.type = NET_CLIENT_DRIVER_VHOST_VDPA,
.size = sizeof(VhostVDPAState),
@@ -387,23 +419,18 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
void *opaque)
{
VhostVDPAState *s = opaque;
- size_t in_len, dev_written;
+ size_t in_len;
virtio_net_ctrl_ack status = VIRTIO_NET_ERR;
/* Out buffer sent to both the vdpa device and the device model */
struct iovec out = {
.iov_base = s->cvq_cmd_out_buffer,
};
- /* In buffer sent to the device */
- const struct iovec dev_in = {
- .iov_base = s->cvq_cmd_in_buffer,
- .iov_len = sizeof(virtio_net_ctrl_ack),
- };
/* in buffer used for device model */
const struct iovec in = {
.iov_base = &status,
.iov_len = sizeof(status),
};
- int r = -EINVAL;
+ ssize_t dev_written = -EINVAL;
bool ok;
out.iov_len = iov_to_buf(elem->out_sg, elem->out_num, 0,
@@ -414,21 +441,11 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
goto out;
}
- r = vhost_svq_add(svq, &out, 1, &dev_in, 1, elem);
- if (unlikely(r != 0)) {
- if (unlikely(r == -ENOSPC)) {
- qemu_log_mask(LOG_GUEST_ERROR, "%s: No space on device queue\n",
- __func__);
- }
+ dev_written = vhost_vdpa_net_cvq_add(s, out.iov_len, sizeof(status));
+ if (unlikely(dev_written < 0)) {
goto out;
}
- /*
- * We can poll here since we've had BQL from the time we sent the
- * descriptor. Also, we need to take the answer before SVQ pulls by itself,
- * when BQL is released
- */
- dev_written = vhost_svq_poll(svq);
if (unlikely(dev_written < sizeof(status))) {
error_report("Insufficient written data (%zu)", dev_written);
goto out;
@@ -436,7 +453,7 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
memcpy(&status, s->cvq_cmd_in_buffer, sizeof(status));
if (status != VIRTIO_NET_OK) {
- goto out;
+ return VIRTIO_NET_ERR;
}
status = VIRTIO_NET_ERR;
@@ -453,7 +470,7 @@ out:
}
vhost_svq_push_elem(svq, elem, MIN(in_len, sizeof(status)));
g_free(elem);
- return r;
+ return dev_written < 0 ? dev_written : 0;
}
static const VhostShadowVirtqueueOps vhost_vdpa_net_svq_ops = {
--
2.31.1

From 6cde15c70c86819033337771eb522e94e3ea9e34 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Tue, 23 Aug 2022 20:20:07 +0200
Subject: [PATCH 09/23] vhost: Always store new kick fd on
vhost_svq_set_svq_kick_fd
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 116: vdpa: Restore device state on destination
RH-Bugzilla: 2114060
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [8/21] a09b8851c39d7cea67414560f6d322e988b9d59a (eperezmartin/qemu-kvm)
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2114060
Upstream status: git@github.com:jasowang/qemu.git net-next
Because of this, we can unbind a file descriptor twice if
vhost_svq_set_svq_kick_fd is called twice. Since the fd comes from vhost
and not from SVQ, it could be something other than the guest's vhost
notifier.
Likewise, the same can happen if a guest starts and stops the device
multiple times.
Reported-by: Lei Yang <leiyang@redhat.com>
Fixes: dff4426fa6 ("vhost: Add Shadow VirtQueue kick forwarding capabilities")
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit 6867f29c1425add7e0e8d1d8d58cc0ffbb8df0e4)
---
hw/virtio/vhost-shadow-virtqueue.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index e53aac45f6..f420311b89 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -602,13 +602,13 @@ void vhost_svq_set_svq_kick_fd(VhostShadowVirtqueue *svq, int svq_kick_fd)
event_notifier_set_handler(svq_kick, NULL);
}
+ event_notifier_init_fd(svq_kick, svq_kick_fd);
/*
* event_notifier_set_handler already checks for guest's notifications if
* they arrive at the new file descriptor in the switch, so there is no
* need to explicitly check for them.
*/
if (poll_start) {
- event_notifier_init_fd(svq_kick, svq_kick_fd);
event_notifier_set(svq_kick);
event_notifier_set_handler(svq_kick, vhost_handle_guest_kick_notifier);
}
@@ -655,7 +655,7 @@ void vhost_svq_start(VhostShadowVirtqueue *svq, VirtIODevice *vdev,
*/
void vhost_svq_stop(VhostShadowVirtqueue *svq)
{
- event_notifier_set_handler(&svq->svq_kick, NULL);
+ vhost_svq_set_svq_kick_fd(svq, VHOST_FILE_UNBIND);
g_autofree VirtQueueElement *next_avail_elem = NULL;
if (!svq->vq) {
--
2.31.1

From 773d1bb4e9ea9ca704372e52569955937f91f15c Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Tue, 23 Aug 2022 20:30:28 +0200
Subject: [PATCH 13/23] vhost: Delete useless read memory barrier
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 116: vdpa: Restore device state on destination
RH-Bugzilla: 2114060
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [12/21] 0e238fe934b1fc2c7e10b6f693468bc25ea3243f (eperezmartin/qemu-kvm)
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2114060
Upstream status: git@github.com:jasowang/qemu.git net-next
As discussed in a previous series [1], this memory barrier is useless
given the atomic read of the used idx at vhost_svq_more_used. Delete it.
[1] https://lists.nongnu.org/archive/html/qemu-devel/2022-07/msg02616.html
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit cdfb1612ba0f9b76367c96ce26ba94fedc7a0e61)
---
hw/virtio/vhost-shadow-virtqueue.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index 7792f3db1d..d36afbc547 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -509,9 +509,6 @@ size_t vhost_svq_poll(VhostShadowVirtqueue *svq)
if (unlikely(g_get_monotonic_time() - start_us > 10e6)) {
return 0;
}
-
- /* Make sure we read new used_idx */
- smp_rmb();
} while (true);
}
--
2.31.1
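
For illustration of the reasoning, a small sketch using C11 atomics rather
than QEMU's barrier macros (the names below are hypothetical): when each
loop iteration re-reads the used index with an acquire load, that load
already orders the subsequent ring reads, so an extra read barrier inside
the loop is redundant.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Used index the device advances; shared with the polling thread. */
    static _Atomic uint16_t used_idx;

    static bool more_used(uint16_t last_seen)
    {
        /*
         * The acquire load fetches a fresh value and orders the ring reads
         * that follow it, so no separate smp_rmb() is needed in the loop.
         */
        return atomic_load_explicit(&used_idx, memory_order_acquire)
               != last_seen;
    }

    int main(void)
    {
        atomic_store_explicit(&used_idx, 1, memory_order_release);
        while (!more_used(0)) {
            /* spin; the real code also bounds the wait time */
        }
        printf("device produced new used entries\n");
        return 0;
    }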

From 2f134d800a7ac521a637a0da2116b2603b12c8c0 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Tue, 23 Aug 2022 20:30:29 +0200
Subject: [PATCH 14/23] vhost: Do not depend on !NULL VirtQueueElement on
vhost_svq_flush
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 116: vdpa: Restore device state on destination
RH-Bugzilla: 2114060
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [13/21] 93ec7baa2a29031db25d86b7dc1a949388623370 (eperezmartin/qemu-kvm)
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2114060
Upstream status: git@github.com:jasowang/qemu.git net-next
Since QEMU will be able to inject new elements on CVQ to restore the
state, we must not depend on a VirtQueueElement to know whether a new
element has been used by the device. Instead, check for new elements in
vhost_svq_flush using only the used idx.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit 7599f71c11c08b90f173c35ded1aaa1fdca86f1b)
---
hw/virtio/vhost-shadow-virtqueue.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index d36afbc547..c0e3c92e96 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -499,17 +499,20 @@ static void vhost_svq_flush(VhostShadowVirtqueue *svq,
size_t vhost_svq_poll(VhostShadowVirtqueue *svq)
{
int64_t start_us = g_get_monotonic_time();
+ uint32_t len;
+
do {
- uint32_t len;
- VirtQueueElement *elem = vhost_svq_get_buf(svq, &len);
- if (elem) {
- return len;
+ if (vhost_svq_more_used(svq)) {
+ break;
}
if (unlikely(g_get_monotonic_time() - start_us > 10e6)) {
return 0;
}
} while (true);
+
+ vhost_svq_get_buf(svq, &len);
+ return len;
}
/**
--
2.31.1

From 3f2ba7cce6b272a8b5c8953e8923e799e4aa7b88 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Mon, 18 Jul 2022 14:05:45 +0200
Subject: [PATCH 02/23] vhost: Get vring base from vq, not svq
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 116: vdpa: Restore device state on destination
RH-Bugzilla: 2114060
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [1/21] e7e0294bbc98f69ccdbc4af4715857e77b017f80 (eperezmartin/qemu-kvm)
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2114060
Upstream status: Merged
The SVQ vring used idx usually matches the guest-visible one, as long
as every guest buffer (GPA) maps to exactly one buffer within qemu's
VA. However, as we can see in virtqueue_map_desc, a single guest buffer
could map to many buffers in the SVQ vring.
It is also a mistake to rewind them at the source of migration. Since
VirtQueue is able to migrate the inflight descriptors, it is the
responsibility of the destination to perform the rewind, in case it
cannot report the inflight descriptors to the device.
This makes it easier to migrate between backends or to recover them in
vhost devices that support setting in-flight descriptors.
Fixes: 6d0b22266633 ("vdpa: Adapt vhost_vdpa_get_vring_base to SVQ")
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit 2fdac348fd3d243bb964937236af3cc27ae7af2b)
---
hw/virtio/vhost-vdpa.c | 24 ++++++++++++------------
1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 03dc6014b0..96334ab5b6 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -1177,7 +1177,18 @@ static int vhost_vdpa_set_vring_base(struct vhost_dev *dev,
struct vhost_vring_state *ring)
{
struct vhost_vdpa *v = dev->opaque;
+ VirtQueue *vq = virtio_get_queue(dev->vdev, ring->index);
+ /*
+ * vhost-vdpa devices do not support in-flight requests. Set all of them
+ * as available.
+ *
+ * TODO: This is ok for networking, but other kinds of devices might
+ * have problems with these retransmissions.
+ */
+ while (virtqueue_rewind(vq, 1)) {
+ continue;
+ }
if (v->shadow_vqs_enabled) {
/*
* Device vring base was set at device start. SVQ base is handled by
@@ -1193,21 +1204,10 @@ static int vhost_vdpa_get_vring_base(struct vhost_dev *dev,
struct vhost_vring_state *ring)
{
struct vhost_vdpa *v = dev->opaque;
- int vdpa_idx = ring->index - dev->vq_index;
int ret;
if (v->shadow_vqs_enabled) {
- VhostShadowVirtqueue *svq = g_ptr_array_index(v->shadow_vqs, vdpa_idx);
-
- /*
- * Setting base as last used idx, so destination will see as available
- * all the entries that the device did not use, including the in-flight
- * processing ones.
- *
- * TODO: This is ok for networking, but other kinds of devices might
- * have problems with these retransmissions.
- */
- ring->num = svq->last_used_idx;
+ ring->num = virtio_queue_get_last_avail_idx(dev->vdev, ring->index);
return 0;
}
--
2.31.1
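
A standalone sketch (not QEMU code; all names are invented) of the
"rewind in-flight descriptors at the destination" idea described above:
if the backend cannot resume in-flight requests, move last_avail_idx
back so those buffers are offered to the device again as available.

#include <stdint.h>
#include <stdio.h>

struct toy_vq {
    uint16_t last_avail_idx; /* next buffer the device would see */
    uint16_t used_idx;       /* buffers the device has completed */
};

/* Number of buffers handed to the device but not yet completed. */
static uint16_t toy_inflight(const struct toy_vq *vq)
{
    return vq->last_avail_idx - vq->used_idx;
}

/* Re-expose every in-flight buffer as available (a retransmission). */
static void toy_rewind_all(struct toy_vq *vq)
{
    while (toy_inflight(vq) > 0) {
        vq->last_avail_idx--;
    }
}

int main(void)
{
    struct toy_vq vq = { .last_avail_idx = 10, .used_idx = 7 };
    toy_rewind_all(&vq); /* the 3 in-flight buffers become available */
    printf("last_avail_idx = %u\n", vq.last_avail_idx);
    return 0;
}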


@@ -0,0 +1,80 @@
From 45305ab202fa2191962152e5a501a9a13e31a0b2 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Tue, 23 Aug 2022 20:30:26 +0200
Subject: [PATCH 11/23] vhost: stop transfer elem ownership in
vhost_handle_guest_kick
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 116: vdpa: Restore device state on destination
RH-Bugzilla: 2114060
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [10/21] 697a5c0ad59efe27abf447f7965091993bc39756 (eperezmartin/qemu-kvm)
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2114060
Upstream status: git@github.com:jasowang/qemu.git net-next
It was easier to let vhost_svq_add handle the element's memory. Now
that we will allow qemu to add elements to an SVQ without the guest's
knowledge, it's better to handle it in the caller.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit eb42df8bb2c92a7313343d97409cd99ccba25b25)
---
hw/virtio/vhost-shadow-virtqueue.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index f420311b89..2ae47d90a1 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -233,9 +233,6 @@ static void vhost_svq_kick(VhostShadowVirtqueue *svq)
/**
* Add an element to a SVQ.
*
- * The caller must check that there is enough slots for the new element. It
- * takes ownership of the element: In case of failure not ENOSPC, it is free.
- *
* Return -EINVAL if element is invalid, -ENOSPC if dev queue is full
*/
int vhost_svq_add(VhostShadowVirtqueue *svq, const struct iovec *out_sg,
@@ -252,7 +249,6 @@ int vhost_svq_add(VhostShadowVirtqueue *svq, const struct iovec *out_sg,
ok = vhost_svq_add_split(svq, out_sg, out_num, in_sg, in_num, &qemu_head);
if (unlikely(!ok)) {
- g_free(elem);
return -EINVAL;
}
@@ -293,7 +289,7 @@ static void vhost_handle_guest_kick(VhostShadowVirtqueue *svq)
virtio_queue_set_notification(svq->vq, false);
while (true) {
- VirtQueueElement *elem;
+ g_autofree VirtQueueElement *elem;
int r;
if (svq->next_guest_avail_elem) {
@@ -324,12 +320,14 @@ static void vhost_handle_guest_kick(VhostShadowVirtqueue *svq)
* queue the current guest descriptor and ignore kicks
* until some elements are used.
*/
- svq->next_guest_avail_elem = elem;
+ svq->next_guest_avail_elem = g_steal_pointer(&elem);
}
/* VQ is full or broken, just return and ignore kicks */
return;
}
+ /* elem belongs to SVQ or external caller now */
+ elem = NULL;
}
virtio_queue_set_notification(svq->vq, true);
--
2.31.1
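
For context, a minimal sketch (assuming GLib; all names outside GLib
are invented) of the ownership pattern this patch adopts: g_autofree
frees the element on every exit path unless ownership is explicitly
handed off with g_steal_pointer() or by nulling the pointer, exactly as
the hunk above does for next_guest_avail_elem.

#include <glib.h>

typedef struct {
    int payload;
} Elem;

static Elem *queued; /* stand-in for svq->next_guest_avail_elem */

/* On success the consumer takes ownership and frees the element. */
static gboolean try_consume(Elem *elem)
{
    if (elem->payload % 2 != 0) {
        return FALSE; /* pretend odd payloads don't fit in the queue */
    }
    g_free(elem);
    return TRUE;
}

static void handle(int payload)
{
    g_autofree Elem *elem = g_new0(Elem, 1);
    elem->payload = payload;

    if (!try_consume(elem)) {
        /* Queue is "full": keep the element, transferring ownership. */
        queued = g_steal_pointer(&elem);
        return; /* elem is NULL here, so g_autofree frees nothing */
    }
    /* Consumed: stop g_autofree from double-freeing it. */
    elem = NULL;
}

int main(void)
{
    handle(2); /* consumed and freed by try_consume() */
    handle(3); /* queued; we still own it */
    g_free(queued);
    return 0;
}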


@@ -0,0 +1,55 @@
From 78b7d9af26ae802b3ca0d7b794b366ab4d515647 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Tue, 23 Aug 2022 20:30:27 +0200
Subject: [PATCH 12/23] vhost: use SVQ element ndescs instead of opaque data
for desc validation
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 116: vdpa: Restore device state on destination
RH-Bugzilla: 2114060
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [11/21] 536ba65ff7241c4dc66362294ba8de4354260d6f (eperezmartin/qemu-kvm)
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2114060
Upstream status: git@github.com:jasowang/qemu.git net-next
Since we're going to allow SVQ to add elements without the guest's
knowledge and without its own VirtQueueElement, it's easier to check
whether an element is a valid head by checking something other than the
VirtQueueElement itself.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit 70e0841722deb363b53cdcd465af12a0d1461b60)
---
hw/virtio/vhost-shadow-virtqueue.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index 2ae47d90a1..7792f3db1d 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -414,7 +414,7 @@ static VirtQueueElement *vhost_svq_get_buf(VhostShadowVirtqueue *svq,
return NULL;
}
- if (unlikely(!svq->desc_state[used_elem.id].elem)) {
+ if (unlikely(!svq->desc_state[used_elem.id].ndescs)) {
qemu_log_mask(LOG_GUEST_ERROR,
"Device %s says index %u is used, but it was not available",
svq->vdev->name, used_elem.id);
@@ -422,6 +422,7 @@ static VirtQueueElement *vhost_svq_get_buf(VhostShadowVirtqueue *svq,
}
num = svq->desc_state[used_elem.id].ndescs;
+ svq->desc_state[used_elem.id].ndescs = 0;
last_used_chain = vhost_svq_last_desc_of_chain(svq, num, used_elem.id);
svq->desc_next[last_used_chain] = svq->free_head;
svq->free_head = used_elem.id;
--
2.31.1


@@ -0,0 +1,73 @@
From 6a6999311742b6dccdfce09f30742a63d72d1bd7 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Tue, 23 Aug 2022 20:30:30 +0200
Subject: [PATCH 15/23] vhost_net: Add NetClientInfo start callback
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 116: vdpa: Restore device state on destination
RH-Bugzilla: 2114060
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [14/21] df6a96ae3aec02ecae793bdbd8e9c2fcfac7871a (eperezmartin/qemu-kvm)
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2114060
Upstream status: git@github.com:jasowang/qemu.git net-next
This is used by the backend to perform actions before the device is
started.
In particular, vdpa net uses it to map the CVQ buffers to the device,
so it can send control commands using them.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit 80bda0e674fd0b439ac627ab7ecdbd4a1b46d525)
---
hw/net/vhost_net.c | 7 +++++++
include/net/net.h | 2 ++
2 files changed, 9 insertions(+)
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index d6d7c51f62..1005f9d8e6 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -244,6 +244,13 @@ static int vhost_net_start_one(struct vhost_net *net,
struct vhost_vring_file file = { };
int r;
+ if (net->nc->info->start) {
+ r = net->nc->info->start(net->nc);
+ if (r < 0) {
+ return r;
+ }
+ }
+
r = vhost_dev_enable_notifiers(&net->dev, dev);
if (r < 0) {
goto fail_notifiers;
diff --git a/include/net/net.h b/include/net/net.h
index 523136c7ac..ad9e80083a 100644
--- a/include/net/net.h
+++ b/include/net/net.h
@@ -44,6 +44,7 @@ typedef struct NICConf {
typedef void (NetPoll)(NetClientState *, bool enable);
typedef bool (NetCanReceive)(NetClientState *);
+typedef int (NetStart)(NetClientState *);
typedef ssize_t (NetReceive)(NetClientState *, const uint8_t *, size_t);
typedef ssize_t (NetReceiveIOV)(NetClientState *, const struct iovec *, int);
typedef void (NetCleanup) (NetClientState *);
@@ -71,6 +72,7 @@ typedef struct NetClientInfo {
NetReceive *receive_raw;
NetReceiveIOV *receive_iov;
NetCanReceive *can_receive;
+ NetStart *start;
NetCleanup *cleanup;
LinkStatusChanged *link_status_changed;
QueryRxFilter *query_rx_filter;
--
2.31.1


@@ -0,0 +1,68 @@
From effd0ed379deb43bb850f1aeff24fa85935d7f52 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Tue, 23 Aug 2022 20:30:31 +0200
Subject: [PATCH 16/23] vhost_net: Add NetClientInfo stop callback
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 116: vdpa: Restore device state on destination
RH-Bugzilla: 2114060
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [15/21] 9f8a3e9bfb0d21fa0479f54a7a17cb738aa46359 (eperezmartin/qemu-kvm)
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2114060
Upstream status: git@github.com:jasowang/qemu.git net-next
Used by the backend to perform actions after the device is stopped.
In particular, vdpa net uses it to unmap the CVQ buffers from the
device, cleaning up the actions performed in prepare().
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit c6544e2331d721627fa7356da3592bcb46340f1b)
---
hw/net/vhost_net.c | 3 +++
include/net/net.h | 2 ++
2 files changed, 5 insertions(+)
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 1005f9d8e6..275ece5324 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -320,6 +320,9 @@ static void vhost_net_stop_one(struct vhost_net *net,
net->nc->info->poll(net->nc, true);
}
vhost_dev_stop(&net->dev, dev);
+ if (net->nc->info->stop) {
+ net->nc->info->stop(net->nc);
+ }
vhost_dev_disable_notifiers(&net->dev, dev);
}
diff --git a/include/net/net.h b/include/net/net.h
index ad9e80083a..476ad45b9a 100644
--- a/include/net/net.h
+++ b/include/net/net.h
@@ -45,6 +45,7 @@ typedef struct NICConf {
typedef void (NetPoll)(NetClientState *, bool enable);
typedef bool (NetCanReceive)(NetClientState *);
typedef int (NetStart)(NetClientState *);
+typedef void (NetStop)(NetClientState *);
typedef ssize_t (NetReceive)(NetClientState *, const uint8_t *, size_t);
typedef ssize_t (NetReceiveIOV)(NetClientState *, const struct iovec *, int);
typedef void (NetCleanup) (NetClientState *);
@@ -73,6 +74,7 @@ typedef struct NetClientInfo {
NetReceiveIOV *receive_iov;
NetCanReceive *can_receive;
NetStart *start;
+ NetStop *stop;
NetCleanup *cleanup;
LinkStatusChanged *link_status_changed;
QueryRxFilter *query_rx_filter;
--
2.31.1


@@ -0,0 +1,73 @@
From 6a5c236b95ce475c556ccd92c2135ad48474e8fb Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Tue, 23 Aug 2022 20:30:35 +0200
Subject: [PATCH 20/23] vhost_net: add NetClientState->load() callback
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 116: vdpa: Restore device state on destination
RH-Bugzilla: 2114060
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [19/21] 439b4133a757b2f1c5f4a1441eca25329896491a (eperezmartin/qemu-kvm)
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2114060
Upstream status: git@github.com:jasowang/qemu.git net-next
It allows per-net-client operations right after the device's
successful start; in particular, loading the device status.
Vhost-vdpa net will use it to add the CVQ buffers that restore the
device status.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit 302f3d20e68a8a120d431f7ff7cb02a75917f54c)
---
hw/net/vhost_net.c | 7 +++++++
include/net/net.h | 2 ++
2 files changed, 9 insertions(+)
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 275ece5324..ea3a8be1c9 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -281,6 +281,13 @@ static int vhost_net_start_one(struct vhost_net *net,
}
}
}
+
+ if (net->nc->info->load) {
+ r = net->nc->info->load(net->nc);
+ if (r < 0) {
+ goto fail;
+ }
+ }
return 0;
fail:
file.fd = -1;
diff --git a/include/net/net.h b/include/net/net.h
index 476ad45b9a..81d0b21def 100644
--- a/include/net/net.h
+++ b/include/net/net.h
@@ -45,6 +45,7 @@ typedef struct NICConf {
typedef void (NetPoll)(NetClientState *, bool enable);
typedef bool (NetCanReceive)(NetClientState *);
typedef int (NetStart)(NetClientState *);
+typedef int (NetLoad)(NetClientState *);
typedef void (NetStop)(NetClientState *);
typedef ssize_t (NetReceive)(NetClientState *, const uint8_t *, size_t);
typedef ssize_t (NetReceiveIOV)(NetClientState *, const struct iovec *, int);
@@ -74,6 +75,7 @@ typedef struct NetClientInfo {
NetReceiveIOV *receive_iov;
NetCanReceive *can_receive;
NetStart *start;
+ NetLoad *load;
NetStop *stop;
NetCleanup *cleanup;
LinkStatusChanged *link_status_changed;
--
2.31.1
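
Taken together, this patch and the two start/stop patches above give
the backend a three-step lifecycle. Below is a standalone toy model
(not QEMU code; all names are invented) of how a vhost_net-like caller
drives those callbacks: start before the device starts, load right
after a successful start, stop after the device is stopped.

#include <stdio.h>

struct toy_client;

struct toy_client_info {
    int (*start)(struct toy_client *nc); /* e.g. map CVQ buffers */
    int (*load)(struct toy_client *nc);  /* e.g. restore state via CVQ */
    void (*stop)(struct toy_client *nc); /* e.g. unmap CVQ buffers */
};

struct toy_client {
    const struct toy_client_info *info;
    const char *name;
};

static int toy_net_start_one(struct toy_client *nc)
{
    if (nc->info->start) {
        int r = nc->info->start(nc);
        if (r < 0) {
            return r;
        }
    }
    /* ... the vhost device itself would be started here ... */
    if (nc->info->load) {
        int r = nc->info->load(nc);
        if (r < 0) {
            /* unwind: the device would be stopped here, then stop() */
            if (nc->info->stop) {
                nc->info->stop(nc);
            }
            return r;
        }
    }
    return 0;
}

static int vdpa_start(struct toy_client *nc)
{
    printf("%s: map CVQ buffers\n", nc->name);
    return 0;
}

static int vdpa_load(struct toy_client *nc)
{
    printf("%s: restore MAC address via CVQ\n", nc->name);
    return 0;
}

static void vdpa_stop(struct toy_client *nc)
{
    printf("%s: unmap CVQ buffers\n", nc->name);
}

int main(void)
{
    const struct toy_client_info info = { vdpa_start, vdpa_load, vdpa_stop };
    struct toy_client nc = { &info, "toy-vdpa" };
    return toy_net_start_one(&nc) < 0;
}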


@@ -0,0 +1,117 @@
From cbcab5ed1686fddeb2c6adb3a3f6ed0678a36e71 Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Mon, 8 Aug 2022 12:21:34 -0400
Subject: [PATCH 23/23] virtio-scsi: fix race in virtio_scsi_dataplane_start()
RH-Author: Stefan Hajnoczi <stefanha@redhat.com>
RH-MergeRequest: 211: virtio-scsi: fix race in virtio_scsi_dataplane_start() (RHEL src-git)
RH-Commit: [1/1] 2d4964d8863e259326a73fb918fa2f5f63b4a60a
RH-Bugzilla: 2099541
RH-Acked-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Acked-by: Hanna Reitz <hreitz@redhat.com>
RH-Acked-by: Paolo Bonzini <pbonzini@redhat.com>
As soon as virtio_scsi_data_plane_start() attaches host notifiers, the
IOThread may start virtqueue processing. There is a race between
IOThread virtqueue processing and virtio_scsi_data_plane_start() because
it only assigns s->dataplane_started after attaching host notifiers.
When a virtqueue handler function in the IOThread calls
virtio_scsi_defer_to_dataplane() it may see !s->dataplane_started and
attempt to start dataplane even though we're already in the IOThread:
#0 0x00007f67b360857c __pthread_kill_implementation (libc.so.6 + 0xa257c)
#1 0x00007f67b35bbd56 raise (libc.so.6 + 0x55d56)
#2 0x00007f67b358e833 abort (libc.so.6 + 0x28833)
#3 0x00007f67b358e75b __assert_fail_base.cold (libc.so.6 + 0x2875b)
#4 0x00007f67b35b4cd6 __assert_fail (libc.so.6 + 0x4ecd6)
#5 0x000055ca87fd411b memory_region_transaction_commit (qemu-kvm + 0x67511b)
#6 0x000055ca87e17811 virtio_pci_ioeventfd_assign (qemu-kvm + 0x4b8811)
#7 0x000055ca87e14836 virtio_bus_set_host_notifier (qemu-kvm + 0x4b5836)
#8 0x000055ca87f8e14e virtio_scsi_set_host_notifier (qemu-kvm + 0x62f14e)
#9 0x000055ca87f8dd62 virtio_scsi_dataplane_start (qemu-kvm + 0x62ed62)
#10 0x000055ca87e14610 virtio_bus_start_ioeventfd (qemu-kvm + 0x4b5610)
#11 0x000055ca87f8c29a virtio_scsi_handle_ctrl (qemu-kvm + 0x62d29a)
#12 0x000055ca87fa5902 virtio_queue_host_notifier_read (qemu-kvm + 0x646902)
#13 0x000055ca882c099e aio_dispatch_handler (qemu-kvm + 0x96199e)
#14 0x000055ca882c1761 aio_poll (qemu-kvm + 0x962761)
#15 0x000055ca880e1052 iothread_run (qemu-kvm + 0x782052)
#16 0x000055ca882c562a qemu_thread_start (qemu-kvm + 0x96662a)
This patch assigns s->dataplane_started before attaching host notifiers
so that virtqueue handler functions that run in the IOThread before
virtio_scsi_data_plane_start() returns correctly identify that dataplane
does not need to be started. This fix is taken from the virtio-blk
dataplane code and it's worth adding a comment in virtio-blk as well to
explain why it works.
Note that s->dataplane_started does not need the AioContext lock because
it is set before attaching host notifiers and cleared after detaching
host notifiers. In other words, the IOThread always sees the value true
and the main loop thread does not modify it while the IOThread is
active.
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2099541
Reported-by: Qing Wang <qinwang@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20220808162134.240405-1-stefanha@redhat.com>
Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 9a4b6a63aee885931622549c85669dcca03bed39)
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
---
hw/block/dataplane/virtio-blk.c | 5 +++++
hw/scsi/virtio-scsi-dataplane.c | 11 ++++++++---
2 files changed, 13 insertions(+), 3 deletions(-)
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 49276e46f2..26f965cabc 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -219,6 +219,11 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
memory_region_transaction_commit();
+ /*
+ * These fields are visible to the IOThread so we rely on implicit barriers
+ * in aio_context_acquire() on the write side and aio_notify_accept() on
+ * the read side.
+ */
s->starting = false;
vblk->dataplane_started = true;
trace_virtio_blk_data_plane_start(s);
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index 8bb6e6acfc..20bb91766e 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -136,6 +136,14 @@ int virtio_scsi_dataplane_start(VirtIODevice *vdev)
memory_region_transaction_commit();
+ /*
+ * These fields are visible to the IOThread so we rely on implicit barriers
+ * in aio_context_acquire() on the write side and aio_notify_accept() on
+ * the read side.
+ */
+ s->dataplane_starting = false;
+ s->dataplane_started = true;
+
aio_context_acquire(s->ctx);
virtio_queue_aio_attach_host_notifier(vs->ctrl_vq, s->ctx);
virtio_queue_aio_attach_host_notifier_no_poll(vs->event_vq, s->ctx);
@@ -143,9 +151,6 @@ int virtio_scsi_dataplane_start(VirtIODevice *vdev)
for (i = 0; i < vs->conf.num_queues; i++) {
virtio_queue_aio_attach_host_notifier(vs->cmd_vqs[i], s->ctx);
}
-
- s->dataplane_starting = false;
- s->dataplane_started = true;
aio_context_release(s->ctx);
return 0;
--
2.31.1
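
A standalone sketch (not QEMU code; pthread locking stands in for the
implicit barriers the commit message describes, and all names are
invented) of the ordering rule this patch enforces: publish the
"started" flag before attaching the notifier that lets another thread
run, so that thread can never observe started == false.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static bool started;
static bool notifier_attached;

static void *iothread_fn(void *arg)
{
    pthread_mutex_lock(&lock);
    if (notifier_attached) {
        /* With the fixed ordering this never prints started=0. */
        printf("iothread sees started=%d\n", started);
    }
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t;

    pthread_mutex_lock(&lock);
    started = true;            /* set the flag first ... */
    notifier_attached = true;  /* ... then let the "IOThread" run */
    pthread_mutex_unlock(&lock);

    pthread_create(&t, NULL, iothread_fn, NULL);
    pthread_join(t, NULL);
    return 0;
}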


@@ -151,7 +151,7 @@ Obsoletes: %{name}-block-ssh <= %{epoch}:%{version} \
Summary: QEMU is a machine emulator and virtualizer
Name: qemu-kvm
Version: 7.0.0
Release: 11%{?rcrel}%{?dist}%{?cc_suffix}
Release: 12%{?rcrel}%{?dist}%{?cc_suffix}
# Epoch because we pushed a qemu-1.0 package. AIUI this can't ever be dropped
# Epoch 15 used for RHEL 8
# Epoch 17 used for RHEL 9 (due to release versioning offset in RHEL 8.5)
@@ -444,6 +444,52 @@ Patch144: kvm-vdpa-Fix-index-calculus-at-vhost_vdpa_svqs_start.patch
Patch145: kvm-vdpa-Fix-memory-listener-deletions-of-iova-tree.patch
# For bz#2116876 - Fixes for vDPA control virtqueue support in Qemu
Patch146: kvm-vdpa-Fix-file-descriptor-leak-on-get-features-error.patch
# For bz#2120275 - Wrong max_sectors_kb and Maximum transfer length on the pass-through device [rhel-9.1]
Patch147: kvm-scsi-generic-Fix-emulated-block-limits-VPD-page.patch
# For bz#2114060 - vDPA state restore support through control virtqueue in Qemu
Patch148: kvm-vhost-Get-vring-base-from-vq-not-svq.patch
# For bz#2114060 - vDPA state restore support through control virtqueue in Qemu
Patch149: kvm-vdpa-Skip-the-maps-not-in-the-iova-tree.patch
# For bz#2114060 - vDPA state restore support through control virtqueue in Qemu
Patch150: kvm-vdpa-do-not-save-failed-dma-maps-in-SVQ-iova-tree.patch
# For bz#2114060 - vDPA state restore support through control virtqueue in Qemu
Patch151: kvm-util-Return-void-on-iova_tree_remove.patch
# For bz#2114060 - vDPA state restore support through control virtqueue in Qemu
Patch152: kvm-util-accept-iova_tree_remove_parameter-by-value.patch
# For bz#2114060 - vDPA state restore support through control virtqueue in Qemu
Patch153: kvm-vdpa-Remove-SVQ-vring-from-iova_tree-at-shutdown.patch
# For bz#2114060 - vDPA state restore support through control virtqueue in Qemu
Patch154: kvm-vdpa-Make-SVQ-vring-unmapping-return-void.patch
# For bz#2114060 - vDPA state restore support through control virtqueue in Qemu
Patch155: kvm-vhost-Always-store-new-kick-fd-on-vhost_svq_set_svq_.patch
# For bz#2114060 - vDPA state restore support through control virtqueue in Qemu
Patch156: kvm-vdpa-Use-ring-hwaddr-at-vhost_vdpa_svq_unmap_ring.patch
# For bz#2114060 - vDPA state restore support through control virtqueue in Qemu
Patch157: kvm-vhost-stop-transfer-elem-ownership-in-vhost_handle_g.patch
# For bz#2114060 - vDPA state restore support through control virtqueue in Qemu
Patch158: kvm-vhost-use-SVQ-element-ndescs-instead-of-opaque-data-.patch
# For bz#2114060 - vDPA state restore support through control virtqueue in Qemu
Patch159: kvm-vhost-Delete-useless-read-memory-barrier.patch
# For bz#2114060 - vDPA state restore support through control virtqueue in Qemu
Patch160: kvm-vhost-Do-not-depend-on-NULL-VirtQueueElement-on-vhos.patch
# For bz#2114060 - vDPA state restore support through control virtqueue in Qemu
Patch161: kvm-vhost_net-Add-NetClientInfo-start-callback.patch
# For bz#2114060 - vDPA state restore support through control virtqueue in Qemu
Patch162: kvm-vhost_net-Add-NetClientInfo-stop-callback.patch
# For bz#2114060 - vDPA state restore support through control virtqueue in Qemu
Patch163: kvm-vdpa-add-net_vhost_vdpa_cvq_info-NetClientInfo.patch
# For bz#2114060 - vDPA state restore support through control virtqueue in Qemu
Patch164: kvm-vdpa-Move-command-buffers-map-to-start-of-net-device.patch
# For bz#2114060 - vDPA state restore support through control virtqueue in Qemu
Patch165: kvm-vdpa-extract-vhost_vdpa_net_cvq_add-from-vhost_vdpa_.patch
# For bz#2114060 - vDPA state restore support through control virtqueue in Qemu
Patch166: kvm-vhost_net-add-NetClientState-load-callback.patch
# For bz#2114060 - vDPA state restore support through control virtqueue in Qemu
Patch167: kvm-vdpa-Add-virtio-net-mac-address-via-CVQ-at-start.patch
# For bz#2114060 - vDPA state restore support through control virtqueue in Qemu
Patch168: kvm-vdpa-Delete-CVQ-migration-blocker.patch
# For bz#2099541 - qemu coredump with error Assertion `qemu_mutex_iothread_locked()' failed when repeatly hotplug/unplug disks in pause status
Patch169: kvm-virtio-scsi-fix-race-in-virtio_scsi_dataplane_start.patch
# Source-git patches
@@ -1479,6 +1525,37 @@ useradd -r -u 107 -g qemu -G kvm -d / -s /sbin/nologin \
%endif
%changelog
* Fri Aug 26 2022 Miroslav Rezanina <mrezanin@redhat.com> - 7.0.0-12
- kvm-scsi-generic-Fix-emulated-block-limits-VPD-page.patch [bz#2120275]
- kvm-vhost-Get-vring-base-from-vq-not-svq.patch [bz#2114060]
- kvm-vdpa-Skip-the-maps-not-in-the-iova-tree.patch [bz#2114060]
- kvm-vdpa-do-not-save-failed-dma-maps-in-SVQ-iova-tree.patch [bz#2114060]
- kvm-util-Return-void-on-iova_tree_remove.patch [bz#2114060]
- kvm-util-accept-iova_tree_remove_parameter-by-value.patch [bz#2114060]
- kvm-vdpa-Remove-SVQ-vring-from-iova_tree-at-shutdown.patch [bz#2114060]
- kvm-vdpa-Make-SVQ-vring-unmapping-return-void.patch [bz#2114060]
- kvm-vhost-Always-store-new-kick-fd-on-vhost_svq_set_svq_.patch [bz#2114060]
- kvm-vdpa-Use-ring-hwaddr-at-vhost_vdpa_svq_unmap_ring.patch [bz#2114060]
- kvm-vhost-stop-transfer-elem-ownership-in-vhost_handle_g.patch [bz#2114060]
- kvm-vhost-use-SVQ-element-ndescs-instead-of-opaque-data-.patch [bz#2114060]
- kvm-vhost-Delete-useless-read-memory-barrier.patch [bz#2114060]
- kvm-vhost-Do-not-depend-on-NULL-VirtQueueElement-on-vhos.patch [bz#2114060]
- kvm-vhost_net-Add-NetClientInfo-start-callback.patch [bz#2114060]
- kvm-vhost_net-Add-NetClientInfo-stop-callback.patch [bz#2114060]
- kvm-vdpa-add-net_vhost_vdpa_cvq_info-NetClientInfo.patch [bz#2114060]
- kvm-vdpa-Move-command-buffers-map-to-start-of-net-device.patch [bz#2114060]
- kvm-vdpa-extract-vhost_vdpa_net_cvq_add-from-vhost_vdpa_.patch [bz#2114060]
- kvm-vhost_net-add-NetClientState-load-callback.patch [bz#2114060]
- kvm-vdpa-Add-virtio-net-mac-address-via-CVQ-at-start.patch [bz#2114060]
- kvm-vdpa-Delete-CVQ-migration-blocker.patch [bz#2114060]
- kvm-virtio-scsi-fix-race-in-virtio_scsi_dataplane_start.patch [bz#2099541]
- Resolves: bz#2120275
(Wrong max_sectors_kb and Maximum transfer length on the pass-through device [rhel-9.1])
- Resolves: bz#2114060
(vDPA state restore support through control virtqueue in Qemu)
- Resolves: bz#2099541
(qemu coredump with error Assertion `qemu_mutex_iothread_locked()' failed when repeatly hotplug/unplug disks in pause status)
* Mon Aug 15 2022 Miroslav Rezanina <mrezanin@redhat.com> - 7.0.0-11
- kvm-QIOChannelSocket-Fix-zero-copy-flush-returning-code-.patch [bz#2107466]
- kvm-Add-dirty-sync-missed-zero-copy-migration-stat.patch [bz#2107466]