* Fri Aug 26 2022 Miroslav Rezanina <mrezanin@redhat.com> - 7.0.0-12
- kvm-scsi-generic-Fix-emulated-block-limits-VPD-page.patch [bz#2120275]
- kvm-vhost-Get-vring-base-from-vq-not-svq.patch [bz#2114060]
- kvm-vdpa-Skip-the-maps-not-in-the-iova-tree.patch [bz#2114060]
- kvm-vdpa-do-not-save-failed-dma-maps-in-SVQ-iova-tree.patch [bz#2114060]
- kvm-util-Return-void-on-iova_tree_remove.patch [bz#2114060]
- kvm-util-accept-iova_tree_remove_parameter-by-value.patch [bz#2114060]
- kvm-vdpa-Remove-SVQ-vring-from-iova_tree-at-shutdown.patch [bz#2114060]
- kvm-vdpa-Make-SVQ-vring-unmapping-return-void.patch [bz#2114060]
- kvm-vhost-Always-store-new-kick-fd-on-vhost_svq_set_svq_.patch [bz#2114060]
- kvm-vdpa-Use-ring-hwaddr-at-vhost_vdpa_svq_unmap_ring.patch [bz#2114060]
- kvm-vhost-stop-transfer-elem-ownership-in-vhost_handle_g.patch [bz#2114060]
- kvm-vhost-use-SVQ-element-ndescs-instead-of-opaque-data-.patch [bz#2114060]
- kvm-vhost-Delete-useless-read-memory-barrier.patch [bz#2114060]
- kvm-vhost-Do-not-depend-on-NULL-VirtQueueElement-on-vhos.patch [bz#2114060]
- kvm-vhost_net-Add-NetClientInfo-start-callback.patch [bz#2114060]
- kvm-vhost_net-Add-NetClientInfo-stop-callback.patch [bz#2114060]
- kvm-vdpa-add-net_vhost_vdpa_cvq_info-NetClientInfo.patch [bz#2114060]
- kvm-vdpa-Move-command-buffers-map-to-start-of-net-device.patch [bz#2114060]
- kvm-vdpa-extract-vhost_vdpa_net_cvq_add-from-vhost_vdpa_.patch [bz#2114060]
- kvm-vhost_net-add-NetClientState-load-callback.patch [bz#2114060]
- kvm-vdpa-Add-virtio-net-mac-address-via-CVQ-at-start.patch [bz#2114060]
- kvm-vdpa-Delete-CVQ-migration-blocker.patch [bz#2114060]
- kvm-virtio-scsi-fix-race-in-virtio_scsi_dataplane_start.patch [bz#2099541]
- Resolves: bz#2120275
  (Wrong max_sectors_kb and Maximum transfer length on the pass-through device [rhel-9.1])
- Resolves: bz#2114060
  (vDPA state restore support through control virtqueue in Qemu)
- Resolves: bz#2099541
  (qemu coredump with error Assertion `qemu_mutex_iothread_locked()' failed when repeatly hotplug/unplug disks in pause status)
From 6d16102aca24bab16c846fe6457071f4466b8e35 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Tue, 23 Aug 2022 20:20:03 +0200
Subject: [PATCH 04/23] vdpa: do not save failed dma maps in SVQ iova tree
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 116: vdpa: Restore device state on destination
RH-Bugzilla: 2114060
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [3/21] f9bea39f7fa14c5ef0f85774cbad0ca3b52c4498 (eperezmartin/qemu-kvm)
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2114060
Upstream status: git@github.com:jasowang/qemu.git net-next
If a map fails for whatever reason, it must not be saved in the tree.
Otherwise, qemu will try to unmap it in cleanup, leading to more errors.
Fixes: 34e3c94eda ("vdpa: Add custom IOTLB translations to SVQ")
Reported-by: Lei Yang <leiyang@redhat.com>
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit 6cc2ec65382fde205511ac00a324995ce6ee8f28)
---
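Note: the fix boils down to a simple rollback pattern: the entry is only kept in
the SVQ IOVA tree if the subsequent DMA map succeeds; on failure it is removed
again before taking the common error path. A minimal standalone sketch of that
pattern follows. map_alloc(), dma_map() and map_remove() are hypothetical
stand-ins for vhost_iova_tree_map_alloc(), vhost_vdpa_dma_map() and
vhost_iova_tree_remove(); they are not the real QEMU API.

/*
 * Minimal sketch of the rollback pattern, under the assumption that a
 * failed device map must not leave a stale tree entry behind.
 * map_alloc(), dma_map() and map_remove() are hypothetical stand-ins,
 * not QEMU code.
 */
#include <stdbool.h>
#include <stdio.h>

struct map_entry {
    unsigned long iova;
    unsigned long size;
    bool in_tree;
};

static int map_alloc(struct map_entry *e)
{
    e->in_tree = true;          /* reserve the IOVA range in the tree */
    return 0;
}

static void map_remove(struct map_entry *e)
{
    e->in_tree = false;         /* drop the reservation again */
}

static int dma_map(const struct map_entry *e)
{
    (void)e;
    return -1;                  /* pretend the device mapping failed */
}

static int region_add(struct map_entry *e)
{
    int r = map_alloc(e);
    if (r != 0) {
        return r;               /* nothing allocated, nothing to undo */
    }

    r = dma_map(e);
    if (r != 0) {
        /*
         * The device refused the mapping, so the tree entry must go
         * away too; otherwise a later cleanup would try to unmap a
         * range that was never mapped.
         */
        map_remove(e);
        return r;
    }
    return 0;
}

int main(void)
{
    struct map_entry e = { .iova = 0x1000, .size = 0x1000, .in_tree = false };
    int r = region_add(&e);
    printf("region_add() = %d, entry still in tree: %s\n",
           r, e.in_tree ? "yes" : "no");
    return 0;
}

Without the rollback, region_add() would return the error but leave the entry
in the tree, which is exactly the stale state the patch removes.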
 hw/virtio/vhost-vdpa.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index aa7765c6bc..cc15b7d8ee 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -174,6 +174,7 @@ static void vhost_vdpa_listener_commit(MemoryListener *listener)
 static void vhost_vdpa_listener_region_add(MemoryListener *listener,
                                            MemoryRegionSection *section)
 {
+    DMAMap mem_region = {};
     struct vhost_vdpa *v = container_of(listener, struct vhost_vdpa, listener);
     hwaddr iova;
     Int128 llend, llsize;
@@ -210,13 +211,13 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
 
     llsize = int128_sub(llend, int128_make64(iova));
     if (v->shadow_vqs_enabled) {
-        DMAMap mem_region = {
-            .translated_addr = (hwaddr)(uintptr_t)vaddr,
-            .size = int128_get64(llsize) - 1,
-            .perm = IOMMU_ACCESS_FLAG(true, section->readonly),
-        };
+        int r;
 
-        int r = vhost_iova_tree_map_alloc(v->iova_tree, &mem_region);
+        mem_region.translated_addr = (hwaddr)(uintptr_t)vaddr,
+        mem_region.size = int128_get64(llsize) - 1,
+        mem_region.perm = IOMMU_ACCESS_FLAG(true, section->readonly),
+
+        r = vhost_iova_tree_map_alloc(v->iova_tree, &mem_region);
         if (unlikely(r != IOVA_OK)) {
             error_report("Can't allocate a mapping (%d)", r);
             goto fail;
@@ -230,11 +231,16 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
                              vaddr, section->readonly);
     if (ret) {
         error_report("vhost vdpa map fail!");
-        goto fail;
+        goto fail_map;
     }
 
     return;
 
+fail_map:
+    if (v->shadow_vqs_enabled) {
+        vhost_iova_tree_remove(v->iova_tree, &mem_region);
+    }
+
 fail:
     /*
      * On the initfn path, store the first error in the container so we
--
2.31.1