* Mon Feb 06 2023 Miroslav Rezanina <mrezanin@redhat.com> - 7.2.0-7

- kvm-vdpa-use-v-shadow_vqs_enabled-in-vhost_vdpa_svqs_sta.patch [bz#2104412]
- kvm-vhost-set-SVQ-device-call-handler-at-SVQ-start.patch [bz#2104412]
- kvm-vhost-allocate-SVQ-device-file-descriptors-at-device.patch [bz#2104412]
- kvm-vhost-move-iova_tree-set-to-vhost_svq_start.patch [bz#2104412]
- kvm-vdpa-add-vhost_vdpa_net_valid_svq_features.patch [bz#2104412]
- kvm-vdpa-request-iova_range-only-once.patch [bz#2104412]
- kvm-vdpa-move-SVQ-vring-features-check-to-net.patch [bz#2104412]
- kvm-vdpa-allocate-SVQ-array-unconditionally.patch [bz#2104412]
- kvm-vdpa-add-asid-parameter-to-vhost_vdpa_dma_map-unmap.patch [bz#2104412]
- kvm-vdpa-store-x-svq-parameter-in-VhostVDPAState.patch [bz#2104412]
- kvm-vdpa-add-shadow_data-to-vhost_vdpa.patch [bz#2104412]
- kvm-vdpa-always-start-CVQ-in-SVQ-mode-if-possible.patch [bz#2104412]
- kvm-vdpa-fix-VHOST_BACKEND_F_IOTLB_ASID-flag-check.patch [bz#2104412]
- kvm-spec-Disable-VDUSE.patch [bz#2128222]
- Resolves: bz#2104412
  (vDPA ASID support in Qemu)
- Resolves: bz#2128222
  (VDUSE block export should be disabled in builds for now)
Miroslav Rezanina committed on 2023-02-06 10:05:42 -05:00
parent dd0eece2ef
commit 9b81b4ad6b
14 changed files with 1480 additions and 1 deletions

From d0e7f24a8d941ab142f2a1973ae18ed1bfdc074f Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Thu, 15 Dec 2022 12:31:41 +0100
Subject: [PATCH 09/14] vdpa: add asid parameter to vhost_vdpa_dma_map/unmap
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 136: vDPA ASID support in Qemu
RH-Bugzilla: 2104412
RH-Acked-by: Laurent Vivier <lvivier@redhat.com>
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [9/13] 3e7f89e57f73661017ccf0206f2ea77a72ca46bb (eperezmartin/qemu-kvm)
So the caller can choose to which ASID the mapping is destined.
No need to update the batch functions as they will always be called from
memory listener updates at the moment. Memory listener updates will
always update ASID 0, as it's the passthrough ASID.
All vhost devices' ASIDs are 0 at this moment.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20221215113144.322011-10-eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit cd831ed5c4add8ed6ee980c3645b241cbef5130f)
---
hw/virtio/trace-events | 4 ++--
hw/virtio/vhost-vdpa.c | 36 +++++++++++++++++++++++-----------
include/hw/virtio/vhost-vdpa.h | 14 ++++++++++---
net/vhost-vdpa.c | 6 +++---
4 files changed, 41 insertions(+), 19 deletions(-)
diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index 46f2faf04e..a87c5f39a2 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -30,8 +30,8 @@ vhost_user_write(uint32_t req, uint32_t flags) "req:%d flags:0x%"PRIx32""
vhost_user_create_notifier(int idx, void *n) "idx:%d n:%p"
# vhost-vdpa.c
-vhost_vdpa_dma_map(void *vdpa, int fd, uint32_t msg_type, uint64_t iova, uint64_t size, uint64_t uaddr, uint8_t perm, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" uaddr: 0x%"PRIx64" perm: 0x%"PRIx8" type: %"PRIu8
-vhost_vdpa_dma_unmap(void *vdpa, int fd, uint32_t msg_type, uint64_t iova, uint64_t size, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" type: %"PRIu8
+vhost_vdpa_dma_map(void *vdpa, int fd, uint32_t msg_type, uint32_t asid, uint64_t iova, uint64_t size, uint64_t uaddr, uint8_t perm, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" asid: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" uaddr: 0x%"PRIx64" perm: 0x%"PRIx8" type: %"PRIu8
+vhost_vdpa_dma_unmap(void *vdpa, int fd, uint32_t msg_type, uint32_t asid, uint64_t iova, uint64_t size, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" asid: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" type: %"PRIu8
vhost_vdpa_listener_begin_batch(void *v, int fd, uint32_t msg_type, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" type: %"PRIu8
vhost_vdpa_listener_commit(void *v, int fd, uint32_t msg_type, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" type: %"PRIu8
vhost_vdpa_listener_region_add(void *vdpa, uint64_t iova, uint64_t llend, void *vaddr, bool readonly) "vdpa: %p iova 0x%"PRIx64" llend 0x%"PRIx64" vaddr: %p read-only: %d"
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index dd2768634b..0ecf2bbaa0 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -72,22 +72,28 @@ static bool vhost_vdpa_listener_skipped_section(MemoryRegionSection *section,
return false;
}
-int vhost_vdpa_dma_map(struct vhost_vdpa *v, hwaddr iova, hwaddr size,
- void *vaddr, bool readonly)
+/*
+ * The caller must set asid = 0 if the device does not support asid.
+ * This is not an ABI break since it is set to 0 by the initializer anyway.
+ */
+int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
+ hwaddr size, void *vaddr, bool readonly)
{
struct vhost_msg_v2 msg = {};
int fd = v->device_fd;
int ret = 0;
msg.type = v->msg_type;
+ msg.asid = asid;
msg.iotlb.iova = iova;
msg.iotlb.size = size;
msg.iotlb.uaddr = (uint64_t)(uintptr_t)vaddr;
msg.iotlb.perm = readonly ? VHOST_ACCESS_RO : VHOST_ACCESS_RW;
msg.iotlb.type = VHOST_IOTLB_UPDATE;
- trace_vhost_vdpa_dma_map(v, fd, msg.type, msg.iotlb.iova, msg.iotlb.size,
- msg.iotlb.uaddr, msg.iotlb.perm, msg.iotlb.type);
+ trace_vhost_vdpa_dma_map(v, fd, msg.type, msg.asid, msg.iotlb.iova,
+ msg.iotlb.size, msg.iotlb.uaddr, msg.iotlb.perm,
+ msg.iotlb.type);
if (write(fd, &msg, sizeof(msg)) != sizeof(msg)) {
error_report("failed to write, fd=%d, errno=%d (%s)",
@@ -98,18 +104,24 @@ int vhost_vdpa_dma_map(struct vhost_vdpa *v, hwaddr iova, hwaddr size,
return ret;
}
-int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, hwaddr iova, hwaddr size)
+/*
+ * The caller must set asid = 0 if the device does not support asid.
+ * This is not an ABI break since it is set to 0 by the initializer anyway.
+ */
+int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
+ hwaddr size)
{
struct vhost_msg_v2 msg = {};
int fd = v->device_fd;
int ret = 0;
msg.type = v->msg_type;
+ msg.asid = asid;
msg.iotlb.iova = iova;
msg.iotlb.size = size;
msg.iotlb.type = VHOST_IOTLB_INVALIDATE;
- trace_vhost_vdpa_dma_unmap(v, fd, msg.type, msg.iotlb.iova,
+ trace_vhost_vdpa_dma_unmap(v, fd, msg.type, msg.asid, msg.iotlb.iova,
msg.iotlb.size, msg.iotlb.type);
if (write(fd, &msg, sizeof(msg)) != sizeof(msg)) {
@@ -229,8 +241,8 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
}
vhost_vdpa_iotlb_batch_begin_once(v);
- ret = vhost_vdpa_dma_map(v, iova, int128_get64(llsize),
- vaddr, section->readonly);
+ ret = vhost_vdpa_dma_map(v, VHOST_VDPA_GUEST_PA_ASID, iova,
+ int128_get64(llsize), vaddr, section->readonly);
if (ret) {
error_report("vhost vdpa map fail!");
goto fail_map;
@@ -303,7 +315,8 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
vhost_iova_tree_remove(v->iova_tree, *result);
}
vhost_vdpa_iotlb_batch_begin_once(v);
- ret = vhost_vdpa_dma_unmap(v, iova, int128_get64(llsize));
+ ret = vhost_vdpa_dma_unmap(v, VHOST_VDPA_GUEST_PA_ASID, iova,
+ int128_get64(llsize));
if (ret) {
error_report("vhost_vdpa dma unmap error!");
}
@@ -876,7 +889,7 @@ static void vhost_vdpa_svq_unmap_ring(struct vhost_vdpa *v, hwaddr addr)
}
size = ROUND_UP(result->size, qemu_real_host_page_size());
- r = vhost_vdpa_dma_unmap(v, result->iova, size);
+ r = vhost_vdpa_dma_unmap(v, v->address_space_id, result->iova, size);
if (unlikely(r < 0)) {
error_report("Unable to unmap SVQ vring: %s (%d)", g_strerror(-r), -r);
return;
@@ -916,7 +929,8 @@ static bool vhost_vdpa_svq_map_ring(struct vhost_vdpa *v, DMAMap *needle,
return false;
}
- r = vhost_vdpa_dma_map(v, needle->iova, needle->size + 1,
+ r = vhost_vdpa_dma_map(v, v->address_space_id, needle->iova,
+ needle->size + 1,
(void *)(uintptr_t)needle->translated_addr,
needle->perm == IOMMU_RO);
if (unlikely(r != 0)) {
diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index 1111d85643..e57dfa1fd1 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -19,6 +19,12 @@
#include "hw/virtio/virtio.h"
#include "standard-headers/linux/vhost_types.h"
+/*
+ * ASID dedicated to map guest's addresses. If SVQ is disabled it maps GPA to
+ * qemu's IOVA. If SVQ is enabled it maps also the SVQ vring here
+ */
+#define VHOST_VDPA_GUEST_PA_ASID 0
+
typedef struct VhostVDPAHostNotifier {
MemoryRegion mr;
void *addr;
@@ -29,6 +35,7 @@ typedef struct vhost_vdpa {
int index;
uint32_t msg_type;
bool iotlb_batch_begin_sent;
+ uint32_t address_space_id;
MemoryListener listener;
struct vhost_vdpa_iova_range iova_range;
uint64_t acked_features;
@@ -42,8 +49,9 @@ typedef struct vhost_vdpa {
VhostVDPAHostNotifier notifier[VIRTIO_QUEUE_MAX];
} VhostVDPA;
-int vhost_vdpa_dma_map(struct vhost_vdpa *v, hwaddr iova, hwaddr size,
- void *vaddr, bool readonly);
-int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, hwaddr iova, hwaddr size);
+int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
+ hwaddr size, void *vaddr, bool readonly);
+int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
+ hwaddr size);
#endif
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 85aa0da39a..c2f319eb88 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -258,7 +258,7 @@ static void vhost_vdpa_cvq_unmap_buf(struct vhost_vdpa *v, void *addr)
return;
}
- r = vhost_vdpa_dma_unmap(v, map->iova, map->size + 1);
+ r = vhost_vdpa_dma_unmap(v, v->address_space_id, map->iova, map->size + 1);
if (unlikely(r != 0)) {
error_report("Device cannot unmap: %s(%d)", g_strerror(r), r);
}
@@ -298,8 +298,8 @@ static int vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v, void *buf, size_t size,
return r;
}
- r = vhost_vdpa_dma_map(v, map.iova, vhost_vdpa_net_cvq_cmd_page_len(), buf,
- !write);
+ r = vhost_vdpa_dma_map(v, v->address_space_id, map.iova,
+ vhost_vdpa_net_cvq_cmd_page_len(), buf, !write);
if (unlikely(r < 0)) {
goto dma_map_err;
}
--
2.31.1
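
For reference, the calling convention introduced above can be shown in a
minimal standalone sketch: guest-memory mappings issued by the memory
listener always target ASID 0 (VHOST_VDPA_GUEST_PA_ASID), while SVQ/CVQ
buffers target the device's address_space_id. The struct and helper below
are simplified, hypothetical stand-ins for vhost_msg_v2 and
vhost_vdpa_dma_map(), not the real UAPI or QEMU code.

/* Simplified stand-ins; not the real vhost UAPI layout or QEMU helpers. */
#include <stdint.h>
#include <stdio.h>

#define VHOST_VDPA_GUEST_PA_ASID 0   /* passthrough ASID used by the memory listener */

struct demo_iotlb_msg {              /* hypothetical stand-in for struct vhost_msg_v2 */
    uint32_t asid;
    uint64_t iova;
    uint64_t size;
};

/* Stand-in for vhost_vdpa_dma_map(): the only difference from before is the
 * explicit asid chosen by the caller. */
static void demo_dma_map(uint32_t asid, uint64_t iova, uint64_t size)
{
    struct demo_iotlb_msg msg = { .asid = asid, .iova = iova, .size = size };

    printf("map asid=%u iova=0x%llx size=0x%llx\n", msg.asid,
           (unsigned long long)msg.iova, (unsigned long long)msg.size);
}

int main(void)
{
    uint32_t svq_asid = 1;           /* e.g. address_space_id of an isolated CVQ group */

    demo_dma_map(VHOST_VDPA_GUEST_PA_ASID, 0x100000, 0x10000); /* guest RAM: ASID 0 */
    demo_dma_map(svq_asid, 0x1000, 0x1000);                    /* SVQ/CVQ buffer */
    return 0;
}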

From 6282a83619f274ca45a52d61577c10a05a0714dc Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Thu, 15 Dec 2022 12:31:43 +0100
Subject: [PATCH 11/14] vdpa: add shadow_data to vhost_vdpa
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 136: vDPA ASID support in Qemu
RH-Bugzilla: 2104412
RH-Acked-by: Laurent Vivier <lvivier@redhat.com>
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [11/13] 9d317add1318b555ba06e19e4c67849069e047b9 (eperezmartin/qemu-kvm)
The memory listener that tells the device how to convert GPA to qemu's
VA is registered against CVQ's vhost_vdpa. Memory listener translations
are always ASID 0, CVQ ones are ASID 1 if supported.
Let's tell the listener whether it needs to register them on the iova
tree or not.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20221215113144.322011-12-eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 6188d78a19894ac8f2bf9484d48a5235a529d3b7)
---
hw/virtio/vhost-vdpa.c | 6 +++---
include/hw/virtio/vhost-vdpa.h | 2 ++
net/vhost-vdpa.c | 1 +
3 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 0ecf2bbaa0..dc3498e995 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -224,7 +224,7 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
vaddr, section->readonly);
llsize = int128_sub(llend, int128_make64(iova));
- if (v->shadow_vqs_enabled) {
+ if (v->shadow_data) {
int r;
mem_region.translated_addr = (hwaddr)(uintptr_t)vaddr,
@@ -251,7 +251,7 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
return;
fail_map:
- if (v->shadow_vqs_enabled) {
+ if (v->shadow_data) {
vhost_iova_tree_remove(v->iova_tree, mem_region);
}
@@ -296,7 +296,7 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
llsize = int128_sub(llend, int128_make64(iova));
- if (v->shadow_vqs_enabled) {
+ if (v->shadow_data) {
const DMAMap *result;
const void *vaddr = memory_region_get_ram_ptr(section->mr) +
section->offset_within_region +
diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index e57dfa1fd1..45b969a311 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -40,6 +40,8 @@ typedef struct vhost_vdpa {
struct vhost_vdpa_iova_range iova_range;
uint64_t acked_features;
bool shadow_vqs_enabled;
+ /* Vdpa must send shadow addresses as IOTLB key for data queues, not GPA */
+ bool shadow_data;
/* IOVA mapping used by the Shadow Virtqueue */
VhostIOVATree *iova_tree;
GPtrArray *shadow_vqs;
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 1757f1d028..eea7a0df12 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -581,6 +581,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
s->always_svq = svq;
s->vhost_vdpa.shadow_vqs_enabled = svq;
s->vhost_vdpa.iova_range = iova_range;
+ s->vhost_vdpa.shadow_data = svq;
s->vhost_vdpa.iova_tree = iova_tree;
if (!is_datapath) {
s->cvq_cmd_out_buffer = qemu_memalign(qemu_real_host_page_size(),
--
2.31.1

From 0f3a28e1e128754184c4af6a578f27e16c6a61d5 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Thu, 15 Dec 2022 12:31:37 +0100
Subject: [PATCH 05/14] vdpa: add vhost_vdpa_net_valid_svq_features
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 136: vDPA ASID support in Qemu
RH-Bugzilla: 2104412
RH-Acked-by: Laurent Vivier <lvivier@redhat.com>
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [5/13] 0b27e04f178ec73cb800f4fb05c17a92576142e4 (eperezmartin/qemu-kvm)
It will be reused at vdpa device start, so let's extract it into its
own function.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20221215113144.322011-6-eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 36e4647247f200b6fa4d2f656133f567036e8a85)
---
net/vhost-vdpa.c | 26 +++++++++++++++++---------
1 file changed, 17 insertions(+), 9 deletions(-)
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index b06540ac89..16a5ebe2dd 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -106,6 +106,22 @@ VHostNetState *vhost_vdpa_get_vhost_net(NetClientState *nc)
return s->vhost_net;
}
+static bool vhost_vdpa_net_valid_svq_features(uint64_t features, Error **errp)
+{
+ uint64_t invalid_dev_features =
+ features & ~vdpa_svq_device_features &
+ /* Transport are all accepted at this point */
+ ~MAKE_64BIT_MASK(VIRTIO_TRANSPORT_F_START,
+ VIRTIO_TRANSPORT_F_END - VIRTIO_TRANSPORT_F_START);
+
+ if (invalid_dev_features) {
+ error_setg(errp, "vdpa svq does not work with features 0x%" PRIx64,
+ invalid_dev_features);
+ }
+
+ return !invalid_dev_features;
+}
+
static int vhost_vdpa_net_check_device_id(struct vhost_net *net)
{
uint32_t device_id;
@@ -684,15 +700,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
if (opts->x_svq) {
struct vhost_vdpa_iova_range iova_range;
- uint64_t invalid_dev_features =
- features & ~vdpa_svq_device_features &
- /* Transport are all accepted at this point */
- ~MAKE_64BIT_MASK(VIRTIO_TRANSPORT_F_START,
- VIRTIO_TRANSPORT_F_END - VIRTIO_TRANSPORT_F_START);
-
- if (invalid_dev_features) {
- error_setg(errp, "vdpa svq does not work with features 0x%" PRIx64,
- invalid_dev_features);
+ if (!vhost_vdpa_net_valid_svq_features(features, errp)) {
goto err_svq;
}
--
2.31.1

From 72f296870805750df8dfe5eaad77dd7d435a8f41 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Thu, 15 Dec 2022 12:31:40 +0100
Subject: [PATCH 08/14] vdpa: allocate SVQ array unconditionally
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 136: vDPA ASID support in Qemu
RH-Bugzilla: 2104412
RH-Acked-by: Laurent Vivier <lvivier@redhat.com>
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [8/13] 08cd86d0859f82d768794e29241cfeff25df667c (eperezmartin/qemu-kvm)
SVQ may or may not run in a device depending on runtime conditions (for
example, whether the device can move CVQ to its own group or not).
Allocate the SVQ array unconditionally at startup, since it's hard to
move this allocation elsewhere.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20221215113144.322011-9-eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 273e0003f0005cc17292dedae01e5edb0064b69c)
---
hw/virtio/vhost-vdpa.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 84218ce078..dd2768634b 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -532,10 +532,6 @@ static void vhost_vdpa_svq_cleanup(struct vhost_dev *dev)
struct vhost_vdpa *v = dev->opaque;
size_t idx;
- if (!v->shadow_vqs) {
- return;
- }
-
for (idx = 0; idx < v->shadow_vqs->len; ++idx) {
vhost_svq_stop(g_ptr_array_index(v->shadow_vqs, idx));
}
--
2.31.1

From 84c203faa570b85eec006215768c83371c9f0399 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Thu, 15 Dec 2022 12:31:44 +0100
Subject: [PATCH 12/14] vdpa: always start CVQ in SVQ mode if possible
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 136: vDPA ASID support in Qemu
RH-Bugzilla: 2104412
RH-Acked-by: Laurent Vivier <lvivier@redhat.com>
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [12/13] 83f94b3e163ca38d08dbf7c111a4cfa7a44e3dc2 (eperezmartin/qemu-kvm)
Isolate the control virtqueue in its own group, allowing QEMU to
intercept control commands while letting the dataplane run fully
passthrough to the guest.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <20221215113144.322011-13-eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit c1a1008685af0327d9d03f03d43bdb77e7af5bea)
---
hw/virtio/vhost-vdpa.c | 3 +-
net/vhost-vdpa.c | 110 ++++++++++++++++++++++++++++++++++++++++-
2 files changed, 111 insertions(+), 2 deletions(-)
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index dc3498e995..72ff06673c 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -638,7 +638,8 @@ static int vhost_vdpa_set_backend_cap(struct vhost_dev *dev)
{
uint64_t features;
uint64_t f = 0x1ULL << VHOST_BACKEND_F_IOTLB_MSG_V2 |
- 0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH;
+ 0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH |
+ 0x1ULL << VHOST_BACKEND_F_IOTLB_ASID;
int r;
if (vhost_vdpa_call(dev, VHOST_GET_BACKEND_FEATURES, &features)) {
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index eea7a0df12..07d33dae26 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -101,6 +101,8 @@ static const uint64_t vdpa_svq_device_features =
BIT_ULL(VIRTIO_NET_F_RSC_EXT) |
BIT_ULL(VIRTIO_NET_F_STANDBY);
+#define VHOST_VDPA_NET_CVQ_ASID 1
+
VHostNetState *vhost_vdpa_get_vhost_net(NetClientState *nc)
{
VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
@@ -242,6 +244,40 @@ static NetClientInfo net_vhost_vdpa_info = {
.check_peer_type = vhost_vdpa_check_peer_type,
};
+static int64_t vhost_vdpa_get_vring_group(int device_fd, unsigned vq_index)
+{
+ struct vhost_vring_state state = {
+ .index = vq_index,
+ };
+ int r = ioctl(device_fd, VHOST_VDPA_GET_VRING_GROUP, &state);
+
+ if (unlikely(r < 0)) {
+ error_report("Cannot get VQ %u group: %s", vq_index,
+ g_strerror(errno));
+ return r;
+ }
+
+ return state.num;
+}
+
+static int vhost_vdpa_set_address_space_id(struct vhost_vdpa *v,
+ unsigned vq_group,
+ unsigned asid_num)
+{
+ struct vhost_vring_state asid = {
+ .index = vq_group,
+ .num = asid_num,
+ };
+ int r;
+
+ r = ioctl(v->device_fd, VHOST_VDPA_SET_GROUP_ASID, &asid);
+ if (unlikely(r < 0)) {
+ error_report("Can't set vq group %u asid %u, errno=%d (%s)",
+ asid.index, asid.num, errno, g_strerror(errno));
+ }
+ return r;
+}
+
static void vhost_vdpa_cvq_unmap_buf(struct vhost_vdpa *v, void *addr)
{
VhostIOVATree *tree = v->iova_tree;
@@ -316,11 +352,75 @@ dma_map_err:
static int vhost_vdpa_net_cvq_start(NetClientState *nc)
{
VhostVDPAState *s;
- int r;
+ struct vhost_vdpa *v;
+ uint64_t backend_features;
+ int64_t cvq_group;
+ int cvq_index, r;
assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
s = DO_UPCAST(VhostVDPAState, nc, nc);
+ v = &s->vhost_vdpa;
+
+ v->shadow_data = s->always_svq;
+ v->shadow_vqs_enabled = s->always_svq;
+ s->vhost_vdpa.address_space_id = VHOST_VDPA_GUEST_PA_ASID;
+
+ if (s->always_svq) {
+ /* SVQ is already configured for all virtqueues */
+ goto out;
+ }
+
+ /*
+ * If we early return in these cases SVQ will not be enabled. The migration
+ * will be blocked as long as vhost-vdpa backends will not offer _F_LOG.
+ *
+ * Calling VHOST_GET_BACKEND_FEATURES as they are not available in v->dev
+ * yet.
+ */
+ r = ioctl(v->device_fd, VHOST_GET_BACKEND_FEATURES, &backend_features);
+ if (unlikely(r < 0)) {
+ error_report("Cannot get vdpa backend_features: %s(%d)",
+ g_strerror(errno), errno);
+ return -1;
+ }
+ if (!(backend_features & VHOST_BACKEND_F_IOTLB_ASID) ||
+ !vhost_vdpa_net_valid_svq_features(v->dev->features, NULL)) {
+ return 0;
+ }
+
+ /*
+ * Check if all the virtqueues of the virtio device are in a different vq
+ * than the last vq. VQ group of last group passed in cvq_group.
+ */
+ cvq_index = v->dev->vq_index_end - 1;
+ cvq_group = vhost_vdpa_get_vring_group(v->device_fd, cvq_index);
+ if (unlikely(cvq_group < 0)) {
+ return cvq_group;
+ }
+ for (int i = 0; i < cvq_index; ++i) {
+ int64_t group = vhost_vdpa_get_vring_group(v->device_fd, i);
+
+ if (unlikely(group < 0)) {
+ return group;
+ }
+
+ if (group == cvq_group) {
+ return 0;
+ }
+ }
+
+ r = vhost_vdpa_set_address_space_id(v, cvq_group, VHOST_VDPA_NET_CVQ_ASID);
+ if (unlikely(r < 0)) {
+ return r;
+ }
+
+ v->iova_tree = vhost_iova_tree_new(v->iova_range.first,
+ v->iova_range.last);
+ v->shadow_vqs_enabled = true;
+ s->vhost_vdpa.address_space_id = VHOST_VDPA_NET_CVQ_ASID;
+
+out:
if (!s->vhost_vdpa.shadow_vqs_enabled) {
return 0;
}
@@ -349,6 +449,14 @@ static void vhost_vdpa_net_cvq_stop(NetClientState *nc)
if (s->vhost_vdpa.shadow_vqs_enabled) {
vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer);
vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->status);
+ if (!s->always_svq) {
+ /*
+ * If only the CVQ is shadowed we can delete this safely.
+ * If all the VQs are shadows this will be needed by the time the
+ * device is started again to register SVQ vrings and similar.
+ */
+ g_clear_pointer(&s->vhost_vdpa.iova_tree, vhost_iova_tree_delete);
+ }
}
}
--
2.31.1
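
The group/ASID probing that this patch adds is built on two vhost-vdpa
ioctls. Below is a minimal userspace sketch of that sequence, assuming
kernel headers recent enough to define VHOST_VDPA_GET_VRING_GROUP and
VHOST_VDPA_SET_GROUP_ASID; the device node /dev/vhost-vdpa-0 and the CVQ
index 2 are examples only. The device setup QEMU performs before these
calls is omitted.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

int main(void)
{
    /* Example device node and CVQ index (last vq of a single-queue-pair device). */
    int fd = open("/dev/vhost-vdpa-0", O_RDWR);
    struct vhost_vring_state state = { .index = 2 };
    struct vhost_vring_state asid;

    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Ask which vring group the CVQ index belongs to. */
    if (ioctl(fd, VHOST_VDPA_GET_VRING_GROUP, &state) < 0) {
        perror("VHOST_VDPA_GET_VRING_GROUP");
        return 1;
    }
    printf("CVQ belongs to vring group %u\n", state.num);

    /* Move that group to its own address space, as the patch does (ASID 1). */
    asid.index = state.num;
    asid.num = 1;
    if (ioctl(fd, VHOST_VDPA_SET_GROUP_ASID, &asid) < 0) {
        perror("VHOST_VDPA_SET_GROUP_ASID");
        return 1;
    }

    close(fd);
    return 0;
}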

From 46e80a9350a02fdb5689638df96bc7389e953cf8 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Tue, 17 Jan 2023 11:53:08 +0100
Subject: [PATCH 13/14] vdpa: fix VHOST_BACKEND_F_IOTLB_ASID flag check
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 136: vDPA ASID support in Qemu
RH-Bugzilla: 2104412
RH-Acked-by: Laurent Vivier <lvivier@redhat.com>
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [13/13] b7fb4b8e9ea26b6664a9179ed0a88376acf5115f (eperezmartin/qemu-kvm)
VHOST_BACKEND_F_IOTLB_ASID is the feature bit, not the bitmask. Since
the device under test also provided VHOST_BACKEND_F_IOTLB_MSG_V2 and
VHOST_BACKEND_F_IOTLB_BATCH, this went unnoticed.
Fixes: c1a1008685 ("vdpa: always start CVQ in SVQ mode if possible")
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Upstream status: git@github.com:jasowang/qemu.git
(cherry picked from commit 2bd492bca521ee8594f1d5db8dc9aac126fc4f85)
---
net/vhost-vdpa.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 07d33dae26..7d9c4ea09d 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -384,7 +384,7 @@ static int vhost_vdpa_net_cvq_start(NetClientState *nc)
g_strerror(errno), errno);
return -1;
}
- if (!(backend_features & VHOST_BACKEND_F_IOTLB_ASID) ||
+ if (!(backend_features & BIT_ULL(VHOST_BACKEND_F_IOTLB_ASID)) ||
!vhost_vdpa_net_valid_svq_features(v->dev->features, NULL)) {
return 0;
}
--
2.31.1
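
The bug class fixed here (using a feature bit number where a bitmask is
required) can be reproduced in isolation. A minimal sketch, assuming the
Linux vhost UAPI values VHOST_BACKEND_F_IOTLB_MSG_V2 = 0x1,
VHOST_BACKEND_F_IOTLB_BATCH = 0x2 and VHOST_BACKEND_F_IOTLB_ASID = 0x3
(all bit numbers, not masks):

#include <stdint.h>
#include <stdio.h>

/* Bit numbers as in the vhost UAPI (redefined here to keep the sketch standalone). */
#define VHOST_BACKEND_F_IOTLB_MSG_V2 0x1
#define VHOST_BACKEND_F_IOTLB_BATCH  0x2
#define VHOST_BACKEND_F_IOTLB_ASID   0x3
#define BIT_ULL(nr) (1ULL << (nr))

int main(void)
{
    /* A device exposing MSG_V2 and BATCH but *not* ASID. */
    uint64_t backend_features = BIT_ULL(VHOST_BACKEND_F_IOTLB_MSG_V2) |
                                BIT_ULL(VHOST_BACKEND_F_IOTLB_BATCH);

    /* Buggy check: the value 0x3 overlaps the MSG_V2|BATCH mask, so it wrongly passes. */
    printf("buggy check: %d\n", !!(backend_features & VHOST_BACKEND_F_IOTLB_ASID));

    /* Fixed check: tests bit 3, which this device does not offer. */
    printf("fixed check: %d\n",
           !!(backend_features & BIT_ULL(VHOST_BACKEND_F_IOTLB_ASID)));
    return 0;
}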

From 63a45add7c9f7bb2b7775ae4cb2d7df22f7f2033 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Thu, 15 Dec 2022 12:31:39 +0100
Subject: [PATCH 07/14] vdpa: move SVQ vring features check to net/
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 136: vDPA ASID support in Qemu
RH-Bugzilla: 2104412
RH-Acked-by: Laurent Vivier <lvivier@redhat.com>
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [7/13] a24189aea4dbde3ed4486f685d0d88aeee1a0ee7 (eperezmartin/qemu-kvm)
The next patches will start the control SVQ if possible. However, we no
longer know at qemu boot whether that will be possible.
Since the moved checks will already be evaluated in net/ to know whether
it is ok to shadow CVQ, move them there.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20221215113144.322011-8-eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 258a03941fd23108a322d09abc9c55341e09688d)
---
hw/virtio/vhost-vdpa.c | 32 ++------------------------------
net/vhost-vdpa.c | 3 ++-
2 files changed, 4 insertions(+), 31 deletions(-)
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 9e7cbf1776..84218ce078 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -389,29 +389,9 @@ static int vhost_vdpa_get_dev_features(struct vhost_dev *dev,
return ret;
}
-static int vhost_vdpa_init_svq(struct vhost_dev *hdev, struct vhost_vdpa *v,
- Error **errp)
+static void vhost_vdpa_init_svq(struct vhost_dev *hdev, struct vhost_vdpa *v)
{
g_autoptr(GPtrArray) shadow_vqs = NULL;
- uint64_t dev_features, svq_features;
- int r;
- bool ok;
-
- if (!v->shadow_vqs_enabled) {
- return 0;
- }
-
- r = vhost_vdpa_get_dev_features(hdev, &dev_features);
- if (r != 0) {
- error_setg_errno(errp, -r, "Can't get vdpa device features");
- return r;
- }
-
- svq_features = dev_features;
- ok = vhost_svq_valid_features(svq_features, errp);
- if (unlikely(!ok)) {
- return -1;
- }
shadow_vqs = g_ptr_array_new_full(hdev->nvqs, vhost_svq_free);
for (unsigned n = 0; n < hdev->nvqs; ++n) {
@@ -422,7 +402,6 @@ static int vhost_vdpa_init_svq(struct vhost_dev *hdev, struct vhost_vdpa *v,
}
v->shadow_vqs = g_steal_pointer(&shadow_vqs);
- return 0;
}
static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
@@ -447,10 +426,7 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
dev->opaque = opaque ;
v->listener = vhost_vdpa_memory_listener;
v->msg_type = VHOST_IOTLB_MSG_V2;
- ret = vhost_vdpa_init_svq(dev, v, errp);
- if (ret) {
- goto err;
- }
+ vhost_vdpa_init_svq(dev, v);
if (!vhost_vdpa_first_dev(dev)) {
return 0;
@@ -460,10 +436,6 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
VIRTIO_CONFIG_S_DRIVER);
return 0;
-
-err:
- ram_block_discard_disable(false);
- return ret;
}
static void vhost_vdpa_host_notifier_uninit(struct vhost_dev *dev,
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 8d3ed095d0..85aa0da39a 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -117,9 +117,10 @@ static bool vhost_vdpa_net_valid_svq_features(uint64_t features, Error **errp)
if (invalid_dev_features) {
error_setg(errp, "vdpa svq does not work with features 0x%" PRIx64,
invalid_dev_features);
+ return false;
}
- return !invalid_dev_features;
+ return vhost_svq_valid_features(features, errp);
}
static int vhost_vdpa_net_check_device_id(struct vhost_net *net)
--
2.31.1

From 760169d538a4e6ba61006f6796cd55af967a7f1e Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Thu, 15 Dec 2022 12:31:38 +0100
Subject: [PATCH 06/14] vdpa: request iova_range only once
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 136: vDPA ASID support in Qemu
RH-Bugzilla: 2104412
RH-Acked-by: Laurent Vivier <lvivier@redhat.com>
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [6/13] 2a8ae2f46ae88f01c5535038f38cb7895098b610 (eperezmartin/qemu-kvm)
Currently the iova range is requested once per queue pair in the case
of net. Reduce the number of ioctls by asking for it once at
initialization and reusing that value for each vhost_vdpa.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <20221215113144.322011-7-eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit a585fad26b2e6ccca156d9e65158ad1c5efd268d)
---
hw/virtio/vhost-vdpa.c | 15 ---------------
net/vhost-vdpa.c | 27 ++++++++++++++-------------
2 files changed, 14 insertions(+), 28 deletions(-)
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index e65603022f..9e7cbf1776 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -365,19 +365,6 @@ static int vhost_vdpa_add_status(struct vhost_dev *dev, uint8_t status)
return 0;
}
-static void vhost_vdpa_get_iova_range(struct vhost_vdpa *v)
-{
- int ret = vhost_vdpa_call(v->dev, VHOST_VDPA_GET_IOVA_RANGE,
- &v->iova_range);
- if (ret != 0) {
- v->iova_range.first = 0;
- v->iova_range.last = UINT64_MAX;
- }
-
- trace_vhost_vdpa_get_iova_range(v->dev, v->iova_range.first,
- v->iova_range.last);
-}
-
/*
* The use of this function is for requests that only need to be
* applied once. Typically such request occurs at the beginning
@@ -465,8 +452,6 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
goto err;
}
- vhost_vdpa_get_iova_range(v);
-
if (!vhost_vdpa_first_dev(dev)) {
return 0;
}
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 16a5ebe2dd..8d3ed095d0 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -549,14 +549,15 @@ static const VhostShadowVirtqueueOps vhost_vdpa_net_svq_ops = {
};
static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
- const char *device,
- const char *name,
- int vdpa_device_fd,
- int queue_pair_index,
- int nvqs,
- bool is_datapath,
- bool svq,
- VhostIOVATree *iova_tree)
+ const char *device,
+ const char *name,
+ int vdpa_device_fd,
+ int queue_pair_index,
+ int nvqs,
+ bool is_datapath,
+ bool svq,
+ struct vhost_vdpa_iova_range iova_range,
+ VhostIOVATree *iova_tree)
{
NetClientState *nc = NULL;
VhostVDPAState *s;
@@ -575,6 +576,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
s->vhost_vdpa.device_fd = vdpa_device_fd;
s->vhost_vdpa.index = queue_pair_index;
s->vhost_vdpa.shadow_vqs_enabled = svq;
+ s->vhost_vdpa.iova_range = iova_range;
s->vhost_vdpa.iova_tree = iova_tree;
if (!is_datapath) {
s->cvq_cmd_out_buffer = qemu_memalign(qemu_real_host_page_size(),
@@ -654,6 +656,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
int vdpa_device_fd;
g_autofree NetClientState **ncs = NULL;
g_autoptr(VhostIOVATree) iova_tree = NULL;
+ struct vhost_vdpa_iova_range iova_range;
NetClientState *nc;
int queue_pairs, r, i = 0, has_cvq = 0;
@@ -697,14 +700,12 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
return queue_pairs;
}
+ vhost_vdpa_get_iova_range(vdpa_device_fd, &iova_range);
if (opts->x_svq) {
- struct vhost_vdpa_iova_range iova_range;
-
if (!vhost_vdpa_net_valid_svq_features(features, errp)) {
goto err_svq;
}
- vhost_vdpa_get_iova_range(vdpa_device_fd, &iova_range);
iova_tree = vhost_iova_tree_new(iova_range.first, iova_range.last);
}
@@ -713,7 +714,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
for (i = 0; i < queue_pairs; i++) {
ncs[i] = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
vdpa_device_fd, i, 2, true, opts->x_svq,
- iova_tree);
+ iova_range, iova_tree);
if (!ncs[i])
goto err;
}
@@ -721,7 +722,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
if (has_cvq) {
nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
vdpa_device_fd, i, 1, false,
- opts->x_svq, iova_tree);
+ opts->x_svq, iova_range, iova_tree);
if (!nc)
goto err;
}
--
2.31.1

From 28163d7d61b6b0b8312b78d57dabc8f44bf39c46 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Thu, 15 Dec 2022 12:31:42 +0100
Subject: [PATCH 10/14] vdpa: store x-svq parameter in VhostVDPAState
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 136: vDPA ASID support in Qemu
RH-Bugzilla: 2104412
RH-Acked-by: Laurent Vivier <lvivier@redhat.com>
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [10/13] 53f3b2698b4a5caca434f55e4300103a78778548 (eperezmartin/qemu-kvm)
CVQ can be shadowed two ways:
- Device has x-svq=on parameter (current way)
- The device can isolate CVQ in its own vq group
QEMU needs to check for the second condition dynamically, because the
CVQ index is not known before the driver acks the features. Since this
is dynamic, the CVQ isolation could vary with different conditions,
making it possible to go from "not isolated group" to "isolated".
Save the cmdline parameter in an extra field so we never disable CVQ SVQ
in case the device was started with the x-svq cmdline option.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20221215113144.322011-11-eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 7f211a28fd5482f76583988beecd8ee61588d45e)
---
net/vhost-vdpa.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index c2f319eb88..1757f1d028 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -38,6 +38,8 @@ typedef struct VhostVDPAState {
void *cvq_cmd_out_buffer;
virtio_net_ctrl_ack *status;
+ /* The device always have SVQ enabled */
+ bool always_svq;
bool started;
} VhostVDPAState;
@@ -576,6 +578,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
s->vhost_vdpa.device_fd = vdpa_device_fd;
s->vhost_vdpa.index = queue_pair_index;
+ s->always_svq = svq;
s->vhost_vdpa.shadow_vqs_enabled = svq;
s->vhost_vdpa.iova_range = iova_range;
s->vhost_vdpa.iova_tree = iova_tree;
--
2.31.1

From cb974f2f9a0c5b9520b6ac80bd1d1e4a6b12bbdc Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Thu, 15 Dec 2022 12:31:33 +0100
Subject: [PATCH 01/14] vdpa: use v->shadow_vqs_enabled in
vhost_vdpa_svqs_start & stop
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 136: vDPA ASID support in Qemu
RH-Bugzilla: 2104412
RH-Acked-by: Laurent Vivier <lvivier@redhat.com>
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [1/13] f0db50a95f87dd011418617be7b80aa6813a1146 (eperezmartin/qemu-kvm)
This function used to rely on v->shadow_vqs != NULL to know whether it
must start SVQ or not.
This is not going to be valid anymore, as qemu is going to allocate the
SVQ array unconditionally (but it will only start them conditionally).
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20221215113144.322011-2-eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 712c1a3171cf62d501dac5af58f77d5fea70350d)
---
hw/virtio/vhost-vdpa.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index c5be2645b0..44e6a9b7b3 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -1036,7 +1036,7 @@ static bool vhost_vdpa_svqs_start(struct vhost_dev *dev)
Error *err = NULL;
unsigned i;
- if (!v->shadow_vqs) {
+ if (!v->shadow_vqs_enabled) {
return true;
}
@@ -1089,7 +1089,7 @@ static void vhost_vdpa_svqs_stop(struct vhost_dev *dev)
{
struct vhost_vdpa *v = dev->opaque;
- if (!v->shadow_vqs) {
+ if (!v->shadow_vqs_enabled) {
return;
}
--
2.31.1

From bffccbd59a2e2c641810cd7362c7b5ecf5989ed8 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Thu, 15 Dec 2022 12:31:35 +0100
Subject: [PATCH 03/14] vhost: allocate SVQ device file descriptors at device
start
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 136: vDPA ASID support in Qemu
RH-Bugzilla: 2104412
RH-Acked-by: Laurent Vivier <lvivier@redhat.com>
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [3/13] bab2d43f0fc0d13a4917e706244b37e1a431b082 (eperezmartin/qemu-kvm)
The next patches will start the control SVQ if possible. However, we no
longer know at qemu boot whether that will be possible.
Delay the creation of the device file descriptors until we know, at
device start, whether they are needed. This avoids creating them if the
device does not support SVQ.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20221215113144.322011-4-eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 3cfb4d069cd2977b707fb519c455d7d416e1f4b0)
---
hw/virtio/vhost-shadow-virtqueue.c | 31 ++------------------------
hw/virtio/vhost-vdpa.c | 35 ++++++++++++++++++++++++------
2 files changed, 30 insertions(+), 36 deletions(-)
diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index 264ddc166d..3b05bab44d 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -715,43 +715,18 @@ void vhost_svq_stop(VhostShadowVirtqueue *svq)
* @iova_tree: Tree to perform descriptors translations
* @ops: SVQ owner callbacks
* @ops_opaque: ops opaque pointer
- *
- * Returns the new virtqueue or NULL.
- *
- * In case of error, reason is reported through error_report.
*/
VhostShadowVirtqueue *vhost_svq_new(VhostIOVATree *iova_tree,
const VhostShadowVirtqueueOps *ops,
void *ops_opaque)
{
- g_autofree VhostShadowVirtqueue *svq = g_new0(VhostShadowVirtqueue, 1);
- int r;
-
- r = event_notifier_init(&svq->hdev_kick, 0);
- if (r != 0) {
- error_report("Couldn't create kick event notifier: %s (%d)",
- g_strerror(errno), errno);
- goto err_init_hdev_kick;
- }
-
- r = event_notifier_init(&svq->hdev_call, 0);
- if (r != 0) {
- error_report("Couldn't create call event notifier: %s (%d)",
- g_strerror(errno), errno);
- goto err_init_hdev_call;
- }
+ VhostShadowVirtqueue *svq = g_new0(VhostShadowVirtqueue, 1);
event_notifier_init_fd(&svq->svq_kick, VHOST_FILE_UNBIND);
svq->iova_tree = iova_tree;
svq->ops = ops;
svq->ops_opaque = ops_opaque;
- return g_steal_pointer(&svq);
-
-err_init_hdev_call:
- event_notifier_cleanup(&svq->hdev_kick);
-
-err_init_hdev_kick:
- return NULL;
+ return svq;
}
/**
@@ -763,7 +738,5 @@ void vhost_svq_free(gpointer pvq)
{
VhostShadowVirtqueue *vq = pvq;
vhost_svq_stop(vq);
- event_notifier_cleanup(&vq->hdev_kick);
- event_notifier_cleanup(&vq->hdev_call);
g_free(vq);
}
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 44e6a9b7b3..530d2ca362 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -428,15 +428,11 @@ static int vhost_vdpa_init_svq(struct vhost_dev *hdev, struct vhost_vdpa *v,
shadow_vqs = g_ptr_array_new_full(hdev->nvqs, vhost_svq_free);
for (unsigned n = 0; n < hdev->nvqs; ++n) {
- g_autoptr(VhostShadowVirtqueue) svq;
+ VhostShadowVirtqueue *svq;
svq = vhost_svq_new(v->iova_tree, v->shadow_vq_ops,
v->shadow_vq_ops_opaque);
- if (unlikely(!svq)) {
- error_setg(errp, "Cannot create svq %u", n);
- return -1;
- }
- g_ptr_array_add(shadow_vqs, g_steal_pointer(&svq));
+ g_ptr_array_add(shadow_vqs, svq);
}
v->shadow_vqs = g_steal_pointer(&shadow_vqs);
@@ -871,11 +867,23 @@ static int vhost_vdpa_svq_set_fds(struct vhost_dev *dev,
const EventNotifier *event_notifier = &svq->hdev_kick;
int r;
+ r = event_notifier_init(&svq->hdev_kick, 0);
+ if (r != 0) {
+ error_setg_errno(errp, -r, "Couldn't create kick event notifier");
+ goto err_init_hdev_kick;
+ }
+
+ r = event_notifier_init(&svq->hdev_call, 0);
+ if (r != 0) {
+ error_setg_errno(errp, -r, "Couldn't create call event notifier");
+ goto err_init_hdev_call;
+ }
+
file.fd = event_notifier_get_fd(event_notifier);
r = vhost_vdpa_set_vring_dev_kick(dev, &file);
if (unlikely(r != 0)) {
error_setg_errno(errp, -r, "Can't set device kick fd");
- return r;
+ goto err_init_set_dev_fd;
}
event_notifier = &svq->hdev_call;
@@ -883,8 +891,18 @@ static int vhost_vdpa_svq_set_fds(struct vhost_dev *dev,
r = vhost_vdpa_set_vring_dev_call(dev, &file);
if (unlikely(r != 0)) {
error_setg_errno(errp, -r, "Can't set device call fd");
+ goto err_init_set_dev_fd;
}
+ return 0;
+
+err_init_set_dev_fd:
+ event_notifier_set_handler(&svq->hdev_call, NULL);
+
+err_init_hdev_call:
+ event_notifier_cleanup(&svq->hdev_kick);
+
+err_init_hdev_kick:
return r;
}
@@ -1096,6 +1114,9 @@ static void vhost_vdpa_svqs_stop(struct vhost_dev *dev)
for (unsigned i = 0; i < v->shadow_vqs->len; ++i) {
VhostShadowVirtqueue *svq = g_ptr_array_index(v->shadow_vqs, i);
vhost_vdpa_svq_unmap_rings(dev, svq);
+
+ event_notifier_cleanup(&svq->hdev_kick);
+ event_notifier_cleanup(&svq->hdev_call);
}
}
--
2.31.1

From 6584478deca49d0ea20add588e4fdb51cdc26f1d Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Thu, 15 Dec 2022 12:31:36 +0100
Subject: [PATCH 04/14] vhost: move iova_tree set to vhost_svq_start
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 136: vDPA ASID support in Qemu
RH-Bugzilla: 2104412
RH-Acked-by: Laurent Vivier <lvivier@redhat.com>
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [4/13] 200d8e9b58e258a6e301430debc73ef7d962b732 (eperezmartin/qemu-kvm)
Since we don't know at qemu initialization whether we will use SVQ,
let's allocate the iova_tree only if needed. To do so, accept it at SVQ
start, not at initialization.
This avoids creating it if the device does not support SVQ.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20221215113144.322011-5-eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 5fde952bbdd521c10fc018ee04f922a7dca5f663)
---
hw/virtio/vhost-shadow-virtqueue.c | 9 ++++-----
hw/virtio/vhost-shadow-virtqueue.h | 5 ++---
hw/virtio/vhost-vdpa.c | 5 ++---
3 files changed, 8 insertions(+), 11 deletions(-)
diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index 3b05bab44d..4307296358 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -642,9 +642,10 @@ void vhost_svq_set_svq_kick_fd(VhostShadowVirtqueue *svq, int svq_kick_fd)
* @svq: Shadow Virtqueue
* @vdev: VirtIO device
* @vq: Virtqueue to shadow
+ * @iova_tree: Tree to perform descriptors translations
*/
void vhost_svq_start(VhostShadowVirtqueue *svq, VirtIODevice *vdev,
- VirtQueue *vq)
+ VirtQueue *vq, VhostIOVATree *iova_tree)
{
size_t desc_size, driver_size, device_size;
@@ -655,6 +656,7 @@ void vhost_svq_start(VhostShadowVirtqueue *svq, VirtIODevice *vdev,
svq->last_used_idx = 0;
svq->vdev = vdev;
svq->vq = vq;
+ svq->iova_tree = iova_tree;
svq->vring.num = virtio_queue_get_num(vdev, virtio_get_queue_index(vq));
driver_size = vhost_svq_driver_area_size(svq);
@@ -712,18 +714,15 @@ void vhost_svq_stop(VhostShadowVirtqueue *svq)
* Creates vhost shadow virtqueue, and instructs the vhost device to use the
* shadow methods and file descriptors.
*
- * @iova_tree: Tree to perform descriptors translations
* @ops: SVQ owner callbacks
* @ops_opaque: ops opaque pointer
*/
-VhostShadowVirtqueue *vhost_svq_new(VhostIOVATree *iova_tree,
- const VhostShadowVirtqueueOps *ops,
+VhostShadowVirtqueue *vhost_svq_new(const VhostShadowVirtqueueOps *ops,
void *ops_opaque)
{
VhostShadowVirtqueue *svq = g_new0(VhostShadowVirtqueue, 1);
event_notifier_init_fd(&svq->svq_kick, VHOST_FILE_UNBIND);
- svq->iova_tree = iova_tree;
svq->ops = ops;
svq->ops_opaque = ops_opaque;
return svq;
diff --git a/hw/virtio/vhost-shadow-virtqueue.h b/hw/virtio/vhost-shadow-virtqueue.h
index d04c34a589..926a4897b1 100644
--- a/hw/virtio/vhost-shadow-virtqueue.h
+++ b/hw/virtio/vhost-shadow-virtqueue.h
@@ -126,11 +126,10 @@ size_t vhost_svq_driver_area_size(const VhostShadowVirtqueue *svq);
size_t vhost_svq_device_area_size(const VhostShadowVirtqueue *svq);
void vhost_svq_start(VhostShadowVirtqueue *svq, VirtIODevice *vdev,
- VirtQueue *vq);
+ VirtQueue *vq, VhostIOVATree *iova_tree);
void vhost_svq_stop(VhostShadowVirtqueue *svq);
-VhostShadowVirtqueue *vhost_svq_new(VhostIOVATree *iova_tree,
- const VhostShadowVirtqueueOps *ops,
+VhostShadowVirtqueue *vhost_svq_new(const VhostShadowVirtqueueOps *ops,
void *ops_opaque);
void vhost_svq_free(gpointer vq);
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 530d2ca362..e65603022f 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -430,8 +430,7 @@ static int vhost_vdpa_init_svq(struct vhost_dev *hdev, struct vhost_vdpa *v,
for (unsigned n = 0; n < hdev->nvqs; ++n) {
VhostShadowVirtqueue *svq;
- svq = vhost_svq_new(v->iova_tree, v->shadow_vq_ops,
- v->shadow_vq_ops_opaque);
+ svq = vhost_svq_new(v->shadow_vq_ops, v->shadow_vq_ops_opaque);
g_ptr_array_add(shadow_vqs, svq);
}
@@ -1070,7 +1069,7 @@ static bool vhost_vdpa_svqs_start(struct vhost_dev *dev)
goto err;
}
- vhost_svq_start(svq, dev->vdev, vq);
+ vhost_svq_start(svq, dev->vdev, vq, v->iova_tree);
ok = vhost_vdpa_svq_map_rings(dev, svq, &addr, &err);
if (unlikely(!ok)) {
goto err_map;
--
2.31.1

From 2906f8df3c5e915a3dc05a705b87990211f114b5 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Thu, 15 Dec 2022 12:31:34 +0100
Subject: [PATCH 02/14] vhost: set SVQ device call handler at SVQ start
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 136: vDPA ASID support in Qemu
RH-Bugzilla: 2104412
RH-Acked-by: Laurent Vivier <lvivier@redhat.com>
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [2/13] ad90a6cc5c71b70d705904433d5a986e8fedb924 (eperezmartin/qemu-kvm)
By the end of this series, CVQ is shadowed as long as the features
support it.
Since we don't know at qemu startup whether this is supported, move the
event notifier handler setup to SVQ start instead of qemu startup. This
avoids setting the handlers up if the device does not support SVQ.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20221215113144.322011-3-eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 20e7412bfd63c68f1798fbdb799aedb7e05fee88)
---
hw/virtio/vhost-shadow-virtqueue.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index 5bd14cad96..264ddc166d 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -648,6 +648,7 @@ void vhost_svq_start(VhostShadowVirtqueue *svq, VirtIODevice *vdev,
{
size_t desc_size, driver_size, device_size;
+ event_notifier_set_handler(&svq->hdev_call, vhost_svq_handle_call);
svq->next_guest_avail_elem = NULL;
svq->shadow_avail_idx = 0;
svq->shadow_used_idx = 0;
@@ -704,6 +705,7 @@ void vhost_svq_stop(VhostShadowVirtqueue *svq)
g_free(svq->desc_state);
qemu_vfree(svq->vring.desc);
qemu_vfree(svq->vring.used);
+ event_notifier_set_handler(&svq->hdev_call, NULL);
}
/**
@@ -740,7 +742,6 @@ VhostShadowVirtqueue *vhost_svq_new(VhostIOVATree *iova_tree,
}
event_notifier_init_fd(&svq->svq_kick, VHOST_FILE_UNBIND);
- event_notifier_set_handler(&svq->hdev_call, vhost_svq_handle_call);
svq->iova_tree = iova_tree;
svq->ops = ops;
svq->ops_opaque = ops_opaque;
@@ -763,7 +764,6 @@ void vhost_svq_free(gpointer pvq)
VhostShadowVirtqueue *vq = pvq;
vhost_svq_stop(vq);
event_notifier_cleanup(&vq->hdev_kick);
- event_notifier_set_handler(&vq->hdev_call, NULL);
event_notifier_cleanup(&vq->hdev_call);
g_free(vq);
}
--
2.31.1

@@ -148,7 +148,7 @@ Obsoletes: %{name}-block-ssh <= %{epoch}:%{version} \
Summary: QEMU is a machine emulator and virtualizer
Name: qemu-kvm
Version: 7.2.0
Release: 6%{?rcrel}%{?dist}%{?cc_suffix}
Release: 7%{?rcrel}%{?dist}%{?cc_suffix}
# Epoch because we pushed a qemu-1.0 package. AIUI this can't ever be dropped
# Epoch 15 used for RHEL 8
# Epoch 17 used for RHEL 9 (due to release versioning offset in RHEL 8.5)
@@ -290,6 +290,32 @@ Patch70: kvm-s390x-pci-shrink-DMA-aperture-to-be-bound-by-vfio-DM.patch
Patch71: kvm-s390x-pci-reset-ISM-passthrough-devices-on-shutdown-.patch
# For bz#2149191 - [RFE][guest-agent] - USB bus type support
Patch72: kvm-qga-linux-add-usb-support-to-guest-get-fsinfo.patch
# For bz#2104412 - vDPA ASID support in Qemu
Patch73: kvm-vdpa-use-v-shadow_vqs_enabled-in-vhost_vdpa_svqs_sta.patch
# For bz#2104412 - vDPA ASID support in Qemu
Patch74: kvm-vhost-set-SVQ-device-call-handler-at-SVQ-start.patch
# For bz#2104412 - vDPA ASID support in Qemu
Patch75: kvm-vhost-allocate-SVQ-device-file-descriptors-at-device.patch
# For bz#2104412 - vDPA ASID support in Qemu
Patch76: kvm-vhost-move-iova_tree-set-to-vhost_svq_start.patch
# For bz#2104412 - vDPA ASID support in Qemu
Patch77: kvm-vdpa-add-vhost_vdpa_net_valid_svq_features.patch
# For bz#2104412 - vDPA ASID support in Qemu
Patch78: kvm-vdpa-request-iova_range-only-once.patch
# For bz#2104412 - vDPA ASID support in Qemu
Patch79: kvm-vdpa-move-SVQ-vring-features-check-to-net.patch
# For bz#2104412 - vDPA ASID support in Qemu
Patch80: kvm-vdpa-allocate-SVQ-array-unconditionally.patch
# For bz#2104412 - vDPA ASID support in Qemu
Patch81: kvm-vdpa-add-asid-parameter-to-vhost_vdpa_dma_map-unmap.patch
# For bz#2104412 - vDPA ASID support in Qemu
Patch82: kvm-vdpa-store-x-svq-parameter-in-VhostVDPAState.patch
# For bz#2104412 - vDPA ASID support in Qemu
Patch83: kvm-vdpa-add-shadow_data-to-vhost_vdpa.patch
# For bz#2104412 - vDPA ASID support in Qemu
Patch84: kvm-vdpa-always-start-CVQ-in-SVQ-mode-if-possible.patch
# For bz#2104412 - vDPA ASID support in Qemu
Patch85: kvm-vdpa-fix-VHOST_BACKEND_F_IOTLB_ASID-flag-check.patch
%if %{have_clang}
BuildRequires: clang
@@ -657,6 +683,7 @@ ulimit -n 10240
--disable-libssh \\\
--disable-libudev \\\
--disable-libusb \\\
--disable-libvduse \\\
--disable-linux-aio \\\
--disable-linux-io-uring \\\
--disable-linux-user \\\
@@ -712,6 +739,7 @@ ulimit -n 10240
--disable-user \\\
--disable-vde \\\
--disable-vdi \\\
--disable-vduse-blk-export \\\
--disable-vhost-crypto \\\
--disable-vhost-kernel \\\
--disable-vhost-net \\\
@@ -1318,6 +1346,26 @@ useradd -r -u 107 -g qemu -G kvm -d / -s /sbin/nologin \
%endif
%changelog
* Mon Feb 06 2023 Miroslav Rezanina <mrezanin@redhat.com> - 7.2.0-7
- kvm-vdpa-use-v-shadow_vqs_enabled-in-vhost_vdpa_svqs_sta.patch [bz#2104412]
- kvm-vhost-set-SVQ-device-call-handler-at-SVQ-start.patch [bz#2104412]
- kvm-vhost-allocate-SVQ-device-file-descriptors-at-device.patch [bz#2104412]
- kvm-vhost-move-iova_tree-set-to-vhost_svq_start.patch [bz#2104412]
- kvm-vdpa-add-vhost_vdpa_net_valid_svq_features.patch [bz#2104412]
- kvm-vdpa-request-iova_range-only-once.patch [bz#2104412]
- kvm-vdpa-move-SVQ-vring-features-check-to-net.patch [bz#2104412]
- kvm-vdpa-allocate-SVQ-array-unconditionally.patch [bz#2104412]
- kvm-vdpa-add-asid-parameter-to-vhost_vdpa_dma_map-unmap.patch [bz#2104412]
- kvm-vdpa-store-x-svq-parameter-in-VhostVDPAState.patch [bz#2104412]
- kvm-vdpa-add-shadow_data-to-vhost_vdpa.patch [bz#2104412]
- kvm-vdpa-always-start-CVQ-in-SVQ-mode-if-possible.patch [bz#2104412]
- kvm-vdpa-fix-VHOST_BACKEND_F_IOTLB_ASID-flag-check.patch [bz#2104412]
- kvm-spec-Disable-VDUSE.patch [bz#2128222]
- Resolves: bz#2104412
(vDPA ASID support in Qemu)
- Resolves: bz#2128222
(VDUSE block export should be disabled in builds for now)
* Mon Jan 30 2023 Miroslav Rezanina <mrezanin@redhat.com> - 7.2.0-6
- kvm-virtio_net-Modify-virtio_net_get_config-to-early-ret.patch [bz#2141088]
- kvm-virtio_net-copy-VIRTIO_NET_S_ANNOUNCE-if-device-mode.patch [bz#2141088]