qemu-kvm/kvm-vhost-Always-store-new-kick-fd-on-vhost_svq_set_svq_.patch
* Fri Aug 26 2022 Miroslav Rezanina <mrezanin@redhat.com> - 7.0.0-12
- kvm-scsi-generic-Fix-emulated-block-limits-VPD-page.patch [bz#2120275]
- kvm-vhost-Get-vring-base-from-vq-not-svq.patch [bz#2114060]
- kvm-vdpa-Skip-the-maps-not-in-the-iova-tree.patch [bz#2114060]
- kvm-vdpa-do-not-save-failed-dma-maps-in-SVQ-iova-tree.patch [bz#2114060]
- kvm-util-Return-void-on-iova_tree_remove.patch [bz#2114060]
- kvm-util-accept-iova_tree_remove_parameter-by-value.patch [bz#2114060]
- kvm-vdpa-Remove-SVQ-vring-from-iova_tree-at-shutdown.patch [bz#2114060]
- kvm-vdpa-Make-SVQ-vring-unmapping-return-void.patch [bz#2114060]
- kvm-vhost-Always-store-new-kick-fd-on-vhost_svq_set_svq_.patch [bz#2114060]
- kvm-vdpa-Use-ring-hwaddr-at-vhost_vdpa_svq_unmap_ring.patch [bz#2114060]
- kvm-vhost-stop-transfer-elem-ownership-in-vhost_handle_g.patch [bz#2114060]
- kvm-vhost-use-SVQ-element-ndescs-instead-of-opaque-data-.patch [bz#2114060]
- kvm-vhost-Delete-useless-read-memory-barrier.patch [bz#2114060]
- kvm-vhost-Do-not-depend-on-NULL-VirtQueueElement-on-vhos.patch [bz#2114060]
- kvm-vhost_net-Add-NetClientInfo-start-callback.patch [bz#2114060]
- kvm-vhost_net-Add-NetClientInfo-stop-callback.patch [bz#2114060]
- kvm-vdpa-add-net_vhost_vdpa_cvq_info-NetClientInfo.patch [bz#2114060]
- kvm-vdpa-Move-command-buffers-map-to-start-of-net-device.patch [bz#2114060]
- kvm-vdpa-extract-vhost_vdpa_net_cvq_add-from-vhost_vdpa_.patch [bz#2114060]
- kvm-vhost_net-add-NetClientState-load-callback.patch [bz#2114060]
- kvm-vdpa-Add-virtio-net-mac-address-via-CVQ-at-start.patch [bz#2114060]
- kvm-vdpa-Delete-CVQ-migration-blocker.patch [bz#2114060]
- kvm-virtio-scsi-fix-race-in-virtio_scsi_dataplane_start.patch [bz#2099541]
- Resolves: bz#2120275
  (Wrong max_sectors_kb and Maximum transfer length on the pass-through device [rhel-9.1])
- Resolves: bz#2114060
  (vDPA state restore support through control virtqueue in Qemu)
- Resolves: bz#2099541
  (qemu coredump with error Assertion `qemu_mutex_iothread_locked()' failed when repeatedly hotplug/unplug disks in pause status)

From 6cde15c70c86819033337771eb522e94e3ea9e34 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>
Date: Tue, 23 Aug 2022 20:20:07 +0200
Subject: [PATCH 09/23] vhost: Always store new kick fd on
vhost_svq_set_svq_kick_fd
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

RH-Author: Eugenio Pérez <eperezma@redhat.com>
RH-MergeRequest: 116: vdpa: Restore device state on destination
RH-Bugzilla: 2114060
RH-Acked-by: Cindy Lu <lulu@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [8/21] a09b8851c39d7cea67414560f6d322e988b9d59a (eperezmartin/qemu-kvm)
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2114060
Upstream status: git@github.com:jasowang/qemu.git net-next

Without storing the new kick fd unconditionally, a file descriptor can be
unbound twice if vhost_svq_set_svq_kick_fd is called twice. Since that file
descriptor comes from vhost and not from SVQ, it could be something other
than the guest's vhost notifier.

Likewise, the same can happen if a guest starts and stops the device
multiple times.

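For reference, this is a minimal sketch of the vhost_svq_set_svq_kick_fd()
flow that results from the hunk below. It is reconstructed from the diff
context rather than copied verbatim from upstream; the event_notifier_*
helpers and VHOST_FILE_UNBIND are QEMU's existing APIs, and the real
function's assertions and extra comments are omitted:

    void vhost_svq_set_svq_kick_fd(VhostShadowVirtqueue *svq, int svq_kick_fd)
    {
        EventNotifier *svq_kick = &svq->svq_kick;
        bool poll_stop = VHOST_FILE_UNBIND != event_notifier_get_fd(svq_kick);
        bool poll_start = svq_kick_fd != VHOST_FILE_UNBIND;

        if (poll_stop) {
            /* Stop polling the descriptor that is about to be replaced. */
            event_notifier_set_handler(svq_kick, NULL);
        }

        /*
         * Store the new fd unconditionally, so a later call never finds a
         * stale descriptor and unbinds it a second time.
         */
        event_notifier_init_fd(svq_kick, svq_kick_fd);

        if (poll_start) {
            /* Catch kicks that may have arrived during the switch. */
            event_notifier_set(svq_kick);
            event_notifier_set_handler(svq_kick,
                                       vhost_handle_guest_kick_notifier);
        }
    }

With the fd always stored, vhost_svq_stop() can simply call
vhost_svq_set_svq_kick_fd(svq, VHOST_FILE_UNBIND), as the second hunk does.
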
Reported-by: Lei Yang <leiyang@redhat.com>
Fixes: dff4426fa6 ("vhost: Add Shadow VirtQueue kick forwarding capabilities")
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit 6867f29c1425add7e0e8d1d8d58cc0ffbb8df0e4)
---
 hw/virtio/vhost-shadow-virtqueue.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index e53aac45f6..f420311b89 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -602,13 +602,13 @@ void vhost_svq_set_svq_kick_fd(VhostShadowVirtqueue *svq, int svq_kick_fd)
         event_notifier_set_handler(svq_kick, NULL);
     }
 
+    event_notifier_init_fd(svq_kick, svq_kick_fd);
     /*
      * event_notifier_set_handler already checks for guest's notifications if
      * they arrive at the new file descriptor in the switch, so there is no
      * need to explicitly check for them.
      */
     if (poll_start) {
-        event_notifier_init_fd(svq_kick, svq_kick_fd);
         event_notifier_set(svq_kick);
         event_notifier_set_handler(svq_kick, vhost_handle_guest_kick_notifier);
     }
@@ -655,7 +655,7 @@ void vhost_svq_start(VhostShadowVirtqueue *svq, VirtIODevice *vdev,
  */
 void vhost_svq_stop(VhostShadowVirtqueue *svq)
 {
-    event_notifier_set_handler(&svq->svq_kick, NULL);
+    vhost_svq_set_svq_kick_fd(svq, VHOST_FILE_UNBIND);
     g_autofree VirtQueueElement *next_avail_elem = NULL;
 
     if (!svq->vq) {
--
2.31.1