qemu-kvm/kvm-util-iov-Make-qiov_slice-public.patch
Miroslav Rezanina c6c2b61b24 Synchronization from CentOS 9 Stream Release 10 of 8.0.0:
--------------------------------------------------------
* Mon Jul 31 2023 Miroslav Rezanina <mrezanin@redhat.com> - 8.0.0-10

- kvm-util-iov-Make-qiov_slice-public.patch [bz#2174676]
- kvm-block-Collapse-padded-I-O-vecs-exceeding-IOV_MAX.patch [bz#2174676]
- kvm-util-iov-Remove-qemu_iovec_init_extended.patch [bz#2174676]
- kvm-iotests-iov-padding-New-test.patch [bz#2174676]
- kvm-block-Fix-pad_request-s-request-restriction.patch [bz#2174676]
- kvm-vdpa-do-not-block-migration-if-device-has-cvq-and-x-.patch [RHEL-573]
- kvm-virtio-net-correctly-report-maximum-tx_queue_size-va.patch [bz#2040509]
- kvm-hw-pci-Disable-PCI_ERR_UNCOR_MASK-reg-for-machine-ty.patch [bz#2223691]
- kvm-vhost-vdpa-mute-unaligned-memory-error-report.patch [bz#2141965]
- Resolves: bz#2174676
  (Guest hit EXT4-fs error on host 4K disk when repeatedly hot-plug/unplug running IO disk [RHEL9])
- Resolves: RHEL-573
  ([mlx vhost_vdpa][rhel 9.3]live migration fail with "net vdpa cannot migrate with CVQ feature")
- Resolves: bz#2040509
  ([RFE]: Add support for changing "tx_queue_size" to a settable value)
- Resolves: bz#2223691
  ([machine type 9.2] Failed to migrate VM from RHEL 9.3 to RHEL 9.2)
- Resolves: bz#2141965
  ([TPM][vhost-vdpa][rhel9.2]Boot a guest with "vhost-vdpa + TPM emulator", qemu output: qemu-kvm: vhost_vdpa_listener_region_add received unaligned region)

From 64652225695c23855cfb1252cea2b55c24da2260 Mon Sep 17 00:00:00 2001
From: Hanna Czenczek <hreitz@redhat.com>
Date: Tue, 11 Apr 2023 19:34:15 +0200
Subject: [PATCH 1/9] util/iov: Make qiov_slice() public

RH-Author: Hanna Czenczek <hreitz@redhat.com>
RH-MergeRequest: 189: block: Split padded I/O vectors exceeding IOV_MAX
RH-Bugzilla: 2174676
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [1/5] 9c3cd661f7139ce124ee4f4d5fcbeaf3dbb9c45c (hreitz/qemu-kvm-c-9-s)

We want to inline qemu_iovec_init_extended() in block/io.c for padding
requests, and having access to qiov_slice() is useful for this. As a
public function, it is renamed to qemu_iovec_slice().

(We will need to count the number of I/O vector elements of a slice
there, and then later process this slice. Without qiov_slice(), we
would need to call qemu_iovec_subvec_niov(), and all further
IOV-processing functions may need to skip prefixing elements to
accommodate a qiov_offset. Because qemu_iovec_subvec_niov()
internally calls qiov_slice(), we can just have the block/io.c code call
qiov_slice() itself, thus get the number of elements, and also create an
iovec array with the superfluous prefixing elements stripped, so the
following processing functions no longer need to skip them.)

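As an illustration only (a sketch, not part of this patch; the helper
name slice_for_range() is made up), a block/io.c caller could then
obtain both the element count and the stripped iovec array in a single
call, instead of qemu_iovec_subvec_niov() plus later skipping of
prefixing elements:

    #include "qemu/osdep.h"
    #include "qemu/iov.h"

    /*
     * Hypothetical helper: fetch the iovec elements that back
     * [offset, offset + len) of @qiov, returning their count in
     * *niov, ready for further processing.
     */
    static struct iovec *slice_for_range(QEMUIOVector *qiov,
                                         size_t offset, size_t len,
                                         int *niov)
    {
        size_t head, tail;
        struct iovec *iov;

        iov = qemu_iovec_slice(qiov, offset, len, &head, &tail, niov);

        /*
         * iov[0 .. *niov - 1] covers the range: the first @head
         * bytes of iov[0] precede it, and the last @tail bytes of
         * iov[*niov - 1] lie past it, so no prefixing elements have
         * to be skipped.
         */
        return iov;
    }
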
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
Message-Id: <20230411173418.19549-2-hreitz@redhat.com>
(cherry picked from commit 3d06cea8256d54a6b0238934c31012f7f17100f5)
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
---
 include/qemu/iov.h |  3 +++
 util/iov.c         | 14 +++++++-------
 2 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/include/qemu/iov.h b/include/qemu/iov.h
index 9330746680..46fadfb27a 100644
--- a/include/qemu/iov.h
+++ b/include/qemu/iov.h
@@ -229,6 +229,9 @@ int qemu_iovec_init_extended(
         void *tail_buf, size_t tail_len);
 void qemu_iovec_init_slice(QEMUIOVector *qiov, QEMUIOVector *source,
                            size_t offset, size_t len);
+struct iovec *qemu_iovec_slice(QEMUIOVector *qiov,
+                               size_t offset, size_t len,
+                               size_t *head, size_t *tail, int *niov);
 int qemu_iovec_subvec_niov(QEMUIOVector *qiov, size_t offset, size_t len);
 void qemu_iovec_add(QEMUIOVector *qiov, void *base, size_t len);
 void qemu_iovec_concat(QEMUIOVector *dst,
diff --git a/util/iov.c b/util/iov.c
index b4be580022..65a70449da 100644
--- a/util/iov.c
+++ b/util/iov.c
@@ -378,15 +378,15 @@ static struct iovec *iov_skip_offset(struct iovec *iov, size_t offset,
 }
 
 /*
- * qiov_slice
+ * qemu_iovec_slice
  *
  * Find subarray of iovec's, containing requested range. @head would
  * be offset in first iov (returned by the function), @tail would be
  * count of extra bytes in last iovec (returned iov + @niov - 1).
  */
-static struct iovec *qiov_slice(QEMUIOVector *qiov,
-                                size_t offset, size_t len,
-                                size_t *head, size_t *tail, int *niov)
+struct iovec *qemu_iovec_slice(QEMUIOVector *qiov,
+                               size_t offset, size_t len,
+                               size_t *head, size_t *tail, int *niov)
 {
     struct iovec *iov, *end_iov;
 
@@ -411,7 +411,7 @@ int qemu_iovec_subvec_niov(QEMUIOVector *qiov, size_t offset, size_t len)
     size_t head, tail;
     int niov;
 
-    qiov_slice(qiov, offset, len, &head, &tail, &niov);
+    qemu_iovec_slice(qiov, offset, len, &head, &tail, &niov);
 
     return niov;
 }
@@ -439,8 +439,8 @@ int qemu_iovec_init_extended(
     }
 
     if (mid_len) {
-        mid_iov = qiov_slice(mid_qiov, mid_offset, mid_len,
-                             &mid_head, &mid_tail, &mid_niov);
+        mid_iov = qemu_iovec_slice(mid_qiov, mid_offset, mid_len,
+                                   &mid_head, &mid_tail, &mid_niov);
     }
 
     total_niov = !!head_len + mid_niov + !!tail_len;
--
2.39.3