c6c2b61b24
--------------------------------------------------------
* Mon Jul 31 2023 Miroslav Rezanina <mrezanin@redhat.com> - 8.0.0-10
- kvm-util-iov-Make-qiov_slice-public.patch [bz#2174676]
- kvm-block-Collapse-padded-I-O-vecs-exceeding-IOV_MAX.patch [bz#2174676]
- kvm-util-iov-Remove-qemu_iovec_init_extended.patch [bz#2174676]
- kvm-iotests-iov-padding-New-test.patch [bz#2174676]
- kvm-block-Fix-pad_request-s-request-restriction.patch [bz#2174676]
- kvm-vdpa-do-not-block-migration-if-device-has-cvq-and-x-.patch [RHEL-573]
- kvm-virtio-net-correctly-report-maximum-tx_queue_size-va.patch [bz#2040509]
- kvm-hw-pci-Disable-PCI_ERR_UNCOR_MASK-reg-for-machine-ty.patch [bz#2223691]
- kvm-vhost-vdpa-mute-unaligned-memory-error-report.patch [bz#2141965]
- Resolves: bz#2174676
  (Guest hit EXT4-fs error on host 4K disk when repeatedly hot-plug/unplug running IO disk [RHEL9])
- Resolves: RHEL-573
  ([mlx vhost_vdpa][rhel 9.3]live migration fail with "net vdpa cannot migrate with CVQ feature")
- Resolves: bz#2040509
  ([RFE]:Add support for changing "tx_queue_size" to a setable value)
- Resolves: bz#2223691
  ([machine type 9.2]Failed to migrate VM from RHEL 9.3 to RHEL 9.2)
- Resolves: bz#2141965
  ([TPM][vhost-vdpa][rhel9.2]Boot a guest with "vhost-vdpa + TPM emulator", qemu output: qemu-kvm: vhost_vdpa_listener_region_add received unaligned region)
From 547f6bf93734f7c13675eebb93273ef2273f7c31 Mon Sep 17 00:00:00 2001
From: Hanna Czenczek <hreitz@redhat.com>
Date: Fri, 14 Jul 2023 10:59:38 +0200
Subject: [PATCH 5/9] block: Fix pad_request's request restriction

RH-Author: Hanna Czenczek <hreitz@redhat.com>
RH-MergeRequest: 189: block: Split padded I/O vectors exceeding IOV_MAX
RH-Bugzilla: 2174676
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Commit: [5/5] e8abc0485f6e0608a1ec55143ff40a14d273dfc8 (hreitz/qemu-kvm-c-9-s)

bdrv_pad_request() relies on requests' lengths not to exceed SIZE_MAX,
which bdrv_check_qiov_request() does not guarantee.

bdrv_check_request32() however will guarantee this, and both of
bdrv_pad_request()'s callers (bdrv_co_preadv_part() and
bdrv_co_pwritev_part()) already run it before calling
bdrv_pad_request(). Therefore, bdrv_pad_request() can safely call
bdrv_check_request32() without expecting error, too.

In effect, this patch will not change guest-visible behavior. It is a
clean-up to tighten a condition to match what is guaranteed by our
callers, and which exists purely to show clearly why the subsequent
assertion (`assert(*bytes <= SIZE_MAX)`) is always true.

Note there is a difference between the interfaces of
bdrv_check_qiov_request() and bdrv_check_request32(): The former takes
an errp, the latter does not, so we can no longer just pass
&error_abort. Instead, we need to check the returned value. While we
do expect success (because the callers have already run this function),
an assert(ret == 0) is not much simpler than just to return an error if
it occurs, so let us handle errors by returning them up the stack now.

Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
Message-id: 20230714085938.202730-1-hreitz@redhat.com
Fixes: 18743311b829cafc1737a5f20bc3248d5f91ee2a
       ("block: Collapse padded I/O vecs exceeding IOV_MAX")
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/io.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/block/io.c b/block/io.c
index 4e8e90208b..807c9fb720 100644
--- a/block/io.c
+++ b/block/io.c
@@ -1708,7 +1708,11 @@ static int bdrv_pad_request(BlockDriverState *bs,
     int sliced_niov;
     size_t sliced_head, sliced_tail;
 
-    bdrv_check_qiov_request(*offset, *bytes, *qiov, *qiov_offset, &error_abort);
+    /* Should have been checked by the caller already */
+    ret = bdrv_check_request32(*offset, *bytes, *qiov, *qiov_offset);
+    if (ret < 0) {
+        return ret;
+    }
 
     if (!bdrv_init_padding(bs, *offset, *bytes, write, pad)) {
         if (padded) {
@@ -1721,7 +1725,7 @@ static int bdrv_pad_request(BlockDriverState *bs,
                                   &sliced_head, &sliced_tail,
                                   &sliced_niov);
 
-    /* Guaranteed by bdrv_check_qiov_request() */
+    /* Guaranteed by bdrv_check_request32() */
     assert(*bytes <= SIZE_MAX);
     ret = bdrv_create_padded_qiov(bs, pad, sliced_iov, sliced_niov,
                                   sliced_head, *bytes);
--
2.39.3
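
For readers outside the QEMU tree, the following is a minimal, self-contained C sketch of the error-handling pattern the patch adopts: a validation helper that reports failure through its return value (instead of an errp parameter such as &error_abort), with the caller propagating that result up the stack rather than asserting on it. The helpers below (check_request32(), pad_request()) and the 32-bit limit are simplified stand-ins for bdrv_check_request32() and bdrv_pad_request(), not the actual block/io.c code.

#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for bdrv_check_request32(): rejects any request
 * whose offset or length exceeds a 32-bit limit, so a successful return
 * guarantees bytes <= INT32_MAX (and therefore bytes <= SIZE_MAX). */
static int check_request32(int64_t offset, int64_t bytes)
{
    if (offset < 0 || bytes < 0) {
        return -EIO;
    }
    if (offset > INT64_MAX - bytes || offset + bytes > INT32_MAX) {
        return -EINVAL;
    }
    return 0;
}

/* Simplified stand-in for bdrv_pad_request(): instead of calling a
 * checker with &error_abort, it checks the return value and passes any
 * error up to its caller -- the pattern the patch introduces. */
static int pad_request(int64_t *offset, int64_t *bytes)
{
    int ret;

    /* Should have been checked by the caller already */
    ret = check_request32(*offset, *bytes);
    if (ret < 0) {
        return ret;
    }

    /* Guaranteed by check_request32() */
    assert((uint64_t)*bytes <= SIZE_MAX);

    /* ... padding work would happen here ... */
    return 0;
}

int main(void)
{
    int64_t offset = 4096, bytes = 512;
    printf("valid request:     %d\n", pad_request(&offset, &bytes));

    bytes = (int64_t)INT32_MAX + 1;   /* exceeds the 32-bit request limit */
    printf("oversized request: %d\n", pad_request(&offset, &bytes));
    return 0;
}

Compiled and run, the first call prints 0 and the second prints a negative errno (-EINVAL), which mirrors the behavior the commit message argues for: an unexpected oversized request surfaces as an error returned to the caller instead of aborting the process.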