Enable QXL device build

Enable building for ppc64le

Re-added Spice support

Don't remove slof.bin for ppc64le
Eduard Abdullin authored on 2025-02-18 03:25:09 +00:00, committed by AlmaLinux Autopatch
commit eda829e4c2
23 changed files with 3401 additions and 2 deletions

@@ -0,0 +1,317 @@
From 01973563401bf804505e36fecf0c229fd548eda4 Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Tue, 4 Feb 2025 22:13:52 +0100
Subject: [PATCH 07/22] block: Add 'active' field to BlockDeviceInfo
RH-Author: Kevin Wolf <kwolf@redhat.com>
RH-MergeRequest: 340: QMP command for block device reactivation after migration
RH-Jira: RHEL-54670
RH-Acked-by: Eric Blake <eblake@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Commit: [7/22] 944b834b1aa3138d87bdbfce3c9fce105bd09a9d (kmwolf/centos-qemu-kvm)
This allows querying from QMP (and also HMP) whether an image is
currently active or inactive (in the sense of BDRV_O_INACTIVE).
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Acked-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20250204211407.381505-2-kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit aec81049c2daa8a97b89e59f03733b21ae0f8c2d)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
block.c | 4 ++++
block/monitor/block-hmp-cmds.c | 5 +++--
block/qapi.c | 1 +
include/block/block-global-state.h | 3 +++
qapi/block-core.json | 6 +++++-
tests/qemu-iotests/184.out | 2 ++
tests/qemu-iotests/191.out | 16 ++++++++++++++++
tests/qemu-iotests/273.out | 5 +++++
8 files changed, 39 insertions(+), 3 deletions(-)
diff --git a/block.c b/block.c
index c317de9eaa..c94d78eefd 100644
--- a/block.c
+++ b/block.c
@@ -6824,6 +6824,10 @@ void bdrv_init_with_whitelist(void)
bdrv_init();
}
+bool bdrv_is_inactive(BlockDriverState *bs) {
+ return bs->open_flags & BDRV_O_INACTIVE;
+}
+
int bdrv_activate(BlockDriverState *bs, Error **errp)
{
BdrvChild *child, *parent;
diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c
index bdf2eb50b6..cc832549e1 100644
--- a/block/monitor/block-hmp-cmds.c
+++ b/block/monitor/block-hmp-cmds.c
@@ -630,11 +630,12 @@ static void print_block_info(Monitor *mon, BlockInfo *info,
}
if (inserted) {
- monitor_printf(mon, ": %s (%s%s%s)\n",
+ monitor_printf(mon, ": %s (%s%s%s%s)\n",
inserted->file,
inserted->drv,
inserted->ro ? ", read-only" : "",
- inserted->encrypted ? ", encrypted" : "");
+ inserted->encrypted ? ", encrypted" : "",
+ inserted->active ? "" : ", inactive");
} else {
monitor_printf(mon, ": [not inserted]\n");
}
diff --git a/block/qapi.c b/block/qapi.c
index 2b5793f1d9..709170e63d 100644
--- a/block/qapi.c
+++ b/block/qapi.c
@@ -63,6 +63,7 @@ BlockDeviceInfo *bdrv_block_device_info(BlockBackend *blk,
info->file = g_strdup(bs->filename);
info->ro = bdrv_is_read_only(bs);
info->drv = g_strdup(bs->drv->format_name);
+ info->active = !bdrv_is_inactive(bs);
info->encrypted = bs->encrypted;
info->cache = g_new(BlockdevCacheInfo, 1);
diff --git a/include/block/block-global-state.h b/include/block/block-global-state.h
index bd7cecd1cf..a826bf5f78 100644
--- a/include/block/block-global-state.h
+++ b/include/block/block-global-state.h
@@ -175,6 +175,9 @@ BlockDriverState * GRAPH_RDLOCK
check_to_replace_node(BlockDriverState *parent_bs, const char *node_name,
Error **errp);
+
+bool GRAPH_RDLOCK bdrv_is_inactive(BlockDriverState *bs);
+
int no_coroutine_fn GRAPH_RDLOCK
bdrv_activate(BlockDriverState *bs, Error **errp);
diff --git a/qapi/block-core.json b/qapi/block-core.json
index aa40d44f1d..92af032744 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -486,6 +486,10 @@
# @backing_file_depth: number of files in the backing file chain
# (since: 1.2)
#
+# @active: true if the backend is active; typical cases for inactive backends
+# are on the migration source instance after migration completes and on the
+# destination before it completes. (since: 10.0)
+#
# @encrypted: true if the backing device is encrypted
#
# @detect_zeroes: detect and optimize zero writes (Since 2.1)
@@ -556,7 +560,7 @@
{ 'struct': 'BlockDeviceInfo',
'data': { 'file': 'str', '*node-name': 'str', 'ro': 'bool', 'drv': 'str',
'*backing_file': 'str', 'backing_file_depth': 'int',
- 'encrypted': 'bool',
+ 'active': 'bool', 'encrypted': 'bool',
'detect_zeroes': 'BlockdevDetectZeroesOptions',
'bps': 'int', 'bps_rd': 'int', 'bps_wr': 'int',
'iops': 'int', 'iops_rd': 'int', 'iops_wr': 'int',
diff --git a/tests/qemu-iotests/184.out b/tests/qemu-iotests/184.out
index e8f631f853..52692b6b3b 100644
--- a/tests/qemu-iotests/184.out
+++ b/tests/qemu-iotests/184.out
@@ -26,6 +26,7 @@ Testing:
{
"iops_rd": 0,
"detect_zeroes": "off",
+ "active": true,
"image": {
"backing-image": {
"virtual-size": 1073741824,
@@ -59,6 +60,7 @@ Testing:
{
"iops_rd": 0,
"detect_zeroes": "off",
+ "active": true,
"image": {
"virtual-size": 1073741824,
"filename": "null-co://",
diff --git a/tests/qemu-iotests/191.out b/tests/qemu-iotests/191.out
index c3309e4bc6..2a72ca7106 100644
--- a/tests/qemu-iotests/191.out
+++ b/tests/qemu-iotests/191.out
@@ -114,6 +114,7 @@ wrote 65536/65536 bytes at offset 1048576
{
"iops_rd": 0,
"detect_zeroes": "off",
+ "active": true,
"image": {
"backing-image": {
"virtual-size": 67108864,
@@ -155,6 +156,7 @@ wrote 65536/65536 bytes at offset 1048576
{
"iops_rd": 0,
"detect_zeroes": "off",
+ "active": true,
"image": {
"virtual-size": 197120,
"filename": "TEST_DIR/t.IMGFMT.ovl2",
@@ -183,6 +185,7 @@ wrote 65536/65536 bytes at offset 1048576
{
"iops_rd": 0,
"detect_zeroes": "off",
+ "active": true,
"image": {
"backing-image": {
"virtual-size": 67108864,
@@ -224,6 +227,7 @@ wrote 65536/65536 bytes at offset 1048576
{
"iops_rd": 0,
"detect_zeroes": "off",
+ "active": true,
"image": {
"virtual-size": 197120,
"filename": "TEST_DIR/t.IMGFMT",
@@ -252,6 +256,7 @@ wrote 65536/65536 bytes at offset 1048576
{
"iops_rd": 0,
"detect_zeroes": "off",
+ "active": true,
"image": {
"backing-image": {
"virtual-size": 67108864,
@@ -293,6 +298,7 @@ wrote 65536/65536 bytes at offset 1048576
{
"iops_rd": 0,
"detect_zeroes": "off",
+ "active": true,
"image": {
"virtual-size": 393216,
"filename": "TEST_DIR/t.IMGFMT.mid",
@@ -321,6 +327,7 @@ wrote 65536/65536 bytes at offset 1048576
{
"iops_rd": 0,
"detect_zeroes": "off",
+ "active": true,
"image": {
"virtual-size": 67108864,
"filename": "TEST_DIR/t.IMGFMT.base",
@@ -350,6 +357,7 @@ wrote 65536/65536 bytes at offset 1048576
{
"iops_rd": 0,
"detect_zeroes": "off",
+ "active": true,
"image": {
"virtual-size": 393216,
"filename": "TEST_DIR/t.IMGFMT.base",
@@ -521,6 +529,7 @@ wrote 65536/65536 bytes at offset 1048576
{
"iops_rd": 0,
"detect_zeroes": "off",
+ "active": true,
"image": {
"backing-image": {
"virtual-size": 67108864,
@@ -562,6 +571,7 @@ wrote 65536/65536 bytes at offset 1048576
{
"iops_rd": 0,
"detect_zeroes": "off",
+ "active": true,
"image": {
"virtual-size": 197120,
"filename": "TEST_DIR/t.IMGFMT.ovl2",
@@ -590,6 +600,7 @@ wrote 65536/65536 bytes at offset 1048576
{
"iops_rd": 0,
"detect_zeroes": "off",
+ "active": true,
"image": {
"backing-image": {
"backing-image": {
@@ -642,6 +653,7 @@ wrote 65536/65536 bytes at offset 1048576
{
"iops_rd": 0,
"detect_zeroes": "off",
+ "active": true,
"image": {
"virtual-size": 197120,
"filename": "TEST_DIR/t.IMGFMT.ovl3",
@@ -670,6 +682,7 @@ wrote 65536/65536 bytes at offset 1048576
{
"iops_rd": 0,
"detect_zeroes": "off",
+ "active": true,
"image": {
"virtual-size": 67108864,
"filename": "TEST_DIR/t.IMGFMT.base",
@@ -699,6 +712,7 @@ wrote 65536/65536 bytes at offset 1048576
{
"iops_rd": 0,
"detect_zeroes": "off",
+ "active": true,
"image": {
"virtual-size": 393216,
"filename": "TEST_DIR/t.IMGFMT.base",
@@ -727,6 +741,7 @@ wrote 65536/65536 bytes at offset 1048576
{
"iops_rd": 0,
"detect_zeroes": "off",
+ "active": true,
"image": {
"backing-image": {
"virtual-size": 67108864,
@@ -768,6 +783,7 @@ wrote 65536/65536 bytes at offset 1048576
{
"iops_rd": 0,
"detect_zeroes": "off",
+ "active": true,
"image": {
"virtual-size": 197120,
"filename": "TEST_DIR/t.IMGFMT",
diff --git a/tests/qemu-iotests/273.out b/tests/qemu-iotests/273.out
index 71843f02de..c19753c685 100644
--- a/tests/qemu-iotests/273.out
+++ b/tests/qemu-iotests/273.out
@@ -23,6 +23,7 @@ Testing: -blockdev file,node-name=base,filename=TEST_DIR/t.IMGFMT.base -blockdev
{
"iops_rd": 0,
"detect_zeroes": "off",
+ "active": true,
"image": {
"backing-image": {
"backing-image": {
@@ -74,6 +75,7 @@ Testing: -blockdev file,node-name=base,filename=TEST_DIR/t.IMGFMT.base -blockdev
{
"iops_rd": 0,
"detect_zeroes": "off",
+ "active": true,
"image": {
"virtual-size": 197120,
"filename": "TEST_DIR/t.IMGFMT",
@@ -102,6 +104,7 @@ Testing: -blockdev file,node-name=base,filename=TEST_DIR/t.IMGFMT.base -blockdev
{
"iops_rd": 0,
"detect_zeroes": "off",
+ "active": true,
"image": {
"backing-image": {
"virtual-size": 197120,
@@ -142,6 +145,7 @@ Testing: -blockdev file,node-name=base,filename=TEST_DIR/t.IMGFMT.base -blockdev
{
"iops_rd": 0,
"detect_zeroes": "off",
+ "active": true,
"image": {
"virtual-size": 197120,
"filename": "TEST_DIR/t.IMGFMT.mid",
@@ -170,6 +174,7 @@ Testing: -blockdev file,node-name=base,filename=TEST_DIR/t.IMGFMT.base -blockdev
{
"iops_rd": 0,
"detect_zeroes": "off",
+ "active": true,
"image": {
"virtual-size": 197120,
"filename": "TEST_DIR/t.IMGFMT.base",
--
2.39.3
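
A rough illustration of how the new field surfaces, not taken from this commit: the node name disk-fmt is a placeholder and most BlockDeviceInfo fields are omitted for brevity. HMP's "info block" correspondingly appends ", inactive" for inactive images.
-> { "execute": "query-named-block-nodes", "arguments": { "flat": true } }
<- { "return": [ { "node-name": "disk-fmt", "active": true, "ro": false,
                   "drv": "qcow2", "file": "disk.qcow2" } ] }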

@@ -0,0 +1,187 @@
From d2cb8b847b6f88b4cbabea12a0b62f323d9000ff Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Tue, 4 Feb 2025 22:13:59 +0100
Subject: [PATCH 14/22] block: Add blockdev-set-active QMP command
RH-Author: Kevin Wolf <kwolf@redhat.com>
RH-MergeRequest: 340: QMP command for block device reactivation after migration
RH-Jira: RHEL-54670
RH-Acked-by: Eric Blake <eblake@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Commit: [14/22] 0fbba1347acea9983b4fb3d35d1a4c00a09e5579 (kmwolf/centos-qemu-kvm)
The system emulator tries to automatically activate and inactivate block
nodes at the right point during migration. However, there are still
cases where it's necessary that the user can do this manually.
Images are only activated on the destination VM of a migration when the
VM is actually resumed. If the VM was paused, this doesn't happen
automatically. The user may want to perform some operation on a block
device (e.g. taking a snapshot or starting a block job) without also
resuming the VM yet. This is an example where a manual command is
necessary.
Another example is VM migration when the image files are opened by an
external qemu-storage-daemon instance on each side. In this case, the
process that needs to hand over the images isn't even part of the
migration and can't know when the migration completes. Management tools
need a way to explicitly inactivate images on the source and activate
them on the destination.
This adds a new blockdev-set-active QMP command that lets the user
change the status of individual nodes (this is necessary in
qemu-storage-daemon because it could be serving multiple VMs and only
one of them migrates at a time). For convenience, operating on all
devices (like QEMU does automatically during migration) is offered as an
option, too, and can be used in the context of a single VM.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Acked-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20250204211407.381505-9-kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 8cd37207f8a90c5f995283ecf95f1cb5f7518a77)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
block.c | 21 ++++++++++++++++++++
blockdev.c | 32 ++++++++++++++++++++++++++++++
include/block/block-global-state.h | 3 +++
qapi/block-core.json | 32 ++++++++++++++++++++++++++++++
4 files changed, 88 insertions(+)
diff --git a/block.c b/block.c
index fd2ac177ef..2140a5d3b7 100644
--- a/block.c
+++ b/block.c
@@ -7052,6 +7052,27 @@ bdrv_inactivate_recurse(BlockDriverState *bs, bool top_level)
return 0;
}
+int bdrv_inactivate(BlockDriverState *bs, Error **errp)
+{
+ int ret;
+
+ GLOBAL_STATE_CODE();
+ GRAPH_RDLOCK_GUARD_MAINLOOP();
+
+ if (bdrv_has_bds_parent(bs, true)) {
+ error_setg(errp, "Node has active parent node");
+ return -EPERM;
+ }
+
+ ret = bdrv_inactivate_recurse(bs, true);
+ if (ret < 0) {
+ error_setg_errno(errp, -ret, "Failed to inactivate node");
+ return ret;
+ }
+
+ return 0;
+}
+
int bdrv_inactivate_all(void)
{
BlockDriverState *bs = NULL;
diff --git a/blockdev.c b/blockdev.c
index 81430122df..70046b6690 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3468,6 +3468,38 @@ void qmp_blockdev_del(const char *node_name, Error **errp)
bdrv_unref(bs);
}
+void qmp_blockdev_set_active(const char *node_name, bool active, Error **errp)
+{
+ int ret;
+
+ GLOBAL_STATE_CODE();
+ GRAPH_RDLOCK_GUARD_MAINLOOP();
+
+ if (!node_name) {
+ if (active) {
+ bdrv_activate_all(errp);
+ } else {
+ ret = bdrv_inactivate_all();
+ if (ret < 0) {
+ error_setg_errno(errp, -ret, "Failed to inactivate all nodes");
+ }
+ }
+ } else {
+ BlockDriverState *bs = bdrv_find_node(node_name);
+ if (!bs) {
+ error_setg(errp, "Failed to find node with node-name='%s'",
+ node_name);
+ return;
+ }
+
+ if (active) {
+ bdrv_activate(bs, errp);
+ } else {
+ bdrv_inactivate(bs, errp);
+ }
+ }
+}
+
static BdrvChild * GRAPH_RDLOCK
bdrv_find_child(BlockDriverState *parent_bs, const char *child_name)
{
diff --git a/include/block/block-global-state.h b/include/block/block-global-state.h
index a826bf5f78..9be34b3c99 100644
--- a/include/block/block-global-state.h
+++ b/include/block/block-global-state.h
@@ -184,6 +184,9 @@ bdrv_activate(BlockDriverState *bs, Error **errp);
int coroutine_fn no_co_wrapper_bdrv_rdlock
bdrv_co_activate(BlockDriverState *bs, Error **errp);
+int no_coroutine_fn
+bdrv_inactivate(BlockDriverState *bs, Error **errp);
+
void bdrv_activate_all(Error **errp);
int bdrv_inactivate_all(void);
diff --git a/qapi/block-core.json b/qapi/block-core.json
index 6ec603aa6f..c1af3d1f7d 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -4930,6 +4930,38 @@
{ 'command': 'blockdev-del', 'data': { 'node-name': 'str' },
'allow-preconfig': true }
+##
+# @blockdev-set-active:
+#
+# Activate or inactivate a block device. Use this to manage the handover of
+# block devices on migration with qemu-storage-daemon.
+#
+# Activating a node automatically activates all of its child nodes first.
+# Inactivating a node automatically inactivates any of its child nodes that are
+# not in use by a still active node.
+#
+# @node-name: Name of the graph node to activate or inactivate. By default, all
+# nodes are affected by the operation.
+#
+# @active: true if the nodes should be active when the command returns success,
+# false if they should be inactive.
+#
+# Since: 10.0
+#
+# .. qmp-example::
+#
+# -> { "execute": "blockdev-set-active",
+# "arguments": {
+# "node-name": "node0",
+# "active": false
+# }
+# }
+# <- { "return": {} }
+##
+{ 'command': 'blockdev-set-active',
+ 'data': { '*node-name': 'str', 'active': 'bool' },
+ 'allow-preconfig': true }
+
##
# @BlockdevCreateOptionsFile:
#
--
2.39.3
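
A minimal usage sketch of the new command; the node name disk-fmt is a placeholder. Omitting node-name applies the change to all nodes, as described above.
-> { "execute": "blockdev-set-active",
     "arguments": { "node-name": "disk-fmt", "active": false } }
<- { "return": {} }
-> { "execute": "blockdev-set-active", "arguments": { "active": true } }
<- { "return": {} }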

@@ -0,0 +1,102 @@
From 990a468ee87adc98bb53d859b52c8a5c7bbb8524 Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Tue, 4 Feb 2025 22:13:58 +0100
Subject: [PATCH 13/22] block: Add option to create inactive nodes
RH-Author: Kevin Wolf <kwolf@redhat.com>
RH-MergeRequest: 340: QMP command for block device reactivation after migration
RH-Jira: RHEL-54670
RH-Acked-by: Eric Blake <eblake@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Commit: [13/22] 4e5d2d86502ac5ace83f2abe4a48abfba188256d (kmwolf/centos-qemu-kvm)
In QEMU, nodes are automatically created inactive while expecting an
incoming migration (i.e. RUN_STATE_INMIGRATE). In qemu-storage-daemon,
the notion of runstates doesn't exist. It also wouldn't necessarily make
sense to introduce it because a single daemon can serve multiple VMs
that can be in different states.
Therefore, allow the user to explicitly open images as inactive with a
new option. The default is as before: Nodes are usually active, except
when created during RUN_STATE_INMIGRATE.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Acked-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20250204211407.381505-8-kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit faecd16fe5c65a25b5b55b5edbe4322cec5a9d96)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
block.c | 9 +++++++++
include/block/block-common.h | 1 +
qapi/block-core.json | 6 ++++++
3 files changed, 16 insertions(+)
diff --git a/block.c b/block.c
index bedd54deaa..fd2ac177ef 100644
--- a/block.c
+++ b/block.c
@@ -1573,6 +1573,10 @@ static void update_flags_from_options(int *flags, QemuOpts *opts)
if (qemu_opt_get_bool_del(opts, BDRV_OPT_AUTO_READ_ONLY, false)) {
*flags |= BDRV_O_AUTO_RDONLY;
}
+
+ if (!qemu_opt_get_bool_del(opts, BDRV_OPT_ACTIVE, true)) {
+ *flags |= BDRV_O_INACTIVE;
+ }
}
static void update_options_from_flags(QDict *options, int flags)
@@ -1799,6 +1803,11 @@ QemuOptsList bdrv_runtime_opts = {
.type = QEMU_OPT_BOOL,
.help = "Ignore flush requests",
},
+ {
+ .name = BDRV_OPT_ACTIVE,
+ .type = QEMU_OPT_BOOL,
+ .help = "Node is activated",
+ },
{
.name = BDRV_OPT_READ_ONLY,
.type = QEMU_OPT_BOOL,
diff --git a/include/block/block-common.h b/include/block/block-common.h
index 338fe5ff7a..7030669f04 100644
--- a/include/block/block-common.h
+++ b/include/block/block-common.h
@@ -257,6 +257,7 @@ typedef enum {
#define BDRV_OPT_AUTO_READ_ONLY "auto-read-only"
#define BDRV_OPT_DISCARD "discard"
#define BDRV_OPT_FORCE_SHARE "force-share"
+#define BDRV_OPT_ACTIVE "active"
#define BDRV_SECTOR_BITS 9
diff --git a/qapi/block-core.json b/qapi/block-core.json
index 92af032744..6ec603aa6f 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -4668,6 +4668,11 @@
#
# @cache: cache-related options
#
+# @active: whether the block node should be activated (default: true).
+# Having inactive block nodes is useful primarily for migration because it
+# allows opening an image on the destination while the source is still
+# holding locks for it. (Since 10.0)
+#
# @read-only: whether the block device should be read-only (default:
# false). Note that some block drivers support only read-only
# access, either generally or in certain configurations. In this
@@ -4694,6 +4699,7 @@
'*node-name': 'str',
'*discard': 'BlockdevDiscardOptions',
'*cache': 'BlockdevCacheOptions',
+ '*active': 'bool',
'*read-only': 'bool',
'*auto-read-only': 'bool',
'*force-share': 'bool',
--
2.39.3
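
A hedged sketch of opening a node as inactive at creation time, e.g. in a qemu-storage-daemon instance that will take over an image after migration; the node names and file name are placeholders. The iotest at the end of this series passes the same option on the command line as active=off in a -blockdev definition.
-> { "execute": "blockdev-add",
     "arguments": { "driver": "qcow2", "node-name": "disk-fmt", "active": false,
                    "file": { "driver": "file", "filename": "/path/to/disk.qcow2" } } }
<- { "return": {} }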

@@ -0,0 +1,80 @@
From 87507aae02f0b381c658a71777baad6fe3129485 Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Tue, 4 Feb 2025 22:13:53 +0100
Subject: [PATCH 08/22] block: Allow inactivating already inactive nodes
RH-Author: Kevin Wolf <kwolf@redhat.com>
RH-MergeRequest: 340: QMP command for block device reactivation after migration
RH-Jira: RHEL-54670
RH-Acked-by: Eric Blake <eblake@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Commit: [8/22] 5247551bb2cc2ac51ce28f690f30b0f54c9abef5 (kmwolf/centos-qemu-kvm)
What we wanted to catch with the assertion is cases where the recursion
finds that a child was inactive before its parent. This should never
happen. But if the user tries to inactivate an image that is already
inactive, that's harmless and we don't want to fail the assertion.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Acked-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20250204211407.381505-3-kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit a6490ec9d56b9e95a13918813585a3a9891710bc)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
block.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)
diff --git a/block.c b/block.c
index c94d78eefd..a2aa454312 100644
--- a/block.c
+++ b/block.c
@@ -6959,7 +6959,8 @@ bdrv_has_bds_parent(BlockDriverState *bs, bool only_active)
return false;
}
-static int GRAPH_RDLOCK bdrv_inactivate_recurse(BlockDriverState *bs)
+static int GRAPH_RDLOCK
+bdrv_inactivate_recurse(BlockDriverState *bs, bool top_level)
{
BdrvChild *child, *parent;
int ret;
@@ -6977,7 +6978,14 @@ static int GRAPH_RDLOCK bdrv_inactivate_recurse(BlockDriverState *bs)
return 0;
}
- assert(!(bs->open_flags & BDRV_O_INACTIVE));
+ /*
+ * Inactivating an already inactive node on user request is harmless, but if
+ * a child is already inactive before its parent, that's bad.
+ */
+ if (bs->open_flags & BDRV_O_INACTIVE) {
+ assert(top_level);
+ return 0;
+ }
/* Inactivate this node */
if (bs->drv->bdrv_inactivate) {
@@ -7014,7 +7022,7 @@ static int GRAPH_RDLOCK bdrv_inactivate_recurse(BlockDriverState *bs)
/* Recursively inactivate children */
QLIST_FOREACH(child, &bs->children, next) {
- ret = bdrv_inactivate_recurse(child->bs);
+ ret = bdrv_inactivate_recurse(child->bs, false);
if (ret < 0) {
return ret;
}
@@ -7039,7 +7047,7 @@ int bdrv_inactivate_all(void)
if (bdrv_has_bds_parent(bs, false)) {
continue;
}
- ret = bdrv_inactivate_recurse(bs);
+ ret = bdrv_inactivate_recurse(bs, true);
if (ret < 0) {
bdrv_next_cleanup(&it);
break;
--
2.39.3
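
Combined with the blockdev-set-active command added later in this series, the practical effect is that a user-requested inactivation of an already inactive node is expected to succeed as a no-op instead of tripping the assertion; a sketch with a placeholder node name:
-> { "execute": "blockdev-set-active",
     "arguments": { "node-name": "disk-fmt", "active": false } }
<- { "return": {} }
-> { "execute": "blockdev-set-active",
     "arguments": { "node-name": "disk-fmt", "active": false } }
<- { "return": {} }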

@@ -0,0 +1,46 @@
From 85c5bd4a41fec70482b634ae2d3bb3c56631e337 Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Tue, 4 Feb 2025 22:13:56 +0100
Subject: [PATCH 11/22] block: Don't attach inactive child to active node
RH-Author: Kevin Wolf <kwolf@redhat.com>
RH-MergeRequest: 340: QMP command for block device reactivation after migration
RH-Jira: RHEL-54670
RH-Acked-by: Eric Blake <eblake@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Commit: [11/22] 7148d3e16eeda8a6142aedeab245a88b879b37b8 (kmwolf/centos-qemu-kvm)
An active node makes unrestricted use of its children and would possibly
run into assertion failures when it operates on an inactive child node.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Acked-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20250204211407.381505-6-kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 9b81361aedcc47905de5e91f68221de89c6f5467)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
block.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/block.c b/block.c
index a2aa454312..41e72e6965 100644
--- a/block.c
+++ b/block.c
@@ -3183,6 +3183,11 @@ bdrv_attach_child_noperm(BlockDriverState *parent_bs,
child_bs->node_name, child_name, parent_bs->node_name);
return NULL;
}
+ if (bdrv_is_inactive(child_bs) && !bdrv_is_inactive(parent_bs)) {
+ error_setg(errp, "Inactive '%s' can't be a %s child of active '%s'",
+ child_bs->node_name, child_name, parent_bs->node_name);
+ return NULL;
+ }
bdrv_get_cumulative_perm(parent_bs, &perm, &shared_perm);
bdrv_child_perm(parent_bs, child_bs, NULL, child_role, NULL,
--
2.39.3
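
A sketch of the new restriction as the iotest at the end of this series exercises it: putting an active raw node on top of an inactive node is refused. Node names are placeholders and the error text is inferred from the error_setg() above, so the exact wording may differ.
-> { "execute": "blockdev-add",
     "arguments": { "driver": "raw", "node-name": "disk-filter", "file": "disk-fmt" } }
<- { "error": { "class": "GenericError",
                "desc": "Inactive 'disk-fmt' can't be a file child of active 'disk-filter'" } }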

@@ -0,0 +1,52 @@
From e7fa06c5ad14fc4df863265ca4723c9b74368682 Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Tue, 4 Feb 2025 22:14:02 +0100
Subject: [PATCH 17/22] block: Drain nodes before inactivating them
RH-Author: Kevin Wolf <kwolf@redhat.com>
RH-MergeRequest: 340: QMP command for block device reactivation after migration
RH-Jira: RHEL-54670
RH-Acked-by: Eric Blake <eblake@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Commit: [17/22] 64b1f9168b91793dcf3cf1df39e4799c7dc0f73c (kmwolf/centos-qemu-kvm)
So far the assumption has always been that if we try to inactivate a
node, it is already idle. This doesn't hold true any more if we allow
inactivating exported nodes because we can't know when new external
requests come in.
Drain the node around setting BDRV_O_INACTIVE so that a request can't start on an active node and then find that it has suddenly become inactive halfway through. With this change, it's enough for exports to check, for each new request, that it operates on an active node (or, like reads, is allowed even on an inactive node).
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Acked-by: Fabiano Rosas <farosas@suse.de>
Message-ID: <20250204211407.381505-12-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 2849092a0024405e74c96f0a5ec41bb182ec8538)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
block.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/block.c b/block.c
index 2140a5d3b7..38cb8481a8 100644
--- a/block.c
+++ b/block.c
@@ -7032,7 +7032,9 @@ bdrv_inactivate_recurse(BlockDriverState *bs, bool top_level)
return -EPERM;
}
+ bdrv_drained_begin(bs);
bs->open_flags |= BDRV_O_INACTIVE;
+ bdrv_drained_end(bs);
/*
* Update permissions, they may differ for inactive nodes.
--
2.39.3

@@ -0,0 +1,68 @@
From fcf20a7c75d01009701fd960247ff76914280e1a Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Tue, 4 Feb 2025 22:13:57 +0100
Subject: [PATCH 12/22] block: Fix crash on block_resize on inactive node
RH-Author: Kevin Wolf <kwolf@redhat.com>
RH-MergeRequest: 340: QMP command for block device reactivation after migration
RH-Jira: RHEL-54670
RH-Acked-by: Eric Blake <eblake@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Commit: [12/22] 9edc182c7ac587b0eaea836203b77d94b4d9bd80 (kmwolf/centos-qemu-kvm)
In order for block_resize to fail gracefully on an inactive node instead
of crashing with an assertion failure in bdrv_co_write_req_prepare()
(called from bdrv_co_truncate()), we need to check for inactive nodes
also when they are attached as a root node and make sure that
BLK_PERM_RESIZE isn't among the permissions allowed for inactive nodes.
To this effect, don't enumerate the permissions that are incompatible
with inactive nodes any more, but allow only BLK_PERM_CONSISTENT_READ
for them.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Acked-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20250204211407.381505-7-kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 8c2c72a33581987af8d8c484d03af3cd69b9e10a)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
block.c | 7 +++++++
block/block-backend.c | 2 +-
2 files changed, 8 insertions(+), 1 deletion(-)
diff --git a/block.c b/block.c
index 41e72e6965..bedd54deaa 100644
--- a/block.c
+++ b/block.c
@@ -3077,6 +3077,13 @@ bdrv_attach_child_common(BlockDriverState *child_bs,
assert(child_class->get_parent_desc);
GLOBAL_STATE_CODE();
+ if (bdrv_is_inactive(child_bs) && (perm & ~BLK_PERM_CONSISTENT_READ)) {
+ g_autofree char *perm_names = bdrv_perm_names(perm);
+ error_setg(errp, "Permission '%s' unavailable on inactive node",
+ perm_names);
+ return NULL;
+ }
+
new_child = g_new(BdrvChild, 1);
*new_child = (BdrvChild) {
.bs = NULL,
diff --git a/block/block-backend.c b/block/block-backend.c
index db6f9b92a3..356db1b703 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -253,7 +253,7 @@ static bool blk_can_inactivate(BlockBackend *blk)
* guest. For block job BBs that satisfy this, we can just allow
* it. This is the case for mirror job source, which is required
* by libvirt non-shared block migration. */
- if (!(blk->perm & (BLK_PERM_WRITE | BLK_PERM_WRITE_UNCHANGED))) {
+ if (!(blk->perm & ~BLK_PERM_CONSISTENT_READ)) {
return true;
}
--
2.39.3
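
For illustration only (the node name is a placeholder and the error text is inferred from the check added above): block_resize on an inactive node should now fail cleanly with a permission error instead of hitting the assertion in bdrv_co_write_req_prepare().
-> { "execute": "block_resize",
     "arguments": { "node-name": "disk-fmt", "size": 16777216 } }
<- { "error": { "class": "GenericError",
                "desc": "Permission 'resize' unavailable on inactive node" } }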

@@ -0,0 +1,68 @@
From b0b5dbd95c73a5cd4173c11d283f6144f0d78e04 Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Tue, 4 Feb 2025 22:13:54 +0100
Subject: [PATCH 09/22] block: Inactivate external snapshot overlays when
necessary
RH-Author: Kevin Wolf <kwolf@redhat.com>
RH-MergeRequest: 340: QMP command for block device reactivation after migration
RH-Jira: RHEL-54670
RH-Acked-by: Eric Blake <eblake@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Commit: [9/22] 82aa17e6d5009c0a89fb884afbf408ae2f4c5478 (kmwolf/centos-qemu-kvm)
Putting an active block node on top of an inactive one is strictly
speaking an invalid configuration and the next patch will turn it into a
hard error.
However, taking a snapshot while disk images are inactive after
completing migration has an important use case: After migrating to a
file, taking an external snapshot is what is needed to take a full VM
snapshot.
In order for this to keep working after the later patches, change snapshot creation so that it automatically inactivates an overlay that is added on top of an already inactive node.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Acked-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20250204211407.381505-4-kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit e80210ffb24c4e47650344ba77ce3ed354af596c)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
blockdev.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/blockdev.c b/blockdev.c
index 835064ed03..81430122df 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -1497,6 +1497,22 @@ static void external_snapshot_action(TransactionAction *action,
return;
}
+ /*
+ * Older QEMU versions have allowed adding an active parent node to an
+ * inactive child node. This is unsafe in the general case, but there is an
+ * important use case, which is taking a VM snapshot with migration to file
+ * and then adding an external snapshot while the VM is still stopped and
+ * images are inactive. Requiring the user to explicitly create the overlay
+ * as inactive would break compatibility, so just do it automatically here
+ * to keep this working.
+ */
+ if (bdrv_is_inactive(state->old_bs) && !bdrv_is_inactive(state->new_bs)) {
+ ret = bdrv_inactivate(state->new_bs, errp);
+ if (ret < 0) {
+ return;
+ }
+ }
+
ret = bdrv_append(state->new_bs, state->old_bs, errp);
if (ret < 0) {
return;
--
2.39.3
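
A hedged sketch of the compatibility case described above, with placeholder node names matching the iotest at the end of the series: disk-fmt is inactive after migration, snap-fmt was added as a normal (active) overlay, and the snapshot still succeeds because the overlay is inactivated automatically. Afterwards both nodes report "active": false in query-named-block-nodes.
-> { "execute": "blockdev-snapshot",
     "arguments": { "node": "disk-fmt", "overlay": "snap-fmt" } }
<- { "return": {} }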

@@ -0,0 +1,66 @@
From 1c4cdab823e271cea3bb980eb0b2714f3474c7fa Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Tue, 4 Feb 2025 22:14:00 +0100
Subject: [PATCH 15/22] block: Support inactive nodes in blk_insert_bs()
RH-Author: Kevin Wolf <kwolf@redhat.com>
RH-MergeRequest: 340: QMP command for block device reactivation after migration
RH-Jira: RHEL-54670
RH-Acked-by: Eric Blake <eblake@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Commit: [15/22] fed3cf8ea4bd443ff9bf52328650b43e3ce3150d (kmwolf/centos-qemu-kvm)
Device models have a relatively complex way to set up their block
backends, in which blk_attach_dev() sets blk->disable_perm = true.
We want to support inactive images in exports, too, so that
qemu-storage-daemon can be used with migration. Because they don't use
blk_attach_dev(), they need another way to set this flag. The most
convenient is to do this automatically when an inactive node is attached
to a BlockBackend that can be inactivated.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Acked-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20250204211407.381505-10-kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit c1c5c7cc4ef6c45ca769c640566fd40d2cb7d5c1)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
block/block-backend.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/block/block-backend.c b/block/block-backend.c
index 356db1b703..4a5a1c1f6a 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -909,14 +909,24 @@ void blk_remove_bs(BlockBackend *blk)
int blk_insert_bs(BlockBackend *blk, BlockDriverState *bs, Error **errp)
{
ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
+ uint64_t perm, shared_perm;
GLOBAL_STATE_CODE();
bdrv_ref(bs);
bdrv_graph_wrlock();
+
+ if ((bs->open_flags & BDRV_O_INACTIVE) && blk_can_inactivate(blk)) {
+ blk->disable_perm = true;
+ perm = 0;
+ shared_perm = BLK_PERM_ALL;
+ } else {
+ perm = blk->perm;
+ shared_perm = blk->shared_perm;
+ }
+
blk->root = bdrv_root_attach_child(bs, "root", &child_root,
BDRV_CHILD_FILTERED | BDRV_CHILD_PRIMARY,
- blk->perm, blk->shared_perm,
- blk, errp);
+ perm, shared_perm, blk, errp);
bdrv_graph_wrunlock();
if (blk->root == NULL) {
return -EPERM;
--
2.39.3

@@ -0,0 +1,135 @@
From f4e875181720552ebdb9530e533fbad96b8d01d2 Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Tue, 4 Feb 2025 22:14:03 +0100
Subject: [PATCH 18/22] block/export: Add option to allow export of inactive
nodes
RH-Author: Kevin Wolf <kwolf@redhat.com>
RH-MergeRequest: 340: QMP command for block device reactivation after migration
RH-Jira: RHEL-54670
RH-Acked-by: Eric Blake <eblake@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Commit: [18/22] 0a2723f830936a67fbcc4ee1ee7e846468adc77b (kmwolf/centos-qemu-kvm)
Add an option in BlockExportOptions to allow creating an export on an
inactive node without activating the node. This mode needs to be
explicitly supported by the export type (so that it doesn't perform any
operations that are forbidden for inactive nodes), so this patch alone
doesn't allow this option to be successfully used yet.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Acked-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20250204211407.381505-13-kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 1600ef01ab1296ca8230daa6bc41ba983751f646)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
block/export/export.c | 31 +++++++++++++++++++++----------
include/block/export.h | 3 +++
qapi/block-export.json | 10 +++++++++-
3 files changed, 33 insertions(+), 11 deletions(-)
diff --git a/block/export/export.c b/block/export/export.c
index 23a86efcdb..71af65b3e5 100644
--- a/block/export/export.c
+++ b/block/export/export.c
@@ -75,6 +75,7 @@ static const BlockExportDriver *blk_exp_find_driver(BlockExportType type)
BlockExport *blk_exp_add(BlockExportOptions *export, Error **errp)
{
bool fixed_iothread = export->has_fixed_iothread && export->fixed_iothread;
+ bool allow_inactive = export->has_allow_inactive && export->allow_inactive;
const BlockExportDriver *drv;
BlockExport *exp = NULL;
BlockDriverState *bs;
@@ -138,17 +139,24 @@ BlockExport *blk_exp_add(BlockExportOptions *export, Error **errp)
}
}
- /*
- * Block exports are used for non-shared storage migration. Make sure
- * that BDRV_O_INACTIVE is cleared and the image is ready for write
- * access since the export could be available before migration handover.
- * ctx was acquired in the caller.
- */
bdrv_graph_rdlock_main_loop();
- ret = bdrv_activate(bs, errp);
- if (ret < 0) {
- bdrv_graph_rdunlock_main_loop();
- goto fail;
+ if (allow_inactive) {
+ if (!drv->supports_inactive) {
+ error_setg(errp, "Export type does not support inactive exports");
+ bdrv_graph_rdunlock_main_loop();
+ goto fail;
+ }
+ } else {
+ /*
+ * Block exports are used for non-shared storage migration. Make sure
+ * that BDRV_O_INACTIVE is cleared and the image is ready for write
+ * access since the export could be available before migration handover.
+ */
+ ret = bdrv_activate(bs, errp);
+ if (ret < 0) {
+ bdrv_graph_rdunlock_main_loop();
+ goto fail;
+ }
}
bdrv_graph_rdunlock_main_loop();
@@ -162,6 +170,9 @@ BlockExport *blk_exp_add(BlockExportOptions *export, Error **errp)
if (!fixed_iothread) {
blk_set_allow_aio_context_change(blk, true);
}
+ if (allow_inactive) {
+ blk_set_force_allow_inactivate(blk);
+ }
ret = blk_insert_bs(blk, bs, errp);
if (ret < 0) {
diff --git a/include/block/export.h b/include/block/export.h
index f2fe0f8078..4bd9531d4d 100644
--- a/include/block/export.h
+++ b/include/block/export.h
@@ -29,6 +29,9 @@ typedef struct BlockExportDriver {
*/
size_t instance_size;
+ /* True if the export type supports running on an inactive node */
+ bool supports_inactive;
+
/* Creates and starts a new block export */
int (*create)(BlockExport *, BlockExportOptions *, Error **);
diff --git a/qapi/block-export.json b/qapi/block-export.json
index ce33fe378d..117b05d13c 100644
--- a/qapi/block-export.json
+++ b/qapi/block-export.json
@@ -372,6 +372,13 @@
# cannot be moved to the iothread. The default is false.
# (since: 5.2)
#
+# @allow-inactive: If true, the export allows the exported node to be inactive.
+# If it is created for an inactive block node, the node remains inactive. If
+# the export type doesn't support running on an inactive node, an error is
+# returned. If false, inactive block nodes are automatically activated before
+# creating the export and trying to inactivate them later fails.
+# (since: 10.0; default: false)
+#
# Since: 4.2
##
{ 'union': 'BlockExportOptions',
@@ -381,7 +388,8 @@
'*iothread': 'str',
'node-name': 'str',
'*writable': 'bool',
- '*writethrough': 'bool' },
+ '*writethrough': 'bool',
+ '*allow-inactive': 'bool' },
'discriminator': 'type',
'data': {
'nbd': 'BlockExportOptionsNbd',
--
2.39.3
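
A usage sketch based on the iotest at the end of this series; the export id and node name are placeholders, and the export driver must set supports_inactive (a later patch in the series does this for NBD). With allow-inactive=true the node stays inactive; without it, the export would activate the node first.
-> { "execute": "block-export-add",
     "arguments": { "type": "nbd", "id": "exp0", "node-name": "disk-fmt",
                    "writable": true, "allow-inactive": true } }
<- { "return": {} }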

@@ -0,0 +1,50 @@
From 7e3b7fff56e0f8f16a898e7d22789ffad4166aca Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Tue, 4 Feb 2025 22:14:01 +0100
Subject: [PATCH 16/22] block/export: Don't ignore image activation error in
blk_exp_add()
RH-Author: Kevin Wolf <kwolf@redhat.com>
RH-MergeRequest: 340: QMP command for block device reactivation after migration
RH-Jira: RHEL-54670
RH-Acked-by: Eric Blake <eblake@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Commit: [16/22] 3318a1c7eb1707a8a0b01a0c6edbd69deada8ca7 (kmwolf/centos-qemu-kvm)
Currently, block exports can't handle inactive images correctly.
Incoming write requests would run into assertion failures. Make sure that we return an error when creating an export fails to activate the image.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Acked-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20250204211407.381505-11-kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 69f28176ca0af850db23a1c6364f0c8525b20801)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
block/export/export.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/block/export/export.c b/block/export/export.c
index 6d51ae8ed7..23a86efcdb 100644
--- a/block/export/export.c
+++ b/block/export/export.c
@@ -145,7 +145,11 @@ BlockExport *blk_exp_add(BlockExportOptions *export, Error **errp)
* ctx was acquired in the caller.
*/
bdrv_graph_rdlock_main_loop();
- bdrv_activate(bs, NULL);
+ ret = bdrv_activate(bs, errp);
+ if (ret < 0) {
+ bdrv_graph_rdunlock_main_loop();
+ goto fail;
+ }
bdrv_graph_rdunlock_main_loop();
perm = BLK_PERM_CONSISTENT_READ;
--
2.39.3
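
To illustrate the intended behaviour only: creating the export now reports the activation failure instead of silently returning success. The node name and export id are placeholders, and the error text is a stand-in for whatever caused bdrv_activate() to fail (e.g. image locks still held by the migration source).
-> { "execute": "block-export-add",
     "arguments": { "type": "nbd", "id": "exp0", "node-name": "disk-fmt",
                    "writable": true } }
<- { "error": { "class": "GenericError", "desc": "<driver-specific activation error>" } }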

@@ -0,0 +1,609 @@
From 693f96281609b133244802fbb77f8e35061a1648 Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Tue, 4 Feb 2025 22:14:07 +0100
Subject: [PATCH 22/22] iotests: Add (NBD-based) tests for inactive nodes
RH-Author: Kevin Wolf <kwolf@redhat.com>
RH-MergeRequest: 340: QMP command for block device reactivation after migration
RH-Jira: RHEL-54670
RH-Acked-by: Eric Blake <eblake@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Commit: [22/22] 4715b88d95aefc8b7fdf74f3acb4a45811faea39 (kmwolf/centos-qemu-kvm)
This tests different types of operations on inactive block nodes
(including graph changes, block jobs and NBD exports) to make sure that
users manually activating and inactivating nodes doesn't break things.
Support for inactive nodes in other export types will have to come with
separate test cases because they have different dependencies like blkio
or root permissions and we don't want to disable this basic test when
they are not fulfilled.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Acked-by: Fabiano Rosas <farosas@suse.de>
Message-ID: <20250204211407.381505-17-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit bbf105ef3cc48fff282789e9bf56b7a81e1407bd)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
tests/qemu-iotests/iotests.py | 4 +
tests/qemu-iotests/tests/inactive-node-nbd | 303 ++++++++++++++++++
.../qemu-iotests/tests/inactive-node-nbd.out | 239 ++++++++++++++
3 files changed, 546 insertions(+)
create mode 100755 tests/qemu-iotests/tests/inactive-node-nbd
create mode 100644 tests/qemu-iotests/tests/inactive-node-nbd.out
diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index 1a42aa1416..c8cb028c2d 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -913,6 +913,10 @@ def add_incoming(self, addr):
self._args.append(addr)
return self
+ def add_paused(self):
+ self._args.append('-S')
+ return self
+
def hmp(self, command_line: str, use_log: bool = False) -> QMPMessage:
cmd = 'human-monitor-command'
kwargs: Dict[str, Any] = {'command-line': command_line}
diff --git a/tests/qemu-iotests/tests/inactive-node-nbd b/tests/qemu-iotests/tests/inactive-node-nbd
new file mode 100755
index 0000000000..a95b37e796
--- /dev/null
+++ b/tests/qemu-iotests/tests/inactive-node-nbd
@@ -0,0 +1,303 @@
+#!/usr/bin/env python3
+# group: rw quick
+#
+# Copyright (C) Red Hat, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+#
+# Creator/Owner: Kevin Wolf <kwolf@redhat.com>
+
+import iotests
+
+from iotests import QemuIoInteractive
+from iotests import filter_qemu_io, filter_qtest, filter_qmp_testfiles
+
+iotests.script_initialize(supported_fmts=['generic'],
+ supported_protocols=['file'],
+ supported_platforms=['linux'])
+
+def get_export(node_name='disk-fmt', allow_inactive=None):
+ exp = {
+ 'id': 'exp0',
+ 'type': 'nbd',
+ 'node-name': node_name,
+ 'writable': True,
+ }
+
+ if allow_inactive is not None:
+ exp['allow-inactive'] = allow_inactive
+
+ return exp
+
+def node_is_active(_vm, node_name):
+ nodes = _vm.cmd('query-named-block-nodes', flat=True)
+ node = next(n for n in nodes if n['node-name'] == node_name)
+ return node['active']
+
+with iotests.FilePath('disk.img') as path, \
+ iotests.FilePath('snap.qcow2') as snap_path, \
+ iotests.FilePath('snap2.qcow2') as snap2_path, \
+ iotests.FilePath('target.img') as target_path, \
+ iotests.FilePath('nbd.sock', base_dir=iotests.sock_dir) as nbd_sock, \
+ iotests.VM() as vm:
+
+ img_size = '10M'
+
+ iotests.log('Preparing disk...')
+ iotests.qemu_img_create('-f', iotests.imgfmt, path, img_size)
+ iotests.qemu_img_create('-f', iotests.imgfmt, target_path, img_size)
+
+ iotests.qemu_img_create('-f', 'qcow2', '-b', path, '-F', iotests.imgfmt,
+ snap_path)
+ iotests.qemu_img_create('-f', 'qcow2', '-b', snap_path, '-F', 'qcow2',
+ snap2_path)
+
+ iotests.log('Launching VM...')
+ vm.add_blockdev(f'file,node-name=disk-file,filename={path}')
+ vm.add_blockdev(f'{iotests.imgfmt},file=disk-file,node-name=disk-fmt,'
+ 'active=off')
+ vm.add_blockdev(f'file,node-name=target-file,filename={target_path}')
+ vm.add_blockdev(f'{iotests.imgfmt},file=target-file,node-name=target-fmt')
+ vm.add_blockdev(f'file,node-name=snap-file,filename={snap_path}')
+ vm.add_blockdev(f'file,node-name=snap2-file,filename={snap2_path}')
+
+ # Actually running the VM activates all images
+ vm.add_paused()
+
+ vm.launch()
+ vm.qmp_log('nbd-server-start',
+ addr={'type': 'unix', 'data':{'path': nbd_sock}},
+ filters=[filter_qmp_testfiles])
+
+ iotests.log('\n=== Creating export of inactive node ===')
+
+ iotests.log('\nExports activate nodes without allow-inactive')
+ iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+ vm.qmp_log('block-export-add', **get_export())
+ iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+ vm.qmp_log('query-block-exports')
+ vm.qmp_log('block-export-del', id='exp0')
+ vm.event_wait('BLOCK_EXPORT_DELETED')
+ vm.qmp_log('query-block-exports')
+
+ iotests.log('\nExports activate nodes with allow-inactive=false')
+ vm.qmp_log('blockdev-set-active', node_name='disk-fmt', active=False)
+ iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+ vm.qmp_log('block-export-add', **get_export(allow_inactive=False))
+ iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+ vm.qmp_log('query-block-exports')
+ vm.qmp_log('block-export-del', id='exp0')
+ vm.event_wait('BLOCK_EXPORT_DELETED')
+ vm.qmp_log('query-block-exports')
+
+ iotests.log('\nExport leaves nodes inactive with allow-inactive=true')
+ vm.qmp_log('blockdev-set-active', node_name='disk-fmt', active=False)
+ iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+ vm.qmp_log('block-export-add', **get_export(allow_inactive=True))
+ iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+ vm.qmp_log('query-block-exports')
+ vm.qmp_log('block-export-del', id='exp0')
+ vm.event_wait('BLOCK_EXPORT_DELETED')
+ vm.qmp_log('query-block-exports')
+
+ iotests.log('\n=== Inactivating node with existing export ===')
+
+ iotests.log('\nInactivating nodes with an export fails without '
+ 'allow-inactive')
+ vm.qmp_log('blockdev-set-active', node_name='disk-fmt', active=True)
+ vm.qmp_log('block-export-add', **get_export(node_name='disk-fmt'))
+ vm.qmp_log('blockdev-set-active', node_name='disk-fmt', active=False)
+ iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+ vm.qmp_log('query-block-exports')
+ vm.qmp_log('block-export-del', id='exp0')
+ vm.event_wait('BLOCK_EXPORT_DELETED')
+ vm.qmp_log('query-block-exports')
+
+ iotests.log('\nInactivating nodes with an export fails with '
+ 'allow-inactive=false')
+ vm.qmp_log('blockdev-set-active', node_name='disk-fmt', active=True)
+ vm.qmp_log('block-export-add',
+ **get_export(node_name='disk-fmt', allow_inactive=False))
+ vm.qmp_log('blockdev-set-active', node_name='disk-fmt', active=False)
+ iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+ vm.qmp_log('query-block-exports')
+ vm.qmp_log('block-export-del', id='exp0')
+ vm.event_wait('BLOCK_EXPORT_DELETED')
+ vm.qmp_log('query-block-exports')
+
+ iotests.log('\nInactivating nodes with an export works with '
+ 'allow-inactive=true')
+ vm.qmp_log('blockdev-set-active', node_name='disk-fmt', active=True)
+ vm.qmp_log('block-export-add',
+ **get_export(node_name='disk-fmt', allow_inactive=True))
+ vm.qmp_log('blockdev-set-active', node_name='disk-fmt', active=False)
+ iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+ vm.qmp_log('query-block-exports')
+ vm.qmp_log('block-export-del', id='exp0')
+ vm.event_wait('BLOCK_EXPORT_DELETED')
+ vm.qmp_log('query-block-exports')
+
+ iotests.log('\n=== Inactive nodes with parent ===')
+
+ iotests.log('\nInactivating nodes with an active parent fails')
+ vm.qmp_log('blockdev-set-active', node_name='disk-fmt', active=True)
+ vm.qmp_log('blockdev-set-active', node_name='disk-file', active=False)
+ iotests.log('disk-file active: %s' % node_is_active(vm, 'disk-file'))
+ iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+
+ iotests.log('\nInactivating nodes with an inactive parent works')
+ vm.qmp_log('blockdev-set-active', node_name='disk-fmt', active=False)
+ vm.qmp_log('blockdev-set-active', node_name='disk-file', active=False)
+ iotests.log('disk-file active: %s' % node_is_active(vm, 'disk-file'))
+ iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+
+ iotests.log('\nCreating active parent node with an inactive child fails')
+ vm.qmp_log('blockdev-add', driver='raw', file='disk-fmt',
+ node_name='disk-filter')
+ vm.qmp_log('blockdev-add', driver='raw', file='disk-fmt',
+ node_name='disk-filter', active=True)
+
+ iotests.log('\nCreating inactive parent node with an inactive child works')
+ vm.qmp_log('blockdev-add', driver='raw', file='disk-fmt',
+ node_name='disk-filter', active=False)
+ vm.qmp_log('blockdev-del', node_name='disk-filter')
+
+ iotests.log('\n=== Resizing an inactive node ===')
+ vm.qmp_log('block_resize', node_name='disk-fmt', size=16*1024*1024)
+
+ iotests.log('\n=== Taking a snapshot of an inactive node ===')
+
+ iotests.log('\nActive overlay over inactive backing file automatically '
+ 'makes both inactive for compatibility')
+ vm.qmp_log('blockdev-add', driver='qcow2', node_name='snap-fmt',
+ file='snap-file', backing=None)
+ iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+ iotests.log('snap-fmt active: %s' % node_is_active(vm, 'snap-fmt'))
+ vm.qmp_log('blockdev-snapshot', node='disk-fmt', overlay='snap-fmt')
+ iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+ iotests.log('snap-fmt active: %s' % node_is_active(vm, 'snap-fmt'))
+ vm.qmp_log('blockdev-del', node_name='snap-fmt')
+
+ iotests.log('\nInactive overlay over inactive backing file just works')
+ vm.qmp_log('blockdev-add', driver='qcow2', node_name='snap-fmt',
+ file='snap-file', backing=None, active=False)
+ vm.qmp_log('blockdev-snapshot', node='disk-fmt', overlay='snap-fmt')
+
+ iotests.log('\n=== Block jobs with inactive nodes ===')
+
+ iotests.log('\nStreaming into an inactive node')
+ vm.qmp_log('block-stream', device='snap-fmt',
+ filters=[iotests.filter_qmp_generated_node_ids])
+
+ iotests.log('\nCommitting an inactive root node (active commit)')
+ vm.qmp_log('block-commit', job_id='job0', device='snap-fmt',
+ filters=[iotests.filter_qmp_generated_node_ids])
+
+ iotests.log('\nCommitting an inactive intermediate node to inactive base')
+ vm.qmp_log('blockdev-add', driver='qcow2', node_name='snap2-fmt',
+ file='snap2-file', backing='snap-fmt', active=False)
+
+ iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+ iotests.log('snap-fmt active: %s' % node_is_active(vm, 'snap-fmt'))
+ iotests.log('snap2-fmt active: %s' % node_is_active(vm, 'snap2-fmt'))
+
+ vm.qmp_log('block-commit', job_id='job0', device='snap2-fmt',
+ top_node='snap-fmt',
+ filters=[iotests.filter_qmp_generated_node_ids])
+
+ iotests.log('\nCommitting an inactive intermediate node to active base')
+ vm.qmp_log('blockdev-set-active', node_name='disk-fmt', active=True)
+ vm.qmp_log('block-commit', job_id='job0', device='snap2-fmt',
+ top_node='snap-fmt',
+ filters=[iotests.filter_qmp_generated_node_ids])
+
+ iotests.log('\nMirror from inactive source to active target')
+ vm.qmp_log('blockdev-mirror', job_id='job0', device='snap2-fmt',
+ target='target-fmt', sync='full',
+ filters=[iotests.filter_qmp_generated_node_ids])
+
+ iotests.log('\nMirror from active source to inactive target')
+
+ iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+ iotests.log('snap-fmt active: %s' % node_is_active(vm, 'snap-fmt'))
+ iotests.log('snap2-fmt active: %s' % node_is_active(vm, 'snap2-fmt'))
+ iotests.log('target-fmt active: %s' % node_is_active(vm, 'target-fmt'))
+
+ # Activating snap2-fmt recursively activates the whole backing chain
+ vm.qmp_log('blockdev-set-active', node_name='snap2-fmt', active=True)
+ vm.qmp_log('blockdev-set-active', node_name='target-fmt', active=False)
+
+ iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+ iotests.log('snap-fmt active: %s' % node_is_active(vm, 'snap-fmt'))
+ iotests.log('snap2-fmt active: %s' % node_is_active(vm, 'snap2-fmt'))
+ iotests.log('target-fmt active: %s' % node_is_active(vm, 'target-fmt'))
+
+ vm.qmp_log('blockdev-mirror', job_id='job0', device='snap2-fmt',
+ target='target-fmt', sync='full',
+ filters=[iotests.filter_qmp_generated_node_ids])
+
+ iotests.log('\nBackup from active source to inactive target')
+
+ vm.qmp_log('blockdev-backup', job_id='job0', device='snap2-fmt',
+ target='target-fmt', sync='full',
+ filters=[iotests.filter_qmp_generated_node_ids])
+
+ iotests.log('\nBackup from inactive source to active target')
+
+ # Inactivating snap2-fmt recursively inactivates the whole backing chain
+ vm.qmp_log('blockdev-set-active', node_name='snap2-fmt', active=False)
+ vm.qmp_log('blockdev-set-active', node_name='target-fmt', active=True)
+
+ iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+ iotests.log('snap-fmt active: %s' % node_is_active(vm, 'snap-fmt'))
+ iotests.log('snap2-fmt active: %s' % node_is_active(vm, 'snap2-fmt'))
+ iotests.log('target-fmt active: %s' % node_is_active(vm, 'target-fmt'))
+
+ vm.qmp_log('blockdev-backup', job_id='job0', device='snap2-fmt',
+ target='target-fmt', sync='full',
+ filters=[iotests.filter_qmp_generated_node_ids])
+
+ iotests.log('\n=== Accessing export on inactive node ===')
+
+ # Use the target node because it has the right image format and isn't the
+ # (read-only) backing file of a qcow2 node
+ vm.qmp_log('blockdev-set-active', node_name='target-fmt', active=False)
+ vm.qmp_log('block-export-add',
+ **get_export(node_name='target-fmt', allow_inactive=True))
+
+ # The read should succeed, everything else should fail gracefully
+ qemu_io = QemuIoInteractive('-f', 'raw',
+ f'nbd+unix:///target-fmt?socket={nbd_sock}')
+ iotests.log(qemu_io.cmd('read 0 64k'), filters=[filter_qemu_io])
+ iotests.log(qemu_io.cmd('write 0 64k'), filters=[filter_qemu_io])
+ iotests.log(qemu_io.cmd('write -z 0 64k'), filters=[filter_qemu_io])
+ iotests.log(qemu_io.cmd('write -zu 0 64k'), filters=[filter_qemu_io])
+ iotests.log(qemu_io.cmd('discard 0 64k'), filters=[filter_qemu_io])
+ iotests.log(qemu_io.cmd('flush'), filters=[filter_qemu_io])
+ iotests.log(qemu_io.cmd('map'), filters=[filter_qemu_io])
+ qemu_io.close()
+
+ iotests.log('\n=== Resuming VM activates all images ===')
+ vm.qmp_log('cont')
+
+ iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+ iotests.log('snap-fmt active: %s' % node_is_active(vm, 'snap-fmt'))
+ iotests.log('snap2-fmt active: %s' % node_is_active(vm, 'snap2-fmt'))
+ iotests.log('target-fmt active: %s' % node_is_active(vm, 'target-fmt'))
+
+ iotests.log('\nShutting down...')
+ vm.shutdown()
+ log = vm.get_log()
+ if log:
+ iotests.log(log, [filter_qtest, filter_qemu_io])
diff --git a/tests/qemu-iotests/tests/inactive-node-nbd.out b/tests/qemu-iotests/tests/inactive-node-nbd.out
new file mode 100644
index 0000000000..a458b4fc05
--- /dev/null
+++ b/tests/qemu-iotests/tests/inactive-node-nbd.out
@@ -0,0 +1,239 @@
+Preparing disk...
+Launching VM...
+{"execute": "nbd-server-start", "arguments": {"addr": {"data": {"path": "SOCK_DIR/PID-nbd.sock"}, "type": "unix"}}}
+{"return": {}}
+
+=== Creating export of inactive node ===
+
+Exports activate nodes without allow-inactive
+disk-fmt active: False
+{"execute": "block-export-add", "arguments": {"id": "exp0", "node-name": "disk-fmt", "type": "nbd", "writable": true}}
+{"return": {}}
+disk-fmt active: True
+{"execute": "query-block-exports", "arguments": {}}
+{"return": [{"id": "exp0", "node-name": "disk-fmt", "shutting-down": false, "type": "nbd"}]}
+{"execute": "block-export-del", "arguments": {"id": "exp0"}}
+{"return": {}}
+{"execute": "query-block-exports", "arguments": {}}
+{"return": []}
+
+Exports activate nodes with allow-inactive=false
+{"execute": "blockdev-set-active", "arguments": {"active": false, "node-name": "disk-fmt"}}
+{"return": {}}
+disk-fmt active: False
+{"execute": "block-export-add", "arguments": {"allow-inactive": false, "id": "exp0", "node-name": "disk-fmt", "type": "nbd", "writable": true}}
+{"return": {}}
+disk-fmt active: True
+{"execute": "query-block-exports", "arguments": {}}
+{"return": [{"id": "exp0", "node-name": "disk-fmt", "shutting-down": false, "type": "nbd"}]}
+{"execute": "block-export-del", "arguments": {"id": "exp0"}}
+{"return": {}}
+{"execute": "query-block-exports", "arguments": {}}
+{"return": []}
+
+Export leaves nodes inactive with allow-inactive=true
+{"execute": "blockdev-set-active", "arguments": {"active": false, "node-name": "disk-fmt"}}
+{"return": {}}
+disk-fmt active: False
+{"execute": "block-export-add", "arguments": {"allow-inactive": true, "id": "exp0", "node-name": "disk-fmt", "type": "nbd", "writable": true}}
+{"return": {}}
+disk-fmt active: False
+{"execute": "query-block-exports", "arguments": {}}
+{"return": [{"id": "exp0", "node-name": "disk-fmt", "shutting-down": false, "type": "nbd"}]}
+{"execute": "block-export-del", "arguments": {"id": "exp0"}}
+{"return": {}}
+{"execute": "query-block-exports", "arguments": {}}
+{"return": []}
+
+=== Inactivating node with existing export ===
+
+Inactivating nodes with an export fails without allow-inactive
+{"execute": "blockdev-set-active", "arguments": {"active": true, "node-name": "disk-fmt"}}
+{"return": {}}
+{"execute": "block-export-add", "arguments": {"id": "exp0", "node-name": "disk-fmt", "type": "nbd", "writable": true}}
+{"return": {}}
+{"execute": "blockdev-set-active", "arguments": {"active": false, "node-name": "disk-fmt"}}
+{"error": {"class": "GenericError", "desc": "Failed to inactivate node: Operation not permitted"}}
+disk-fmt active: True
+{"execute": "query-block-exports", "arguments": {}}
+{"return": [{"id": "exp0", "node-name": "disk-fmt", "shutting-down": false, "type": "nbd"}]}
+{"execute": "block-export-del", "arguments": {"id": "exp0"}}
+{"return": {}}
+{"execute": "query-block-exports", "arguments": {}}
+{"return": []}
+
+Inactivating nodes with an export fails with allow-inactive=false
+{"execute": "blockdev-set-active", "arguments": {"active": true, "node-name": "disk-fmt"}}
+{"return": {}}
+{"execute": "block-export-add", "arguments": {"allow-inactive": false, "id": "exp0", "node-name": "disk-fmt", "type": "nbd", "writable": true}}
+{"return": {}}
+{"execute": "blockdev-set-active", "arguments": {"active": false, "node-name": "disk-fmt"}}
+{"error": {"class": "GenericError", "desc": "Failed to inactivate node: Operation not permitted"}}
+disk-fmt active: True
+{"execute": "query-block-exports", "arguments": {}}
+{"return": [{"id": "exp0", "node-name": "disk-fmt", "shutting-down": false, "type": "nbd"}]}
+{"execute": "block-export-del", "arguments": {"id": "exp0"}}
+{"return": {}}
+{"execute": "query-block-exports", "arguments": {}}
+{"return": []}
+
+Inactivating nodes with an export works with allow-inactive=true
+{"execute": "blockdev-set-active", "arguments": {"active": true, "node-name": "disk-fmt"}}
+{"return": {}}
+{"execute": "block-export-add", "arguments": {"allow-inactive": true, "id": "exp0", "node-name": "disk-fmt", "type": "nbd", "writable": true}}
+{"return": {}}
+{"execute": "blockdev-set-active", "arguments": {"active": false, "node-name": "disk-fmt"}}
+{"return": {}}
+disk-fmt active: False
+{"execute": "query-block-exports", "arguments": {}}
+{"return": [{"id": "exp0", "node-name": "disk-fmt", "shutting-down": false, "type": "nbd"}]}
+{"execute": "block-export-del", "arguments": {"id": "exp0"}}
+{"return": {}}
+{"execute": "query-block-exports", "arguments": {}}
+{"return": []}
+
+=== Inactive nodes with parent ===
+
+Inactivating nodes with an active parent fails
+{"execute": "blockdev-set-active", "arguments": {"active": true, "node-name": "disk-fmt"}}
+{"return": {}}
+{"execute": "blockdev-set-active", "arguments": {"active": false, "node-name": "disk-file"}}
+{"error": {"class": "GenericError", "desc": "Node has active parent node"}}
+disk-file active: True
+disk-fmt active: True
+
+Inactivating nodes with an inactive parent works
+{"execute": "blockdev-set-active", "arguments": {"active": false, "node-name": "disk-fmt"}}
+{"return": {}}
+{"execute": "blockdev-set-active", "arguments": {"active": false, "node-name": "disk-file"}}
+{"return": {}}
+disk-file active: False
+disk-fmt active: False
+
+Creating active parent node with an inactive child fails
+{"execute": "blockdev-add", "arguments": {"driver": "raw", "file": "disk-fmt", "node-name": "disk-filter"}}
+{"error": {"class": "GenericError", "desc": "Inactive 'disk-fmt' can't be a file child of active 'disk-filter'"}}
+{"execute": "blockdev-add", "arguments": {"active": true, "driver": "raw", "file": "disk-fmt", "node-name": "disk-filter"}}
+{"error": {"class": "GenericError", "desc": "Inactive 'disk-fmt' can't be a file child of active 'disk-filter'"}}
+
+Creating inactive parent node with an inactive child works
+{"execute": "blockdev-add", "arguments": {"active": false, "driver": "raw", "file": "disk-fmt", "node-name": "disk-filter"}}
+{"return": {}}
+{"execute": "blockdev-del", "arguments": {"node-name": "disk-filter"}}
+{"return": {}}
+
+=== Resizing an inactive node ===
+{"execute": "block_resize", "arguments": {"node-name": "disk-fmt", "size": 16777216}}
+{"error": {"class": "GenericError", "desc": "Permission 'resize' unavailable on inactive node"}}
+
+=== Taking a snapshot of an inactive node ===
+
+Active overlay over inactive backing file automatically makes both inactive for compatibility
+{"execute": "blockdev-add", "arguments": {"backing": null, "driver": "qcow2", "file": "snap-file", "node-name": "snap-fmt"}}
+{"return": {}}
+disk-fmt active: False
+snap-fmt active: True
+{"execute": "blockdev-snapshot", "arguments": {"node": "disk-fmt", "overlay": "snap-fmt"}}
+{"return": {}}
+disk-fmt active: False
+snap-fmt active: False
+{"execute": "blockdev-del", "arguments": {"node-name": "snap-fmt"}}
+{"return": {}}
+
+Inactive overlay over inactive backing file just works
+{"execute": "blockdev-add", "arguments": {"active": false, "backing": null, "driver": "qcow2", "file": "snap-file", "node-name": "snap-fmt"}}
+{"return": {}}
+{"execute": "blockdev-snapshot", "arguments": {"node": "disk-fmt", "overlay": "snap-fmt"}}
+{"return": {}}
+
+=== Block jobs with inactive nodes ===
+
+Streaming into an inactive node
+{"execute": "block-stream", "arguments": {"device": "snap-fmt"}}
+{"error": {"class": "GenericError", "desc": "Could not create node: Inactive 'snap-fmt' can't be a file child of active 'NODE_NAME'"}}
+
+Committing an inactive root node (active commit)
+{"execute": "block-commit", "arguments": {"device": "snap-fmt", "job-id": "job0"}}
+{"error": {"class": "GenericError", "desc": "Inactive 'snap-fmt' can't be a backing child of active 'NODE_NAME'"}}
+
+Committing an inactive intermediate node to inactive base
+{"execute": "blockdev-add", "arguments": {"active": false, "backing": "snap-fmt", "driver": "qcow2", "file": "snap2-file", "node-name": "snap2-fmt"}}
+{"return": {}}
+disk-fmt active: False
+snap-fmt active: False
+snap2-fmt active: False
+{"execute": "block-commit", "arguments": {"device": "snap2-fmt", "job-id": "job0", "top-node": "snap-fmt"}}
+{"error": {"class": "GenericError", "desc": "Inactive 'snap-fmt' can't be a backing child of active 'NODE_NAME'"}}
+
+Committing an inactive intermediate node to active base
+{"execute": "blockdev-set-active", "arguments": {"active": true, "node-name": "disk-fmt"}}
+{"return": {}}
+{"execute": "block-commit", "arguments": {"device": "snap2-fmt", "job-id": "job0", "top-node": "snap-fmt"}}
+{"error": {"class": "GenericError", "desc": "Inactive 'snap-fmt' can't be a backing child of active 'NODE_NAME'"}}
+
+Mirror from inactive source to active target
+{"execute": "blockdev-mirror", "arguments": {"device": "snap2-fmt", "job-id": "job0", "sync": "full", "target": "target-fmt"}}
+{"error": {"class": "GenericError", "desc": "Inactive 'snap2-fmt' can't be a backing child of active 'NODE_NAME'"}}
+
+Mirror from active source to inactive target
+disk-fmt active: True
+snap-fmt active: False
+snap2-fmt active: False
+target-fmt active: True
+{"execute": "blockdev-set-active", "arguments": {"active": true, "node-name": "snap2-fmt"}}
+{"return": {}}
+{"execute": "blockdev-set-active", "arguments": {"active": false, "node-name": "target-fmt"}}
+{"return": {}}
+disk-fmt active: True
+snap-fmt active: True
+snap2-fmt active: True
+target-fmt active: False
+{"execute": "blockdev-mirror", "arguments": {"device": "snap2-fmt", "job-id": "job0", "sync": "full", "target": "target-fmt"}}
+{"error": {"class": "GenericError", "desc": "Permission 'write' unavailable on inactive node"}}
+
+Backup from active source to inactive target
+{"execute": "blockdev-backup", "arguments": {"device": "snap2-fmt", "job-id": "job0", "sync": "full", "target": "target-fmt"}}
+{"error": {"class": "GenericError", "desc": "Could not create node: Inactive 'target-fmt' can't be a target child of active 'NODE_NAME'"}}
+
+Backup from inactive source to active target
+{"execute": "blockdev-set-active", "arguments": {"active": false, "node-name": "snap2-fmt"}}
+{"return": {}}
+{"execute": "blockdev-set-active", "arguments": {"active": true, "node-name": "target-fmt"}}
+{"return": {}}
+disk-fmt active: False
+snap-fmt active: False
+snap2-fmt active: False
+target-fmt active: True
+{"execute": "blockdev-backup", "arguments": {"device": "snap2-fmt", "job-id": "job0", "sync": "full", "target": "target-fmt"}}
+{"error": {"class": "GenericError", "desc": "Could not create node: Inactive 'snap2-fmt' can't be a file child of active 'NODE_NAME'"}}
+
+=== Accessing export on inactive node ===
+{"execute": "blockdev-set-active", "arguments": {"active": false, "node-name": "target-fmt"}}
+{"return": {}}
+{"execute": "block-export-add", "arguments": {"allow-inactive": true, "id": "exp0", "node-name": "target-fmt", "type": "nbd", "writable": true}}
+{"return": {}}
+read 65536/65536 bytes at offset 0
+64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+
+write failed: Operation not permitted
+
+write failed: Operation not permitted
+
+write failed: Operation not permitted
+
+discard failed: Operation not permitted
+
+
+qemu-io: Failed to get allocation status: Operation not permitted
+
+
+=== Resuming VM activates all images ===
+{"execute": "cont", "arguments": {}}
+{"return": {}}
+disk-fmt active: True
+snap-fmt active: True
+snap2-fmt active: True
+target-fmt active: True
+
+Shutting down...
+
--
2.39.3


@ -0,0 +1,113 @@
From 30350ce735c55c416d98a566370bc43b7358ee1d Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Tue, 4 Feb 2025 22:14:05 +0100
Subject: [PATCH 20/22] iotests: Add filter_qtest()
RH-Author: Kevin Wolf <kwolf@redhat.com>
RH-MergeRequest: 340: QMP command for block device reactivation after migration
RH-Jira: RHEL-54670
RH-Acked-by: Eric Blake <eblake@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Commit: [20/22] 46aecb1c268dcf666c8c7ef8e0d2f3fecaa934e2 (kmwolf/centos-qemu-kvm)
The open-coded form of this filter has been copied into enough tests
that it's better to move it into iotests.py.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Acked-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-ID: <20250204211407.381505-15-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit ed26db83673f4a190332d2a378e2f6e342b8904d)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
tests/qemu-iotests/041 | 4 +---
tests/qemu-iotests/165 | 4 +---
tests/qemu-iotests/iotests.py | 4 ++++
tests/qemu-iotests/tests/copy-before-write | 3 +--
tests/qemu-iotests/tests/migrate-bitmaps-test | 7 +++----
5 files changed, 10 insertions(+), 12 deletions(-)
diff --git a/tests/qemu-iotests/041 b/tests/qemu-iotests/041
index 98d17b1388..8452845f44 100755
--- a/tests/qemu-iotests/041
+++ b/tests/qemu-iotests/041
@@ -1100,10 +1100,8 @@ class TestRepairQuorum(iotests.QMPTestCase):
# Check the full error message now
self.vm.shutdown()
- log = self.vm.get_log()
- log = re.sub(r'^\[I \d+\.\d+\] OPENED\n', '', log)
+ log = iotests.filter_qtest(self.vm.get_log())
log = re.sub(r'^Formatting.*\n', '', log)
- log = re.sub(r'\n\[I \+\d+\.\d+\] CLOSED\n?$', '', log)
log = re.sub(r'^%s: ' % os.path.basename(iotests.qemu_prog), '', log)
self.assertEqual(log,
diff --git a/tests/qemu-iotests/165 b/tests/qemu-iotests/165
index b24907a62f..b3b1709d71 100755
--- a/tests/qemu-iotests/165
+++ b/tests/qemu-iotests/165
@@ -82,9 +82,7 @@ class TestPersistentDirtyBitmap(iotests.QMPTestCase):
self.vm.shutdown()
#catch 'Persistent bitmaps are lost' possible error
- log = self.vm.get_log()
- log = re.sub(r'^\[I \d+\.\d+\] OPENED\n', '', log)
- log = re.sub(r'\[I \+\d+\.\d+\] CLOSED\n?$', '', log)
+ log = iotests.filter_qtest(self.vm.get_log())
if log:
print(log)
diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index ea48af4a7b..1a42aa1416 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -701,6 +701,10 @@ def _filter(_key, value):
def filter_nbd_exports(output: str) -> str:
return re.sub(r'((min|opt|max) block): [0-9]+', r'\1: XXX', output)
+def filter_qtest(output: str) -> str:
+ output = re.sub(r'^\[I \d+\.\d+\] OPENED\n', '', output)
+ output = re.sub(r'\n?\[I \+\d+\.\d+\] CLOSED\n?$', '', output)
+ return output
Msg = TypeVar('Msg', Dict[str, Any], List[Any], str)
diff --git a/tests/qemu-iotests/tests/copy-before-write b/tests/qemu-iotests/tests/copy-before-write
index d33bea577d..498c558008 100755
--- a/tests/qemu-iotests/tests/copy-before-write
+++ b/tests/qemu-iotests/tests/copy-before-write
@@ -95,8 +95,7 @@ class TestCbwError(iotests.QMPTestCase):
self.vm.shutdown()
log = self.vm.get_log()
- log = re.sub(r'^\[I \d+\.\d+\] OPENED\n', '', log)
- log = re.sub(r'\[I \+\d+\.\d+\] CLOSED\n?$', '', log)
+ log = iotests.filter_qtest(log)
log = iotests.filter_qemu_io(log)
return log
diff --git a/tests/qemu-iotests/tests/migrate-bitmaps-test b/tests/qemu-iotests/tests/migrate-bitmaps-test
index f98e721e97..8fb4099201 100755
--- a/tests/qemu-iotests/tests/migrate-bitmaps-test
+++ b/tests/qemu-iotests/tests/migrate-bitmaps-test
@@ -122,11 +122,10 @@ class TestDirtyBitmapMigration(iotests.QMPTestCase):
# catch 'Could not reopen qcow2 layer: Bitmap already exists'
# possible error
- log = self.vm_a.get_log()
- log = re.sub(r'^\[I \d+\.\d+\] OPENED\n', '', log)
- log = re.sub(r'^(wrote .* bytes at offset .*\n.*KiB.*ops.*sec.*\n){3}',
+ log = iotests.filter_qtest(self.vm_a.get_log())
+ log = re.sub(r'^(wrote .* bytes at offset .*\n'
+ r'.*KiB.*ops.*sec.*\n?){3}',
'', log)
- log = re.sub(r'\[I \+\d+\.\d+\] CLOSED\n?$', '', log)
self.assertEqual(log, '')
# test that bitmap is still persistent
--
2.39.3


@ -0,0 +1,244 @@
From 10bfceb42fa5a7cb0cd0286e85f63da3bed3d806 Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Tue, 4 Feb 2025 22:14:06 +0100
Subject: [PATCH 21/22] iotests: Add qsd-migrate case
RH-Author: Kevin Wolf <kwolf@redhat.com>
RH-MergeRequest: 340: QMP command for block device reactivation after migration
RH-Jira: RHEL-54670
RH-Acked-by: Eric Blake <eblake@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Commit: [21/22] 50e7160617762ec15cc63f4062d47d65268c551a (kmwolf/centos-qemu-kvm)
Test that it's possible to migrate a VM that uses an image on shared
storage through qemu-storage-daemon.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Acked-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-ID: <20250204211407.381505-16-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 3ea437ab3d561ca79b95a34c5128e370de4738e3)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
tests/qemu-iotests/tests/qsd-migrate | 140 +++++++++++++++++++++++
tests/qemu-iotests/tests/qsd-migrate.out | 59 ++++++++++
2 files changed, 199 insertions(+)
create mode 100755 tests/qemu-iotests/tests/qsd-migrate
create mode 100644 tests/qemu-iotests/tests/qsd-migrate.out
diff --git a/tests/qemu-iotests/tests/qsd-migrate b/tests/qemu-iotests/tests/qsd-migrate
new file mode 100755
index 0000000000..de17562cb0
--- /dev/null
+++ b/tests/qemu-iotests/tests/qsd-migrate
@@ -0,0 +1,140 @@
+#!/usr/bin/env python3
+# group: rw quick
+#
+# Copyright (C) Red Hat, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+#
+# Creator/Owner: Kevin Wolf <kwolf@redhat.com>
+
+import iotests
+
+from iotests import filter_qemu_io, filter_qtest
+
+iotests.script_initialize(supported_fmts=['generic'],
+ supported_protocols=['file'],
+ supported_platforms=['linux'])
+
+with iotests.FilePath('disk.img') as path, \
+ iotests.FilePath('nbd-src.sock', base_dir=iotests.sock_dir) as nbd_src, \
+ iotests.FilePath('nbd-dst.sock', base_dir=iotests.sock_dir) as nbd_dst, \
+ iotests.FilePath('migrate.sock', base_dir=iotests.sock_dir) as mig_sock, \
+ iotests.VM(path_suffix="-src") as vm_src, \
+ iotests.VM(path_suffix="-dst") as vm_dst:
+
+ img_size = '10M'
+
+ iotests.log('Preparing disk...')
+ iotests.qemu_img_create('-f', iotests.imgfmt, path, img_size)
+
+ iotests.log('Launching source QSD...')
+ qsd_src = iotests.QemuStorageDaemon(
+ '--blockdev', f'file,node-name=disk-file,filename={path}',
+ '--blockdev', f'{iotests.imgfmt},file=disk-file,node-name=disk-fmt',
+ '--nbd-server', f'addr.type=unix,addr.path={nbd_src}',
+ '--export', 'nbd,id=exp0,node-name=disk-fmt,writable=true,'
+ 'allow-inactive=true',
+ qmp=True,
+ )
+
+ iotests.log('Launching source VM...')
+ vm_src.add_args('-blockdev', f'nbd,node-name=disk,server.type=unix,'
+ f'server.path={nbd_src},export=disk-fmt')
+ vm_src.add_args('-device', 'virtio-blk,drive=disk,id=virtio0')
+ vm_src.launch()
+
+ iotests.log('Launching destination QSD...')
+ qsd_dst = iotests.QemuStorageDaemon(
+ '--blockdev', f'file,node-name=disk-file,filename={path},active=off',
+ '--blockdev', f'{iotests.imgfmt},file=disk-file,node-name=disk-fmt,'
+ f'active=off',
+ '--nbd-server', f'addr.type=unix,addr.path={nbd_dst}',
+ '--export', 'nbd,id=exp0,node-name=disk-fmt,writable=true,'
+ 'allow-inactive=true',
+ qmp=True,
+ instance_id='b',
+ )
+
+ iotests.log('Launching destination VM...')
+ vm_dst.add_args('-blockdev', f'nbd,node-name=disk,server.type=unix,'
+ f'server.path={nbd_dst},export=disk-fmt')
+ vm_dst.add_args('-device', 'virtio-blk,drive=disk,id=virtio0')
+ vm_dst.add_args('-incoming', f'unix:{mig_sock}')
+ vm_dst.launch()
+
+ iotests.log('\nTest I/O on the source')
+ vm_src.hmp_qemu_io('virtio0/virtio-backend', 'write -P 0x11 0 4k',
+ use_log=True, qdev=True)
+ vm_src.hmp_qemu_io('virtio0/virtio-backend', 'read -P 0x11 0 4k',
+ use_log=True, qdev=True)
+
+ iotests.log('\nStarting migration...')
+
+ mig_caps = [
+ {'capability': 'events', 'state': True},
+ {'capability': 'pause-before-switchover', 'state': True},
+ ]
+ vm_src.qmp_log('migrate-set-capabilities', capabilities=mig_caps)
+ vm_dst.qmp_log('migrate-set-capabilities', capabilities=mig_caps)
+ vm_src.qmp_log('migrate', uri=f'unix:{mig_sock}',
+ filters=[iotests.filter_qmp_testfiles])
+
+ vm_src.event_wait('MIGRATION',
+ match={'data': {'status': 'pre-switchover'}})
+
+ iotests.log('\nPre-switchover: Reconfigure QSD instances')
+
+ iotests.log(qsd_src.qmp('blockdev-set-active', {'active': False}))
+
+ # Reading is okay from both sides while the image is inactive. Note that
+ # the destination may have stale data until it activates the image, though.
+ vm_src.hmp_qemu_io('virtio0/virtio-backend', 'read -P 0x11 0 4k',
+ use_log=True, qdev=True)
+ vm_dst.hmp_qemu_io('virtio0/virtio-backend', 'read 0 4k',
+ use_log=True, qdev=True)
+
+ iotests.log(qsd_dst.qmp('blockdev-set-active', {'active': True}))
+
+ iotests.log('\nCompleting migration...')
+
+ vm_src.qmp_log('migrate-continue', state='pre-switchover')
+ vm_dst.event_wait('MIGRATION', match={'data': {'status': 'completed'}})
+
+ iotests.log('\nTest I/O on the destination')
+
+ # Now the destination must see what the source wrote
+ vm_dst.hmp_qemu_io('virtio0/virtio-backend', 'read -P 0x11 0 4k',
+ use_log=True, qdev=True)
+
+ # And be able to overwrite it
+ vm_dst.hmp_qemu_io('virtio0/virtio-backend', 'write -P 0x22 0 4k',
+ use_log=True, qdev=True)
+ vm_dst.hmp_qemu_io('virtio0/virtio-backend', 'read -P 0x22 0 4k',
+ use_log=True, qdev=True)
+
+ iotests.log('\nDone')
+
+ vm_src.shutdown()
+ iotests.log('\n--- vm_src log ---')
+ log = vm_src.get_log()
+ if log:
+ iotests.log(log, [filter_qtest, filter_qemu_io])
+ qsd_src.stop()
+
+ vm_dst.shutdown()
+ iotests.log('\n--- vm_dst log ---')
+ log = vm_dst.get_log()
+ if log:
+ iotests.log(log, [filter_qtest, filter_qemu_io])
+ qsd_dst.stop()
diff --git a/tests/qemu-iotests/tests/qsd-migrate.out b/tests/qemu-iotests/tests/qsd-migrate.out
new file mode 100644
index 0000000000..4a5241e5d4
--- /dev/null
+++ b/tests/qemu-iotests/tests/qsd-migrate.out
@@ -0,0 +1,59 @@
+Preparing disk...
+Launching source QSD...
+Launching source VM...
+Launching destination QSD...
+Launching destination VM...
+
+Test I/O on the source
+{"execute": "human-monitor-command", "arguments": {"command-line": "qemu-io -d virtio0/virtio-backend \"write -P 0x11 0 4k\""}}
+{"return": ""}
+{"execute": "human-monitor-command", "arguments": {"command-line": "qemu-io -d virtio0/virtio-backend \"read -P 0x11 0 4k\""}}
+{"return": ""}
+
+Starting migration...
+{"execute": "migrate-set-capabilities", "arguments": {"capabilities": [{"capability": "events", "state": true}, {"capability": "pause-before-switchover", "state": true}]}}
+{"return": {}}
+{"execute": "migrate-set-capabilities", "arguments": {"capabilities": [{"capability": "events", "state": true}, {"capability": "pause-before-switchover", "state": true}]}}
+{"return": {}}
+{"execute": "migrate", "arguments": {"uri": "unix:SOCK_DIR/PID-migrate.sock"}}
+{"return": {}}
+
+Pre-switchover: Reconfigure QSD instances
+{"return": {}}
+{"execute": "human-monitor-command", "arguments": {"command-line": "qemu-io -d virtio0/virtio-backend \"read -P 0x11 0 4k\""}}
+{"return": ""}
+{"execute": "human-monitor-command", "arguments": {"command-line": "qemu-io -d virtio0/virtio-backend \"read 0 4k\""}}
+{"return": ""}
+{"return": {}}
+
+Completing migration...
+{"execute": "migrate-continue", "arguments": {"state": "pre-switchover"}}
+{"return": {}}
+
+Test I/O on the destination
+{"execute": "human-monitor-command", "arguments": {"command-line": "qemu-io -d virtio0/virtio-backend \"read -P 0x11 0 4k\""}}
+{"return": ""}
+{"execute": "human-monitor-command", "arguments": {"command-line": "qemu-io -d virtio0/virtio-backend \"write -P 0x22 0 4k\""}}
+{"return": ""}
+{"execute": "human-monitor-command", "arguments": {"command-line": "qemu-io -d virtio0/virtio-backend \"read -P 0x22 0 4k\""}}
+{"return": ""}
+
+Done
+
+--- vm_src log ---
+wrote 4096/4096 bytes at offset 0
+4 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 4096/4096 bytes at offset 0
+4 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 4096/4096 bytes at offset 0
+4 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+
+--- vm_dst log ---
+read 4096/4096 bytes at offset 0
+4 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 4096/4096 bytes at offset 0
+4 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+wrote 4096/4096 bytes at offset 0
+4 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 4096/4096 bytes at offset 0
+4 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--
2.39.3


@ -0,0 +1,84 @@
From b6ed71f7b16e09a29ab479f437805d83ee0c85e0 Mon Sep 17 00:00:00 2001
From: Peter Xu <peterx@redhat.com>
Date: Fri, 6 Dec 2024 18:08:33 -0500
Subject: [PATCH 01/22] migration: Add helper to get target runstate
RH-Author: Kevin Wolf <kwolf@redhat.com>
RH-MergeRequest: 340: QMP command for block device reactivation after migration
RH-Jira: RHEL-54670
RH-Acked-by: Eric Blake <eblake@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Commit: [1/22] a64178a0575a1b921e8a868a8007d0a50eb7ae29 (kmwolf/centos-qemu-kvm)
In 99% of cases, after QEMU migrates to the destination host, it tries to
detect the target VM runstate using global_state_get_runstate().
There's one outlier so far, Xen, which doesn't send the global state.
That's the major reason why the global_state_received() check has always
accompanied global_state_get_runstate().
However, it's utterly confusing why global_state_received() should have
anything to do with "let's start the VM or not".
Provide a helper to make this explicit, so that we have a unified entry
point for getting the target destination QEMU runstate after migration.
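For orientation, a condensed sketch of the helper's decision logic (QEMU's
RunState and the global_state_*() accessors are only declared as stand-ins
here; the real hunk follows below):

    #include <stdbool.h>

    typedef enum { RUN_STATE_RUNNING, RUN_STATE_PAUSED } RunState;

    bool global_state_received(void);
    RunState global_state_get_runstate(void);

    static RunState migration_get_target_runstate(void)
    {
        /* No global state on the wire (so far only Xen): assume the
         * source VM was running. */
        if (!global_state_received()) {
            return RUN_STATE_RUNNING;
        }
        return global_state_get_runstate();
    }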
Suggested-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20241206230838.1111496-2-peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
(cherry picked from commit 7815f69867da92335055d4b5248430b0f122ce4e)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
migration/migration.c | 21 +++++++++++++++++----
1 file changed, 17 insertions(+), 4 deletions(-)
diff --git a/migration/migration.c b/migration/migration.c
index 3dea06d577..c7a9e2e026 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -135,6 +135,21 @@ static bool migration_needs_multiple_sockets(void)
return migrate_multifd() || migrate_postcopy_preempt();
}
+static RunState migration_get_target_runstate(void)
+{
+ /*
+ * When the global state is not migrated, it means we don't know the
+ * runstate of the src QEMU. We don't have much choice but assuming
+ * the VM is running. NOTE: this is pretty rare case, so far only Xen
+ * uses it.
+ */
+ if (!global_state_received()) {
+ return RUN_STATE_RUNNING;
+ }
+
+ return global_state_get_runstate();
+}
+
static bool transport_supports_multi_channels(MigrationAddress *addr)
{
if (addr->transport == MIGRATION_ADDRESS_TYPE_SOCKET) {
@@ -727,8 +742,7 @@ static void process_incoming_migration_bh(void *opaque)
* unless we really are starting the VM.
*/
if (!migrate_late_block_activate() ||
- (autostart && (!global_state_received() ||
- runstate_is_live(global_state_get_runstate())))) {
+ (autostart && runstate_is_live(migration_get_target_runstate()))) {
/* Make sure all file formats throw away their mutable metadata.
* If we get an error here, just don't restart the VM yet. */
bdrv_activate_all(&local_err);
@@ -751,8 +765,7 @@ static void process_incoming_migration_bh(void *opaque)
dirty_bitmap_mig_before_vm_start();
- if (!global_state_received() ||
- runstate_is_live(global_state_get_runstate())) {
+ if (runstate_is_live(migration_get_target_runstate())) {
if (autostart) {
vm_start();
} else {
--
2.39.3


@ -0,0 +1,72 @@
From 48773d81978e4c355445cb767c6c8b5346555092 Mon Sep 17 00:00:00 2001
From: Peter Xu <peterx@redhat.com>
Date: Fri, 6 Dec 2024 18:08:36 -0500
Subject: [PATCH 04/22] migration/block: Apply late-block-active behavior to
postcopy
RH-Author: Kevin Wolf <kwolf@redhat.com>
RH-MergeRequest: 340: QMP command for block device reactivation after migration
RH-Jira: RHEL-54670
RH-Acked-by: Eric Blake <eblake@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Commit: [4/22] 917b74fe13976f066f7c31dbd0eee85f424bbab5 (kmwolf/centos-qemu-kvm)
Postcopy never cared about late-block-active, yet nothing in the
capability's documentation says that it doesn't apply to postcopy.
Considering that we _assumed_ late activation is always good, do it
unconditionally for postcopy too, just like for precopy. After this patch,
the behavior should be unified across all paths.
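Schematically, the destination's postcopy run handler now only touches the
block layer when it is really about to start the CPUs (a sketch of the
reordered logic, not the literal hunk):

    if (autostart) {
        /* Make sure all file formats throw away their mutable metadata;
         * if this fails, just don't restart the VM yet. */
        bdrv_activate_all(&local_err);
        if (local_err) {
            error_report_err(local_err);
            local_err = NULL;
        } else {
            vm_start();
        }
    } else {
        /* Leave it paused and let management decide when to start the CPU. */
        runstate_set(RUN_STATE_PAUSED);
    }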
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Message-Id: <20241206230838.1111496-5-peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
(cherry picked from commit 61f2b489987c51159c53101a072c6aa901b50506)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
migration/savevm.c | 25 ++++++++++++-------------
1 file changed, 12 insertions(+), 13 deletions(-)
diff --git a/migration/savevm.c b/migration/savevm.c
index 6bb404b9c8..a0c4befdc1 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2156,22 +2156,21 @@ static void loadvm_postcopy_handle_run_bh(void *opaque)
trace_vmstate_downtime_checkpoint("dst-postcopy-bh-announced");
- /* Make sure all file formats throw away their mutable metadata.
- * If we get an error here, just don't restart the VM yet. */
- bdrv_activate_all(&local_err);
- if (local_err) {
- error_report_err(local_err);
- local_err = NULL;
- autostart = false;
- }
-
- trace_vmstate_downtime_checkpoint("dst-postcopy-bh-cache-invalidated");
-
dirty_bitmap_mig_before_vm_start();
if (autostart) {
- /* Hold onto your hats, starting the CPU */
- vm_start();
+ /*
+ * Make sure all file formats throw away their mutable metadata.
+ * If we get an error here, just don't restart the VM yet.
+ */
+ bdrv_activate_all(&local_err);
+ trace_vmstate_downtime_checkpoint("dst-postcopy-bh-cache-invalidated");
+ if (local_err) {
+ error_report_err(local_err);
+ local_err = NULL;
+ } else {
+ vm_start();
+ }
} else {
/* leave it paused and let management decide when to start the CPU */
runstate_set(RUN_STATE_PAUSED);
--
2.39.3


@ -0,0 +1,78 @@
From 3c6e09fe92972513d38c15c03db29a6843e44d3d Mon Sep 17 00:00:00 2001
From: Peter Xu <peterx@redhat.com>
Date: Fri, 6 Dec 2024 18:08:37 -0500
Subject: [PATCH 05/22] migration/block: Fix possible race with block_inactive
RH-Author: Kevin Wolf <kwolf@redhat.com>
RH-MergeRequest: 340: QMP command for block device reactivation after migration
RH-Jira: RHEL-54670
RH-Acked-by: Eric Blake <eblake@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Commit: [5/22] a88a20817cb28674367cc57dfe16e6c60c7122b1 (kmwolf/centos-qemu-kvm)
The source QEMU sets block_inactive=true very early, before the
invalidation takes place. This means that if something goes wrong after
setting the flag but before reaching
qemu_savevm_state_complete_precopy_non_iterable(), where the invalidation
work is actually done, the block_inactive flag ends up inconsistent.
For example, consider a failure in
qemu_savevm_state_complete_precopy_iterable(): block_inactive would be left
set to true even though all block drives are still active.
Fix that by only updating the flag after the invalidation is done.
No Fixes tag for any commit, because this isn't a real issue as long as
bdrv_activate_all() is re-entrant on all-active disks - a false-positive
block_inactive amounts to nothing more than "trying to activate the blocks,
but they're already active". However, let's still do it right and avoid a
flag that is inconsistent with reality.
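Schematically, the ordering changes as follows (fragments based on the
hunks below, not compilable on their own):

    /* Old: the flag was raised before any invalidation had happened. */
    s->block_inactive = !migrate_colo();
    ret = qemu_savevm_state_complete_precopy(s->to_dst_file, false,
                                             s->block_inactive);

    /* New: only pass the intent down; the non-iterable completion path
     * sets s->block_inactive = true after bdrv_inactivate_all() has
     * actually succeeded. */
    ret = qemu_savevm_state_complete_precopy(s->to_dst_file, false,
                                             !migrate_colo());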
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Message-Id: <20241206230838.1111496-6-peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
(cherry picked from commit 8c97c5a476d146b35b2873ef73df601216a494d9)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
migration/migration.c | 9 +++------
migration/savevm.c | 2 ++
2 files changed, 5 insertions(+), 6 deletions(-)
diff --git a/migration/migration.c b/migration/migration.c
index 8a262e01ff..784b7e9b90 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2779,14 +2779,11 @@ static int migration_completion_precopy(MigrationState *s,
goto out_unlock;
}
- /*
- * Inactivate disks except in COLO, and track that we have done so in order
- * to remember to reactivate them if migration fails or is cancelled.
- */
- s->block_inactive = !migrate_colo();
migration_rate_set(RATE_LIMIT_DISABLED);
+
+ /* Inactivate disks except in COLO */
ret = qemu_savevm_state_complete_precopy(s->to_dst_file, false,
- s->block_inactive);
+ !migrate_colo());
out_unlock:
bql_unlock();
return ret;
diff --git a/migration/savevm.c b/migration/savevm.c
index a0c4befdc1..b88dadd904 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1577,6 +1577,8 @@ int qemu_savevm_state_complete_precopy_non_iterable(QEMUFile *f,
qemu_file_set_error(f, ret);
return ret;
}
+ /* Remember that we did this */
+ s->block_inactive = true;
}
if (!in_postcopy) {
/* Postcopy stream will still be going */
--
2.39.3


@ -0,0 +1,94 @@
From e97150d6dad119d3dd234c25f9b0373a2c323299 Mon Sep 17 00:00:00 2001
From: Peter Xu <peterx@redhat.com>
Date: Fri, 6 Dec 2024 18:08:35 -0500
Subject: [PATCH 03/22] migration/block: Make late-block-active the default
RH-Author: Kevin Wolf <kwolf@redhat.com>
RH-MergeRequest: 340: QMP command for block device reactivation after migration
RH-Jira: RHEL-54670
RH-Acked-by: Eric Blake <eblake@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Commit: [3/22] 9e197765811f282cd133013bd949be9bc49ca249 (kmwolf/centos-qemu-kvm)
The migration capability 'late-block-active' controls when the block drives
are activated. If enabled, block drives are only activated when the VM
starts: immediately if the source runstate was "live" (RUNNING or
SUSPENDED), otherwise postponed until qmp_cont().
Let's do this unconditionally. There's no harm in delaying the activation
of block drives, and there's no ABI concern if the destination does it,
because the source QEMU isn't involved at all.
IIUC we could have avoided introducing this capability in the first place,
but it's still not too late to just always do it. The capability is now a
candidate for removal, but that's left for later patches.
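Schematically, the destination's precopy path now always behaves as if
late-block-active were enabled (a sketch of the resulting logic in
process_incoming_migration_bh(), not the literal hunk):

    if (runstate_is_live(migration_get_target_runstate())) {
        if (autostart) {
            /* Activation is delayed until the VM really starts here;
             * otherwise qmp_cont() will take care of it later. */
            bdrv_activate_all(&local_err);
            if (local_err) {
                error_report_err(local_err);
                local_err = NULL;
            } else {
                vm_start();
            }
        } else {
            runstate_set(RUN_STATE_PAUSED);
        }
    }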
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Message-Id: <20241206230838.1111496-4-peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
(cherry picked from commit fca9aef1c8d8fc4482cc541638dbfac76dc125d6)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
migration/migration.c | 38 +++++++++++++++++++-------------------
1 file changed, 19 insertions(+), 19 deletions(-)
diff --git a/migration/migration.c b/migration/migration.c
index c7a9e2e026..8a262e01ff 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -735,24 +735,6 @@ static void process_incoming_migration_bh(void *opaque)
trace_vmstate_downtime_checkpoint("dst-precopy-bh-enter");
- /* If capability late_block_activate is set:
- * Only fire up the block code now if we're going to restart the
- * VM, else 'cont' will do it.
- * This causes file locking to happen; so we don't want it to happen
- * unless we really are starting the VM.
- */
- if (!migrate_late_block_activate() ||
- (autostart && runstate_is_live(migration_get_target_runstate()))) {
- /* Make sure all file formats throw away their mutable metadata.
- * If we get an error here, just don't restart the VM yet. */
- bdrv_activate_all(&local_err);
- if (local_err) {
- error_report_err(local_err);
- local_err = NULL;
- autostart = false;
- }
- }
-
/*
* This must happen after all error conditions are dealt with and
* we're sure the VM is going to be running on this host.
@@ -767,7 +749,25 @@ static void process_incoming_migration_bh(void *opaque)
if (runstate_is_live(migration_get_target_runstate())) {
if (autostart) {
- vm_start();
+ /*
+ * Block activation is always delayed until VM starts, either
+ * here (which means we need to start the dest VM right now..),
+ * or until qmp_cont() later.
+ *
+ * We used to have cap 'late-block-activate' but now we do this
+ * unconditionally, as it has no harm but only benefit. E.g.,
+ * it's not part of migration ABI on the time of disk activation.
+ *
+ * Make sure all file formats throw away their mutable
+ * metadata. If error, don't restart the VM yet.
+ */
+ bdrv_activate_all(&local_err);
+ if (local_err) {
+ error_report_err(local_err);
+ local_err = NULL;
+ } else {
+ vm_start();
+ }
} else {
runstate_set(RUN_STATE_PAUSED);
}
--
2.39.3


@ -0,0 +1,565 @@
From d97a28baf4a05c67bf644ac543a3f48a0f2875c0 Mon Sep 17 00:00:00 2001
From: Peter Xu <peterx@redhat.com>
Date: Fri, 6 Dec 2024 18:08:38 -0500
Subject: [PATCH 06/22] migration/block: Rewrite disk activation
RH-Author: Kevin Wolf <kwolf@redhat.com>
RH-MergeRequest: 340: QMP command for block device reactivation after migration
RH-Jira: RHEL-54670
RH-Acked-by: Eric Blake <eblake@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Commit: [6/22] c029cea2613097e1f26c563e6c220a00caa18501 (kmwolf/centos-qemu-kvm)
This patch proposes a flag to maintain the disk activation status globally.
It mostly rewrites disk activation management for QEMU, including COLO and
the QMP command xen_save_devices_state.
Background
==========
We have two problems with disk activation: one resolved, one not.
Problem 1: disk activation recover (for switchover interruptions)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When migration is either cancelled or fails during switchover, especially
after the disks have been inactivated, QEMU needs to remember to
re-activate the disks before the VM starts.
This used to be done separately in two paths: one in qmp_migrate_cancel(),
the other in the failure path of migration_completion().
It was fixed in different commits, scattered all over the place in QEMU.
These are the relevant changes I saw; I'm not sure the list is complete:
- In 2016, commit fe904ea824 ("migration: regain control of images when
migration fails to complete")
- In 2017, commit 1d2acc3162 ("migration: re-active images while migration
been canceled after inactive them")
- In 2023, commit 6dab4c93ec ("migration: Attempt disk reactivation in
more failure scenarios")
Now that we have a slightly better picture, maybe we can unify the
reactivation in a single path.
One side benefit of doing so is that we can move the disk operation out of
the QMP command "migrate_cancel". In the future we may want to make
"migrate_cancel" OOB-compatible, which requires that the command not need
the BQL in the first place. This change already achieves that and makes the
migrate_cancel command lightweight.
Problem 2: disk invalidation on top of invalidated disks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is an unresolved bug in current QEMU (see the "Resolves:" link at the
end). It turns out that besides the source switchover phase (problem 1
above), QEMU also needs to remember the block activation state on the
destination.
Consider two migrations in a row where the VM is always paused. In that
scenario, the disks are not activated even after the 1st migration has
completed. When the 2nd round starts, if QEMU doesn't know the status of
the disks, it has to try to inactivate them again.
Here the issue is that the block layer API bdrv_inactivate_all() will crash
QEMU if invoked on already inactive disks during the 2nd migration. For
details, see the bug link at the end.
Implementation
==============
This patch proposes to maintain disk activation with a global flag, so we
know:
- If we inactivated the disks for migration but the migration got cancelled
or failed, QEMU knows it should reactivate the disks.
- On the incoming side, if the disks have never been activated and another
migration is triggered, QEMU can tell that inactivation is not needed for
the 2nd migration.
We used to have block_inactive, but it only solves the 1st issue, not the
2nd. Also, it is handled in completely separate paths, so it's extremely
hard to follow how the flag changes, for how long it is valid, and when we
will reactivate the disks.
Convert the existing block_inactive flag into that global flag (also
inverting its sense in the name), and maintain the disk activation status
for the whole lifecycle of QEMU. That includes the incoming QEMU.
Put both error cases of source migration (failure and cancellation)
together into migration_iteration_finish(), which is invoked in either
scenario, so in that respect QEMU behaves the same as before. With the
activation status maintained globally, we not only clean up quite a few
ad-hoc paths that tried to track it (e.g. in the postcopy code), but also
fix the crash of problem 2 in one shot.
For a freshly started QEMU, the flag is initialized to TRUE, showing that
QEMU owns the disks by default.
For an incoming migrated QEMU, the flag is initialized to FALSE once and
for all, showing that the destination QEMU doesn't own the disks until
switchover. That is guaranteed by the "once" variable.
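As a rough sketch of the resulting wrappers (the full
migration/block-active.c is in the hunk below; tracing and error reporting
are trimmed, and the QEMU block-layer functions are only declared here):

    #include <stdbool.h>

    typedef struct Error Error;
    void bdrv_activate_all(Error **errp);
    int bdrv_inactivate_all(void);

    /* Cached activation status; protected by the BQL in the real code. */
    static bool migration_block_active;

    bool migration_block_activate(Error **errp)
    {
        if (migration_block_active) {
            return true;                /* nothing to do */
        }
        bdrv_activate_all(errp);
        if (errp && *errp) {
            return false;
        }
        migration_block_active = true;
        return true;
    }

    bool migration_block_inactivate(void)
    {
        if (!migration_block_active) {
            return true;                /* avoid inactivating twice */
        }
        if (bdrv_inactivate_all() != 0) {
            return false;
        }
        migration_block_active = false;
        return true;
    }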
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2395
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Message-Id: <20241206230838.1111496-7-peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
(cherry picked from commit 8597af76153a87068b675d8099063c3ad8695773)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
include/migration/misc.h | 4 ++
migration/block-active.c | 94 ++++++++++++++++++++++++++++++++++++++++
migration/colo.c | 2 +-
migration/meson.build | 1 +
migration/migration.c | 80 ++++++++--------------------------
migration/migration.h | 5 +--
migration/savevm.c | 33 ++++++--------
migration/trace-events | 3 ++
monitor/qmp-cmds.c | 8 +---
9 files changed, 139 insertions(+), 91 deletions(-)
create mode 100644 migration/block-active.c
diff --git a/include/migration/misc.h b/include/migration/misc.h
index bfadc5613b..35ca8e1194 100644
--- a/include/migration/misc.h
+++ b/include/migration/misc.h
@@ -111,4 +111,8 @@ bool migration_in_bg_snapshot(void);
/* migration/block-dirty-bitmap.c */
void dirty_bitmap_mig_init(void);
+/* Wrapper for block active/inactive operations */
+bool migration_block_activate(Error **errp);
+bool migration_block_inactivate(void);
+
#endif
diff --git a/migration/block-active.c b/migration/block-active.c
new file mode 100644
index 0000000000..d477cf8182
--- /dev/null
+++ b/migration/block-active.c
@@ -0,0 +1,94 @@
+/*
+ * Block activation tracking for migration purpose
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ *
+ * Copyright (C) 2024 Red Hat, Inc.
+ */
+#include "qemu/osdep.h"
+#include "block/block.h"
+#include "qapi/error.h"
+#include "migration/migration.h"
+#include "qemu/error-report.h"
+#include "trace.h"
+
+/*
+ * Migration-only cache to remember the block layer activation status.
+ * Protected by BQL.
+ *
+ * We need this because..
+ *
+ * - Migration can fail after block devices are invalidated (during
+ * switchover phase). When that happens, we need to be able to recover
+ * the block drive status by re-activating them.
+ *
+ * - Currently bdrv_inactivate_all() is not safe to be invoked on top of
+ * invalidated drives (even if bdrv_activate_all() is actually safe to be
+ * called any time!). It means remembering this could help migration to
+ * make sure it won't invalidate twice in a row, crashing QEMU. It can
+ * happen when we migrate a PAUSED VM from host1 to host2, then migrate
+ * again to host3 without starting it. TODO: a cleaner solution is to
+ * allow safe invoke of bdrv_inactivate_all() at anytime, like
+ * bdrv_activate_all().
+ *
+ * For freshly started QEMU, the flag is initialized to TRUE reflecting the
+ * scenario where QEMU owns block device ownerships.
+ *
+ * For incoming QEMU taking a migration stream, the flag is initialized to
+ * FALSE reflecting that the incoming side doesn't own the block devices,
+ * not until switchover happens.
+ */
+static bool migration_block_active;
+
+/* Setup the disk activation status */
+void migration_block_active_setup(bool active)
+{
+ migration_block_active = active;
+}
+
+bool migration_block_activate(Error **errp)
+{
+ ERRP_GUARD();
+
+ assert(bql_locked());
+
+ if (migration_block_active) {
+ trace_migration_block_activation("active-skipped");
+ return true;
+ }
+
+ trace_migration_block_activation("active");
+
+ bdrv_activate_all(errp);
+ if (*errp) {
+ error_report_err(error_copy(*errp));
+ return false;
+ }
+
+ migration_block_active = true;
+ return true;
+}
+
+bool migration_block_inactivate(void)
+{
+ int ret;
+
+ assert(bql_locked());
+
+ if (!migration_block_active) {
+ trace_migration_block_activation("inactive-skipped");
+ return true;
+ }
+
+ trace_migration_block_activation("inactive");
+
+ ret = bdrv_inactivate_all();
+ if (ret) {
+ error_report("%s: bdrv_inactivate_all() failed: %d",
+ __func__, ret);
+ return false;
+ }
+
+ migration_block_active = false;
+ return true;
+}
diff --git a/migration/colo.c b/migration/colo.c
index 6449490221..ab903f34cb 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -836,7 +836,7 @@ static void *colo_process_incoming_thread(void *opaque)
/* Make sure all file formats throw away their mutable metadata */
bql_lock();
- bdrv_activate_all(&local_err);
+ migration_block_activate(&local_err);
bql_unlock();
if (local_err) {
error_report_err(local_err);
diff --git a/migration/meson.build b/migration/meson.build
index 5ce2acb41e..6b79861d3c 100644
--- a/migration/meson.build
+++ b/migration/meson.build
@@ -11,6 +11,7 @@ migration_files = files(
system_ss.add(files(
'block-dirty-bitmap.c',
+ 'block-active.c',
'channel.c',
'channel-block.c',
'dirtyrate.c',
diff --git a/migration/migration.c b/migration/migration.c
index 784b7e9b90..38631d1206 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -730,7 +730,6 @@ static void qemu_start_incoming_migration(const char *uri, bool has_channels,
static void process_incoming_migration_bh(void *opaque)
{
- Error *local_err = NULL;
MigrationIncomingState *mis = opaque;
trace_vmstate_downtime_checkpoint("dst-precopy-bh-enter");
@@ -761,11 +760,7 @@ static void process_incoming_migration_bh(void *opaque)
* Make sure all file formats throw away their mutable
* metadata. If error, don't restart the VM yet.
*/
- bdrv_activate_all(&local_err);
- if (local_err) {
- error_report_err(local_err);
- local_err = NULL;
- } else {
+ if (migration_block_activate(NULL)) {
vm_start();
}
} else {
@@ -1562,16 +1557,6 @@ static void migrate_fd_cancel(MigrationState *s)
}
}
}
- if (s->state == MIGRATION_STATUS_CANCELLING && s->block_inactive) {
- Error *local_err = NULL;
-
- bdrv_activate_all(&local_err);
- if (local_err) {
- error_report_err(local_err);
- } else {
- s->block_inactive = false;
- }
- }
}
void migration_add_notifier_mode(NotifierWithReturn *notify,
@@ -1890,6 +1875,12 @@ void qmp_migrate_incoming(const char *uri, bool has_channels,
return;
}
+ /*
+ * Newly setup incoming QEMU. Mark the block active state to reflect
+ * that the src currently owns the disks.
+ */
+ migration_block_active_setup(false);
+
once = false;
}
@@ -2542,7 +2533,6 @@ static int postcopy_start(MigrationState *ms, Error **errp)
QIOChannelBuffer *bioc;
QEMUFile *fb;
uint64_t bandwidth = migrate_max_postcopy_bandwidth();
- bool restart_block = false;
int cur_state = MIGRATION_STATUS_ACTIVE;
if (migrate_postcopy_preempt()) {
@@ -2578,13 +2568,10 @@ static int postcopy_start(MigrationState *ms, Error **errp)
goto fail;
}
- ret = bdrv_inactivate_all();
- if (ret < 0) {
- error_setg_errno(errp, -ret, "%s: Failed in bdrv_inactivate_all()",
- __func__);
+ if (!migration_block_inactivate()) {
+ error_setg(errp, "%s: Failed in bdrv_inactivate_all()", __func__);
goto fail;
}
- restart_block = true;
/*
* Cause any non-postcopiable, but iterative devices to
@@ -2654,8 +2641,6 @@ static int postcopy_start(MigrationState *ms, Error **errp)
goto fail_closefb;
}
- restart_block = false;
-
/* Now send that blob */
if (qemu_savevm_send_packaged(ms->to_dst_file, bioc->data, bioc->usage)) {
error_setg(errp, "%s: Failed to send packaged data", __func__);
@@ -2700,17 +2685,7 @@ fail_closefb:
fail:
migrate_set_state(&ms->state, MIGRATION_STATUS_POSTCOPY_ACTIVE,
MIGRATION_STATUS_FAILED);
- if (restart_block) {
- /* A failure happened early enough that we know the destination hasn't
- * accessed block devices, so we're safe to recover.
- */
- Error *local_err = NULL;
-
- bdrv_activate_all(&local_err);
- if (local_err) {
- error_report_err(local_err);
- }
- }
+ migration_block_activate(NULL);
migration_call_notifiers(ms, MIG_EVENT_PRECOPY_FAILED, NULL);
bql_unlock();
return -1;
@@ -2808,31 +2783,6 @@ static void migration_completion_postcopy(MigrationState *s)
trace_migration_completion_postcopy_end_after_complete();
}
-static void migration_completion_failed(MigrationState *s,
- int current_active_state)
-{
- if (s->block_inactive && (s->state == MIGRATION_STATUS_ACTIVE ||
- s->state == MIGRATION_STATUS_DEVICE)) {
- /*
- * If not doing postcopy, vm_start() will be called: let's
- * regain control on images.
- */
- Error *local_err = NULL;
-
- bql_lock();
- bdrv_activate_all(&local_err);
- if (local_err) {
- error_report_err(local_err);
- } else {
- s->block_inactive = false;
- }
- bql_unlock();
- }
-
- migrate_set_state(&s->state, current_active_state,
- MIGRATION_STATUS_FAILED);
-}
-
/**
* migration_completion: Used by migration_thread when there's not much left.
* The caller 'breaks' the loop when this returns.
@@ -2886,7 +2836,8 @@ fail:
error_free(local_err);
}
- migration_completion_failed(s, current_active_state);
+ migrate_set_state(&s->state, current_active_state,
+ MIGRATION_STATUS_FAILED);
}
/**
@@ -3309,6 +3260,11 @@ static void migration_iteration_finish(MigrationState *s)
case MIGRATION_STATUS_FAILED:
case MIGRATION_STATUS_CANCELLED:
case MIGRATION_STATUS_CANCELLING:
+ /*
+ * Re-activate the block drives if they're inactivated. Note, COLO
+ * shouldn't use block_active at all, so it should be no-op there.
+ */
+ migration_block_activate(NULL);
if (runstate_is_live(s->vm_old_state)) {
if (!runstate_check(RUN_STATE_SHUTDOWN)) {
vm_start();
@@ -3869,6 +3825,8 @@ static void migration_instance_init(Object *obj)
ms->state = MIGRATION_STATUS_NONE;
ms->mbps = -1;
ms->pages_per_second = -1;
+ /* Freshly started QEMU owns all the block devices */
+ migration_block_active_setup(true);
qemu_sem_init(&ms->pause_sem, 0);
qemu_mutex_init(&ms->error_mutex);
diff --git a/migration/migration.h b/migration/migration.h
index 38aa1402d5..5b17c1344d 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -356,9 +356,6 @@ struct MigrationState {
/* Flag set once the migration thread is running (and needs joining) */
bool migration_thread_running;
- /* Flag set once the migration thread called bdrv_inactivate_all */
- bool block_inactive;
-
/* Migration is waiting for guest to unplug device */
QemuSemaphore wait_unplug_sem;
@@ -537,4 +534,6 @@ int migration_rp_wait(MigrationState *s);
*/
void migration_rp_kick(MigrationState *s);
+/* migration/block-active.c */
+void migration_block_active_setup(bool active);
#endif
diff --git a/migration/savevm.c b/migration/savevm.c
index b88dadd904..7f8d177462 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1566,19 +1566,18 @@ int qemu_savevm_state_complete_precopy_non_iterable(QEMUFile *f,
}
if (inactivate_disks) {
- /* Inactivate before sending QEMU_VM_EOF so that the
- * bdrv_activate_all() on the other end won't fail. */
- ret = bdrv_inactivate_all();
- if (ret) {
- error_setg(&local_err, "%s: bdrv_inactivate_all() failed (%d)",
- __func__, ret);
+ /*
+ * Inactivate before sending QEMU_VM_EOF so that the
+ * bdrv_activate_all() on the other end won't fail.
+ */
+ if (!migration_block_inactivate()) {
+ error_setg(&local_err, "%s: bdrv_inactivate_all() failed",
+ __func__);
migrate_set_error(ms, local_err);
error_report_err(local_err);
- qemu_file_set_error(f, ret);
+ qemu_file_set_error(f, -EFAULT);
return ret;
}
- /* Remember that we did this */
- s->block_inactive = true;
}
if (!in_postcopy) {
/* Postcopy stream will still be going */
@@ -2142,7 +2141,6 @@ static int loadvm_postcopy_handle_listen(MigrationIncomingState *mis)
static void loadvm_postcopy_handle_run_bh(void *opaque)
{
- Error *local_err = NULL;
MigrationIncomingState *mis = opaque;
trace_vmstate_downtime_checkpoint("dst-postcopy-bh-enter");
@@ -2165,12 +2163,11 @@ static void loadvm_postcopy_handle_run_bh(void *opaque)
* Make sure all file formats throw away their mutable metadata.
* If we get an error here, just don't restart the VM yet.
*/
- bdrv_activate_all(&local_err);
+ bool success = migration_block_activate(NULL);
+
trace_vmstate_downtime_checkpoint("dst-postcopy-bh-cache-invalidated");
- if (local_err) {
- error_report_err(local_err);
- local_err = NULL;
- } else {
+
+ if (success) {
vm_start();
}
} else {
@@ -3214,11 +3211,7 @@ void qmp_xen_save_devices_state(const char *filename, bool has_live, bool live,
* side of the migration take control of the images.
*/
if (live && !saved_vm_running) {
- ret = bdrv_inactivate_all();
- if (ret) {
- error_setg(errp, "%s: bdrv_inactivate_all() failed (%d)",
- __func__, ret);
- }
+ migration_block_inactivate();
}
}
diff --git a/migration/trace-events b/migration/trace-events
index 0b7c3324fb..62141dc2ff 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -377,3 +377,6 @@ migration_block_progression(unsigned percent) "Completed %u%%"
# page_cache.c
migration_pagecache_init(int64_t max_num_items) "Setting cache buckets to %" PRId64
migration_pagecache_insert(void) "Error allocating page"
+
+# block-active.c
+migration_block_activation(const char *name) "%s"
diff --git a/monitor/qmp-cmds.c b/monitor/qmp-cmds.c
index 76f21e8af3..6f76d9beaf 100644
--- a/monitor/qmp-cmds.c
+++ b/monitor/qmp-cmds.c
@@ -31,6 +31,7 @@
#include "qapi/type-helpers.h"
#include "hw/mem/memory-device.h"
#include "hw/intc/intc.h"
+#include "migration/misc.h"
NameInfo *qmp_query_name(Error **errp)
{
@@ -103,13 +104,8 @@ void qmp_cont(Error **errp)
* Continuing after completed migration. Images have been
* inactivated to allow the destination to take control. Need to
* get control back now.
- *
- * If there are no inactive block nodes (e.g. because the VM was
- * just paused rather than completing a migration),
- * bdrv_inactivate_all() simply doesn't do anything.
*/
- bdrv_activate_all(&local_err);
- if (local_err) {
+ if (!migration_block_activate(&local_err)) {
error_propagate(errp, local_err);
return;
}
--
2.39.3


@ -0,0 +1,158 @@
From 927a37838380b5405596795f0f8968ac8b94bda2 Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Tue, 4 Feb 2025 22:13:55 +0100
Subject: [PATCH 10/22] migration/block-active: Remove global active flag
RH-Author: Kevin Wolf <kwolf@redhat.com>
RH-MergeRequest: 340: QMP command for block device reactivation after migration
RH-Jira: RHEL-54670
RH-Acked-by: Eric Blake <eblake@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Commit: [10/22] caa43249916319b11a18994510e68176fae61d50 (kmwolf/centos-qemu-kvm)
Block devices have an individual active state; a single global flag
can't cover this correctly. This becomes more important as we allow
users to manually manage which nodes are active or inactive.
Now that it's allowed to call bdrv_inactivate_all() even when some
nodes are already inactive, we can remove the flag and just
unconditionally call bdrv_inactivate_all() and, more importantly,
bdrv_activate_all() before we make use of the nodes.
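After this change the wrappers reduce to straight calls into the block
layer (a sketch of the result; error reporting trimmed, prototypes standing
in for the QEMU headers):

    #include <stdbool.h>

    typedef struct Error Error;
    void bdrv_activate_all(Error **errp);
    int bdrv_inactivate_all(void);

    bool migration_block_activate(Error **errp)
    {
        /* Callers pass a non-NULL errp (ERRP_GUARD in the real code). */
        bdrv_activate_all(errp);
        return *errp == NULL;
    }

    bool migration_block_inactivate(void)
    {
        /* Relies on bdrv_inactivate_all() now tolerating inactive nodes. */
        return bdrv_inactivate_all() == 0;
    }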
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Acked-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20250204211407.381505-5-kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit c2a189976e211c9ff782538d5a5ed5e5cffeccd6)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
migration/block-active.c | 46 ----------------------------------------
migration/migration.c | 8 -------
migration/migration.h | 2 --
3 files changed, 56 deletions(-)
diff --git a/migration/block-active.c b/migration/block-active.c
index d477cf8182..40e986aade 100644
--- a/migration/block-active.c
+++ b/migration/block-active.c
@@ -12,51 +12,12 @@
#include "qemu/error-report.h"
#include "trace.h"
-/*
- * Migration-only cache to remember the block layer activation status.
- * Protected by BQL.
- *
- * We need this because..
- *
- * - Migration can fail after block devices are invalidated (during
- * switchover phase). When that happens, we need to be able to recover
- * the block drive status by re-activating them.
- *
- * - Currently bdrv_inactivate_all() is not safe to be invoked on top of
- * invalidated drives (even if bdrv_activate_all() is actually safe to be
- * called any time!). It means remembering this could help migration to
- * make sure it won't invalidate twice in a row, crashing QEMU. It can
- * happen when we migrate a PAUSED VM from host1 to host2, then migrate
- * again to host3 without starting it. TODO: a cleaner solution is to
- * allow safe invoke of bdrv_inactivate_all() at anytime, like
- * bdrv_activate_all().
- *
- * For freshly started QEMU, the flag is initialized to TRUE reflecting the
- * scenario where QEMU owns block device ownerships.
- *
- * For incoming QEMU taking a migration stream, the flag is initialized to
- * FALSE reflecting that the incoming side doesn't own the block devices,
- * not until switchover happens.
- */
-static bool migration_block_active;
-
-/* Setup the disk activation status */
-void migration_block_active_setup(bool active)
-{
- migration_block_active = active;
-}
-
bool migration_block_activate(Error **errp)
{
ERRP_GUARD();
assert(bql_locked());
- if (migration_block_active) {
- trace_migration_block_activation("active-skipped");
- return true;
- }
-
trace_migration_block_activation("active");
bdrv_activate_all(errp);
@@ -65,7 +26,6 @@ bool migration_block_activate(Error **errp)
return false;
}
- migration_block_active = true;
return true;
}
@@ -75,11 +35,6 @@ bool migration_block_inactivate(void)
assert(bql_locked());
- if (!migration_block_active) {
- trace_migration_block_activation("inactive-skipped");
- return true;
- }
-
trace_migration_block_activation("inactive");
ret = bdrv_inactivate_all();
@@ -89,6 +44,5 @@ bool migration_block_inactivate(void)
return false;
}
- migration_block_active = false;
return true;
}
diff --git a/migration/migration.c b/migration/migration.c
index 38631d1206..999d4cac54 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1875,12 +1875,6 @@ void qmp_migrate_incoming(const char *uri, bool has_channels,
return;
}
- /*
- * Newly setup incoming QEMU. Mark the block active state to reflect
- * that the src currently owns the disks.
- */
- migration_block_active_setup(false);
-
once = false;
}
@@ -3825,8 +3819,6 @@ static void migration_instance_init(Object *obj)
ms->state = MIGRATION_STATUS_NONE;
ms->mbps = -1;
ms->pages_per_second = -1;
- /* Freshly started QEMU owns all the block devices */
- migration_block_active_setup(true);
qemu_sem_init(&ms->pause_sem, 0);
qemu_mutex_init(&ms->error_mutex);
diff --git a/migration/migration.h b/migration/migration.h
index 5b17c1344d..c38d2a37e4 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -534,6 +534,4 @@ int migration_rp_wait(MigrationState *s);
*/
void migration_rp_kick(MigrationState *s);
-/* migration/block-active.c */
-void migration_block_active_setup(bool active);
#endif
--
2.39.3
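
For orientation, a minimal sketch (illustrative only, not part of the patch series) of what conceptually replaces the removed flag: instead of consulting one global boolean, interested code derives the overall picture from the per-node state via bdrv_is_inactive(), for example by walking all nodes. The helper name all_block_nodes_active() is hypothetical; the sketch assumes the bdrv_next_all_states() iterator and the main-loop graph-lock guard available in this tree, and is meant to run under the BQL.

#include "qemu/osdep.h"
#include "block/block-global-state.h"
#include "block/graph-lock.h"

/*
 * Hypothetical helper, shown for illustration only: derive an overall
 * "everything active?" answer from the per-node state rather than a
 * global flag. Intended for BQL/main-loop context.
 */
static bool all_block_nodes_active(void)
{
    BlockDriverState *bs = NULL;

    GRAPH_RDLOCK_GUARD_MAINLOOP();

    while ((bs = bdrv_next_all_states(bs)) != NULL) {
        if (bdrv_is_inactive(bs)) {
            return false;
        }
    }
    return true;
}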

View File

@ -0,0 +1,68 @@
From 8c6301c578000fff63c5ee0406020ecc4d3ca170 Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Tue, 4 Feb 2025 22:14:04 +0100
Subject: [PATCH 19/22] nbd/server: Support inactive nodes
RH-Author: Kevin Wolf <kwolf@redhat.com>
RH-MergeRequest: 340: QMP command for block device reactivation after migration
RH-Jira: RHEL-54670
RH-Acked-by: Eric Blake <eblake@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Commit: [19/22] 140f88e93d88437c8c9c63299f714230ebfd9228 (kmwolf/centos-qemu-kvm)
In order to support running an NBD export on inactive nodes, we must
make sure to return errors for any operations that aren't allowed on
inactive nodes. Reads are the only operation we know we need for
inactive images, so to err on the side of caution, return errors for
everything else, even if some operations could possibly be okay.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Acked-by: Fabiano Rosas <farosas@suse.de>
Message-ID: <20250204211407.381505-14-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 2e73a17c68f4d80023dc616e596e8c1f3ea8dd75)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
nbd/server.c | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/nbd/server.c b/nbd/server.c
index f64e47270c..2076fb2666 100644
--- a/nbd/server.c
+++ b/nbd/server.c
@@ -2026,6 +2026,7 @@ static void nbd_export_delete(BlockExport *blk_exp)
const BlockExportDriver blk_exp_nbd = {
.type = BLOCK_EXPORT_TYPE_NBD,
.instance_size = sizeof(NBDExport),
+ .supports_inactive = true,
.create = nbd_export_create,
.delete = nbd_export_delete,
.request_shutdown = nbd_export_request_shutdown,
@@ -2920,6 +2921,22 @@ static coroutine_fn int nbd_handle_request(NBDClient *client,
NBDExport *exp = client->exp;
char *msg;
size_t i;
+ bool inactive;
+
+ WITH_GRAPH_RDLOCK_GUARD() {
+ inactive = bdrv_is_inactive(blk_bs(exp->common.blk));
+ if (inactive) {
+ switch (request->type) {
+ case NBD_CMD_READ:
+ /* These commands are allowed on inactive nodes */
+ break;
+ default:
+ /* Return an error for the rest */
+ return nbd_send_generic_reply(client, request, -EPERM,
+ "export is inactive", errp);
+ }
+ }
+ }
switch (request->type) {
case NBD_CMD_CACHE:
--
2.39.3
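
To make the whitelist pattern above easier to see at a glance, here is a condensed restatement (illustrative only; the helper name nbd_request_permitted_when_inactive() is hypothetical and does not exist in QEMU). It would only make sense inside nbd/server.c, where NBDClient and NBDRequest are defined: once the exported node is inactive, only NBD_CMD_READ is allowed through, and nbd_handle_request() answers everything else with -EPERM.

/*
 * Hypothetical helper for illustration: returns true if the request may
 * proceed. Reads are always fine; everything else is refused while the
 * exported node is inactive (the caller then sends the -EPERM reply).
 */
static bool coroutine_fn
nbd_request_permitted_when_inactive(NBDClient *client, NBDRequest *request)
{
    bool inactive;

    WITH_GRAPH_RDLOCK_GUARD() {
        inactive = bdrv_is_inactive(blk_bs(client->exp->common.blk));
    }

    return !inactive || request->type == NBD_CMD_READ;
}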

View File

@ -0,0 +1,73 @@
From fd5603e42b6287c849bfed700d34c817b4b93891 Mon Sep 17 00:00:00 2001
From: Peter Xu <peterx@redhat.com>
Date: Fri, 6 Dec 2024 18:08:34 -0500
Subject: [PATCH 02/22] qmp/cont: Only activate disks if migration completed
RH-Author: Kevin Wolf <kwolf@redhat.com>
RH-MergeRequest: 340: QMP command for block device reactivation after migration
RH-Jira: RHEL-54670
RH-Acked-by: Eric Blake <eblake@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Commit: [2/22] 554508370a344aa69e0e8888c0567d353cbcfe94 (kmwolf/centos-qemu-kvm)
As the comment says, the activation of disks is for the case where
migration has completed, not for when QEMU is still migrating
(RUN_STATE_INMIGRATE).
Move the code over to reflect what the comment is describing.
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Message-Id: <20241206230838.1111496-3-peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
(cherry picked from commit e4e5e89bbd8e731e86735d9d25b7b5f49e8f08b6)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
monitor/qmp-cmds.c | 26 ++++++++++++++------------
1 file changed, 14 insertions(+), 12 deletions(-)
diff --git a/monitor/qmp-cmds.c b/monitor/qmp-cmds.c
index f84a0dc523..76f21e8af3 100644
--- a/monitor/qmp-cmds.c
+++ b/monitor/qmp-cmds.c
@@ -96,21 +96,23 @@ void qmp_cont(Error **errp)
}
}
- /* Continuing after completed migration. Images have been inactivated to
- * allow the destination to take control. Need to get control back now.
- *
- * If there are no inactive block nodes (e.g. because the VM was just
- * paused rather than completing a migration), bdrv_inactivate_all() simply
- * doesn't do anything. */
- bdrv_activate_all(&local_err);
- if (local_err) {
- error_propagate(errp, local_err);
- return;
- }
-
if (runstate_check(RUN_STATE_INMIGRATE)) {
autostart = 1;
} else {
+ /*
+ * Continuing after completed migration. Images have been
+ * inactivated to allow the destination to take control. Need to
+ * get control back now.
+ *
+ * If there are no inactive block nodes (e.g. because the VM was
+ * just paused rather than completing a migration),
+ * bdrv_inactivate_all() simply doesn't do anything.
+ */
+ bdrv_activate_all(&local_err);
+ if (local_err) {
+ error_propagate(errp, local_err);
+ return;
+ }
vm_start();
}
}
--
2.39.3

View File

@ -158,7 +158,7 @@ Obsoletes: %{name}-block-ssh <= %{epoch}:%{version} \
Summary: QEMU is a machine emulator and virtualizer
Name: qemu-kvm
Version: 9.1.0
Release: 14%{?rcrel}%{?dist}%{?cc_suffix}.alma.1
Release: 15%{?rcrel}%{?dist}%{?cc_suffix}.alma.1
# Epoch because we pushed a qemu-1.0 package. AIUI this can't ever be dropped
# Epoch 15 used for RHEL 8
# Epoch 17 used for RHEL 9 (due to release versioning offset in RHEL 8.5)
@ -453,6 +453,50 @@ Patch130: kvm-net-Fix-announce_self.patch
Patch131: kvm-vhost-Add-stubs-for-the-migration-state-transfer-int.patch
# For RHEL-78370 - Add vhost-user internal migration for passt
Patch132: kvm-virtio-net-vhost-user-Implement-internal-migration.patch
# For RHEL-54670 - Provide QMP command for block device reactivation after migration [rhel-10.0]
Patch133: kvm-migration-Add-helper-to-get-target-runstate.patch
# For RHEL-54670 - Provide QMP command for block device reactivation after migration [rhel-10.0]
Patch134: kvm-qmp-cont-Only-activate-disks-if-migration-completed.patch
# For RHEL-54670 - Provide QMP command for block device reactivation after migration [rhel-10.0]
Patch135: kvm-migration-block-Make-late-block-active-the-default.patch
# For RHEL-54670 - Provide QMP command for block device reactivation after migration [rhel-10.0]
Patch136: kvm-migration-block-Apply-late-block-active-behavior-to-.patch
# For RHEL-54670 - Provide QMP command for block device reactivation after migration [rhel-10.0]
Patch137: kvm-migration-block-Fix-possible-race-with-block_inactiv.patch
# For RHEL-54670 - Provide QMP command for block device reactivation after migration [rhel-10.0]
Patch138: kvm-migration-block-Rewrite-disk-activation.patch
# For RHEL-54670 - Provide QMP command for block device reactivation after migration [rhel-10.0]
Patch139: kvm-block-Add-active-field-to-BlockDeviceInfo.patch
# For RHEL-54670 - Provide QMP command for block device reactivation after migration [rhel-10.0]
Patch140: kvm-block-Allow-inactivating-already-inactive-nodes.patch
# For RHEL-54670 - Provide QMP command for block device reactivation after migration [rhel-10.0]
Patch141: kvm-block-Inactivate-external-snapshot-overlays-when-nec.patch
# For RHEL-54670 - Provide QMP command for block device reactivation after migration [rhel-10.0]
Patch142: kvm-migration-block-active-Remove-global-active-flag.patch
# For RHEL-54670 - Provide QMP command for block device reactivation after migration [rhel-10.0]
Patch143: kvm-block-Don-t-attach-inactive-child-to-active-node.patch
# For RHEL-54670 - Provide QMP command for block device reactivation after migration [rhel-10.0]
Patch144: kvm-block-Fix-crash-on-block_resize-on-inactive-node.patch
# For RHEL-54670 - Provide QMP command for block device reactivation after migration [rhel-10.0]
Patch145: kvm-block-Add-option-to-create-inactive-nodes.patch
# For RHEL-54670 - Provide QMP command for block device reactivation after migration [rhel-10.0]
Patch146: kvm-block-Add-blockdev-set-active-QMP-command.patch
# For RHEL-54670 - Provide QMP command for block device reactivation after migration [rhel-10.0]
Patch147: kvm-block-Support-inactive-nodes-in-blk_insert_bs.patch
# For RHEL-54670 - Provide QMP command for block device reactivation after migration [rhel-10.0]
Patch148: kvm-block-export-Don-t-ignore-image-activation-error-in-.patch
# For RHEL-54670 - Provide QMP command for block device reactivation after migration [rhel-10.0]
Patch149: kvm-block-Drain-nodes-before-inactivating-them.patch
# For RHEL-54670 - Provide QMP command for block device reactivation after migration [rhel-10.0]
Patch150: kvm-block-export-Add-option-to-allow-export-of-inactive-.patch
# For RHEL-54670 - Provide QMP command for block device reactivation after migration [rhel-10.0]
Patch151: kvm-nbd-server-Support-inactive-nodes.patch
# For RHEL-54670 - Provide QMP command for block device reactivation after migration [rhel-10.0]
Patch152: kvm-iotests-Add-filter_qtest.patch
# For RHEL-54670 - Provide QMP command for block device reactivation after migration [rhel-10.0]
Patch153: kvm-iotests-Add-qsd-migrate-case.patch
# For RHEL-54670 - Provide QMP command for block device reactivation after migration [rhel-10.0]
Patch154: kvm-iotests-Add-NBD-based-tests-for-inactive-nodes.patch
# AlmaLinux Patch
Patch2001: 2001-Add-ppc64-support.patch
@ -1569,12 +1613,38 @@ useradd -r -u 107 -g qemu -G kvm -d / -s /sbin/nologin \
%endif
%changelog
* Wed Feb 12 2025 Eduard Abdullin <eabdullin@almalinux.org> - 18:9.1.0-14.alma.1
* Tue Feb 18 2025 Eduard Abdullin <eabdullin@almalinux.org> - 18:9.1.0-15.alma.1
- Enable QXL device build
- Enable building for ppc64le
- Re-added Spice support
- Don't remove slof.bin for ppc64le
* Mon Feb 17 2025 Miroslav Rezanina <mrezanin@redhat.com> - 9.1.0-15
- kvm-migration-Add-helper-to-get-target-runstate.patch [RHEL-54670]
- kvm-qmp-cont-Only-activate-disks-if-migration-completed.patch [RHEL-54670]
- kvm-migration-block-Make-late-block-active-the-default.patch [RHEL-54670]
- kvm-migration-block-Apply-late-block-active-behavior-to-.patch [RHEL-54670]
- kvm-migration-block-Fix-possible-race-with-block_inactiv.patch [RHEL-54670]
- kvm-migration-block-Rewrite-disk-activation.patch [RHEL-54670]
- kvm-block-Add-active-field-to-BlockDeviceInfo.patch [RHEL-54670]
- kvm-block-Allow-inactivating-already-inactive-nodes.patch [RHEL-54670]
- kvm-block-Inactivate-external-snapshot-overlays-when-nec.patch [RHEL-54670]
- kvm-migration-block-active-Remove-global-active-flag.patch [RHEL-54670]
- kvm-block-Don-t-attach-inactive-child-to-active-node.patch [RHEL-54670]
- kvm-block-Fix-crash-on-block_resize-on-inactive-node.patch [RHEL-54670]
- kvm-block-Add-option-to-create-inactive-nodes.patch [RHEL-54670]
- kvm-block-Add-blockdev-set-active-QMP-command.patch [RHEL-54670]
- kvm-block-Support-inactive-nodes-in-blk_insert_bs.patch [RHEL-54670]
- kvm-block-export-Don-t-ignore-image-activation-error-in-.patch [RHEL-54670]
- kvm-block-Drain-nodes-before-inactivating-them.patch [RHEL-54670]
- kvm-block-export-Add-option-to-allow-export-of-inactive-.patch [RHEL-54670]
- kvm-nbd-server-Support-inactive-nodes.patch [RHEL-54670]
- kvm-iotests-Add-filter_qtest.patch [RHEL-54670]
- kvm-iotests-Add-qsd-migrate-case.patch [RHEL-54670]
- kvm-iotests-Add-NBD-based-tests-for-inactive-nodes.patch [RHEL-54670]
- Resolves: RHEL-54670
(Provide QMP command for block device reactivation after migration [rhel-10.0])
* Mon Feb 10 2025 Miroslav Rezanina <mrezanin@redhat.com> - 9.1.0-14
- kvm-net-Fix-announce_self.patch [RHEL-73894]
- kvm-vhost-Add-stubs-for-the-migration-state-transfer-int.patch [RHEL-78370]