qemu-kvm/SOURCES/kvm-migration-Attempt-disk-reactivation-in-more-failure-.patch

From 3691bb5f956e3c60dbf6de183011b31dbc7a7801 Mon Sep 17 00:00:00 2001
From: Eric Blake <eblake@redhat.com>
Date: Tue, 2 May 2023 15:52:12 -0500
Subject: [PATCH 01/56] migration: Attempt disk reactivation in more failure
scenarios

RH-Author: Eric Blake <eblake@redhat.com>
RH-MergeRequest: 161: Avoid migration assertion from failed NFS server.
RH-Bugzilla: 2058982
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Acked-by: quintela1 <quintela@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Commit: [1/1] 5999b747b314641259d3b8809033b057805eed3f (ebblake/centos-qemu-kvm)

Commit fe904ea824 added a fail_invalidate label, which tries to
reactivate disks on the source after a failure while s->state ==
MIGRATION_STATUS_ACTIVE, but didn't actually use the label if
qemu_savevm_state_complete_precopy() failed. This failure to
reactivate is also present in commit 6039dd5b1c (also covering the new
s->state == MIGRATION_STATUS_DEVICE state) and 403d18ae (ensuring
s->block_inactive is set more reliably).

Consolidate the two labels back into one - no matter HOW migration is
failed, if there is any chance we can reach vm_start() after having
attempted inactivation, it is essential that we have tried to restart
disks before then. This also makes the cleanup more like
migrate_fd_cancel().
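
As a stand-alone illustration, the consolidated error path now has
roughly this shape (a minimal sketch with hypothetical stand-ins for
qemu's MigrationState, bdrv_activate_all(), and migrate_set_state();
not the verbatim qemu code):

    /* Sketch only: stub types and helpers, not qemu's real API. */
    #include <stdbool.h>
    #include <stdio.h>

    enum MigState { MIG_ACTIVE, MIG_DEVICE, MIG_FAILED };

    struct MigrationState {
        enum MigState state;
        bool block_inactive;    /* true while disks are inactivated */
    };

    /* Stand-in for bdrv_activate_all(); returns 0 on success. */
    static int reactivate_disks(void)
    {
        return 0;
    }

    /* One fail path: whatever the failure mode, attempt disk
     * reactivation before any possible vm_start(), but only if we
     * actually inactivated the disks. */
    static void migration_fail(struct MigrationState *s)
    {
        if (s->block_inactive &&
            (s->state == MIG_ACTIVE || s->state == MIG_DEVICE)) {
            if (reactivate_disks() == 0) {
                s->block_inactive = false;
            } else {
                fprintf(stderr, "disk reactivation failed\n");
            }
        }
        s->state = MIG_FAILED;
    }

    int main(void)
    {
        struct MigrationState s = { MIG_DEVICE, true };
        migration_fail(&s);
        return s.block_inactive ? 1 : 0;
    }

Keeping s->block_inactive as the single source of truth is what lets
one label serve every failure mode: the reactivation step is simply a
no-op when the disks were never inactivated.
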
Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20230502205212.134680-1-eblake@redhat.com>
Acked-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 6dab4c93ecfae48e2e67b984d1032c1e988d3005)
[eblake: downstream migrate_colo() => migrate_colo_enabled()]
Signed-off-by: Eric Blake <eblake@redhat.com>
---
migration/migration.c | 24 ++++++++++++++----------
1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 08007cef4e..99f86bd6c2 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -3443,6 +3443,11 @@ static void migration_completion(MigrationState *s)
                                             MIGRATION_STATUS_DEVICE);
             }
             if (ret >= 0) {
+                /*
+                 * Inactivate disks except in COLO, and track that we
+                 * have done so in order to remember to reactivate
+                 * them if migration fails or is cancelled.
+                 */
                 s->block_inactive = !migrate_colo_enabled();
                 qemu_file_set_rate_limit(s->to_dst_file, INT64_MAX);
                 ret = qemu_savevm_state_complete_precopy(s->to_dst_file, false,
@@ -3487,13 +3492,13 @@ static void migration_completion(MigrationState *s)
         rp_error = await_return_path_close_on_source(s);
         trace_migration_return_path_end_after(rp_error);
         if (rp_error) {
-            goto fail_invalidate;
+            goto fail;
         }
     }
 
     if (qemu_file_get_error(s->to_dst_file)) {
         trace_migration_completion_file_err();
-        goto fail_invalidate;
+        goto fail;
     }
 
     if (migrate_colo_enabled() && s->state == MIGRATION_STATUS_ACTIVE) {
@@ -3507,26 +3512,25 @@ static void migration_completion(MigrationState *s)
 
     return;
 
-fail_invalidate:
-    /* If not doing postcopy, vm_start() will be called: let's regain
-     * control on images.
-     */
-    if (s->state == MIGRATION_STATUS_ACTIVE ||
-        s->state == MIGRATION_STATUS_DEVICE) {
+fail:
+    if (s->block_inactive && (s->state == MIGRATION_STATUS_ACTIVE ||
+                              s->state == MIGRATION_STATUS_DEVICE)) {
+        /*
+         * If not doing postcopy, vm_start() will be called: let's
+         * regain control on images.
+         */
         Error *local_err = NULL;
 
         qemu_mutex_lock_iothread();
         bdrv_activate_all(&local_err);
         if (local_err) {
             error_report_err(local_err);
-            s->block_inactive = true;
         } else {
             s->block_inactive = false;
         }
         qemu_mutex_unlock_iothread();
     }
 
-fail:
     migrate_set_state(&s->state, current_active_state,
                       MIGRATION_STATUS_FAILED);
 }
--
2.39.1