qemu-kvm/kvm-migration-multifd-fix-nullptr-access-in-multifd_send.patch
Danilo C. L. de Paula cb4ea43665 * Wed Mar 11 2020 Danilo Cesar Lemes de Paula <ddepaula@redhat.com> - 4.2.0-14.el8
- kvm-hw-smbios-set-new-default-SMBIOS-fields-for-Windows-.patch [bz#1782529]
- kvm-migration-multifd-clean-pages-after-filling-packet.patch [bz#1738451]
- kvm-migration-Make-sure-that-we-don-t-call-write-in-case.patch [bz#1738451]
- kvm-migration-multifd-fix-nullptr-access-in-terminating-.patch [bz#1738451]
- kvm-migration-multifd-fix-destroyed-mutex-access-in-term.patch [bz#1738451]
- kvm-multifd-Make-sure-that-we-don-t-do-any-IO-after-an-e.patch [bz#1738451]
- kvm-qemu-file-Don-t-do-IO-after-shutdown.patch [bz#1738451]
- kvm-migration-Don-t-send-data-if-we-have-stopped.patch [bz#1738451]
- kvm-migration-Create-migration_is_running.patch [bz#1738451]
- kvm-migration-multifd-fix-nullptr-access-in-multifd_send.patch [bz#1738451]
- kvm-migration-Maybe-VM-is-paused-when-migration-is-cance.patch [bz#1738451]
- kvm-virtiofsd-Remove-fuse_req_getgroups.patch [bz#1797064]
- kvm-virtiofsd-fv_create_listen_socket-error-path-socket-.patch [bz#1797064]
- kvm-virtiofsd-load_capng-missing-unlock.patch [bz#1797064]
- kvm-virtiofsd-do_read-missing-NULL-check.patch [bz#1797064]
- kvm-tools-virtiofsd-fuse_lowlevel-Fix-fuse_out_header-er.patch [bz#1797064]
- kvm-virtiofsd-passthrough_ll-cleanup-getxattr-listxattr.patch [bz#1797064]
- kvm-virtiofsd-Fix-xattr-operations.patch [bz#1797064]
- Resolves: bz#1738451
  (qemu on src host core dump after set multifd-channels and do migration twice (first migration execute migrate_cancel))
- Resolves: bz#1782529
  (Windows Update Enablement with default smbios strings in qemu)
- Resolves: bz#1797064
  (virtiofsd: Fixes)

From 517a99c5fba163bf684978fe3d9476b619481391 Mon Sep 17 00:00:00 2001
From: Juan Quintela <quintela@redhat.com>
Date: Tue, 3 Mar 2020 14:51:42 +0000
Subject: [PATCH 10/18] migration/multifd: fix nullptr access in
multifd_send_terminate_threads
RH-Author: Juan Quintela <quintela@redhat.com>
Message-id: <20200303145143.149290-10-quintela@redhat.com>
Patchwork-id: 94117
O-Subject: [RHEL-AV-8.2.0 qemu-kvm PATCH v2 09/10] migration/multifd: fix nullptr access in multifd_send_terminate_threads
Bugzilla: 1738451
RH-Acked-by: Laurent Vivier <lvivier@redhat.com>
RH-Acked-by: Peter Xu <peterx@redhat.com>
RH-Acked-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
From: Zhimin Feng <fengzhimin1@huawei.com>

If the multifd send threads have not been created yet when migration
fails, multifd_save_cleanup ends up being called twice. In this
scenario, multifd_send_state is accessed after it has been released,
and the source VM crashes.
Here is the coredump stack:
Program received signal SIGSEGV, Segmentation fault.
0x00005629333a78ef in multifd_send_terminate_threads (err=err@entry=0x0) at migration/ram.c:1012
1012 MultiFDSendParams *p = &multifd_send_state->params[i];
#0 0x00005629333a78ef in multifd_send_terminate_threads (err=err@entry=0x0) at migration/ram.c:1012
#1 0x00005629333ab8a9 in multifd_save_cleanup () at migration/ram.c:1028
#2 0x00005629333abaea in multifd_new_send_channel_async (task=0x562935450e70, opaque=<optimized out>) at migration/ram.c:1202
#3 0x000056293373a562 in qio_task_complete (task=task@entry=0x562935450e70) at io/task.c:196
#4 0x000056293373a6e0 in qio_task_thread_result (opaque=0x562935450e70) at io/task.c:111
#5 0x00007f475d4d75a7 in g_idle_dispatch () from /usr/lib64/libglib-2.0.so.0
#6 0x00007f475d4da9a9 in g_main_context_dispatch () from /usr/lib64/libglib-2.0.so.0
#7 0x0000562933785b33 in glib_pollfds_poll () at util/main-loop.c:219
#8 os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:242
#9 main_loop_wait (nonblocking=nonblocking@entry=0) at util/main-loop.c:518
#10 0x00005629334c5acf in main_loop () at vl.c:1810
#11 0x000056293334d7bb in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4471
With this patch, if the multifd send threads have not been created when
migration fails, we no longer call multifd_save_cleanup from
multifd_new_send_channel_async; a sketch of the pre-patch failure path
follows.
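
To make the double cleanup concrete, here is a minimal sketch of the
pre-patch failure path. The function names match the backtrace above,
but the bodies are simplified assumptions rather than the verbatim QEMU
code:

static void multifd_send_terminate_threads(Error *err)
{
    int i;

    for (i = 0; i < migrate_multifd_channels(); i++) {
        /* ram.c:1012 -- faults here once multifd_send_state is NULL */
        MultiFDSendParams *p = &multifd_send_state->params[i];
        /* ... ask the channel thread to quit ... */
    }
}

void multifd_save_cleanup(void)
{
    multifd_send_terminate_threads(NULL);
    /* ... join threads, free per-channel resources ... */
    g_free(multifd_send_state);
    multifd_send_state = NULL;    /* the first call releases the state */
}

static void multifd_new_send_channel_async(QIOTask *task, gpointer opaque)
{
    Error *local_err = NULL;

    if (qio_task_propagate_error(task, &local_err)) {
        migrate_set_error(migrate_get_current(), local_err);
        /* Pre-patch: the migration error path may already have run
         * multifd_save_cleanup() and set multifd_send_state to NULL,
         * so this second call dereferences NULL in the loop above. */
        multifd_save_cleanup();
    }
}
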
Signed-off-by: Zhimin Feng <fengzhimin1@huawei.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit 9c4d333c092e9c26d38f740ff3616deb42f21681)
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
migration/ram.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/migration/ram.c b/migration/ram.c
index 902c56c..3891eff 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1229,7 +1229,15 @@ static void multifd_new_send_channel_async(QIOTask *task, gpointer opaque)
     trace_multifd_new_send_channel_async(p->id);
     if (qio_task_propagate_error(task, &local_err)) {
         migrate_set_error(migrate_get_current(), local_err);
-        multifd_save_cleanup();
+        /* An error happened; wake up anyone who is waiting on this channel */
+        qemu_sem_post(&multifd_send_state->channels_ready);
+        qemu_sem_post(&p->sem_sync);
+        /*
+         * Although the multifd_send thread is not created, the main
+         * migration thread still needs to know whether the channel is
+         * running, so mark its status here.
+         */
+        p->quit = true;
     } else {
         p->c = QIO_CHANNEL(sioc);
         qio_channel_set_delay(p->c, false);
--
1.8.3.1
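
For context on why the fix posts channels_ready and sets p->quit, here
is a rough sketch of the consumer side, abbreviated from the 4.2-era
multifd_send_pages() in migration/ram.c (illustrative, not part of this
patch). The main migration thread blocks on channels_ready before
picking a channel and checks p->quit under the channel mutex, so a
channel that failed during setup now wakes it up and makes it bail out
cleanly instead of re-running the cleanup:

static int multifd_send_pages(RAMState *rs)
{
    static int next_channel;
    MultiFDSendParams *p = NULL;
    int i;

    /* Blocks until a channel signals readiness; after this patch an
     * error during channel setup also posts the semaphore. */
    qemu_sem_wait(&multifd_send_state->channels_ready);
    for (i = next_channel;; i = (i + 1) % migrate_multifd_channels()) {
        p = &multifd_send_state->params[i];

        qemu_mutex_lock(&p->mutex);
        if (p->quit) {
            /* Channel failed during setup: give up rather than queue
             * work for a thread that was never created. */
            error_report("%s: channel %d has already quit!", __func__, i);
            qemu_mutex_unlock(&p->mutex);
            return -1;
        }
        if (!p->pending_job) {
            p->pending_job++;
            next_channel = (i + 1) % migrate_multifd_channels();
            break;
        }
        qemu_mutex_unlock(&p->mutex);
    }
    /* ... hand the queued pages to channel i and post p->sem ... */
    qemu_mutex_unlock(&p->mutex);
    return 1;
}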