qemu-kvm/0031-jobs-add-exit-shim.patch


From 17511eb281e005da6e617acd12c81a0a1fa1771d Mon Sep 17 00:00:00 2001
From: John Snow <jsnow@redhat.com>
Date: Tue, 25 Sep 2018 22:34:09 +0100
Subject: jobs: add exit shim

RH-Author: John Snow <jsnow@redhat.com>
Message-id: <20180925223431.24791-4-jsnow@redhat.com>
Patchwork-id: 82273
O-Subject: [RHEL8/rhel qemu-kvm PATCH 03/25] jobs: add exit shim
Bugzilla: 1632939
RH-Acked-by: Jeffrey Cody <jcody@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>

All jobs do the same thing when they leave their running loop:
- Store the return code in a structure
- wait to receive this structure in the main thread
- signal job completion via job_completed

Few jobs do anything beyond exactly this. Consolidate this exit
logic for a net reduction in SLOC.
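
(Editorial sketch, not part of the patch: roughly what that per-job
boilerplate looked like before this change. The MyJob* names are made up;
job_defer_to_main_loop() and job_completed() are the pre-patch helpers the
message refers to, so treat the exact shapes here as illustrative only.)

    typedef struct {
        int ret;
    } MyJobExitData;

    /* Runs later in the main thread, with the job's aio_context lock held
     * around it by the deferral machinery. */
    static void my_job_exit_cb(Job *job, void *opaque)
    {
        MyJobExitData *data = opaque;

        job_completed(job, data->ret);
        g_free(data);
    }

    static int coroutine_fn my_job_run(Job *job, Error **errp)
    {
        MyJobExitData *data = g_new(MyJobExitData, 1);

        data->ret = 0;   /* ...the job's real work and its return code... */
        job_defer_to_main_loop(job, my_job_exit_cb, data);
        return data->ret;
    }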

More seriously, when we utilize job_defer_to_main_loop_bh to call
a function that calls job_completed, job_finalize_single will run
in a context where it has recursively taken the aio_context lock,
which can cause hangs if it puts down a reference that causes a flush.

You can observe this in practice by looking at mirror_exit's careful
placement of job_completed and bdrv_unref calls.

If we centralize job exiting, we can signal job completion from outside
of the aio_context, which should allow for job cleanup code to run with
only one lock, which makes cleanup callbacks less tricky to write.
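
(Editorial sketch, again with made-up names: after this patch the same
hypothetical driver can drop the deferral boilerplate, return its result
from .run, and implement the new exit callback; job_exit() then invokes it
under a single aio_context_acquire()/release() pair and calls
job_completed() itself. MyJob and its target field are assumptions, not
real QEMU code.)

    typedef struct MyJob {
        Job common;
        BlockDriverState *target;   /* hypothetical resource to drop */
    } MyJob;

    static int coroutine_fn my_job_run(Job *job, Error **errp)
    {
        return 0;                   /* ...the job's real work; just return ret... */
    }

    /* Called by job_exit() in the main thread, under one aio_context lock. */
    static void my_job_exit(Job *job)
    {
        MyJob *s = container_of(job, MyJob, common);

        bdrv_unref(s->target);      /* e.g. a reference whose release may flush */
    }

    static const JobDriver my_job_driver = {
        .instance_size = sizeof(MyJob),
        .run           = my_job_run,
        .exit          = my_job_exit,
    };
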
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180830015734.19765-4-jsnow@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 00359a71d45a414ee47d8e423104dc0afd24ec65)
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
 include/qemu/job.h | 11 +++++++++++
 job.c              | 18 ++++++++++++++++++
 2 files changed, 29 insertions(+)

diff --git a/include/qemu/job.h b/include/qemu/job.h
index e0e9987..1144d67 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -209,6 +209,17 @@ struct JobDriver {
     void (*drain)(Job *job);
 
     /**
+     * If the callback is not NULL, exit will be invoked from the main thread
+     * when the job's coroutine has finished, but before transactional
+     * convergence; before @prepare or @abort.
+     *
+     * FIXME TODO: This callback is only temporary to transition remaining jobs
+     * to prepare/commit/abort/clean callbacks and will be removed before 3.1
+     * is released.
+     */
+    void (*exit)(Job *job);
+
+    /**
      * If the callback is not NULL, prepare will be invoked when all the jobs
      * belonging to the same transaction complete; or upon this job's completion
      * if it is not in a transaction.
diff --git a/job.c b/job.c
index 276024a..abe91af 100644
--- a/job.c
+++ b/job.c
@@ -535,6 +535,18 @@ void job_drain(Job *job)
     }
 }
 
+static void job_exit(void *opaque)
+{
+    Job *job = (Job *)opaque;
+    AioContext *aio_context = job->aio_context;
+
+    if (job->driver->exit) {
+        aio_context_acquire(aio_context);
+        job->driver->exit(job);
+        aio_context_release(aio_context);
+    }
+    job_completed(job, job->ret);
+}
 
 /**
  * All jobs must allow a pause point before entering their job proper. This
@@ -547,6 +559,12 @@ static void coroutine_fn job_co_entry(void *opaque)
     assert(job && job->driver && job->driver->run);
     job_pause_point(job);
     job->ret = job->driver->run(job, &job->err);
+    if (!job->deferred_to_main_loop) {
+        job->deferred_to_main_loop = true;
+        aio_bh_schedule_oneshot(qemu_get_aio_context(),
+                                job_exit,
+                                job);
+    }
 }
--
1.8.3.1